\section{Introduction}

A fundamental principle of the quantum theory is that, while the underlying wave function may be complex, eigenvalues of energy and other physically relevant quantities must be real, which is provided by the condition that the respective Hamiltonian is self-conjugate (Hermitian) \cite{qm}. On the other hand, the condition of the reality of the entire energy spectrum does not necessarily imply that it is generated by a Hermitian Hamiltonian. Indeed, it was demonstrated, about twenty years ago, that non-Hermitian Hamiltonians obeying the parity-time ($\mathcal{PT}$) symmetry may also produce entirely real spectra \cite{bender1,dorey,bender2,bender3,bender4,review,ptqm}. In terms of the usual single-particle Hamiltonian, which includes potential $U(\mathbf{r})$, the $\mathcal{PT}$ symmetry implies that the potential is complex, $U(\mathbf{r})=V(\mathbf{r})+iW(\mathbf{r})$ (the usual Hermitian Hamiltonian contains a strictly real potential), its real and imaginary parts being, respectively, even and odd functions of the coordinates \cite{bender1}:
\begin{equation}
V(\mathbf{r})=V(-\mathbf{r}),~W(-\mathbf{r})=-W(\mathbf{r}),~\mathrm{i.e.,~}U(-\mathbf{r})=U^{\ast }(\mathbf{r}),  \label{minus}
\end{equation}
where $\ast $ stands for the complex conjugate. For a given real part of the potential, the spectrum of $\mathcal{PT}$-symmetric models remains completely real, i.e., physically relevant, as long as the strength of the imaginary component of the potential is kept below a certain critical value, which is the threshold of the $\mathcal{PT}$-symmetry breaking, above which the system becomes unstable. The loss of the $\mathcal{PT}$ symmetry may be preceded by the onset of the jamming anomaly, i.e., a transition from increase to decrease of the power flux between the gain and loss elements in the system following the increase of the gain-loss coefficient \cite{jamming1,jamming2}.
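As a quick illustration of condition (\ref{minus}) in one dimension, the check $U(-x)=U^{\ast }(x)$ is easily automated on a grid that is symmetric about the origin. The sketch below uses an illustrative Scarf-II-like potential of our own choosing, not one taken from the works cited here:

```python
import numpy as np

def is_pt_symmetric(U, tol=1e-12):
    """Check Eq. (minus), U(-x) == conj(U(x)), on a grid symmetric about x = 0."""
    # reversing the samples evaluates U at -x on a symmetric grid
    return bool(np.allclose(U[::-1], np.conj(U), atol=tol))

x = np.linspace(-10, 10, 2001)           # symmetric grid containing x = 0
V = 1.0 / np.cosh(x)**2                  # even real part
W = np.tanh(x) / np.cosh(x)              # odd imaginary part
assert is_pt_symmetric(V + 1j * W)       # PT-symmetric potential
assert not is_pt_symmetric(V + 1j * V)   # even imaginary part violates Eq. (minus)
```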
It is relevant to mention that some relatively simple $\mathcal{PT}$-symmetric systems may be explicitly transformed into an alternative form admitting a representation in terms of a Hermitian Hamiltonian \cite{Barash2,Barash}. While the concept of $\mathcal{PT}$-symmetric Hamiltonians remained an abstract one in the framework of the quantum theory per se, theoretical works predicted a possibility to emulate this concept in optical media with symmetrically placed gain and loss elements \cite{theo1}-\cite{Kominis}, making use of the commonly known similarity between the Schr\"{o}dinger equation in quantum mechanics and the classical equation governing the paraxial light propagation in classical waveguides. These predictions were followed by the implementation in optical waveguiding settings of various types \cite{exp1}-\cite{exp7}, as well as in metamaterials \cite{exp4}, lasers \cite{exp5} (and laser absorbers \cite{Longhi}), microcavities \cite{exp6}, optically induced atomic lattices \cite{exp8}, exciton-polariton condensates \cite{exci1}-\cite{exci3}, and in other physically relevant contexts. In particular, the transition from unbroken to broken $\mathcal{PT}$ symmetry was observed in many experiments. One prominent experimentally demonstrated application of the $\mathcal{PT}$ symmetry in optics is the unidirectional transmission of light \cite{uni}. Other classical waveguiding settings also admit emulation of the $\mathcal{PT}$ symmetry, as demonstrated in acoustics \cite{acoustics} and predicted in optomechanical systems \cite{om}. Also predicted were realizations of this symmetry in atomic Bose-Einstein condensates \cite{Cartarius}, magnetism \cite{magnetism}, mechanical chains of coupled pendula \cite{Peli}, and electronic circuits \cite{electronics} (in the latter case, the prediction was also demonstrated experimentally).
In terms of the theoretical analysis, $\mathcal{PT}$-symmetric extensions were also elaborated for the Korteweg-de Vries \cite{KdV1,KdV2}, Burgers \cite{Zhenya-Burgers}, and sine-Gordon \cite{Cuevas} equations, as well as in a system combining the $\mathcal{PT}$ symmetry with the optical emulation of the spin-orbit coupling \cite{HS}. While the $\mathcal{PT}$ symmetry is a linear property of the system, it may be naturally combined with intrinsic nonlinearity of the medium in which the symmetry is realized, such as the ubiquitous Kerr nonlinearity of optical waveguides. Most typically, these settings are modelled by nonlinear Schr\"{o}dinger equations (NLSEs) with the $\mathcal{PT}$-symmetric potentials, subject to constraint (\ref{minus}), and cubic terms. Such models may give rise to $\mathcal{PT}$-symmetric solitons, which were considered, chiefly theoretically, in a large number of works (see, in particular, theoretical papers \cite{soliton}, \cite{Konotop}-\cite{Alexeeva} and recent reviews \cite{review1,review2}), and were experimentally demonstrated too \cite{exp7}. While most of these works were dealing with one-dimensional (1D) models, stable $\mathcal{PT}$-symmetric solitons were also found in some two-dimensional (2D) models \cite{Yang}, \cite{2D-1}-\cite{2D-4}, \cite{HS}. A characteristic feature of solitons in $\mathcal{PT}$-symmetric systems is that, although these systems model, generally speaking, dissipative dynamics (the systems have no dynamical invariants), their solitons form continuous families, as in conservative systems (defined by usual Hermitian Hamiltonians) \cite{families}, while traditional dissipative nonlinear systems normally give rise to isolated solutions in the form of dissipative solitons, which do not form families (if a dissipative soliton is stable, it plays the role of an attractor in the system's dynamics \cite{diss1}-\cite{diss3}).
Similar to their linear counterparts, soliton states are also subject to destabilization via the breaking of the $\mathcal{PT}$ symmetry at a critical value of the strength of the gain-loss terms \cite{breaking}. Nevertheless, there are specific models which make the solitons' $\mathcal{PT}$ symmetry \emph{unbreakable}, extending it to arbitrarily large values of the gain-loss strength, i.e., the coefficient in front of the non-Hermitian part of the respective Hamiltonian \cite{unbreakable}-\cite{China}. The peculiar property of those models is that self-trapping of solitons is provided not by the usual self-focusing sign of the cubic nonlinearity, but by the opposite, defocusing sign, with the local strength of the self-defocusing growing fast enough from the center to the periphery. For conservative systems (in the absence of gain and loss), this scheme of the self-trapping of stable 1D, 2D, and 3D solitons was elaborated previously in a number of works \cite{Barcelona0}-\cite{Barcelona11}. The objective of the present article is to provide a brief survey of systems which may support the unbreakable $\mathcal{PT}$ symmetry, as this property is quite promising for potential applications, and is interesting in its own right. It was recently elaborated in two completely different settings. One is the above-mentioned model with the solitons supported by the spatially growing strength of the local self-defocusing. On the other hand, a possibility of creating the $\mathcal{PT}$ symmetry persisting up to indefinitely large values of the gain-loss coefficient was also discovered in the context of nanophotonics, considering light propagation in structures combining refractive, amplifying, and attenuating elements at a subwavelength scale \cite{sub}.
This setting was theoretically analyzed in a purely linear form, with an essential peculiarity that the corresponding model is, naturally, based on the full system of Maxwell's equations, rather than the paraxial-propagation equation of the Schr\"{o}dinger type, which was used in an absolute majority of works dealing with the $\mathcal{PT}$ symmetry in optical media. Basic findings for the restoration of the $\mathcal{PT}$ symmetry, and a possibility of making it completely unbreakable in the linear nanophotonic model, are presented below in Section II. The results for unbreakable 1D $\mathcal{PT}$-symmetric solitons in the model based on the paraxial-propagation NLSE with the spatially growing strength of the self-defocusing nonlinearity are summarized in Section III. It is followed by Section IV, which reports \emph{new results} for 2D extensions of the unbreakable $\mathcal{PT}$ symmetry in a nonlinear model of a similar type. We consider two different versions of the 2D system, with the quasi-one-dimensional or fully two-dimensional $\mathcal{PT}$ symmetry, the former meaning that the gain and loss are swapped by the reflection $x\Leftrightarrow -x$, while the reflection in the perpendicular direction, $y\Leftrightarrow -y$, leaves the gain-loss pattern invariant. The main issue is the stability of the 2D $\mathcal{PT}$-symmetric solitons, which turn out to be essentially more stable in the case of the quasi-1D symmetry than in the framework of the full 2D scheme. An essential asset of the 1D, quasi-1D, and full 2D models is that a number of soliton solutions can be obtained in an exact analytical form, even if not all of them are stable.

\section{Restoration and persistence of the $\mathcal{PT}$ symmetry in the photonic medium with a subwavelength structure}

Following Ref.
\cite{sub}, we here consider the propagation of monochromatic light beams with the TM (transverse-magnetic) polarization, which include only the $\mathcal{E}_{x}$, $\mathcal{E}_{z}$, and $\mathcal{H}_{y}$ components of the electric and magnetic fields. The propagation is considered along the $z$ axis in an effectively 2D medium whose dielectric permittivity is modulated in the transverse direction, $x$. The spatial evolution of the field components is governed by the reduced system of Maxwell's equations:
\begin{gather}
i\frac{\partial \mathcal{E}_{x}}{\partial z}=-\frac{1}{\varepsilon _{0}\omega }\frac{\partial }{\partial x}\left( \frac{1}{\varepsilon _{\mathrm{rel}}}\frac{\partial \mathcal{H}_{y}}{\partial x}\right) -\mu _{0}\omega \mathcal{H}_{y},  \notag \\
i\frac{\partial \mathcal{H}_{y}}{\partial z}=-\varepsilon _{0}\varepsilon _{\mathrm{rel}}\omega \mathcal{E}_{x},  \label{ME} \\
\mathcal{E}_{z}=\frac{i}{\varepsilon _{0}\varepsilon _{\mathrm{rel}}\omega }\frac{\partial \mathcal{H}_{y}}{\partial x},  \notag
\end{gather}
where $\omega $ is the frequency of the monochromatic carrier, $\varepsilon _{0}$ and $\mu _{0}$ are the vacuum permittivity and permeability, and $\varepsilon _{\mathrm{rel}}=\varepsilon _{\mathrm{bg}}+\varepsilon ^{\mathrm{re}}(x)+i\varepsilon ^{\mathrm{im}}(x)$ is the complex relative permittivity of the $\mathcal{PT}$-symmetric structure, with $x$-dependent real and imaginary parts added to the background permittivity, $\varepsilon _{\mathrm{bg}}$. Two different modulation patterns were considered in Ref. \cite{sub}, corresponding, respectively, to a single waveguiding channel or a periodic guiding structure in the $\left( x,z\right) $ plane. In this article, we focus on solitary (localized) modes, therefore only the former pattern is explicitly considered.
It is defined by the following transverse ($x$-dependent) profile:
\begin{equation}
\varepsilon _{\mathrm{rel}}(x)=\varepsilon _{\mathrm{bg}}+\mathrm{sech}^{2}\left( \frac{x}{d}\right) \left[ p+i\alpha \sinh \left( \frac{x}{d}\right) \right] ,  \label{channel}
\end{equation}
where $d$ and $p>0$ represent, severally, the width and depth of the guiding channel, while $\alpha >0$ is the strength of the gain-loss term. In accordance with the general definition of the $\mathcal{PT}$ symmetry, the real and imaginary parts of the profile are even and odd functions of $x$, respectively, cf. Eq. (\ref{minus}). Eigenmodes for subwavelength beams with propagation constant $b$ are looked for as solutions to Eq. (\ref{ME}) in the form of
\begin{equation}
\left\{ \mathcal{E}_{x}(x,z),\mathcal{H}_{y}\left( x,z\right) ,\mathcal{E}_{z}\left( x,z\right) \right\} =e^{ibz}\left\{ E_{x}(x),H_{y}\left( x\right) ,E_{z}\left( x\right) \right\} .  \label{eigen}
\end{equation}
Numerical solution of Eq. (\ref{ME}) with modulation profile (\ref{channel}) has produced three types of solutions \cite{sub}: (i) ones with real $b>\sqrt{\varepsilon _{\mathrm{bg}}}$ represent stable $\mathcal{PT}$-symmetric beams guided by the channel; (ii) solutions with a complex propagation constant, which has Re$\left( b\right) >\sqrt{\varepsilon _{\mathrm{bg}}}$, Im$(b)\neq 0$, represent, as it follows from Eq. (\ref{eigen}), exponentially growing (unstable) channel-guided modes with broken $\mathcal{PT}$ symmetry; and (iii) delocalized modes, which are not actually guided by the channel, have Re$\left( b\right) <\sqrt{\varepsilon _{\mathrm{bg}}}$. The situation which occurs in a majority of previously studied models is that, with the increase of the gain-loss strength, $\alpha $, the $\mathcal{PT}$ symmetry of the guided states suffers breaking at a critical value, $\alpha _{\mathrm{cr}}$.
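For orientation, the mode classification above can be reproduced by a minimal finite-difference sketch written under our own assumptions (it is an illustration, not the solver used in Ref. \cite{sub}): eliminating $E_{x}$ from Eq. (\ref{ME}) gives the standard TM-mode equation $b^{2}k_{0}^{2}H_{y}=\varepsilon _{\mathrm{rel}}\,\frac{d}{dx}\left( \frac{1}{\varepsilon _{\mathrm{rel}}}\frac{dH_{y}}{dx}\right) +k_{0}^{2}\varepsilon _{\mathrm{rel}}H_{y}$, where $b$ is treated as the effective index (consistent with the comparison of $b$ with $\sqrt{\varepsilon _{\mathrm{bg}}}$ in the text); the grid, Dirichlet walls, and micron units are our choices.

```python
import numpy as np

# Parameters quoted in the text; lengths measured in microns (our choice)
lam, eps_bg, p, d = 0.6328, 2.25, 1.7, 0.12
k0 = 2 * np.pi / lam

def eps_rel(x, alpha):
    """Channel profile, Eq. (channel)."""
    return eps_bg + (p + 1j * alpha * np.sinh(x / d)) / np.cosh(x / d)**2

def effective_indices(alpha, N=400, L=2.0):
    """Effective indices b of TM modes: b^2 k0^2 H = eps (H'/eps)' + k0^2 eps H,
    discretized with a conservative 3-point stencil and H = 0 at x = +-L."""
    x = np.linspace(-L, L, N)
    h = x[1] - x[0]
    eps = eps_rel(x, alpha)
    inv_eps_half = 1.0 / eps_rel(0.5 * (x[:-1] + x[1:]), alpha)  # midpoints
    M = np.zeros((N, N), dtype=complex)
    for i in range(1, N - 1):
        M[i, i - 1] = eps[i] * inv_eps_half[i - 1] / h**2
        M[i, i + 1] = eps[i] * inv_eps_half[i] / h**2
        M[i, i] = -eps[i] * (inv_eps_half[i - 1] + inv_eps_half[i]) / h**2 \
                  + k0**2 * eps[i]
    return np.sqrt(np.linalg.eigvals(M[1:-1, 1:-1]) + 0j) / k0

b = effective_indices(alpha=0.0)
guided = b[b.real > np.sqrt(eps_bg)]   # criterion (i) of the text
assert guided.size > 0 and np.all(np.abs(guided.imag) < 1e-6)
```

Scanning $\alpha $ and tracking where Im$(b)$ of the guided modes departs from zero is, in principle, how a critical value $\alpha _{\mathrm{cr}}$ is located numerically.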
This is indeed observed in the present case in the nearly-paraxial regime, namely, at $d/\lambda \gtrsim 1/5$, where $\lambda $ is the underlying wavelength of the optical beam (below, following Ref. \cite{sub}, particular results are displayed for $\lambda =632.8$ nm (visible red) and $\varepsilon _{\mathrm{bg}}=2.25$). In particular, at $d=120$ nm, the breaking of the $\mathcal{PT}$ symmetry takes place at $\alpha _{\mathrm{cr}}\approx 1.95$, see Fig. \ref{fig1}(a) (in Fig. \ref{fig1}, the $\mathcal{PT}$-symmetric modes exist at a single value of the propagation constant, as the underlying wavelength is fixed). However, in the deeply subwavelength situation, corresponding to essentially smaller channel's widths, such as $d=60$ nm $\simeq \lambda /10$ and $30$ nm $\simeq \lambda /20$ (see Figs. \ref{fig1}(b,c)), a drastically different situation is observed: in the former case, the breaking of the $\mathcal{PT}$ symmetry is followed by its \emph{restoration} at still larger values of $\alpha $, and in the latter case the breaking \emph{does not happen} at all. It is relevant to mention that a similar effect of the spontaneous restoration of the $\mathcal{PT}$ symmetry, although not the full elimination of the symmetry breaking, was reported too in some other models (based on the paraxial, rather than subwavelength, equations), including a linear discrete system of the Aubry-Andr\'{e} type \cite{Joglekar}, and a nonlinear model based on the NLSE in 1D \cite{Segev}. Examples of unbreakable $\mathcal{PT}$ symmetry are known too in simple models with few degrees of freedom, such as a $\mathcal{PT}$ dimer \cite{Barash2}.

\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\textwidth]{Fig1.pdf}
\caption{Real and imaginary parts of the propagation constant, $b_{\mathrm{r}}$ and $b_{\mathrm{i}}$, versus the gain-loss strength, $\protect\alpha $, in Eq.
(\protect\ref{channel}), for the guiding channel with depth $p=1.7$ and widths $d=120$ nm (a), $d=60$ nm (b), and $d=30$ nm (c) (as per Ref. \protect\cite{sub}). Circles in panel (b) designate examples of the eigenmodes displayed in Fig. \protect\ref{fig2}. The underlying wavelength is $\protect\lambda =632.8$ nm, and the background dielectric permittivity is $\protect\varepsilon _{\mathrm{bg}}=2.25$. The emergence of $b_{\mathrm{i}}$ in panels (a) and (b) signals the breaking of the $\mathcal{PT}$ symmetry, while the disappearance of $b_{\mathrm{i}}$ in (b) implies the \emph{restoration} of the symmetry. In the case shown in (c), the $\mathcal{PT}$ symmetry is \emph{never broken}.}
\label{fig1}
\end{figure}

A set of typical eigenmodes of the electromagnetic fields, which correspond, respectively, to the unbroken, broken, and restored $\mathcal{PT}$ symmetry, is displayed in Fig. \ref{fig2}. It is clearly seen that, in the case of the unbroken and restored symmetry, each field component is either spatially even or odd, while the modal spatial (anti)symmetry is broken too when the $\mathcal{PT}$ symmetry does not hold.

\begin{figure}[tbp]
\centering\includegraphics[width=0.5\textwidth]{Fig2.pdf}
\caption{Profiles of the guided modes designated by circles in Fig. \protect\ref{fig1}, at $\protect\alpha =2.0$ (a), $4.5$ (b), and $9.5$ (c), which are typical modes with unbroken, broken, and \emph{restored} $\mathcal{PT}$ symmetry, respectively (as per Ref. \protect\cite{sub}). The fields are plotted in dimensionless units, while the transverse coordinate $x$ is measured in $\mathrm{\protect\mu }$m.}
\label{fig2}
\end{figure}

Finally, the results of the consideration of the model are summarized in Fig. \ref{fig3}, which shows regions of the unbroken, broken, and restored $\mathcal{PT}$ symmetry in the plane of the essential control parameters, \textit{viz}., the gain-loss coefficient, $\alpha $, and the width of the guiding channel, $d$.
Relatively small areas where no guided modes exist (in that case, the optical beam coupled into the channel waveguide suffers delocalization, spreading out into the entire $\left( x,z\right) $ plane) are shown too. The conclusion suggested by Fig. \ref{fig3} is quite clear: in the near-paraxial regime, corresponding to a relatively broad guiding channel, with $d\gtrsim 120$ nm, the usual scenario of the $\mathcal{PT}$-symmetry breaking, following the increase of $\alpha $, is observed. However, in the deeply subwavelength region, the symmetry (hence, the stability of the guided modes too) is either readily \emph{restored} with the further increase of $\alpha $, or is \emph{never broken}. Figure \ref{fig3} demonstrates too that region \textbf{3} of the unbroken and restored symmetry tends to expand, although not very dramatically, with the increase of the channel's depth, $p$ (see Eq. (\ref{channel})), while, quite naturally, the delocalization area \textbf{2} shrinks.

\begin{figure}[tbp]
\centering\includegraphics[width=0.5\textwidth]{Fig3.pdf}
\caption{Domains of the existence and stability of the $\mathcal{PT}$-symmetric modes guided by channel (\protect\ref{channel}), in the plane of the channel's width, $d$, and the gain-loss coefficient, $\protect\alpha $ (as per Ref. \protect\cite{sub}). The depth of the channel is $p=0.3$ in (a), which represents a shallow conduit, and $p=1.7$ in (b), representing a deep one.
The symmetry is broken in region \textbf{1} and unbroken \emph{or restored} in region \textbf{3}, while region \textbf{2} does not support any localized mode.}
\label{fig3}
\end{figure}

\section{Unbreakable $\mathcal{PT}$-symmetric solitons in one dimension}

The 1D model which is capable of supporting solitons with unbreakable $\mathcal{PT}$ symmetry, by means of the self-defocusing nonlinearity with the local strength, $S(\eta )$, growing from the center to infinity as a function of coordinate $\eta $, is based on the NLSE for the amplitude of the electromagnetic field, $u$ \cite{unbreakable}:
\begin{equation}
i\frac{\partial u}{\partial \xi }+\frac{1}{2}\frac{\partial ^{2}u}{\partial \eta ^{2}}-S(\eta )|u|^{2}u=iR(\eta )u,  \label{qq}
\end{equation}
where $\xi $ is the propagation coordinate, and $S(\eta )$ provides for the self-trapping of 1D solitons under the condition that $S(\eta )$ grows faster than $|\eta |$ at $|\eta |\rightarrow \infty $ \cite{Barcelona1,Barcelona2}. Here, following Ref. \cite{unbreakable}, we adopt a steep anti-Gaussian modulation profile,
\begin{equation}
S(\eta )=\left( 1+\sigma \eta ^{2}\right) \exp \left( \frac{1}{2}\eta ^{2}\right) ,  \label{Gauss}
\end{equation}
where the coefficients equal to $1$ and $1/2$ may be fixed at these values by means of rescaling of a more general expression. Further, the spatially odd gain-loss modulation profile is also adopted as it was done in Ref. \cite{unbreakable}:
\begin{equation}
R(\eta )=\beta \eta \exp \left( -\Gamma \eta ^{2}\right) ,  \label{gamma}
\end{equation}
with $\beta >0$ and $\Gamma \geq 0$. An advantage of fixing the profiles in the form of Eqs. (\ref{Gauss}) and (\ref{gamma}) is that they admit a particular exact solution for the self-trapped $\mathcal{PT}$-symmetric soliton \cite{unbreakable}, provided that $\Gamma =0$ is set in Eq.
(\ref{gamma}):
\begin{equation}
u\left( \eta ,\xi \right) =\frac{1}{2\sqrt{2\sigma }}\exp \left( ib\xi -2i\beta \eta -\frac{1}{4}\eta ^{2}\right) ,  \label{exact}
\end{equation}
at a single value of the propagation constant:
\begin{equation}
b=-\left( 2\beta ^{2}+\frac{1}{4}+\frac{1}{8\sigma }\right) .  \label{b}
\end{equation}
The availability of the exact solution is principally important for establishing the concept of the \textit{unbreakability} of the $\mathcal{PT}$ symmetry: obviously, the solution given by Eqs. (\ref{exact}) and (\ref{b}) exists for \emph{arbitrarily large} values of the gain-loss strength, $\beta $, there being no critical value beyond which solitons would not exist. Moreover, in Ref. \cite{unbreakable} it was checked, at least in a part of the parameter plane $\left( \beta ,\sigma \right) $, that the exact solitons are stable. It is relevant to stress that the model with the sufficiently quickly growing nonlinearity coefficient $S(\eta )$ is \emph{nonlinearizable}: the form of the decaying tails of generic self-trapped modes can be investigated analytically (it turns out to be the same as in the particular exact solution (\ref{exact})), but it is necessary to keep the nonlinear term in Eq. (\ref{qq}) for this purpose \cite{Barcelona0,Barcelona1}. Accordingly, the linear spectrum of the present model cannot be defined, the respective concept of the $\mathcal{PT}$ symmetry and its breaking or unbreakability being a nonlinear one too. The same pertains to the 2D model considered in the next section. Numerical solution of Eq. (\ref{qq}) produces many families of complex solitons with real propagation constant $b$, in the form of
\begin{equation}
u\left( \eta ,\xi \right) =\exp \left( ib\xi \right) \left[ w_{\mathrm{r}}(\eta )+iw_{\mathrm{i}}(\eta )\right] ,  \label{w}
\end{equation}
which may be naturally identified as fundamental solitons, dipoles, tripoles, quadrupoles, and so on.
These solution types feature profiles of $|w(\eta )|\equiv \sqrt{w_{\mathrm{r}}^{2}(\eta )+w_{\mathrm{i}}^{2}(\eta )}$ with, respectively, one, two, three, etc. peaks (local maxima). Solitons are characterized by their integral power,
\begin{equation}
U=\int_{-\infty }^{+\infty }\left\vert w(\eta )\right\vert ^{2}d\eta .  \label{U}
\end{equation}
Characteristic examples of stable fundamental and dipole solutions are displayed in Fig. \ref{fig4} (they were obtained for $\sigma =0$, in which case the exact soliton (\ref{exact}) does not exist, but numerically found solitons are available and may be stable). It is seen that the increase of the gain-loss coefficient, $\beta $, makes the shape of the solitons more complex, but the fundamental and dipole solitons remain fully stable as long as they exist, while the higher-order tripoles and quadrupoles have both stability and instability areas \cite{unbreakable} (as briefly shown in Fig. \ref{fig5}(b)).

\begin{figure}[tbp]
\centering\includegraphics[width=0.5\textwidth]{Fig4.pdf}
\caption{Profiles of fundamental (a),(b) and dipole (c),(d) stable one-dimensional solitons, found as numerical solutions of Eq. (\protect\ref{qq}) with $\protect\sigma =0$ and $\Gamma =1/2$, for a fixed value of the propagation constant, $b=-10$ (as per Ref. \protect\cite{unbreakable}). Panels (a), (c) and (b), (d) pertain, severally, to $\protect\beta =1.04$ and $3.47$.}
\label{fig4}
\end{figure}

The most essential results characterizing the behavior of solitons in the present model are collected in Fig. \ref{fig5}. In particular, Fig. \ref{fig5}(a) shows that, at fixed $b$, the branches of the fundamental and dipole solitons, remaining completely stable, merge and disappear, with the increase of the gain-loss coefficient, $\beta $, at a critical (\textquotedblleft upper\textquotedblright ) value, which is $\beta _{\mathrm{upp}}\approx 2.135$ in Fig. \ref{fig5}(a).
However, stable fundamental and dipole solitons can be found at arbitrarily high values of $\beta $, as demonstrated by the lower curve in Fig. \ref{fig5}(c), which shows the critical value $\beta _{\mathrm{upp}}$ vs. $b$: obviously, $\beta $ may become indefinitely large with the increase of $|b|$. In addition, the upper curve shows the growth with $|b|$ of a similar critical (\textquotedblleft upper\textquotedblright ) value at which another pair of solitons, \textit{viz}., tripoles and quadrupoles, merge, as can be seen in Fig. \ref{fig5}(b) (however, unlike the fundamental and dipole modes, the tripole and quadrupole branches become unstable prior to the merger, as seen in Fig. \ref{fig5}(b)).

\begin{figure}[tbp]
\centering\includegraphics[width=0.5\textwidth]{Fig5.pdf}
\caption{The solitons' integral power, defined in Eq. (\protect\ref{U}), vs. the gain-loss strength, $\protect\beta $, for the branches of the fundamental and dipole solitons (a), and ones of the tripole and quadrupole types (b) (as per Ref. \protect\cite{unbreakable}). In (b), black and red segments designate stable and unstable solitons, respectively (the fundamental and dipole solitons are completely stable in their existence areas). Circles in (a) correspond to examples of the solutions shown in Fig. \protect\ref{fig4} (circles in (b) correspond to examples of stable and unstable tripole and quadrupole solitons which can be found in Ref. \protect\cite{unbreakable}, but are not shown here). The families are produced for $\protect\sigma =0$ in Eq. (\protect\ref{Gauss}), $\Gamma =1/2$ in Eq. (\protect\ref{gamma}), and fixed propagation constant, $b=-10$. The fundamental and dipole families merge at $\protect\beta \approx 2.135$, while the tripole and quadrupole ones merge at $\protect\beta \approx 3.565$. (c) The critical (\textquotedblleft upper\textquotedblright ) value, $\protect\beta _{\mathrm{upp}}^{\mathrm{d}}$, at which the fundamental and dipole branches merge, vs. the propagation constant, $b$.
Curve $\protect\beta _{\mathrm{upp}}^{\mathrm{q}}(b)$ shows the same for the merger of the tripole and quadrupole branches.}
\label{fig5}
\end{figure}

\section{Unbreakable $\mathcal{PT}$-symmetric solitons in two dimensions}

\subsection{The model and analytical solutions}

Results presented in the above sections summarize findings originally published in Refs. \cite{sub} and \cite{unbreakable}, respectively. Here we report previously unpublished analytical and numerical results obtained for 2D generalizations of the model based on Eq. (\ref{qq}). The 2D model with transverse coordinates $\left( x,y\right) $ and propagation distance $z$ is based on the following NLSE for the amplitude of the electromagnetic field, $w\left( x,y,z\right) $:
\begin{equation}
i\frac{\partial w}{\partial z}+\frac{1}{2}\left( \frac{\partial ^{2}w}{\partial x^{2}}+\frac{\partial ^{2}w}{\partial y^{2}}\right) -S(r)|w|^{2}w=iR\left( x,y\right) w,  \label{NLS}
\end{equation}
where $r\equiv \sqrt{x^{2}+y^{2}}$ is the radial coordinate, and the nonlinearity-modulation profile is chosen similar to its 1D counterpart (\ref{Gauss}):
\begin{equation}
S(r)=\left( 1+\sigma r^{2}\right) \exp \left( r^{2}\right) ,  \label{sigma}
\end{equation}
with $\sigma \geq 0$. Here we consider two different versions of the gain-loss spatial profile: a quasi-1D one, antisymmetric only with respect to $x$:
\begin{equation}
R\left( x,y\right) =\beta _{0}x\exp \left( -\Gamma r^{2}\right) ,  \label{x}
\end{equation}
and a profile antisymmetric with respect to both $x$ and $y$, which may be called a fully 2D one:
\begin{equation}
R\left( x,y\right) =\beta _{0}xy\exp \left( -\Gamma r^{2}\right) ,  \label{xy}
\end{equation}
with constants $\Gamma \geq 0$ and $\beta _{0}>0$.
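The distinction between the two gain-loss profiles amounts to simple parity checks, which the following sketch verifies (the numerical values of $\beta _{0}$ and $\Gamma $ are illustrative choices of our own):

```python
import numpy as np

beta0, Gamma = 1.0, 0.5                  # illustrative values
rng = np.random.default_rng(1)
x, y = rng.normal(size=1000), rng.normal(size=1000)

def R_quasi1d(x, y):   # Eq. (x)
    return beta0 * x * np.exp(-Gamma * (x**2 + y**2))

def R_full2d(x, y):    # Eq. (xy)
    return beta0 * x * y * np.exp(-Gamma * (x**2 + y**2))

# quasi-1D: gain and loss are swapped by x -> -x, invariant under y -> -y
assert np.allclose(R_quasi1d(-x, y), -R_quasi1d(x, y))
assert np.allclose(R_quasi1d(x, -y), R_quasi1d(x, y))
# fully 2D: each reflection separately swaps gain and loss
assert np.allclose(R_full2d(-x, y), -R_full2d(x, y))
assert np.allclose(R_full2d(x, -y), -R_full2d(x, y))
```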
Stationary solutions with a real propagation constant, $b$, are looked for as
\begin{equation}
w\left( x,y,z\right) =\exp \left( ibz\right) W\left( x,y\right) ,  \label{uU}
\end{equation}
with the complex function $W\left( x,y\right) $ satisfying the following equation:
\begin{equation}
bW=\frac{1}{2}\left( \frac{\partial ^{2}W}{\partial x^{2}}+\frac{\partial ^{2}W}{\partial y^{2}}\right) -S(r)|W|^{2}W-iR\left( x,y\right) W.  \label{UU}
\end{equation}
In the case of $\Gamma =0$ in Eqs. (\ref{x}) and (\ref{xy}), Eq. (\ref{UU}), with $S(r)$ and $R\left( x,y\right) $ taken in the form of Eqs. (\ref{sigma}) and (\ref{x}), gives rise to an exact analytical solution:
\begin{equation}
W\left( x,y\right) =W_{0}\exp \left( -\frac{1}{2}r^{2}-i\beta _{0}x\right) ,  \label{exact1}
\end{equation}
(cf. the 1D solution (\ref{exact})), with
\begin{equation}
W_{0}^{2}=\frac{1}{2\sigma },~b=-\left( 1+\frac{\beta _{0}^{2}}{2}+\frac{1}{2\sigma }\right) .  \label{exact1parameters}
\end{equation}
This solution exists for all values of the control parameters, $\beta _{0}$ and $\sigma $, except for $\sigma =0$. Further, Eq. (\ref{UU}) with $S(r)$ and $R\left( x,y\right) $ taken in the form of Eqs. (\ref{sigma}) and (\ref{xy}), where $\Gamma =0$ is again fixed, also gives rise to an exact solution:
\begin{equation}
W\left( x,y\right) =W_{0}\exp \left( -\frac{1}{2}r^{2}-\frac{1}{2}i\beta _{0}xy\right) ,  \label{exact2}
\end{equation}
this time with
\begin{equation}
W_{0}^{2}=\frac{1}{2\sigma }\left( 1-\left( \frac{\beta _{0}}{2}\right) ^{2}\right) ,~b=-\left[ 1+\frac{1}{2\sigma }\left( 1-\left( \frac{\beta _{0}}{2}\right) ^{2}\right) \right] .  \label{exact2parameters}
\end{equation}
This solution exists if Eq. (\ref{exact2parameters}) yields $W_{0}^{2}>0$, i.e., $\beta _{0}<2$ and $\sigma >0$. Another exact solution of Eq. (\ref{UU}), with $S(r)$ and $R\left( x,y\right) $ again taken in the form of Eqs.
(\ref{sigma}) and (\ref{xy}), exists under the special conditions
\begin{equation}
\beta _{0}=2,~\sigma =0,~\Gamma =0.  \label{special}
\end{equation}
This solution is also found in the form of ansatz (\ref{exact2}), precisely with $\frac{1}{2}\beta _{0}$ replaced by $1$, as per Eq. (\ref{special}). However, unlike the solution represented by Eqs. (\ref{exact2}) and (\ref{exact2parameters}), this time it is not a single solution, but a \emph{continuous family} of exact solutions, with \emph{arbitrary amplitude} $W_{0}$ and propagation constant
\begin{equation}
b=-\left( 1+W_{0}^{2}\right) .  \label{k}
\end{equation}
The possibility to obtain the continuous family of the exact 2D solitons, instead of an isolated one, is a compensation for selecting the special values of the parameters, as fixed by Eq. (\ref{special}). The exact solutions clearly suggest that the quasi-1D model, based on Eq. (\ref{x}), features the unbreakable $\mathcal{PT}$ symmetry, as the respective solution, given by Eqs. (\ref{exact1}) and (\ref{exact1parameters}), exists for an arbitrarily large strength of the gain-loss term, $\beta _{0}$. On the other hand, the full 2D model, based on Eq. (\ref{xy}), gives rise to the exact solutions, in the form of Eqs. (\ref{exact2}), (\ref{exact2parameters}) or (\ref{special}), (\ref{k}), which exist only at $\beta _{0}\leq 2$, hence the unbreakability of the $\mathcal{PT}$ symmetry is not guaranteed in the latter case.

\subsection{Numerical results}

\subsubsection{The quasi-1D model}

The exact solution of the model with the quasi-1D gain-loss modulation, given by Eqs. (\ref{exact1}) and (\ref{exact1parameters}), can be embedded into a family of solitons produced by a numerical solution of Eq. (\ref{UU}), with $S(r)$ and $R\left( x,y\right) $ taken as per Eqs. (\ref{sigma}) and (\ref{x}), respectively (the latter is taken here with $\Gamma =0$). The stationary 2D solutions were constructed by means of the Newton conjugate gradient method.
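The exact solutions quoted above can be validated by direct substitution into Eq. (\ref{UU}) on a finite-difference grid. The sketch below (grid size, domain, and parameter values are our own illustrative choices) checks that the residual of Eqs. (\ref{exact1})-(\ref{exact1parameters}) and of Eqs. (\ref{exact2})-(\ref{exact2parameters}), with $\Gamma =0$, vanishes up to the $O(h^{2})$ discretization error:

```python
import numpy as np

sigma, beta0 = 1.0, 1.5      # illustrative parameters (Gamma = 0 throughout)
N, L = 801, 5.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2
S = (1 + sigma * r2) * np.exp(r2)                      # Eq. (sigma)

def residual(W, b, R):
    """Max norm of (1/2) lap W - S |W|^2 W - i R W - b W over interior points,
    i.e., the discrete residual of Eq. (UU); it should vanish for exact solutions."""
    lap = (W[2:, 1:-1] + W[:-2, 1:-1] + W[1:-1, 2:] + W[1:-1, :-2]
           - 4 * W[1:-1, 1:-1]) / h**2
    F = 0.5 * lap - (S * np.abs(W)**2 * W + 1j * R * W + b * W)[1:-1, 1:-1]
    return np.abs(F).max()

# quasi-1D gain-loss profile, Eqs. (exact1), (exact1parameters)
W0 = np.sqrt(1 / (2 * sigma))
W1 = W0 * np.exp(-r2 / 2 - 1j * beta0 * X)
b1 = -(1 + beta0**2 / 2 + 1 / (2 * sigma))
assert residual(W1, b1, beta0 * X) < 5e-3 * W0

# full 2D gain-loss profile, Eqs. (exact2), (exact2parameters); requires beta0 < 2
A2 = (1 - (beta0 / 2)**2) / (2 * sigma)
W2 = np.sqrt(A2) * np.exp(-r2 / 2 - 0.5j * beta0 * X * Y)
b2 = -(1 + A2)
assert residual(W2, b2, beta0 * X * Y) < 5e-3 * np.sqrt(A2)
```

Such a discrete residual is also the natural convergence measure for Newton-type iterations of the kind mentioned above.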
Then, the stability of the stationary states was identified by numerical computation of eigenvalues of small perturbations, using the linearized equations for perturbations around the stationary solitons. This computation was performed with the help of the spectral collocation method. Finally, the stability prediction, based on the eigenvalues, was verified through direct simulations of the perturbed evolution of the solitons. Generic examples of numerically found stable and unstable solitons, which may have single- and dual-peak shapes, are shown in Fig. \ref{fig6}. In accordance with these examples, all the double-peak solitons are unstable, while almost all the single-peak ones are stable. In particular, all the exact solutions, given by Eqs. (\ref{exact1}) and (\ref{exact1parameters}), are found to be stable.

\begin{figure}[tbp]
\centering
\subfigure{\includegraphics[width=0.45\textwidth]{Fig6a.pdf}} \newline
\subfigure{\includegraphics[width=0.45\textwidth]{Fig6b.pdf}}
\caption{Typical examples of 2D $\mathcal{PT}$-symmetric solitons produced by the model with the quasi-1D gain-loss profile defined by Eq. (\protect\ref{x}). The top and bottom panels display, severally, a stable single-peak soliton with propagation constant $b=-2$, and an unstable dual-peak one with $b=-2.7$. In both cases, other parameters are $\protect\beta _{0}=0.8$, $\protect\sigma =1$, and $\Gamma =0$.}
\label{fig6}
\end{figure}

Results of the stability analysis for the $\mathcal{PT}$-symmetric solitons in the model with the quasi-1D shape of the gain-loss term, based on the eigenvalue computation, are summarized by the stability chart in the plane of $\left( b,\beta _{0}\right) $, i.e., the soliton's propagation constant and the strength of the gain-loss term in Eq. (\ref{x}), which is displayed in Fig. \ref{fig7}. Direct simulations completely corroborate the predictions produced by the stability eigenvalues.
In particular, the solitons which are predicted to be unstable are indeed destroyed, decaying in the course of the perturbed evolution. This figure corroborates the unbreakable character of the $\mathcal{PT}$ symmetry in the model, as the stability region does not exhibit a boundary at large values of $\beta _{0}$. \begin{figure}[tbp] \centering\includegraphics[width=0.5\textwidth]{Fig7.pdf} \caption{(Color online) The stability chart for the solitons supported by quasi-1D $\mathcal{PT}$-symmetric gain-loss profile (\protect\ref{x}) with $\Gamma =0$, in the case of $\protect\sigma =1$ in Eq. (\protect\ref{sigma}). Exact soliton solutions, given by Eqs. (\protect\ref{exact1}) and (\protect\ref{exact1parameters}), are indicated by stars (they all are stable), while stable and unstable numerically found solitons are shown by green and red dots, respectively. Numbers near the dots denote the number of peaks in each soliton (one or two). No soliton solutions were found in white areas.} \label{fig7} \end{figure} The stability chart, drawn in Fig. \ref{fig7} for $\sigma =1$ in Eq. (\ref{sigma}), is quite similar to its counterparts produced at other values of $\sigma >0$. The situation is different in the case of $\sigma =0$, when the exact solution given by Eqs. (\ref{exact1}) and (\ref{exact1parameters}) does not exist. The respective stability chart, displayed in Fig. \ref{fig8}, demonstrates essential differences from the one in Fig. \ref{fig7}: the stability area is conspicuously smaller, and all the solutions, unstable as well as stable ones, feature the single-peak shape. \begin{figure}[tbp] \centering\includegraphics[width=0.5\textwidth]{Fig8.pdf} \caption{The same as in Fig. \protect\ref{fig7} (the stability chart for $\mathcal{PT}$-symmetric solitons), but for $\protect\sigma =0$ in Eq. (\protect\ref{sigma}). Note an essential reduction of the stability area in comparison with its counterpart in Fig.
\protect\ref{fig7}.} \label{fig8} \end{figure} \subsubsection{The full 2D model} A drastic difference revealed by the stability analysis is that the exact solutions of the full 2D model, given by Eqs. (\ref{exact2}) and (\ref{exact2parameters}) for $\sigma >0$, $\Gamma =0$ and arbitrary $\beta _{0}$, and by Eq. (\ref{k}) for the special case (\ref{special}), are completely \emph{unstable}. Furthermore, all numerical solutions found in the full 2D model with $\Gamma =0$ in Eq. (\ref{xy}) are unstable too. The stabilization in this model may be provided by $\Gamma >0$, i.e., by confining the spatial growth of the local gain and loss in Eq. (\ref{xy}). For fixed $\sigma $, there is a minimum value $\Gamma _{\min }$ of $\Gamma $ which provides for the stabilization. In fact, $\Gamma _{\min }$ depends on the size of the solution domain: in an extremely large domain, one may find very broad solitons, i.e., ones with very small $b$ (see Eq. (\ref{UU})), at any $\Gamma >0$. Practically speaking, the size of the domain is always finite, as the steep growth of $S\left( r\right) $, defined as per Eq. (\ref{sigma}), cannot extend to infinity. As shown in Refs. \cite{Barcelona0}-\cite{Barcelona10}, it is sufficient to secure the adopted modulation profile of $S(r)$ on a scale which is essentially larger than a characteristic size of the soliton supported by this profile. Thus, we have concluded that, for instance, in the domain of size $|x|,|y|\leq 9$ the solitons are stable in the model with $\sigma =1$ in Eq. (\ref{sigma}) at $\Gamma \geq 0.2$ in Eq. (\ref{xy}), being explicitly unstable, e.g., at $\Gamma =0.1$. Typical examples of the stability charts for the $\mathcal{PT}$-symmetric solitons, numerically produced in the full 2D model with $\beta _{0}>0$, are displayed in Fig. \ref{fig9}. Naturally, the stability area expands with the increase of $\Gamma $. It is worth noting that Fig.
\ref{fig9}(b) clearly suggests that the $\mathcal{PT}$ symmetry in the model with the full 2D modulation of the gain-loss term may also be unbreakable, as the stability chart features no upper boundary. \begin{figure}[tbp] \centering \subfigure{\includegraphics[width=0.5\textwidth]{Fig9a.pdf}} \newline \subfigure{\includegraphics[width=0.5\textwidth]{Fig9b.pdf}} \caption{The same (stability charts) as in Figs. \protect\ref{fig7} and \protect\ref{fig8}, but for the full 2D model based on Eq. (\protect\ref{xy}), with $\Gamma =0.5$, $\protect\sigma =1$ and $\protect\sigma =0$ in the top and bottom panels, respectively.} \label{fig9} \end{figure} These charts include unstable and (very few) stable solitons with multi-peak shapes. Indeed, taking larger $\Gamma $, i.e., stronger confinement of the gain and loss in Eq. (\ref{xy}), it is possible to find \emph{stable} multi-peak solitons with rather complex shapes, an example being a stable four-peak soliton displayed in Fig. \ref{fig10} for $\Gamma =0.5$. \begin{figure}[tbp] \centering\includegraphics[width=0.45\textwidth]{Fig10.pdf} \caption{An example of a stable $\mathcal{PT}$-symmetric four-peak soliton with propagation constant $b=-3$, found in the full 2D model with $\protect\sigma =1$ in Eq. (\protect\ref{sigma}) and $\Gamma =0.5$ in Eq. (\protect\ref{xy}).} \label{fig10} \end{figure} The results for the quasi-1D and full 2D systems, reported in this section, do not provide an exhaustive analysis of these models. A comprehensive analysis, including, in particular, the consideration of possible solitons with embedded vorticity, will be presented elsewhere.
\section{Conclusion} The objective of this article is to summarize theoretical results which demonstrate the stabilization of the $\mathcal{PT}$ symmetry in both linear and nonlinear systems, making it possible to produce $\mathcal{PT}$% -symmetric states at arbitrarily large values of the strength of the gain-loss terms in the system, i.e., of the coefficient in front of the non-Hermitian part of the underlying $\mathcal{PT}$-symmetric Hamiltonian. In Sections II and III, we have surveyed previously reported results obtained in two altogether different settings. Namely, the possibility of the restoration and complete stabilization of the $\mathcal{PT}$ symmetry in the linear nanophotonic model of the waveguiding channel with a subwavelength width, the analysis of which is based on the full system of the Maxwell's equations, was recapitulated in Section II. The full stabilization, i.e., removal of the symmetry-breaking transition, takes place in the deeply subwavelength region. In Section III we have summarized results concerning the possibility of finding stable 1D solitons supported by the model with arbitrarily large values of the gain-loss coefficient, where the self-trapping of the solitons is provided by the self-defocusing nonlinearity with the local strength growing fast enough from the center to periphery.\ The model admits a particular exact solution for the fundamental soliton, the families of both fundamental and dipole modes being entirely stable. Section IV has presented new results for the unbreakable $\mathcal{PT}$% -symmetric solitons in two 2D extensions of the 1D model, \textit{viz}., with the quasi-1D and full 2D modulation profiles of the local gain-loss coefficient. These models also admit particular exact solutions, this time for 2D solitons. 
As a result, it is found that the quasi-1D model readily gives rise to a stable family of fundamental (single-peak) 2D solitons for an arbitrarily large strength of the gain-loss term, while dual-peak ones are unstable. On the other hand, the stability of the solitons in the model with the full 2D $\mathcal{PT}$ symmetry requires imposing spatial confinement on the gain-loss term. Further results for the 2D models will be presented elsewhere. \section*{Acknowledgments} This work was supported, in part, by Grant No. 2015616 from the joint program in physics between the NSF and Binational (US-Israel) Science Foundation, and by Grant No. 1286/17 from the Israel Science Foundation.
\section*{Introduction} Bound quiver algebras of finite connected quivers strongly influence research on representation theory of Artin algebras. Gabriel found a correspondence between finite dimensional algebras and linear representations of bound quivers (\cite{G}, \cite[II]{ASS}), so it follows that studying modules of finite dimensional algebras is reduced to studying modules of bound quiver algebras. In this paper, we concentrate on the study of path algebras, which are one type of bound quiver algebra. Nakayama Conjecture, Tachikawa Conjecture, and Auslander-Reiten Conjecture are some major research projects in ring theory that concern sufficient conditions for modules to be projective. Related to this, the following result has been known for Artin algebras: $(*)$ {\em For any finite dimensional algebra\footnote{Any finite dimensional algebra is Artin.} $\Lambda$ over an algebraically closed field of finite global dimension and any finitely generated $\Lambda$-module $M$, if ${\rm Ext}^{\geq 1}_\Lambda(M,\Lambda)=0$, then $M$ is projective} ({\bfsc Theorem \ref{fdA}}). A typical example of finite dimensional algebras is a path algebra of a finite acyclic quiver over an algebraically closed field. Since any path algebra of a quiver over an algebraically closed field is hereditary (even when the quiver is not finite, see e.g.
\cite[\S8.2]{GR}), that is, its global dimension is not larger than $1$, the following assertion also holds: {\em For any algebraically closed field $K$, any finite acyclic quiver $Q$ and any finitely generated $KQ$-module $M$, if ${\rm Ext}^{1}_{KQ}(M,KQ)=0$, then $M$ is projective.} In this paper, it is shown that the above assertion is also true for finite dimensional $K$-linear representations of {\em some} infinite quivers, for example, the following quiver of $A_\infty$ type (see {\bfsc Theorem \ref{fqa cor}}) \[ \xymatrix@C15pt{ \circ & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] } . \] We consider some specific quivers $Q$, as specified in {\bfsc Theorem \ref{cor final}}, one of which is the above quiver of $A_\infty$ type, to construct an infinitely generated {\em non-projective} $KQ$-module, which is denoted by $M_{\omega_1}$. To analyze such a $KQ$-module $M_{\omega_1}$, Martin's Axiom $\MA_{\aleph_1}$ (for $\aleph_1$ many dense sets) is used. $\MA_{\aleph_1}$ is a combinatorial axiom of set theory that can neither be proved nor refuted from Zermelo-Fraenkel set theory with the axiom of choice, {\sf ZFC} \cite{MartinSolovay, SolovayTennenbaum: iteration}. $\MA_{\aleph_1}$ is applied in many areas of mathematics to show that some mathematical statements cannot be refuted from {\sf ZFC} (see e.g. \cite{Fremlin}). One such example is Shelah's solution of the Whitehead Problem \cite{Shelah:W}. Our main result states that {\em if $K$ is a countable algebraically closed field and Martin's Axiom $\MA_{\aleph_1}$ holds, then ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)=0$} ({\bfsc Theorems \ref{inf simple thm}}, {\bfsc \ref{2 thm}} and {\bfsc \ref{cor final}}). Therefore, {\em under $\MA_{\aleph_1}$ and the assumption that $K$ is a countable algebraically closed field, the above assertion $(*)$ fails for quivers $Q$ as in {\bfsc Theorem \ref{cor final}} and infinitely generated $KQ$-modules}.
Trlifaj's construction is used to build such infinitely generated $KQ$-modules, which will be presented in \S\ref{ods}. This paper is intended to be fairly self-contained, but we will assume some basic knowledge about ordinals (see e.g. \cite[II.1, II.4]{EM} and \cite[I.7, III.6]{Kunen:new}). \S\ref{prel} provides necessary knowledge, which includes some facts on path algebras and set theory. \S \ref{inf gen pa} provides the proof of the main result of this paper. \section{Preliminaries} \label{prel} Throughout this paper, a module means a right module. For a ring $R$, ${\rm Mod}R$ denotes the category of the $R$-modules, and ${\rm mod}R$ denotes the category of the finitely generated $R$-modules. For an $R$-module $M$ and a subset $X$ of $M$, $\seq{X}_R$ denotes the $R$-submodule of the module $M$ generated by $X$. For an $R$-module $M$ and $R$-submodules $N_i$, $i\in I$, of $M$, $\displaystyle \sum_{i\in I} N_i$ denotes the $R$-submodule that is the $R$-linear span of the set $\displaystyle \bigcup_{i\in I} N_i$. We follow the notation of outer direct sums in \cite[I.2.]{EM}. For a family $\set{M_i:i\in I}$ of modules, the {\em product module} $\displaystyle \prod_{i\in I} M_i$ is the module whose underlying set is the set of functions $f$ with domain $I$ such that for each $i\in I$, $f(i)$ belongs to the set $M_i$, and the operations are defined coordinate-wise. For a member $f$ of the product $\displaystyle \prod_{i\in I} M_i$, the {\em support} ${\rm supp}(f)$ of $f$ is defined by the set \[ \set{i\in I: f(i)\neq 0_{M_i}} . \] The outer direct sum $\displaystyle \bigoplus_{i\in I} M_i$ of a family $\set{M_i:i\in I}$ of modules is the submodule of the product module $\displaystyle \prod_{i\in I} M_i$ which consists of the members of the set $\displaystyle \prod_{i\in I} M_i$ whose supports are finite. We adopt ordinals as the von Neumann ordinals, that is, an ordinal $\alpha$ means the set of ordinals less than $\alpha$.
So for ordinals $\alpha$ and $\beta$, $\alpha$ is less than $\beta$ iff $\alpha\in\beta$. $\omega$ is the set of all finite ordinals (non-negative integers), $\omega_1$ is the least uncountable ordinal (which is a cardinal). $\Lim$ denotes the class of all limit ordinals. The following is a well-known equivalence about projectivity. \begin{thm}[\text{\rm E.g. \cite[17.1., 17.2 Propositions]{AF}}] For a ring $R$ with identity and an $R$-module $P$, the following statements are equivalent. \begin{enumerate}[{\rm (1)}] \item For every $R$-epimorphism $f$ from an $R$-module $M$ onto an $R$-module $N$ and $R$-homomorphism $g$ from $P$ into $N$, there exists an $R$-homomorphism $h$ from $P$ into $M$ such that $g=f\circ h$. \item Every $R$-epimorphism from an $R$-module onto $P$ splits, that is, it is right invertible. \item The functor ${\rm Hom}_R(P, -)$ within the category ${\rm Mod}R$ is exact, that is, for every $R$-module $M$, ${\rm Ext}^1_R (P,M)=0$. \item $P$ is isomorphic to a direct summand of a free $R$-module. \end{enumerate} \end{thm} It is known that the statements from (1) to (3) are also equivalent even if the ring $R$ has no identity. \subsection{Path algebras} Quivers, path algebras, and linear representations of quivers are some basic concepts of representation theory of Artin algebras. Our notation and terminology are fairly standard, see e.g. \cite{ARS, ASS}. In the next paragraphs, for definitions, notation, and terminology we refer to \cite[Chapters II-III]{ASS}. A {\em quiver} denotes a directed graph. Any quiver $Q$ consists of a pair of a set $Q_0$ of vertices and a set $Q_1$ of arrows. Each arrow $a$ is equipped with its source $s(a)$ and its target $t(a)$. A quiver $Q=(Q_0,Q_1)$ is called {\em finite} if both $Q_0$ and $Q_1$ are finite sets.
A {\em path} of the quiver $Q$ is a finite sequence $a_0 a_1 \cdots a_n$ of arrows of the quiver $Q$ such that, for each $i$ with $0\leq i <n$, the target of the arrow $a_i$ coincides with the source of the arrow $a_{i+1}$. The path $a_0 a_1 \cdots a_n$ has length $n+1$. For each vertex $v$ of the quiver $Q$, we agree to associate with it a path of length $0$, called the {\em trivial path} or the {\em stationary path} {\em at the vertex} $v$, which is denoted by $e_v$. A {\em cycle} is a non-trivial path whose source and target coincide. A quiver is called {\em acyclic} if there are no cycles in the quiver. For a quiver $Q$, $\overline Q$ denotes the underlying graph of $Q$ that is obtained from $Q$ by forgetting the orientation of the arrows, and a quiver $Q$ is called {\em connected} if the graph $\overline Q$ is a connected graph. For a field $K$ and a quiver $Q$, the {\em path algebra $KQ$ of the quiver $Q$ over the field $K$} is the $K$-algebra whose underlying set is the $K$-vector space whose basis is the set of all the paths of the quiver $Q$ (which includes all the stationary paths) such that the product of two paths $a_0 a_1 \cdots a_{m-1}$ and $b_0 b_1 \cdots b_{n-1}$ is defined as follows: \[ a_0 a_1 \cdots a_{m-1} \cdot b_0 b_1 \cdots b_{n-1} = \left\{ \begin{array}{ll} a_0 a_1 \cdots a_{m-1} b_0 b_1 \cdots b_{n-1} & \text{if $t(a_{m-1}) = s(b_0)$} \\[1em] 0_{KQ} & \text{otherwise.} \end{array} \right. \] The product of basic elements is extended to arbitrary elements of $KQ$ by distributivity. We note that, for any field $K$ and a quiver $Q$ with $Q_0$ finite, $KQ$ also has an identity, which is of the form $\displaystyle \sum_{v\in Q_0} e_v$. However, for any quiver $Q$ with infinitely many vertices, $KQ$ does not have an identity. We recall that any path algebra $KQ$ of a quiver $Q$ over an algebraically closed field $K$ is hereditary even when a quiver $Q$ is not finite (see e.g. 
\cite[\S8.2]{GR}), that is, its global dimension is not larger than $1$. For a quiver $Q=(Q_0,Q_1)$ and a field $K$, a {\em $K$-linear representation} of the quiver $Q$ is a system $\cX=\seq{\cX_v,\cX_a:v\in Q_0, a\in Q_1}$ such that, for each vertex $v\in Q_0$, $\cX_v$ is a $K$-vector space and, for each arrow $a\in Q_1$, $\cX_a$ is a $K$-linear map from the $K$-vector space $\cX_{s(a)}$ into the $K$-vector space $\cX_{t(a)}$. A $K$-linear representation is called {\em finite dimensional} if each $\cX_v$, $v\in Q_0$, is a finite dimensional $K$-vector space. For two $K$-linear representations $\cX$ and $\cY$, a {\em morphism from $\cX$ into $\cY$} is a tuple $\varphi=\seq{\varphi_v:v\in Q_0}$ such that, for each $v\in Q_0$, $\varphi_v$ is a $K$-linear map from the $K$-vector space $\cX_v$ into the $K$-vector space $\cY_v$ and, for each arrow $a\in Q_1$, the following diagram commutes: \[ \xymatrix{ \cX_{s(a)} \ar[r]^-{\cX_a} \ar[d]_-{\varphi_{s(a)}} & \cX_{t(a)} \ar[d]^-{\varphi_{t(a)}} \\ \cY_{s(a)} \ar[r]^-{\cY_a} & \cY_{t(a)} } \text{\raisebox{-4em}{.}} \] ${\rm Rep}_K Q$ denotes the category of the $K$-linear representations of a quiver $Q$ over a field $K$, and ${\rm rep}_K Q$ denotes the category of the finite dimensional $K$-linear representations of $Q$ over $K$. In \cite{ASS}, these are defined for finite quivers; however, we adopt them for all quivers. There is a correspondence between $KQ$-modules and $K$-linear representations of $Q$ (see e.g. \cite[III 1.6. Theorem]{ASS}). For a $KQ$-module $M$, define the $K$-linear representation $F(M)$ of $Q$ such that, for each $v\in Q_0$, $ F(M)_v:= M e_v =\set{m e_v: m\in M} , $ and, for each $a\in Q_1$, $F(M)_a$ is the $K$-homomorphism from $F(M)_{s(a)}$ into $F(M)_{t(a)}$ such that, for each $x\in F(M)_{s(a)}$, $F(M)_a (x)=xa$.
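The multiplication rule on basis paths can be made concrete with a toy sketch; the encoding below is purely illustrative (the helper names \texttt{trivial}, \texttt{path} and \texttt{mul} are hypothetical, not from the literature), with paths composed left to right as above:

```python
def trivial(v):
    """Stationary path e_v at vertex v."""
    return ("e", v)

def path(*arrows):
    """A path a_0 a_1 ... a_n; each arrow is a triple (name, source, target)."""
    for a, b in zip(arrows, arrows[1:]):
        assert a[2] == b[1], "target of a_i must equal source of a_{i+1}"
    return ("p", arrows)

def source(p):
    return p[1] if p[0] == "e" else p[1][0][1]

def target(p):
    return p[1] if p[0] == "e" else p[1][-1][2]

def mul(p, q):
    """Product of basis paths: concatenation if t(p) = s(q), else 0 (None)."""
    if target(p) != source(q):
        return None                    # the zero of KQ
    if p[0] == "e":
        return q
    if q[0] == "e":
        return p
    return ("p", p[1] + q[1])

# Quiver 1 --a--> 2 --b--> 3
a = path(("a", 1, 2))
b = path(("b", 2, 3))
```

For this quiver, \texttt{mul(a, b)} returns the concatenated path $ab$, while \texttt{mul(b, a)} and \texttt{mul(a, trivial(1))} return the zero of $KQ$, in line with the case distinction in the displayed product formula.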
For a $K$-linear representation $\cX$ of $Q$, define the $KQ$-module $G(\cX)$ whose underlying set is the direct sum $\displaystyle \bigoplus_{v\in Q_0} \cX_v$ such that, for each element $m= \displaystyle \sum_{v\in Q_0} x_v$ of $\displaystyle \bigoplus_{v\in Q_0} \cX_v$ (in this notation, for all but finitely many $v\in Q_0$, $x_v$ is the zero of $\cX_v$), $w\in Q_0$ and $a\in Q_1$, $ m e_w := x_w $ and $ m a := \cX_a(m e_{s(a)}) , $ and the product by any arbitrary element of $KQ$ is extended by distributivity. We notice that, for every $K$-linear representation $\cX$ of $Q$, $F(G(\cX))=\cX$, and, for every $KQ$-module $M$, if $M= \displaystyle \sum_{m \in M} mKQ$ then $G(F(M))=M$. Therefore, if $Q$ is a finite connected quiver, then the category ${\rm Mod}KQ$ is equivalent to the category ${\rm Rep}_K Q$ \cite[III 1.6. Theorem]{ASS} and, for any finite acyclic quiver $Q$, ${\rm mod}KQ$ is equivalent to ${\rm rep}_K Q$ \cite[III 1.7. Theorem]{ASS}. \ The following theorem gives a sufficient condition for finitely generated projective modules over a finite dimensional algebra. For example, the following is mentioned without proof in the proof of \cite[Theorem 4.7]{Minamoto}. \begin{thm}[Folklore] \label{fdA} Suppose that $\Lambda$ is a finite dimensional algebra over an algebraically closed field $K$ with finite global dimension. Then for any finitely generated $\Lambda$-module $M$, if ${\rm Ext}^{\geq 1}_\Lambda(M,\Lambda)=0$ then $M$ is projective. \end{thm} \iffalse \begin{proof} Suppose that $M$ is a finitely generated $\Lambda$-module and ${\rm Ext}^{\geq 1}_\Lambda(M,\Lambda)=0$. The point of the proof is that {\em for any finitely generated projective $\Lambda$-module $P$, ${\rm Ext}^{\geq 1}_\Lambda(M, P)=0$.} To see this, let $P'$ be a complementary direct summand of $P$ such that $P \oplus P' $ is isomorphic to a direct sum of finitely many copies $\Lambda^{n}$ of $\Lambda$. 
Then for each integer $k\geq 1$, \[ {\rm Ext}^{k}_{\Lambda}(M,\Lambda^{(n)}) =\left({\rm Ext}^k_{\Lambda}(M,\Lambda)\right)^n =0 , \] and \[ {\rm Ext}^k_\Lambda(M,\Lambda^{n}) = {\rm Ext}^k_\Lambda(M,P) \oplus {\rm Ext}^k_\Lambda(M,P') . \] Therefore, ${\rm Ext}^k_\Lambda(M,P)=0$, and hence ${\rm Ext}^{\geq 1}_\Lambda(M,P)=0$. Since $\Lambda$ is a finite dimensional algebra, there exists a short exact sequence \[ \xymatrix@C=20pt{ 0 \ar[r] & K_0 \ar[r] & P(M) \ar[r] & M \ar[r] & 0 } \] where $P(M)$ is the projective cover of $M$ and $K_0:=\Omega M$ is the sygyzy of $M$ such that both $P(M)$ and $K_0$ are finitely generated $\Lambda$-module (see e.g. \cite[I.5.8 Theorem]{ASS}). It suffices to show that this sequence splits (because then $M$ is a direct summand of the projective module $P(M)$, which is also projective). To show this, we prove that ${\rm Ext}^1_\Lambda(M,K_0)=0$. Since $\Lambda$ is a finite dimensional algebra of finite global dimension and $K_0$ is finitely generated, there exists a (minimal) projective resolution \[ \xymatrix@C=8pt@R=3pt{ 0 \ar[rr] && P_d \ar[rr]^-{f_d} && P_{d-1} \ar[rr]^-{f_{d-1}} \ar[rd] && \cdots & \cdots & \cdots \ar[rr]^{f_1} \ar[rd] && P_0 \ar[rr]^{f_0} && K_0 \ar[rr && 0 \\ &&&&& K_{d-1} \ar[ru] \ar[rd] &&&& K_1 \ar[ru] \ar[rd] \\ &&&& 0 \ar[ru] && 0 && 0 \ar[ru] && 0 } \] such that all of $P_i$'s and $K_i$'s are finitely generated $\Lambda$-modules and all of $P_i$'s are projective (see e.g. \cite[I.5.10 Corollary]{ASS}). Applying ${\rm Hom}_\Lambda(M, \text{-})$ to the short exact sequence $0 \to P_d \to P_{d-1} \to K_{d-1} \to 0$ and thinking of the above observation, for each $k\geq 1$ we obtain the following exact sequence \[ 0={\rm Ext}^k_\Lambda(M, P_{d-1}) \to {\rm Ext}^k_\Lambda(M,K_{d-1}) \to {\rm Ext}^{k+1}_\Lambda(M,P_d)=0 , \] and hence ${\rm Ext}^{\geq 1}_\Lambda(M,K_{d-1})=0$. 
Applying ${\rm Hom}_\Lambda(M, \text{-})$ to the short exact sequence $0 \to K_{d-1} \to P_{d-2} \to K_{d-2} \to 0$ and thinking of the above observation again, we obtain ${\rm Ext}^{\geq 1}_\Lambda(M,K_{d-2})=0$. So by downward induction on $i<d-1$, we get ${\rm Ext}^{\geq 1}_\Lambda(M,K_{i})=0$, and at the end, we get ${\rm Ext}^{1}_\Lambda(M,K_{0})=0$, which finishes the proof. \end{proof} \fi \begin{proof} Suppose that $M$ is a finitely generated $\Lambda$-module and ${\rm Ext}^{\geq 1}_\Lambda(M,\Lambda)=0$. The point of the proof is to show that, {\em for any finitely generated projective $\Lambda$-module $P$, ${\rm Ext}^{\geq 1}_\Lambda(M, P)=0$.} To see this, let $P'$ be a complementary direct summand of $P$ such that $P \oplus P' $ is isomorphic to a direct sum $\Lambda^{n}$ of finitely many copies of $\Lambda$. Then for each integer $k\geq 1$, \[ {\rm Ext}^{k}_{\Lambda}(M,\Lambda^{n}) =\left({\rm Ext}^k_{\Lambda}(M,\Lambda)\right)^n =0 , \] and \[ {\rm Ext}^k_\Lambda(M,\Lambda^{n}) = {\rm Ext}^k_\Lambda(M,P) \oplus {\rm Ext}^k_\Lambda(M,P') . \] Therefore, ${\rm Ext}^k_\Lambda(M,P)=0$. Hence ${\rm Ext}^{\geq 1}_\Lambda(M,P)=0$. Let $d$ be the projective dimension ${\rm pd}M$ of $M$. Since $\Lambda$ has finite global dimension, $0\leq d < \infty$. Assume, towards a contradiction, that $d\geq 1$. Let the sequence \[ \xymatrix@C=8pt@R=3pt{ 0 \ar[rr] && P_d \ar[rr]^-{f_d} && P_{d-1} \ar[rr]^-{f_{d-1}} \ar[rd] && \cdots & \cdots & \cdots \ar[rr]^{f_1} && P_0 \ar[rr]^{f_0} && M \ar[rr] && 0 \\ &&&&& \Omega^{d-1}M \ar[rd] \\ &&&& && 0 } \] be a projective resolution of $M$ of length $d$ such that each $P_i$ is finitely generated, where $\Omega^{d-1}M$ is the $(d-1)$-th syzygy of $M$. Since ${\rm pd}M=d$, the projective dimension of the $\Lambda$-module $\Omega^{d-1}M$ is exactly $1$, in particular, $\Omega^{d-1}M$ is not projective.
Applying ${\rm Hom}_\Lambda(\text{-}, P_d)$ to the short exact sequence $0 \to P_d \to P_{d-1} \to \Omega^{d-1}M \to 0$, we obtain the following exact sequence \begin{multline*} \xymatrix@C=10pt{ 0 \ar[r] & {\rm Hom}_\Lambda(\Omega^{d-1}M, P_d) \ar[r] & } \\ \xymatrix@C=10pt{ {\rm Hom}_\Lambda(P_{d-1}, P_d) \ar[rrrr]^-{{\rm Hom}_\Lambda(f_d,P_d)} &&&& {\rm Hom}_\Lambda(P_{d}, P_d) \ar[r] & {\rm Ext}^1_\Lambda(\Omega^{d-1}M, P_d) }. \end{multline*} Since $P_d$ is a finitely generated projective $\Lambda$-module, \[ {\rm Ext}^1_\Lambda(\Omega^{d-1}M, P_d) = {\rm Ext}^{d}_\Lambda(M, P_d) =0 . \] Thus ${\rm Hom}_\Lambda(f_d,P_d)$ is surjective. So there exists a homomorphism $g_d$ from $P_{d-1}$ into $P_d$ such that the composition $g_d \circ f_d$ is the identity on $P_d$. Therefore the short exact sequence $0 \to P_d \to P_{d-1} \to \Omega^{d-1}M \to 0$ splits, and hence $\Omega^{d-1}M$ is a direct summand of the projective module $P_{d-1}$, which is a contradiction. \end{proof} \begin{defn} For a ring $R$ and a subclass $\fM$ of ${\rm Mod}R$, we define the assertion ${\sf P}_R(\fM)$ that means that, for any $M\in\fM$, if ${\rm Ext}^{\geq 1}_R(M,R)=0$ then $M$ is projective. \end{defn} \begin{remark} \label{Noether ring} For any Noetherian ring $\Lambda$ with finite global dimension and any finitely generated $\Lambda$-module $M$, there is a projective cover of $M$ which is finitely generated. So the above proof works for any Noetherian ring of finite global dimension. Therefore, for any Noetherian ring $\Lambda$ of finite global dimension, ${\sf P}_\Lambda({\rm mod}\Lambda)$. \end{remark} It is known that any path algebra $KQ$, even when the quiver $Q$ is not finite, is hereditary, that is, its global dimension is not larger than $1$ (see e.g. \cite[\S8.2]{GR}). So {\bfsc Theorem \ref{fdA}} implies the following. \begin{cor} \label{fqa} Suppose that $K$ is an algebraically closed field and $Q$ is a finite acyclic quiver. Then ${\sf P}_{KQ}({\rm mod}KQ)$. 
In particular, for any finitely generated $KQ$-module $M$, if ${\rm Ext}^{1}_{KQ}(M, KQ)=0$ then $M$ is projective. \end{cor} \begin{remark} A finite quiver of the form \[ \begin{xy} (10,20) *++={\circ} ="A", (17,17) *++={\circ}="B", (20,10) *++={\circ}="C", (17,3) *++={\circ}="D", (3,3) *++={\circ}="E", (0,10) *++={\circ}="F", (3,17) *++={\circ}="G", (10,0) ="H", \ar @/^2pt/ "A";"B" , \ar @/^2pt/ "B";"C" , \ar @/^2pt/ "C";"D" , \ar @/^2pt/ @{.} "D";"H" , \ar @/^2pt/ @{.} "H";"E" , \ar @/^2pt/ "E";"F" , \ar @/^2pt/ "F";"G" \ar @/^2pt/ "G";"A" \end{xy} \] is called a cyclic quiver. Since the path algebra of a cyclic quiver $Q$ over an algebraically closed field $K$ is Noetherian with identity, it follows from {\itsc Remark} {\rmsc \ref{Noether ring}} that ${\sf P}_{KQ}({\rm mod}KQ)$. \end{remark} \medskip We can extend the above corollary to {\em some} infinite quivers. To introduce such infinite quivers explicitly, we define the following notions. \begin{defn}\begin{enumerate} \item A quiver $P=(P_0,P_1)$ is called a {\em subquiver} of a quiver $Q=(Q_0,Q_1)$ if $P_0$ and $P_1$ are subsets of $Q_0$ and $Q_1$ respectively (hence, for any $a\in P_1$, $s(a)$ and $t(a)$ belong to $P_0$). \item For a quiver $Q$, a subquiver $P$ of $Q$, a field $K$ and a $K$-linear representation $\cX$ of $Q$, the $K$-linear representation $\cX\restriction P$ of the quiver $P$ is called the {\em restricted representation of $\cX$ by $P$} if for every $v\in P_0$ and $a\in P_1$, $(\cX\restriction P)_v=\cX_v$ and $(\cX\restriction P)_a=\cX_a$. 
\item For a quiver $Q=(Q_0,Q_1)$ and a subset $P_0'$ of $Q_0$, the {\em closure of $P_0'$ under $Q$} is the subquiver $\overline{P_0'}^Q= \left( \left( \overline{P_0'}^Q \right)_0 , \left( \overline{P_0'}^Q \right)_1 \right)$ of the quiver $Q$ such that \begin{multline*} \left( \overline{P_0'}^Q \right)_0 := \big\{ v\in Q_0: \text{ there exists a path from a member of $P_0'$} \\ \text{to the vertex $v$ through the quiver $Q$} \big\} \end{multline*} and \[ \left( \overline{P_0'}^Q \right)_1 :=\set{a\in Q_1: s(a)\in \left( \overline{P_0'}^Q \right)_0 }. \] A subquiver $P$ of a quiver $Q$ is called a {\em closed subquiver} of $Q$ if $P$ is a closure of some subset of $Q_0$ under $Q$. A subquiver $P$ of a quiver $Q$ is called a {\em finite closed subquiver} of $Q$ if $P$ is a closed subquiver of $Q$ and it is also a finite quiver. \end{enumerate} \end{defn} \begin{prop} \label{closed quiver} Suppose that $K$ is an algebraically closed field, $Q$ is a quiver, $\cX$ is a $K$-linear representation of $Q$ such that ${\rm Ext}^1_{KQ}( G(\cX) , KQ)=0$, and $P$ is a closed subquiver of $Q$. Then ${\rm Ext}^1_{KP}( G(\cX\restriction P) , KP)=0$. \end{prop} \begin{proof} Let $S$ be the functor from the category ${\rm Rep}_K Q$ into the category ${\rm Rep}_K P$ such that, for each $K$-linear representation $\cY$ of $Q$, $S(\cY):=\cY\restriction P$, and let $T$ be the functor from ${\rm Rep}_K P$ into ${\rm Rep}_K Q$ such that, for each $K$-linear representation $\mathcal Z$ of $P$, $T(\mathcal Z)$ is the $K$-linear representation of $Q$ such that $T(\mathcal Z)_v := \mathcal Z_v$ for every $v\in P_0$, $T(\mathcal Z)_v$ is the trivial $K$-vector space for every $v\in Q_0\setminus P_0$, $T(\mathcal Z)_a:= \mathcal Z_a$ for every $a\in P_1$, and $T(\mathcal Z)_a$ is the unique $K$-linear map from the trivial $K$-vector space $T(\mathcal Z)_{s(a)}$ into the $K$-vector space $T(\mathcal Z)_{t(a)}$ for every $a\in Q_1\setminus P_1$. We notice that both $S$ and $T$ are exact functors. 
Moreover, since $P$ is a closed subquiver of $Q$, $T$ is well-defined, that is, the above $T(\mathcal Z)$ is certainly a $K$-linear representation of $Q$. We also notice that the composition $S\circ T$ is the identity functor over ${\rm Rep}_K P$, and \[ KP = \bigoplus_{v\in P_0} e_v KP = \bigoplus_{v\in P_0} e_v KQ , \] which implies that $T(F(KP))$ is a direct summand of $F(KQ)$. Note that $G(T(F(KP)))$ is just $KP$ as a $KQ$-module, so it follows from our assumption that ${\rm Ext}^1_{KQ}( G(\cX) , KP )=0$. The latter means that any short exact sequence of $K$-linear representations of $Q$ of the form \[ \xymatrix@C=15pt{ 0 \ar[r] & T(F(KP)) \ar[r] & \cE \ar[r]^-{\varphi} & \cX \ar[r] & 0 } \] splits. We note that in such a short exact sequence, for any $v\in Q_0\setminus P_0$, $\cE_v = \cX_v$ and $\varphi_v$ is an automorphism of $\cX_v$ (because $T(F(KP))_v$ is the trivial $K$-vector space). So, for any short exact sequence of $K$-linear representations of $P$ of the form \[ L': \ \xymatrix@C=15pt{ 0 \ar[r] & F(KP) \ar[r] & \cE' \ar[r] & \cX\restriction P \ar[r] & 0 }, \] there exists a short exact sequence of $K$-linear representations of $Q$ of the form \[ L: \ \xymatrix@C=15pt{ 0 \ar[r] & T(F(KP)) \ar[r] & \cE \ar[r] & \cX \ar[r] & 0 } \] such that $S(L)=L'$. Therefore, it follows that any short exact sequence of $K$-linear representations of $P$ of the form \[ \xymatrix@C=15pt{ 0 \ar[r] & F(KP) \ar[r] & \cE \ar[r] & \cX\restriction P \ar[r] & 0 } \] splits, which is equivalent to saying that ${\rm Ext}^1_{KP}( G(\cX\restriction P) ,KP)=0$. \end{proof} \begin{prop} \label{proj rep} Suppose that $K$ is an algebraically closed field, $Q$ is a quiver and $v\in Q_0$ such that $ \overline{\{v\}}^Q$ is a finite acyclic quiver. Then $e_v KQ$ is a projective $KQ$-module. \end{prop} \begin{proof} Let $M$ be a $KQ$-module, and $\varphi$ a $KQ$-epimorphism from $M$ onto $e_v KQ$. We show that $\varphi$ splits.
Let $P$ denote the closure $ \overline{\{v\}}^Q$ of the set $\{v\}$ under $Q$. We notice that $e_v KP$ is a projective $KP$-module \cite[\S III.2]{ASS}, and its underlying set is equal to $e_v KQ$. Denote \[ M\cdot KP:= \set{m x: m\in M, x\in KP} , \] which is a $KP$-module, and a subset of $M$. Then $\varphi\restriction M\cdot KP$ is also a $KP$-epimorphism from $M \cdot KP$ onto $e_v KP$, so $\varphi\restriction M\cdot KP$ splits. Choose a $KP$-homomorphism $\psi$ from $e_v KP$ into $M \cdot KP$ such that $(\varphi\restriction M\cdot KP)\circ \psi$ is the identity. We also notice that $\psi$ can be considered as a $KQ$-homomorphism from $e_v KQ$ into $M$, and then $\varphi\circ \psi$ is also the identity, which finishes the proof. \end{proof} \begin{prop} \label{ext not vanish} Suppose that $K$ is an algebraically closed field, $Q$ is an acyclic quiver that contains the quiver \[ \xymatrix@C=15pt{ v_0 & v_1 \ar[l] & v_2 \ar[l] & \cdots \ar[l] & v_n \ar[l] & v_{n+1} \ar[l] & \cdots \ar[l] } \] as a subquiver, and $\cX$ is a finite dimensional $K$-linear representation of $Q$ such that, for each $n\in\omega$, $\cX_{v_n}\neq\{0_K\}$, and $\cX\restriction \overline{\set{v_n}}^Q$ is a direct sum of finitely many copies of the corresponding $K$-linear representation $F(e_{v_n}KQ)$ of $e_{v_n}KQ$. Then ${\rm Ext}^1_{KQ}(G(\cX),KQ)\neq 0$. \end{prop} \begin{proof} Let $P:= \overline{\set{v_n: n\in\omega}}^Q$. By {\bfsc Proposition \ref{closed quiver}}, it suffices to show that ${\rm Ext}^1_{KP}(G(\cX\restriction P),KP)\neq 0$. Since \[ KP = \left( \bigcup_{n\in \omega} e_{v_n} KP \right) \oplus \left( \bigcup_{v\in P_0 \setminus \set{v_n:n\in\omega}} e_v KP \right) \] and \[ {\rm Ext}^1_{KQ}(M,N_0 \oplus N_1)= {\rm Ext}^1_{KQ}(M,N_0) \oplus {\rm Ext}^1_{KQ}(M,N_1) \] in general, it suffices to show that ${\rm Ext}^1_{KP}(G(\cX\restriction P), \displaystyle \bigcup_{n\in\omega} e_{v_n} KP )\neq 0$.
For each $n\in\omega$, let \[ d_n := \max_{i\in\omega}\set{ \left| \set{p:\text{ $p$ is a path from $v_i$ to $v_n$ in $P$}} \right|} , \] when such a maximum exists as a finite number, or $d_n:=\infty$ otherwise. Notice that, for each $n\in\omega$, the dimension of the $K$-vector space $F(e_{v_n} KQ)_{v_0}$ is equal to the number of paths from $v_n$ to $v_0$. So, if infinitely many $d_n$ were larger than $1$, then the dimension of $\cX_{v_0}$ would have to be infinite. Thus, for all but finitely many $n\in\omega$, $d_n=1$. Therefore, without loss of generality we may assume that, for every $n\in\omega$, $d_n=1$. Hence there is a $d\in\omega\setminus\{0\}$ such that, for any $n\in\omega$, \[ \cX\restriction \overline{\set{v_n}}^P = \displaystyle \bigoplus_{d} F(e_{v_n}KP) , \] where the last term is the outer direct sum of $d$ many copies of $F(e_{v_n}KP)$. (Notice that $d$ is the dimension of $\cX_{v_n}$.) For each $n\in\omega$, let $a_n$ be the unique arrow from $v_{n+1}$ to $v_n$, and, for each $v\in P_0$, let \[ m(v):=\min\set{m\in\omega : \text{ there is a path from $v_m$ to $v$}} . \] Then, for any $v\in P_0$ and $n\geq m(v)$, any path from $v_n$ to $v$ is of the form $a_{n-1}\cdots a_{m(v)} p'$, for some path $p'$ from $v_{m(v)}$ to $v$ in $P$. Thus \[ \cX\restriction P = \displaystyle \bigoplus_{d} \cX^0 , \] where $\cX^0$ is the $K$-linear representation of $P$ such that: for each $v\in P_0$, $\cX^0_v$ is the $K$-vector space with basis the set of all paths from $v_{m(v)}$ to $v$; for each $n\in\omega$, $\cX^0_{a_n}$ is the $K$-linear map from $\cX^0_{v_{n+1}}$ onto $\cX^0_{v_n}$ such that $\cX^0_{a_n}(e_{v_{n+1}})=e_{v_n}$; and, for each $a\in P_1\setminus\set{a_n:n\in\omega}$, $\cX^0_a$ is the $K$-linear map from $\cX^0_{s(a)}$ onto $\cX^0_{t(a)}$ such that, for each path $p$ from $v_{m(s(a))}$ to $s(a)$, $\cX^0_{a}(p):= pa$.
Since \[ {\rm Ext}^1_{KP}(\bigoplus_{i\in I}M_i,KP)= \prod_{i\in I}{\rm Ext}^1_{KP}(M_i,KP) \] in general, it suffices to show that ${\rm Ext}^1_{KP}(G(\cX^0), \displaystyle \bigcup_{n\in\omega} e_{v_n} KP )\neq 0$. To see this, let $\pi$ be the canonical $KP$-epimorphism from $\displaystyle \bigcup_{n\in\omega} e_{v_n} KP$ onto $G(\cX^0)$ such that, for each $n\in \omega$, $\pi(e_{v_n}):= e_{v_n}$, and, for each $v\in P_0$ and each path $p$ in $P$ ending in $v$ of the form $ p= a_{n-1} \cdots a_{m(v)} p' $, $\pi(p):=p'$. Then ${\rm Ker}(\pi)$ is the $KP$-submodule of $\displaystyle \bigcup_{n\in\omega} e_{v_n} KP$ which is generated by the set \[ \set{e_{v_m}- a_n \cdots a_m : m,n\in\omega, m \leq n} . \] Applying ${\rm Hom}_{KP}(-,KP)$ to the exact sequence \[ \xymatrix@C=10pt{\displaystyle 0 \ar[r] & {\rm Ker}(\pi) \ar[rrr]^-{{\rm id}_{{\rm Ker}(\pi)}} &&& {\displaystyle \bigcup_{n\in\omega} e_{v_n} KP} \ar[r] & G(\cX^0) \ar[r] & 0 }, \] we obtain the exact sequence \begin{multline*} \xymatrix@C=10pt{ 0 \ar[r] & {\rm Hom}_{KP}( G(\cX^0) ,KP) \ar[r] & \ } \\ \xymatrix@C=10pt{{\rm Hom}_{KP}( {\displaystyle \bigcup_{n\in\omega} e_{v_n} KP} ,KP ) \ar[rrrrrr]^-{ {\rm Hom}_{KP}({\rm id}_{{\rm Ker}(\pi)},KP)} &&&&&& {\rm Hom}_{KP}({\rm Ker}(\pi),KP) }. \end{multline*} Then \[ {\rm Ext}^1_{KP}( G(\cX^0) , {\displaystyle \bigcup_{n\in\omega} e_{v_n} KP} ) = {\rm Hom}_{KP}({\rm Ker}(\pi),KP ) \Big/ {\rm Im}( {\rm Hom}_{KP}({\rm id}_{{\rm Ker}(\pi)},KP) ) . \] For each non-stationary path $b_{n}\cdots b_0$ in $P$, we fix the notation $b_{i}\cdots b_0$ by induction on $i\leq n$ in such a way that \[ b_0 \cdots b_0 :=b_0 \] and, for $i<n$, \[ b_{i+1}\cdots b_0 := b_{i+1} b_{i}\cdots b_0 . \] For each $m,n\in \omega$ with $m\leq n$, define \[ \varphi(e_{v_m}- a_n \cdots a_m) := \sum_{i=0}^{n-m} a_{m+i} \cdots a_m .
\] Then, for each $l,m,n\in\omega$ with $l<m\leq n$, \[ \begin{array}{r@{\ = \ }l} \varphi(e_{v_{m}}- a_{n} \cdots a_m) a_{m-1} \cdots a_l & \displaystyle \left( \sum_{i=0}^{n-m} a_{m+i} \cdots a_m \right) a_{m-1} \cdots a_l \\[2em] & \displaystyle \left( \sum_{i=0}^{n-l} a_{l+i} \cdots a_l \right) - \left( \sum_{i=0}^{m-1-l} a_{l+i} \cdots a_l \right) \\[2em] & \varphi(e_{v_l}- a_{n} \cdots a_l) - \varphi(e_{v_l}- a_{m-1} \cdots a_l) \end{array} \] and \[ \begin{array}{r@{\ = \ }l} (e_{v_{m}}- a_{n} \cdots a_m) a_{m-1} \cdots a_l & a_{m-1} \cdots a_l - a_{n} \cdots a_l \\[1em] & \left( e_{v_l} - a_{n} \cdots a_l \right) - \left( e_{v_l} - a_{m-1} \cdots a_l \right) . \end{array} \] Thus, we can extend $\varphi$ to a $KP$-homomorphism from ${\rm Ker}(\pi)$ into $KP$. To finish the proof, it is sufficient to show that $\varphi$ is not in ${\rm Im}( {\rm Hom}_{KP}({\rm id}_{{\rm Ker}(\pi)},KP) )$. Assume it is, and let $\psi$ be a $KP$-homomorphism on $\displaystyle \bigcup_{n\in\omega} e_{v_n} KP$ such that \[ \varphi = {\rm Hom}_{KP}({\rm id}_{{\rm Ker}(\pi)},KP) (\psi) = \psi\restriction {\rm Ker}(\pi) . \] For each $n\in\omega$, \[ \begin{array}{r@{\ = \ }l} \psi(e_{v_0}) - \psi(e_{v_{n+2}})a_{n+1} \cdots a_0 & \psi(e_{v_0}) - \psi( a_{n+1} \cdots a_0) \\[1em] & \psi(e_{v_0} - a_{n+1} \cdots a_0) \\[1em] & \displaystyle \sum_{i=0}^{n+1} a_i \cdots a_0. \end{array} \] Therefore, for {\em every} $n\in \omega$, $\psi(e_{v_0})$ belongs to the set \[ \left(\sum_{i=0}^{n} a_i \cdots a_0\right) + KP \left(a_{n+1} \cdots a_0\right) . \] However, this is a contradiction because $\psi(e_{v_0})$ has to belong to $KP$. \end{proof} \begin{thm} \label{fqa cor} Suppose that $K$ is an algebraically closed field, and $Q$ is a connected quiver such that, for any finite subset $P_0'$ of $Q_0$, the closure of $P_0'$ under $Q$ is a finite acyclic quiver. Then ${\sf P}_{KQ}({\rm rep}_K Q)$ holds.
\end{thm} For example, the following quivers satisfy the assumption of the theorem: \[ \xymatrix@C=15pt{ \circ & \circ \ar[l] & \circ \ar@<-0.7ex>[l] \ar@<0.7ex>[l] & \circ \ar[l] & \circ \ar@<-0.7ex>[l] \ar@<0.7ex>[l] & \circ \ar[l] & \circ \ar@<-0.7ex>[l] \ar@<0.7ex>[l] & \cdots \ar[l] }, \] \[ \xymatrix@C=15pt@R=0pt{ & \circ \ar@/_3pt/[dl] &&& \circ \ar@/_3pt/[dl] &&& \circ \ar@/_3pt/[dl] \\ \circ & \circ \ar[l] & \circ \ar[l] \ar@/_3pt/[ul] & \circ \ar[l] \ar@/^3pt/[dl] & \circ \ar[l] & \circ \ar[l] \ar@/_3pt/[ul] & \circ \ar[l] \ar@/^3pt/[dl] & \circ \ar[l] & \cdots \text{ ,} \ar[l] \ar@/_3pt/[ul] \\ && \circ \ar@/^3pt/[ul] &&& \circ \ar@/^3pt/[ul] &&& \ \ \ \ \ \ar@/^3pt/[ul] } \] \[ \xymatrix@C=15pt@R=5pt{ \circ \\ \circ & \circ \ar[lu] & & \cdots \\ \circ & \circ \ar[lu] & \circ \ar[lu] & \cdots \\ \circ & \circ \ar[l] \ar[lu] & \circ \ar[l] \ar[lu] & \circ \ar[l] \ar[lu] &\cdots \text{ ,}\ar[l] } \] \[ \xymatrix@C=15pt@R=0pt{ &&&&& \circ \ar[dl] & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] \\ && \circ \ar[ddl] & \circ \ar[l] & \circ \ar[l] \\ &&&&& \circ \ar[ul] & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] \\ \circ & \circ \ar[l] & \\ &&&&& \circ \ar[dl] & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] \\ && \circ \ar[uul] & \circ \ar[l] & \circ \ar[l] \\ &&&&& \circ \ar[ul] & \circ \ar[l] & \circ \ar[l] & \cdots \text{ .} \ar[l] } \] Note that any infinite quiver as in the assumption of the theorem contains at least one of the following quivers as a subquiver: \[ \xymatrix@C=15pt{ \circ & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] }\text{ , } \] \[ \xymatrix@C=15pt@R=15pt{ &&& \circ & &&& \\ \circ \ar[urrr] & \circ \ar[urr] & \circ \ar[ur] & \cdots & \circ \ar[ul] & \circ \ar[ull] & \cdots \cdots \text{ ,} } \] \[ \xymatrix@C=10pt@R=3pt{ &&&& \circ && && && \circ \\ &&& \circ \ar[ur] && \circ \ar[lu] && && \circ \ar[ur] && \circ \ar[ul] &&&& \cdots \cdots \\ && \circ \ar@{.>}[ur] && && \circ \ar@{.>}[ul] && \circ
\ar@{.>}[ur] &&& & \circ \ar@{.>}[ul] && &\cdots \cdots \\ &\circ \ar[ur] &&& &&& \circ \ar[ur] \ar[ul] &&& &&& \circ \ar[ur] \ar[ul] &&& \text{.} } \] \begin{proof} This theorem has already been proved as {\bfsc Corollary \ref{fqa}} when $Q$ is a finite quiver. Suppose that $Q$ is an infinite quiver, and $\cX$ is a finite dimensional $K$-linear representation of $Q$ such that ${\rm Ext}^1_{KQ}( G(\cX) ,KQ)=0$. We show that $\cX$ is projective. Since \[ {\rm Ext}^1_{KQ}(\bigoplus_{i\in I}M_i,KQ)= \prod_{i\in I}{\rm Ext}^1_{KQ}(M_i,KQ) \] in general, without loss of generality we may assume that $\cX$ is indecomposable. By {\bfsc Proposition \ref{closed quiver}}, for every finite closed subquiver $P$ of $Q$, ${\rm Ext}^1_{KP}( G(\cX\restriction P) ,KP)=0$. Therefore, by ${\sf P}_{KP}({\rm mod}KP)$, $\cX\restriction P$ is projective. It is known that any indecomposable projective $KP$-module is of the form $e_v KP$ for some $v\in P_0$ \cite[\S III.2]{ASS}. Since $P$ is a closed subquiver of $Q$, the underlying set of $e_v KP$ is equal to $e_v KQ$, and $F(e_v KQ)\restriction P = F(e_v KP)$. So, since $\cX$ is finite dimensional, $\cX\restriction P$ is isomorphic to a direct sum of finitely many $K$-linear representations of the form $F(e_v KQ)$ for $v\in P_0$. Therefore, since $\cX$ is indecomposable, exactly one of the following statements holds: \begin{enumerate} \item $\cX$ is isomorphic to $F(e_v KQ)$ for some $v\in Q_0$, or \item $Q$ contains the quiver \[ \xymatrix@C=15pt{ v_0 & v_1 \ar[l] & v_2 \ar[l] & \cdots \ar[l] & v_n \ar[l] & v_{n+1} \ar[l] & \cdots \ar[l] } \] as a subquiver such that, for each $n\in\omega$, $\cX_{v_n}\neq\{0_K\}$, and $\cX\restriction \overline{\set{v_n}}^Q$ is a direct sum of finitely many copies of $F(e_{v_n}KQ)$. \end{enumerate} By {\bfsc Proposition \ref{ext not vanish}} and the assumption that ${\rm Ext}^1_{KQ}( G(\cX) ,KQ)=0$, $\cX$ is isomorphic to $F(e_v KQ)$ for some $v\in Q_0$.
Therefore, $\cX$ is projective by {\bfsc Proposition \ref{proj rep}}. \end{proof} \subsection{Martin's Axiom} Martin's Axiom was introduced by Martin and Solovay \cite{MartinSolovay}. This axiom can be neither proved nor refuted from the axiomatic set theory $\ZFC$; in particular, it is consistent with $\ZFC$. Martin's Axiom can be considered as a generalization of the Baire category theorem (see e.g. \cite[Theorem III.4.7]{Kunen:new}). In this paper we use {\sf UP}\footnote{This notation follows \cite{EM} but it is not that common in set theory. }, which is a combinatorial consequence of Martin's Axiom. \begin{defn} \begin{enumerate} \item A {\em ladder system (on $\omega_1$)} is a sequence $\seq{C_\alpha:\alpha\in\omega_1\cap \Lim}$ such that \begin{itemize} \item for each $\alpha\in\omega_1\cap \Lim$, $C_\alpha$ is a cofinal subset of $\alpha$, that is, \[ \forall \xi \in\alpha\exists \eta\in C_\alpha \big( \xi\in \eta\big) , \] and \item $C_\alpha$ is of order type $\omega$, that is, the elements of $C_\alpha$ can be enumerated as $\set{\zeta^\alpha_n:n\in\omega}$ increasingly, that is, for every $m,n\in\omega$, if $m\in n$, then $\zeta^\alpha_m \in \zeta^\alpha_n$. \end{itemize} \item A {\em coloring} $\seq{d_\alpha:\alpha\in\omega_1\cap \Lim}$ of a ladder system $\seq{C_\alpha:\alpha\in\omega_1\cap \Lim}$ is a sequence of functions such that the domain of each $d_\alpha$ is $C_\alpha$. \item We say that a function $f$ with domain $\omega_1$ {\em uniformizes} a coloring $\langle d_\alpha:\alpha\in\omega_1\cap \Lim \rangle$ of a ladder system $\seq{C_\alpha:\alpha\in\omega_1\cap \Lim}$, $C_\alpha=\set{\zeta^\alpha_n:n\in\omega}$, if for every $\alpha\in\omega_1\cap \Lim$, the restriction $f\restriction C_\alpha$ of $f$ to $C_\alpha$ is equal to the function $d_\alpha$ at all but finitely many points, that is, there exists an $N\in\omega$ such that, for any $n\in\omega\setminus N$, $f(\zeta^\alpha_n)=d_\alpha(\zeta^\alpha_n)$.
\item The assertion {\sf UP} means that, for any sequence $\seq{X_\beta : \beta\in\omega_1}$ of countable sets and any coloring $\seq{d_\alpha:\alpha\in\omega_1\cap \Lim}$ of a ladder system $\seq{C_\alpha:\alpha\in\omega_1\cap \Lim}$, whenever $d_\alpha(\zeta^\alpha_n)$ belongs to $X_{\zeta^\alpha_n}$ for any $\alpha\in\omega_1\cap \Lim$ and $n\in\omega$, there exists a function with domain $\omega_1$ which uniformizes the coloring $\seq{d_\alpha:\alpha\in\omega_1\cap \Lim}$. \end{enumerate} \end{defn} For the proof of the following result, see e.g. \cite[VI.4.6 Proposition]{EM} or \cite[\S3]{Yorioka:MArec}. \begin{thm}[Devlin-Shelah \text{\cite[5.2.Theorem]{DevlinShelah: weak diamond}}] Martin's Axiom implies {\sf UP}. \end{thm} The assertion {\sf UP} was inspired by Shelah's proof that Martin's Axiom implies the existence of a non-free Whitehead group \cite[Theorem 3.5]{Shelah:W} (see also \cite{E Shelah}). \subsection{Trlifaj's construction} \label{ods} In this paper, our modules are built by modifying Trlifaj's construction. As every proof in \S\ref{inf gen pa} is fairly self-contained, the reader does not need to be familiar with this construction. Trlifaj's construction is a quotient module of the outer direct sum $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$ of some sequence $\seq{F^\xi:\xi\in\omega_1}$ of modules, defined in \cite[Definition 1.1]{Tr ES} and \cite[Notation 5.3]{HT}, which seems to be inspired by Shelah's solution of the Whitehead problem \cite{Shelah:W}. To fix our notation and understand our construction better, Trlifaj's construction is presented as follows. Let $R$ be a ring with identity and let \[ \xymatrix@C=30pt{ F_0 \ar[r]^-{\text{\normalsize $f_0$}} & F_1 \ar[r]^-{\text{\normalsize $f_1$}} & \cdots \ar[r]^-{\text{\normalsize $f_{n-1}$}} & F_n \ar[r]^-{\text{\normalsize $f_n$}} & F_{n+1} \ar[r]^-{\text{\normalsize $f_{n+1}$}} & \cdots } \] be a countable direct system of $R$-modules.
Let $\seq{C_\alpha:\alpha\in\omega_1\cap\Lim}$ be a ladder system such that \[ C_\alpha=\set{\zeta^\alpha_n:n\in\omega} \] is an increasing enumeration and assume that, for each $n\in\omega$, $\zeta^\alpha_n$ is of the form $\delta+n+1$ for some $\delta\in\alpha\cap (\{0\}\cup \Lim)$ (there is such a ladder system). Define $F^0 := \{0_R\}$; for each $\gamma\in\omega_1\setminus\Lim$ with $\gamma=\delta+n_\gamma+1$ for some $\delta\in\gamma\cap (\{0\}\cup \Lim)$ and $n_\gamma\in\omega$, define $F^\gamma:=F_{n_\gamma}$; and, for each $\delta\in\omega_1\cap \Lim$, define $F^\delta:= \displaystyle \bigoplus_{n\in\omega}F_n$. So, for each member $x$ of the outer direct sum $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$, $x$ forms a finite support function with domain included in $\omega_1$ and, for each $\alpha\in\omega_1$, $x(\alpha)$ belongs to $F^\alpha$. Hence, if $\delta\in\omega_1\cap \Lim$, then $x(\delta)$ belongs to the outer direct sum $F^\delta:= \displaystyle \bigoplus_{n\in\omega}F_n$, which also forms a finite support function with domain included in $\omega$. For each $\delta\in\omega_1\cap \Lim$, define the $R$-submodule \begin{multline*} G_\delta:= \left\langle \bigg\{x\in \bigoplus_{\xi\in\omega_1} F^\xi : \text{ for some $n\in\omega$, } \supp(x)=\set{\zeta^\delta_n, \delta}, \right. \\ \supp(x(\delta))=\set{n, n+1}, x(\zeta^\delta_n)=x(\delta)(n), \\ \left. \text{ and } x(\delta)(n+1)= f_n(x(\delta)(n)) \bigg\} \right\rangle_{R} \end{multline*} of the $R$-module $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$, and define \[ I_{\omega_1}:=\sum_{\delta\in\omega_1\cap\Lim}G_\delta , \] which is an $R$-submodule of the $R$-module $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$. Trlifaj's construction is the quotient $R$-module $\displaystyle \left.\bigoplus_{\xi\in\omega_1} F^\xi \right/ I_{\omega_1}$ of the $R$-module $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$ by the $R$-submodule $I_{\omega_1}$. 
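To illustrate the construction (this special case serves only as an illustration and is not taken from \cite{Tr ES} or \cite{HT}), consider the simplest countable direct system, in which every $F_n$ is $R$ itself and every $f_n$ is the identity map on $R$. For this illustration only, write $e^{\gamma}$ for the member of $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$ with $\supp(e^{\gamma})=\{\gamma\}$ and $e^{\gamma}(\gamma)=1_R$ for a successor ordinal $\gamma$, and $e^{\delta,n}$ for the member with $\supp(e^{\delta,n})=\{\delta\}$, $\supp(e^{\delta,n}(\delta))=\{n\}$ and $e^{\delta,n}(\delta)(n)=1_R$ for $\delta\in\omega_1\cap\Lim$ and $n\in\omega$. Then each $G_\delta$ is generated by the elements \[ e^{\zeta^\delta_n} + e^{\delta,n} + e^{\delta,n+1} \ \ \ (n\in\omega) , \] so, in the quotient by $I_{\omega_1}$, the coordinate at each successor ordinal $\zeta^\delta_n$ is identified with $-\,e^{\delta,n} - e^{\delta,n+1}$.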
For each $x\in \displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$, $x+ I_{\omega_1}$ denotes the equivalence class of $x$ in this quotient. Trlifaj applied this construction to a non-left-perfect ring \cite{Tr ES}, and Herbera-Trlifaj applied it to analyze some classes of modules called Kaplansky classes or deconstructible classes \cite{HT}. For further properties of this module, see \cite[\S5]{HT}. \section{Some infinitely generated modules of path algebras} \label{inf gen pa} Throughout this section, we fix a ladder system $\seq{C_\alpha:\alpha\in\omega_1\cap\Lim}$ such that \[ C_\alpha=\set{\zeta^\alpha_n:n\in\omega} \] is an increasing enumeration and, for each $n\in\omega$, $\zeta^\alpha_n$ is of the form $\delta+n+1$ for some $\delta\in\alpha\cap( \{0\}\cup \Lim )$. We note that, for any $\alpha,\beta\in\omega_1\cap \Lim$ and $m,n\in\omega$, if $\zeta^\alpha_m=\zeta^\beta_n$ then $m=n$. For $\gamma\in\omega_1\setminus(\{0\}\cup \Lim)$, let $n_\gamma\in\omega$ be the unique integer such that $\gamma=\delta+n_\gamma+1$ for some (unique) $\delta\in \omega_1\cap (\{0\}\cup \Lim)$. For each subsection of this section, we deal with some quiver $Q$ and build a non projective $KQ$-module $M_{\omega_1}$. For each quiver $Q$ in each subsection, we use the following notation. For each $v\in Q_0$, $e_{v}$ denotes the path of length $0$ from the vertex $v$ (to itself). For $\gamma\in\omega_1 \setminus \Lim$, $\alpha\in\omega_1\cap \Lim$ and $n\in\omega$, let $F^\gamma=F^{\alpha,n}:=KQ$, and let $F^\alpha$ be the outer direct sum $\displaystyle \bigoplus_{n\in\omega} F^{\alpha,n}$. For $\gamma\in\omega_1\setminus\Lim$ and $v\in Q_0$, let $e^{\gamma}_{v}$ be the member of the outer direct sum $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$ of $KQ$-modules such that \[ \supp(e^{\gamma}_{v})=\{\gamma\}, \ \ \ e^{\gamma}_{v}(\gamma)=e_{v} .
\] For $\alpha\in\omega_1\cap \Lim$, $n\in\omega$ and $v\in Q_0$, let $e^{\alpha,n}_{v} \in \displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$ be such that \[ \supp(e^{\alpha,n}_{v})=\{\alpha\}, \ \ \ \supp(e^{\alpha,n}_{v}(\alpha))=\{n\}, \ \ \ e^{\alpha,n}_{v}(\alpha)(n)= e_{v} . \] \begin{remark} \label{direct sum F xi} The set \[ \set{e^{\gamma}_{v} , e^{\alpha,n}_{v} : \gamma\in\omega_1\setminus\Lim, v\in Q_0, \alpha\in \omega_1 \cap\Lim, n\in\omega } \] is linearly independent with respect to $KQ$ in $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$. \end{remark} \subsection{On a quiver of $A_\infty$ type} \label{inf simple} Throughout this subsection, let $K$ be an algebraically closed field and $Q$ the following quiver \[ \xymatrix@C=30pt{ 0 & 1 \ar[l]_-{\text{\normalsize $a_0$}} & \cdots \ar[l]_-{\text{\normalsize $a_1$}} & n \ar[l]_-{\text{\normalsize $a_{n-1}$}} & n+1 \ar[l]_-{\text{\normalsize $a_n$}} & \cdots \ar[l]_-{\text{\normalsize $a_{n+1}$}} } \ , \] that is, the set $Q_0$ of vertices is the set of all nonnegative integers and the set $Q_1$ of arrows is defined by \[ \set{ \xymatrix@C=18pt{n & n+1 \ar[l]_-{\text{\normalsize $a_{n}$}} : n\in\omega} } . \] Since $Q_0$ is infinite, $KQ$ does not have an identity. To simplify the notation in this subsection, for each $\alpha\in\omega_1\cap\Lim$ and $n\in\omega$, we write \[ e^{\alpha}_{n} := e^{\alpha,0}_{n} . \] For each $\alpha\in\omega_1\cap \Lim$, define \[ G_\alpha := \left\langle \left\{ e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} : n\in\omega \right\}\right\rangle_{KQ} , \] \[ I_{\omega_1}:=\sum_{\xi\in\omega_1\cap\Lim}G_\xi \] and, for each $\xi\in\omega_1+1$, define the $KQ$-module $M_\xi$ by \[ \left\langle \set{e^\gamma_{n_\gamma} + I_{\omega_1} : \gamma\in \xi\setminus\Lim} \cup \set{ e^{\alpha}_{n} + I_{\omega_1} : \alpha\in \xi\cap\Lim, n\in\omega} \right\rangle_{KQ} , \] which is considered as a $KQ$-submodule of the quotient module $\displaystyle \left.
\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi \right/I_{\omega_1}. $ \begin{remark} \label{inf ctbl I basis} The set $\left\{ e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} : \alpha\in\omega_1\cap \Lim, n\in\omega \right\}$ is linearly independent with respect to $KQ$ in $\displaystyle \bigoplus_{\xi\in\omega_1} F^\xi$. \end{remark} In this paper, ${\displaystyle \bigoplus_{\omega_1} KQ}$ denotes the outer direct sum of $\omega_1$ many copies of $KQ$, which is considered as a $KQ$-module. \begin{claim} \label{inf ctbl M alpha 1} ${\rm Ext}^1_{KQ}( M_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} ) \neq 0$. \end{claim} This claim indicates that $M_{\omega_1}$ is not a projective $KQ$-module. \begin{proof} $F_{\omega_1}$ denotes the $KQ$-module \[ \left\langle \set{e^\gamma_{n_\gamma} : \gamma\in \omega_1\setminus\Lim} \cup \set{ e^{\alpha}_{n} : \alpha\in \omega_1\cap\Lim, n\in\omega} \right\rangle_{KQ} . \] Applying ${\rm Hom}_{KQ}( - , {\displaystyle \bigoplus_{\omega_1} KQ} )$ to the exact sequence \[ \xymatrix@C=10pt{\displaystyle 0 \ar[r] & I_{\omega_1} \ar[rr]^-{{\rm id}_{I_{\omega_1}}} && F_{\omega_1} \ar[r] & M_{\omega_1} \ar[r] & 0 }, \] we obtain the exact sequence \begin{multline*} \xymatrix@C=10pt{ 0 \ar[r] & {\rm Hom}_{KQ}(M_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} ) \ar[r] & \ } \\ \xymatrix@C=10pt{{\rm Hom}_{KQ}(F_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} ) \ar[rrrrrr]^-{ {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, \underset{\omega_1}{\bigoplus}KQ)} &&&&&& {\rm Hom}_{KQ}(I_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} ) }. \end{multline*} Then \[ {\rm Ext}^1_{KQ}(M_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} ) = \left. {\rm Hom}_{KQ}(I_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} ) \right/ {\rm Im}( {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, {\displaystyle \bigoplus_{\omega_1} KQ} ) ) . 
\] By {\itsc Remark} {\rmsc \ref{inf ctbl I basis}}, we can find a $KQ$-homomorphism $\varphi$ in ${\rm Hom}_{KQ}( I_{\omega_1} , {\displaystyle \bigoplus_{\omega_1} KQ} )$ such that for each $\alpha\in\omega_1\cap\Lim$ and $n\in\omega$, \[ \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n}) := e^\alpha_{n} . \] We show that $\varphi$ does not belong to ${\rm Im}({\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, {\displaystyle \bigoplus_{\omega_1} KQ} ))$. Assume that $\varphi\in {\rm Im}({\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, {\displaystyle \bigoplus_{\omega_1} KQ} ) )$, and let $\psi \in {\rm Hom}_{KQ}( F_{\omega_1}, {\displaystyle \bigoplus_{\omega_1} KQ} )$ be such that \[ {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, {\displaystyle \bigoplus_{\omega_1} KQ} ) (\psi)=\psi\circ {\rm id}_{I_{\omega_1}} = \psi\restriction I_{\omega_1}=\varphi . \] We note that for each $\gamma\in\omega_1$ and $n\in\omega$, $\supp(\psi(e^{\gamma}_{n}))$ is a finite subset of $\omega_1$. So we can take an $\alpha\in\omega_1\cap \Lim$ such that, for every $\gamma\in\alpha$ and $n\in\omega$, $\supp(\psi(e^{\gamma}_{n}))$ is a finite subset of $\alpha$\footnote{ This can be done by e.g. \cite[Exercise III.6.20]{Kunen:new}}. For each $n\in\omega$, \[ \psi(e^{\zeta^\alpha_n}_{n}) - \psi(e^{\alpha}_{n}) + \psi(e^{\alpha}_{n+1}) a_{n} = \psi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n}) = e^\alpha_n .
\] Therefore, by induction on $n\in\omega$, \[ \begin{array}{r@{\ }c@{\ }l} \psi(e^\alpha_0) & = & \psi(e^{\zeta^\alpha_0}_{0}) - e^{\alpha}_{0} + \psi(e^{\alpha}_{1}) a_{0} \\[1em] \rule{25pt}{0pt} & = & \psi(e^{\zeta^\alpha_0}_{0}) - e^{\alpha}_{0} + \left( \psi(e^{\zeta^\alpha_1}_1) - e^\alpha_1 + \psi(e^\alpha_2) a_1 \right) a_{0} \\[1em] \rule{25pt}{0pt} & = & \psi(e^{\zeta^\alpha_0}_{0}) - e^{\alpha}_{0} + \psi(e^{\zeta^\alpha_1}_1) a_{0} - e^\alpha_1 a_{0} + \psi(e^\alpha_2) a_1 a_{0} \\[1em] \rule{25pt}{0pt} & = & \cdots \\ \rule{25pt}{0pt} & = & \displaystyle \psi(e^{\zeta^\alpha_0}_0) - e^\alpha_0 + \sum_{i=1}^{n} \psi(e^{\zeta^\alpha_i}_i) a_{i-1}\cdots a_0 - \sum_{i=1}^{n}e^\alpha_i a_{i-1}\cdots a_0 \\ \rule{25pt}{0pt} & & + \psi(e^{\alpha}_{n+1}) a_{n} \cdots a_0. \end{array} \] Hence, for {\em every} $n\in\omega$, since each $\supp(\psi(e^{\zeta^\alpha_i}_i))$ does not contain $\alpha$ as a member, \[ \psi(e^\alpha_0 )(\alpha) \not\in K Q_{\leq n} , \] where $KQ_{\leq n}$ is the subset of $KQ$ generated by all paths of length $\leq n$. This is a contradiction. \end{proof} The following is similar to \cite[XII 2.2 Theorem, XIII 0.2 Proposition]{EM}. \begin{thm} \label{inf simple thm} Suppose that $K$ is a countable algebraically closed field. Then {\sf UP} implies that ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)=0$. In particular, ${\sf P}_{KQ}({\rm Mod}KQ)$ fails. \end{thm} \begin{proof} Applying ${\rm Hom}_{KQ}( - , KQ)$ to the exact sequence \[ \xymatrix@C=10pt{\displaystyle 0 \ar[r] & I_{\omega_1} \ar[rr]^-{{\rm id}_{I_{\omega_1}}} && F_{\omega_1} \ar[r] & M_{\omega_1} \ar[r] & 0 }, \] we obtain the exact sequence \begin{multline*} \xymatrix@C=10pt{ 0 \ar[r] & {\rm Hom}_{KQ}(M_{\omega_1},KQ) \ar[r] & {\rm Hom}_{KQ}(F_{\omega_1}, KQ ) \ar[rrrrr]^-{ {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, KQ)} &&&&& {\rm Hom}_{KQ}(I_{\omega_1}, KQ) }. \end{multline*} Then \[ {\rm Ext}^1_{KQ}(M_{\omega_1}, KQ ) = \left. 
{\rm Hom}_{KQ}(I_{\omega_1}, KQ ) \right/ {\rm Im}( {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, KQ ) ) . \] Let $\varphi\in {\rm Hom}_{KQ}(I_{\omega_1},KQ) $. We show that $\varphi$ belongs to ${\rm Im}( {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, KQ ) )$. For each $\alpha\in\omega_1\cap\Lim$ and $n\in\omega$, define \[ d_\alpha(\zeta^\alpha_n) := \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} ) . \] We notice that, for each $n\in\omega$, \[ \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} ) \, e_{n} = \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} ) , \] and, for any $m\in Q_0\setminus \{n\}$, \[ \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} ) \, e_{m} = \varphi(0_{\underset{\xi\in\omega_1}{\bigoplus} F^\xi}) = 0_{KQ}. \] Thus, for each $n\in\omega$, $d_\alpha(\zeta^\alpha_n)$ belongs to the countable set \[ \sum_{p \text{ : path in $Q$ ending in $n$}} K p . \] Therefore, by {\sf UP}, we can find a uniformization $f$ of the ladder system coloring $\langle d_\alpha:\alpha\in\omega_1\cap \Lim \rangle$, that is, for each $\alpha\in\omega_1\cap \Lim$, there is an $N_\alpha\in\omega$ such that, for every $n\geq N_\alpha$, $f(\zeta^\alpha_n)=d_\alpha(\zeta^\alpha_n)$. For each $\alpha\in\omega_1\cap\Lim$ and $n\in\omega$, define \begin{itemize} \item $\psi(e^{\zeta^\alpha_n}_{n}) := f(\zeta^\alpha_n)$, \item $\psi(e^{\alpha}_{n}) := 0_{KQ}$ when $n\geq N_\alpha$, and \item by downward induction on $n<N_\alpha$, define \[ \psi(e^{\alpha}_{n}) := \psi(e^{\zeta^\alpha_n}_{n}) + \psi(e^{\alpha}_{n+1}) a_{n} - \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n} ) . \] \end{itemize} By {\itsc Remark} {\rmsc \ref{direct sum F xi}}, $\psi$ can be extended to a $KQ$-homomorphism from $F_{\omega_1}$ into $KQ$.
Therefore \[ \psi\restriction I_{\omega_1} = \psi\circ {\rm id}_{I_{\omega_1}} = {\rm Hom}_{KQ}({\rm id}_{I_{\omega_1}}, KQ )(\psi) = \varphi , \] which finishes the proof. \end{proof} \begin{remark} By a similar argument to the one in the previous theorem, it can be proved that {\em if $K$ is a countable algebraically closed field and {\sf UP} holds, then $ {\rm Ext}^1_{KQ}(M_{\omega_1}, \displaystyle \bigoplus_{\omega} KQ)=0$. } \end{remark} \begin{remark} \label{inf simple non v ext} By an argument similar to the one in \cite[4.3 Lemma]{E Shelah}, we can show that {\em if $K$ is a countable algebraically closed field, then $\diamondsuit$\footnote{$\diamondsuit$ is a set-theoretic axiom consistent with $\ZFC$, see e.g. \cite{Kunen:new}.} implies ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)\neq0$.} The main ingredient to prove this is the following fact. \begin{claim} Suppose that ${\rm Ext}^1_{KQ}(M_{\alpha+1}/M_\alpha,KQ)\neq 0$, and let \[ \xymatrix@C=15pt{\displaystyle 0 \ar[r] & KQ \ar[r] & C_\alpha \ar[r]^-{\pi} & M_{\alpha} \ar[r] & 0 } \] be a short exact sequence that splits, that is, there exists a homomorphism $\rho$ from $M_\alpha$ into $C_\alpha$ such that $\pi\circ \rho= {\rm id}_{M_\alpha}$. Then there exists a short exact sequence \[ \xymatrix@C=15pt{\displaystyle 0 \ar[r] & KQ \ar[r] & C_{\alpha+1} \ar[r]^-{\pi'} & M_{\alpha+1} \ar[r] & 0 } \] such that \[ \pi'\restriction C_\alpha=\pi \] and there is no homomorphism $\rho'$ from $M_{\alpha+1}$ into $C_{\alpha+1}$ such that \[ \pi'\circ\rho'={\rm id}_{M_{\alpha+1}} \ \ \ \text{ and } \ \ \ \rho'\restriction M_\alpha=\rho. \] \end{claim} \noindent Since $KQ$ is countable and {\bfsc Claim \ref{inf ctbl M alpha 1}} holds, an argument similar to the one in \cite[6.3 Theorem]{E Shelah} works well to show that ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)\neq 0$.
Moreover, by an argument similar to the one in \cite{HS}, we can show that, {\em if $K$ is a countable algebraically closed field and there is a set $\set{S_\alpha:\alpha\in\omega_1}$ of pairwise disjoint stationary subsets of $\omega_1$ such that $\diamondsuit_{S_\alpha}$ holds for each $\alpha\in\omega_1$, then the cardinality of ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)$ is greater than $\aleph_1$.} \end{remark} \subsection{On a circular quiver} In this subsection, let $K$ be an algebraically closed field and $Q$ the following quiver. \[ \scalebox{0.75}{ \begin{xy} (20,40) *++={0} ="A", (34,34) *++={1}="B", (40,20) *++={2}="C", (34,6) *++={3}="D", (6,6) *+-={k-2}="E", (0,20) *+-={k-1}="F", (6,34) *++={k}="G", (20,0) ="H", \ar^{\text{\normalsize $a_0$}} @/^4pt/ "A";"B" , \ar^{\text{\normalsize $a_1$}} @/^4pt/ "B";"C" , \ar^{\text{\normalsize $a_2$}} @/^4pt/ "C";"D" , \ar @/^5pt/ @{.} "D";"H" , \ar @/^5pt/ @{.} "H";"E" , \ar^{\text{\normalsize $a_{k-2}$}} @/^4pt/ "E";"F" , \ar^{\text{\normalsize $a_{k-1}$}} @/^4pt/ "F";"G" \ar^{\text{\normalsize $a_k$}} @/^4pt/ "G";"A" \end{xy} } \] Then $a_0 a_1 \cdots a_k$ is a path in $Q$ whose source and target are both the vertex $0$. We write \[ \left(a_0 a_1 \cdots a_k\right)^0 = e_0 , \] and, for each $n\in\omega$, define the path \[ \left(a_0 a_1 \cdots a_k\right)^{n+1} = \left(a_0 a_1 \cdots a_k\right)^n a_0 a_1 \cdots a_k . \] Recall that $\displaystyle \sum_{v\in Q_0}e_v$ is the identity of $KQ$.
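For instance, in the smallest case $k=1$, the quiver consists of the two vertices $0,1$ and the two arrows $a_0\colon 0\to 1$ and $a_1\colon 1\to 0$, and the above notation gives \[ \left(a_0 a_1\right)^2 = a_0 a_1 a_0 a_1 , \] the cycle of length $4$ through the vertex $0$.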
For each $\alpha\in\omega_1\cap\Lim$, define \[ G_\alpha := \left\langle \left\{ e^{\zeta^\alpha_n}_{0} - e^{\alpha,n}_{0} + e^{\alpha,n+1}_{0} a_0 a_1 \cdots a_k : n\in\omega \right\}\right\rangle_{KQ} , \] \[ I_{\omega_1}:=\sum_{\xi\in\omega_1\cap\Lim}G_\xi , \] and, for each $\xi\in\omega_1+1$, define the $KQ$-module $M_\xi$ by \[ \left\langle \set{e^{\gamma}_{0} + I_{\omega_1} : \gamma\in \xi\setminus\Lim} \cup \set{ e^{\alpha,n}_{0} + I_{\omega_1} : \alpha\in \xi\cap\Lim, n\in\omega} \right\rangle_{KQ} , \] which is considered as a $KQ$-submodule of the quotient module $\displaystyle \left. \displaystyle \bigoplus_{\xi\in\omega_1} F^\xi \right/I_{\omega_1}. $ \begin{claim} \label{2 M alpha 1} ${\rm Ext}^1_{KQ}( M_{\omega_1},\displaystyle \bigoplus_{\omega_1} KQ) \neq 0$. \end{claim} It follows from this claim that $M_{\omega_1}$ is not projective. \begin{proof} This can be proved in a way similar to the proof of {\bfsc Claim \ref{inf ctbl M alpha 1}}. To see this, it suffices to replace the formula \[ \varphi( e^{\zeta^\alpha_n}_{n} - e^{\alpha}_{n} + e^{\alpha}_{n+1} a_{n}) := e^\alpha_n \] by the formula \[ \varphi( e^{\zeta^\alpha_n}_{0} - e^{\alpha,n}_{0} + e^{\alpha,n+1}_{0} a_0 a_1 \cdots a_k) := e^{\alpha,n}_{0} \] in the proof of {\bfsc Claim \ref{inf ctbl M alpha 1}}. \end{proof} Moreover, by a proof similar to that of {\bfsc Theorem \ref{inf simple thm}}, the following theorem can be proved. \begin{thm} \label{2 thm} Suppose that $K$ is a countable algebraically closed field. Then {\sf UP} implies that ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)=0$. In particular, ${\sf P}_{KQ}({\rm Mod}KQ)$ fails. \end{thm} \begin{remark} \label{2 non v ext} As in {\itsc Remark} {\rmsc \ref{inf simple non v ext}}, if $K$ is a countable algebraically closed field and $\diamondsuit$ holds, then ${\rm Ext}^1_{KQ}(M_{\omega_1},KQ)\neq 0$.
\end{remark} \subsection{Generalizations} \begin{thm} \label{cor final} Suppose that $K$ is a countable algebraically closed field and $Q'$ is a quiver that contains a subquiver $Q$ of one of the following types \[ \begin{xy} (10,20) *++={v} ="A", (17,17) *++={\circ}="B", (20,10) *++={\circ}="C", (17,3) *++={\circ}="D", (3,3) *++={\circ}="E", (0,10) *++={\circ}="F", (3,17) *++={\circ}="G", (10,0) ="H", \ar @/^2pt/ "A";"B" , \ar @/^2pt/ "B";"C" , \ar @/^2pt/ "C";"D" , \ar @/^2pt/ @{.} "D";"H" , \ar @/^2pt/ @{.} "H";"E" , \ar @/^2pt/ "E";"F" , \ar @/^2pt/ "F";"G" \ar @/^2pt/ "G";"A" \end{xy} \text{\raisebox{3em}{ \ , \ \ \ $\xymatrix@C=15pt{ v & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] & \circ \ar[l] & \circ \ar[l] & \cdots \ar[l] }$}} \ \ \ \] \noindent in such a way that the set of all paths in $Q'$ ending in $v$ is countable. Then {\sf UP} implies the failure of ${\sf P}_{KQ'}({\rm Mod}KQ')$. \end{thm} \begin{proof} Let $M_{\omega_1}$ be one of the $KQ$-modules constructed before. Then, $M_{\omega_1}$ can be considered as a $KQ'$-module and, by a similar argument as before, it can be proved that ${\rm Ext}^1_{KQ'}(M_{\omega_1}, \displaystyle \bigoplus_{\omega_1}KQ')\neq 0$, and that {\sf UP} implies ${\rm Ext}^1_{KQ'}(M_{\omega_1},KQ') =0$. \end{proof} \noindent {\bfsc Acknowledgement}. \ The authors thank Hiroyuki Minamoto, Izuru Mori and Kenta Ueyama for useful comments about this research. Especially, they provided us advice and information about {\bfsc Theorem \ref{fdA}}, {\itsc Remark} {\rmsc \ref{Noether ring}} and {\bfsc Proposition \ref{closed quiver}}.
\section{Introduction} The spectrum of triply-ionized tin (Sn$^{3+}$) was first observed by Rao nearly a century ago \cite{Rao1926}. Accurate estimates of the transition lines of Sn IV are of astrophysical interest since their detection in the stellar medium \cite{O'Toole2004, Chayer2005}. Using the Hubble Space Telescope Imaging Spectrograph (STIS), \cite{O'Toole2004} observed the presence of Sn IV in several sdB and sdOB stars (HZ44, HD4539, HD171858, HD185510, HD205805, FFAqr, Feige48, Feige65, CPD-64$^\circ$481, PG0342+026, PG1032+406, PG1104+243, PG1219+534, HD149382, Feige66, Ton S 227, PHL932, UVO1758+36) in spectroscopic analyses at temperatures ranging from 22000 K to 40000 K. \cite{Chayer2005} detected the presence of Sn IV in the atmospheres of cool white dwarfs and observed five spectral lines of Sn IV in the ultraviolet (UV) spectra. \cite{Proffit2001} determined the tin abundance of the early B-main-sequence star AV304 in the Small Magellanic Cloud (SMC) by using archival STIS/HST G140M spectral data to measure the 1314.54 $\mathring{\text{A}}$ resonance line of Sn IV. Chemical evolution models predict that about two thirds of the tin abundance in the solar system is produced by the s-process, with most of the remainder due to the r-process \cite{Arlandini1999, Sneden1996}. \cite{Sofia1999} stated that tin is the first element to show a well-determined interstellar gas-phase abundance that is formed primarily by the main s-process. Both Sn IV and Sn II can be observed in different scenarios of stellar evolution. The detection of weakly ionized tin in spectra from the Goddard High Resolution Spectrograph (GHRS) \cite{Hobbs1993} has motivated precise experimental and/or theoretical work on high-precision spectroscopy of these ions. The EUV photoabsorption spectra recorded using the dual laser-produced plasma (DLP) technique have promising applications in lithography \cite{Lysaght2005}. 
\cite{Dunne1992} predicted the strengths of EUV transition lines of Sn IV using a similar procedure. The technological advancement of observational instruments now even provides the resolution to identify hyperfine lines, which are signatures of relativistic effects in atoms or ions. There have been a few experimental works measuring the lifetimes of some energy states of Sn IV \cite{ Andersen1972, Kernahan1985, Pinnington1987}. Theoretical calculations of the energies of some low-lying states of this ion are also available in the literature \cite{Migdalek2000, Leszek2009, Ryabtsev2006, Ryabtsev2007}. To the best of our knowledge, there are no theoretical calculations of atomic properties for Sn IV that account for correlations exhaustively. Therefore, highly correlated relativistic calculations of the transition parameters of Sn IV are required for the determination of its abundance in different stellar and interstellar media. In this paper, we use a non-linear relativistic coupled-cluster (RCC) theory \cite{ Lindgren1987, Bishop1991, Dixit2007a, Dutta2011, Dutta2012, Dutta2016, Bhowmik2017a, Bhowmik2017b, Bhowmik2018} to calculate electromagnetically allowed and forbidden transition strengths and the lifetimes of low-lying energy states of Sn IV. The theory efficiently includes almost all kinds of many-body correlations, including core correlation, core polarization and pair correlation \cite{ Dutta2016, Das2018}. In Section 2, we briefly describe the RCC method and provide the expressions for the spectroscopic parameters used in this paper. Section 3 compares our calculated results with the experimental and theoretical values extracted from the literature, and provides precise spectroscopic data on the transition lines of Sn IV, relevant for astrophysical or astronomical observations. \section{theory} The RCC theory is a well-established many-body theory that accounts for correlation exhaustively. 
The correlated wave function for an atom or ion of $N$ electrons is generated from the single-valence Dirac-Fock reference state $|\Phi_v\rangle$. A single-valence state has one electron in the valence orbital $v$ on top of the closed-shell Slater-determinant state $|\Phi\rangle$. All the occupied (core) and unoccupied orbitals of the closed shell are generated at the Dirac-Fock (DF) level under the potential of $ (N-1) $ electrons, following Koopmans' theorem \cite{Szabo1996}. In the RCC formalism, the correlated wave function corresponding to the valence orbital $v$ is related to its reference state $|\Phi_v\rangle$ by \begin{equation} |\Psi_v\rangle = e^T(1+S_v)|\Phi_v\rangle, \hspace {0.2cm} \text{where} \hspace {0.2cm} |\Phi_v\rangle = a^{\dagger}_v|\Phi\rangle. \end{equation} Here, the operator $T$ accounts for excitations from the core orbitals, while $ S_v $ excites at least one electron from the valence orbital, which gives rise to valence and core-valence excited configurations \cite{Lindgren1987}. The amplitudes of these excitation operators are obtained by solving the matrix equations of the closed-shell and open-shell systems, as discussed in \cite{Dutta2016}. In the present work, the correlations are restricted to single, double and triple excitations in the calculation of the correlated wave functions. Quadruple excitations, whose contribution is much less than 1\% \cite{Lindgren1987, Lindgren1985, Haque1984, Dutta2012, Roy2014}, are neglected in our calculations because of the enormous computational effort and resources they would require. 
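In the singles-and-doubles approximation that forms the backbone of the present calculations, these cluster operators take the standard second-quantized form (a standard expansion reproduced here for clarity; the notation follows \cite{Lindgren1987}): \begin{eqnarray} T &=& T_1 + T_2 = \sum_{a,p} t_{a}^{p}\, a_{p}^{\dagger}a_{a} + \frac{1}{4}\sum_{a,b,p,q} t_{ab}^{pq}\, a_{p}^{\dagger}a_{q}^{\dagger}a_{b}a_{a} , \nonumber \\ S_v &=& S_{1v} + S_{2v} = \sum_{p\neq v} s_{v}^{p}\, a_{p}^{\dagger}a_{v} + \frac{1}{2}\sum_{a,p,q} s_{va}^{pq}\, a_{p}^{\dagger}a_{q}^{\dagger}a_{a}a_{v} , \nonumber \end{eqnarray} where $a,b$ run over occupied (core) orbitals, $p,q$ run over unoccupied (virtual) orbitals, and $t$ and $s$ denote the closed-shell and open-shell cluster amplitudes, respectively.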
The transition matrix element of an operator $ {\hat{O}} $ can be represented as follows \begin{center} \begin{eqnarray} &&O_{ k\rightarrow i} =\frac{\langle{\Psi_k|\hat{O}|\Psi_i}\rangle}{\sqrt{\langle{\Psi_k|\Psi_k}\rangle{\langle{\Psi_i|\Psi_i}\rangle}}}, \nonumber \\ =&&\hspace{-0.7cm}\frac{\langle{\Phi_k|(1+S^{\dagger}_k)e^{T^{\dagger}}{\hat{O}}e^T(1+S_i)|\Phi_i}\rangle}{\sqrt {P_{kk} P_{ii}}} \nonumber \\ =&&\hspace{-0.7cm}\frac{\big[\langle\Phi_k|\bar{O} + (\bar{O}S_{1i} + S_{1k}^{\dagger}\bar{O}) + (\bar{O}S_{2i} + S_{2k}^{\dagger}\bar{O}) + ...|\Phi_i\rangle\big]}{N}, \end{eqnarray} \end{center} where the normalization factor $N=\sqrt{P_{kk}P_{ii}}$ contains the overlap matrix elements $P_{ii}=\langle{\Phi_i|(1+S^{\dagger}_i)e^{T^{\dagger}}e^T(1+S_i)|\Phi_i}\rangle$, and the core-correlated operator is defined as $\bar{O}$ =$ e^{T^\dagger}\hat{O}e^T$. The subscripts '1' and '2' of the open-shell cluster operators represent the single and double excitations, respectively. The matrix elements $(\bar{O}S_{1i} + S_{1k}^{\dagger}\bar{O})$ and $(\bar{O}S_{2i} + S_{2k}^{\dagger}\bar{O})$ yield the pair correlation and core polarization, respectively. Detailed descriptions of the electric dipole ($E1$), electric quadrupole ($E2$) and magnetic dipole ($M1$) matrix elements in terms of orbitals are available in Ref.~\cite{Dutta2016}. The transition probabilities, $A_{k\rightarrow i}$, of the above electromagnetic multipole channels between the atomic states $|\Psi_k\rangle$ and $|\Psi_i\rangle$ are given by \begin{align} g_k\lambda^3 A^{E_1}_{k\rightarrow i} &=2.0261 \times 10^{18}{|\langle{i|\hat{O}^{E_1}|k\rangle}|}^2 ,\\ g_k\lambda^5A^{E_2}_{k\rightarrow i} &= 1.1199 \times 10^{18}{|\langle{ i|\hat{O}^{E_2}|k\rangle}|}^2 ,\\ g_k\lambda^3A^{M_1}_{k\rightarrow i} &= 2.697 \times 10^{13}{|\langle{i|\hat{O}^{M_1}|k}\rangle|}^2 . \end{align} Here $ \lambda $ is the wavelength of the transition in $ \mathring{\text{A}} $ and the degeneracy factor of the $k$-th state is $g_{k} = 2J_k + 1$ . 
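As a worked illustration of the $E1$ rate formula, consider the $5s_{1/2}\rightarrow 5p_{3/2}$ transition with the RCC length-gauge matrix element 2.2253 a.u. and the NIST wavelength 1314.5 $\mathring{\text{A}}$ quoted later in Table 1 (the numerical evaluation below is our own check): \begin{equation*} A^{E_1}_{5p_{3/2}\rightarrow 5s_{1/2}} = \frac{2.0261\times 10^{18}\times (2.2253)^2}{4\times (1314.5)^3}~\mathrm{s}^{-1} \approx 1.10\times 10^{9}~\mathrm{s}^{-1}, \end{equation*} consistent with the sub-nanosecond lifetime of the $5p_{3/2}$ state listed later in Table 6.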
The absorption oscillator strengths due to $E1$ transitions are given by \cite{Kelleher2008} \begin{equation} g_if_{ik} = 1.4991938\times 10^{-16}g_k\lambda^2 A^{E_1}_{k\rightarrow i}. \end{equation} The lifetime of a state $k$ is calculated by considering all transition probabilities to the lower-energy states $i$ and is given by \begin{equation} \tau_k = \frac{1}{\sum_i {A_{k\rightarrow i}}}. \end{equation} \section{results and discussions} The quality of the correlated wave functions produced by the RCC method depends on the generation of accurate DF orbitals, as explained in the previous section. To generate precise DF orbitals, we use the basis-set expansion approach \cite{Clementi1990} in the self-consistent-field calculations. Here the radial part of each basis function is a Gaussian-type orbital (GTO) with even-tempered exponents \cite{Huzinaga1993} as basis parameters. The parameters are optimized \cite{Roy2015} such that the DF wave functions built from the even-tempered basis agree closely, over their radial extent, with the DF wave functions obtained using a sophisticated numerical approach, GRASP92 \cite{Parpia2006}. For the $s$, $p$, $d$, $f$, $g$ and $h$ symmetries, we have chosen 16, 15, 15, 11, 8 and 8 active orbitals, respectively, for the RCC calculations out of 33, 30, 28, 25, 21 and 20 DF orbitals. This choice of active orbitals depends upon the convergence of the core correlation energy in the closed-shell system \cite{Dixit2008}. The maximum difference between our RCC excitation energies and the NIST \cite{NIST} results occurs for the $6s_{1/2}$ state, where it is around $0.45 \%$, and the average discrepancy between the two results is about 0.3 \%. Table 1 shows the electric dipole ($E1$) transition amplitudes in both length and velocity gauge forms \cite{Grant2007, Johnson2006}, along with a comparison between our calculated and NIST \cite{NIST} extracted transition wavelengths. 
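Chaining the rate, oscillator-strength and lifetime formulas provides an internal consistency check. Using the $5p_{3/2}\rightarrow 5s_{1/2}$ rate evaluated from the Table 1 entries ($A \approx 1.10\times 10^{9}~\mathrm{s}^{-1}$, obtained from the RCC length-gauge amplitude 2.2253 a.u. and $\lambda = 1314.5$ $\mathring{\text{A}}$; the arithmetic is our own check): \begin{equation*} f_{5s_{1/2}\rightarrow 5p_{3/2}} = \frac{1.4991938\times 10^{-16}\times 4\times (1314.5)^2 \times 1.10\times 10^{9}}{2} \approx 0.57 , \end{equation*} in agreement with the value 0.572 listed in Table 3; and, since $5p_{3/2}$ decays almost exclusively to the ground state, $\tau_{5p_{3/2}} \approx 1/A \approx 0.91$ ns, close to the 0.90 ns quoted in Table 6.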
The correlation contribution to each of the matrix elements can easily be found from the difference between the DF and the RCC transition amplitudes in either gauge form. Apart from the $5s_{1/2}$ $\rightarrow $ $6p_{1/2,3/2}$ transitions, the average correlation contribution is about {5\%}; including these two transitions, the average correlation contribution becomes 23\%. This indicates that the correlations are significant for those relatively weak transitions. The table also shows good agreement between the length- and velocity-gauge results, with an average difference of $ 3.2 \%$ between them. This agreement provides one estimate of the uncertainty of our calculated wave functions. The consistent improvement of the ratio between these two gauge results further highlights the accuracy of our calculations. This ratio is 1.06 and 1.01 for the $5p \rightarrow 5d$ and $5p\rightarrow 6d$ transitions, respectively. Also, the table shows values of 0.96 and 0.95 for this ratio for $4f \rightarrow 5d$ and $4f \rightarrow 6d$, respectively. The accuracy can be analyzed further from the consistency of the ratios, approximately 20:10:2, of our calculated transition strengths among $^2P_{3/2}\rightarrow {^2}D_{5/2}$, $^2P_{1/2}\rightarrow {^2}D_{3/2}$ and $^2P_{3/2}\rightarrow {^2}D_{3/2}$, respectively \cite{cowan1981}. Most of the $E1$ transitions presented here, apart from a few, fall in the ultraviolet region of the electromagnetic spectrum and are especially useful in space-telescope-based astronomy \cite{Goad2016}. Some of the transitions, $5s_{1/2}$ $\rightarrow $ $6p_{1/2,3/2}$, $6s_{1/2}$ $\rightarrow $ $6p_{1/2}$, $4f_{5/2}$ $\rightarrow $ $6d_{3/2,5/2}$ and $4f_{7/2}$ $\rightarrow $ $6d_{5/2}$, belong to the visible spectrum and can be used in laser spectroscopy. 
Near- and mid-infrared observations in astronomy using the space-based Infrared Space Observatory \cite{Kessler1996} have opened up different areas of astrophysical study of cool regions of space, such as the interstellar medium \cite{Feuchtgruber1997} and planetary nebulae \cite{Liu2001}. Table 2 presents such fine-structure transition lines of Sn IV. Apart from their astrophysical importance, these transitions exhibit different correlation features of the many-body formalism. Therefore, we present their forbidden electric quadrupole $(E2)$ and magnetic dipole $(M1)$ transitions. As for the electric dipole transitions, the accuracy of the calculations can be assessed by comparing estimates based on the length- and velocity-gauge formulations. For these transitions the difference is on average 5\%, apart from the $5g_{7/2}\rightarrow 5g_{9/2}$ $E2$ transition, where we find a large discrepancy. In the latter case, the same discrepancy is observed even at the DF level using the sophisticated numerical code GRASP92 \cite{Parpia2006}, and it cannot be mitigated by any correlated method. Also, our estimate for the microwave transition between the $4f$ multiplets shows good agreement with the relativistic configuration-interaction calculations of \cite{Ding2012}. As expected, the correlation contributions to the fine-structure $M1$ transitions are negligible, and therefore for these transitions the DF results are a very good approximation to the RCC values. In Table 3, we list the oscillator strengths of the $E1$ transitions, calculated with their experimental transition wavelengths. Due to the better stability of the length-gauge form of the transition matrix elements compared to the velocity-gauge form \cite{Grant2007}, we have used the former form of the $E1$ transition amplitudes to calculate the oscillator strengths. 
Our calculated oscillator strengths are compared with previously reported theoretical \cite{Leszek2009, Cheng1979, Migdalek2000, Migdalek1979} and experimental \cite{Andersen1972, Pinnington1987} data. The table shows excellent agreement of our results for the $5s_{1/2}$ $\rightarrow $ $5p_{1/2, 3/2}$ transitions with the calculations of \cite{Leszek2009}, who used a configuration interaction method (CIDF(q)) based on DF wave functions generated with a non-integer occupation number for the outermost core shell. They showed that the fractional-occupancy parameter at the DF level contributes around 20\% to the oscillator strengths. Their configuration space comprises single and double excitations, as well as some triple excitations, from the ground state to a few low-lying states. Our present RCC calculations, however, employ a larger active orbital space to exhaust the correlation contributions. The comparison with the earlier calculation by Cheng and Kim using the relativistic Hartree-Fock method \cite{Cheng1979} shows the correlation contributions in these evaluations. Migdalek and Baylis performed relativistic Hartree-Fock calculations with core polarization, using a semi-empirically fitted polarization potential \cite{Migdalek1979}, for a few transitions. Migdalek and Garmulewicz reported the oscillator strengths for a few transitions of Sn IV using two different methods \cite{Migdalek2000}, which differ only in the treatment of the valence-core exchange potential. The superiority of the present RCC method over all the other many-body approaches discussed above is that the former includes all leading-order terms corresponding to core polarization, pair correlation and core correlation, along with higher-order terms, in the transition matrix element calculations \cite{Dixit2007a, Dixit2008, Dutta2016, Bhowmik2017b}. Our detailed study of correlations shows that core polarization makes the dominant contribution to the total correlation in the presented $E1$ transition amplitudes. 
Therefore, our RCC results are in good agreement with all the core-polarization-augmented DF results. Some experimental oscillator strengths of $E1$ transitions of Sn IV were previously reported using the beam-foil technique \cite{Andersen1972, Pinnington1987}, and they agree excellently with our RCC results. The only disagreement is seen for $5p_{3/2}\rightarrow 5d_{5/2}$. According to the suggestion of \cite{cowan1981}, the ratio of the $f$-values for the $5d_{5/2}\rightarrow 5p_{3/2}$, $5d_{3/2}\rightarrow 5p_{1/2}$ and $5d_{3/2}\rightarrow 5p_{3/2}$ transitions should be around 6:5:1, respectively. Our calculations show this ratio to be 4.6:4.5:1, respectively, and therefore the beam-foil experiment by \cite{Pinnington1987} underestimated the $f$-value. Table 4 presents the transition rates of the $E2$ and $M1$ transitions along with the corresponding experimental wavelengths. Most of the presented transitions are either in the ultraviolet or in the infrared (IR) region. The transitions which fall in the ultraviolet region are very important, in general, in astronomical observation and plasma research (\cite{Saloman2004}; \cite{Morgan1995}; \cite{Fahy2007}; \cite{Morita2010}). A recent study of forbidden transitions of monovalent atoms and ions by \cite{Safronova2017} highlights the advantages of these transitions in numerous areas of physics and engineering, particularly the precision measurement of time and fundamental constants. The $5d_{3/2, 5/2} \rightarrow 6s_{1/2}$ transitions may have applications in infrared laser spectroscopy and plasma research \cite{Thogersen1996}. Among all the presented $E2$ transitions, the $5s_{1/2}$ $- $ $6d_{3/2, 5/2}$ transition matrix elements are the most strongly correlated, at approximately 23\% and 27\%, respectively. All the other presented $E2$ transitions are correlated by less than 8\%. 
However, the $M1$ transition $5d_{3/2}$ $- $ $6d_{5/2}$ is anomalously correlated (around 82\%) due to the large pair-correlation effect \cite{Dixit2007a, Dixit2008, Bhowmik2017b}. As seen from the table, the $5s_{1/2}$ $- $ $5d_{3/2, 5/2}$ transitions have stronger probabilities (almost $10^5$ s$^{-1}$) compared to the other $E2$ transitions. Only $M1$ transitions with probabilities greater than $10^{-3}$ s$^{-1}$ are shown in the table. Since this ion has no metastable states, the lifetimes of the excited states are expected to be, as in neutral alkali atoms, of the order of nanoseconds (ns). In Table 6, we compare the present lifetimes of a few low-lying states of Sn IV with theoretical as well as a few experimental results. The lifetimes are calculated using the present RCC amplitudes and the experimental wavelengths from NIST \cite{NIST} to minimize the uncertainty due to the transition wavelengths. Andr\'es-Garc\'ia et al. used the Griem semi-empirical approach with the COWAN computer code \cite{Andres-Garcia2016}. The other theoretical lifetimes, calculated by Cheng and Kim \cite{Cheng1979} and by Migdalek and Baylis \cite{Migdalek1979}, are also presented in the table for comparison. The experimental lifetimes of different excited states of Sn IV measured using beam-foil spectroscopy \cite{Pinnington1987, Kernahan1985, Andersen1972} are very close to our RCC lifetimes, apart from that of $5d_{5/2}$, owing to the discrepancy in the $5p_{3/2}\rightarrow 5d_{5/2}$ $E1$ transition discussed in an earlier paragraph. The theoretical uncertainties in the calculated properties are evaluated from the quality of the wave functions at the DF level. We also consider uncertainty contributions from correlation terms not included in this work and from quantum-electrodynamics effects; the latter contribute at most $\pm$2\%. 
Therefore, in this work, the maximum uncertainties are $\pm$2.5\% and $\pm$2.3\% for the allowed and forbidden transition amplitudes, respectively. \section{conclusions} This paper presents the transition amplitudes, strengths and rates of astrophysically important allowed and forbidden transitions of the ion Sn IV using a highly correlated relativistic many-body approach. The transitions presented in this paper are in the ultraviolet, visible, infrared and microwave regions. Our calculated estimates for most of these transitions are in good agreement with the available experimental observations and theoretical calculations. The differences in results are explained and further justified with the help of estimates based on the length- and velocity-gauge forms. The spectroscopic data of the present work will be useful for estimating the abundance of the ion in different astronomical bodies, astrophysical plasmas and laboratory plasmas, especially in stellar and interstellar media. Since some of the transitions are, to the best of our knowledge, estimated for the first time in the literature, our calculated data may also help astronomers discover as-yet-unobserved lines in astronomical systems. \begin{table*} \centering \caption{The electric dipole transition matrix elements (in a.u.) at the DF \& RCC levels of calculation for both length and velocity gauge forms. Experimental transition wavelengths ($\lambda_{NIST}$) are also compared with those from our RCC calculations ($\lambda_{RCC}$). 
Wavelengths are in $\mathring{\text{A}}$ unit.} \begin{tabular}{c c c c c c c c c c c c} \hline & & & & &\multicolumn{3}{c}{ Length Gauge} & & \multicolumn{3}{c}{ Velocity Gauge} \\ \cline{6-8}\cline{10-12} Transition & $J_l$ &$J_u$ & $ \lambda_ {RCC}$ & $ \lambda _{NIST}$ &DF && RCC &&DF & &RCC\\ \hline $5s \longrightarrow 5p$&1/2 &1/2 &1440.8 &1437.5 &1.8671 &&1.5659 &&1.8226 &&1.5900 \\ &1/2 &3/2 &1320.2 &1314.5 &2.6433 &&2.2253 &&2.5697 &&2.2536 \\ $5s \longrightarrow 6p$&1/2 &1/2 &507.2 &505.4 &0.0812 &&0.1836 &&0.0914 &&0.1682 \\ &1/2 &3/2 &501.8 &499.9 &0.0329 &&0.1834 && 0.0512 &&0.1604 \\ $6s \longrightarrow 6p$&1/2 &1/2 &4204.5 &4217.3 &3.6738 &&3.5135 &&3.6283 &&3.4928 \\ &1/2 &3/2 &3859.4 &3862.2&5.1600 &&4.9406 &&5.0860 &&4.9058 \\ $5p \longrightarrow 6s$&1/2 &1/2 &962.1 &956.3 &1.0166 &&1.0120 &&0.9921 &&0.9888 \\ &3/2 &1/2 &1024.5 &1019.7 &1.5836 &&1.5645 &&1.5413 &&1.5181 \\ $5p \longrightarrow 7s$&3/2 &1/2 &621.4 &619.0 &3.2234 &&3.3226 &&3.1607 &&3.2529 \\ &1/2 &1/2 &597.8 &595.1 &0.3093 &&0.3139 &&0.2982 &&0.3073 \\ $5p \longrightarrow 5d$&1/2 &3/2 &1041.4 &1044.5 &2.9490 &&2.6038 &&2.8498 &&2.5826 \\ &3/2 &3/2 &1114.9 &1120.7 &1.3294 &&1.2778 &&1.2702 &&1.2051 \\ &3/2 &5/2 & 1113.1 &1119.3 &4.0488 &&3.8480 &&3.8618 &&3.6150 \\ $5p \longrightarrow 6d$ &1/2 &3/2 &607.1 & 605.2 &0.5164 &&0.3803 &&0.4791 &&0.3815 \\ &3/2 &3/2 &631.4 &630.0 &2.3877 &&2.3413 &&2.3499 &&2.3157 \\ &3/2 &5/2 &631.2 &628.7 &7.1195 &&7.0158 &&7.0037 &&6.9390 \\ $ 6p \longrightarrow 7s$ &1/2 &1/2 &2528.7 &2514.8 & 2.0992 &&2.0612 &&2.0623 && 2.0253 \\ &3/2 &1/2 &2672.4 & 2660.6 &3.2234 &&3.1575 &&3.1607 &&3.0923 \\ $ 6p \longrightarrow 6d$&1/2 &3/2 &2703.6 &2706.7 &5.1306 &&4.9242 &&5.0383 &&4.8533 \\ &3/2 &3/2 &2868.6 & 2876.5 &2.3877 && 2.2918 && 2.3499 && 2.2660 \\ &3/2 &5/2 &2864.2 &2849.3 &7.1195 &&6.8656 &&7.0037 &&6.7880 \\ $ 5d \longrightarrow 6p$&3/2 &1/2 &3154.8 &3072.6 &3.0850 &&3.0584 &&2.9315 &&2.9094 \\ &3/2 &3/2 &2956.4 &2879.7 &1.3294 &&1.3231 &&1.2702 
&&1.2634 \\ &5/2 &3/2 &2969.8 &2888.5 &4.0488 &&3.9850 &&3.8618 &&3.7922 \\ $5d \longrightarrow 4f$&3/2 &5/2 &2241.3 &2221.6 &5.5646 &&5.1718 &&5.7219 &&5.4041 \\ &5/2 &7/2 &2265.5 &2229.8 &6.6731 &&6.1593 &&6.9000 &&6.4668 \\ &5/2 &5/2 &2249.0 &2226.8 &1.4933 &&1.3840 &&1.5262 &&1.4361 \\ $4f \longrightarrow 6d$&7/2 &5/2 &4090.7 &4020.9 &4.0987 &&3.8654 &&4.3571 &&4.0997 \\ &5/2 &3/2 &4154.7 &4085.4 &3.4912 &&3.3390 &&3.6742 &&3.4986 \\ &5/2 &5/2 &4145.6 &4030.7 &0.9218 &&0.8897 &&0.9572 &&0.9200 \\ $ 4f \longrightarrow 5g$&7/2 &7/2 &2107.8 &2082.2 &1.3616 &&1.2583 &&1.3579 &&1.2613 \\ &7/2 &9/2 &2107.8 &2082.3 &8.0560 &&7.4448 &&8.0346 &&7.4643 \\ &5/2 &7/2 &2122.3 &2084.9 &7.0970 &&6.6454 &&7.0793 &&6.6629 \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Fine-structure transition amplitudes (in a.u.) of Sn IV at the DF and RCC levels of calculation.} \begin{tabular}{c c c c c c c c c c c c c } \hline & & &\multicolumn{3}{c}{ Length Gauge} & & \multicolumn{3}{c}{ Velocity Gauge} & & & \\ \cline{4-6}\cline{8-10} Transition & $J_l$ &$J_u$ &$O_{DF}^{E2}$ && $O_{RCC}^{E2}$ &&$O_{DF}^{E2}$ & &$O_{RCC}^{E2}$ &$O_{DF}^{M1}$ &$O_{RCC}^{M1}$ &$O_{other}^{M1}$ \\ \hline $5p\longrightarrow 5p$ &1/2 &3/2 &6.5555 && 6.1617 &&6.2810 &&5.7857 & 1.1531 &1.1532 & \\ $6p\longrightarrow 6p$ &1/2 &3/2 &25.4611 &&24.5380 &&24.3819 &&23.4516 &1.1528 &1.1531 & \\ $5d\longrightarrow 5d $&3/2 &5/2 &8.1878 &&7.8169 &&7.9249 &&7.5636 &1.5490 &1.5488 &\\ $6d\longrightarrow 6d $&3/2 &5/2 &27.7864 &&26.9633 &&27.5002 &&26.6322 &1.5488 &1.5490 & \\ $4f\longrightarrow 4f $&5/2 &7/2 &8.7703 &&8.1111 &&7.0339 &&6.8043 &1.8515 &1.8511 &$1.8586^a$ \\ $5g\longrightarrow 5g$ & 7/2 &9/2 &21.4040 &&21.0925 &&98.3823 &&98.0656 &2.1082 &2.1082 & \\ \hline a$\implies$\cite{Ding2012} \\ \end{tabular} \end{table*} \begin{table*} \caption{Comparison of oscillator strengths ($f$) (in a.u.) of electric dipole transitions between the RCC and other endeavours (experimental and theoretical). 
RCC results are obtained using experimental wavelength ($ \lambda _{NIST}$) as given here in $\mathring{\text{A}}$ unit. } \centering \begin{tabular}{c r c c c c c } \hline \hline Transition & $J_l$ & $J_u$ & $ \lambda _{NIST}$ &$ f_{RCC} $ & $ f_{Others} $\\ \hline $5s \longrightarrow 5p$ &1/2 &1/2 &1437.5 &0.259 &$0.307^{a}$,$0.258^{b}$, $0.350^{c}$, $0.243^{d}$ \\ & & & & & 0.263$^e$, 0.241$^f$, 0.240$^g$, $ 0.225^h $\\ & & & & &0.260$^i$, 0.240$^j$\\ & 1/2 &3/2 &1314.5 &0.572 &$0.671^{a}$,$0.567^{b}$, $0.764^{c}$, $0.538^{d}$ \\ & & & & &0.583$^e$, 0.534$^f$, 0.535$^g$, $ 0.500^h$\\ & & & & & 0.560$^i$, 0.640$^j$\\ $ 5s \longrightarrow 6p $&1/2 &1/2 &505.4 &0.010 &- \\ &1/2 &3/2 & 499.9 &0.010 &- \\ $6s \longrightarrow 6p$ &1/2 &1/2 &4217.3 &0.445 &- \\ &1/2 & 3/2 &3862.2 &0.960 &- \\ $5p \longrightarrow 6s $&1/2 &1/2 &956.3 &0.163 &$0.165^d$, 0.161$^e$, 0.162$^f$, 0.159$^g$ \\ & & & & & 0.160$^j$\\ &3/2 &1/2 &1019.7 &0.182 &$0.185^d$, 0.182$^e$, 0.182$^f$, 0.180$^g$ \\ & & & & & 0.180$^j$\\ $5p \longrightarrow 7s$ &1/2 &1/2 &595.1 &0.025 &- \\ &3/2 &1/2 &619.0 &1.354 &- \\ $6p \longrightarrow 7s $&1/2 &1/2 &2514.8 &0.257 &- \\ &3/2 &1/2 &2660.6 &0.285 &- \\ $5p \longrightarrow 5d$ &1/2 &3/2 &1044.5 &0.986 &$1.180^c$, $0.972^d$, 0.965$^e$, 0.963$^f$\\ & & & & & 0.947$^g$, 0.943$^h$, 0.960$^j$\\ &3/2 &3/2 &1120.7 &0.111 & $0.088^d$, 0.098$^e$, 0.097$^f$, 0.096$^g$ \\ & & & & & 0.095$^h$\\ &3/2 &5/2 &1119.3 &1.005 &$1.069^c$, $0.885^d$, 0.881$^e$, 0.878$^f$, \\ & & & & & 0.866$^g$, 0.852$^h$, 0.630$^j$\\ $5p \longrightarrow 6d$ &1/2 &3/2 &605.2 &0.036 &- \\ &3/2 &3/2 &630.0 &0.661 &- \\ &3/2 &5/2 &628.7 &5.945 &- \\ $6p \longrightarrow 6d$ &1/2 &3/2 &2706.7 &1.360 &- \\ &3/2 &3/2 &2876.5 & 0.139 & -\\ &3/2 &5/2 &2849.3 &1.256 &- \\ $5d \longrightarrow 6p$ &3/2 &1/2 &3072.6 &0.231 &- \\ &3/2 &3/2 &2879.7 &0.046 &- \\ &5/2 &3/2 &2888.5 &0.278 &- \\ $5d \longrightarrow 4f$ &3/2 &5/2 &2221.6 &0.914 &$1.036^c$ \\ &5/2 &7/2 &2229.8 &0.861 &$0.977^c$ \\ &5/2 &5/2 
&2226.8 &0.044 &- \\ $4f \longrightarrow 6d$ &7/2 &5/2 &4020.9 &0.141 &- \\ &5/2 &3/2 &4085.4 &0.138 &- \\ &5/2 &5/2 &4030.7 &0.010 &- \\ $4f \longrightarrow 5g$&7/2 &7/2 &2082.2 &0.029&- \\ &7/2 &9/2 &2082.3 &1.011 &$1.100^c$ \\ &5/2 &7/2 &2084.9 &1.072 &$1.135^c$ \\ \hline \hline \end{tabular} \hspace{-2.5cm} $a$ $\implies$ CIDF method with integer occupation number [\cite{Leszek2009}]. \\ \hspace{-1.65cm} $b$ $\implies$ CIDF(q) method with non-integer occupation number [\cite{Leszek2009}]. \\ \hspace{-5.1cm} $c$ $\implies$ Relativistic Hartree-Fock method [\cite{Cheng1979}]. \\ \hspace{-5.96cm} $d$ $\implies $ DF+CP method [\cite{Migdalek2000}]. \\ \hspace{-2.30cm} $e$ $\implies $ DX+CP method with SCE model potential [\cite{Migdalek2000}].\\ \hspace{-1.66cm} $f$ $\implies $ DX+CP method with CAFEGE model potential [\cite{Migdalek2000}].\\ \hspace{-1.80cm}$g$ $\implies $ DX+CP method with HFEGE model potential [\cite{Migdalek2000}].\\ \hspace{-1.27cm}$h$ $\implies $ Semiempirical relativistic Hartree-Fock (Dirac-Fock) results [\cite{Migdalek1979}] \\ \hspace{-7.10cm} $i$ $\implies $ Beam-foil technique [\cite{Andersen1972}]\\ \hspace{-6.85cm}$j$ $\implies $ Beam-foil technique [\cite{Pinnington1987}] \end{table*} \begin{table*} \caption {DF and RCC transition rates (in s$^{-1}$) of $E2$ transitions in the length gauge ($A_{DF}^{E2} $ and $ A_{RCC}^{E2} $) and of $M1$ transitions ($ A_{DF}^{M1} $ and $ A_{RCC}^{M1} $), along with the experimental wavelengths ( $\lambda_{NIST} $) in \AA. 
The notation $P(Q)$ for transition rates means $P\times10^{Q}$.} \centering \centering \begin{tabular}{c c c c c c c c c } \hline Transition &$J_l$ &$J_u$ & $ \lambda_{NIST} $ & $A_{DF}^{E2} $ & $ A_{RCC}^{E2} $ & $ A_{DF}^{M1} $ & $ A_{RCC}^{M1} $ \\ \hline\\ $5s-5d$ &1/2 &3/2 & 604.9 &1.0598 (+05) &9.2311(+04) & & & \\ &1/2 &5/2 &604.6 &1.0563(+05) &9.2260(+04) \\ $5s-6d $&1/2 &3/2 &425.9 &1.6629(+04) &1.2858(+04) & & & \\ &1/2 &5/2 &425.3 &1.7794(+04) &1.2912(+04) & & \\ $5p-6p$ &1/2 &3/2 &766.5 &1.4222(+04) &1.3055(+04) &1.6191(+01) &1.3863(+01) \\ &3/2 &1/2 &821.2 &2.8691(+04) &2.6272(+04)&2.9772(+01) &3.4242(+01) \\ &3/2 &3/2 &806.7 &1.3823(+04) &1.2681(+04) & & \\ $ 5p-4f$&3/2 &7/2 &745.2 &9.5768(+04) &8.4420(+04) & & \\ &3/2 &5/2 &744.9 &2.1323(+04) &1.8862(+04) &$2.1018(-03)$ &$2.5918(-03)$ \\ &1/2 &5/2 &710.5 &8.5436(+04) &7.5457(+04) & & \\ $6p-4f$ &3/2 &7/2 &9778.1 &1.2745$(+00)$ & 1.1060$(+00)$ & & \\ &3/2 & 5/2 & 9720.6 &2.9366$(-01)$ &2.6202$(-01)$& & \\ $5d-6s$ &3/2 &1/2 &11319.8 &5.4750$(-01)$ &5.1505$(-01)$ & & \\ &5/2 &1/2 &11457.4 &7.9011$(-01)$ &7.3143$(-01)$ & & \\ $5d-7s$ &3/2 &1/2 &1382.9 &1.0096(+03) &9.7368(+02) & & \\ &5/2 &1/2 &1384.9 &1.6291(+03) &1.4869(+03) & & \\ $5d-6d $&3/2 &3/2 &1439.0 &2.7130(+03) &2.5876(+03) & & \\ &3/2 &5/2 &1432.2 &7.7068(+02) &7.5189(+02) &1.7655$(-01)$ &6.1292$(-03)$ \\ &5/2 &5/2 &1434.4 &3.2066$(+03)$ &3.0232$(+03)$ & & \\ $5d-5g$ &3/2 &7/2 &1075.5 &4.2270(+04) &3.9004(+04) & & \\ &5/2 &7/2 &1076.8 &4.8024(+03) &4.3446(+03) & & \\ &5/2 &9/2 &1076.8 &4.8033(+04) &4.3457(+04) & & \\ $6d-5g$ &5/2 &7/2 &4318.7 & 4.7514$(+01)$&4.5416$(+01)$ & & \\ &5/2 &9/2 &4318.8 &4.7502$(+02)$ & 4.5402$(+02)$ & & \\ \hline \\ \end{tabular} \end{table*} \begin{table*} \begin{center} \caption{Lifetimes in ns of few low-lying states. 
} \centering \begin{tabular}{c c c c c } \hline\hline Level &present work &other work (experiment) &other work (theory) \\ \hline $5p_{1/2}$ &1.20 & 1.29 $\pm0.20^a$, 0.73 $\pm0.40^b$& $1.03^d$, $0.95^e$, $0.89^f$ \\ $5p_{3/2}$ &0.90 & 0.81$ \pm0.15^a $, 0.93$\pm0.23^b$, &$0.79^d$, $0.74^e$, $0.68^f$ \\ & & 1.00$\pm0.10^c$ & \\ $5d_{3/2}$ &0.28 & 0.34$ \pm0.04^a $, 0.35$\pm0.03^b$ &$0.23^d$, $0.26^e$, $0.33^f$ \\ $5d_{5/2}$ &0.28 & 0.45$\pm0.05^a$, 0.41$\pm0.03^b$, &$0.30^d$, $0.29^e$, $0.35^f$ \\ & & 0.52$\pm0.08^c$ & \\ \hline \\ \end{tabular} \end{center} a$\implies$ \cite{Pinnington1987}, b$\implies$ \cite{Kernahan1985}, c$\implies$ \cite{Andersen1972} \\ d$\implies$ \cite{Andres-Garcia2016}, e$\implies$ \cite{Cheng1979}, f$\implies$ \cite{Migdalek1979} \\ \begin{thebibliography}{mnras} \bibitem[\protect\citeauthoryear{Andersen et al.}{1972}]{Andersen1972} Andersen T., Nielsen A. K., and Sorensen G., 1972, Phys. Scr., 6, 122 \bibitem[\protect\citeauthoryear{Andr\'es-Garc\'ia et al.}{2016}]{Andres-Garcia2016} Andr\'es-Garc\'ia I. de., Alonso-Medina A., Col\'on C., 2016, MNRAS, 455, 1145 \bibitem[\protect\citeauthoryear{Arlandini et al.}{1999}]{Arlandini1999} Arlandini C., K\"{a}ppeler F., Wisshak K., Gallino R., Lugaro M., Busso M., and Straniero O., 1999, ApJ, 525, 886 \bibitem[\protect\citeauthoryear{Bhowmik et al.}{2017a}] {Bhowmik2017a} Bhowmik A., Dutta N. N., and Roy S., 2017, ApJ, 836, 125 \bibitem[\protect\citeauthoryear{Bhowmik et al.}{2017b}]{Bhowmik2017b} Bhowmik A., Roy S., Dutta N. N., and Majumder S., 2017, J. Phys. B: At. Mol. Opt. Phys., 50, 125005 \bibitem[\protect\citeauthoryear{Bhowmik et al.}{2018}]{Bhowmik2018} Bhowmik A., Dutta N. N., and Majumder S., 2018, Phys. Rev. A, 97, 022511 \bibitem[\protect\citeauthoryear{Bishop}{1991}]{Bishop1991} Bishop R. F., 1991, Theor. Chim. 
Acta, 80, 95
\bibitem[\protect\citeauthoryear{Chayer et al.}{2005}]{Chayer2005} Chayer P., Vennes S., Dupuis J., Kruk W., 2005, ApJ, 630, L169
\bibitem[\protect\citeauthoryear{Cheng \& Kim}{1979}]{Cheng1979} Cheng K.-T., Kim Y.-K., 1979, J. Opt. Soc. Am., 69, 125
\bibitem[\protect\citeauthoryear{Clementi}{1990}]{Clementi1990} Clementi E. (ed.), 1990, Modern Techniques in Computational Chemistry: MOTECC-90, ESCOM Science Publishers B. V.
\bibitem[\protect\citeauthoryear{Cowan}{1981}]{cowan1981} Cowan R. D., 1981, The Theory of Atomic Structure and Spectra (Berkeley, CA: University of California Press)
\bibitem[\protect\citeauthoryear{Das et al.}{2018}]{Das2018} Das A., Bhowmik A., Dutta N. N., and Majumder S., 2018, J. Phys. B: At. Mol. Opt. Phys., 51, 025001
\bibitem[\protect\citeauthoryear{Das \& Idress}{1990}]{Das1990} Das B. P. and Idress M., 1990, Phys. Rev. A, 42, 6900
\bibitem[\protect\citeauthoryear{Ding et al.}{2012}]{Ding2012} Ding X. B., Koike F., Murakami I., Kato D., Sakaue H. A., Dong C.-Z., and Nakamura N., 2012, J. Phys. B: At. Mol. Opt. Phys., 45, 035003
\bibitem[\protect\citeauthoryear{Dixit et al.}{2007}]{Dixit2007a} Dixit G., Sahoo B. K., Chaudhuri R. K., Majumder S., 2007, Phys. Rev. A, 76, 042505
\bibitem[\protect\citeauthoryear{Dixit et al.}{2008}]{Dixit2008} Dixit G., Nataraj H. S., Sahoo B. K., Chaudhuri R. K., Majumder S., 2008, J. Phys. B: At. Mol. Opt. Phys., 41, 025001
\bibitem[\protect\citeauthoryear{Dunne \& O'Sullivan}{1992}]{Dunne1992} Dunne P. and O'Sullivan G., 1992, J. Phys. B: At. Mol. Opt. Phys., 25, L593
\bibitem[\protect\citeauthoryear{Dutta \& Majumder}{2011}]{Dutta2011} Dutta N. N. and Majumder S., 2011, ApJ, 737, 25
\bibitem[\protect\citeauthoryear{Dutta \& Majumder}{2012}]{Dutta2012} Dutta N. N. and Majumder S., 2012, Phys. Rev. A, 85, 032512
\bibitem[\protect\citeauthoryear{Dutta \& Majumder}{2016}]{Dutta2016} Dutta N. N. and Majumder S., 2016, Indian J. Phys.,
90, 373
\bibitem[\protect\citeauthoryear{Fahy et al.}{2007}]{Fahy2007} Fahy K., Sokell E., O'Sullivan G., Aguilar A., Pomeroy J. M., Tan J. N., Gillaspy J. D., 2007, Phys. Rev. A, 75, 032520
\bibitem[\protect\citeauthoryear{Feuchtgruber et al.}{1997}]{Feuchtgruber1997} Feuchtgruber H., et al., 1997, ApJ, 487, 962
\bibitem[\protect\citeauthoryear{Glowacki \& Migdalek}{2009}]{Leszek2009} Glowacki L. and Migdalek J., 2009, Phys. Rev. A, 80, 042505
\bibitem[\protect\citeauthoryear{Goad et al.}{2016}]{Goad2016} Goad M. R., et al., 2016, ApJ, 824, 11
\bibitem[\protect\citeauthoryear{Grant}{2007}]{Grant2007} Grant I. P., 2007, Relativistic Quantum Theory of Atoms and Molecules: Theory and Computation (Berlin: Springer)
\bibitem[\protect\citeauthoryear{Grumer et al.}{2014}]{Grumer2014} Grumer J., Zhao R., Brage T., Li W., Huldt S., Hutton R., and Zou Y., 2014, Phys. Rev. A, 89, 062511
\bibitem[\protect\citeauthoryear{Gutterres et al.}{2002}]{Gutterres2002} Gutterres R. F., Amiot C., Fioretti A., Gabbanini C., Mazzoni M., Dulieu O., 2002, Phys. Rev. A, 66, 024502
\bibitem[\protect\citeauthoryear{Haque \& Mukherjee}{1984}]{Haque1984} Haque A. and Mukherjee D., 1984, J. Chem. Phys., 80, 5058
\bibitem[\protect\citeauthoryear{Hobbs et al.}{1993}]{Hobbs1993} Hobbs L. M., Welty D. E., Morton D. C., Spitzer L., York D. G., 1993, ApJ, 411, 750
\bibitem[\protect\citeauthoryear{Huzinaga \& Klobukowski}{1993}]{Huzinaga1993} Huzinaga S. and Klobukowski M., 1993, Chem. Phys. Lett., 212, 260
\bibitem[\protect\citeauthoryear{Johnson}{2006}]{Johnson2006} Johnson W. R., 2006, Atomic Structure Theory: Lectures on Atomic Physics (Berlin: Springer)
\bibitem[\protect\citeauthoryear{Kelleher \& Podobedova}{2008}]{Kelleher2008} Kelleher D. E. and Podobedova L. I., 2008, J. Phys. Chem. Ref. Data, 37, 267
\bibitem[\protect\citeauthoryear{Kernahan et al.}{1985}]{Kernahan1985} Kernahan J. A., Pinnington E. H., Ansbacher W., Bahr J. L., 1985, Nucl. Instr. and Meth., 9, 616
\bibitem[\protect\citeauthoryear{Kessler}{1996}]{Kessler1996} Kessler M. F., et al., 1996, A\&A, 315, L27
\bibitem[\protect\citeauthoryear{Kramida}{2017}]{NIST} Kramida A., Ralchenko Y., Reader J., and NIST ASD Team, 2017, NIST Atomic Spectra Database (ver. 5.5.1), [Online]. Available: https://physics.nist.gov/asd [2017, December 8]. National Institute of Standards and Technology, Gaithersburg, MD
\bibitem[\protect\citeauthoryear{Leonard et al.}{2015}]{Leonard2015} Leonard R. H., Fallon A. J., Sackett C. A., Safronova M. S., 2015, Phys. Rev. A, 92, 052501
\bibitem[\protect\citeauthoryear{Lindgren \& Morrison}{1985}]{Lindgren1985} Lindgren I. and Morrison J., 1985, in Lambropoulos G. E., Walther H., eds, Atomic Many-body Theory (3rd ed., Berlin: Springer), 3
\bibitem[\protect\citeauthoryear{Lindgren \& Mukherjee}{1987}]{Lindgren1987} Lindgren I. and Mukherjee D., 1987, Phys. Rept., 151, 93
\bibitem[\protect\citeauthoryear{Liu et al.}{2001}]{Liu2001} Liu X.-W., Barlow M. J., Cohen M., et al., 2001, MNRAS, 323, 343
\bibitem[\protect\citeauthoryear{Lysaght et al.}{2005}]{Lysaght2005} Lysaght M., Kilbane D., Murphy N., Cummings A., Dunne P., and O'Sullivan G., 2005, Phys. Rev. A, 72, 014502
\bibitem[\protect\citeauthoryear{Marian et al.}{2005}]{Marian2005} Marian A., Stowe M. C., Felinto D., Ye J., 2005, Phys. Rev. Lett., 95, 023001
\bibitem[\protect\citeauthoryear{Migdalek \& Baylis}{1979}]{Migdalek1979} Migdalek J. and Baylis W. E., 1979, J. Quant. Spectrosc. Radiat. Transfer, 22, 113
\bibitem[\protect\citeauthoryear{Migdalek \& Garmulewicz}{2000}]{Migdalek2000} Migdalek J. and Garmulewicz M., 2000, J. Phys. B, 33, 1735
\bibitem[\protect\citeauthoryear{Morgan et al.}{1995}]{Morgan1995} Morgan C. A., Serpa F. G., Takacs E., Meyer E. S., Gillaspy J. D., Sugar J., Roberts J. R., Brown C. M., Feldman U., 1995, Phys. Rev. Lett.,
74, 1716
\bibitem[\protect\citeauthoryear{Morita et al.}{2010}]{Morita2010} Morita S., Goto M., Katai R., Dong C., Sakaue H., Zhou H., 2010, Plasma Science and Technology, 12, 341
\bibitem[\protect\citeauthoryear{O'Toole}{2004}]{O'Toole2004} O'Toole S. J., 2004, A\&A, 423, L25
\bibitem[\protect\citeauthoryear{Parpia et al.}{2006}]{Parpia2006} Parpia F. A., Fischer C. F., and Grant I. P., 2006, Comput. Phys. Commun., 175, 745
\bibitem[\protect\citeauthoryear{Pinnington et al.}{1987}]{Pinnington1987} Pinnington E. H., Kernahan J. A., Ansbacher W., 1987, Can. J. Phys., 65, 7
\bibitem[\protect\citeauthoryear{Proffitt et al.}{2001}]{Proffit2001} Proffitt C. R., Sansonetti C. J., Reader J., 2001, ApJ, 557, 320
\bibitem[\protect\citeauthoryear{Rao}{1926}]{Rao1926} Rao K. R., 1926, Proc. Phys. Soc., 39, 408
\bibitem[\protect\citeauthoryear{Roy et al.}{2014}]{Roy2014} Roy S., Dutta N. N., Majumder S., 2014, Phys. Rev. A, 89, 042511
\bibitem[\protect\citeauthoryear{Roy \& Majumder}{2015}]{Roy2015} Roy S. and Majumder S., 2015, Phys. Rev. A, 92, 012508
\bibitem[\protect\citeauthoryear{Ryabtsev et al.}{2006}]{Ryabtsev2006} Ryabtsev A. N., Churilov S. S., and Kononov \'E. Ya., 2006, Opt. Spectrosc., 100, 652
\bibitem[\protect\citeauthoryear{Ryabtsev et al.}{2007}]{Ryabtsev2007} Ryabtsev A. N., Churilov S. S., and Kononov \'E. Ya., 2007, Opt. Spectrosc., 102, 354
\bibitem[\protect\citeauthoryear{Safronova et al.}{2017}]{Safronova2017} Safronova U. I., Safronova M. S., and Johnson W. R., 2017, Phys. Rev. A, 95, 042507
\bibitem[\protect\citeauthoryear{Saloman}{2004}]{Saloman2004} Saloman E. B., 2004, J. Phys. Chem. Ref. Data, 33, 765
\bibitem[\protect\citeauthoryear{Sneden et al.}{1996}]{Sneden1996} Sneden C., Cowan J. J., Dufton P. L., Burris D. L., and Armosky B. J., 1996, ApJ, 467, 819
\bibitem[\protect\citeauthoryear{Sofia et al.}{1999}]{Sofia1999} Sofia U. J., Meyer D. M., and Cardelli J.
A., 1999, ApJ, 522, L137
\bibitem[\protect\citeauthoryear{Szabo \& Ostlund}{1996}]{Szabo1996} Szabo A. and Ostlund N. S., 1996, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (Dover, Mineola)
\bibitem[\protect\citeauthoryear{Thogersen et al.}{1996}]{Thogersen1996} Thogersen J., Scheer M., Steele L. D., Haugen H. K., Wijesundera W. P., 1996, Phys. Rev. Lett., 76, 2870
\end{thebibliography}
\label{lastpage}
\end{document}
\section{Background}
\label{sec:background}

When it comes to malware detection, anti-virus software has long been the first line of defense. However, for almost as long as AV engines have been around, they have been recognized to be far from perfect, in terms of their false negative rates~\cite{tenable, gashi2009experimental}, false positive rates, and, lastly, in terms of performance~\cite{real-time-av,Dien2014}. Clearly, the choice of signature database plays a decisive role in the success of the AV solution. This is illustrated by the promises of SaneSecurity (\url{http://sanesecurity.com}) to deliver improved detection rates, by using the free open-source ClamAV detection engine and their own carefully curated and frequently updated database of signatures. For example, as of August~2016, they claim a detection rate of~97.11\% vs. only~13.82\% for out-of-the-box ClamAV, using a database of a little over~4,000 signatures vs. almost~250,000 for ClamAV.
\hide{ Several researchers have proposed generating new signatures automatically~\cite{newsome2005polygraph,perdisci2010behavioral,griffin2009automatic,sathyanarayan2008signature,zolkipli2010framework}. Signature \emph{addition} seems to largely remain a manual process, supplemented with testing potential AV signatures against known deployments, often within virtual machines. }
While the issue of false negatives is generally known to industry insiders, the issue of false \emph{positives} receives much more negative press. Reluctance to remove or disable older signatures runs the risk of unnecessarily triggering false positives, which are supremely costly to the AV vendor, as reported by AV-Comparatives~\cite{av-fp-sink} (\url{http://www.av-comparatives.org}).

\subsection{The Changing Attack Landscape}

When the attacks for which some mitigation mechanism was designed are no longer observed in the wild, it might seem very tempting to remove said mechanism.
However, in most real-world scenarios, we are not able to fully retire mitigation techniques. Before disabling some mitigation mechanism, one should examine the reason these attacks are no longer being observed, and whether this is simply due to the observational mechanism and data collection approach being faulty. Today, large-scale exploitation is often run as a business, meaning it is driven largely, but not entirely, by economic forces. The lack of observed attacks might simply be associated with an increased difficulty in monetizing the attack. Given that the attacker is aiming to profit from the attack, if the cost of mounting a successful attack is too high compared to either the possible gains or alternative attack vectors, the attacker will most likely opt not to execute it, as it is no longer cost-effective. We see these forces in practice as the attack landscape changes, with newer attacks such as ransomware becoming increasingly popular in the last several years, while older attacks aimed at the theft of account credentials become less common because of two-factor authentication, geo-locating the user, etc. To summarize, we identify two common cases where monetization becomes hard.
\begin{itemize}\itemsep=-1pt
\item \textbf{No longer profitable.} The first is the result of causes other than the mitigation mechanism, for example, when clients are no longer interested in the possible product of the attack, or if there are other security mechanisms in place that negate the usefulness of the attack's outcome. In such cases, removing the mitigation mechanism in question will most likely not have a practical negative effect on the system, since the attack remains cost-ineffective.
\item \textbf{Effective mitigation.} The other case is when the attack is not cost-effective due to difficulties imposed by the mitigation mechanism. In such cases, removing the mechanism will result in an increase in the cost-effectiveness of the attacks it was aimed to prevent.
This might result in the reemergence of such attacks.
\end{itemize}
By sampling the relative frequency of attacks of a particular kind, we cannot always determine which case we are currently faced with. It may also be a combination of these two factors. However, there is growing evidence that attackers are frequently going after the low-hanging fruit, effectively behaving lazily~\cite{LazyAttacker}. We therefore suggest an alternative that acts as a middle ground by introducing sampling rates for all mitigation mechanisms. In the first case, the mitigation mechanism is no longer needed; therefore, adding a sampling rate will not reduce the security of the system, but will provide fewer benefits than a complete removal. On the other hand, in the second case, the security of the system is somewhat lowered, but the statistical nature of the sampling rate maintains some deterrence against attackers.

\subsection{Study of Snort Signatures}

To test some of these educated guesses, we have performed an in-depth study of Snort signatures. Focusing on the dynamics of signature addition and removal, we have mined the database of Snort signatures, starting on~12/30/2007 and ending on~9/6/2016. Daily updates to the Snort signature database are distributed through the {\textsc{Emerging Threats}}\xspace mailing list archived at \url{https://lists.emergingthreats.net}, which we used to determine which signatures, if any, were (1)~added, (2)~removed, or (3)~modified every single day. Figure~\ref{fig:snort-add-remove} presents statistics of Snort signatures obtained by exploring the dataset we collected by crawling the mailing list archive. It is evident from the figure that there are many more signature additions than removals. These statistics support our claim that signature dataset sizes are growing out of control and becoming unsustainable.
\begin{figure*}[h!]
\centering \begin{subfigure}[tb]{1\textwidth} \centering \ifpdf \includegraphics[width=\textwidth]{figure4.jpg} \fi \caption{Additions of signatures in the Snort emerging threats database.} \end{subfigure} \\ \begin{subfigure}[tb]{1\textwidth} \centering \ifpdf \includegraphics[width=\textwidth]{figure5.jpg} \fi \caption{Removals of signatures in the Snort emerging threats database.} \end{subfigure}\\ \begin{subfigure}[tb]{1\textwidth} \centering \ifpdf \includegraphics[width=\textwidth]{figure6.jpg} \fi \caption{Updates of signatures in the Snort emerging threats database.} \end{subfigure} \caption{Dynamics of Snort signatures between~12/30/2007 and~9/6/2016.} \label{fig:snort-add-remove} \end{figure*} \begin{figure}[tb] \centering \tiny \begin{Verbatim} 2023020 - ET TROJAN ProjectSauron Remsec DNS Lookup (rapidcomments.com) (trojan.rules) 2023021 - ET TROJAN ProjectSauron Remsec DNS Lookup (bikessport.com) (trojan.rules) 2023022 - ET TROJAN ProjectSauron Remsec DNS Lookup (myhomemusic.com) (trojan.rules) 2023023 - ET TROJAN ProjectSauron Remsec DNS Lookup (flowershop22.110mb.com) (trojan.rules) 2023024 - ET TROJAN ProjectSauron Remsec DNS Lookup(wildhorses.awardspace.info) (trojan.rules) 2023025 - ET TROJAN ProjectSauron Remsec DNS Lookup (asrgd-uz.weedns.com) (trojan.rules) 2023026 - ET TROJAN ProjectSauron Remsec DNS Lookup (sx4-ws42.yi.org)(trojan.rules) 2023027 - ET TROJAN ProjectSauron Remsec DNS Lookup (we.q.tcow.eu)(trojan.rules) \end{Verbatim} \caption{Connecting signatures to ProjectSauron malware (\code{http://bit.ly/2eX1O9h}).} \label{fig:sauron-malware} \hide{ \end{subfigure} \\[2mm] \begin{subfigure}{\columnwidth} \tiny \begin{Verbatim} 2003182 || ET TROJAN Prg Trojan v0.1-v0.3 Data Upload || URL 2003183 || ET TROJAN Prg Trojan Server Reply || URL 2003184 || ET TROJAN Prg Trojan v0.1 Binary In Transit || URL 2003185 || ET TROJAN Prg Trojan v0.2 Binary In Transit || URL 2003186 || ET TROJAN Prg Trojan v0.3 Binary In Transit || URL 2007688 || ET TROJAN 
Prg Trojan HTTP POST v1 || URL 2007724 || ET TROJAN Prg Trojan HTTP POST version 2 || URL URL = url,www.securescience.net/FILES/securescience/10378/pubMalwareCaseStudy.pdf \end{Verbatim} \caption{Zeus (Prg) malware (\code{http://bit.ly/2bIS3hk}).} \label{fig:zeus-malware} \end{subfigure} \caption{Connecting signatures to known malware } } \end{figure} Below we give several representative examples of signature addition, update, and removal. \begin{ex}{\bf Signature addition.} Figure~\ref{fig:sauron-malware} shows the addition of new signatures in response to observations of the ProjectSauron~\cite{project-sauron} malware in the wild. The connection between the malware and {\textsc{Emerging Threats}}\xspace signatures designed to prevent it is evident from the signature description. \end{ex} \begin{ex}{\bf Signature update.} \hide{ Figure~\ref{fig:dhl-update} shows an example of a typical mailing list exchange leading up to a signature change. \begin{figure}[tb] \tiny \begin{Verbatim} > We have 2010148 already: > content:"Content-Disposition|3A| attachment|3b|"; nocase; > content:"filename"; within:100; content:"DHL_"; nocase; > within:50; > pcre:"/filename\s*=\s*\"DHL_(Label_|document_| > package_label_|print_label_).{5,7}\.zip/mi"; > > I'll modify to fit the new style. The old is gone! > > Thanks Jason! 
> > Matt > > On Nov 3, 2010, at 10:35 AM, Weir, Jason wrote: > > > Seeing these as inbound smtp attachments > > > > DHL_label_id.Nr21964.zip > > DHL_label_id.Nr48305.zip > > DHL_label_id.Nr3139.zip > > DHL_label_id.Nr15544.zip > > DHL_label_id.Nr7085.zip > > > > How about this for current events > > > > alert tcp $EXTERNAL_NET any -> $SMTP_SERVERS 25 (msg:"ET > > CURRENT_EVENTS DHL Spam Inbound"; flow:established,to_server; > > content:"Content-Disposition|3a| attachment|3b|"; nocase; > > content:"filename=|22|DHL_label_id."; nocase; > > pcre:"/filename=\x22DHL_label_id\.Nr[0-9]{4,5}\.ZIP\x22/i"; > > classtype:trojan-activity; sid:xxxxxxx; rev:0;) \end{Verbatim} \caption{Updating a signature based on customer feedback.} \label{fig:dhl-update} \end{figure} }
In the case of the signature marked as~2011124, we see a false positive report about traffic on port~110 on 04/02/2016, which receives a response from the maintainers within two days:
\tiny
\begin{Verbatim}
we get quite a lot of false positives with this one due to the POP3
protocol on port 110, it would be great if port 110 or more generally
POP3 traffic could be excluded from this rule

-- JohnNaggets - 2016-04-02

Thanks, we'll get this out today!
-- DarienH - 2016-04-04
\end{Verbatim}
\noindent The maintainers added port~110 the same day they responded, resulting in revision~19 of the signature:
\tiny
\begin{Verbatim}
alert ftp $HOME_NET ![21,25,110,119,139,445,465,475,587,902,1433,2525]
-> any any (msg:"ET MALWARE Suspicious FTP 220 Banner on Local Port
(spaced)"; flow:from_server,established,only_stream; content:"220 ";
depth:4; content:!"SMTP"; within:20;
reference:url,doc.emergingthreats.net/2011124;
classtype:non-standard-protocol; sid:2011124; rev:19;)
\end{Verbatim}
\end{ex}
\begin{figure}[tb] \centering \ifpdf \includegraphics[width=.9\columnwidth]{figure7.jpg} \fi \caption{Number of days between CVE announcement and corresponding signature addition.} \label{fig:delay} \end{figure}
\hide{ \begin{verbatim} -> Added to emerging-exploit.rules (5): #by Rich Rumble #PWDump6 #FGDump #This should catch both FGDump and PWDump #by Chandan at Secpod.com \end{verbatim} }
\point{CVE Additions: Evaluation of the Delay}
Another question to consider is how fast signatures for known threats are added after the threats are discovered or publicly announced. To estimate that, we have correlated~\empirical{176}\xspace CVEs to~\empirical{40,884}\xspace~{\textsc{Emerging Threats}}\xspace signatures. This process usually involves analyzing the comments embedded in the signature to find CVE references (for example, \texttt{reference:cve,2003-0533}). We have plotted a histogram (Figure~\ref{fig:delay}) that shows the \emph{delay} in days between the CVE release (according to the NIST NVD database) and the signature introduction date. As we can see, many signatures are created and added the same day the CVE is disclosed. Sadly, in quite a significant percentage of cases, signatures are added two weeks or more \emph{after} the CVE release date. What is perhaps most surprising is that many signatures are created quite a bit \emph{before} the disclosure, in many cases the day before, and in some cases over a month prior to it.
This is due to other information sources that lead to signature generation.
\hide{ \subsection{Scanning costs} It is important to note that scanning costs clearly increase as we add more signatures. Appendix~\ref{sec:av-costs} describes experiments with ClamAV and Snort that show that this increase is linear. Reducing the number of scanned signatures thus directly improves scan performance. }

\section{Conclusions}
\label{sec:conc}

Securing today's complex systems does not come free of charge. The most common cost is performance. Using three examples of security mechanisms (anti-virus signatures, Snort malware signatures, and ad-blocking lists), we show that the cost of security enforcement (measured in terms of latency) often grows linearly with the number of policies that are involved. It is therefore imperative to devise ways to limit the enforcement cost. In this paper we argued for a new kind of \emph{tunable framework} on which to base security mechanisms. This new framework enables a more reactive approach to security, thus allowing us to optimize the deployment of security mechanisms based on the current state of attacks. Based on actual evidence of exploitation collected from the field, our framework can choose which mechanisms to enable or disable, so that we can minimize the overall costs and false positive rates, while maintaining a satisfactory level of security in the system. Our responsive strategy is both computationally affordable and results in significant \emph{reductions} in false positives, at the cost of introducing a moderate number of false negatives. Through measurements performed in the context of large-scale simulations, we find that the time to find the optimal sampling strategy is mere seconds for the non-overlap case, and under~\empirical{2.5}\xspace minutes in~\empirical{98\%}\xspace of overlap cases.
The reduction in the number of false positives is significant: about~\empirical{9.2 million} false positives removed from traces that are about~\empirical{9}\xspace years long, i.e.,~\empirical{20.13\%}\xspace and~\empirical{19.23\%} with and without overlap, respectively.

\section{Experimental Evaluation}
\label{sec:expt}

In this section we first describe our simulation design and then discuss both how much our approach helps with achieving optimization objectives such as reducing false positives, and how long it takes to solve the optimization problems on a daily or weekly basis.

\subsection{Simulation Design}
\label{sec:simDesign}

To evaluate the benefits of our approach, we performed several simulations mimicking real-world anti-virus activity. For the purposes of our simulation, we collected detailed information from Snort signature activity summaries from~12/30/2007 until~9/6/2016, entailing signature additions, updates, and removals, as shown in Figure~\ref{fig:snort-add-remove}. In total, we have collected information regarding~\empirical{40,884}\xspace signatures.
\hide{, each of which we use as a classifier in our simulation. In practice, we use~\empirical{3,029} of these signatures in our simulation (see appendix~\ref{sec:appendix} for details).}
\point{Generating simulated malware traffic}
We generate malware traffic traces (observed true positive and false positive samples) based on the collected Snort signature information. We base these traces on the following assumptions:
\begin{enumerate}[i]\itemsep=-1pt
\item Each signature was introduced to counteract some specific malware.
\item Nonetheless, some signatures might unintentionally flag malware other than the one they were intended for (resulting in classifier overlap).
\item The main purpose of signature updates is to address some false negatives, resulting in increased true positive and false positive observations.
\end{enumerate}
The following design decisions are also incorporated into the trace generation:
\begin{enumerate}[i]\itemsep=-1pt
\item The decline in the true positive observation count for a specific signature is modeled as a power law decay curve.
\item The false positive observation count is modeled as a percentage of the true positive count (denoted as a simulation parameter $\theta$).
\item The amount of true positive traffic does not affect false positive observations.
\item Observations may be captured by more than one signature (referred to as ``classifier overlap'').
\end{enumerate}
For more details, please see Section~\ref{sec:design-decisions}.
\point{Simulation scenario}
Our simulation aims to mimic real-world usage. The scenario we simulate is one where, \emph{once every~3 days}, our tool is applied to the latest observations and updates the sampling rates for all active signatures. New signatures might still be introduced between sampling rate updates and are set to full sampling until the next update. We believe this to be a reasonable setting that is quite likely to be implemented in practice. Additionally, under some conditions, Infer.NET's inference algorithm might fail. Such conditions are very rare (inference for only \empirical{1.5\%}\xspace of days either failed or timed out). However, if they do occur, we allow the simulation to keep using the sampling rates computed on the last update, which we believe to be a solution applicable in practice. We defined our optimization goal using a budget-aware objective. Assuming a known estimated cost ratio $\beta$ between false negatives and false positives, we phrase the objective as $FP+\beta\cdot FN$. We leave $\beta$ as a parameter for the simulation. Essentially, we set our goal to minimize the overall cost incurred by scanning. We note that, as Section~\ref{formalizing} mentions, there are many possible goals.
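To make the budget-aware objective concrete, the following is a minimal pure-Python sketch (not our actual PuLP/Infer.NET implementation; the signature IDs and daily counts are made up for illustration). Assuming a signature sampled at rate $p$ keeps a fraction $p$ of both its true and false positives, the non-overlap objective decouples per signature and is linear in $p$, so the optimum lies at an endpoint:

```python
# Sketch of the budget-aware objective FP + beta * FN, assuming a
# signature sampled at rate p keeps a fraction p of both its true and
# false positives.  Without overlap the objective decouples:
#   cost_i(p) = p * FP_i + beta * (1 - p) * TP_i,
# which is linear in p, so the optimum is p_i = 1 iff FP_i < beta * TP_i.
# Signature IDs and counts below are hypothetical.

def optimal_rates(counts, beta):
    """counts: {signature_id: (tp_per_day, fp_per_day)} -> {id: rate}."""
    return {sid: 1.0 if fp < beta * tp else 0.0
            for sid, (tp, fp) in counts.items()}

def expected_cost(counts, rates, beta):
    """Expected daily cost FP + beta * FN under the given sampling rates."""
    return sum(rates[s] * fp + beta * (1 - rates[s]) * tp
               for s, (tp, fp) in counts.items())

counts = {"2007705": (120, 40), "2011124": (5, 80), "2023020": (300, 2)}
rates = optimal_rates(counts, beta=0.5)
# "2011124" produces mostly false positives, so it is effectively disabled.
```

With classifier overlap, this per-signature decoupling no longer holds, which is why the full problem is handed to an LP/inference solver.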
Based on discussions with commercial vendors, we believe the overall incurred cost is a measure likely to be used in practice. Another reasonable goal is the total cost of scanning. In this setting, each signature would have to be associated with a scanning cost, and the system would look for a solution that either minimizes it or keeps it below some given threshold.

\subsubsection{Design Decisions}
\label{sec:design-decisions}

The following design decisions are incorporated into the trace generation:
\begin{figure*}[tb] \centering \ifpdf \includegraphics[width=0.8\textwidth]{figure8.jpg} \fi \caption{Number of active signatures in the Snort archive (12/30/2007--9/6/2016). Figure~\ref{fig:snort-add-remove} shows some of the update dynamics.} \label{fig:simAttacks} \end{figure*}
\point{Simulating the decline in true positives}
We use a power law decay curve to simulate the decline of a specific type of malware over time. Prior research on large-scale exploitation (75~million recorded attacks detected by Symantec sensors) in the wild suggests the power law as an accurate way to model diminishing exploitation rates~\cite{Allodi2015}. We initialize the curve for an initial true positive count of approximately~\empirical{500} observations per day and calibrate it to fit the lifespan of the signature intended for that malware. We filter out signatures that were added or removed outside of our sampling period, for which we cannot determine a lifespan, since we are unable to calibrate a proper decay curve for these signatures. We also eliminate temporary short-lived signatures (less than~7 days). Out of the~\empirical{40,884}\xspace collected signatures, we are left with~\empirical{3,029} signatures after filtering, which we use in the simulation. Figure~\ref{fig:simAttacks} shows the number of active signatures for each day of our simulation.
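As an illustration, a decay-calibrated pair of traces for a single signature can be generated as follows (a simplified sketch: the decay exponent and the constant false-positive model are assumptions for illustration, not fitted values):

```python
# Hypothetical trace generator for one signature: daily true positives
# follow a power-law decay calibrated to start near 500 observations/day,
# and false positives are a fixed fraction theta of the initial rate,
# held constant between signature updates (simplified).

def tp_trace(lifespan_days, initial=500.0, exponent=1.2):
    """Daily true positive counts c(t) = initial * (t + 1) ** -exponent."""
    return [initial * (t + 1) ** -exponent for t in range(lifespan_days)]

def fp_trace(tp, theta):
    """Constant false positive count, theta * (initial true positive rate)."""
    return [theta * tp[0]] * len(tp)

tps = tp_trace(30)              # signature active for 30 days
fps = fp_trace(tps, theta=0.05)
```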
We note that in many real-world cases, a signature is introduced only a few days after a piece of malware appears in the wild and is removed \emph{at least} a few days after the relevant malware has disappeared from the attack landscape. We adjust the overall signature lifespan, introduction, and removal dates accordingly to incorporate this insight into the attack model.
\point{Simulating false positives}
We model the number of false positive observations as a percentage of the true positive observation count. We denote this percentage as $\theta$, used as a parameter for the simulation. As false positive observations originate solely from legitimate traffic, which remains mostly unchanged, we determine that false positive observations should remain constant as long as the signature is not changed. We therefore update the false positive traces only when the signature is updated. The curves in Figure~\ref{fig:modeledAttack} show an example of the number of true positive and false positive observations for Snort signature~2007705 for a fixed $\theta$.
\begin{figure*} \centering \ifpdf \includegraphics[width=0.85\textwidth]{figure9.jpg} \fi \caption{Modeling the true positive count and the false positive count per day for {\textsc{Emerging Threats}}\xspace signature~2007705, assuming power law decay. The dashed lines at the ends of the figure indicate malware appearance, signature introduction, malware disappearance, and signature removal. The dashed lines in the middle of the figure indicate signature updates.} \label{fig:modeledAttack} \end{figure*}
\point{Simulating classifier overlap}
From the collected signature information, we learn that, while there exists some overlap between signatures, most signatures only flag one kind of malware. To simulate overlap, we randomly choose, for each signature and each observation, how many other signatures it overlaps with.
We draw this value from the distribution shown in Figure~\ref{fig:overlap}, calibrated to match our collected signature information.
\begin{figure} \centering \ifpdf \includegraphics[width=.8\columnwidth]{figure10.jpg} \fi \caption{Probability distribution used to decide the amount of overlap for each signature.} \label{fig:overlap} \end{figure}

\subsection{Experimental Setup}

To compare different simulation conditions, we ran several simulations, each with a different combination of values for $\theta$ and $\beta$, both with classifier overlap and without. The simulations were executed on a Linux machine with~64 AMD Opteron(TM)~6376 processors, operating at~2.3~GHz each, and~128~GB RAM, running Ubuntu~14.04. Each simulation was assigned the exclusive use of a single core.

\subsection{Precision and Recall Results}

By applying the sampling rates computed by our system, one can eliminate part of the false positives previously observed, at the expense of losing part of the true positive observations. In Figure~\ref{fig:roc}, we show the percentage of true positives remaining compared to the percentage of false positives remaining. The dashed line across each of the figures symbolizes an equal loss of both false positives and true positives. The area above the dashed line matches settings in which fewer true positives are lost compared to false positives. This is the area we should strive to be in, since it represents a sampling that is relatively cost-effective. One can clearly see from the figures that, regardless of classifier overlap, all of our simulations reside above the dashed line.
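The percentages of remaining true and false positives can be computed from per-signature counts and sampling rates with a simple weighted average; a small sketch with made-up counts and rates:

```python
# Sketch of how the remaining-percentage curves are obtained: given
# per-signature observation counts and sampling rates, the fraction of
# observations that survives sampling is a rate-weighted average.
# The counts and rates below are made up.

def remaining_pct(counts, rates):
    """Percentage of observations kept after per-signature sampling."""
    kept = sum(rates[s] * c for s, c in counts.items())
    return 100.0 * kept / sum(counts.values())

tp = {"a": 900, "b": 100}      # true positives per signature
fp = {"a": 10, "b": 190}       # false positives per signature
rates = {"a": 1.0, "b": 0.25}  # signature "b" is heavily down-sampled

tp_left = remaining_pct(tp, rates)  # most true positives survive
fp_left = remaining_pct(fp, rates)  # most false positives are gone
# tp_left > fp_left, i.e., this point lies above the dashed diagonal.
```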
\begin{figure*}[tb] \centering
\begin{subfigure}{.25\textwidth} \centering \ifpdf \includegraphics[width=\columnwidth]{figure11.jpg} \fi \caption{precision without overlap} \label{fig:precision_01} \end{subfigure}%
\begin{subfigure}{.25\textwidth} \centering \ifpdf \includegraphics[width=\columnwidth]{figure12.jpg} \fi \caption{precision with overlap} \label{fig:precision_sampling} \end{subfigure}%
\begin{subfigure}{.25\textwidth} \centering \ifpdf \includegraphics[width=\columnwidth]{figure13.jpg} \fi \caption{recall without overlap} \label{fig:recall_01} \end{subfigure}%
\begin{subfigure}{.25\textwidth} \centering \ifpdf \includegraphics[width=\columnwidth]{figure14.jpg} \fi \caption{recall with overlap} \label{fig:recall_sampling} \end{subfigure}%
\caption{Classification precision and recall as a function of $\theta$ for different values of $\beta$, with and without classifier overlap.} \label{fig:precision_and_recall} \end{figure*}
\begin{figure}[tb] \centering \ifpdf \includegraphics[width=.9\columnwidth]{figure15.jpg} \fi \caption{Percentage of true positives remaining compared to percentage of false positives eliminated, with and without classifier overlap. Different settings for the FP-threshold and cost ratio correspond to different points on the curves.} \label{fig:roc} \end{figure}
Figure~\ref{fig:precision_and_recall} shows the classification precision and recall as a function of $\theta$ for different values of $\beta$, both with and without classifier overlap. The figures show that, regardless of overlap, both precision and recall drop when $\beta$ and $\theta$ rise. A rise of $\theta$ means there are more false positive observations, which reduces the portion of observed true positives, thus affecting the overall precision and recall. Similarly, a rise of $\beta$ means that the relative cost of a false negative is higher than that of a false positive.
Therefore, based on the optimization objective we set in Section~\ref{sec:simDesign}, it is only logical that the system will choose to allow for more false positives, rather than risking a false negative, thus again affecting both precision and recall. \point{Adapting to the situation} From the aforementioned figures, we learn that the effectiveness of applying sampling rates depends greatly on the operating scenario. In some cases, where, for example, false negatives are extremely expensive (as might be the case for corporate datacenters), the sampling rates remain rather high, and thus the overall true positive and false positive counts remain mostly unaltered. On the other hand, when false negatives are relatively cheap (as is often the case for private, user-owned desktops), we can expect our system to determine sampling rates that are relatively low. We believe our system is especially well suited for large-scale services such as web-mail providers. In this scenario, each user can set its own willingness to accept risk, i.e., the subjective cost of a false negative. Doing so will allow the service's servers to greatly reduce their scanning workload, which will have a tremendous effect on the overall server performance, as the amount of data scanned by these servers is huge. Such services would likely obtain the greatest benefit from our approach. \subsection{Solution Times} We recognize that for a system such as the one proposed in this paper to be applicable to real-world scenarios, it is required that solving and computing new sampling rates be very fast and cheap. Long solving times mean the system would not be able to quickly adapt to changing landscapes and respond by setting new sampling rates in a timely fashion. Figure~\ref{fig:times-histogram} shows a cumulative distribution of the total solving times (both PuLP and Infer.NET) measured during our simulations for each day.
The figure shows that over~\empirical{80\%} of simulated days were solved in under~\empirical{20} seconds. The day that took our system the longest to solve took less than~\empirical{5} minutes~(\empirical{285} seconds to be exact). This tells us that using this kind of system in a responsive manner is indeed feasible. \begin{figure}[tb] \centering \ifpdf \includegraphics[width=.9\columnwidth]{figure16.jpg} \fi \caption{Cumulative distribution of overall solution times (PuLP + Infer.NET).} \label{fig:times-histogram} \end{figure} \begin{figure}[tb] \centering \ifpdf \includegraphics[width=.9\columnwidth]{figure17.jpg} \fi \caption{Average daily solving times. The blue line (at the bottom) represents PuLP solving time (\empirical{$\le 1$} second). The black line represents Infer.NET solving time. The grey line in the background shows the number of active signatures per day. } \label{fig:solution-times} \end{figure} Figure~\ref{fig:solution-times} shows the average solving time needed for each day of our simulation. We first note that the solving times for PuLP (represented by the blue line) are extremely low, consistently below 1 second. When there is no classifier overlap, the solution provided by PuLP is sufficient as the sampling rates for the classifiers. This means that when there is no overlap, solving is extremely fast. Also, there is a clear correlation between Infer.NET solving time and the number of active signatures, both following the same trends. This correlation is interesting, as it indicates that (1)~solving time can be anticipated in advance, and (2)~we can accelerate the solving of days with many active signatures using a ``divide-and-conquer'' approach, meaning we can split them into smaller batches and solve each one separately.
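The ``divide-and-conquer'' idea can be sketched in a few lines of Python. Here the batch size and the \texttt{solve\_batch} stub are hypothetical placeholders (the real per-batch step would invoke Infer.NET and PuLP), and merging per-batch results is only sound when signatures in different batches do not overlap:

```python
from itertools import islice

def batches(items, size):
    """Yield successive batches of at most `size` signatures."""
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk

def solve_batch(batch):
    """Placeholder for the per-batch step (hypothetical); a real system
    would run Infer.NET inference and the PuLP LP on this batch alone."""
    return {sig: 1.0 for sig in batch}  # dummy sampling rates

def solve_day(active_signatures, batch_size=500):
    """Solve each batch separately and merge the resulting sampling rates.
    Correct only if no classifier overlap crosses batch boundaries."""
    rates = {}
    for batch in batches(active_signatures, batch_size):
        rates.update(solve_batch(batch))
    return rates

rates = solve_day([f"sig_{i}" for i in range(1200)])
print(len(rates))  # one rate per active signature
```

Because the batches are independent, they could also be solved in parallel, which would further reduce the per-day solving time.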
\subsection{Experimental Summary} \point{Reduction in false positives} Regardless of classifier overlap, when comparing the reduction in the number of false positives to that of true positives, we find that our responsive optimization technique removes \emph{more} false positives than true positives, both in terms of percentages (\empirical{19.23\%} compared to~\empirical{12.39\%} without overlap;~\empirical{20.13\%}\xspace compared to~\empirical{11.96\%}\xspace with overlap) and in terms of absolute values (\empirical{9,286,530} compared to~\empirical{8,002,871.5} without overlap;~\empirical{9,225,422.6} compared to~\empirical{8,065,888.6} with overlap). The reduction in absolute values is surprisingly significant, considering that the highest value of $\theta$ in our simulations was set to~\empirical{25\%} of the true positive rate. In settings with classifier overlap, applying sampling rates is \emph{more beneficial} than in settings without classifier overlap. This can be observed from the respective reduction rates:~\empirical{20.13\%}\xspace compared to~\empirical{19.23\%} of false positives, and~\empirical{11.96\%}\xspace compared to~\empirical{12.39\%} of true positives. This means that, on average, we can eliminate more false positives at the expense of fewer true positives. While in some settings the benefits of our approach are not as drastic as in others, our simulations clearly demonstrate that this approach is advantageous regardless of the operating scenario. \point{Solver running time} In settings \emph{without} classifier overlap, the sampling rates for all simulated days were computed in mere seconds. In settings with overlap, timing measurements indicate that our approach for setting sampling rates is computationally feasible and applicable to real-world settings.
The sampling rates for over~\empirical{98\%}\xspace of simulated days were computed in under~\empirical{2.5}\xspace minutes per day, with an average and median of~\empirical{15} seconds. \section{Introduction} \label{sec:intro} Much of the focus in the security community in the last several decades has been on discovering, preventing, and patching vulnerabilities. While both new vulnerability classes and new vulnerabilities are discovered seemingly every day, the exploitation landscape often remains murky. For example, despite buffer overruns, cross-site scripting~(XSS), and SQL injection attacks~(SQLIA) being heralded as the vulnerabilities of the decade~\cite{veracode-popular}, there is precious little published evidence of how commonly XSS or SQLIA might be \emph{exploited} in practice; of course, there are a number of studies~\cite{sqlia-prevalent} on how \emph{vulnerability trends} change over time. One of the studies we present in this paper suggests that, for example, XSS \emph{exploitation} is not nearly as common as would be suggested by the daily stream of discovered \emph{vulnerabilities} (found at \url{www.openbugbounty.org/} or \url{xssed.org}). \point{Changing exploitation landscape} The security industry produces regular reports that generally present a growing number of vulnerabilities, some in widely-deployed software. It is exceedingly tempting to misinterpret this as a growth trend in the number of actual exploits. However, the evidence for the latter is scant at best. Due to a number of defense-in-depth style measures over the last decade, including stack canaries, ASLR, XSRF tokens, automatic data sanitization against XSS, and a number of others, practical exploitation on a mass scale now requires an increasingly sophisticated attacker. We see this in the consolidation trends of recent years. For example, individual drive-by attacks have largely been replaced by exploit kits~\cite{kotov,pexy,kizzle16}.
In practice, mass-scale attacks generally appear to be driven by a combination of two factors:~1) ease of exploitation, and~2)~whether attacks are consistently monetizable. This is clearly different from targeted attacks and APTs, where the upside of a single successful exploitation attempt may be quite significant to the attacker. Given the growing scarcity of \emph{exploitable} vulnerabilities, there is some recent evidence that attackers attempt to take advantage of publicly disclosed attacks right after their announcement, while the window of vulnerability is still open and many end-users are still unpatched; Bilge~{\em et al.}\xspace~\cite{Bilge:2012:BWK:2382196.2382284} report an increase of~\emph{5 orders of magnitude} in attack volume following public disclosures of zero-day attacks. The situation described above leads to a long tail of attacks~---~a period of time when attacks are still possible but are increasingly rare. It is tempting to keep the detection mechanism on during the long tail. However, it is debatable whether that is a good strategy, given the downsides. We argue that the usual human behavior in light of the rapidly-changing landscape is inherently \emph{reactive}; however, it is often not reactive enough. \subsection{Mounting Costs of Security Mechanisms} One of the challenges of security mechanisms is that their various costs can easily mount if unchecked over time. \setlength{\leftmargini}{1em} \begin{itemize} \itemsep=-1pt \item {\bf False positives.} FPs have nontrivial security implications~\cite{av-fp-sink,sukwong2011commercial}.
According to a recent Ponemon Institute report~\cite{Ponemon:2015}, ``The average cost of time wasted responding to inaccurate and erroneous intelligence can average \$1.27 million annually.'' Furthermore, ``Of all alerts, 19\% are considered reliable \emph{but} only 4\% are investigated.'' \item {\bf Performance overhead.} Our studies of scanning and filtering costs in Section~\ref{sec:av-costs} show that while the IO overhead from opening files, etc. can be high, the costs of scanning and filtering increase significantly as more signatures are added to the signature set. When many solutions are applied simultaneously within a piece of software, the overhead of even relatively affordable mechanisms, such as stack canaries~\cite{canaries-cost} and ASLR~\cite{payer2012too}, can be \emph{additive}. There is a growing body of evidence that security mechanisms that incur an overhead of~10\% or more do not tend to get widely deployed~\cite{war-in-memory}. However, several low-overhead solutions layered one on top of another can easily exceed the~10\% mark. \hide{ \item {\bf Infrastructure.} The cost of backend maintenance \emph{and} sending security updates is difficult to estimate. These may include analyzing malware, testing signatures, running honeypots, etc.~\cite{provos2009cybercrime,Provos:2008:YIP:1496711.1496712}. Furthermore, reducing the signature set means less data that needs to be transmitted to the end-user, thus reducing network costs and disk space requirements. \item {\bf Technical debt.} There is a problem of technical debt associated with \emph{maintaining} existing security mechanisms as attack volume diminishes. This is considered to be a growing problem in the machine learning community~\cite{Sculley:SE4ML:2014}, but is not often cited as an issue in security and privacy.
For example, Kerschbaumer~\cite{firefox-csp} reports that only \emph{modernizing} Firefox's CSP (content security policy) implementation took over~125,000 lines of code over a period of~20 months by a dedicated developer. Clearly, not every project has these kinds of resources. When considered over a long period of time, the technical debt accumulates to a point where the software maker can no longer deal with the maintenance issues, or can only do so at the expense of introducing new defense strategies to deal with today's issues. } \end{itemize} The performance and false positive costs are in addition to maintenance and update costs, which are also difficult to predict (e.g., analyzing malware, testing signatures, and running honeypots~\cite{provos2009cybercrime,Provos:2008:YIP:1496711.1496712}). We argue that as a result of the factors above, over a long period of time, a situation in which we only \emph{add} security mechanisms is unsustainable due to the mounting performance costs and accumulating false positives. This is akin to performing a DOS attack against oneself; in the limit, the end-user would not be able to do much useful work because of the overhead and false positive burden of existing software. \point{Reluctance to disable} At the same time, actively \emph{removing} a security mechanism is tricky, if only from a PR standpoint. In fact, we are not aware of a recent major security mechanism that has been officially disabled. One of the obvious downsides of tuning down any security mechanism is that the recall decreases as well. However, it is important to realize that when tuning down a specific mechanism is driven by representative measurements of its effectiveness in the wild, this is a good strategy for dealing with mass attacks.
Targeted attacks, on the other hand, are likely to be able to overcome most existing defenses, as we have seen from the Pwn2Own competition, XSS filter bypasses~\cite{waf-bypass-bh-2016,protocol-level-evasion-2012,Bates:WWW:2010}, etc. We hypothesize that sophisticated targeted attacks are not particularly affected by existing defenses. However, reducing the level of defense may invite new waves of mass attacks, which can be mitigated by upping the level of enforcement once again. We therefore consider entirely taking out a security mechanism, rather than simply disabling or tuning it down, to be ill-advised. \subsubsection{Scanning and Filtering Costs} \label{sec:av-costs} \begin{figure*}[h!] \centering \begin{subfigure}[tb]{0.3\textwidth} \centering \ifpdf \includegraphics[width=1\textwidth]{figure1.jpg} \fi \caption{Using ClamAV~(0.99.2) to scan a single~248~MB file.} \label{fig:clamav} \end{subfigure} \begin{subfigure}[tb]{0.3\textwidth} \centering \ifpdf \includegraphics[width=1\textwidth]{figure2.jpg} \fi \caption{Using Snort~(2.9.9.0) to scan a~4.1~GB pcap file.} \label{fig:snort} \end{subfigure} \begin{subfigure}[tb]{0.3\textwidth} \centering \ifpdf \includegraphics[width=1\textwidth]{figure3.jpg} \fi \caption{Using Brave's adblock engine~(4.1.1) to scan 15.5k URLs.} \label{fig:adblock} \end{subfigure} \caption{Average scanning time as a function of the rule/signature set size. The dashed lines represent linear fits to the measurements.} \end{figure*} To demonstrate our claims of the inefficiency and mounting costs of maintaining a large number of outdated security mechanisms, we turn to ClamAV~\cite{clamav}, Snort~\cite{snort}, and Brave's ad blocking engine~\cite{adblock}. \begin{itemize} \item We installed the latest version of the ClamAV engine~(0.99.2) and used it to scan a single file of~\empirical{248}~MB.
To understand how the running time is affected by the size of the signature set, we ran the scan several times using different-sized subsets of the ClamAV signature dataset. Figure~\ref{fig:clamav} shows the obtained scan times, in seconds. \item We performed similar measurements using the Snort NIDS engine. Using the latest version of Snort~(2.9.9.0), we scanned a collection of pcap files, which we obtained from~\cite{pcap}, with signature sets of different sizes. Figure~\ref{fig:snort} shows the results of these measurements. \item Lastly, we used the ad-block engine of the Brave browser~\cite{brave, adblock} to measure the effect the ad-block rule set size has on the average scan time per URL. We used rules from the EasyList and EasyPrivacy rule sets~\cite{easylist}, totaling 86,665 rules. Utilizing the Brave ad-block engine~(4.1.1), we scanned a set of URLs containing~15,507 elements. We ran the experiment using different-sized subsets of the rule set. Figure~\ref{fig:adblock} shows the obtained average scan times per URL, in milliseconds. \end{itemize} All three of the above figures clearly exhibit an approximately \emph{linear} correlation between the rule/signature set size and the average scan time. These charts demonstrate the hidden cost over time of adding signatures and rarely removing them. This factor, together with false positives, argues for removing signatures more aggressively. \subsection{Toward Tunable Security} In recent years we have seen growing evidence that vulnerability statistics and exploit statistics are at odds. Nayak~{\em et al.}\xspace~\cite{exploited-in-wild-raid} report that, despite an increase in reported vulnerabilities over the last several years, the amount of exploitation has gone down. Furthermore, only a portion of vulnerabilities (about~35\%) actually gets exploited in a practical sense.
Furthermore, vulnerability severity rankings are often misleading, as they do not necessarily correlate with the attractiveness or practicality of exploitation~\cite{misleading-severity}. Barth~{\em et al.}\xspace~\cite{Barth:FC:2010} advocate a reactive approach to security where previous attack patterns are used to decide how to spend the defender's finite budget. \point{Methodology} In this paper, we experimentally demonstrate the advantages of reactive security. We look at widely deployed security mechanisms, where the potential impact of, for example, a single false positive is amplified by the potentially vast installation base. A good example of such a system is an anti-virus (AV) engine or an IDS. The reactive approach to security is also supported by the number of zero-days that are observed in the wild and reported by Bilge~{\em et al.}\xspace~\cite{Bilge:2012:BWK:2382196.2382284}. Our experimental evaluation in Section~\ref{sec:expt} is based on~\empirical{9}\xspace years of Snort signatures. We argue that a well-calibrated simulation is the best way we have to assess the reactive security mechanism proposed in this paper. We use an extremely valuable study of over~75 million real-world attacks collected by Symantec~\cite{Allodi2015} from~2015 to calibrate our simulation. \point{Tunable security mechanism} We propose an alternative: a tunable mechanism, guided by a strategy that allows the defender to selectively apply the mechanism based on internal and/or external sets of conditions. For example, a centralized server can notify the client that certain categories of attacks are no longer in the wild, causing the client to reduce its sampling rates when it comes to suspicious traffic. Signatures are thus retired when the threats they are designed to look for become less prevalent.
For this to work, there needs to be a greater level of independence between the security mechanism and the policy, the kind of separation that is already considered a major design principle~\cite{mechanism-policy-separation,policy-hydra}. \point{Beyond signatures} It is important to emphasize that while we primarily experiment with signatures in this paper, the ideas apply well beyond signatures. Specifically, most runtime security enforcement mechanisms, albeit not all, can be turned off, either partially or entirely. Mechanisms that rely on matching lists for their enforcement include XSS filters~\cite{Bates:WWW:2010} and ad blockers~\cite{Berlin2015,Merzdovnik,Mughees}, to name just a few. Similarly, one can apply XFI or CFI to some DLLs, but not to others that have not been implicated in recent attacks. Of course, the ability to adjust which DLLs are CFI-protected depends greatly on having a dynamic software deployment infrastructure, which we claim to be a desirable goal. \point{Matching today's reality} In many ways, our proposal is aligned with practical security enforcement practices of adjusting the sensitivity levels of detectors depending on the FP-averseness of the environment in which the detector is deployed. The Insight reputation-based detector from Symantec allows the user to do just that~\cite{symantec-reputation}. Our ultimate goal here, of course, is to reduce the factors listed above, i.e., the false positive rate and the performance overhead. \subsection{Contributions} Our paper makes the following contributions: \begin{itemize}\itemsep=-1pt \item\textbf{Tunable\xspace.} We point out that today's approach to proactive security leads to inflexible and bloated solutions over time. We instead advocate a novel notion of tunable\xspace security design, which allows flexible and fine-grained policy adjustments on top of existing security enforcement mechanisms.
This way, the protection level is tuned to match the current threat landscape, rather than the worst-case scenario or what that landscape might have been in the past. \item\textbf{Sampling-based approach.} For a collection of mechanisms that can be turned on or off independently, we propose a strategy for choosing which mechanisms to enable for an optimal combination of true positives, false positives, and performance overheads. \item\textbf{Formalization.} We formalize the problem of optimal adjustment for a mechanism that includes an ensemble of classifiers, which, by adjusting the sampling rates, produces the optimal combination of true positives, false positives, and performance overheads. \item\textbf{Simulation.} Using a simulation based on a history of Snort signature updates over a period of about~\empirical{9}\xspace years, we show that we can adjust the sampling rates within a window of minutes. This means that we can rapidly react to a fast-changing exploit landscape. \end{itemize} \hide{ This is a paper about better ways to build security mechanisms and to more effectively tune security policies. We have chosen to evaluate it by running simulations on historical data. We realize that this approach is imperfect: it would be more convincing to deploy our flexible tunable technique; however, the only people who can really do so are those who deploy protection mechanisms daily, as the threat landscape changes, such as AV or malware protection companies. Given that we are not employed in such a context and the difficulty of deploying these mechanisms otherwise, we feel we are doing our best with the help of historical data to make our argument. } \hide{ \subsection{Paper Organization} The rest of the paper is organized as follows. Section~\ref{sec:background} gives an overview of the exploitation landscape.
Section~\ref{sec:sampling} defines an optimization problem that improves the true positive rates and reduces the false positive rate and enforcement costs, while favoring higher-severity warnings. Section~\ref{sec:expt} describes our experimental evaluation. Section~\ref{sec:limitations} discusses the limitations of our approach. Finally, Sections~\ref{sec:related} and~\ref{sec:conc} describe related work and conclude. } \section{Limitations} \label{sec:limitations} We acknowledge that our approach has the following limitations. \hide{ \point{Reliance on simulation} In this paper, we evaluate our approach based solely on simulation. Doing so allows us to compare the relative gains our approach provides compared to existing practices of handling outdated signatures. We did not evaluate using real-world experimental data because we could not obtain access to a large enough daily dataset of flagged samples needed to train and adapt our technique. Such a dataset is likely only available to commercial security vendors such as Symantec or McAfee, although it is far from clear that these types of vendors are likely to provide access to the data we would need. } \hide{ A simulation also has the added advantage of allowing us to verify that the benefits we predicted are actually observable by using our approach. We did not wish to endanger any users involved in a real-world experiment by leaving them exposed to attacks without any practical guarantees.} \point{Code complexity} The adaptive approach may increase development and maintenance costs for developers or security researchers, because a new mechanism for introducing the sampling rates will need to be implemented. We also note that debugging might prove more difficult since, in addition to the inputs introduced to the system, the sampling rates used would also have to be taken into account, leading to possible non-determinism.
When it comes to the issue of the long-term cost of enforcement, we remind the reader that our approach does not \emph{remove} signatures; it merely disables them. \point{Adaptive attacker countermeasures} Most security mechanisms entail new opportunities for attackers, and our approach is no exception. An attacker familiar with the workings of a system utilizing our approach could attempt to use it to her advantage. With the approach in this paper, classifiers are no longer always applied, and any attack has some probability of getting through the classifiers if it is not sampled. Using this knowledge, the attacker can attempt the same attack several times, hoping that at least one instance will be overlooked. Additionally, classifiers for uncommon attacks would be sampled at lower rates. An attacker can deliberately use \emph{outdated} vulnerabilities, which are likely sampled infrequently, to increase her chances of a successful attack. The attacker can also use a variety of attacks to force a high sampling rate on \emph{all} classifiers. This would at most cause the system to revert to the current default situation, in which all classifiers are always on. Using one or more of the approaches outlined above, the attacker can \emph{temporarily} increase her chances of success. Doing so will increase the number of observed attacks for the relevant classifiers, thus triggering an increase in the respective sampling rates. As a result, we anticipate that the window of vulnerability will be small, and that relatively few users will be affected. \section{Related Work} \label{sec:related} \point{Reactive vs. proactive security} There has been some interest in comparing the relative performance of proactive and reactive security over the years. Barth~{\em et al.}\xspace~\cite{Barth:FC:2010} make the interesting observation that proactive security is not always more cost-effective than reactive security. Just like in our paper, they support their claim using simulations.
Barreto~{\em et al.}\xspace~\cite{barreto2013controllability} formulate a detailed adversary model that considers different levels of privilege for the attacker, such as read and write access to information flows. Blum~{\em et al.}\xspace~\cite{Blum:2014} demonstrate that optimizing defences as if attackers have unlimited knowledge is close to optimal when confronted with a diligent attacker who has limited knowledge of the current defences prior to attacking. Such attackers are more realistic, thus supporting our results highlighting the benefits of tunable security. Classifiers designed to detect malicious traffic or samples are often affected by an issue known as ``concept drift''~\cite{Widmer1996}. Researchers have proposed a number of ways to address concept drift through retraining classifiers. For instance, Soska and Christin~\cite{DBLP:conf/uss/2014} present techniques that make it possible to proactively identify likely targets for attackers, as well as sites that may be hosted by malicious users. Several researchers have proposed generating new signatures automatically~\cite{newsome2005polygraph,perdisci2010behavioral,griffin2009automatic,sathyanarayan2008signature,zolkipli2010framework}. Signature \emph{addition} seems to largely remain a manual process, supplemented with testing potential AV signatures against known deployments, often within virtual machines. Automatic signature generation will likely improve security but will worsen the false positive and performance issues addressed in this paper. \point{Exploitation in the wild} Nayak~{\em et al.}\xspace~\cite{exploited-in-wild-raid} highlight the lack of a clear connection between vulnerabilities and metrics such as attack surface or the amount of exploitation that takes place. They use field data to get a more comprehensive picture of the exploitation landscape as it is changing. None of the products in their study had more than~35\% of their disclosed vulnerabilities exploited in the wild.
These findings resonate with the premise of our paper. One of the better ways to understand the exploitation landscape is by consulting intelligence reports published by large security software vendors. Of these, reports published by Microsoft~\cite{microsoft-intelligence-20} and Symantec~\cite{symantec-2016} stand out, both published on a regular basis. A recent report from Microsoft~\cite{windows-10-mitigations,rsa-microsoft-2015} highlights the importance of focusing on exploitation and not only vulnerabilities. The current approach to software security at Microsoft is driven by data. This approach involves proactive monitoring and analysis of exploits found in-the-wild to better understand the types of vulnerabilities being exploited and exploitation techniques being used. \hide{ Sabottke~{\em et al.}\xspace focus on determining which vulnerabilities are likely to be exploited after a disclosure~\cite{sabottke2015vulnerability}. They conduct a quantitative and qualitative exploration of the vulnerability-related information disseminated on Twitter. } Bilge~{\em et al.}\xspace~\cite{Bilge:2012:BWK:2382196.2382284} focus on the prevalence and exploitation of zero-days. \hide{Searching this dataset for malicious files that exploit known vulnerabilities indicates which files appeared on the Internet before the corresponding vulnerabilities were disclosed. They identify~18 vulnerabilities exploited before disclosure, of which~11 were not previously known to have been employed in zero-day attacks.} They discover that a typical zero-day attack lasts~312 days on average and that, after vulnerabilities are disclosed publicly, the volume of attacks exploiting them increases by up to~5 orders of magnitude. These findings were important in designing credible simulations in this paper. 
\point{Economics of security attacks and defenses} Herley~{\em et al.}\xspace~\cite{Herley:PNAS:2016} point out that, while it can be claimed that some security mechanism improves security, it is impossible to prove that a mechanism is necessary or sufficient for security, i.e., that there is no other way to prevent an attack or that no additional mechanism is needed. They also make a similar observation: one can never prove that a security mechanism is redundant and not needed. These observations put into words the frame of mind that resulted in the current overwhelming number of active security mechanisms. We try to address this problem using the proposed sampling rates. A report by the Ponemon Institute~\cite{Ponemon:2015} estimated the costs of false positives to industry companies. The estimation was based on a survey completed by~18,750 people in various positions. While the numbers portrayed in the report are not exact, they do paint an interesting picture. The average cost of false positives to a company was estimated at~1.27 million dollars per year. These estimations include the cost of analyzing and investigating false positive reports, as well as the cost of not responding in time to other, true positive, reports. \point{Models of malware propagation} Arbaugh~{\em et al.}\xspace~\cite{Arbaugh:IEEE:2000} introduced a vulnerability life-cycle model supported by case studies. The introduced model differs from the intuitive model one would imagine a vulnerability follows. We relied on the insights presented in their paper in designing our models for trace generation. Many works have focused on modeling malware propagation. Bose~{\em et al.}\xspace~\cite{bose2013agent} study several aspects crucial to the problem, such as user interactions, network structure, and network coordination of malware (e.g., botnets).
Gao~{\em et al.}\xspace~\cite{mobile-malware} study and model the propagation of mobile viruses through Bluetooth and SMS using a two-layer network model. Fleizach~{\em et al.}\xspace~\cite{fleizach2007can} evaluate the effects of malware self-propagating in mobile phone networks using communication services. Garetto~{\em et al.}\xspace~\cite{garetto2003modeling} present analytical techniques to better understand malware behavior. They develop a modeling methodology based on Interactive Markov Chains that captures many aspects of the problem, especially the impact of the underlying topology on the spreading characteristics of malware. Edwards~{\em et al.}\xspace~\cite{edwards2012beyond} present a simple Markov model of malware spread through large populations of websites and study the effect of two interventions that might be deployed by a search provider: blacklisting and depreferencing, a generalization of blacklisting in which a website's ranking is decreased by a fixed percentage for each time period the site remains infected. Grottke~{\em et al.}\xspace~\cite{grottke2015wap} define metrics and models for the assessment of coordinated massive malware campaigns targeting critical infrastructure sectors. Cova~{\em et al.}\xspace~\cite{Cova2010} offer the first broad analysis of the infrastructure underpinning the distribution of rogue security software by tracking~6,500 malicious domains. Hang~{\em et al.}\xspace~\cite{Hang2016InfectmenotAU} conduct an extensive study of malware distribution from both a website-centric and a user-centric point of view. Kwon~{\em et al.}\xspace~\cite{tanaka2016analysis} analyzed approximately~43,000 malware download URLs over a period of more than~1.5 years, studying the URLs' long-term behavior. \hide{ \subsection{mostly irrelevant} \point{Evolution of Exploit Kits (2015)} \cite{TrendMicro:2015} A report by Trend Micro regarding exploit kits. They briefly mention some trends in which exploits are being used in kits.
Not enough data to actually base anything of it and not very interesting/related. \point{Time-To-Compromise Model For Cyber Risk Reduction Estimation (2005)} \cite{McQueen:2006} The authors try to estimate how long it will take an attacker to find compromise a system (including finding a new vulnerability, creating an exploit, etc.). They do not have any real data to support their claims. I find this paper mostly useless and meaningless to us (here only for documentation). \subsection{Bypasses} \point{Scriptless Attacks - Stealing the Pie Without Touching the Sill (2012)} \cite{Heiderich:CCS:2012} This paper examines the attack surface remaining after deployment of xss filters and other mitigation techniques. They discuss several attacks using CSS, svg, fonts, etc. \point{Heat-seeking Honeypots: Design and Experience (2011)} \cite{John:WWW:2011} Used honeypots to attract attackers. Couldn't get access to their data. According to the published results they only found a very small amount of XSS (which I assume is most likely only testers, not real exploitation). \point{mXSS Attacks: Attacking well-secured Web-Application by using innerHTML Mutations (2013)} \cite{Heiderich:CCS:2013} Bypasses XSS filter based on differences between filtering and rendering. Uses malformed code injected code that will bypass filter, but will execute after rendering engine (which is assumed to be more permissive) fixes malformed parts. Applicable only to DOM-based XSS exploited via innerHTML and similar methods. WAF bypasses: \cite{waf-bypass-bh-2016} and \cite{protocol-level-evasion-2012} \point{25 Million Flows Later - Large-scale Detection of DOM-based XSS (2013)} \cite{Lekies:CCS:2013} Used taint analysis in the browser to detect DOM-based XSS vulnerabilities. Implemented automatic exploit generation to verify found vulnerabilities (focusing on required context escaping, doesn't try to avoid filters) Taint analysis isn't very scalable or suitable for widespread deployment. 
\point{Evaluating the customer journey of crypto-ransomware and the paradox behind it (2016)} \cite{FSecure:2016} Interesting but not really relevant to us. In the beginning they mention the decline of other attack methods to make room for new, more profitable, attacks (they don't mention XSS specifically, but we can claim that the same holds for XSS which lost prominence because of problems monetizing it). \point{A Survey on Transfer Learning (2010)} \cite{Pan:IEEE:2010} Transfer learning are methods for applying a model learned use a dataset X to some other dataset Y. They assume that the dataset characteristics (such as the probability distribution of signals) is known. Our setting is most similar to the one they refer to as transductive transfer learning, in which the both datasets use the same alphabet and set of labels, X is somewhat labeled, Y is completely unlabeled and the probability distributions of X and Y are different. This is not directly applicable because of the assumption that Y's probabilities are known but it can be used to updating some learned model for filtering when the difference in probabilities measured over distinct time periods is very small (for example, collect probabilities for each week, if the difference from last week is below some threshold update the model, otherwise retrain). \point{Formulating Cyber-Security as Convex Optimization Problems (2013)} \cite{Vamvoudakis:2013} Defines the attacker's point-of-view of cyber-security as an optimization problem where the attacker wants to optimize it's reward compared to the resources invested in the attack. Theory evaluated over logs of attacks from the 2011 iCTF competition. What we want to do is provide a similar perspective from the defender's point-of-view. \point{Multi-objective Optimizations} \cite{Caramia:2008} This paper summarizes the problem of multi-objective optimizations and the possible solution approaches to it. 
From the continuous solution techniques proposed in 2.3, I think: - 2.3.3 is the closest to our goals (so far) - 2.3.2 and 2.3.4 are other reasonable solutions - 2.3.1 is, so far, my least favorite solution \point{AVCLASS: A Tool for Massive Malware Labeling (2016)} \cite{Sebastian:RAID:2016} Presents a large dataset of malware that might be useful for our simulation/evaluation. \subsection{Specific Attack Techniques} \point{DieHard: Probabilistic Memory Safety for Unsafe Languages (2006)} \cite{Berger:PLDI:2006} Uses redundancy (and some randomization) to secure against memory-related bugs. Basic idea is that if you have several replicas of the memory/program and they disagree on some values then that indicates an attacks. \point{Regular Expressions Considered Harmful in Client-Side XSS Filters (2010)} \cite{Bates:WWW:2010} Explains XSS and the design of XSS filters. Describes problems with IE filter architecture, emphasizing the fact that the filter parses data separately from rendering engine. XSSAuditor is a result of this work. \point{The Conundrum of Declarative Security HTTP Response Headers: Lessons Learned (2010)} \cite{Sood:CollSec:2010} Describes the current state and problems of declarative security (such as CSP). Problem is that each developer/administrator needs to manually declare these directives for their site. Security is not applied by default and are therefore not widely deployed yet. \point{Back in Black: Towards Formal, Black box Analysis of Sanitizers and Filters (2016)} \cite{Argyros:IEEESEC:2016} Presents a method for automatically deducing transducers that represent simple sanitizers (such as HTML and URL encoders/decoders, etc). This is an interesting idea that might also be applicable in our setting to learn the input/output relation of the server side. I think for our need they're implementation is overly complicated and a simpler, more naïve, approach can be used. 
\point{A tale of the weaknesses of current client-side XSS filtering (2013)} \cite{Lekies:BlackHat:2014} Lists problems/bypasses of XSSAuditor. Seems most of these problems lie in areas which the XSSAuditor doesn't try to filter or deal with (such as DOM-based XSS , double injections, etc). Provides a good explanation of how the XSSAuditor filter works. } \section{Optimally Setting Sampling Rates} \label{sec:sampling} In this section we set the stage for a general approach to selecting sampling rates in response to changes in the data. We evaluate these ideas with practical multi-year traces in Section~\ref{sec:expt}. We start with a model in which we have a set of classifiers at our disposal (in the context of the Snort signatures discussed in Section~\ref{sec:background}, each signature is a classifier) and we need to assign a sampling rate to each of them, so as to match our optimization goals. These goals include higher true positive rates, lower false positive rates, and lower overheads\hide{, and a higher level of severity of the detected threats}. We formalize this as the problem of selecting a vector $\bar{\alpha}$ of per-classifier sampling rates. \subsection{Active Classifier Set} \label{formalizing} We assume that our classifiers send some portion of the samples they flag as malicious for further analysis. This matches how security products, such as those from Symantec, use user machines for in-field monitoring of emerging threats. In the context of a large-scale deployment, this results in a large, frequently updated dataset, which consists solely of true positive and false positive samples. We can use this dataset to evaluate the average true positive, false positive, true negative, and false negative rates induced by sampling rates for our classifiers.
Specifically, our aim is to choose a sampling-rate vector $\bar{\alpha}$ that will keep the true positive and true negative rates above some thresholds, while keeping the false positive rate, false negative rate, and performance costs below some maximum acceptable values. We found this formulation to be most useful in our evaluation. \point{Constraints} Formally, these goals can be specified as a set of inequalities, one for each threshold: \begin{subequations}\label{goals} \begin{equation}\label{tp_goal} TP(\bar{\alpha}) \ge X_p \end{equation} \begin{equation}\label{tn_goal} TN(\bar{\alpha}) \ge X_n \end{equation} \begin{equation}\label{fp_goal} FP(\bar{\alpha}) \le Y_p \end{equation} \begin{equation}\label{fn_goal} FN(\bar{\alpha}) \le Y_n \end{equation} \begin{equation}\label{cost_goal} Cost(\bar{\alpha}) \le Z \end{equation} \end{subequations} \point{Parametrization} Given a dataset $D$ and a set of classifiers $C$ we define the following parametrization: \begin{itemize}\itemsep=-1pt \item $D_i$ is the $i^{th}$ entry in the dataset and $C_j$ is the $j^{th}$ classifier; \item $G \in \{0,1\}^{|D|}$, such that $G_i$ is 1 iff $D_i$ is a malicious entry in the ground truth; \item $R \in \{0,1\}^{|D|\times|C|}$, such that $R_{i,j}$ is 1 if $D_i$ is classified as malicious by $C_j$ and 0 otherwise; \item $P \in \mathbb{R}^{|C|}$, such that $P_j$ is the average cost of classifying an entry from the dataset using $C_j$; \item $\bar{\alpha} \in [0,1]^{|C|}$, such that $\alpha_j$ is the sampling rate for classifier $C_j$. \end{itemize} For a given sampling-rate vector $\bar{\alpha}$ we can compute the average cost of executing the entire set of classifiers on an entry from the dataset as: \begin{equation} Cost(\bar{\alpha}) = P^T \cdot \bar{\alpha} \end{equation} \point{Optimization} To evaluate the true positive and false positive rates induced by a sampling-rate vector $\bar{\alpha}$, we first need to evaluate the probability that an entry will be classified as malicious.
Given a constant $R$, this probability can be expressed as: \begin{equation}\label{probability} Pr_i(\bar{\alpha}) = 1-\prod_{j=1}^{|C|}(1-R_{i,j}\cdot\alpha_j) \end{equation} Based on this probability, we can express the true/false positive and negative rates as: \begin{subequations}\label{noweights} \begin{equation}\label{tp} TP(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}G_i \cdot Pr_i(\bar{\alpha})}{\sum_{i=1}^{|D|}G_i} \end{equation} \begin{equation}\label{fp} FP(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}(1-G_i) \cdot Pr_i(\bar{\alpha})}{\sum_{i=1}^{|D|}(1-G_i)} \end{equation} \begin{equation}\label{tn} TN(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}(1-G_i) \cdot (1-Pr_i(\bar{\alpha}))}{\sum_{i=1}^{|D|}(1-G_i)} \end{equation} \begin{equation}\label{fn} FN(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}G_i \cdot (1-Pr_i(\bar{\alpha}))}{\sum_{i=1}^{|D|}G_i} \end{equation} \end{subequations} Note that the true positive and false negative rates are normalized by the number of malicious entries, while the false positive and true negative rates are normalized by the number of benign entries. In practice, not all of the suggested goals are necessary or meaningful in every setting; finding the optimal sampling rate usually depends on the setting for which it is needed. \hide{ We note that, because $0 \le \alpha_j \le 1$ for all components of~$0 \le j < |C|$, $TP(\bar{\alpha})$ and $FP(\bar{\alpha})$ are both monotonically increasing (and so is $Cost(\bar{\alpha})$) while $TN(\bar{\alpha})$ and $FN(\bar{\alpha})$ are monotonically decreasing. Based on this observation, each of our goals from equation~\ref{goals} determines a subspace of $[0,1]^{|C|}$. The intersection of these subspaces results in a convex subspace of valid assignments to $\bar{\alpha}$ that satisfies all goals. } Next we discuss a few hypothetical scenarios and the approaches that might best suit them. \point{Prioritized objectives} When the user can state that one objective is more important than others, a multi-leveled optimization goal can be used. In such a solution, the objective with the highest priority is optimized first.
If there is more than one possible solution, the second objective is used to choose between them, and so on. We note that in our scenario it is extremely unlikely that one can reduce the sampling rate of a classifier without affecting the true-positive and false-positive rates. As a result, using strict objectives, such as ``maximize true positives'', would result in a single solution, often enabling all classifiers completely (or disabling all of them, depending on the chosen objective). Therefore it is recommended to phrase the objectives as ``maintain X\% of true positives'', so that some flexibility remains. \point{Budget-aware objectives} Often, when assessing the effect a security mechanism has on a company's budget, a cost is assigned to each false positive and each false negative produced by the mechanism. These assessments can be used to minimize the total budgetary effect of the mechanism. Assuming $Cost_{FN}$ and $Cost_{FP}$ are the costs of false negatives and false positives respectively, we can express the expected expenses as: \begin{equation} Expenses(\bar{\alpha}) = Cost_{FN} \cdot FN(\bar{\alpha}) + Cost_{FP} \cdot FP(\bar{\alpha}) \end{equation} Using this formulation we can: \begin{itemize}\itemsep=-1pt \item Define a budget, $Expenses(\bar{\alpha})\le BUDGET$, as a strict requirement on any sampling rate. \item Define our problem as a standard optimization problem with the objective of minimizing $Expenses(\bar{\alpha})$. \end{itemize} \point{Balancing true positives and false positives} In this scenario, we cast our sampling-rate optimization problem as a standard classifier tuning problem, in which the true-positive rate plays the role of the classifier's recall and the false-positive rate plays the role of its fall-out. In such cases, a ROC curve induced by different sampling rates can be used to select the best rate.
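The quantities behind such a ROC sweep follow directly from Equations~(\ref{probability}) and~(\ref{noweights}); a minimal pure-Python sketch (toy data and names are ours; FP and TN are normalized by the negative count, TP and FN by the positive count):

```python
def pr_flagged(R_row, alpha):
    """Probability that at least one sampled classifier flags entry i,
    i.e. 1 - prod_j (1 - R_ij * alpha_j)."""
    p_none = 1.0
    for r, a in zip(R_row, alpha):
        p_none *= 1.0 - r * a
    return 1.0 - p_none

def rates(R, G, alpha):
    """Expected TP/FP/TN/FN rates induced by sampling-rate vector alpha."""
    tp = fp = tn = fn = 0.0
    pos = sum(G)            # number of malicious entries in the ground truth
    neg = len(G) - pos      # number of benign entries
    for row, g in zip(R, G):
        p = pr_flagged(row, alpha)
        if g:
            tp += p
            fn += 1.0 - p
        else:
            fp += p
            tn += 1.0 - p
    return tp / pos, fp / neg, tn / neg, fn / pos
```

Sweeping a single scalar rate applied to all classifiers over $[0,1]$ and plotting the resulting (FP, TP) pairs traces the ROC curve mentioned above.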
Taking inspiration from the well-known $F1$-score, a similar score, $F1_{sr}$, expressed in formula~\ref{f1score}, can be used to transform our problem into a single-objective optimization problem. \begin{equation}\label{f1score} F1_{sr}=2\cdot\frac{TP\cdot FP}{TP+FP} \end{equation} For efficiency, we split the process of classifier sampling-rate optimization into two steps. Real-world data often contains classifier overlap, that is, samples that are flagged by more than one classifier. We split our dataset into batches based on the classifier overlap, so that the samples in each batch are flagged by the same set of classifiers. Each batch is associated with true positive and false positive counts. The first step consists of choosing the batches that are cost-effective. \subsection{0/1 Sampling} \label{chooseSamples} Based on the desired optimization objective and the estimated cost ratio between false negatives and false positives (if applicable), we proceed to define the problem of finding the optimal subset of sample batches as a 0/1 \emph{integer linear programming} problem. At this stage, since each batch is determined by a specific classifier overlap, there is no overlap between the batches. Therefore, the computation of the true/false positives/negatives becomes a simple summation of the associated counts. For example, the total true positive count is the sum of true positives associated with batches that are enabled (meaning they should be sampled), and the total false negative count is the sum of the true positive counts of disabled batches. To encode this problem, we assign each batch $b_i$ a boolean variable $v_i$, representing whether or not the batch should be active. We then encode the optimization goal using these variables and the associated counts. We use the PuLP~\cite{pulp} linear-programming toolkit, which finds an assignment to \mbox{$\bar{v}=\{v_1,v_2,...\}$} that optimizes the objective.
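For intuition, the batch-selection step can be sketched as a brute-force search over 0/1 assignments, which is equivalent to the integer program on small instances (the batch counts, budget, and objective below are hypothetical stand-ins; in practice PuLP scales this to many batches):

```python
from itertools import product

# Each batch: (true_positive_count, false_positive_count) -- toy numbers.
BATCHES = [(90, 5), (40, 60), (10, 1), (5, 50)]

def best_subset(batches, max_fp):
    """Enumerate all 0/1 assignments v over batches and keep the one that
    maximises kept true positives subject to a false-positive budget."""
    best, best_tp = None, -1
    for v in product([0, 1], repeat=len(batches)):
        tp = sum(b[0] for b, on in zip(batches, v) if on)
        fp = sum(b[1] for b, on in zip(batches, v) if on)
        if fp <= max_fp and tp > best_tp:
            best, best_tp = v, tp
    return best, best_tp
```

Disabled batches contribute their true-positive counts to the false-negative total, exactly as in the summation described above.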
The output of this step is a division of the samples into enabled (meaning the classifiers should sample them) and disabled samples. When the dataset contains \emph{no classifier overlap}, meaning each sample is flagged by exactly one classifier, the output of the first step can be used directly as the classifier sampling rates. In this setting, the batches essentially correspond to single classifiers, and disabled batches therefore correspond to classifiers that are deemed not cost-effective according to the optimization objective. The optimal solution in this case is to fully enable all cost-effective classifiers and fully disable the rest. \subsection{Inferring Sampling Rates} In the second step of our solution, given the sets of enabled and disabled samples described in Section~\ref{chooseSamples}, we infer sampling rates for all classifiers that will induce the desired separation. The key insight we use to infer the classifier sampling rates is to express our problem in the form of \emph{factor graphs}~\cite{factor-graphs}. Factor graphs are probabilistic graphical models composed of two kinds of nodes: variables and factors. A variable can be either an evidence/observation variable, whose value is set, or a query variable, whose value needs to be inferred. A factor is a special node that defines the relationships between variables. For example, given variables $A$ and $B$, a possible factor connecting them could be $A\rightarrow{B}$.
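For intuition, the semantics of such OR-style factors can be checked by brute-force enumeration. The sketch below (ours) uses hard constraints under a uniform prior; soft probabilistic factors, as used by real inference engines, still return probabilities even when no assignment is exactly consistent. The edge set mirrors the overlap structure of Figure~\ref{factorgraph}:

```python
from itertools import product

# Classifier -> indices of the samples it flags (overlap of Figure "factorgraph").
EDGES = {0: [0, 1], 1: [1, 2, 3], 2: [3, 4]}

def marginals(edges, n_classifiers, n_samples, observed):
    """P(C_i = true) under a uniform prior, treating each S_j as the OR of
    the classifiers connected to it; None if no assignment is consistent."""
    consistent = []
    for assign in product([False, True], repeat=n_classifiers):
        implied = [any(assign[c] for c, ss in edges.items() if s in ss)
                   for s in range(n_samples)]
        if implied == observed:
            consistent.append(assign)
    if not consistent:
        return None
    return [sum(a[i] for a in consistent) / len(consistent)
            for i in range(n_classifiers)]
```

With all observations true, the only consistent assignment enables every classifier, matching the trivial solution discussed below; with $S_4$ set to false no hard assignment exists, which is precisely why soft factors are needed there.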
\begin{figure} \centering \begin{tikzpicture}[decision/.style={draw=none,fill=none}, block/.style ={rectangle, draw, text centered, rounded corners}, line/.style ={draw,-latex'}, node distance = 0.75cm, auto] \node [block] (s1) {$S_1$}; \node [block, right= of s1] (s2) {$S_2$}; \node [block, right= of s2] (s3) {$S_3$}; \node [block, right= of s3] (s4) {$S_4$}; \node [block, right= of s4] (s5) {$S_5$}; \node [block, above= of s1] (f1) {$F_1$}; \node [block, right= of f1] (f2) {$F_2$}; \node [block, right= of f2] (f3) {$F_3$}; \node [block, right= of f3] (f4) {$F_4$}; \node [block, right= of f4] (f5) {$F_5$}; \node [block, above= of f2] (c1) {$C_1$}; \node [block, right= of c1] (c2) {$C_2$}; \node [block, right= of c2] (c3) {$C_3$}; \path [line] (c1) -- (f1); \path [line] (c1) -- (f2); \path [line] (c2) -- (f2); \path [line] (c2) -- (f3); \path [line] (c2) -- (f4); \path [line] (c3) -- (f4); \path [line] (c3) -- (f5); \path [line] (s1) -- (f1); \path [line] (s2) -- (f2); \path [line] (s3) -- (f3); \path [line] (s4) -- (f4); \path [line] (s5) -- (f5); \end{tikzpicture} \caption{Example factor graph, containing~three classifiers $C_1\dots C_3$ and~five samples $S_1\dots S_5$. The structure of the factor graph determines the overlap among the classifiers.} \label{factorgraph} \end{figure} \begin{ex} Consider the example factor graph in Figure~\ref{factorgraph}. The graph consists of~8 variables:~3 named $C_1$--$C_3$ representing~3 different classifiers, and~5 named $S_1$--$S_5$ representing samples. These variables are connected using~5 factors, $F_1$--$F_5$, such that \mbox{$F_i(\bar{C},S_i) = \left(\bigvee \bar{C}\right)\rightarrow S_i$}, where $\bar{C}$ is the set of classifiers with edges entering the factor. The variables $S_i$ are treated as observations, meaning their values are set, and the variables $C_i$ are treated as query variables. The inference algorithm chooses the probability with which each $C_i$ is set to \code{true}, such that the factors satisfy the observations.
If we set all observations $S_i$ to \code{true}, inference over the factor graph returns the trivial solution of always setting all $C_i$ to \code{true} (meaning each $C_i$ is \code{true} with probability~1.0). When we set some $S_i$ to \code{false}, the inference algorithm is able to provide more elaborate answers. For example, setting $S_4$ to \code{false} results in probability~1.0 for $C_1$ and probability~0.5 for both $C_2$ and $C_3$. \end{ex} Given the sets of enabled and disabled samples from the previous step, we translate the problem to a factor graph as follows: \begin{enumerate}\itemsep=-1pt \item For each classifier $i$ we define a query variable $C_i$; \item for each sample $j$ we define an observation variable $S_j$ and a factor $F_j$; \item if sample $j$ was set to be enabled we set $S_j$ to \code{true}, otherwise to \code{false}; \item we connect each $S_j$ to its corresponding $F_j$; \item for every pair of classifier and sample $(i,j)$, if classifier $i$ flags sample $j$ we connect $C_i$ to $F_j$. \end{enumerate} Using this construction, we get a factor graph similar in structure to the one in Figure~\ref{factorgraph}. The inferred probabilities for the query variables~$C_i$ are used as the sampling rates for the corresponding classifiers. To solve the factor graph and infer the probabilities for $C_i$, we use Microsoft's Infer.NET~\cite{infernet}, a framework for running Bayesian inference in graphical models\footnote{We use \code{ExpectationPropagation} as the inference algorithm, which proved empirically fastest in our experiments.}. We evaluate the performance of the solver in Section~\ref{sec:expt}. \subsection{Discussion} \point{Maintaining the dataset} We intend to build our dataset using samples flagged as malicious by our classifiers. This kind of dataset will naturally grow over time to become very large. Two problems arise from this situation. The first problem is that after a while most of the dataset will become outdated.
While we usually would not want to completely drop old samples, since they still represent possible attacks, we would like to give precedence to newer samples over older ones (which essentially results in higher sampling rates for current attacks). To facilitate this we can assign a weight to each sample in the dataset. We represent these weights using $W\in[0,1]^{|D|}$ and rewrite the formulas from~\ref{noweights} as: \begin{subequations} \label{weights} \begin{equation}\label{tp_weighted} TP(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}W_i \cdot G_i \cdot Pr_i(\bar{\alpha})}{\sum_{i=1}^{|D|}W_i \cdot G_i} \end{equation} \begin{equation}\label{fp_weighted} FP(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}W_i \cdot (1-G_i) \cdot Pr_i(\bar{\alpha})}{\sum_{i=1}^{|D|}W_i \cdot (1-G_i)} \end{equation} \begin{equation}\label{tn_weighted} TN(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}W_i \cdot (1-G_i) \cdot (1-Pr_i(\bar{\alpha}))}{\sum_{i=1}^{|D|}W_i \cdot (1-G_i)} \end{equation} \begin{equation}\label{fn_weighted} FN(\bar{\alpha}) = \frac{\sum_{i=1}^{|D|}W_i \cdot G_i \cdot (1-Pr_i(\bar{\alpha}))}{\sum_{i=1}^{|D|}W_i \cdot G_i} \end{equation} \end{subequations} While many different weighting techniques can be used, two examples are: (1)~Assign weight~0 to all old samples, essentially dropping them from the dataset; (2)~Assign some initial weight $w_0$ to each new sample and exponentially decrease the weights of all samples after each sampling-rate selection. The second problem stems from the sampling rates themselves. Given two classifiers, $C_1$ and $C_2$, with corresponding sampling rates $\alpha_1$ and $\alpha_2$, if $\alpha_1$ is higher than $\alpha_2$ the dataset will contain more samples of attacks blocked by $C_1$ than by $C_2$. This creates a biased dataset that, in turn, will influence sampling rates selected in the future. This problem can also be addressed using the weights mechanism.
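As a concrete instance, the exponential-decay scheme~(2) above might be implemented as follows (the decay factor and interface are our choices):

```python
def age_weights(weights, decay=0.9, w0=1.0, n_new=0):
    """Exponentially age the weights of existing samples after each
    sampling-rate selection, then append n_new fresh samples at weight w0."""
    return [w * decay for w in weights] + [w0] * n_new
```

Calling this once per sampling-rate selection makes a sample's weight $w_0 \cdot decay^{age}$, so recent attacks dominate the weighted rates without old ones vanishing abruptly.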
One possible approach would be to assign weights in inverse proportion to the sampling rates (so that samples matching $C_2$ are assigned a higher weight than samples matching $C_1$). Other viable approaches exist, and the most suitable one should be chosen based on the setting in which the classifiers are used. \point{Minimum sampling rates} In Section~\ref{formalizing} we defined a sampling rate as $\alpha \in [0,1]^{|C|}$. This definition allows completely disabling a classifier by setting its sampling rate to~0. In practice, since we can never be sure that an attack has completely disappeared from the landscape, it is unlikely that we will want to completely disable a classifier. A possible way to address this is to set a minimal sampling rate for the classifier. Given that the attack for which this classifier was intended is extremely unlikely to be encountered, we do not want to apply the classifier to every sample. However, since the attack is still possible, we should statistically apply the classifier to some samples, to maintain some chance of blocking and noticing an attack if one appears. Given an inferred sampling rate $S$, the minimal sampling rate can be introduced in many forms, such as \begin{itemize}\itemsep=-1pt \item a lower bound $L$ on the sampling rate assigned to each classifier ($S \ge L$); \item a constant value $X$ added to the sampling rate ($S+X$); \item some percentage $Y$ of the non-sampled portion added to the rate ($S+(1-S)\cdot Y$). \end{itemize} We note that the minimum sampling rate for each classifier should be proportional to the severity of the attacks for which it was intended. If the impact of a successful attack is minuscule, we may set a lower minimum sampling rate, because even if we miss the attack the consequences are not severe. However, if the impact is drastic, meaning the severity of the attack is high, then we should set a higher minimum sampling rate as a precaution.
We can formalize the notion of minimal sampling rates as $MinSR\in[0,1]^{|C|}$, based on a severity mapping $S\in\mathbb{N}^{|C|}$ (such that $S_j$ is the severity of the attacks for which classifier $C_j$ was intended), and use $MinSR_j$ as either $L$, $X$, or $Y$ from the examples above.
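The three forms above, with a severity-scaled minimum, might look like the following sketch (linear scaling of the minimum with severity, and the base rate, are our assumptions):

```python
def with_min_rate(s, severity, max_severity, scheme="lower_bound", base=0.05):
    """Raise an inferred sampling rate s so the classifier is never fully
    disabled; the minimum m grows with attack severity (assumed linear)."""
    m = base * severity / max_severity
    if scheme == "lower_bound":      # S >= L
        return max(s, m)
    if scheme == "additive":         # S + X, capped at 1
        return min(1.0, s + m)
    if scheme == "residual":         # S + (1 - S) * Y
        return s + (1.0 - s) * m
    raise ValueError(scheme)
```

The residual form has the convenient property of never exceeding 1 without clamping, since it only redistributes part of the non-sampled portion.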
\section{Introduction} \label{sec:intro} The goal of DARPA's Low Resource Languages for Emergent Incidents (LORELEI) program is the rapid development of human language technologies for low-resource languages, specifically in support of situational awareness for emergent missions such as humanitarian assistance, disaster relief, or response to an infectious disease outbreak \cite{strassel2016lorelei, malandrakis2017extracting}. The situational awareness gained from speech and text documents collected ``in the wild'' is encoded in document descriptors called Situation Frames (SF). An SF consists of three elements that must be recognized, whenever present, in each speech document: \begin{itemize} \item {\bf Relevance} -- Produce a score of the document's relevance to the emergent incident, \item {\bf Situation Type} -- Produce one or more of 11 predefined topics mentioned in the document, \item {\bf Location} -- Extract any place names related to the incident mentioned in the document. \end{itemize} The 11 topics were specified by the LORELEI program. The LORELEI SF detection task is characterized by extremely limited training resources. The only available resources for each evaluation language, called an Incident Language (IL), are: \begin{enumerate} \item Monolingual text (only some of which is related to the incident) \item Untranscribed, unlabeled audio \item 10 hours of consultation with a native informant (NI) \item A small amount of IL-English parallel text \end{enumerate} The NI is a native speaker of the IL with at least intermediate proficiency in English. System developers may ask the NI to perform any annotation tasks deemed necessary to build a system for extracting SFs from speech, e.g., transcribing speech or labeling documents with situation frames. The lack of supervised training data in the IL demands the use of zero-resource techniques, of cross-lingual knowledge transfer on many different levels, or of combinations thereof.
To this end, we (i) developed an automatic speech recognition (ASR) system using {\it universal phone models}, (ii) explored {\it transfer of acoustic models} trained on closely related languages, and (iii) trained {\it language-independent classifiers} for situation types. These three approaches are the focus of this paper and are applicable to other very-low-resource settings. To obtain at least some labeled data in the IL, for adaptation of language-universal systems, we asked the NI to read some IL text, transcribe some IL speech, and provide situation type labels for some documents in the IL. To increase the NI's annotation efficiency, all NI tasks were conducted via a web browser-based user interface tailored to the specific LORELEI tasks. We were able to obtain a few minutes (15-30) of transcribed IL speech and a few hundred (150-300) SF-Type labels, which significantly improved performance. The read speech turned out to be useful for diagnostic purposes during system development, but did not impact performance. Other LORELEI project participants \cite{papadopoulos2017team} have trained acoustic models on data collected during NI sessions and used an IL-to-English machine translation system with an English-language SF-Type classifier. \cite{Littell2017} also train an English SF-Type classifier for this task, but translate the model's features to the IL, in which classification is then performed. As an alternative to training an ASR system from IL speech alone, we opted for a transfer learning paradigm and started with models trained on one or more higher-resource language(s). Other previous approaches \cite{loof2009cross,vu2011cross,mohan2012subspace,knill2014language} have explored cross-language ASR transfer assuming shared phonemic representations, generally using the \texttt{GlobalPhone} corpus \cite{schultz2002globalphone}, while \cite{ghoshal2013multilingual} examines multilingual training of deep neural networks.
Unlike these approaches, which had on the order of hours of target-language speech, we are dealing with only minutes of adaptation speech. In the remainder of the paper we describe the general system and its primary components. We describe the universal phone set ASR and the language-agnostic SF-Type classifier we developed. Finally, we show results from the evaluation and analyze the extent to which adaptation of various components (using the data elicited from the NI) improves SF-Type task performance. \section{General System} \label{sec:system} For the NIST LoReHLT 2017 evaluation the two ILs were Tigrinya (IL5) and Oromo (IL6). Both languages are spoken primarily in the Horn of Africa and are related, to varying degrees, to Amharic. For each IL two sets of audio data are provided: the development set, called \texttt{set0 Speech}, and the evaluation set, called \texttt{setE Speech}. The audio data consists of audio stories segmented into audio clips lasting no more than 2 minutes. For instance, the \texttt{set0 Speech} for IL5 consists of 83 audio stories segmented into 1323 audio clips; the \texttt{setE Speech} for IL5 consists of 116 audio stories segmented into 1095 audio clips. We refer to these audio clips as speech documents. In our approach, we first convert the speech documents into sequences of tokens. The tokens can be words in the IL, or English translations of these words produced by a cascade of IL ASR and IL-to-English machine translation (MT). They can also be phone-like units discovered via acoustic unit discovery (AUD) \cite{ondel2016variational, liu2017empirical} or word-like units discovered via unsupervised term discovery (UTD) \cite{jansen2011efficient}.
\begin{figure} \includegraphics[width=0.45\textwidth]{NI_Block.png} \caption{Using the English SF-Type classifier to obtain adaptation/training data}\label{fig:NI_pipeline} \end{figure} We then select audio documents for transcription and/or SF-Type annotation in order of their estimated informativeness. After the NI has transcribed or annotated these documents, the transcriptions are used to adapt the ASR system and the SF-Type annotations are added to the pool of training examples for the English SF-Type classifier. See Fig.~\ref{fig:NI_pipeline}. Additionally, the labeled documents can be used to train three IL-specific classifiers on the AUD, UTD, and IL word tokenizations of the labeled \texttt{set0 Speech} audio documents, respectively. In this way each tokenization scheme has a corresponding classifier capable of producing SF-Type scores for audio documents. Finally, for each of the four tokenizations of audio documents from \texttt{setE}, we use the corresponding SF-Type classifier to produce SF-Type scores. Our final SF-Type scores are obtained as a weighted linear combination of the scores from the four different SF-Type classifiers; see Fig.~\ref{fig:SF_classifier}. \subsection{Data Selection} The selection procedure described above relies heavily on English translations of the IL words. Each IL document can be classified using an English-language SF-Type classifier, trained in advance using only data from other languages. More precisely, we produce an SF-Type score for each document using the English-language SF-Type classifier. We then select the documents with the highest scores for each SF-Type to present to the NI for labeling (correcting) and/or transcription. We found that our data selection method outperforms random selection.
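The selection step just described can be sketched as picking the top-scoring documents per SF type (the data shapes and names below are our assumptions, not the paper's implementation):

```python
def select_for_annotation(scores, k):
    """scores: {doc_id: {sf_type: score}} from the English SF-Type classifier.
    Return the union of the k highest-scoring documents for each SF type."""
    chosen = set()
    sf_types = {t for per_doc in scores.values() for t in per_doc}
    for t in sorted(sf_types):
        ranked = sorted(scores, key=lambda d: scores[d].get(t, 0.0), reverse=True)
        chosen.update(ranked[:k])
    return chosen
```

Taking the union across SF types keeps the annotation budget focused on documents the classifier already believes are informative for at least one type.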
\begin{figure} \includegraphics[width=0.45\textwidth]{SF_Classifier.png} \caption{SF-Type classification process}\label{fig:SF_classifier} \end{figure} \section{Automatic Speech Recognition} \label{sec:asr} The two main obstacles to building ASR in the IL are training the acoustic models with little or no transcribed data and creating a suitable pronunciation lexicon. \vspace{-1em}\subsection{Acoustic Models} \label{sec:acoustic_models} The NI sessions are too short to collect enough data to train IL acoustic models from scratch. Hence, we depend on preexisting speech corpora to train acoustic models. All ASR systems were built using Kaldi \cite{povey2011kaldi}. We investigate acoustic model transfer from models trained on a single related language, as well as models trained on many unrelated languages. In both methods of acoustic model transfer, some ad-hoc manual work may be required to map extra phones from one language to another. It is then possible to rebuild the ASR decoding graph by providing both an IL pronunciation lexicon and an IL language model (LM). In both cases, a small amount of transcribed data can be used for subsequent acoustic model adaptation. \vspace{-1em}\subsubsection{Universal Phone Set ASR} \label{sec:universal_phoneset_asr} We refer to the transfer of acoustic models trained on many languages sharing a common phonemic representation as universal phone set ASR. Our approach is similar to \cite{knill2014language}. We use a selection of 10 BABEL languages for training, 7 of which were chosen as in \cite{knill2014language}, with 3 more chosen arbitrarily (Guarani, Mongolian, Dholuo). Diphthongs and triphthongs are split into their constituent phones to reduce the number, and enforce sharing, of phonemes. Also, as in \cite{knill2014language}, we standardize the representation of tone (tonal trajectory) across all training languages.
The final acoustic models are time-delay neural networks (TDNNs, \cite{peddinti2015time}) trained using the LF-MMI criterion (\cite{povey2016purely}). \vspace{-1em}\subsubsection{Acoustic Model Adaptation} We used a weights transfer approach for model adaptation from source to target language using transcribed data collected during the NI sessions. We used the same method that was used in \cite{manohar2017mgb3}. \subsection{Pronunciation Lexicons and Language Models} We bootstrapped the lexicon using a G2P trained on a seed lexicon derived from the provided resources. For IL5 (Tigrinya) the seed was a dictionary of words with IPA pronunciations, and for IL6 (Oromo) the seed was an approximate grapheme-to-phoneme map. The vocabulary (word list) was generated from the provided monolingual text. We (re)normalized the text according to IL-specific punctuation rules. Additional sources of words were the bilingual gazetteer, transcripts obtained during the NI sessions, and any provided dictionaries. The LM was trained on the same text. LM hyper-parameters were chosen to minimize perplexity on a held-out set (a small subset of the monolingual text not used for LM training). \section{Situation Frame Type Classifiers} \label{sec:sf classifiers} We use two different approaches for Situation Frame classification. The first, based on IL tokenizations, requires SF-Type labels obtained during the NI sessions, but no IL MT. The second is a cross-lingual approach based on English tokenizations, requiring machine translation, but no IL SF-Type labels. \subsection{IL Classifier} After we tokenize the speech (see Section~\ref{sec:system}) we represent each speech document as a bag-of-words vector of unigram or $n$-gram occurrence counts of the tokens. Each vector is then scaled by the inverse document frequency (IDF) and normalized to unit $\mathcal{L}^2$ norm. For each SF type, a single classifier is trained as in \cite{liu2017topic}.
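The featurization step above (counts scaled by IDF, then normalized to unit $\mathcal{L}^2$ length) can be sketched as follows; the token lists are illustrative stand-ins for real tokenized speech documents.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words counts scaled by inverse document frequency and
    normalized to unit L2 length, as in the IL classifier front end."""
    n = len(docs)
    # document frequency: in how many documents each token appears
    df = Counter(tok for doc in docs for tok in set(doc))
    idf = {tok: math.log(n / df[tok]) for tok in df}
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vec = {tok: c * idf[tok] for tok, c in counts.items()}
        # guard against all-zero vectors (tokens present in every document)
        norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
        vectors.append({tok: v / norm for tok, v in vec.items()})
    return vectors

docs = [["flood", "water", "help"], ["water", "shortage"], ["evacuation", "help"]]
vecs = tfidf_vectors(docs)
print(sorted(vecs[0]))  # ['flood', 'help', 'water']
```

The resulting sparse vectors would then be fed to the per-type classifiers.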
Specifically, we use a set of 11 SVMs (Support Vector Machine classifiers), one for each type, trained on the bag-of-words features. We used stochastic gradient descent (SGD) based linear SVMs with hinge loss and $\mathcal{L}^2$ norm regularization~\cite{shalev2007pegasos, scikit-learn}. The SF-Type labels used for classifier training were obtained during the NI sessions. \subsection{English Classifier} \label{sec:Eng_Classifier} If no IL SF-type labels are available we can still leverage the existing speech corpora of other development languages, which are annotated for SF-Type, in order to train a universal SF-Type classifier. For each development language\footnote{Turkish (LDC2016E109), Arabic (LDC2016E123), Spanish (LDC2016E127), and Mandarin (LDC2016E108)}, we can construct an ASR system using existing ASR training data, transcribe the documents and translate the transcripts to English. After that, a single SF-Type classifier can be trained on the combined data. In our system, we translate each word into its four most likely English translations according to the probabilistic bilingual translation table employed in the MT system that was developed for a separate LoReHLT MT evaluation. The translation table is derived from the provided parallel training data, with words aligned automatically with GIZA++~\cite{giza++} and the Berkeley aligner~\cite{berkeley_aligner}. In addition to using the training data provided for the evaluation, native informants were also consulted (independently under the MT effort) to produce hundreds of parallel sentences and word translation pairs that were used in training to increase the coverage of the MT system. We then produce bag-of-words features over English words. If and when the SF-type labels of some IL documents become available, we can simply add these into the training data. \section{Experiments} \label{sec:eval} Table \ref{table:NIResources} summarizes the resources collected during the NI sessions.
\begin{table} \small \caption{Overview of resources gathered during the NI sessions} \centering \begin{tabular}{c c c c} \hline\hline & Read & Transcribed & Labeled documents \\ [0.5ex] \hline IL5 & 20 mins & 27 mins & 159 \\ IL6 & 31 mins & 18 mins & 364 \\ \end{tabular} \label{table:NIResources} \end{table} We use this data to adapt the systems described in Sections~\ref{sec:asr} and~\ref{sec:sf classifiers}. The labeled documents were used to train the IL SF-Type classifiers on UTD, AUD, and ASR tokenizations. We performed AUD as described in \cite{ondel2016variational}, but with two major modifications. First, the HMM model was embedded in a neural network generative model, known as a Variational AutoEncoder (VAE) \cite{kingma2013auto}. Second, the model was initially trained in a supervised fashion on a subset of the BABEL Amharic training data. For both incident languages, the model (VAE-HMM) was then re-trained in an unsupervised fashion. We performed both AUD and UTD on multilingual TDNN-based bottleneck features \cite{liu2017topic} of audio segments corresponding to speech. The segments were obtained from a DNN-based speech activity detection system that segmented audio into speech and silence. We also processed only speech segments when decoding the adapted IL5 ASR, as this gave a slight improvement in performance. For both IL5 and IL6 we treated Amharic as the related language and trained a TDNN-LSTM system on the BABEL Amharic corpus. We generated triphone alignments as in Section~\ref{sec:universal_phoneset_asr}. Our final IL5 system used the Amharic ASR, though we later found that the adapted Universal model performed better. Our final IL6 system used the universal phone set ASR. Both systems were adapted using the collected transcribed speech. An adapted English SF-Type classifier for each language was trained by including all collected SF-type labels in the specific language.
We used the read speech to evaluate the quality of both adapted and unadapted ASRs in both languages, as shown in Table~\ref{table:ASR}. Systems were evaluated on the \texttt{setE} \texttt{Speech} in two layers: the \emph{Relevance} layer (to separate the documents with at least one SF from non-relevant documents with zero SFs present), and the \emph{Type} layer (to detect all present SF types), using average precision (AP, equal to the area under the precision-recall curve). More evaluation metric details can be found in \cite{malandrakis2017extracting}. \begin{table}[!htb] \footnotesize \caption{ASR Impact on SF-type Detection} \centering \begin{tabular}{c c c c} \hline\hline ASR & SF-Type & SF-Relevance & WER \\ [0.5ex] \hline IL5 Universal & 0.22 & 0.44 & 75.9 \\ IL5 Related & 0.26 & 0.46 & 68.5 \\ IL5 Adapt Related & 0.34 & 0.54 & 53.7\\ \textbf{IL5 Adapt Universal} & \textbf{0.35} & \textbf{0.54} & \textbf{51.6}\\ [0.3ex] \hline\hline IL6 Universal & 0.34 & 0.73 & 63.0 \\ IL6 Related & 0.35 & 0.74 & 47.9\\ IL6 Adapt Related & 0.37 & 0.77 & 44.4\\ \textbf{IL6 Adapt Universal} & \textbf{0.37} & \textbf{0.77} & \textbf{39.8}\\ [1ex] \end{tabular} \label{table:ASR} \end{table} Table \ref{table:ILResults} shows the performance of our final submission systems. All ASR systems are adapted, and ASR+MT refers to the system using the English SF-Type classifier described in Section~\ref{sec:Eng_Classifier}.
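Average precision, the metric used for both evaluation layers, can be computed directly from ranked document scores; the sketch below uses a standard formulation of AP (mean of precision at each relevant document), and the scores and labels are illustrative.

```python
def average_precision(scores, labels):
    """AP over a ranked list: area under the precision-recall curve.
    scores: per-document classifier scores; labels: 1 if the SF type
    (or, for the Relevance layer, any SF) is present, else 0."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    hits, ap = 0, 0.0
    for k, (_, label) in enumerate(ranked, start=1):
        if label:
            hits += 1
            ap += hits / k  # precision at each relevant document
    return ap / max(1, sum(labels))

# A relevant document ranked 1st and another ranked 3rd:
print(average_precision([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.8333...
```

A perfect ranking (all relevant documents above all non-relevant ones) yields AP = 1.0, and random scores drive AP toward the fraction of relevant documents.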
\begin{table}[!htb] \footnotesize \caption{IL5 and IL6 Final Results} \centering \begin{tabular}{c|cc|cc} \hline\hline & \multicolumn{2}{c|}{IL5} & \multicolumn{2}{c}{IL6} \\ Method & \footnotesize{SF-Type} & \footnotesize{SF-Relevance} & \footnotesize{SF-Type} & \footnotesize{SF-Relevance} \\ \hline ASR+MT & 0.34 & 0.54 & 0.37 & 0.77 \\ ASR & 0.26 & 0.56 & 0.38 & 0.76 \\ AUD & 0.11 & 0.41 & 0.34 & 0.80 \\ UTD & 0.10 & 0.44 & 0.27 & 0.76 \\ \hline \textbf{Combined} & \textbf{0.35} & \textbf{0.58} & \textbf{0.41} & \textbf{0.80} \end{tabular} \label{table:ILResults} \end{table} \subsection{ASR Adaptation} Table \ref{table:ASR} compares the performance of the related-language and the universal phone set ASR before and after adaptation. ASR adaptation on the 15--30 min of collected transcribed speech improves SF-type classification modestly. Furthermore, WER seems to track SF-type classification performance, which supports the utility of the SF-type task as an extrinsic measure of ASR performance. We also see that the universal phone set ASR has a similar WER to the adapted related-language ASR when adapted on only 15--30 min of transcribed speech. While ASR adaptation resulted in large gains in IL5 (59\% SF-Type, 23\% SF-Relevance relative improvement), it helped only marginally in IL6, despite similar WER gains in both languages. Possible explanations are the smaller amount of IL6 adaptation data collected and/or MT quality (BLEU-4 $\approx 0.16$ vs. BLEU-4 $\approx 0.09$ for IL5 and IL6, respectively). \subsection{Classifier Adaptation} The English SF-Type classifier was the best performing system (see row 1 of Table~\ref{table:ILResults}). For IL5, it was the best performing system by a wide margin, indicating that SF-Type labels derived from datasets from other languages can be extremely beneficial. We also examined how using the SF-Type labels from other languages affects performance. Table \ref{table:labels_ASR+MT} shows how including various types of labels in training impacts performance.
\setlength{\tabcolsep}{0.10cm} \begin{table}[!htb] \small \caption{IL SF-Type labels impact on SF-Type Classifiers. Adapted ASR is the ASR used in the evaluation. MT is the IL-to-English MT described in Section~\ref{sec:Eng_Classifier} using SF-Type labels ($\sim$ 3000) from other languages. Labels refers to the IL-specific labels collected from the NI. } \vspace{-0.1cm} \centering \begin{tabular}{c|cc|cc} \hline\hline & \multicolumn{2}{c|}{IL5} & \multicolumn{2}{c}{IL6} \\ System & \footnotesize{Type} & \footnotesize{Rel} & \footnotesize{Type} & \footnotesize{Rel} \\ \hline Adapted ASR + MT + Labels & 0.35 & 0.54 & 0.37 & 0.77 \\ Adapted ASR + MT + No Labels & 0.26 & 0.46 & 0.19 & 0.73 \\ Adapted ASR + Labels & 0.26 & 0.56 & 0.38 & 0.77 \end{tabular} \label{table:labels_ASR+MT} \end{table} \begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{Tigrinya.png} \includegraphics[width=0.4\textwidth]{Oromo.png} \caption{IL5,6 SF-Type Classifier performance as a function of the number of SF-Type labels in training. The vertical dotted line shows the number of SF-Type labels collected from the NI in Tigrinya (IL5). The vertical dashed line shows the number of SF-Type labels collected in Oromo (IL6). Since the SF-Type labels used are from \texttt{setE} \texttt{Speech}, there is a small discrepancy in type and relevance scores compared to the evaluation results.} \label{fig:Classifier_adaptation} \end{figure} \vspace{-0.1cm} We note that using the English SF-Type classifier trained only on the combined set of 3000 SF-Type labels from the development languages (row 2 of Table \ref{table:labels_ASR+MT}) yields similar performance in IL5 to training an IL SF-Type classifier (row 3 of Table \ref{table:labels_ASR+MT}) on only 159 IL-specific SF-Type labels.
While the English SF-Type classifier performed significantly worse on IL6 (row 2 of Table \ref{table:labels_ASR+MT}), we believe that the English SF-Type classifier trained on labels from other languages can match the performance of an IL-specific SF-Type classifier. However, adding the IL-specific SF-Type labels to the English SF-Type classifier training data always improves performance (rows 1 and 3 of Table \ref{table:labels_ASR+MT}). To demonstrate the value of IL-specific SF-Type labels we performed the following experiment on the \texttt{setE} \texttt{Speech} ground truth SF-Type labels of both IL5 and IL6. For each language, and each of 6 tokenizations (see Fig. \ref{fig:Classifier_adaptation}), we trained IL-specific SF-Type classifiers, varying the number of SF-Type labels used in training. We split the \texttt{setE} \texttt{Speech} of each language into 10 folds and measured the performance, by 10-fold cross-validation, of each SF-Type classifier trained on between 1 and 9 folds' worth of labels. Figure \ref{fig:Classifier_adaptation} shows the results of this experiment. We see from Figure \ref{fig:Classifier_adaptation} that IL5 and IL6 SF-Type classifiers trained on the same number of IL SF-Type labels perform similarly for AUD, UTD and unadapted ASR tokenizations; the IL6 AUD and UTD systems likely outperformed the corresponding IL5 systems because we collected more IL6-specific SF-Type labels. Collecting more IL-specific SF-Type labels always helps performance. We also see in IL5 that adding 159 SF-Type labels to training ($\sim$ 2h NI time) is comparable to ASR adaptation on 27 min of transcribed speech ($\sim$ 6h NI time). \section{Conclusions} This paper presents an SF-Type classification system for speech documents used in the LoReHLT 2017 evaluation. The system combines universal acoustic modeling, IL-to-English machine translation (MT) and an English-language topic classifier.
This combination requires no transcribed speech in the evaluation language, leading to near language-agnostic operation. We demonstrated that adaptation on a small amount of transcribed speech yields a modest improvement in SF-type classification. However, with enough IL-specific SF-Type labels, an MT-free system can achieve the same performance. Finally, we note that an intrinsic value of ASR-based systems lies in the semantically meaningful tokenization they produce. Using ASR-based systems opens up a promising avenue of research directed towards detecting names of people and places in speech. This can be formulated as a keyword search task using word-based search \cite{trmal2014keyword, trmal2017kaldi}, phonetic-based search, or a fusion of the two \cite{liu2014low}. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Jupiter-family Comet D/1770 L1 (Lexell) was the first known Near-Earth Object (NEO)\footnote{The first observed NEO known to date is Comet 1P/Halley, recorded by Chinese chroniclers in 240~BC \citep[e.g.][]{Stephenson1985}.}. Found by Charles Messier \citep{Messier1776} and named after its orbit computer Anders Johan Lexell \citep{Lexell1778a}, D/Lexell approached to a distance of only 0.015~au from the Earth on 1770 Jul 1, a record that has not been surpassed by any known comet so far\footnote{Comets C/1491 B1 and P/1999 J6 (SOHO) may have passed closer than D/Lexell at their respective close approach to the Earth in 1491 and 1999, but their orbits are somewhat uncertain, therefore the approach distance of each comet cannot be precisely calculated.}. Although the orbit calculated by Lexell showed a period of 5.58~yrs, the comet was not seen after 1770. In his celebrated work, \citet{Lexell1778a} suggested that a close approach to Jupiter in 1779 had perturbed the comet into a high-perihelion orbit, while the comet was behind the Sun as seen from the Earth during its 1776 perihelion and was therefore unobservable. This result was confirmed by Johann Karl Burckhardt \citep{Burckhardt1807}, winning him a prize dedicated to this problem offered by the Paris Academy of Sciences. The work by \citet{Leverrier1844a,Leverrier1844b} reconfirmed the results by Lexell and Burckhardt and provided a very complete review of the matter. Despite the consensus that D/Lexell has evolved into a very different orbit, interest in the fate of the comet has been long-lived. Some 80 years later, \citet{Chandler1889,Chandler1890} suggested that the newly-discovered 16P/Brooks could be the return of D/Lexell. It took another 15 years for \citet{Poor1905} to demonstrate that such a linkage was unlikely.
After the 1950s, the development of meteor astronomy sparked searches for meteor activity associated with D/Lexell \citep{Nilsson1963,Kresakova1980,Carusi1982,Carusi1983,Olsson-Steel1988}, although no definite conclusions have been reached. Recent years have witnessed tremendous progress in the studies of NEOs and their dust production. We have reached 90\% completeness for NEOs greater than 1~km in diameter \citep{Jedicke2015}. Some 800 meteoroid streams have been reported by various radio and video meteor surveys \citep{Jenniskens2017}, many without an identified parent NEO. These new data would benefit a renewed search for D/Lexell and/or its descendants. Here we present a reexamination of the topic using the original observations of D/Lexell and the most recent observations of NEOs and meteor showers. \section{Reconstruction of Orbit} \label{sec:orbit} Almost all of the surviving astrometric measurements of D/Lexell were made by Messier, who observed the comet from his discovery of it on 1770 Jun 14 through Oct 3, when he was also the last astronomer to observe the comet. Since we have no reason to believe that the few other measurements would be of significantly higher quality than Messier's, we focus exclusively on Messier's observations, which are available from \textit{Memoires~de~l'Academie~Royale~des~Sciences} \citep{Messier1776}. These observations were taken in 18th-century Paris, so they referred to the Paris meridian, which is $2^\circ20'14''$ east of the now-used Greenwich meridian. The astronomical time in the 18th century also started at noon. We correct for the different meridian and time definitions and assume the positions Messier reports refer to the epoch of the observation, which we precessed to the J2000 epoch. The corrected positions are tabulated in Table~\ref{tbl:obs}. \startlongtable \begin{deluxetable}{llll} \tablecaption{Messier's observations of D/Lexell, precessed to J2000 epoch.
All observations were taken at Paris (Minor Planet Center Observatory Code 007)} \label{tbl:obs} \tablehead{ \colhead{Time (UT)} & \colhead{R.A.} & \colhead{Dec.} & \colhead{Note} \\ } \startdata 1770 Jun 14.97256 & 18h24m52.9s & -16$^\circ$39$'$54$''$ & \\ 1770 Jun 15.96809 & 18 25 06.7 & -16 22 54 & \\ 1770 Jun 17.95989 & 18 25 32.7 & -15 40 59 & \\ 1770 Jun 20.93853 & 18 26 22.7 & -14 13 33 & \\ 1770 Jun 21.92948 & 18 26 44.0 & -13 34 18 & \\ 1770 Jun 22.92798 & 18 27 11.2 & -12 43 16 & \\ 1770 Jun 23.00022 & 18 27 14.8 & -12 39 34 & \\ 1770 Jun 26.05458 & 18 29 33.1 & -08 22 30 & \\ 1770 Jun 28.04442 & 18 32 46.5 & -02 04 19 & \\ 1770 Jun 28.94253 & 18 35 42.7 & +03 20 00 & \\ 1770 Jun 29.99314 & 18 42 02.1 & +14 57 42 & \\ 1770 Jun 30.99575 & 18 58 35.1 & +37 48 06 & estimated without instrument during break in clouds \\ 1770 Jul 01.99588 & 21 34 06.1 & +78 01 52 & estimated without instrument \\ 1770 Jul 03.95447 & 06 13 37.5 & +48 58 23 & estimated without instrument while at Minister of State's house \\ 1770 Aug 03.12079 & 06 39 59.4 & +22 18 30 & \\ 1770 Aug 04.10822 & 06 41 36.5 & +22 13 39 & \\ 1770 Aug 05.08575 & 06 43 11.3 & +22 09 53 & \\ 1770 Aug 06.10375 & 06 45 04.7 & +22 04 34 & \\ 1770 Aug 07.09749 & 06 46 52.0 & +22 00 36 & comet viewed with difficulty, observations doubtful \\ 1770 Aug 08.11111 & 06 48 50.5 & +21 55 37 & \\ 1770 Aug 09.09021 & 06 50 43.5 & +21 50 28 & \\ 1770 Aug 10.11031 & 06 52 55.4 & +21 44 45 & \\ 1770 Aug 11.08690 & 06 54 56.0 & +21 42 28 & \\ 1770 Aug 12.09311 & 06 57 05.7 & +21 37 54 & \\ 1770 Aug 13.10910 & 06 59 20.7 & +21 32 43 & \\ 1770 Aug 15.10289 & 07 03 59.9 & +21 23 14 & \\ 1770 Aug 16.14878 & 07 06 26.5 & +21 18 06 & \\ 1770 Aug 19.10084 & 07 13 50.3 & +21 03 43 & \\ 1770 Aug 20.10693 & 07 16 24.9 & +20 58 18 & \\ 1770 Aug 27.14605 & 07 35 16.5 & +20 17 51 & \\ 1770 Aug 29.10749 & 07 40 35.2 & +20 05 09 & \\ 1770 Aug 30.13372 & 07 43 25.3 & +19 59 02 & \\ 1770 Aug 31.11045 & 07 46 04.3 & +19 51 29 & \\ 1770 Sep 
01.10342 & 07 48 48.1 & +19 45 00 & \\ 1770 Sep 05.12152 & 07 59 36.1 & +19 16 25 & \\ 1770 Sep 06.11063 & 08 02 15.3 & +19 08 25 & \\ 1770 Sep 09.15815 & 08 10 13.7 & +18 45 37 & \\ 1770 Sep 10.12304 & 08 12 45.4 & +18 37 40 & \\ 1770 Sep 11.17852 & 08 15 17.4 & +18 28 51 & \\ 1770 Sep 15.08826 & 08 25 10.9 & +17 58 33 & \\ 1770 Sep 18.15534 & 08 32 32.8 & +17 33 54 & \\ 1770 Sep 19.13952 & 08 34 52.0 & +17 25 19 & \\ 1770 Sep 20.13174 & 08 37 10.3 & +17 18 47 & \\ 1770 Sep 21.14191 & 08 39 27.4 & +17 10 39 & \\ 1770 Sep 30.13505 & 08 58 18.1 & +15 57 48 & \\ 1770 Oct 02.13471 & 09 02 06.5 & +15 42 05 & \\ 1770 Oct 03.14878 & 09 04 00.8 & +15 33 40 & \\ \enddata \end{deluxetable} The orbit of the comet is calculated using the FindOrb package developed by Bill Gray\footnote{\url{https://www.projectpluto.com/find_orb.htm}.} and is tabulated in Table~\ref{tbl:orb}. Time differences between the reported observations, Terrestrial Time (TT) and Barycentric Dynamical Time (TDB) used in the numerical integrations are also handled by FindOrb, which in turn uses the conversion table given in \citet[][p. 72]{Meeus1991}. Original notes from Messier indicated that observations on 1770 Jun 30, Jul 1, Jul 3, and Aug 7 are less accurate; these observations are excluded from our calculation. All other observations are used unweighted, assuming an astrometric precision of 1 arc-minute\footnote{This value is assigned empirically as it is not otherwise retrievable; however, considering that the angular resolution of the human eye is about $1'$ \citep{Yanoff2009} and a telescope-equipped Messier must be able to achieve better resolution, the assumption of $1'$ is reasonable. Most of Messier's reported observations were performed with small micrometer-equipped refracting telescopes by comparing with nearby reference stars, though a few listed as ``measured without instrument'' were naked-eye observations}.
The root mean square (RMS) of the residuals for the best fit is $35''$ (Figure~\ref{fig:res}). To determine the likely trajectory of the comet after its close approach to Jupiter, we also computed the orbit covariance, which represents the statistical orbital uncertainty as estimated from the observational data. We generate 10000 clones from the orbit covariance using a Monte Carlo scheme, and integrate them to the year 2000 using the RADAU integrator \citep{Everhart1985}. The gravitational perturbations of the Sun, the Earth-Moon system (with the Earth and the Moon considered as two separate bodies), and seven other major planets are included in the force model. We find that by the year 2000, only 2.0\% of the clones had escaped or been destroyed, while 85\% remained bound to the Sun with perihelion $q < 3$~au, 40\% having $q$ within the Earth's orbit. When the same number of clones are run, each with randomly assigned non-gravitational constants of $A_1=\pm1.0$, $A_2=\pm1.0$ and $A_3=0.0$ (in units of $10^{-8}$~au~day$^{-2}$ \citep{Marsden1973}), 2.3\% are lost or destroyed by 2000, while 83\% remain bound with $q<3$~au, 45\% with $q<1$~au. Previously it was assumed that the 1779 encounter between D/Lexell and Jupiter moved it out of the inner Solar System. This encounter certainly occurred, but not all of our clones suffer strong perturbations: only 1.8\% are unbound after the encounter, while 89\% remain bound with $q < 3$~au, and 68\% are bound with $q<1$~au (Figure~\ref{fig:video}). We verified this result with an independent integrator running on the Bulirsch-Stoer algorithm~\citep{Bulirsch1966}, in which the Earth-Moon system is treated as a single mass. In this case 3.7\% of clones have escaped the Solar System or been destroyed by solar/planetary impacts by 2000, which is in line with the earlier result. The vast majority of the surviving clones remain in Jupiter-family comet (JFC)-like orbits (Figure~\ref{fig:ev}).
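The clone-generation step amounts to drawing samples from the multivariate normal distribution defined by the best-fit elements and their covariance. The sketch below uses a purely illustrative diagonal covariance built from the $1\sigma$ uncertainties quoted in Table~\ref{tbl:orb}; the actual fitted covariance has correlated elements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Best-fit elements (q [au], e, i [deg], node [deg], arg. peri. [deg],
# t_p offset [day]) and an illustrative diagonal covariance from the
# quoted 1-sigma uncertainties; the real fit correlates the elements.
best_fit = np.array([0.6746, 0.7856, 1.550, 134.50, 224.98, 0.0])
sigma = np.array([0.0003, 0.0013, 0.004, 0.12, 0.12, 0.03])
cov = np.diag(sigma**2)

# 10000 clones, each a plausible orbit consistent with the astrometry;
# each clone would then be numerically integrated forward in time.
clones = rng.multivariate_normal(best_fit, cov, size=10000)
print(clones.shape)  # (10000, 6)
```

The statistics quoted in the text (fractions escaped, bound with $q<3$~au, etc.) are then simply counts over the integrated clone population.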
\begin{table*} \begin{center} \caption{Orbit of D/Lexell calculated by this work and \citet{Leverrier1844a,Leverrier1844b}, both in ecliptic J2000 reference frame. Orbital elements listed in the table are epoch, time of perihelion passage ($t_\mathrm{p}$), perihelion distance ($q$), eccentricity ($e$), inclination ($i$), longitude of the ascending node ($\Omega$), and argument of perihelion ($\omega$).\label{tbl:orb}} \begin{tabular}{lcccccccc} \tableline\tableline & Epoch (TT) & $t_\mathrm{p}$ (TT) & $q$ (au) & $e$ & $i$ & $\Omega$ & $\omega$ & Mean residual \\ \tableline This work & 1770 Aug 14.0 & 1770 Aug 14.05 & $0.6746$ & $0.7856$ & $1.550^\circ$ & $134.50^\circ$ & $224.98^\circ$ & $35.4''$ \\ & & $\pm0.03$~day & $\pm0.0003$ & $\pm0.0013$ & $\pm0.004^\circ$ & $\pm0.12^\circ$ & $\pm0.12^\circ$ & \\ \hline \citet{Leverrier1844a,Leverrier1844b} & 1770 Aug 14.0\tablenotemark{a} & 1770 Aug 14.04 & $0.6744$ & $0.7861$ & $1.55^\circ$ & $134.47^\circ$ & $225.02^\circ$ & - \\ \tableline \end{tabular} \tablenotetext{a}{May be 1770 Aug 14.5 due to hour ambiguity.} \end{center} \end{table*} \begin{figure} \includegraphics[width=0.5\textwidth]{lexell-res.eps} \caption{Astrometric R.A. and Dec. residuals of Messier's observations with respect to our best fit solution in Table~\ref{tbl:orb}.\label{fig:res}} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{Lexell-02.eps} \caption{Motions of 10~000 clones of D/Lexell from 1770 to 1790. Orange dots indicate ejected clones; grey dots indicate bound clones with perihelion outside Earth's orbit; and green dots indicate bound clones with perihelion inside Earth's orbit. 
It can be seen that most clones stay bound to the Solar System after the 1779 encounter with Jupiter.\label{fig:video}} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{lexell-ev.eps} \caption{$68\%$ and $95\%$ probability contours of D/Lexell's evolutionary path over 1500--2000~AD assuming no non-gravitational effect (left) and non-gravitational effect (right), with $\mathcal{A}_1=10^{-8}~\mathrm{au~d^{-2}}$, $\mathcal{A}_2=10^{-8}~\mathrm{au~d^{-2}}$, which is on the upper range of typical values for JFCs \citep[c.f.][]{Yeomans2004}. The black curve represents the median path.\label{fig:ev}} \end{figure*} Contrary to previous estimates, our statistics-based simulations argue that it is quite probable that D/Lexell remains in the inner Solar System. This argument remains valid even if we assume some non-gravitational effects such as are typically found on comets. Could D/Lexell still be wandering in the Solar System? \section{Physical Characteristics of D/Lexell} \label{sec:pr} To discuss the visibility of D/Lexell after 1770, we must first examine the intrinsic brightness of the comet. A highly complete compilation of brightness estimates and other morphological quantities of D/Lexell during its 1770 apparition is provided by \citet{Kronk1999b} and is tabulated in Table~\ref{tbl:mag}, with a few additional details extracted from \citet{Messier1776}. If we fit the observations with the standard formula \citep[e.g.][]{Everhart1967,Hughes1987}, $m=M_1+5\log_{10}{\varDelta}+10\log_{10}{r_\mathrm{H}}$, where $M_1$ is the absolute total magnitude of the comet, $\varDelta$ is geocentric distance and $r_\mathrm{H}$ is heliocentric distance (both in au), we find $M_1=7$. This would make D/Lexell one of the brightest comets (in terms of absolute total magnitude) that approach the Earth. 1P/Halley, for instance, has $M_1=5.5$. 
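The fit for $M_1$ uses the standard brightness law above; since $M_1$ is the only free parameter, its least-squares estimate reduces to the mean of the residuals $m - 5\log_{10}\varDelta - 10\log_{10}r_\mathrm{H}$. The observations in the sketch are synthetic, not Messier's.

```python
import math

def apparent_mag(M1, delta, r_h):
    # m = M1 + 5 log10(Delta) + 10 log10(r_H), distances in au
    return M1 + 5 * math.log10(delta) + 10 * math.log10(r_h)

def fit_M1(observations):
    """Least-squares estimate of M1 from (m, delta, r_h) triples:
    with M1 the only free parameter, it is the mean residual."""
    residuals = [m - 5 * math.log10(d) - 10 * math.log10(r)
                 for m, d, r in observations]
    return sum(residuals) / len(residuals)

# Synthetic check: data generated with M1 = 7 recovers M1 = 7.
obs = [(apparent_mag(7.0, d, r), d, r)
       for d, r in [(0.2, 1.0), (0.5, 0.8), (1.0, 1.2)]]
print(round(fit_M1(obs), 6))  # 7.0
```

The steep $10\log_{10}r_\mathrm{H}$ term reflects the assumed dependence of total (coma-dominated) brightness on heliocentric distance, twice as steep as the purely geometric $5\log_{10}$ terms.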
\begin{table} \begin{center} \caption{Brightness estimates and other morphological quantities of D/Lexell reported by various observers. Extracted from \citet{Kronk1999b} unless otherwise noted.\label{tbl:mag}} \begin{tabular}{cccccl} \tableline\tableline Date (1770) & Size of central condensation\tablenotemark{a} & Coma size & Tail & Magnitude & Observer \\ \tableline Jun 14/15 & - & - & - & 5 & Messier \\ Jun 17/18 & 22'' & 5'23'' & N & - & Messier \\ Jun 22/23 & 33'' & 18' & N & - & Messier \\ Jun 24/25 & 1'15'' & 27' & - & 2 & Messier \\ Jun 27/28 & - & 0.5$^\circ$ & N & - & S. Dunn \\ Jun 29/30 & 1'22'' & 54' & - & - & Messier \\ .. & - & - & N & $<1$\tablenotemark{b} & W. Earl \\ Jul 1/2 & 1'26'' & 2$^\circ$23' & N & - & Messier \\ Aug 2/3 & 54'' & 15' & N & - & Messier \\ Aug 11/12\tablenotemark{c} & 43'' & 3'36'' & - & - & Messier \\ Aug 12/13 & - & - & N & 4--5 & Messier \\ Aug 18/19 & 38'' & - & Y & - & Messier \\ Aug 19/20 & - & - & Y & - & Messier \\ Aug 25/26\tablenotemark{c} & - & - & Y, 1$^\circ$ & - & Messier \\ Aug 27/28 & - & - & - & 5--6 & Messier \\ \tableline \end{tabular} \tablenotetext{a}{Called ``nucleus size'' in the original document, but by no means were the 18th century observers really observing the actual nucleus of the comet, as the nucleus would be about 1000~km in size which is highly unlikely. There is also no reason to believe that Messier and his colleagues could separate the actual nucleus from the coma using 18th century technology.} \tablenotetext{b}{The original text was ``...larger than a star of the first magnitude'', our interpretation is that the comet was at 1st magnitude, since a star would be a point source and would not have measurable size. On the other hand, J. Six's observation on Jul. 
2.0 noted ``...appeared as large as the planet Jupiter'', which we interpreted as a description of the spatial size of the cometary nucleus, since Jupiter is an extended source and its spatial size ($\sim1'$) is comparable to other observations of D/Lexell near this date.} \tablenotetext{c}{Extracted from \citet{Messier1776}.} \end{center} \end{table} \begin{figure} \includegraphics[width=0.5\textwidth]{lexell-mag.eps} \caption{Observed and fitted light-curve of D/Lexell during its 1770 apparition \label{fig:mag}} \end{figure} The total magnitude also provides a way to constrain the size of D/Lexell. This can be done by looking at comets whose brightness and activity have been accurately measured, such as those that have been visited by spacecraft. Based on the correlation presented in Figure~\ref{fig:m1-size}, we infer that the active area of the nucleus of D/Lexell is $50$--$1600$~km$^2$. This translates to a nucleus of $4$--$22$~km in diameter, given that the fractional active area must be smaller than 1. If we take an active fraction of 0.2 \citep[which is on the high end of typical values, see][]{AHearn1995}, the corresponding diameter is $9$--$50$~km. \begin{figure} \includegraphics[width=0.5\textwidth]{m1-size.eps} \caption{Correlation between absolute total magnitude (a measure of the productivity of dust and gas) of the comet and size of active area on the comet. Shaded area represents the $1\sigma$ prediction level. Absolute total magnitudes are extracted from the JPL Small-Body Database (\url{https://ssd.jpl.nasa.gov/sbdb.cgi}). Cometary nucleus size and fraction of active area are extracted from \citet{Tancredi2006} except for 1P/Halley \citep{Nes1986}. Only comets with quality class QC$\leq3$ are used \citep[see the description of Table~2 in][]{Tancredi2006}. \label{fig:m1-size}} \end{figure} \citet{Messier1776} also documented the apparent size of the coma and the existence of a tail in detail (summarized in Table~\ref{tbl:mag}).
This enables us to model what he saw, at least at a qualitative level. Model images are created using the Monte Carlo dust code developed in \citet{Ye2014}, using two sets of input parameters representing different levels of cometary activity (Table~\ref{tbl:mdl-input}). Note that the gas component is not included in the model, as observations by Messier and others reveal a largely continuous spectrum \citep[e.g. ``silver-colored'' noted by James Six, see][]{Kronk1999b}, consistent with a dominance of scattered light from dust particles. The model images, shown in Figure~\ref{fig:morph-mdl}, suggest that the activity of D/Lexell was close to the average level. There is some degree of inconsistency between Messier's observations and the model images towards August 1770, where a tail is clearly seen in the model images but was not reported by Messier, despite his apparent efforts to look for one. We attribute this inconsistency to interference from the last-quarter Moon, as D/Lexell was also a morning target at that time. The tail was reported on and after Aug 18/19 as the Moon moved toward conjunction (the new Moon was on Aug 20, 1770).
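The nucleus-size estimate given above reduces to simple spherical geometry: the active fraction $f$ times the surface area $\pi d^2$ must equal the inferred active area. A minimal sketch of this conversion, using the $50$--$1600$~km$^2$ range from the text (the diameters agree with the quoted $4$--$22$~km and $9$--$50$~km ranges to within rounding):

```python
import math

def diameter_km(active_area_km2, active_fraction):
    """Diameter of a spherical nucleus whose surface area times the
    active fraction equals the inferred active area: f * pi * d^2 = A."""
    return math.sqrt(active_area_km2 / (math.pi * active_fraction))

# Active-area range inferred for D/Lexell from the m1-size correlation
for f in (1.0, 0.2):
    lo, hi = diameter_km(50, f), diameter_km(1600, f)
    print(f"active fraction {f}: d = {lo:.1f}--{hi:.1f} km")
```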
\begin{table*} \begin{center} \caption{Input parameters for the Monte Carlo simulation of coma morphology.\label{tbl:mdl-input}} \begin{tabular}{lcc} \tableline\tableline & Normal & Low activity \\ \tableline Dust size range ($\micron$) & 1--100 & 1--100 \\ Dust size index & -3.6 & -3.6 \\ Dust bulk density ($\mathrm{kg~m^{-3}}$) & 2000 & 2000 \\ Dust production-heliocentric distance exponent & -4.0 & -4.0 \\ Mean ejection speed of $1~\micron$-sized dust at 1~au ($\mathrm{m~s^{-1}}$) & 400 & 40 \\ Activation $r_\mathrm{H}$ (au) & 2.3 & 1.4 \\ Example comet & 67P/Churyumov-Gerasimenko\tablenotemark{a} & 209P/LINEAR\tablenotemark{b} \\ \tableline \end{tabular} \tablenotetext{a}{\citet{Ishiguro2008}.} \tablenotetext{b}{\citet{Ye2016}.} \end{center} \end{table*} \begin{figure*} \includegraphics[width=\textwidth]{morph-mdl.eps} \caption{Model images of D/Lexell created using two sets of input parameters representing normal and low levels of cometary activity, compared to Messier's observations. For comparison, the apparent size of the Moon is drawn at the upper-left corner. For the depiction of Messier's observations, filled circles in light grey represent the coma, while dots in dark grey represent the central condensation described as the ``nucleus'' by Messier (as discussed in the notes of Table~\ref{tbl:mag}). The position of the central condensation relative to the coma was not provided by Messier and is drawn arbitrarily.\label{fig:morph-mdl}} \end{figure*} \section{In Search of D/Lexell and Its Descendant} \label{sec:desc} If D/Lexell remained in the inner Solar System and simply stopped being active, there is a good chance that it might have been recovered by modern NEO surveys as an asteroid, given the large size of the nucleus. To test this hypothesis, we examined the orbits of known asteroids and compared them to the orbit of D/Lexell.
This was done by integrating the orbits of these asteroids back to 1770 and computing their Southworth--Hawkins dissimilarity criterion, or $D$-criterion \citep[][denoted as $D_\mathrm{SH}$ hereafter]{Southworth1963}, with respect to D/Lexell. The $D$-criterion is a quantitative measure of the similarity of two orbits, with smaller $D$'s indicating more similar orbits. There are several variants of the original \citet{Southworth1963} version of the $D$-criterion, but since we only intend to use the $D$-criterion as a relative metric, the difference among these variants is unimportant for our purpose. Therefore, we simply adopt the original expression introduced by Southworth \& Hawkins. Here we focus on NEOs of 1~km or larger, since, as discussed above, D/Lexell is likely a km-sized object. To ensure we only examine asteroids with well-determined orbits, we focus on asteroids with orbit uncertainty number $U\leq2$ \citep{MPC1995}. It is usually assumed that objects with a $D$-criterion smaller than a certain value are likely related. However, this critical value depends on the sample size and the orbital background, which vary from case to case. Here we start with a generous cutoff of $D_\mathrm{SH}\leq0.2$ for further examination. This particular value is loosely chosen based on the results of previous experiments \citep[cf.][]{Drummond2000}, which suggested optimal cutoffs in the range of 0.1 to 0.2. Following the discussion in \citet{Wiegert2004} and \citet{Ye2016a}, we determine the expected number of associations with smaller $D$'s, $\langle X \rangle$. This is done in two steps: first, a set of synthetic NEO populations is generated using \citet{Greenstreet2015}'s de-biased NEO population model; second, the number of synthetic objects ($\langle X \rangle$) that have $D_\mathrm{SH}$ smaller than that of the proposed linkage is calculated.
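As a concrete illustration, the original Southworth \& Hawkins expression can be evaluated directly from two sets of orbital elements. The sketch below (in Python; element values taken from Table~\ref{tbl:assoc}) reproduces the $D_\mathrm{SH}=0.087$ found for 2010 JL$_{33}$:

```python
import math

def d_sh(q1, e1, i1, node1, peri1, q2, e2, i2, node2, peri2):
    """Southworth & Hawkins (1963) dissimilarity criterion D_SH.
    q: perihelion distance (au); i, node, peri: angles in degrees."""
    i1, i2 = math.radians(i1), math.radians(i2)
    dnode = math.radians(node2 - node1)
    # mutual inclination I between the two orbital planes
    sin2_hI = (math.sin((i2 - i1) / 2) ** 2
               + math.sin(i1) * math.sin(i2) * math.sin(dnode / 2) ** 2)
    # difference of the longitudes of perihelion, measured from the
    # intersection of the two orbital planes
    arg = math.cos((i1 + i2) / 2) * math.sin(dnode / 2) / math.sqrt(1 - sin2_hI)
    arg = max(-1.0, min(1.0, arg))
    sign = 1.0 if abs(dnode) <= math.pi else -1.0
    dpi = math.radians(peri2 - peri1) + sign * 2 * math.asin(arg)
    d2 = ((e2 - e1) ** 2 + (q2 - q1) ** 2 + 4 * sin2_hI
          + ((e1 + e2) * math.sin(dpi / 2)) ** 2)
    return math.sqrt(d2)

# D/Lexell (this work) vs. 2010 JL33, elements from Table "assoc"
d = d_sh(0.6746, 0.7856, 1.55, 134.50, 224.98,
         0.7120, 0.7338, 4.42, 95.32, 263.38)
print(round(d, 3))  # about 0.087
```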
We identify four objects that are in the proximity of D/Lexell in the $D_\mathrm{SH}$ space: 2010 JL$_{33}$ ($D_\mathrm{SH}=0.087$), 1999 XK$_{136}$ ($D_\mathrm{SH}=0.104$), 2011 LJ$_1$ ($D_\mathrm{SH}=0.171$), and 2001 YV$_3$ ($D_\mathrm{SH}=0.198$), with $\langle X \rangle$ values of 0.008, 0.3, 6, and 14, respectively (Table~\ref{tbl:assoc}). However, the reader should bear in mind that these numbers only represent the \textit{expected} number of associations one would find across a large number of synthetic NEO population samples; in reality, one must consider the probability of chance alignments. If we assume the local orbital distribution follows Poisson statistics, the probability of finding at least one paired object due to chance is then $$P(n\geq1) = e^{-\langle X \rangle} \sum_{n=1}^{\infty} \frac{\langle X \rangle ^ n}{n!} = 1 - e^{-\langle X \rangle}$$ \noindent Here we note that the assumption of Poisson statistics is valid as long as the rate of chance pairings is constant across the local orbital space, which can be understood as a quasi-infinitesimal region of the orbital space in which NEO orbits are nearly uniformly distributed. We derive the probabilities of chance alignments of 2010 JL$_{33}$, 1999 XK$_{136}$, 2011 LJ$_1$, and 2001 YV$_3$ to be $0.8\%$, $26\%$, $99.8\%$, and $\sim100\%$, respectively. 2010 JL$_{33}$ is thus the most promising candidate for D/Lexell's descendant. 2010 JL$_{33}$ has a diameter of 1.8~km and an albedo of 0.047, with a rotation period of 9.41~h \citep{Blaauw2011, Mainzer2011}, which is compatible with a large nucleus of D/Lexell and with the typical properties of cometary nuclei \citep{Snodgrass2006,Mommert2015}. By contrast, 1999 XK$_{136}$ has a smaller diameter of 0.8~km and a similarly low albedo of 0.020 \citep{Mainzer2014}, with an unknown rotation period. The physical properties of the less statistically significant associations, 2011 LJ$_1$ and 2001 YV$_3$, are not known.
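The Poisson sum above collapses to $1-e^{-\langle X \rangle}$ (one minus the probability of zero chance pairings), so the quoted percentages follow in one line; a minimal check:

```python
import math

def p_chance(expected_x):
    """Probability of at least one chance pairing under Poisson statistics:
    P(n >= 1) = 1 - P(n = 0) = 1 - exp(-<X>)."""
    return 1.0 - math.exp(-expected_x)

# <X> values for the four candidate associations (Table "assoc")
for name, x in [("2010 JL33", 0.008), ("1999 XK136", 0.3),
                ("2011 LJ1", 6.0), ("2001 YV3", 14.0)]:
    print(f"{name}: {p_chance(x):.1%}")
```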
\begin{table*} \begin{center} \caption{Orbit of D/Lexell (calculated by this work) compared with four possible associations: 2010 JL$_{33}$, 1999 XK$_{136}$, 2011 LJ$_1$, and 2001 YV$_3$, ordered by their $D_\mathrm{SH}$ with respect to D/Lexell. All elements are in the ecliptic J2000 reference frame. Orbital elements listed in the table are epoch, time of perihelion passage ($t_\mathrm{p}$), perihelion distance ($q$), eccentricity ($e$), inclination ($i$), longitude of the ascending node ($\Omega$), and argument of perihelion ($\omega$). Also listed are the $D_\mathrm{SH}$ values with respect to D/Lexell, the expected number of km-sized NEOs with smaller $D_\mathrm{SH}$ ($\langle X \rangle$), and the probability of chance alignment $P(X\geq1)$.\label{tbl:assoc}} \begin{tabular}{lcccccccccc} \tableline\tableline & Epoch (TT) & $t_\mathrm{p}$ (TT) & $q$ (au) & $e$ & $i$ & $\Omega$ & $\omega$ & $D_\mathrm{SH}$ & $\langle X \rangle$ & $P(X\geq1)$ \\ \tableline D/Lexell & 1770 Aug 14.0 & 1770 Aug 14.05 & $0.6746$ & $0.7856$ & $1.55^\circ$ & $134.50^\circ$ & $224.98^\circ$ & - & - & - \\ 2010 JL$_{33}$ & 1770 Aug 14.0 & 1770 Jun 25.32 & $0.7120$ & $0.7338$ & $4.42^\circ$ & $95.32^\circ$ & $263.38^\circ$ & $0.087$ & $0.008$ & $0.8\%$ \\ 1999 XK$_{136}$ & 1770 Aug 14.0 & 1771 Aug 24.65 & $0.7003$ & $0.7073$ & $2.57^\circ$ & $71.75^\circ$ & $291.58^\circ$ & $0.104$ & $0.3$ & $26\%$ \\ 2011 LJ$_1$ & 1770 Aug 14.0 & 1771 Jul 15.89 & $0.7290$ & $0.6970$ & $8.28^\circ$ & $146.86^\circ$ & $207.75^\circ$ & $0.171$ & $6$ & $99.8\%$ \\ 2001 YV$_3$ & 1770 Aug 14.0 & 1771 Jan 30.41 & $0.5438$ & $0.7193$ & $5.45^\circ$ & $114.05^\circ$ & $236.71^\circ$ & $0.198$ & $14$ & $\sim100\%$ \\ \tableline \end{tabular} \end{center} \end{table*} The orbit of 2010 JL$_{33}$ is very well constrained thanks to Doppler observations taken by NASA's Goldstone Solar System Radar in 2010.
Integration of 2010 JL$_{33}$ back to the year 1770 shows little dispersal: the $1\sigma$ dispersion from the nominal orbit is only $2\times10^{-5}$~au. However, with this orbit, 2010 JL$_{33}$ does not approach the Earth at the correct time to be D/Lexell. Could cometary non-gravitational effects have placed D/Lexell on the present orbit of 2010 JL$_{33}$? To test this hypothesis, we attempt to link Messier's observations of D/Lexell to modern observations of 2010 JL$_{33}$. This is first done by integrating the orbit of 2010 JL$_{33}$ backwards in time while applying some degree of non-gravitational effect, assuming that the effect remains constant over time. A wide range of parameter space is tested, covering $\mathcal{A}_x=10^{-12}$--$10^{-6}~\mathrm{au~d^{-2}}$ \citep[where $x=1,2,3$ denotes the radial, transverse, and normal directions, cf.][]{Marsden1973}, which encompasses almost all known values of non-gravitational forces \citep{Yeomans2004, Hui2015}; in total, 10 million different combinations of $\mathcal{A}_x$ are tested. We find that a modest degree of non-gravitational effect is sufficient to bring the orbit of 2010 JL$_{33}$ into a configuration qualitatively resembling D/Lexell's 1770 passage (Figure~\ref{fig:lex-jl33}). However, obtaining a precise match proves much more challenging and has not yet been accomplished with the constant non-gravitational model. We then consider a simple time-varying non-gravitational model: D/Lexell is initially affected by the non-gravitational effect until the epoch $t_\mathrm{deact}$, at which point the comet deactivates and the non-gravitational effect disappears. This model, however, does not yield a considerably better match than the constant non-gravitational model. We also attempt to link D/Lexell to 2010 JL$_{33}$ using the orbit-determination program FindOrb, which is also unsuccessful.
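In the \citet{Marsden1973} formulation, the non-gravitational acceleration in each direction is $\mathcal{A}_x\,g(r_\mathrm{H})$, where $g$ is an empirical water-ice sublimation function normalized to $g(1~\mathrm{au})=1$. A minimal sketch of that scaling (using the standard Marsden constants; this is an illustration of the functional form, not the integrator used in this work):

```python
def marsden_g(r_au):
    """Marsden et al. (1973) non-gravitational scaling function g(r),
    normalized so that g(1 au) = 1; constants for water-ice sublimation."""
    alpha, r0, m, n, k = 0.1112620426, 2.808, 2.15, 5.093, 4.6142
    x = r_au / r0
    return alpha * x ** (-m) * (1.0 + x ** n) ** (-k)

# Acceleration in direction x at heliocentric distance r would then be
# A_x * marsden_g(r), with A_x in au/day^2 as in the grid search above.
print(marsden_g(1.0))  # close to 1 by normalization
```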
\begin{figure*} \includegraphics[width=\textwidth]{Lexell-onsky.eps} \caption{A comparison of the on-sky paths of some of the non-gravitational models of 2010 JL$_{33}$ with the D/Lexell observations taken by Messier, most of which were made with the aid of a micrometer-equipped refracting telescope. Blue circles indicate observations made without instrumental aid, either because of substantial cloud or, in the case of those of July 3, because Messier was away from the observatory at a dinner with the Minister of State.\label{fig:lex-jl33}} \end{figure*} Much of the difficulty arises from the fact that the models of 2010 JL$_{33}$ experience a series of moderately close encounters with Jupiter during the $\sim200$ years in question. Consequently, the final result is extremely sensitive to the details of these encounters. This makes the search for a best fit very complicated, fraught with local minima and sharp gradients. \section{Meteors from D/Lexell} \label{sec:meteor} Another way to trace D/Lexell is to search for its dust footprint, detectable to observers on the Earth as meteors. This is possible because the Earth passes close to D/Lexell's 1770 orbit twice a year, with a minimum orbit intersection distance of 0.015~au in July and 0.024~au in December. The detection or non-detection of meteors from D/Lexell can be used to better constrain the orbit of the comet, as it can reveal which orbital solutions of D/Lexell are compatible or incompatible with such meteors. The presence of meteor activity can also give critical information regarding whether D/Lexell was deactivated or disintegrated at some point. However, the chaotic nature of D/Lexell's orbit remains a major obstacle. As will be shown in the following, even for the period of 1767--1779 (when D/Lexell's trajectory is relatively well known), the predicted meteor activity is extremely dependent on the initial position of the parent.
We approach this problem by generating 10 clones from the covariance matrix of the orbit and simulating the meteor activity from these clones. (Using more clones is computationally expensive; we believe that 10 clones still permit a reasonable result to be derived.) The simulation uses the same setup described in \S~\ref{sec:orbit}, except that radiation pressure and the Poynting-Robertson effect are now considered. We focus on the meteoroids produced by D/Lexell (and its clones) after 1767, the year that the comet was placed on its Earth-approaching orbit following a close encounter with Jupiter. Each of the 10 clones is integrated from 1770 back to 1767 as well as forward to the end point of the simulation, which we choose to be the year 2000. During the integration, each clone releases meteoroids at each of its perihelion passages following the meteoroid/dust ejection model described in \S~\ref{sec:pr}, except that the meteoroid sizes range from 1~mm to 10~cm and the size index is -2.8. The meteoroids adopted here are slightly larger than in similar studies because of their low encounter speed, which means larger meteoroids are needed to produce the same amount of light; the size index of -2.8 is appropriate for meteoroids of such sizes \citep{McDonnell1987}. Meteoroids are integrated alongside their parent clones, and meteoroids that approach the Earth are recorded following the procedure described in \citet{Vaubaillon2005a}. The intensity of meteor activity can then be modeled from the dust production rate of the comet.
Judging from the total magnitude, D/Lexell is about 10 times more active than comet 55P/Tempel-Tuttle (for which $M_1=10.0$ according to the JPL Small-Body Database), the well-studied parent comet of the Leonid meteor shower. We therefore multiply the dust production rate of 55P/Tempel-Tuttle \citep[derived by][]{Vaubaillon2005a} by 10, which gives a dust production rate of about $10^{12}$~kg per orbit, and take this as the dust production rate of D/Lexell. The strength of meteor activity is most straightforwardly measured by the Zenithal Hourly Rate (ZHR), the number of meteors per hour that an observer would see provided that the sky is dark and clear and that the radiant is at the zenith. We identify significant meteor showers that are likely to be noticed and calculate their peak times as well as the corresponding ZHRs, following the technique described by \citet{Vaubaillon2005a}. \startlongtable \begin{deluxetable}{lclcclc} \tablecaption{Predicted significant meteor showers from 10 clones of D/Lexell in 1770--2000. Also listed is the clone's orbit in 1770.
The table is arranged by the increment of the clone's perihelion distance $q$ in 1770.\label{tbl:met}} \tablehead{ \colhead{Clone} & \colhead{$D_\mathrm{SH}$ to nominal orbit} & \colhead{Center time (UT)} & \colhead{Duration} & \colhead{Geocentric radiant} & \colhead{Trail} & \colhead{ZHR} \\ } \startdata 1 & 0.0016 & \multicolumn{5}{c}{$q=0.6742$~au, $e=0.7872$, $i=1.55^\circ$} \\ && 1832 Aug 23 19:37 & 1~hr & $\alpha=266^\circ$, $\delta=+24^\circ$, $v_\mathrm{G}=13$~km/s & 1781 & 110 \\ && 1852 Jul 13 08:32 & 3~hr & $\alpha=274^\circ$, $\delta=-16^\circ$, $v_\mathrm{G}=17$~km/s & 1770, 1776 & 50 \\ && 1864 Aug 8 11:15 & 3~hr & $\alpha=268^\circ$, $\delta=-2^\circ$, $v_\mathrm{G}=12$~km/s & 1770, 1776 & 20 \\ && 1887 Jul 7 19:25 & 8~hr & $\alpha=265^\circ$, $\delta=-14^\circ$, $v_\mathrm{G}=16$~km/s & 1788 & 110 \\ && 1888 Jul 7 08:40 & 12~hr & $\alpha=265^\circ$, $\delta=-14^\circ$, $v_\mathrm{G}=16$~km/s & 1788 & 350 \\ && 1947 Aug 21 19:26 & 6~hr & $\alpha=196^\circ$, $\delta=-59^\circ$, $v_\mathrm{G}=12$~km/s & 1851, 1858 & 30 \\ && 1953 Aug 25 12:04 & 8~hr & $\alpha=192^\circ$, $\delta=-58^\circ$, $v_\mathrm{G}=13$~km/s & 1845, 1851 & 40 \\ && 1958 Aug 28 17:37 & 4~hr & $\alpha=238^\circ$, $\delta=-67^\circ$, $v_\mathrm{G}=12$~km/s & 1888 & 30 \\ && 1993 Aug 26 20:23 & 1~d & $\alpha=205^\circ$, $\delta=-50^\circ$, $v_\mathrm{G}=11$~km/s & 1864 & 90 \\ \hline 2 & 0.0012 & \multicolumn{5}{c}{$q=0.6742$~au, $e=0.7867$, $i=1.55^\circ$} \\ && 1781 Aug 31 04:56 & 2~d & $\alpha=261^\circ$, $\delta=-11^\circ$, $v_\mathrm{G}=10$~km/s & 1770, 1776 & 120 \\ \hline 3 & 0.0012 & \multicolumn{5}{c}{$q=0.6744$~au, $e=0.7868$, $i=1.55^\circ$} \\ && 1781 Aug 31 17:11 & 2~d & $\alpha=260^\circ$, $\delta=-10^\circ$, $v_\mathrm{G}=10$~km/s & 1770, 1776 & 270 \\ \hline 4 & 0.0011 & \multicolumn{5}{c}{$q=0.6744$~au, $e=0.7867$, $i=1.56^\circ$} \\ && 1781 Aug 31 03:21 & 2~d & $\alpha=261^\circ$, $\delta=-10^\circ$, $v_\mathrm{G}=10$~km/s & 1770, 1776 & 90 \\ \hline 5 & 0.0010 & 
\multicolumn{5}{c}{$q=0.6744$~au, $e=0.7866$, $i=1.55^\circ$} \\ && 1781 Aug 30 19:24 & 2~d & $\alpha=261^\circ$, $\delta=-10^\circ$, $v_\mathrm{G}=10$~km/s & 1770, 1776 & 40 \\ \hline 6 & 0.0005 & \multicolumn{5}{c}{$q=0.6746$~au, $e=0.7851$, $i=1.55^\circ$} \\ && 1913 Jun 23 12:44 & 2~hr & $\alpha=277^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1841 & 2200 \\ && 1918 Jan 7 07:00 & 2~hr & $\alpha=285^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1826 & 12000 \\ && 1934 Jun 19 13:30 & 2~hr & $\alpha=275^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=28$~km/s & 1872 & 2800 \\ && 1944 Jan 13 13:49 & 2~hr & $\alpha=289^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=27$~km/s & 1914, 1919 & 5700 \\ && 1944 Jun 19 18:38 & 2~hr & $\alpha=275^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=27$~km/s & 1883 & 1200 \\ && 1969 Jun 28 16:34 & 4~hr & $\alpha=280^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1857 & 2700 \\ && 1979 Jan 8 10:45 & 1~hr & $\alpha=287^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1862 & 1600 \\ \hline \multicolumn{7}{l}{Nominal orbit of D/Lexell in 1770: $q=0.6746$~au, $e=0.7856$, $i=1.55^\circ$} \\ \hline 7 & 0.0006 & \multicolumn{5}{c}{$q=0.6747$~au, $e=0.7851$, $i=1.55^\circ$} \\ && 1894 Jan 4 20:40 & 6~hr & $\alpha=285^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1782--1792 & 1600 \\ && 1908 Jan 10 04:56 & 2~hr & $\alpha=286^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=27$~km/s & 1857 & 1400 \\ && 1913 Jun 23 20:32 & 4~hr & $\alpha=277^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1841 & 2900 \\ && 1918 Jun 24 21:02 & 6~hr & $\alpha=277^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1831--1836 & 1400 \\ && 1928 Jun 21 09:40 & 4~hr & $\alpha=276^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=28$~km/s & 1826 & 5900 \\ && 1933 Jun 21 18:29 & 6~hr & $\alpha=276^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=27$~km/s & 1826 & 10000 \\ && 1969 Jan 5 19:17 & 1~d & $\alpha=285^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 
1867, 1872 & 430 \\ && 1970 Jul 2 02:47 & 12~hr & $\alpha=282^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1826, 1831 & 430 \\ && 1980 Jan 6 12:03 & 2~hr & $\alpha=285^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1867 & 4300 \\ \hline 8 & 0.0011 & \multicolumn{5}{c}{$q=0.6749$~au, $e=0.7846$, $i=1.55^\circ$} \\ && 1939 Jan 11 19:38 & 2~hr & $\alpha=287^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1836 & 2400 \\ && 1949 Jan 5 01:04 & 1~hr & $\alpha=284^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1831 & 900 \\ && 1949 Jun 27 02:47 & 6~hr & $\alpha=279^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 340 \\ \hline 9 & 0.0015 & \multicolumn{5}{c}{$q=0.6749$~au, $e=0.7841$, $i=1.55^\circ$} \\ && 1882 Jan 3 21:28 & 12~hr & $\alpha=283^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1802--1812 & 1200 \\ && 1913 Jul 27 01:42 & 12~hr & $\alpha=294^\circ$, $\delta=-19^\circ$, $v_\mathrm{G}=19$~km/s & 1770, 1776 & 20 \\ && 1924 Dec 4 07:00 & 6~hr & $\alpha=267^\circ$, $\delta=-26^\circ$, $v_\mathrm{G}=19$~km/s & 1776--1812 & 550 \\ && 1959 Jun 23 04:42 & 4~hr & $\alpha=277^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=27$~km/s & 1908 & 2500 \\ && 1960 Jun 20 09:06 & 4~hr & $\alpha=276^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=28$~km/s & 1918 & 4400 \\ && 1969 Jan 14 00:41 & 4~hr & $\alpha=290^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1913 & 5800 \\ && 1974 Jan 13 21:18 & 8~hr & $\alpha=290^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1776--1934 & 1800 \\ && 1979 Jan 13 23:44 & 2~hr & $\alpha=290^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1949 & 4800 \\ && 1983 Jul 7 06:10 & 8~hr & $\alpha=285^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=24$~km/s & 1832--1872 & 6200 \\ && 1984 Jan 1 19:45 & 8~hr & $\alpha=283^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=24$~km/s & 1776--1872 & 11000 \\ && 1984 Jul 1 10:36 & 2~hr & $\alpha=282^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1908 & 2800 \\ && 1989 Jul 
7 01:56 & 8~hr & $\alpha=285^\circ$, $\delta=-20^\circ$, $v_\mathrm{G}=24$~km/s & 1832--1867 & 7200 \\ && 1990 Jan 1 03:14 & 6~hr & $\alpha=283^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=24$~km/s & 1832--1872 & 15000 \\ \hline 10 & 0.0016 & \multicolumn{5}{c}{$q=0.6750$~au, $e=0.7840$, $i=1.54^\circ$} \\ && 1882 Jan 4 02:49 & 8~hr & $\alpha=283^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1807, 1812 & 900 \\ && 1913 Dec 12 16:47 & 12~hr & $\alpha=274^\circ$, $\delta=-26^\circ$, $v_\mathrm{G}=19$~km/s & 1781--1802 & 40 \\ && 1924 Dec 4 06:45 & 6~hr & $\alpha=267^\circ$, $\delta=-26^\circ$, $v_\mathrm{G}=19$~km/s & 1781--1807 & 600 \\ && 1974 Jan 13 18:03 & 1~hr & $\alpha=290^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1812 & 60 \\ && 1983 Jul 6 04:06 & 12~hr & $\alpha=284^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=24$~km/s & 1822--1877 & 2800 \\ && 1984 Jan 1 23:47 & 8~hr & $\alpha=283^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=24$~km/s & 1822--1872 & 7300 \\ && 1984 Jul 1 13:41 & 6~hr & $\alpha=282^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1887--1892 & 380 \\ && 1989 Jul 7 05:51 & 8~hr & $\alpha=285^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=24$~km/s & 1822--1857 & 90 \\ && 1990 Jan 1 05:06 & 6~hr & $\alpha=283^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=24$~km/s & 1822--1867 & 11000 \\ && 1995 Jan 9 15:47 & 4~hr & $\alpha=288^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1827--1887 & 1000 \\ \enddata \end{deluxetable} The result, tabulated in Table~\ref{tbl:met}, clearly shows the transition of timings and intensities of meteor showers across the orbital space of the clones. Clones with $q$ close to or larger than the nominal (clones 6--10) tend to produce stronger meteor showers, typically associated with the materials released in the 19th century; clones with $q$ smaller than the nominal (clones 1--5) produce meteor showers associated with the material released in 1770 and 1776. 
Since the dynamical state of D/Lexell is only relatively well known in 1767--1779, i.e. between the two close encounters with Jupiter immediately before and after the observed 1770 apparition, meteor activity from these two apparitions provides critical diagnostic information about the exact trajectory of the comet. For showers associated with the 1770--1776 ejections from clones 1--5, the radiant would have been conveniently situated in the constellation of Ophiuchus, which is easily observable in the summer months in which the meteor showers are predicted to occur; for clones 2--5, the meteor showers are expected to be moderately strong (for comparison, the annual Perseid meteor shower has ZHR=100) and long-lasting, thanks to the slow encounter speed and the shallow orbit of the parent. Therefore, we believe that meteors from the 1770 and 1776 apparitions could have been noticed even by observers of the Age of Enlightenment, had D/Lexell been placed on the right orbit. On the other hand, clones 6--10 produce frequent meteor showers, many surpassing storm level (showers with ZHR$>1000$ are defined as meteor storms), which should be easily noticeable by unaided observers. We search for modern and historic sightings of the predicted showers. The showers in the late 19th to 20th century are relatively easy to examine due to the abundance of data; the 18th-century ones are particularly challenging. One way to respond to this challenge is to consult Chinese chronicles, which have a reputation of being the most complete records of pre-modern astronomical events. However, we note that even with the Chinese records, the examination would be far from exhaustive, partially because Chinese astronomers stopped recording meteor phenomena on a regular basis after around 1650, when European missionaries introduced Aristotle's theory of meteors to the royal Chinese astronomers, who quietly accepted it.
Nevertheless, we diligently examine the Chinese chronicles, including the Draft History of Qing, the draft\footnote{The project was brought to an end in 1930 due to the Chinese Civil War and was never completed.} of the official history of the Qing dynasty (1644--1911), the Veritable Records of the Qing, as well as local, unofficial accounts. We do not find any records that match the predicted timings -- either among modern records for the 20th-century showers or in the Chinese chronicles for the 1781 event. This suggests two scenarios: either Chinese astronomers missed or did not record the 1781 event, or D/Lexell became inactive before $\sim$1800. Compared to historic observations, contemporary meteor surveys such as the Canadian Meteor Orbit Radar \citep{Brown2008} and Cameras for Allsky Meteor Surveillance \citep{Jenniskens2011} have much better sensitivity and consistency, despite their shorter temporal coverage. To investigate meteor activity that may be detectable by contemporary and future meteor surveys, we rerun the aforementioned simulation with the same set of clones, except that the meteoroid size range is extended down to 0.5~mm and the integration is continued to the year 2050. As shown in Table~\ref{tbl:met-fu}, we find that half of the clones (clones 6--10) would have produced significant meteor showers in recent years. We then search the IAU Meteor Data Center, the global clearinghouse for meteor shower detections \citep{Jopek2017}, without finding any matching records. This suggests that the true orbit of D/Lexell likely did not resemble that of any of clones 6--10. If D/Lexell were on a smaller-$q$ orbit, the dynamics of the resulting meteoric material would be much more sensitive to the details of the close approaches to Jupiter, even though the parent would likely remain on a short-period orbit (Figure~\ref{fig:5clones}).
As can be seen in Table~\ref{tbl:met-fu}, the meteor showers originating from the clones with smaller $q$ (clones 1--5) have few similarities, implying that the dynamical evolution of the material varies wildly from one clone to another. Simulating clones on a denser grid over the orbital space would bring a clearer picture but is more computationally demanding. At this stage, we tentatively conclude that meteors potentially originating from a lower-$q$ D/Lexell would arrive in August to September from southerly radiants, at very low speeds. \startlongtable \begin{deluxetable}{lclcclc} \tablecaption{Predicted significant meteor showers in 2000--2050 originating from the 10 clones over the apparitions of 1770 and 1776. Also listed is the clone's orbit in 1770. The table is arranged by the increment of the clone's perihelion distance $q$ in 1770.\label{tbl:met-fu}} \tablehead{ \colhead{Clone} & \colhead{$D_\mathrm{SH}$ to nominal orbit} & \colhead{Center time (UT)} & \colhead{Duration} & \colhead{Geocentric radiant} & \colhead{Trail} & \colhead{ZHR} \\ } \startdata 1 & 0.0016 & \multicolumn{5}{c}{$q=0.6742$~au, $e=0.7872$, $i=1.55^\circ$} \\ && 2025 Sep 25 03:10 & 1~d & $\alpha=214^\circ$, $\delta=-53^\circ$, $v_\mathrm{G}=13$~km/s & 1770, 1776 & 5 \\ \hline 2 & 0.0012 & \multicolumn{5}{c}{$q=0.6742$~au, $e=0.7867$, $i=1.55^\circ$} \\ && 2030 Aug 15 20:27 & 12~hr & $\alpha=254^\circ$, $\delta=+10^\circ$, $v_\mathrm{G}=11$~km/s & 1776 & 30 \\ \hline 3 & 0.0012 & \multicolumn{5}{c}{$q=0.6744$~au, $e=0.7868$, $i=1.55^\circ$} \\ \multicolumn{7}{c}{n/a} \\ \hline 4 & 0.0011 & \multicolumn{5}{c}{$q=0.6744$~au, $e=0.7867$, $i=1.56^\circ$} \\ \multicolumn{7}{c}{n/a} \\ \hline 5 & 0.0010 & \multicolumn{5}{c}{$q=0.6744$~au, $e=0.7866$, $i=1.55^\circ$} \\ && 2043 Sep 22 14:46 & 6~hr & $\alpha=219^\circ$, $\delta=-46^\circ$, $v_\mathrm{G}=12$~km/s & 1770 & 70 \\ \hline 6 & 0.0005 & \multicolumn{5}{c}{$q=0.6746$~au, $e=0.7851$, $i=1.55^\circ$} \\ && 2005 Jul 3 03:28
& 1~hr & $\alpha=283^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1776 & 100 \\ && 2026 Aug 16 15:58 & 2~hr & $\alpha=305^\circ$, $\delta=-23^\circ$, $v_\mathrm{G}=16$~km/s & 1776 & 5 \\ && 2041 Jan 13 07:04 & 1~hr & $\alpha=290^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1770 & 200 \\ \hline \multicolumn{7}{l}{Nominal orbit of D/Lexell in 1770: $q=0.6746$~au, $e=0.7856$, $i=1.55^\circ$} \\ \hline 7 & 0.0006 & \multicolumn{5}{c}{$q=0.6747$~au, $e=0.7851$, $i=1.55^\circ$} \\ && 2005 Jul 3 03:00 & 1~hr & $\alpha=283^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 100 \\ && 2005 Sep 24 12:38 & 3~hr & $\alpha=306^\circ$, $\delta=-22^\circ$, $v_\mathrm{G}=10$~km/s & 1776 & 10 \\ && 2031 Jan 13 15:17 & 3~hr & $\alpha=290^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 10 \\ && 2036 Jan 14 04:47 & 2~hr & $\alpha=291^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=26$~km/s & 1770, 1776 & 10 \\ && 2041 Jan 13 20:56 & 6~hr & $\alpha=291^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1770, 1776 & 500 \\ \hline 8 & 0.0011 & \multicolumn{5}{c}{$q=0.6749$~au, $e=0.7846$, $i=1.55^\circ$} \\ && 2005 Jan 5 19:01 & 1~d & $\alpha=286^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 30 \\ && 2005 Jul 3 03:21 & 1~d & $\alpha=283^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 600 \\ && 2016 Jan 6 17:22 & 12~hr & $\alpha=286^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 30 \\ && 2016 Jul 3 12:00 & 1~d & $\alpha=283^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=24$~km/s & 1770 & 10 \\ && 2030 Jun 27 14:16 & 1~d & $\alpha=280^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 200 \\ && 2031 Jan 13 15:39 & 1~d & $\alpha=291^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=26$~km/s & 1770, 1776 & 50 \\ && 2035 Jun 27 18:43 & 1~d & $\alpha=280^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 10 \\ && 2036 Jan 14 05:43 & 1~d & $\alpha=291^\circ$, $\delta=-24^\circ$, 
$v_\mathrm{G}=27$~km/s & 1770, 1776 & 300 \\ && 2041 Jan 14 00:55 & 12~hr & $\alpha=291^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1770, 1776 & 1600 \\ && 2045 Jun 28 11:24 & 6~hr & $\alpha=281^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 100 \\ && 2046 Jun 26 06:06 & 12~hr & $\alpha=280^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=27$~km/s & 1776 & 500 \\ && 2050 Jun 29 01:38 & 8~hr & $\alpha=281^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 400 \\ \hline 9 & 0.0015 & \multicolumn{5}{c}{$q=0.6749$~au, $e=0.7841$, $i=1.55^\circ$} \\ && 2005 Jan 8 21:45 & 12~hr & $\alpha=288^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=26$~km/s & 1770, 1776 & 30 \\ && 2005 Jul 3 03:41 & 1~d & $\alpha=283^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770 & 100 \\ && 2015 Jul 4 08:56 & 2~hr & $\alpha=284^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 40 \\ && 2016 Jan 6 14:06 & 1~d & $\alpha=286^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1770 & 100 \\ && 2016 Jul 2 14:49 & 1~d & $\alpha=284^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 60 \\ && 2025 Jun 30 12:08 & 8~hr & $\alpha=282^\circ$, $\delta=-20^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 300 \\ && 2031 Jan 8 09:12 & 1~hr & $\alpha=288^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 30 \\ && 2036 Jan 15 16:41 & 1~d & $\alpha=292^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1776 & 200 \\ && 2050 Jun 28 17:40 & 3~hr & $\alpha=281^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 300 \\ \hline 10 & 0.0016 & \multicolumn{5}{c}{$q=0.6750$~au, $e=0.7840$, $i=1.54^\circ$} \\ && 2005 Jul 2 21:37 & 12~hr & $\alpha=283^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=25$~km/s & 1770 & 200 \\ && 2016 Jan 6 01:03 & 1~d & $\alpha=286^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1770 & 90 \\ && 2031 Jan 8 08:14 & 1~hr & $\alpha=288^\circ$, $\delta=-25^\circ$, $v_\mathrm{G}=25$~km/s & 1770, 1776 & 10 \\ && 2035 Jun 28 
10:10 & 10~hr & $\alpha=281^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 300 \\ && 2036 Jan 16 01:53 & 6~hr & $\alpha=292^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1776 & 300 \\ && 2041 Jan 14 10:23 & 10~hr & $\alpha=291^\circ$, $\delta=-24^\circ$, $v_\mathrm{G}=27$~km/s & 1776 & 300 \\ && 2046 Jun 26 06:05 & 15~hr & $\alpha=280^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=27$~km/s & 1776 & 400 \\ && 2050 Jun 28 17:53 & 3~hr & $\alpha=281^\circ$, $\delta=-21^\circ$, $v_\mathrm{G}=26$~km/s & 1776 & 300 \\ \enddata \end{deluxetable} \begin{figure} \includegraphics[width=0.5\textwidth]{fig-5clones.eps} \caption{Evolutionary paths of clones 1--5 over 1500--2000~AD assuming no non-gravitational effect.\label{fig:5clones}} \end{figure} \section{Conclusion} \label{sec:sum} We reviewed the case of long-lost comet D/Lexell, mainly based on a reanalysis of the observations taken by Charles Messier. We recalculated the orbit of D/Lexell and deduced the associated orbital covariance matrix, an important quantity that helped us to investigate the likely trajectory of the comet, especially after its close encounter with Jupiter in 1779. We found that there is a $98\%$ probability that D/Lexell has remained in the inner Solar System. This conclusion remains valid even if a significant degree of non-gravitational effect is considered. From Messier's observations, we deduced that D/Lexell was one of the largest near-Earth comets currently known, with a nucleus on the order of 10~km in diameter. The activity of the comet was close to the average of the cometary population. The large size of the nucleus suggests that if D/Lexell remained in the inner Solar System, it should have been detected, either as an active comet or as an asteroid in disguise.
The first scenario had been discussed throughout the 19th century and was concluded to be unlikely; we investigated the second scenario by looking among the known asteroids for orbital similarities with D/Lexell. We found that asteroid 2010 JL$_{33}$ has an orbit similar to that of D/Lexell. A test with the NEO population model suggested that the probability of chance alignment between D/Lexell and 2010 JL$_{33}$ is 0.8\%. We unsuccessfully attempted to derive a unique orbital solution (including non-gravitational effects) that links D/Lexell and 2010 JL$_{33}$. We noted that the orbital solution was extremely sensitive to the details of D/Lexell's close approaches to Jupiter; therefore, the case concerning the relation between 2010 JL$_{33}$ and D/Lexell is far from conclusive. We also simulated the dust footprint produced by a set of orbital clones of D/Lexell and found that, under certain circumstances, the footprint would be detectable at the Earth as one or more meteor showers. Clones with perihelion distances larger than that of the nominal orbit produced stronger (exceeding storm level) and more frequent meteor showers; clones with smaller perihelion distances were found to be more sensitive to close encounters with Jupiter and produced fewer meteor showers. The absence of strong meteor showers compatible with the predicted configuration suggests that the true orbit of D/Lexell resembles the latter case. This would make the dynamical evolution of associated meteoric materials more chaotic, while the parent would likely remain in a short-period orbit. The evidence available at this stage does not allow a conclusive statement to be made. 2010 JL$_{33}$ could well be D/Lexell itself or its descendant, but establishing a dynamical pathway that places D/Lexell onto the orbit of 2010 JL$_{33}$ while satisfying every detail is challenging.
Even if a pathway can be found, it would be another challenge to demonstrate that such a pathway is a unique solution of the problem rather than an ad hoc solution that merely satisfies our assumptions. Meanwhile, careful observations of the meteors potentially originating from D/Lexell could provide important diagnostic information that would not be otherwise retrievable, which could allow post-facto orbit improvement of D/Lexell even though the comet is long lost. \acknowledgments We thank an anonymous referee for his/her careful reading and valuable comments that helped improve the manuscript. Q.-Z. is supported by the GROWTH project (National Science Foundation Grant No. 1545949). P. W. is supported by the Natural Sciences and Engineering Research Council of Canada. M.-T. is supported by a NASA grant to David Jewitt. This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca) and Compute/Calcul Canada. The authors wish to dedicate this work to Charles Messier (1730--1817), who, besides inspiring the authors to draft this work, is most notable for publishing the celebrated Messier catalog, a compilation that has kept the authors busy during their stargazing sessions. \end{CJK*} \bibliographystyle{aasjournal}
\makeatletter
\renewcommand\section{%
  \@ifstar{%
    \@startsection{section}{1}%
      \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
      {\normalfont\scshape\centering%
  }}{%
    \@startsection{section}{1}%
      \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
      {\normalfont\scshape\centering%
      \S }}}
\makeatother
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\let\epsilon=\varepsilon
\newtheoremstyle{note}% name
  {4pt}%   space above
  {4pt}%   space below
  {\sl}%   body font
  {}%      indent amount
  {\itshape}% theorem head font
  {.}%     punctuation after theorem head
  {.5em}%  space after theorem head
  {}%      theorem head spec
\theoremstyle{note}
\newtheorem{claim}{Claim}
\usepackage{lineno}
\newcommand*\patchAmsMathEnvironmentForLineno[1]{%
  \expandafter\let\csname old#1\expandafter\endcsname\csname #1\endcsname
  \expandafter\let\csname oldend#1\expandafter\endcsname\csname end#1\endcsname
  \renewenvironment{#1}%
    {\linenomath\csname old#1\endcsname}%
    {\csname oldend#1\endcsname\endlinenomath}}%
\newcommand*\patchBothAmsMathEnvironmentsForLineno[1]{%
  \patchAmsMathEnvironmentForLineno{#1}%
  \patchAmsMathEnvironmentForLineno{#1*}}%
\AtBeginDocument{%
  \patchBothAmsMathEnvironmentsForLineno{equation}%
  \patchBothAmsMathEnvironmentsForLineno{align}%
  \patchBothAmsMathEnvironmentsForLineno{flalign}%
  \patchBothAmsMathEnvironmentsForLineno{alignat}%
  \patchBothAmsMathEnvironmentsForLineno{gather}%
  \patchBothAmsMathEnvironmentsForLineno{multline}%
}
\begin{document}
\onehalfspace \footskip=28pt \shortdate \yyyymmdddate \settimeformat{ampmtime} \date{\today, \currenttime}
\title[Powers of Hamilton cycles in perturbed hypergraphs]%
{Powers of tight Hamilton cycles in randomly perturbed hypergraphs}
\author[W. Bedenknecht]{Wiebke Bedenknecht} \author[J. Han]{Jie Han} \author[Y. Kohayakawa]{Yoshiharu Kohayakawa} \author[G. O.
Mota]{Guilherme Oliveira Mota} \address{Fachbereich Mathematik, Universit\"at Hamburg, Bundesstra\ss{}e~55, D-20146 Hamburg, Germany} \email{Wiebke.Bedenknecht@uni-hamburg.de} \address{Instituto de Matem\'atica e Estat\'{\i}stica, Universidade de S\~ao Paulo, S\~ao Paulo, Brazil} \email{\{jhan|yoshi\}@ime.usp.br} \address{Centro de Matem\'atica, Computa\c c\~ao e Cogni\c c\~ao, Universidade Federal do ABC, Santo Andr\'e, Brazil} \email{g.mota@ufabc.edu.br} \thanks{% The second author was supported by FAPESP (Proc.~2014/18641-5). The third author was supported by FAPESP (Proc.~2013/03447-6) and by CNPq (Proc.~459335/2014-6, 310974/2013-5). The fourth author was supported by FAPESP (Proc.~2013/11431-2, 2013/20733-2). The collaboration of the authors was supported by the CAPES/DAAD PROBRAL programme (Proj.~430/15, 57143515). } \keywords{Powers of Hamilton cycles, random hypergraphs, perturbed hypergraphs} \subjclass[2010]{} \begin{abstract} For $k\ge 2$ and $r\ge 1$ such that $k+r\ge 4$, we prove that, for any $\alpha>0$, there exists $\epsilon>0$ such that the union of an $n$-vertex $k$-graph with minimum codegree $\left(1-\binom{k+r-2}{k-1}^{-1}+\alpha\right)n$ and a binomial random $k$-graph $\mathbb{G}^{(k)}(n,p)$ with $p\ge n^{-\binom{k+r-2}{k-1}^{-1}-\epsilon}$ on the same vertex set contains the $r$${}^{\text{th}}${} power of a tight Hamilton cycle with high probability. This result for $r=1$ was first proved by McDowell and Mycroft. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} \subsection{Hamiltonian cycles} The study of Hamiltonicity (the existence of a cycle as a spanning subgraph) has been a central and fruitful area in graph theory. In particular, by a celebrated result of Karp~\cite{Karp}, the decision problem for Hamiltonicity in general graphs is known to be NP-complete.
Therefore it is likely that good characterizations of graphs with Hamilton cycles do not exist, and it becomes natural to study sufficient conditions that guarantee Hamiltonicity. Among a large variety of such results, the most famous one is the classical theorem of Dirac from~1952: every $n$-vertex graph ($n\ge 3$) with minimum degree at least $n/2$ is Hamiltonian~\cite{Di52}. Another well-studied object in graph theory is the binomial random graph $\mathbb{G}(n,p)$, which contains $n$ vertices and each pair of vertices forms an edge with probability $p$ independently of all other pairs. P\'osa~\cite{Posa} and Korshunov~\cite{Korshunov} independently determined the threshold for Hamiltonicity in $\mathbb{G}(n,p)$, which is~$(\log n)/n$. This implies that almost all dense graphs are Hamiltonian. In this sense the degree constraint in Dirac's theorem is very strong. In fact, Bohman, Frieze and Martin~\cite{BFM} studied the random graph model that starts with a given, dense graph and adds $m$ random edges. In particular, they showed that for every $\alpha>0$ there is $c=c(\alpha)$ such that if we start with a graph with minimum degree at least~$\alpha n$ and we add~$cn$ random edges, then the resulting graph is Hamiltonian a.a.s.\ (as usual, we say that an event happens \emph{asymptotically almost surely}, or a.a.s., if it happens with probability tending to $1$ as~$n\to\infty$). By considering the complete bipartite graph with vertex classes of sizes~$\alpha n$ and~$(1-\alpha)n$, one sees that the result above is tight up to the value of~$c$. It is natural to study Hamiltonicity problems in uniform hypergraphs. Given $k\ge 2$, a \emph{$k$-uniform hypergraph} (in short, \emph{$k$-graph}) $H=(V,E)$ consists of a vertex set $V$ and an edge set $E\subseteq \binom{V}{k}$; thus, every edge of~$H$ is a $k$-element subset of~$V$.
Given a $k$-graph $H$ with a set $S$ of $d$ vertices (where $1 \le d \le k-1$) we define $N_{H} (S)$ to be the collection of $(k-d)$-sets $T$ such that $S\cup T\in E(H)$, and let $\deg_H(S):=|N_H(S)|$ (the subscript $H$ is omitted whenever~$H$ is clear from the context). The \emph{minimum $d$-degree $\delta _{d} (H)$} of $H$ is the minimum of $\deg_{H} (S)$ over all $d$-vertex sets $S$ in $H$. We refer to $\delta _{k-1}(H)$ as the \emph{minimum codegree} of~$H$. In the last two decades, there has been growing interest in extending Dirac's theorem to $k$-graphs. Among other notions of cycles in $k$-graphs (e.g., Berge cycles), the following `uniform' cycles have attracted much attention. For integers $1\le \ell \le k-1$ and $m\ge 3$, a $k$-graph $F$ with $m(k-\ell)$ vertices and $m$ edges is called an \emph{$\ell$-cycle} if its vertices can be ordered cyclically so that each of its edges consists of $k$ consecutive vertices and every two consecutive edges (in the natural order of the edges) share exactly $\ell$ vertices. Usually $(k-1)$-cycles are also referred to as \emph{tight} cycles. We say that a $k$-graph contains a \emph{Hamilton $\ell$-cycle} if it contains an $\ell$-cycle as a spanning subgraph. In view of Dirac's theorem, minimum $d$-degree conditions that force Hamilton $\ell$-cycles (for $1\le d,\,\ell\le k-1$) have been studied intensively~\cite{BMSSS1, BMSSS2, BHS, CzMo, GPW, HS, HZ2, HZ1, KKMO, KMO, KO, RRRSS, RoRu14, RoRuSz06, RRS08, RRS11}. Let $\mathbb{G}^{(k)}(n,p)$ denote the binomial random $k$-graph on $n$ vertices, where each $k$-tuple forms an edge independently with probability $p$. The threshold for the existence of Hamilton $\ell$-cycles has been studied by Dudek and Frieze~\cite{DuFr1, DuFr2}, who proved that for $\ell=1$ the threshold is $(\log n)/n^{k-1}$, and for $\ell\ge 2$ the threshold is $1/n^{k-\ell}$ (they also determined sharp thresholds for every~$k\ge 4$ and $\ell=k-1$).
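These degree notions are easy to check by brute force on small examples. The following sketch (our own illustration; the helper name is hypothetical and not part of the paper) computes the minimum $d$-degree of a $k$-graph given as an edge list:

```python
from itertools import combinations

def min_d_degree(n, edges, d):
    """Minimum d-degree of a k-graph on vertex set {0, ..., n-1}: the
    minimum, over all d-sets S, of the number of (k-d)-sets T with
    S ∪ T an edge."""
    edge_set = {frozenset(e) for e in edges}
    k = len(next(iter(edge_set)))
    return min(
        sum(1 for T in combinations(set(range(n)) - set(S), k - d)
            if frozenset(S).union(T) in edge_set)
        for S in combinations(range(n), d)
    )

# Complete 3-graph on 5 vertices: every pair of vertices extends to an
# edge by any of the 3 remaining vertices, so the minimum codegree is 3.
K5_3 = list(combinations(range(5), 3))
print(min_d_degree(5, K5_3, 2))  # 3
```

For the complete $k$-graph $K_n^{(k)}$ the minimum codegree is $n-k+1$, matching the output above for $n=5$, $k=3$.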
Krivelevich, Kwan and Sudakov~\cite{KKS} considered randomly perturbed $k$-graphs, which are $k$-graphs obtained by adding random edges to a fixed $k$-graph. They proved the following theorem, which mirrors the result of Bohman, Frieze and Martin~\cite{BFM} for randomly perturbed graphs mentioned earlier. \begin{thm}\label{thm:KKS}\cite{KKS} For any~$k\geq2$ and~$\alpha>0$, there is~$c_k=c_k(\alpha)$ for which the following holds. Let~$H$ be a $k$-graph on $n\in(k-1)\mathbb{N}$ vertices with $\delta_{k-1}(H)\geq \alpha n$. If $p=c_k n^{-(k-1)}$, then the union $H\cup \mathbb{G}^{(k)}(n,p)$ a.a.s.\ contains a Hamilton $1$-cycle. \end{thm} The authors of~\cite{KKS} also obtained a similar result for perfect matchings. These results are tight up to the value of~$c_k$, as shown by a simple `bipartite' construction. McDowell and Mycroft~\cite{McMy} and, subsequently, Han and Zhao~\cite{HZ_pert} extended Theorem~\ref{thm:KKS} to Hamilton $\ell$-cycles and other degree conditions. \subsection{Powers of Hamilton cycles} Powers of cycles are natural generalizations of cycles. Given $k\ge 2$ and~$r\ge 1$, we say that a $k$-graph with~$m$ vertices is an \emph{$r$${}^{\text{th}}${} power of a tight cycle} if its vertices can be ordered cyclically so that each consecutive $k+r-1$ vertices span a copy of~$K_{k+r-1}^{(k)}$, the complete $k$-graph on $k+r-1$ vertices, and there are no other edges than the ones forced by this condition. This extends the notion of (tight) cycles in hypergraphs, which corresponds to the case $r=1$. The existence of powers of paths and cycles has also been intensively studied. For example, the famous P\'osa--Seymour conjecture, which was proved by Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KoSaSz98a,KoSaSz98b} for sufficiently large graphs, states that every $n$-vertex graph with minimum degree at least $r n/(r+1)$ contains the $r$${}^{\text{th}}${} power of a Hamilton cycle. 
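In the graph case ($k=2$), the $r$${}^{\text{th}}${} power of a Hamilton cycle joins every two vertices at cyclic distance at most $r$, and is therefore $2r$-regular. A minimal construction sketch (our own illustration, with hypothetical names, not taken from any of the cited works):

```python
from itertools import combinations

def power_of_cycle(n, r):
    """Edge set of the r-th power of the n-cycle (assumes n > 2r):
    u ~ v iff the cyclic distance between u and v is at most r."""
    return {frozenset((u, v)) for u, v in combinations(range(n), 2)
            if min((u - v) % n, (v - u) % n) <= r}

E = power_of_cycle(10, 2)  # square of a Hamilton cycle on 10 vertices
degrees = {v: sum(1 for e in E if v in e) for v in range(10)}
print(len(E), sorted(set(degrees.values())))  # 20 edges, all degrees 2*r = 4
```

The P\'osa--Seymour bound $rn/(r+1)$ is the minimum degree that forces such a $2r$-regular spanning structure.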
A general result of Riordan~\cite{Ri00} implies that, for $r\geq 3$, the threshold for the existence of the $r$${}^{\text{th}}${} power of a Hamilton cycle in $\mathbb{G}(n,p)$ is $n^{-1/r}$. The case~$r=2$ was investigated by K\"uhn and Osthus~\cite{KuOs12}, who proved that $p\ge n^{-1/2+\epsilon}$ suffices for the existence of the square of a Hamilton cycle in $\mathbb{G}(n,p)$, which is sharp up to the~$n^\epsilon$ factor. This was further sharpened by Nenadov and {\v S}kori{\'c}~\cite{NeSk16}. Moreover, Bennett, Dudek and Frieze~\cite{BeDuFr16} proved a result for the square of a Hamilton cycle in randomly perturbed graphs, extending the result of Bohman, Frieze and Martin~\cite{BFM}. \begin{thm}\label{thm:BDF}\cite{BeDuFr16} For any $\alpha>0$ there is~$K>0$ such that the following holds. Let~$G$ be a graph with $\delta(G)\ge (1/2+\alpha) n$ and suppose $p=p(n)\ge K n^{-2/3}\log^{1/3}n$. Then the union $G\cup \mathbb{G}(n,p)$ a.a.s.\ contains the square of a Hamilton cycle. \end{thm} Note that in Theorem~\ref{thm:BDF} the randomness that is required is much weaker than the one needed in the result for the pure random model (which is essentially $n^{-1/2}$). The authors of~\cite{BeDuFr16} also asked for similar results for higher powers of Hamilton cycles in randomly perturbed graphs. Parczyk and Person~\cite[Theorem~3.7]{PP} proved that, for $k\geq 3$ and $r\geq 2$, the threshold for the existence of an $r$${}^{\text{th}}${} power of a tight Hamilton cycle in the random $k$-graph $\mathbb{G}^{(k)}(n,p)$ is $n^{-\binom{k+r-2}{k-1}^{-1}}$. Our main result, Theorem~\ref{main} below, shows that if we consider randomly perturbed $k$-graphs $H\cup \mathbb{G}^{(k)}(n,p)$ with $\delta_{k-1}(H)$ reasonably large, then $p=p(n)\ge n^{-\binom{k+r-2}{k-1}^{-1}-\epsilon}$ is enough to guarantee the existence of an $r$${}^{\text{th}}${} power of a tight Hamilton cycle with high probability.
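To make the exponents concrete, the following quick computation (our own sketch, not part of the paper) evaluates the threshold exponent $c=\binom{k+r-2}{k-1}^{-1}$ for small $k$ and $r$. Note that for $k=2$ we get $\binom{r}{1}=r$, so $c=1/r$, consistent with the $n^{-1/r}$ threshold for graphs quoted above:

```python
from math import comb

def c_exponent(k, r):
    """Threshold exponent c = 1 / binom(k+r-2, k-1): the r-th power of a
    tight Hamilton cycle appears in G^(k)(n,p) around p = n^(-c), while
    the perturbed model only needs p >= n^(-c-eps)."""
    return 1 / comb(k + r - 2, k - 1)

for k, r in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    print(f"k={k}, r={r}: c = {c_exponent(k, r):.4f}")
```

For instance, $k=2$, $r=2$ gives $c=1/2$, the exponent in the K\"uhn--Osthus result, and $k=3$, $r=3$ gives $c=1/6$.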
\begin{thm}[Main result]\label{main} For all integers $k\ge 2$ and $r\ge 1$ such that $k+r\ge 4$, and for all $\alpha>0$, there is $\epsilon>0$ such that the following holds. Suppose~$H$ is a $k$-graph on $n$ vertices with \begin{equation} \label{eq:main_min_deg} \delta_{k-1}(H)\ge \left( 1- \binom{k+r-2}{k-1}^{-1} + \alpha \right) {n} \end{equation} and $p=p(n)\ge n^{-\binom{k+r-2}{k-1}^{-1}-\epsilon}$. Then~{a.a.s.}~the union $H\cup \mathbb{G}^{(k)}(n,p)$ contains the $r$${}^{\text{th}}${} power of a tight Hamilton cycle. \end{thm} We remark that our proof only gives a small $\epsilon$, and it would be interesting to know if one can get a larger gap in comparison with the result in the purely random model, as in Theorem~\ref{thm:BDF}. We also remark that the case $k\ge 3$ and $r=1$ of Theorem~\ref{main} was first proved by McDowell and Mycroft~\cite{McMy}. Other results in randomly perturbed graphs can be found in~\cite{BTW, BMPP, KKS2, BHKMPP, HZ_pert}. The core of the proof of Theorem~\ref{main} follows the \textit{Absorbing Method} introduced by R\"odl, Ruci\'nski, and Szemer\'edi in~\cite{RoRuSz06}, combined with results concerning binomial random hypergraphs. This paper is organized as follows. In Section~\ref{sec:random} we prove some results concerning random hypergraphs. Section~\ref{sec:abs-con} contains two essential lemmas in our approach, namely, Lemma~\ref{lm:conn} (Connecting Lemma) and Lemma~\ref{lm:abs} (Absorbing Lemma). In Section~4 we prove our main result, Theorem~\ref{main}. Some remarks concerning the hypotheses in Theorem~\ref{main} are given in Section~\ref{sec:concluding}. Throughout the paper, we omit floor and ceiling functions. \section{Subgraphs of random hypergraphs}\label{sec:random} In this section we prove some results related to binomial random $k$-graphs. We will apply Chebyshev's inequality and Janson's inequality to prove some concentration results that we shall need.
For convenience, we state these two inequalities in the form we need (inequalities~\eqref{eq:2} and~\eqref{eq:1} below follow, respectively, from Janson's and Chebyshev's inequalities; see, e.g.,~\cite[Theorem~2.14]{JLR}). We first recall Janson's inequality. Let $\Gamma$ be a finite set and let $\Gamma_p$ be a random subset of $\Gamma$ such that each element of $\Gamma$ is included in $\Gamma_p$ independently with probability $p$. Let ${\mathcal S}$ be a family of non-empty subsets of $\Gamma$ and for each $S\in {\mathcal S}$, let $I_S$ be the indicator random variable for the event $S\subseteq \Gamma_p$. Thus each $I_S$ is a Bernoulli random variable $\mathop{\text{\rm Be}}\nolimits(p^{|S|})$. Let $X:=\sum_{S\in {\mathcal S}} I_S$ and $\lambda := \mathbb E(X)$. Let $\Delta_{X} := \sum_{S\cap T\neq\emptyset}\mathbb{E}(I_S I_T)$, where the sum is over all ordered pairs $S, T\in {\mathcal S}$ (note that the sum includes the pairs $(S,S)$ with $S\in {\mathcal S}$). Then Janson's inequality says that, for any $0\le t\le \lambda$, \begin{equation} \mathbb{P}(X\leq \lambda -t)\leq \exp \left ( -\dfrac{t^2}{2\Delta_{X}}\right ). \label{eq:2} \end{equation} Next note that $\mathop{\text{\rm Var}}\nolimits(X)=\mathbb E(X^2) - \mathbb E(X)^2 \le \Delta_X$. Then, by Chebyshev's inequality, \begin{equation} \mathbb{P}(X\ge 2\lambda) \le \frac{\mathop{\text{\rm Var}}\nolimits(X)}{\lambda^2} \le \frac{\Delta_X}{\lambda^2}. \label{eq:1} \end{equation} Consider the random $k$-graph $\mathbb{G}^{(k)}(n, p)$ on an $n$-vertex set $V$. Note that we can view $\mathbb{G}^{(k)}(n, p)$ as $\Gamma_p$ with $\Gamma = \binom{V}k$. For two $k$-graphs $G$ and $H$, let $G\cap H$ (or $G\cup H$) denote the $k$-graph with vertex set $V(G)\cap V(H)$ (or $V(G)\cup V(H)$) and edge set $E(G)\cap E(H)$ (or $E(G)\cup E(H)$). Finally, let \begin{equation*} \Phi_F = \Phi_F(n, p)= \min \{n^{v_H} p^{e_H}: H\subseteq F\text{ and } e_H>0\}. \end{equation*} The following simple proposition is useful. 
\begin{prop}\label{prop:1} Let $F$ be a $k$-graph with $s$ vertices and $f$ edges and let $G:=\mathbb{G}^{(k)}(n, p)$. Let ${\mathcal A}$ be a family of ordered $s$-subsets of $V=V(G)$. For each~$A\in{\mathcal A}$, let~$I_A$ be the indicator random variable of the event that $A$ spans a labelled copy of~$F$ in~$G$. Let $X=\sum_{A\in {\mathcal A}} I_A$. Then $\Delta_{X} \leq s! 2^{2s} n^{2s}p^{2f}/\Phi_F$. \end{prop} \begin{proof} For each ordered $s$-subset $A$ of $V$, let $\alpha_A$ be the bijection from $V(F)$ to $A$ following the orders of $V(F)$ and $A$. Let~$F_A$ be the labelled copy of $F$ spanned on $A$. For any $T\subseteq V(F)$ with $|F[T]|>0$, denote by $W_{T}$ the set of all pairs $A,\,B\in {\mathcal A}$ such that $A\cap B=\alpha_A(T)$. If~$T$ has $s'$ vertices and $F[T]$ has $f'$ edges, then for every $\{A, B\}\in W_{T}$, $F_A\cup F_B$ has exactly $2s-s'$ vertices and at least $2f-f'$ edges. Therefore, we can bound $\Delta_{X}$ by \[ \Delta_{X} \le \sum_{T\subseteq V(F)} |W_{T}| p^{2f - f'}\, . \] Given integers $n$ and $b$, let $(n)_b:=n(n-1)(n-2)\cdots (n-b+1) = n!/(n-b)!$. Note that there are at most $\binom{n}{2s-s'}$ choices for the vertex set of $F_A\cup F_B$, and there are at most \[ (2s-s')_s\cdot\binom{s}{s'}s!\le (2s-s')! s!2^{s} \] ways to label each $(2s-s')$-set to get $\{A, B\}$. Thus we have $|W_{T}|\le s! 2^{s}n^{2s-s'}$ and \[ \Delta_{X} \le \sum_{T\subseteq V(F)} s! 2^{s}n^{2s-s'} p^{2f - f'}\le \sum_{T\subseteq V(F)} s!2^{s}n^{2s} p^{2f}/\Phi_F \leq s! 2^{2s} n^{2s}p^{2f}/\Phi_F, \] because there are at most $2^s$ choices for $T$. \end{proof} The following lemma gives the properties of $\mathbb{G}^{(k)}(n,p)$ that we will use. Throughout the rest of the paper, we write $\alpha \ll \beta \ll \gamma$ to mean that `we can choose the positive constants $\alpha$, $\beta$ and~$\gamma$ from right to left'.
More precisely, there are functions $f$ and $g$ such that, given~$\gamma$, whenever $\beta \leq f(\gamma)$ and $\alpha \leq g(\beta)$, the subsequent statement holds. Hierarchies of other lengths are defined similarly. \begin{lemma}\label{lm:gnp} Let $F$ be a labelled $k$-graph with $b$ vertices and $a$ edges. Suppose $1/n\ll 1/C \ll \gamma, 1/a, 1/b, 1/s$. Let $V$ be an $n$-vertex set, and let ${\mathcal F}_1, \dots, {\mathcal F}_t$ be $t\le n^{s}$ families of $\gamma n^{b}$ ordered $b$-sets on $V$. If $p=p(n)$ is such that $\Phi_{F}(n,p) \ge C n$, then the following properties hold for the binomial random $k$-graph $G=\mathbb{G}^{(k)}(n,p)$ on~$V$. \begin{enumerate}[label=\upshape({\itshape \roman*\,})] \item With probability at least $1-\exp(-n)$, every induced subgraph of $G$ of order $\gamma n$ contains a copy of $F$.\label{item-I} \item With probability at least $1-\exp(-n)$, for every $i\in [t]$, there are at least $(\gamma/2) n^{b}p^{a}$ ordered $b$-sets in ${\mathcal F}_i$ that span labelled copies of $F$.\label{item-II} \item With probability at least $1-1/\sqrt n$, there are at most $2 n^{b} p^{a}$ ordered $b$-sets of vertices of $G$ that span labelled copies of $F$.\label{item-III} \item With probability at least $1-1/{\sqrt{ n}}$, the number of overlapping (i.e., not vertex-disjoint) pairs of copies of $F$ in $G$ is at most $4b^2 n^{2b-1} p^{2a}$.\label{item-IV} \end{enumerate} \end{lemma} \begin{proof} Let ${\mathcal A}$ be a family of ordered $b$-sets of vertices in~$V$. For each $A\in {\mathcal A}$, let~$I_A$ be the indicator random variable of the event that $A$ spans a labelled copy of~$F$ in~$G$. Let $X_{\mathcal A}=\sum_{A\in {\mathcal A}} I_A$. From the hypothesis that~$\Phi_{F} \ge C n$ and Proposition~\ref{prop:1}, we have \begin{equation}\label{eq:delta} \Delta_{X} \leq b! 2^{2b} n^{2b}p^{2a}/\Phi_{F}\le b! 2^{2b} n^{2b}p^{2a}/(C n).
\end{equation} Furthermore, let ${\mathcal S}$ consist of the edge sets of the labelled copies of $F$ spanned on $A$ in the complete $k$-graph on $V$ for all $A\in {\mathcal A}$. Since we can write $X_{\mathcal A}=\sum_{S\in {\mathcal S}} I_S$, where $I_S$ is the indicator variable for the event $S\subseteq E(G)$, we can apply~\eqref{eq:2} to $X_{\mathcal A}$. For~\ref{item-I}, fix a vertex set $W$ of $G$ with $|W|=\gamma n$. Let ${\mathcal A}$ be the family of all labelled $b$-sets in $W$. Let $X_{\mathcal A}$ be the random variable that counts the number of members of ${\mathcal A}$ that span a labelled copy of $F$ and thus $\mathbb{E}[X_{\mathcal A}]= (\gamma n)_{b} p^a$. By~\eqref{eq:delta} and~\eqref{eq:2} and the fact that $1/C\ll \gamma, 1/b$, we have $\mathbb{P} (X_{\mathcal A}=0) \le \exp( -2 n)$. By the union bound, the probability that there exists a vertex set $W$ of size $\gamma n$ such that $X_{\mathcal A}=0$ is at most $2^n \exp(-2 n) \le \exp(-n)$, which proves~\ref{item-I}. For~\ref{item-II}, fix $i\in [t]$ and let $X_{{\mathcal F}_i}$ be the random variable that counts the members of ${\mathcal F}_i$ that span $F$. Note that $\mathbb{E}[X_{{\mathcal F}_i}]=\gamma n^{b}p^a$. Thus~\eqref{eq:2} implies that $\mathbb{P}\big(X_{{\mathcal F}_i}\leq (\gamma/2)n^{b}p^{a}\big)\le \exp(- 2n)$. By the union bound and the fact that $n^{s} \exp(- 2n) \le \exp(- n)$, we see that~\ref{item-II} holds. For~\ref{item-III}, let $X_3$ be the random variable that counts the number of labelled copies of~$F$ in~$G$. Since $\mathbb{E}(X_3)= (n)_{b}p^{a}$, by~\eqref{eq:delta} and~\eqref{eq:1}, we obtain \[ \mathbb{P}(X_3\ge 2p^an^b)\leq \mathbb{P}(X_3\ge 2\mathbb{E}[X_3])\le \frac{\Delta_{X_3}}{\mathbb{E}[X_3]^2} \le \frac{ b! 2^{2b} n^{2b}p^{2a}/(C n) }{( (n)_{b}p^{a})^2} \le \frac1{\sqrt n}. \] For~\ref{item-IV}, let $Y$ be the random variable that denotes the number of overlapping pairs of copies of~$F$ in~$G$. We first estimate $\mathbb{E}[Y]$. 
We write $Y=\sum_{A\in {\mathcal Q}}I_A$, where ${\mathcal Q}$ is the collection of the edge sets of overlapping pairs of labelled copies of $F$ in the complete $k$-graph on $n$ vertices. Note that if two overlapping copies of $F$ do not share any edge, then they induce at most $2b-1$ vertices and exactly $2a$ edges. Note that for $1\le i\le b$, there are \[ \binom{n}{2b-i} (2b-i)_{b}\binom{b}{i} b! = (n)_{2b - i} \binom b i (b)_i \le (n)_{2b - i}(b)_i ^2 \] members of ${\mathcal Q}$ whose two copies of $F$ share exactly $i$ vertices. Thus, the number of choices for the vertex sets of pairs of copies which induce at most $2b-2$ vertices is at most $\sum_{2\le i\le b} (n)_{2b - i} (b)_i^2 \le n^{2b-1}$. By the definition of $\Delta_{X_3}$ and \eqref{eq:delta} we have \[ {n}^{2b - 1} b^2 p^{2a}/2\le \mathbb{E}[Y]\le (n)_{2b - 1} b^2 \cdot p^{2a} + n^{2b-1} \cdot p^{2a} + \Delta_{X_3} \le 2 b^2 {n}^{{2b - 1}} p^{2a}. \] We next compute $\Delta_Y$. For each $A\in {\mathcal Q}$, let $S_A$ denote the $k$-graph induced by $A$ (thus $S_A$ is the union of two overlapping copies of $F$). For each $A, B\in {\mathcal Q}$, write $S_A:=F_1\cup F_2$ and $S_B:=F_3\cup F_4$, where each $F_i$ is a copy of $F$ for $i\in [4]$ such that $E(F_1)\cap E(F_3)\neq \emptyset$. Define $H_1:=F_1\cap F_2$, $H_2:=(F_1\cup F_2)\cap F_3$ and $H_3:=(F_1\cup F_2\cup F_3)\cap F_4$. Since $V(F_1)\cap V(F_2)\neq\emptyset $, $V(F_3)\cap V(F_4)\neq\emptyset $, and $E(F_1)\cap E(F_3)\neq \emptyset$, we know that $v_{H_i}\ge 1$ for $i=1,2,3$. We claim that $n^{v_{H_i}}p^{e_{H_i}}\ge n$ for $i=1,2,3$. Indeed, since each $H_i$ is a subgraph of $F$, if $e_{H_i}\ge 1$, then $n^{v_{H_i}}p^{e_{H_i}}\ge \Phi_{F}\ge C n$; otherwise $e_{H_i}=0$ and then we have $n^{v_{H_i}}p^{e_{H_i}} = n^{v_{H_i}} \ge n^1=n$. So we have \begin{equation}\label{eq:estnp} n^{v_{H_1}}p^{e_{H_1}}\cdot n^{v_{H_2}}p^{e_{H_2}}\cdot n^{v_{H_3}}p^{e_{H_3}} \ge n^{3}. 
\end{equation} Now we define $\Delta_{H_1, H_2, H_3}= \sum_{A, B} \mathbb{E}[I_A I_B]$, where the sum is over the pairs $\{A, B\}$ with $A\cap B\neq \emptyset$ that generate $H_1, H_2, H_3$. Observe that the sum contains at most \[ \binom n{4b-v_{H_1} - v_{H_2} -v_{H_3}} (4b-v_{H_1} - v_{H_2} -v_{H_3})_{b}^4 < n^{4b-(v_{H_1} + v_{H_2} + v_{H_3})} (4b)^{3b} \] terms. Thus, from~\eqref{eq:estnp}, we obtain \begin{align*} \Delta_{H_1, H_2, H_3}= \sum_{A, B} \mathbb{E}[I_A I_B] \le (4b)^{3b} n^{4b-(v_{H_1} + v_{H_2} + v_{H_3})} p^{4a - (e_{H_1} + e_{H_2} + e_{H_3})} \le (4b)^{3b} n^{4b - 3} p^{4a}. \end{align*} Let $D=D(b,k,r)$ be the number of choices for $H_1, H_2, H_3$, thus \[ \Delta_Y = \sum_{H_1, H_2, H_3} \Delta_{H_1, H_2, H_3} \le D (4b)^{3b} n^{4b-3}p^{4a}. \] Therefore, by~\eqref{eq:1} and the fact that $n$ is large enough, we get \begin{align*} \mathbb{P}\big(Y\ge 4b^2 n^{2b-1}p^{2a}) &\le \mathbb{P}\big(Y\ge 2\mathbb{E}[Y]) \le \frac{\Delta_Y}{\mathbb{E}[Y]^2} \le \frac{D(4b)^{3b} n^{4b-3}p^{4a}}{({n}^{{2b-1}} p^{2a}/2)^2} \le \frac1{\sqrt{n}}. \end{align*} This verifies~\ref{item-IV}. \end{proof} For $m\ge k+r-1$, denote by $P_{m}^{k,r}$ the $r$${}^{\text{th}}${} power of a $k$-uniform tight path on $m$ vertices. Similarly, write~$C_{m}^{k,r}$ for the $r$${}^{\text{th}}${} power of a $k$-uniform tight cycle on $m$ vertices. For simplicity we say that $P_{m}^{k,r}$ is an \emph{$(r,k)$-path} and~$C_{m}^{k,r}$ is an \emph{$(r,k)$-cycle}. We write $P_m^r$ for $P_{m}^{k,r}$ whenever~$k$ is clear from the context. Moreover, the ends of $P_m^r$ are its first and last $k+r-1$ vertices (with the order in the $(r,k)$-path). We end this section by computing $\Phi_{P_b^r}$ for the $p=p(n)\ge n^{-\binom{k+r-2}{k-1}^{-1}-\epsilon}$ as in Theorem~\ref{main}. For $b\geq k+r-1$, let \[ g(b):= \left( b - \frac{(k-1)(k+r-1)}{k} \right) \binom{k+r-2}{k-1}. \] Clearly $g$ is an increasing function. 
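Since $g(m)$ will turn out to equal the number of edges of $P_m^{k,r}$, a direct enumeration provides a quick sanity check. The sketch below is our own illustration (hypothetical names), assuming only the definition of $P_m^{k,r}$ given earlier; exact rational arithmetic avoids floating-point noise in the comparison:

```python
from itertools import combinations
from fractions import Fraction
from math import comb

def edges_of_power_path(m, k, r):
    """Edge set of P_m^{k,r}: every k+r-1 consecutive vertices of 0..m-1
    span a complete k-graph, with no further edges."""
    E = set()
    for i in range(m - (k + r - 1) + 1):
        E |= {frozenset(e) for e in combinations(range(i, i + k + r - 1), k)}
    return E

def g(b, k, r):
    # exact arithmetic so the comparison below is not spoiled by floats
    return (b - Fraction((k - 1) * (k + r - 1), k)) * comb(k + r - 2, k - 1)

for (k, r, m) in [(2, 1, 7), (2, 2, 8), (3, 1, 9), (3, 2, 10)]:
    assert len(edges_of_power_path(m, k, r)) == g(m, k, r)
print("edge counts match g(m) for all tested (k, r, m)")
```

For example, for $k=3$, $r=2$, $m=10$ the enumeration gives $22$ edges, and $g(10)=(10-8/3)\binom{3}{2}=22$.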
Note that the number of edges in $P_m^{k,r}$ is given by \begin{align*} \big|E\big(P_m^{k,r}\big)\big|&=\binom{k+r-1}{k} + \left(m-(k+r-1)\right)\binom{k+r-2}{k-1}\\ & = \left( m - \frac{(k-1)(k+r-1)}{k} \right) \binom{k+r-2}{k-1} = g(m). \end{align*} \begin{prop}\label{prop:phiP} Suppose~$k\ge 2$, $r\geq 1$, $b\geq k+r-1$, $k+r\ge 4$ and $C>0$. Let $\epsilon$ be such that $0<\epsilon<\min\big\{(2g(b))^{-1},\big(3\binom{k+r-1}{k}\big)^{-1}\big\}$. Suppose $1/n\ll1/C,\, 1/k,\, 1/r,\, 1/b$. If $p=p(n)\ge n^{-\binom{k+r-2}{k-1}^{-1}-\epsilon}$, then $\Phi_{P_b^r} \ge C n$. \end{prop} \begin{proof} Let $H$ be a subgraph of $P_b^r$. Since for any integer $k+r-1\le b'\le b$, any subgraph of $P_{b'}^r$ has at most $g(b')$ edges, we have the following observations. \begin{enumerate}[label={(\alph*)}] \item If $e_H> g(b')$ for some $b'\ge k+r-1$, then $v_H\ge b'+1$;\label{eq-edgea} \item if $e_H > \binom{i}{k}$ for some $k-1\le i< k+r-1$, then $v_H\ge i+1$.\label{eq-edgeb} \end{enumerate} By~\ref{eq-edgea}, we have \[ \min_{g(k+r-1)< e_H\le g(b)} n^{v_H} p^{e_H} = \min_{k+r-1\le b'< b}\left( \min_{g(b')< e_H\le g(b'+1)} n^{v_H} p^{e_H}\right) \ge \min_{k+r-1\le b'< b} n^{b'+1} p^{g(b'+1)}. \] Since $p\ge n^{-1/\binom{k+r-2}{k-1}-\epsilon}$, and $g(b'+1) >0$, the following holds for any $b'<b$: \begin{multline*} \qquad n^{b'+1}p^{g(b'+1)}\ge n^{b'+1}\left(n^{-1/\binom{k+r-2}{k-1}-\epsilon}\right)^{g(b'+1)} \\ = n^{-g(b'+1)\epsilon}n^{(k-1)(k+r-1)/k} \ge n^{-g(b)\epsilon}n^{(k-1)(k+r-1)/k}\ge C n, \qquad \end{multline*} where we used $(k-1)(k+r-1)/k\ge 3/2$ and $g(b)\epsilon< 1/2$. Therefore, \begin{equation}\label{eq:estimate1} \min_{g(k+r-1)< e_H\le g(b)} n^{v_H} p^{e_H} \geq C n. \end{equation} On the other hand, noting that $g(k+r-1)=\binom{k+r-1}{k}$, by~\ref{eq-edgeb} we have \[ \min_{0< e_H\le g(k+r-1)} n^{v_H} p^{e_H} = \min_{k-1\le i< k+r-1} \left(\min_{\binom{i}{k}< e_H\le \binom{i+1}{k}} n^{v_H} p^{e_H} \right)\ge \min_{k-1\le i< k+r-1} n^{i+1} p^{\binom{i+1}{k}}.
\] Since $p\ge n^{-1/\binom{k+r-2}{k-1}-\epsilon}$, and $\binom{i+1}{k}\epsilon\leq 1/3$ for any $k-1\le i\le k+r-2$, if $i\ge 2$, then \[n^{i+1} p^{\binom{i+1}{k}} \ge n^{i+1} n^{-(1/\binom{k+r-2}{k-1}+\epsilon)\frac{i+1}{k}\binom{i}{k-1}} \ge n^{i+1 - \frac{i+1}{k}-\binom{i+1}{k}\epsilon}\ge C n. \] Otherwise $i=1$ and thus $k=2$, in which case we have $n^{i+1} p^{\binom{i+1}{k}}= n^2 p \ge C n$. Therefore, \begin{equation}\label{eq:estimate2} \min_{0< e_H\le g(k+r-1)} n^{v_H} p^{e_H} \geq C n. \end{equation} From \eqref{eq:estimate1} and \eqref{eq:estimate2}, we have $\Phi_{P_b^r} \ge C n$, as desired. \end{proof} \section{The Connecting and Absorbing Lemmas} \label{sec:abs-con} For brevity, throughout the rest of this paper, we write \begin{equation*} h:=k+r-1\text{,}\quad t:=g(2h)\text{,}\quad c:=\binom{k+r-2}{k-1}^{-1}. \end{equation*} Recall that the ends of an $(r,k)$-path are ordered $h$-sets that span a copy of $K_h^{(k)}$ in~$H$. \subsection{The Connecting Lemma} Given a $k$-graph $H$ and two ordered $h$-sets of vertices $A$ and $B$ each spanning a copy of $K_h^{(k)}$ in $H$, we say that an ordered $2h$-set of vertices $C$ \textit{connects}~$A$ and~$B$ if $C\cap A=C\cap B=\emptyset$ and the concatenation $\text{\textit{ACB}}$ spans a labelled copy of~$P_{4h}^r$. We are now ready to state our connecting lemma. \begin{lemma}[Connecting Lemma]\label{lm:conn} Suppose $1/n \ll\epsilon \ll \beta \ll \alpha'\ll 1/k, 1/r$. Let~$H$ be an $n$-vertex $k$-graph with $\delta_{k-1}(H)\ge \left( 1- c + \alpha' \right) {n}$ and suppose $p=p(n)\ge n^{-c-\epsilon}$. Then a.a.s.\ $H\cup \mathbb{G}^{(k)}(n,p)$ contains a set $\mathcal{C}$ of vertex-disjoint copies of $P_{2h}^r$ with $|\mathcal{C}|\le \beta n$ such that, for every pair of disjoint ordered $h$-sets spanning a copy of $K_h^{(k)}$ in $H$, there are at least $\beta^2 n/(2h)^2$ ordered copies of $P_{2h}^r$ in $\mathcal C$ that connect them. 
\end{lemma} \begin{proof} Let ${\mathcal S}$ be the set of pairs of disjoint ordered $h$-sets that each span a copy of $K_h^{(k)}$ in~$H$. Fix $\{S, S'\}\in {\mathcal S}$ and write $S:=(v_1, \dots, v_{h})$ and $S':=(w_{h}, \dots, w_1)$. Since $\delta_{k-1}(H)\ge \left( 1- c + \alpha' \right) {n}$, we can extend $S$ to an $(r,k)$-path with vertices $(v_1, \dots, v_{2h})$ such that the vertices of this $(r,k)$-path are disjoint from $\{w_{h}, \dots, w_1\}$ and there are at least $(\alpha' n/2)^{h}$ choices for the ordered set $(v_{h+1}, \dots, v_{2h})$. Similarly, we can extend $S'$ to an $(r,k)$-path $(w_{2h}, \dots, w_1)$ such that the vertices of this $(r,k)$-path are disjoint from $\{v_1, \dots, v_{2h}\}$ and there are at least $(\alpha' n/2)^{h}$ choices for the ordered set $(w_{2h}, \dots, w_{h+1})$. So there are at least $(\alpha' n/2)^{2h} \ge 24\beta n^{2h}$ choices for the ordered $2h$-sets $(v_{h+1}, \dots, v_{2h}, w_{2h}, \dots, w_{h+1})$. Let $\mathcal C_{S, S'}$ be a collection of exactly $24\beta n^{2h}$ such ordered $2h$-sets of vertices. Clearly, if an ordered set~$C$ in $\mathcal C_{S, S'}$ spans a copy of $P_{2h}^r$, then $C$ connects $S$ and $S'$. Now we will use the edges of $G=\mathbb{G}^{(k)}(n, p)$ to obtain the desired copies of $P^r_{2h}$ that connect the pairs in ${\mathcal S}$. Let ${\mathcal T}$ be the set of all labelled copies of $P_{2h}^r$ in $G$. We claim that the following properties hold with probability at least $1-3/\sqrt n$: \begin{enumerate}[label={(\alph*)}] \item\label{item:a} $|{\mathcal T}|\le 2 p^{t} n^{2h} $; \item\label{item:b} for every $\{S,S'\}\in {\mathcal S}$, at least $12\beta p^{t} n^{2h}$ members of ${\mathcal T}$ connect $S$ and $S'$; \item\label{item:c} the number of overlapping pairs of members of ${\mathcal T}$ is at most $4(2h)^2 p^{2t}n^{4h-1}$.
\end{enumerate} To see that the claim above holds, note that by Proposition~\ref{prop:phiP}, we can apply Lemma~\ref{lm:gnp} with $F=P_{2h}^r$, $\gamma=24\beta$ and $\mathcal C_{S, S'}$ in place of ${\mathcal F}_i$. Items~\ref{item:a}, \ref{item:b} and~\ref{item:c} follow, respectively, from Lemma~\ref{lm:gnp}~\ref{item-III}, \ref{item-II} and~\ref{item-IV}. Next we select a random collection $\mathcal C'$ by including each member of ${\mathcal T}$ independently with probability $q:=\beta/(2(2h)^2 n^{2h-1}p^{t})$. By using Chernoff's inequality (for~\ref{item:i} and~\ref{item:ii} below) and Markov's inequality (for~\ref{item:iii} below), we know that there is a choice of $\mathcal C'$ that satisfies the following properties: \begin{enumerate}[label={(\roman*)}] \item\label{item:i}$|\mathcal C'|\le 2q |{\mathcal T}|\le \beta n$; \item\label{item:ii} for every $\{S,S'\}\in {\mathcal S}$, there are at least $12\beta (q/2) n^{2h}p^{t} = 3 \beta^2 n/(2h)^2$ members of $\mathcal C'$ that connect $S$ and $S'$; \item\label{item:iii} the number of overlapping pairs of members of $\mathcal C'$ is at most $ 8(2h)^2q^2 n^{4h-1} p^{2t}= 2\beta^2 n/(2h)^2$. \end{enumerate} Deleting one member from each overlapping pair, we obtain a collection $\mathcal C$ of vertex-disjoint copies of $P_{2h}^r$ with $|\mathcal C|\le \beta n$, and such that, for every pair of disjoint ordered $h$-sets each spanning a~$K_h^{(k)}$ in~$H$, there are at least $3\beta^2 n/(2h)^2 - 2\beta^2 n/(2h)^2= \beta^2 n/(2h)^2$ sets of~$2h$ vertices connecting them. \end{proof} \subsection{The Absorbing Lemma} In this subsection we prove our absorbing lemma. \begin{lemma}[Absorbing Lemma]\label{lm:abs} Suppose $1/n\ll\epsilon \ll \zeta \ll \alpha\ll 1/k,1/r$. Let~$H$ be an $n$-vertex $k$-graph with $\delta_{k-1}(H)\ge \left( 1- c + \alpha \right) {n}$ and suppose $p=p(n)\ge n^{-c-\epsilon}$. Then a.a.s.
$H\cup \mathbb{G}^{(k)}(n, p)$ contains an $(r,k)$-path $P_{\mathop{\text{\rm abs}}\nolimits}$ of order at most $6h \zeta n$ such that, for every set $X\subseteq V(H)\setminus V(P_{\mathop{\text{\rm abs}}\nolimits})$ with $|X|\leq \zeta^2n/(2h)^2$, there is an $(r,k)$-path in $H$ on $V(P_{\mathop{\text{\rm abs}}\nolimits})\cup X$ that has the same ends as~$P_{\mathop{\text{\rm abs}}\nolimits}$. \end{lemma} We call the $(r,k)$-paths~$P_{\mathop{\text{\rm abs}}\nolimits}$ in Lemma~\ref{lm:abs} \emph{absorbing paths}. We now define \emph{absorbers}. \begin{dfn} Let $v$ be a vertex of a $k$-graph. An ordered $2h$-set of vertices $(w_1,\dots ,w_{2h})$ is a \emph{$v$-absorber} if $(w_1,\dots,w_{2h})$ spans a labelled copy of~$P_{2h}^r$ and $(w_1,\dots,w_{h},v,w_{h+1},\dots,w_{2h})$ spans a labelled copy of~$P_{2h+1}^r$. \end{dfn} \begin{proof}[Proof of Lemma~\ref{lm:abs}] Suppose $1/n\ll\epsilon \ll \zeta\ll \beta \ll \alpha\ll 1/k,\,1/r$. We split the proof into two parts. We first find a set $\mathcal{F}$ of absorbers and then connect them to an $(r,k)$-path by using Lemma~\ref{lm:conn} (Connecting Lemma). We will expose $G= \mathbb{G}^{(k)}(n, p)$ in two rounds: $G=G_1\cup G_2$ with~$G_1$ and $G_2$ independent copies of~$\mathbb{G}^{(k)}(n,p')$, where $(1-p')^2 = 1-p$. Fix a vertex $v$. By the codegree condition of $H$, we can extend $v$ to a labelled copy of~$P_{2h+1}^r$ in the form $(w_1, \dots, w_{h}, v, w_{h+1},\dots, w_{2h})$ such that there are at least $(\alpha n/2)^{2h} \ge 24\zeta n^{2h}$ choices for the ordered $2h$-set $(w_{1}, \dots, w_{2h})$. Let $\mathcal A_{v}$ be a collection of exactly $24\zeta n^{2h}$ such ordered $2h$-sets. By definition, if an ordered set $A$ in $\mathcal A_{v}$ spans a labelled copy of $P_{2h}^r$, then~$A$ is a $v$-absorber. Now consider $G_1=\mathbb{G}^{(k)}(n, p')$ and let ${\mathcal T}$ be the set of all labelled copies of $P_{2h}^r$ in $G_1$. 
By Proposition~\ref{prop:phiP}, we can apply Lemma~\ref{lm:gnp} with $F=P_{2h}^r$ and ${\mathcal A}_v$ in place of ${\mathcal F}_i$. Using the union bound we conclude that the following properties hold with probability at least $1-3/\sqrt n$: \begin{enumerate}[label={(\alph*)}] \item $|{\mathcal T}|\le 2 p^{t} n^{2h} $;\label{propc} \item for every vertex $v$ in $H$, at least $12\zeta p^{t} n^{2h}$ members of ${\mathcal T}$ are $v$-absorbers;\label{propb} \item the number of overlapping pairs of members of ${\mathcal T}$ is at most $4(2h)^2 p^{2t}n^{4h-1}$.\label{propd} \end{enumerate} Next we select a random collection $\mathcal{F}'$ by including each member of ${\mathcal T}$ independently with probability $q=\zeta/(2(2h)^2 p^{t} n^{2h-1})$. In view of the properties above, by using Chernoff's inequality (for~\ref{item:i.2} and~\ref{item:ii.2} below) and Markov's inequality (for~\ref{item:iii.2} below), we know that there is a choice of $\mathcal F'$ that satisfies the following properties: \begin{enumerate}[label={(\roman*)}] \item\label{item:i.2} $|\mathcal F'|\le \zeta n$; \item\label{item:ii.2} for every vertex $v$, at least $12\zeta (q/2) p^{t} n^{2h} = 3\zeta^2 n/(2h)^2$ members of $\mathcal F'$ are $v$-absorbers; \item\label{item:iii.2} there are at most $8(2h)^2 q^2 n^{4h-1} p^{2t}= 2\zeta^2 n/(2h)^2$ overlapping pairs of members of $\mathcal F'$. \end{enumerate} By deleting from ${\mathcal F}'$ one member from each overlapping pair and all members that are not in ${\mathcal T}$, we obtain a collection ${\mathcal F}$ of vertex-disjoint copies of $P_{2h}^r$ such that $|{\mathcal F}|\le \zeta n$, and for every vertex~$v$, there are at least $3\zeta^2 n/(2h)^2 - 2\zeta^2 n/(2h)^2= \zeta^2 n/(2h)^2$ $v$-absorbers. Now we connect these absorbers using Lemma~\ref{lm:conn}. Let $V'=V(H)\setminus V(\mathcal{F})$ and $n'=|V'|$. In particular, $n'\geq n/2$ is sufficiently large. Now consider $H'=H[V']$ and $G'=G_2[V'] = \mathbb{G}^{(k)}(n',p')$. 
Since $|V(\mathcal{F})|\leq 2h\cdot \zeta n \leq \alpha^2 n$, we have $\delta_{k-1}(H')\ge \left( 1- c + \alpha/2 \right) {n}$. We apply Lemma~\ref{lm:conn} on $H'$ and $G'$ with $\alpha'=\alpha/2$ and $\beta$, and conclude that {a.a.s.}~$H'\cup G'$ contains a set $\mathcal{C}$ of vertex-disjoint copies of $P_{2h}^r$ such that $|\mathcal{C}|\leq \beta n$ and, for every pair of disjoint ordered $h$-sets in $V'$ that each span a copy of $K_h^{(k)}$, there are at least $\beta^2 n'/(2h)^2$ members of $\mathcal{C}$ connecting them. For each copy of $P_{2h}^r$ in $\mathcal F$, we greedily extend its two ends by $h$ vertices such that all new paths are pairwise vertex-disjoint and also vertex-disjoint from $V(\mathcal C)$. This is possible because of the codegree condition of $H$ and because $|V(\mathcal{F})| + 2h|\mathcal{F}| + |V(\mathcal C)|\leq 2h \zeta n + 2h \zeta n + 2h\cdot \beta n< \alpha n/4$. Note that both ends of these $(r,k)$-paths $P_{4h}^r$ are in $V'\setminus V(\mathcal C)$. Since $\zeta n\le \beta^2 n'/(2h)^2$, we can greedily connect these $P_{4h}^r$. Let $P_{\mathop{\text{\rm abs}}\nolimits}$ be the resulting $(r,k)$-path. By construction, $|V(P_{\mathop{\text{\rm abs}}\nolimits})|\le (4h + 2h)\cdot \zeta n = 6h \zeta n$. Moreover, for any $X\subseteq V(H)\setminus V(P_{\mathop{\text{\rm abs}}\nolimits})$ such that $|X|\leq \zeta^2n/(2h)^2$, since each vertex $v$ has at least $\zeta^2 n/(2h)^2$ $v$-absorbers in $\mathcal F$, we can absorb the vertices of~$X$ greedily and conclude that there is an $(r,k)$-path on $V(P_{\mathop{\text{\rm abs}}\nolimits})\cup X$ that has the same ends as $P_{\mathop{\text{\rm abs}}\nolimits}$. \end{proof} \section{Proof of Theorem~\ref{main}}\label{sec:main} We now combine Lemmas~\ref{lm:conn} and~\ref{lm:abs} to prove Theorem~\ref{main}. \begin{proof}[Proof of Theorem~\ref{main}] Suppose $1/n \ll \epsilon\ll \beta \ll \zeta\ll \alpha, 1/k, 1/r$.
Furthermore, recall that $c:=\binom{k+r-2}{k-1}^{-1}$ and suppose $H\cup \mathbb{G}^{(k)}(n, p)$ is an $n$-vertex $k$-graph with $\delta_{k-1}(H)\ge \left( 1- c + \alpha \right) {n}$ and $p=p(n)\ge n^{- c - \epsilon}$. We will expose $G:=\mathbb{G}^{(k)}(n, p)$ in three rounds: $G=G_1\cup G_2\cup G_3$ with~$G_1$, $G_2$ and~$G_3$ three independent copies of $\mathbb{G}^{(k)}(n, p')$, where $(1-p')^3=1-p$. Note that $p' >p/3 > n^{-c-2\epsilon}$. By Lemma~\ref{lm:abs} with $2\epsilon$ in place of $\epsilon$, {a.a.s.}~the $k$-graph $H\cup G_1$ contains an absorbing $(r,k)$-path $P_{\mathop{\text{\rm abs}}\nolimits}$ of order at most $6h \zeta n$, that is, for every set $X\subseteq V(H)\setminus V(P_{\mathop{\text{\rm abs}}\nolimits})$ such that $|X|\leq \zeta^2n/(2h)^2$, there is an $(r,k)$-path in $H$ on $V(P_{\mathop{\text{\rm abs}}\nolimits})\cup X$ which has the same ends as~$P_{\mathop{\text{\rm abs}}\nolimits}$. Let $V'= V(H)\setminus V(P_{\mathop{\text{\rm abs}}\nolimits})$ and $n'=|V'|$. In particular, $n'\ge (1-6h\zeta)n$ and, since $\zeta$ is small enough, we have $(n')^{c+\epsilon} \ge n^{c+\epsilon}/2$. Thus $p' > p/2 \ge n^{-c-\epsilon}/2 \ge (n')^{-c-\epsilon}/4 \ge (n')^{-c-2\epsilon}$. Now consider $H'=H[V']$ and let $G_2':=\mathbb{G}^{(k)}(n', p')$ be the subgraph of $G_2$ induced by~$V'$. Note that $\delta_{k-1}(H')\ge \delta_{k-1}(H) - |V(P_{\mathop{\text{\rm abs}}\nolimits})|\ge \left( 1- c + \alpha/2 \right) {n'}$. By Lemma~\ref{lm:conn}, {a.a.s.}~the $k$-graph $H'\cup G_2'$ contains a set $\mathcal{C}$ of vertex-disjoint copies of $P_{2h}^r$ such that $|\mathcal{C}|\le \beta n$ and for every pair of disjoint ordered $h$-sets in $V'$ that each spans a copy of $K_h^{(k)}$, there are at least $\beta^2 n'/(2h)^2$ members of $\mathcal C$ connecting them. 
Since $|V(\mathcal C)|+|V(P_{\mathop{\text{\rm abs}}\nolimits})|\le 2h\cdot \beta n+ 6h \zeta n\le \alpha n/2$, we can greedily extend the two ends of $P_{\mathop{\text{\rm abs}}\nolimits}$ by $h$ vertices so that the two new ends $E_1, E_2$ are in $V'\setminus V(\mathcal C)$. Let $m:=g^{-1}(1/(2\epsilon))$. Note that $m\ge 1/\sqrt{\epsilon}$ because $\epsilon$ is small enough and $g$ is linear. By Proposition~\ref{prop:phiP}, we can apply Lemma~\ref{lm:gnp}~\ref{item-I} with $b=m$ on $G_3$ and conclude that {a.a.s.}~every induced subgraph of $G_3$ of order $\beta n$ contains a copy of $P_m^r$. Thus we can greedily find at most $\sqrt\epsilon n$ vertex-disjoint copies of $P_m^r$ in $V'\setminus (V(\mathcal C)\cup E_1\cup E_2)$, which together cover all but at most $\beta n$ vertices of $V'\setminus V(\mathcal C)$. Since $\sqrt\epsilon n+1\le \beta^2 n'/(2h)^2$, we can greedily connect these $(r,k)$-paths $P_m^r$ and $P_{\mathop{\text{\rm abs}}\nolimits}$ to an $(r,k)$-cycle $Q^r$. Let $R:=V(H)\setminus V(Q^r)$ and note that $|R|\le |V(\mathcal C)|+\beta n\le (2h+1) 2\beta n\le \zeta^2 n/(2h)^2$. Since $P_{\mathop{\text{\rm abs}}\nolimits}$ is an absorbing path, there is an $(r,k)$-path on $V(P_{\mathop{\text{\rm abs}}\nolimits})\cup R$ which has the same ends as $P_{\mathop{\text{\rm abs}}\nolimits}$. So we can replace $P_{\mathop{\text{\rm abs}}\nolimits}$ by this $(r,k)$-path in $Q^r$ and obtain the $r$${}^{\text{th}}${} power of a tight Hamilton cycle. Moreover, since all previous steps can be achieved {a.a.s.}, by the union bound, $H\cup G$ {a.a.s.}~contains the desired $r$${}^{\text{th}}${} power of a tight Hamilton cycle. \end{proof} \section{Concluding remarks} \label{sec:concluding} Let us briefly discuss the hypotheses in Theorem~\ref{main}. Note that, for~$r=1$, the condition in~\eqref{eq:main_min_deg} is simply $\delta_{k-1}(H)\geq\alpha n$, with~$\alpha$ an arbitrary positive constant.
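Indeed, recalling that $c=\binom{k+r-2}{k-1}^{-1}$, for $r=1$ we have
\[
c=\binom{k-1}{k-1}^{-1}=1,
\qquad\text{so that}\qquad
\left(1-c+\alpha\right)n=\alpha n.
\]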
Thus, in this case, our theorem is in the spirit of the original Bohman, Frieze and Martin~\cite{BFM} set-up, in the sense that we have a similar minimum degree condition on the deterministic graph~$H$. However, if~$r>1$, then our minimum degree condition~\eqref{eq:main_min_deg} is of the form $\delta_{k-1}(H)\geq(\sigma+\alpha)n$ for some~$\sigma=\sigma(k,r)>0$ (and arbitrarily small~$\alpha>0$). Thus, for~$r>1$, our result is more in line with Theorem~\ref{thm:BDF} of Bennett, Dudek and Frieze~\cite{BeDuFr16} (in fact, we have~$\sigma(2,2)=1/2$ in our result, which matches the minimum degree condition in Theorem~\ref{thm:BDF}). It is natural to ask whether one can weaken the condition in~\eqref{eq:main_min_deg} to~$\delta_{k-1}(H)\geq\alpha n$, that is, whether one can have~$\sigma=0$. This problem was settled positively by B\"ottcher, Montgomery, Parczyk and Person for graphs~\cite{BMPP}. However, the problem remains open for $k$-graphs ($k\geq3$). \begin{question} \label{prog:main} Let integers $k\ge3$, $r\ge2$ and a constant $\alpha>0$ be given. Is there $\epsilon>0$ such that, if~$H$ is a $k$-graph on~$n$ vertices with~$\delta_{k-1}(H)\ge\alpha n$ and $p=p(n)\ge n^{-\binom{k+r-2}{k-1}^{-1}-\epsilon}$, then~{a.a.s.}\ $H\cup \mathbb{G}^{(k)}(n,p)$ contains the $r$${}^{\text{th}}${} power of a tight Hamilton cycle? \end{question} Two remarks on the value of~$\sigma=\sigma(k,r)$ in our degree condition~\eqref{eq:main_min_deg} follow. These remarks show that, even though~$\sigma>0$ if~$r>1$, the value of~$\sigma$ is (in the cases considered) below the value that guarantees that~$H$ on its own contains the $r$${}^{\text{th}}${} power of a tight Hamilton cycle. Let us first consider the case~$k=2$, that is, the case of graphs. In this case, $\sigma=1-1/r$ and condition~\eqref{eq:main_min_deg} is~$\delta(H)\geq(1-1/r+\alpha)n$.
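Indeed, for $k=2$ the definition of $c$ gives
\[
c=\binom{2+r-2}{2-1}^{-1}=\binom{r}{1}^{-1}=\frac{1}{r},
\qquad\text{so that}\qquad
\sigma=1-c=1-\frac{1}{r}.
\]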
We observe that this condition does \textit{not} guarantee that~$H$ contains the $r$${}^{\text{th}}${} power of a Hamilton cycle; the minimum degree condition that does is $\delta(H)\geq(1-1/(r+1))n=rn/(r+1)$, and this value is optimal. Let us now consider the case $k=3$ and $4\mid n$. In this case, a construction of Pikhurko~\cite{Pik08} shows that the condition~$\delta_2(H)\geq3n/4$ does not guarantee the existence of the square of a tight Hamilton cycle in~$H$ (in fact, his construction is stronger and shows that this condition does not guarantee a $K_4^{(3)}$-factor in~$H$). Our minimum degree condition for~$k=3$ and~$r=2$ is~$\delta_2(H)\geq(2/3+\alpha)n$. Finally, a simple calculation shows that, for any fixed~$\epsilon>0$, the expected number of~$P_n^r$ in~$\mathbb G^{(k)}(n,p)$ is~$o(1)$ if~$p\leq\big((1-\epsilon)e/n\big)^{\binom{k+r-2}{k-1}^{-1}}$. Thus, for such a~$p$, a.a.s.~$\mathbb G^{(k)}(n,p)$ does \textit{not} contain the $r$${}^{\text{th}}${} power of a tight Hamilton cycle. \renewcommand{\doitext}{DOI\,} \renewcommand{\PrintDOI}[1]{\doi{#1}} \renewcommand{\eprint}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}} \begin{bibdiv} \begin{biblist} \bib{BTW}{article}{ author = {Balogh, J.}, author = {Treglown, A.}, author = {Wagner, A.~Z.}, title = {Tilings in randomly perturbed dense graphs}, eprint = {1708.09243}, year = {2017}, month = {Aug}, } \bib{BMSSS1}{article}{ Author = {Bastos, J. {de} O.}, author={Mota, G. O.}, author={Schacht, M.}, author={Schnitzer, J.}, author= {Schulenburg, F.}, title={Loose Hamiltonian cycles forced by large ${(k-2)}$-degree---approximate version}, DOI = {10.1137/16M1065732}, journal={SIAM J. Discrete Math.}, volume={31}, date={2017}, number={4}, pages={2328--2347}, issn={0895-4801}, } \bib{BMSSS2}{article}{ Author = {Bastos, J. {de} O.}, author={Mota, G.
O.}, author={Schacht, M.}, author={Schnitzer, J.}, author= {Schulenburg, F.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, Journal = {Contributions to Discrete Mathematics}, note={To appear}, Title = {Loose Hamiltonian cycles forced by large $(k-2)$-degree---sharp version} } \bib{BeDuFr16}{article}{ author = {Bennett, P.}, author = {Dudek, A.}, author = {Frieze, A.}, title = {Adding random edges to create the square of a Hamilton cycle}, archivePrefix = {arXiv}, eprint = {1710.02716}, primaryClass = {math.CO}, keywords = {Mathematics - Combinatorics}, year = {2017}, month = {Oct} } \bib{BFM}{article}{ AUTHOR = {Bohman, Tom}, author={Frieze, Alan}, author={Martin, Ryan}, TITLE = {How many random edges make a dense graph {H}amiltonian?}, JOURNAL = {Random Structures Algorithms}, FJOURNAL = {Random Structures \& Algorithms}, VOLUME = {22}, YEAR = {2003}, NUMBER = {1}, PAGES = {33--42}, ISSN = {1042-9832}, MRCLASS = {05C80 (05C45 60C05)}, MRNUMBER = {1943857}, MRREVIEWER = {Bert Fristedt}, DOI = {10.1002/rsa.10070}, URL = {http://dx.doi.org/10.1002/rsa.10070}, } \bib{BHKMPP}{article}{ Author = {B\"ottcher, J.}, author={Han, J.}, author={Kohayakawa, Y.}, author={Montgomery, R.}, author={Parczyk, O.}, author={Person, Y.}, eprint = {1802.04707}, year = {2018}, Title = {Universality of bounded degree spanning trees in randomly perturbed graphs}} \bib{BMPP}{article}{ Author = {B\"ottcher, J.}, author={Montgomery, R.}, author={Parczyk, O.}, author={Person, Y.}, eprint = {1802.04603}, year = {2018}, Title = {Embedding spanning bounded degree subgraphs in randomly perturbed graphs}} \bib{BHS}{article}{ Author = {Bu{\ss}, E.}, author={H{\`a}n, H.}, author={Schacht, M.}, Date-Added = {2017-02-14 19:33:33 +0000}, Date-Modified = {2017-02-14 19:33:33 +0000}, Doi = {10.1016/j.jctb.2013.07.004}, Fjournal = {Journal of Combinatorial Theory. Series B}, Issn = {0095-8956}, Journal = {J. Combin. Theory Ser. 
B}, Mrclass = {05C65 (05C45)}, Mrnumber = {3127586}, Mrreviewer = {Martin Sonntag}, Number = {6}, Pages = {658--678}, Title = {Minimum vertex degree conditions for loose {H}amilton cycles in 3-uniform hypergraphs}, Url = {http://dx.doi.org/10.1016/j.jctb.2013.07.004}, Volume = {103}, Year = {2013}, Bdsk-Url-1 = {http://dx.doi.org/10.1016/j.jctb.2013.07.004}} \bib{CzMo}{article}{ Author = {Czygrinow, A.}, author={Molla, T.}, Date-Added = {2017-02-14 19:33:33 +0000}, Date-Modified = {2017-02-14 19:33:33 +0000}, Doi = {10.1137/120890417}, Fjournal = {SIAM Journal on Discrete Mathematics}, Issn = {0895-4801}, Journal = {SIAM J. Discrete Math.}, Mrclass = {05D40 (05C65)}, Mrnumber = {3150175}, Mrreviewer = {Deryk Osthus}, Number = {1}, Pages = {67--76}, Title = {Tight codegree condition for the existence of loose {H}amilton cycles in 3-graphs}, Url = {http://dx.doi.org/10.1137/120890417}, Volume = {28}, Year = {2014}, Bdsk-Url-1 = {http://dx.doi.org/10.1137/120890417}} \bib{Di52}{article}{ author = {Dirac, G. A.}, title = {Some Theorems on Abstract Graphs}, journal = {Proceedings of the London Mathematical Society}, volume = {s3-2}, number = {1}, publisher = {Oxford University Press}, issn = {1460-244X}, url = {http://dx.doi.org/10.1112/plms/s3-2.1.69}, doi = {10.1112/plms/s3-2.1.69}, pages = {69--81}, year = {1952}, } \bib{DuFr2}{article}{ AUTHOR = {Dudek, Andrzej}, author={Frieze, Alan}, TITLE = {Loose {H}amilton cycles in random uniform hypergraphs}, JOURNAL = {Electron. J. 
Combin.}, FJOURNAL = {Electronic Journal of Combinatorics}, VOLUME = {18}, YEAR = {2011}, NUMBER = {1}, PAGES = {Paper 48, 14}, ISSN = {1077-8926}, MRCLASS = {05C80 (05C45 05C65)}, MRNUMBER = {2776824}, } \bib{DuFr1}{article}{ AUTHOR = {Dudek, Andrzej}, author={Frieze, Alan}, TITLE = {Tight {H}amilton cycles in random uniform hypergraphs}, JOURNAL = {Random Structures \& Algorithms}, FJOURNAL = {Random Structures \& Algorithms}, VOLUME = {42}, YEAR = {2013}, NUMBER = {3}, PAGES = {374--385}, ISSN = {1042-9832}, MRCLASS = {05C80 (05C45 05C65)}, MRNUMBER = {3039684}, MRREVIEWER = {Andrew Clark Treglown}, DOI = {10.1002/rsa.20404}, URL = {http://dx.doi.org/10.1002/rsa.20404}, } \bib{GPW}{article}{ Author = {Glebov, R.}, author={Person, Y.}, author={Weps, W.}, Date-Added = {2017-02-14 19:33:33 +0000}, Date-Modified = {2017-02-14 19:33:33 +0000}, Doi = {10.1016/j.ejc.2011.10.003}, Fjournal = {European Journal of Combinatorics}, Issn = {0195-6698}, Journal = {European J. Combin.}, Mrclass = {05C35 (05C45 05C65)}, Mrnumber = {2864440}, Mrreviewer = {Martin Sonntag}, Number = {4}, Pages = {544--555}, Title = {On extremal hypergraphs for {H}amiltonian cycles}, Url = {http://dx.doi.org/10.1016/j.ejc.2011.10.003}, Volume = {33}, Year = {2012}, Bdsk-Url-1 = {http://dx.doi.org/10.1016/j.ejc.2011.10.003}} \bib{HZ2}{article}{ Author = {Han, J.}, author={Zhao, Y.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, Doi = {10.1016/j.jcta.2015.01.004}, Issn = {0097-3165}, Journal = {J. Combin. Theory Ser. 
A}, Keywords = {Regularity lemma}, Number = {0}, Pages = {194--223}, Title = {Minimum codegree threshold for Hamilton $\ell$-cycles in k-uniform hypergraphs}, Url = {http://www.sciencedirect.com/science/article/pii/S0097316515000059}, Volume = {132}, Year = {2015}, Bdsk-Url-1 = {http://www.sciencedirect.com/science/article/pii/S0097316515000059}, Bdsk-Url-2 = {http://dx.doi.org/10.1016/j.jcta.2015.01.004}} \bib{HZ1}{article}{ Author = {Han, J.}, author={Zhao, Y.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, Doi = {10.1016/j.jctb.2015.03.007}, Journal = {J. Combin. Theory Ser. B}, Pages = {70--96}, Title = {Minimum degree thresholds for loose {Hamilton} cycle in 3-uniform hypergraphs}, Volume = {114}, Year = {2015}, Bdsk-Url-2 = {https://doi.org/10.1016/j.jctb.2015.03.007}} \bib{HZ_pert}{article}{ Author = {Han, J.}, author={Zhao, Y.}, eprint = {1802.04586}, year = {2018}, Title = {Hamiltonicity in randomly perturbed hypergraphs}} \bib{HS}{article}{ Author = {H\`an, H.}, author= {Schacht, M.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, DOI = {10.1016/j.jctb.2009.10.002}, Issue = {3}, Journal = {J. Combin. Theory Ser. B}, Pages = {332--346}, Title = {Dirac-type results for loose {Hamilton} cycles in uniform hypergraphs}, Volume = {100}, Year = {2010}} \bib{JLR}{book}{ AUTHOR = {Janson, Svante}, author={\L uczak, Tomasz}, author={Ruci\'nski, Andrzej}, TITLE = {Random graphs}, SERIES = {Wiley-Interscience Series in Discrete Mathematics and Optimization}, PUBLISHER = {Wiley-Interscience, New York}, YEAR = {2000}, PAGES = {xii+333}, ISBN = {0-471-17541-2}, MRCLASS = {05C80 (60C05 82B41)}, MRNUMBER = {1782847}, MRREVIEWER = {Mark R. Jerrum}, DOI = {10.1002/9781118032718}, URL = {http://dx.doi.org/10.1002/9781118032718}, } \bib{Karp}{article}{ author = {Karp, Richard M.}, TITLE = {Reducibility among combinatorial problems}, BOOKTITLE = {Complexity of computer computations ({P}roc. 
{S}ympos., {IBM} {T}homas {J}. {W}atson {R}es. {C}enter, {Y}orktown {H}eights, {N}.{Y}., 1972)}, PAGES = {85--103}, PUBLISHER = {Plenum, New York}, YEAR = {1972}, MRCLASS = {68A20}, MRNUMBER = {0378476}, MRREVIEWER = {John T. Gill}, } \bib{KKMO}{article}{ Author = {Keevash, P.}, author={K\"uhn, D.}, author= {Mycroft, R.}, author= {Osthus, D.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, DOI = {10.1016/j.disc.2010.11.013}, Journal = {Discrete Math.}, Number = {7}, Pages = {544--559}, Title = {Loose {Hamilton} cycles in hypergraphs}, Volume = {311}, Year = {2011}} \bib{Korshunov} {article}{ AUTHOR = {Kor\v sunov, A. D.}, TITLE = {Solution of a problem of {P}. {E}rd\H os and {A}. {R}\'enyi on {H}amiltonian cycles in nonoriented graphs}, JOURNAL = {Diskret. Analiz}, NUMBER = {31 Metody Diskret. Anal. v Teorii Upravljaju\v s\v cih Sistem}, YEAR = {1977}, PAGES = {17--56, 90}, MRCLASS = {05C35}, MRNUMBER = {0543833}, } \bib{KKS}{article}{ AUTHOR = {Krivelevich, Michael}, author={Kwan, Matthew}, author={Sudakov, Benny}, TITLE = {Cycles and matchings in randomly perturbed digraphs and hypergraphs}, JOURNAL = {Combin. Probab. 
Comput.}, FJOURNAL = {Combinatorics, Probability and Computing}, VOLUME = {25}, YEAR = {2016}, NUMBER = {6}, PAGES = {909--927}, ISSN = {0963-5483}, MRCLASS = {05C80 (05C35 05C65)}, MRNUMBER = {3568952}, DOI = {10.1017/S0963548316000079}, URL = {http://dx.doi.org/10.1017/S0963548316000079}, } \bib{KKS2}{article}{ AUTHOR = {Krivelevich, Michael}, author={Kwan, Matthew}, author={Sudakov, Benny}, title={Bounded-degree spanning trees in randomly perturbed graphs}, DOI = {10.1137/15M1032910}, journal={SIAM Journal on Discrete Mathematics}, volume={31}, number={1}, pages={155--171}, year={2017}, publisher={SIAM} } \bib{KMO}{article}{ author={K\"uhn, D.}, author= {Mycroft, R.}, author= {Osthus, D.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, DOI = {10.1016/j.jcta.2010.02.010}, Journal = {J. Combin. Theory Ser. A}, Number = {7}, Pages = {910--927}, Title = {Hamilton $\ell$-cycles in uniform hypergraphs}, Volume = {117}, Year = {2010}} \bib{KO}{article}{ Author = {K\"uhn, D.}, author= {Osthus, D.}, Date-Added = {2017-02-14 19:33:21 +0000}, Date-Modified = {2017-02-14 19:33:21 +0000}, DOI = {10.1016/j.jctb.2006.02.004}, Journal = {J. Combin. Theory Ser. B}, Number = {6}, Pages = {767--821}, Title = {Loose {Hamilton} cycles in 3-uniform hypergraphs of high minimum degree}, Volume = {96}, Year = {2006}} \bib {KuOs12}{article}{ AUTHOR = {K\"uhn, D.}, author={Osthus, D.}, TITLE = {On {P}\'osa's conjecture for random graphs}, JOURNAL = {SIAM J. Discrete Math.}, FJOURNAL = {SIAM Journal on Discrete Mathematics}, VOLUME = {26}, YEAR = {2012}, NUMBER = {3}, PAGES = {1440--1457}, ISSN = {0895-4801}, MRCLASS = {05C80 (05C45)}, MRNUMBER = {3022146}, MRREVIEWER = {A. G. Thomason}, DOI = {10.1137/120871729}, URL = {http://dx.doi.org/10.1137/120871729}, } \bib {KoSaSz98a}{article}{ AUTHOR = {Koml\'os, J.}, author ={S\'ark\"ozy, G.}, author={Szemer\'edi, E.}, TITLE = {On the {P}\'osa-{S}eymour conjecture}, JOURNAL = {J. 
Graph Theory}, FJOURNAL = {Journal of Graph Theory}, VOLUME = {29}, YEAR = {1998}, NUMBER = {3}, PAGES = {167--176}, ISSN = {0364-9024}, MRCLASS = {05C45 (05C35)}, MRNUMBER = {1647806}, MRREVIEWER = {Ralph Faudree}, } \bib {KoSaSz98b}{article}{ AUTHOR = {Koml\'os, J.}, author ={S\'ark\"ozy, G.}, author={Szemer\'edi, E.}, TITLE = {Proof of the {S}eymour conjecture for large graphs}, JOURNAL = {Ann. Comb.}, FJOURNAL = {Annals of Combinatorics}, VOLUME = {2}, YEAR = {1998}, NUMBER = {1}, PAGES = {43--60}, ISSN = {0218-0006}, MRCLASS = {05C45}, MRNUMBER = {1682919}, MRREVIEWER = {Akira Saito}, DOI = {10.1007/BF01626028}, URL = {http://dx.doi.org/10.1007/BF01626028}, } \bib{McMy}{article}{ Author = {McDowell, A.}, author={Mycroft, R.}, Title = {Hamilton {$\ell$}-cycles in randomly-perturbed hypergraphs}, eprint = {1802.04242}, year = {2018}, adsurl = {http://adsabs.harvard.edu/abs/2018arXiv180204242M}, adsnote = {Provided by the SAO/NASA Astrophysics Data System}, } \bib{NeSk16}{article}{ author = {Nenadov, R.}, author = {{\v S}kori{\'c}, N.}, title = {Powers of Hamilton cycles in random graphs and tight Hamilton cycles in random hypergraphs}, archivePrefix = {arXiv}, eprint = {1601.04034}, keywords = {Mathematics - Combinatorics}, year = {2016}, adsurl = {http://adsabs.harvard.edu/abs/2016arXiv160104034N}, adsnote = {Provided by the SAO/NASA Astrophysics Data System}, } \bib{PP}{article}{ author = {{Parczyk}, O.}, author= {{Person}, Y.}, title = {Spanning structures and universality in sparse hypergraphs}, DOI = {10.1002/rsa.20690}, journal = {Random Structures \& Algorithms}, year = {2016}, volume = {49}, number = {4}, pages= {819--844} } \bib{Pik08}{article}{ AUTHOR = {Pikhurko, Oleg}, TITLE = {Perfect matchings and {$K^3_4$}-tilings in hypergraphs of large codegree}, JOURNAL = {Graphs Combin.}, FJOURNAL = {Graphs and Combinatorics}, VOLUME = {24}, YEAR = {2008}, NUMBER = {4}, PAGES = {391--404}, ISSN = {0911-0119}, MRCLASS = {05C65 (05C70)}, MRNUMBER = {2438870}, 
MRREVIEWER = {Deryk Osthus}, DOI = {10.1007/s00373-008-0787-7}, URL = {http://dx.doi.org/10.1007/s00373-008-0787-7}, } \bib{Posa}{article}{ AUTHOR = {P\'osa, L.}, TITLE = {Hamiltonian circuits in random graphs}, JOURNAL = {Discrete Math.}, FJOURNAL = {Discrete Mathematics}, VOLUME = {14}, YEAR = {1976}, NUMBER = {4}, PAGES = {359--364}, ISSN = {0012-365X}, MRCLASS = {05C35}, MRNUMBER = {0389666}, MRREVIEWER = {F. Harary}, DOI = {10.1016/0012-365X(76)90068-6}, URL = {http://dx.doi.org/10.1016/0012-365X(76)90068-6}, } \bib{RRRSS}{article}{ author = {Reiher, C.}, author = {R{\"o}dl, V.}, author = {Ruci{\'n}ski, A.}, author = {Schacht, M.}, author = {Szemer{\'e}di, E.}, title = {Minimum vertex degree condition for tight Hamiltonian cycles in 3-uniform hypergraphs}, archivePrefix = {arXiv}, eprint = {1611.03118}, primaryClass = {math.CO}, keywords = {Mathematics - Combinatorics}, year = {2016}, month = {nov}, adsurl = {http://adsabs.harvard.edu/abs/2016arXiv161103118R}, } \bib{Ri00}{article}{ AUTHOR = {Riordan, O.}, TITLE = {Spanning subgraphs of random graphs}, JOURNAL = {Combin. Probab. Comput.}, FJOURNAL = {Combinatorics, Probability and Computing}, VOLUME = {9}, YEAR = {2000}, NUMBER = {2}, PAGES = {125--148}, ISSN = {0963-5483}, MRCLASS = {05C80}, MRNUMBER = {1762785}, MRREVIEWER = {Lyuben R. Mutafchiev}, DOI = {10.1017/S0963548399004150}, URL = {http://dx.doi.org/10.1017/S0963548399004150}, } \bib{RoRu14}{article}{ Author = {R{\"o}dl, V.}, author={Ruci{\'n}ski, A.}, Date-Added = {2017-02-14 19:33:33 +0000}, Date-Modified = {2017-02-14 19:33:33 +0000}, Doi = {10.7151/dmgt.1743}, Fjournal = {Discussiones Mathematicae. Graph Theory}, Issn = {1234-3099}, Journal = {Discuss. Math. 
Graph Theory}, Mrclass = {05D05 (05C65)}, Mrnumber = {3194042}, Mrreviewer = {Peter James Dukes}, Number = {2}, Pages = {361--381}, Title = {Families of triples with high minimum degree are {H}amiltonian}, Url = {http://dx.doi.org/10.7151/dmgt.1743}, Volume = {34}, Year = {2014}, Bdsk-Url-1 = {http://dx.doi.org/10.7151/dmgt.1743}} \bib{RoRuSz06}{article}{ Author = {R\"odl, V.}, author={Ruci\'nski, A.}, author={Szemer\'edi, E.}, title={A Dirac-type theorem for 3-uniform hypergraphs}, journal={Combin. Probab. Comput.}, volume={15}, date={2006}, number={1-2}, pages={229--251}, issn={0963-5483}, doi={10.1017/S0963548305007042}, } \bib{RRS08}{article}{ Author = {R\"odl, V.}, author={Ruci\'nski, A.}, author={Szemer\'edi, E.}, Date-Added = {2017-02-14 19:33:33 +0000}, Date-Modified = {2017-02-14 19:33:33 +0000}, DOI = {10.1007/s00493-008-2295-z}, Journal = {Combinatorica}, Number = {2}, Pages = {229--260}, Title = {An approximate {D}irac-type theorem for k-uniform hypergraphs}, Volume = {28}, Year = {2008}} \bib{RRS11}{article}{ Author = {R\"odl, V.}, author={Ruci\'nski, A.}, author={Szemer\'edi, E.}, Date-Added = {2017-02-14 19:33:33 +0000}, Date-Modified = {2017-02-14 19:33:33 +0000}, DOI = {10.1016/j.aim.2011.03.007}, Journal = {Advances in Mathematics}, Number = {3}, Pages = {1225--1299}, Title = {Dirac-type conditions for {Hamiltonian} paths and cycles in 3-uniform hypergraphs}, Volume = {227}, Year = {2011}} \end{biblist} \end{bibdiv} \endgroup \end{document}
\subsection{Experimental Setup} \label{subsec:setup} For our experiments we use the \emph{iCubWorld} dataset, created as a benchmark for object recognition in robotics \cite{Pasquale2015MLIS}. A human teacher shows a set of objects to the iCub robot, which uses a tracking routine to follow them with its gaze. Supervision comes in the form of object labels provided verbally by the human. In particular we focus on \emph{iCubWorld Transformations} \cite{Pasquale2016IROS} (see Figure \ref{fig:dataset_setting}-Left), which comprises 150 objects evenly divided into 15 categories: each object is acquired while undergoing isolated visual transformations, in order to study invariance to real-world nuisances. We considered two cases (see Figure \ref{fig:dataset_setting}-Right): \vspace{2mm} \begin{list}{\labelitemi}{\leftmargin=1em} \item{\textbf{Translation:}} the human moves in a semi-circle around the iCub robot, keeping approximately the same distance and pose of the object in the hand with respect to the cameras. The whole rotation is covered on average by 150 images; we define two domains by considering only the first and last 50 images, which show the object at the two extremes of the semi-circle, starting on the left and ending on the right. In this movement the appearance of the background and the illumination conditions change significantly, while the object remains the same. \vspace{2mm} \item{\textbf{Scale:}} the human moves the hand holding the object back and forth, thus changing the object scale with respect to the cameras. Also in this case the whole movement is depicted in 150 images on average, and we pick only the first and last 50 images to define our domains. The object in its extreme far and extreme close positions occupies a different portion of the image, thus inducing a significant change in its overall appearance.
\end{list} \vspace{2mm} Since every category contains 10 object instances, we divided them into 6 and 4 for the two domains, introducing a small imbalance of the kind that can naturally be present between source and target. Thus in both transformation cases one domain (\emph{left}~/~\emph{close}) contains 4500 images while the other (\emph{right}~/~\emph{far}) contains 3000 images. Note also that the sets of object instances in the source and in the target do not overlap. Both domains are used as source and target in turn in our experiments. In particular we consider two settings: \vspace{2mm} \begin{list}{\labelitemi}{\leftmargin=1em} \item{\textbf{Adapt on whole-target}:} all the labeled source data and all the unlabeled target data are exploited during training and adaptation. At test time the learned model is used to annotate the target samples. \vspace{2mm} \item{\textbf{Adapt on sub-target}:} during training all the source data are available, but only a sub-part of the target is provided. Specifically, the source contains samples of all 15 object categories, while the target visible at training time contains only 8 of them. Thus, while the source classifier can still be trained to recognize 15 classes, the joint adaptation process can leverage only 8 object categories. At test time the whole 15-category target set must be annotated, containing both the 8 classes available during training and the 7 initially unseen ones. \end{list} \vspace{2mm} To quantitatively verify the presence of a visual domain shift between the described data domains, we ran a first set of experiments training a classifier on the source and then comparing its performance when testing on the source and on the target \cite{office}. We started by extracting features from all the images using the second fully connected layer (fc7) of AlexNet pretrained on ImageNet, which provides a 4096-dimensional representation vector for each image.
The source domain is then randomly divided into 80\% - 20\% splits, used respectively for training and testing a linear SVM classifier. The same model is then also tested on the target. The obtained results, reported in Table \ref{table:domain_shift}, show a drop in performance that indicates a significant amount of domain shift, even more evident in the scale case than in the translation one. \begin{table}[tb!] \caption{Evaluation of the domain shift in our experimental setting when representing the images with AlexNet fc7 features. Here S/T stand for source/target, while X $\rightarrow$ Y means that the classifier is trained on X and tested on Y. The presence of domain shift is indicated by the large drop in performance between S $\rightarrow$ S and S $\rightarrow$ T.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & S & T & {S $\rightarrow$ S} & {S $\rightarrow$ T} \\ \hline {\multirow{2}{*}{translation}} & left & right &98.33 & 45.80 \\ & right & left &99.33 & 54.49 \\ \hline {\multirow{2}{*}{scale}} & close & far &99.45 & 18.44 \\ & far & close &98.67 & 28.80 \\ \hline \end{tabular} \end{center} \label{table:domain_shift} \vspace{-4mm} \end{table} \begin{table*}[tb!] \caption{Percentage classification accuracy of our LoAd network and several baselines on the defined iCubWorld Transformations settings. To account for possible small fluctuations due to the network batch learning, the experiments with deep methods were repeated five times and we report the average results with their standard deviation.
The best results for each setting are highlighted in bold.} \begin{center} \begin{tabular}{|c|c|c|c|c||c|c|c|} \hline \multicolumn{2}{|c|}{}& \multicolumn{3}{c||}{iCubWorld Translation} & \multicolumn{3}{c|}{iCubWorld Scale} \\ \cline{3-8} \multicolumn{2}{|c|}{}& left $\rightarrow$ right & right $\rightarrow$ left & average & close $\rightarrow$ far & far $\rightarrow$ close & average\\ \hline \multicolumn{2}{|c|}{AlexNet} & 50.41 $\pm$ 0.98 & 54.01 $\pm$ 0.59 & 52.21 & 18.20 $\pm$ 0.71 & 27.45 $\pm$ 1.23 & 22.83 \\ \hline & {DANN} \cite{DANN} & 62.60 $\pm$ 1.29 & 41.08 $\pm$ 0.10 & 51.08 & \textbf{38.86 $\pm$ 1.86} & \textbf{47.87 $\pm$ 0.31} & \textbf{43.37} \\ adapt on & {Auto-DIAL} \cite{carlucci2017auto}& 58.51 $\pm$ 1.16 & 60.67 $\pm$ 1.21 & 59.59 & 36.33 $\pm$ 0.10 & 42.58 $\pm$ 0.12& 39.46 \\ whole-target & {ROOTS} \cite{roots} & 54.29 & 55.29 & 54.79 & 20.07 & 34.71 & 27.39 \\ & {LoAd} &\textbf{67.35 $\pm$ 0.97} &\textbf{61.75 $\pm$ 0.54} &\textbf{64.55} & 29.68 $\pm$ 1.52 & 31.43 $\pm$ 0.49& 30.56 \\ \hline & DANN \cite{DANN} & 57.94 $\pm$ 2.54 & 40.37 $\pm$ 0.53 & 49.16 & 22.71 $\pm$ 1.52 & 34.56 $\pm$ 0.36 & 28.63 \\ adapt on & Auto-DIAL \cite{carlucci2017auto}& 49.89 $\pm$ 0.19 & 44.78 $\pm$ 0.17 & 47.33 & 24.69 $\pm$ 0.10 & 25.22 $\pm$ 0.13 & 24.96 \\ sub-target & {ROOTS} \cite{roots} & 54.29 & 55.29 & 54.79 & 20.07 & \textbf{34.71} & 27.39 \\ & LoAd & \textbf{67.10 $\pm$ 0.97 } & \textbf{60.53 $\pm$ 0.53 } & \textbf{63.82 } & \textbf{28.38 $\pm$ 1.45 } & 31.65 $\pm$ 0.49 & \textbf{30.01} \\ \hline \end{tabular} \end{center} \label{table:results} \vspace{-2mm} \end{table*} \subsection{Experimental Analysis} \label{subsec:analysis} We evaluate the performance of our LoAd network in reducing the domain gap for the translation and scale settings described above.
As benchmark references we use three domain adaptation approaches that have shown state-of-the-art performance in computer vision on several non-robotic tasks: \vspace{1mm} \begin{list}{\labelitemi}{\leftmargin=1em} \item{\textbf{DANN.}} The Domain Adversarial Neural Network \cite{DANN} takes as input both the labeled source and the unlabeled target data and promotes the emergence of features that are discriminative for the main learning task on the source domain and indiscriminate with respect to the shift between the domains. This is obtained by keeping a single CNN path up to the second fully connected layer and then splitting the final part of the network into the standard branch that minimizes the classification loss and a new branch that learns to confuse the domain discriminator. \vspace{2mm} \item{\textbf{Auto-DIAL.}} A deep network whose final objective is to minimize both the source classification loss and the target entropy loss by exploiting embedded domain alignment layers \cite{carlucci2017auto}. These layers perform batch normalization and induce a transformation of both the source and target distributions with a mixing parameter that is learned automatically. \vspace{2mm} \item{\textbf{ROOTS.}} We use this short name for the first work on learning the spatial roots of visual domain shift \cite{roots}. As already mentioned in Section \ref{sec:related}, this approach was based on a computationally intensive image-patch occlusion process producing domainness maps, which were then statistically evaluated to create image features at different domainness levels. Linear SVM classifiers are trained on the representations obtained at each level, and the final annotation is based on the combined margin outputs. Here we adopt the strategy described in \cite{roots} to obtain the image representation, but we start from the maps produced by our efficient domain localization network (see Section \ref{subsec:domloc}).
\end{list} \vspace{1mm} Finally, as a non-adaptive reference we use standard AlexNet: the last fully connected layer of the network pre-trained on ImageNet is fine-tuned on the source samples and tested on the target. Table \ref{table:results} reports the classification accuracy obtained by the reference methods listed above and by our LoAd network. Let us first focus on the left part of the table, containing the results for the translation case. Here both DANN and Auto-DIAL improve over the standard CNN, with the latter performing better on average. Neither of these methods uses local information: the image features are directly produced by end-to-end architectures. On the other hand, ROOTS exploits the domainness maps, but the AlexNet features extracted from different image patches are then combined in a hand-crafted way. The result is a method that still outperforms the standard CNN but has lower performance than the other adaptive approaches. Finally, our LoAd network outperforms all the competing methods. This remains true even when the target data available during the training phase cover just a subset of the source categories (adapt on sub-target rows of the table). Here DANN and Auto-DIAL show a significant performance drop, while the accuracy of LoAd is almost unchanged, demonstrating its robustness. Note that the ROOTS results also remain unchanged, which supports the hypothesis that the maps produced by the domain localization network when starting from a subset of categories contain the same information as those obtained over all the classes. The right part of Table \ref{table:results} shows the classification accuracy on the scale domain case for all the considered methods. As already indicated by the results in Table \ref{table:domain_shift}, object scaling induces a larger domain shift than translation, thus the adaptation task is more challenging.
Here the best results are obtained by DANN, while LoAd improves on average over the AlexNet non adaptive baseline and over ROOTS but underperforms with respect to DANN and Auto-DIAL. However, when considering only a subset of the target classes for adaptation, the results obtained by DANN and Auto-DIAL decrease, while LoAd presents on average the best performance being just slightly affected by the change in training setting. \begin{figure} \begin{center} \begin{tabular}{|c c|c | c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{images}} & \multicolumn{2}{c|}{maps} \\ \cline{3-4} & & left & right\\ \hline {\multirow{2}{*}{left}} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/left_remote.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/left_left_remote_96.png} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/left_right_remote_96.png} \\ & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/left_cellphone.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/left_left_cellphone.png} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/left_right_cellphone.png} \\ \hline {\multirow{2}{*}{right}} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/right_cup_00002380.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/right_left_cup.png} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/right_right_cup.png} \\ & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/right_pin_00007885.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_left_right/right_left_pin.png}& \includegraphics[width=0.07\textwidth]{images_in_table_left_right/right_right_pin.png}\\ \hline \hline & & close & far\\ \hline {\multirow{2}{*}{close}} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/close_flower.jpg} & 
\includegraphics[width=0.07\textwidth]{images_in_table_close_far/close_close_flower6_day8_left_1_00001600.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/close_far_flower6_day8_left_1_00001600.jpg} \\ & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/close_soapdispenser.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/close_close_soapdispenser3_day4_left_1_00001741.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/close_far_soapdispenser3_day4_left_1_00001741.jpg} \\ \hline {\multirow{2}{*}{far}} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/far_book.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/far_close_book7_day5_scale_2_00001160.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/far_far_book7_day5_scale_2_00001160.jpg} \\ & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/far_wallet.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/far_close_wallet7_day4_left_2_00001346.jpg} & \includegraphics[width=0.07\textwidth]{images_in_table_close_far/far_far_wallet7_day4_left_2_00001346.jpg} \\ \hline \end{tabular} \end{center} \caption{Examples of domainness maps produced by the domain localization network: they highlight domain-specific and domain-generic image areas.} \label{fig:domain_spec_gen} \vspace{-1mm} \end{figure} \subsection{Discussion} \label{subsec:ablation} For a more in-depth analysis of the LoAd network, we present here an ablation study with separate evaluations on its components. \subsubsection{Domain localization performance} \label{subsec:ablation:domainness} The role and functionality of the domain localization network can be qualitatively analyzed by observing the produced domainness maps. In the top part of Figure \ref{fig:domain_spec_gen} we show four images from the translation setting, two from the \emph{left} domain and two from the \emph{right} domain. 
For each image the domain localization network produces two complementary maps: if the image belongs to the \emph{left} domain, the left-map indicates what is domain-specific, while the right-map highlights the domain-generic areas, \emph{i.e.}\xspace the image parts that most make this sample similar to the right domain. Analogously, if the image belongs to the \emph{right} domain, the left-map indicates the domain-generic areas. By comparing the maps with the images, it can easily be seen that the domain-generic regions focus on the objects, while the domain-specific ones focus on the background, which is characteristic of each domain. The LoAd network uses the convolutional filters associated with the shown domain-generic maps while learning the object classification model from the source data. \begin{table}[tb!] \caption{Effect of the careful design of the last CNN pooling layer, which also allows reducing the number of neurons in the fully connected layers.} \vspace{-4mm} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{} & \multicolumn{3}{c|}{iCubWorld Translation} \\ \cline{2-4} & left $\rightarrow$ right & right $\rightarrow$ left & average\\ \hline pool5:$6\times6$, fc:4096 & 55.40 & 57.40 & 56.40 \\ pool5:$4\times4$, fc:2048 & 65.53 & 61.18 & 63.36 \\ pool5:$3\times3$, fc:2048 & 65.23 & 60.87 & 63.05 \\ \hline \hline {\multirow{2}{*}{}}& \multicolumn{3}{c|}{iCubWorld Scale} \\ \cline{2-4} & close $\rightarrow$ far & far $\rightarrow$ close & average\\ \hline pool5:$6\times6$, fc:4096 & 17.91 & 28.38 & 23.15 \\ pool5:$4\times4$, fc:2048 & 25.64 & 29.49 & 27.57 \\ pool5:$3\times3$, fc:2048 & 24.74 & 31.27 & 28.01 \\ \hline \end{tabular} \end{center} \label{table:pooling} \vspace{-4mm} \end{table} Similar considerations can be made for the scale case, with examples shown in the bottom part of Figure \ref{fig:domain_spec_gen}.
Here it can also be noticed that for the images of the \emph{close} domain the far-maps tend to highlight only a portion of the object, as well as part of the background in the top corners, which is actually shared between the domains. We speculate that this effect influences the LoAd results on scale and can explain the lower performance with respect to the competing DANN in the whole-target adaptation setting. \subsubsection{Careful choice of spatial pooling dimensions} \label{subsec:ablation:pooling} When dealing with a domain shift that may be spatially grounded in the images, side tools that carefully manage localization can be extremely useful. The last pooling layer in standard deep architectures is at the interface between the convolutional part of the network, which maintains local information from the images, and the fully connected part, which instead disregards it. The design of this layer may have a significant impact on the final performance: we verified this by changing the final dimensionality of the feature maps and adaptively calculating the window size and stride following the same approach as \cite{sppooling}. The pool5 layer in AlexNet produces maps of size $6\times6$, while we tested the effect of $4\times4$ and $3\times3$ maps. Besides reducing the filter dimensionality, this choice also allows decreasing the number of neurons in the fully connected layers to $2048 \rightarrow 2048 \rightarrow K$, where $K=15$ in our setting. We report in Table \ref{table:pooling} the results obtained on one classification run of the translation and scale domain shift tasks. Intuitively, reducing the map dimensionality requires increasing the pooling window size, which helps produce a representation more invariant to the specific object location. Our LoAd network was designed to include adaptive pooling layers producing $3\times3$ feature maps and fully connected layers of 2048 dimensions.
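The adaptive window/stride computation mentioned above, following the ceil/floor rule of \cite{sppooling}, can be sketched as follows. This is an illustrative sketch (the function name is ours), not the Torch7 implementation used in the paper:

```python
import math

def adaptive_pool_params(in_size: int, out_size: int):
    """Kernel size and stride so that pooling an `in_size`-wide map
    yields exactly `out_size` outputs (SPP-style ceil/floor rule)."""
    kernel = math.ceil(in_size / out_size)
    stride = math.floor(in_size / out_size)
    return kernel, stride

# AlexNet's last conv maps are 13x13; pool5 variants of Table V:
for out in (6, 4, 3):
    k, s = adaptive_pool_params(13, out)
    print(f"{out}x{out} maps -> kernel {k}, stride {s}")
```

For a $13\times13$ input this gives kernel 3 / stride 2 for $6\times6$ maps, kernel 4 / stride 3 for $4\times4$, and kernel 5 / stride 4 for $3\times3$, since the number of pooled outputs is $\lfloor(13-k)/s\rfloor+1$.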
By comparing the results in Table \ref{table:pooling} with those reported in Table \ref{table:results}, we can conclude that the careful design of the pooling layer plays a relevant role in the final performance of LoAd. \subsubsection{Domain localization} \label{subsec:domloc} Discriminative information about two domains can easily be collected by training a deep model that differentiates between them. This information is stored in the inner layers of the network: inspired by \cite{gradcam}, we exploit the gradients flowing into the final convolutional layer of a CNN to produce coarse maps of the spatial grounding of the domain in an image. The source and target images are given as input to the network sketched in Figure \ref{fig:gradcam}, which is pre-trained on ImageNet and fine-tuned for binary domain classification. We indicate with $y^c = f(x)$ the classification score at the last layer, where $f(\cdot)$ is the input-output network mapping and $c=1,2$ indexes the classes, \emph{i.e.}\xspace the domains in our case. The last convolutional layer produces $n = 1, \ldots, N$ feature maps $F^n \in \mathbb{R}^{u \times v}$ with entries $F_{ij}^n$, where $i,j$ span the width $u$ and height $v$. These feature maps retain spatial information, and we can measure how much each of them contributes to the final score of a class by calculating the derivatives $\frac{\partial y^c}{\partial F_{ij}^n}$, which are then global average pooled in order to get a single number: \begin{equation} w_n^c = \overbrace{\frac{1}{Z}\sum_{i=1}^u\sum_{j=1}^v}^\text{global average pooling} \underbrace{\frac{\partial y^c}{\partial F_{ij}^n}}_\text{gradients via backprop}~, \end{equation} where $Z = uv$. The derivative is positive if an increase in the value of the pixel $F_{ij}^n$ yields an increase in the value of $y^c$.
These weights are then used to define a rectified weighted linear combination of the feature maps \begin{equation} H^c = ReLU\left(\sum_{n=1}^N w_n^c F^n\right) \in \mathbb{R}^{u \times v}, \end{equation} which can be seen as a coarse heatmap. When starting from the basic AlexNet \cite{alexnet} architecture we have $N=256$ and $u = v = 13$. For visualization the heatmaps can be upsampled through bi-linear interpolation, and we can use them to identify image regions shared by the domains or specific to each of them: given an image of domain $c=1$, the regions highlighted as important for domain $c=2$ correspond to the \emph{domain-generic} areas, while the regions highlighted as important for domain $c=1$ are \emph{domain-specific}. The same holds when the domains are inverted. Following \cite{roots}, we refer to the obtained heatmaps as \emph{domainness maps}. Together with the network architecture, Figure \ref{fig:gradcam} shows examples of domainness maps highlighting domain-generic image regions. \vspace{2mm} \subsubsection{Spatial attention by multiplicative fusion} Besides providing visual explanations about \emph{where} domain information resides in an image, the procedure described above can be used to reduce the domain shift by guiding a learning procedure to attend to the regions shared by the two domains. With this goal in mind we keep the \emph{per-feature-map activations} from the convolutional layer, without summing over depth: \begin{equation} W^c_n = ReLU\left(w_n^c F^n\right) \in \mathbb{R}^{u \times v}\quad\quad\text{for } n = 1\ldots N. \label{eq:mapactivation} \end{equation} In this way the information about how each filter contributed to the decision for a certain class (\emph{i.e.}\xspace domain) is integrated into the filter itself. We propose to exploit the knowledge provided by these filters through \emph{multiplicative fusion} \cite{multfusion} when training a network for object classification on the source.
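The weighting, heatmap, and per-map activation steps above can be sketched in a few lines of NumPy. This is a toy sketch on random tensors: in the actual pipeline the gradients come from backpropagation through the fine-tuned domain classifier, and the sizes here are deliberately small:

```python
import numpy as np

rng = np.random.default_rng(0)
N, u, v = 8, 13, 13            # toy depth; the paper uses N=256, u=v=13
F = rng.standard_normal((N, u, v))      # last-conv feature maps F^n
dy_dF = rng.standard_normal((N, u, v))  # gradients dy^c/dF^n_ij (via backprop)

# w_n^c: global average pooling of the gradients over the Z = u*v locations
w = dy_dF.mean(axis=(1, 2))             # shape (N,)

# Coarse heatmap H^c = ReLU(sum_n w_n^c F^n)
H = np.maximum(0.0, np.tensordot(w, F, axes=1))  # shape (u, v)

# Per-feature-map activations W_n^c = ReLU(w_n^c F^n), no sum over depth
W = np.maximum(0.0, w[:, None, None] * F)        # shape (N, u, v)
print(H.shape, W.shape)
```

The only difference between the heatmap $H^c$ and the filters $W^c_n$ is the contraction over the depth axis, which the sketch makes explicit.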
A deep network with the same basic structure as that used for domain recognition (AlexNet), pre-trained on ImageNet and fine-tuned to minimize a softmax loss on the source object categories, produces feature maps $C$ of the same size as $W$ at the last convolutional layer. Object and domain knowledge are then integrated by the element-wise multiplication $M = C \odot W$ to get convolutional maps that attend only to domain-shared regions. The ReLU function in (\ref{eq:mapactivation}) is strictly selective and may zero out many pixels, including gray areas of the filters where domain specificity is questionable. With the aim of enhancing the domain-shared information without losing information about the overall image appearance, we add back the original object filters by concatenating $C$ to $M$\footnote{We also tried the alternative solution of using only multiplicative fusion in the form $M = C \odot (W + 1)$, with lower results.}. \vspace{2mm} \subsubsection{LoAd Implementation} The final architecture of our LoAd network is shown in Figure \ref{fig:load}. From the input layer to the last convolutional layer, the architecture is the same as a standard AlexNet. The output feature maps are then duplicated: one copy is kept in its original form, while the other is multiplicatively fused with the domain localization filters. Max pooling with an adaptive choice of the kernel and stride dimensions (see Section \ref{subsec:ablation}) is performed on each of the two branches, which are then recombined by concatenation, producing enriched feature maps that highlight the domain-generic parts of the images. The network ends with three fully connected layers of dimensions $2048\rightarrow 2048 \rightarrow K$, where $K$ is the number of object categories to be recognized through the minimization of a softmax loss. We implemented the LoAd network\footnote{https://github.com/blackecho/LoAd-Network} in Torch7 \cite{torch7}.
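The duplicate-fuse-pool-concatenate skeleton described above can be sketched as follows. This is a NumPy illustration of the data flow on random toy tensors, not the Torch7 code from the repository; `max_pool2d` is a naive helper written here for clarity:

```python
import numpy as np

def max_pool2d(x, k, s):
    """Naive max pooling with kernel k and stride s over the last two axes."""
    n, h, w = x.shape
    oh, ow = (h - k) // s + 1, (w - k) // s + 1
    out = np.empty((n, oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[:, i, j] = x[:, i*s:i*s+k, j*s:j*s+k].max(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
N, u = 8, 13                        # toy depth; AlexNet conv5 has N=256, u=13
C = rng.standard_normal((N, u, u))  # object feature maps (one branch)
W = np.maximum(0.0, rng.standard_normal((N, u, u)))  # domainness filters W_n

M = C * W                           # multiplicative fusion (second branch)
# adaptive pooling to 3x3 maps: kernel 5, stride 4 for a 13x13 input
pC, pM = max_pool2d(C, 5, 4), max_pool2d(M, 5, 4)
# concatenate the two branches and flatten towards fc 2048 -> 2048 -> K
features = np.concatenate([pC, pM], axis=0).reshape(-1)  # (2*N*3*3,) = (144,)
print(features.shape)
```

With the real depth $N=256$ the flattened vector has $2 \cdot 256 \cdot 3 \cdot 3 = 4608$ entries, which is what the first 2048-unit fully connected layer consumes.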
Element-wise multiplication and concatenation are basic operations in Torch and could easily be used to prepare the filter integration layers. The adaptive max pooling layer was implemented as an extension of Torch7, while the domain localization network was implemented starting from the code of \cite{gradcam}, modified according to the description provided above. We always start from ImageNet pre-trained models, so for both networks the initial layers up to the last convolutional one are frozen, while the remaining layers are either fine-tuned or trained from random initialization with a learning rate of $5\times 10^{-4}$ and a weight decay of $5\times 10^{-4}$ as regularizer. For the fully connected layers the regularization is obtained through a dropout of 0.6. All these hyperparameters were chosen using a validation set. The optimizer is Stochastic Gradient Descent with the momentum parameter \cite{momentum} set to 0.9 and the Nesterov method enabled \cite{nesterov-momentum}. We used a batch size of 256, and training was performed in parallel on four NVIDIA Titan X GPUs. \subsubsection{Domain Adaptation} The goal of domain adaptation is to produce good models on a target domain by training on labeled data from the source domain and leveraging unlabeled samples from the target domain as auxiliary information during training. The problem of reducing the domain shift between train and test data was first tackled in the area of natural language processing \cite{domain-adaptation-review} and in recent years has gained increasing attention in computer vision for addressing dataset bias in object recognition and detection \cite{DBLP:journals/corr/Csurka17}.
Although much more robust than previous learning technologies, deep learning is still affected by domain shift \cite{decaf}, in particular in the considered unsupervised setting, where no labeled target data are available and standard fine-tuning is not applicable \cite{finetune}. We can identify two main solution directions, one based on \emph{instance re-weighting} and the other on \emph{feature alignment}. In the first case, the basic approach consists in evaluating the similarity of source instances to the target with the aim of balancing their importance, or possibly sub-selecting them, before learning a model. Different measures of similarity have been proposed in combination with shallow learning methods for this weighting procedure \cite{KMM,landmarks,ChuDC13}. More recently, in \cite{Zeng2014} a deep autoencoder was trained to weight the importance of source samples by fitting the marginal distributions of target samples for pedestrian detection. The second adaptive solution, based on feature transformation and alignment, has been instantiated in a large number of ways, all based on searching for a common subspace that minimizes the difference between the corresponding domain distributions. Feature transformation was obtained through metric learning in \cite{office,Kulis2011} and PCA in \cite{Fernando2013b}, while multiple intermediate projection steps were considered in \cite{GongSSG12}. Alignment was also obtained in \cite{coral} by matching source and target covariance matrices. Deep learning architectures for object classification have been modified to accommodate the additional objective of minimizing a domain divergence measure \cite{Long:2015,dcoral,residual}. An alternative way to measure domain similarity is to discriminate between the domains and use the domain recognition loss in an adversarial min-max game while training for object classification \cite{DANN}.
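Among the feature-alignment approaches mentioned above, the covariance matching of \cite{coral} is simple enough to sketch. The snippet below is our own NumPy illustration of the idea (whiten the source features, then re-color them with the target covariance); the optional re-centering on the target mean is an assumption of this sketch, not part of the cited method:

```python
import numpy as np

def coral(Xs, Xt, eps=1e-6):
    """Align source features to the target second-order statistics
    (covariance alignment in the spirit of [coral], sketched)."""
    def mat_pow(C, p):
        # symmetric matrix power via eigendecomposition
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** p) @ vecs.T
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # whiten with Cs^{-1/2}, re-color with Ct^{1/2}
    return (Xs - Xs.mean(0)) @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xt.mean(0)

rng = np.random.default_rng(0)
Xs = rng.standard_normal((500, 4)) * np.array([1.0, 5.0, 1.0, 5.0])
Xt = rng.standard_normal((500, 4))
Xa = coral(Xs, Xt)
# after alignment the source covariance matches the target one
print(np.allclose(np.cov(Xa, rowvar=False), np.cov(Xt, rowvar=False), atol=1e-3))
```

A shallow classifier trained on the aligned source features then transfers better to the target, which is exactly the subspace-alignment intuition discussed in the text.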
Finally, a different solution based on the introduction of adaptive network layers for tunable batch normalization has shown high performance on several object classification benchmark datasets \cite{carlucci2017auto}. All the existing CNN-DA methods, with the notable exception of \cite{openset2017}, are restricted by the assumption that source and target samples share the same label space and that data for all the categories are provided at training time. In case only a subset of the target categories is available during training, a possible solution is to restrict the source category set as well, but this implies that the classifier will not cover all the classes and would have to be trained again if new unlabeled target samples of the remaining classes become available. Alternatively, the whole source can be used for adaptation with the partial target, but this unbalanced condition generally affects the final classification performance. Our work overcomes this limitation by proposing a modular approach that is robust with respect to variations in the number of classes from the source to the target domain. In the latest robotics literature much attention has been dedicated to adaptive methods for agents deployed in the real world but trained in simulation environments and with synthetic data produced or collected from free Web resources \cite{sim2real,abbeelIROS17, tzengWAFR16}. Although bridging the so-called reality gap is important to reduce the need for manually annotated data, robotic perception provides ample motivation for exploring domain adaptation methods even within real-world settings, when the visual conditions change \cite{WulfmeierIROS2017} or when transferring the knowledge acquired by one robot to another \cite{acrossRobots}.
In our work we focus on a single robot that needs to recognize object categories undergoing significant appearance changes due to scaling and translation, and we show that the corresponding domain gap can be reduced with a tailored, localized adaptive solution based on the identification of domain-invariant image regions. \vspace{2mm} \subsubsection{CNN Visual Analysis} One of the advantages of convolutional neural networks is that they can be made transparent by visualizing the regions of the input images that are important for the trained model. A widely adopted strategy to obtain such visualizations consists in starting from individual feature maps at any layer in the network and projecting them back towards the input image to identify the patches most responsible for strong activations \cite{Simonyan_ICLR_2014,guidedbackprop}. The produced heatmaps are informative at a high level but often not class-discriminative. A recent method named \emph{Class Activation Mapping} \cite{zhou2015cnnlocalization} replaces the fully connected layers of image classification CNN architectures with convolutional layers and global average pooling to obtain class-specific feature maps, but the network modifications tend to reduce accuracy in the final classification task. The \emph{Grad-CAM} approach introduced in \cite{gradcam} does not need any change in the network architecture: on the basis of class-specific gradients, it re-weights and combines the convolutional feature maps. This method produces class-discriminative localization heatmaps as good as those obtained with strategies that directly start from the input images and either use a mask-out procedure occluding specific object parts, or classify multiple image patches to collect class-wise scores per pixel \cite{zeiler13,Oquab:2014}.
However, differently from these last approaches, which need several passes per image through the network, Grad-CAM requires a single forward and partial backward pass per image, making it an order of magnitude more efficient. Recently, patch-occlusion CNN visualization has been used to search for the spatial roots of visual domain shift \cite{roots}, producing so-called \emph{domainness maps} that indicate the localization of domain-specific and domain-generic regions. Features at different levels of domain specificity were extracted and evaluated for cross-domain object classification with shallow models. In our work we propose a different and more efficient strategy for domain localization following \cite{gradcam}, and we integrate the obtained domainness maps into an end-to-end deep learning architecture. \begin{figure*}[tb] \centering \includegraphics[width=0.8\textwidth]{images/gradcam6.pdf} \caption{Outline of the Domain Localization Network that produces the domainness maps. Here we consider as an example the domain shift induced by object translation. As can be expected, domain-generic regions are localized in correspondence with the objects, while the background is domain-specific. Using the same notation as in Section \ref{sec:load}, we obtain a set of $W_n$ filters per class that guide the training of LoAd (see Fig. \ref{fig:load}). Figure adapted from \cite{gradcam}.} \label{fig:gradcam} \end{figure*} \section{Introduction} \label{sec:intro} \input{intro_new.tex} \section{Related Work} \label{sec:related} \input{related} \section{LoAd: Local Adaptive Network} \label{sec:load} \input{load_network} \section{Experiments} \label{sec:exper} \input{expers} \section{Conclusions} \label{sec:concl} \input{conclusions} \bibliographystyle{IEEEtran}
{ "timestamp": "2018-02-27T02:05:31", "yymm": "1802", "arxiv_id": "1802.08833", "language": "en", "url": "https://arxiv.org/abs/1802.08833" }
\section*{Introduction} The theory of Calabi-Yau algebras and categories has proven to be very important in symplectic topology and the study of topological field theories \cite{costello}, \cite{ks}, \cite{kv}, \cite{lurie}, \cite{cg}. One of the goals of this paper is to adapt this theory to the setting of spectra in stable homotopy theory, and to apply it to prove and explain a duality relationship between the string topology of a manifold, and the string topology of a classifying space of a compact Lie group. We also use this notion to study Lagrangian immersions of spheres. By way of background, recall that ``string topology" is a term that was originally coined by Chas and Sullivan in their seminal paper \cite{chassullivan}. In that paper this term referred to certain algebraic properties of the homology of the loop space of a closed, oriented manifold, $H_*(LM)$, that were the result of a type of intersection theory in $LM$. This intersection theory came about by studying the fibration $\Omega M \to LM \xrightarrow{ev} M$, where $ev$ evaluates a loop at $1 \in S^1$. Even though the loop space is itself infinite dimensional, the intersection theory defining the string topology operations is ultimately possible because of the finite dimensionality and compactness of $M$, as well as the fiberwise multiplicative properties of this fibration. Since that time the subject has expanded considerably. An important variation of the string topology intersection theory was described by Chataur and Menichi in \cite{chataurmenichi}, where they defined operations on the cohomology of the loop space of the classifying space of a compact Lie group, $LBG$. In this setting the analogous fibration, $G \to LBG \xrightarrow{ev} BG$, is studied, and the intersection theory defining these operations is possible because of the compactness of $G$ as well as the fiberwise multiplicative properties of this fibration.
A theory that includes both the string topology of a manifold and that of classifying spaces was developed in the setting of stacks in \cite{LUX} and \cite{BGNX}. In this setting the intersection theory is done in an appropriate algebraic geometric category. An observation that helped to shed light on this intersection theory was made by the first author and Klein in \cite{CohenKlein} when they classified ``umkehr maps" that satisfy appropriate naturality and linearity properties. This led to the observation that the ring spectrum $LM^{-TM}$, which was shown to realize the Chas-Sullivan loop product by the first author and Jones in \cite{cohenjones}, can be viewed as a twisted generalized cohomology theory evaluated on the manifold $M$. Specifically, if one takes the fiberwise suspension spectrum of the fibration $\Omega M \to LM \xrightarrow{ev} M$, and denotes the resulting parameterized spectrum by the notation $\Sigma^\infty(\Omega M_+) \to \Sigma^\infty_M(LM_+) \to M,$ then the result is a parameterized ring spectrum which defines a twisted cohomology theory $\mathcal{S}^\bullet_M$ from the category of spaces over $M$, $\mathcal{T}_M$, to an appropriate category of spectra. If $f : X \to M$ is an object in $\mathcal{T}_M$, then $\mathcal{S}_M^\bullet (X,f) = \Gamma_X(f^*(\Sigma^\infty_M(LM_+)))$, the spectrum of sections over $X$ of the pullback via $f$ of the parameterized spectrum $\Sigma^\infty_M(LM_+) \to M$. See \cite{CohenKlein} and \cite{MS} for details. Since $\Sigma^\infty_M(LM_+)$ is a parameterized ring spectrum, this spectrum of sections inherits a ring spectrum structure. Moreover it was proved in \cite{CohenKlein} that the value of this cohomology theory on the identity map $id : M \xrightarrow{=} M \in \mathcal{T}_M$ has the homotopy type $$ \mathcal{S}_M^\bullet (M) = \Gamma_M(\Sigma^\infty_M(LM_+)) \simeq LM^{-TM} $$ as ring spectra. This equivalence is a type of twisted Poincar\'e or Atiyah duality as explained in \cite{CohenKlein}.
Moreover one sees that the string topology intersection pairing (loop product) on $H_*(LM^{-TM}) \cong H_{*+n}(LM)$ corresponds, via this twisted Poincar\'e duality, to a generalized cup product pairing in the cohomology $\mathcal{S}_M^\bullet (M)$. This is a twisted generalization of the well-known phenomenon that the intersection product in $H_*(M)$ corresponds, up to sign, under traditional Poincar\'e duality to the cup product in $ H^*(M)$. As observed by Gruher and Salvatore \cite{gruhersalvatore}, the string topology product exists in the presence of any fiberwise monoid over a closed manifold, $Q \to E \to M$. Here $Q$ is a monoid, and the bundle $E$ comes equipped with a fiberwise product $E \times_M E \to E$ over $M$, consistent with the monoid structure of the fiber $Q$. In this case the Thom spectrum $E^{-TM}$ is a ring spectrum. It was also observed in \cite{gruhersalvatore} that principal bundles $G \to P \to M$ give rise to fiberwise monoids by taking the associated adjoint bundle, $G \to P^{Ad} \to M$ where $P^{Ad} = P \times_G G^{Ad}$. Here $G^{Ad}$ denotes $G$ with the left $G$-action given by conjugation. As observed in \cite{cjgauge}, the string topology of principal bundles over manifolds can also be represented by twisted cohomology theories. The representing parameterized spectrum is the fiberwise suspension spectrum $\Sigma^\infty (G_+) \to \Sigma^\infty_M (P^{Ad}_+) \to M$. Let $\mathcal{S}_P^\bullet$ denote the corresponding twisted cohomology theory. In particular $$ \mathcal{S}_P^\bullet (M) = \Gamma_M(\Sigma^\infty_M(P^{Ad}_+)) \simeq (P^{Ad})^{-TM}, $$ and the ring structure comes from a generalized cup product on $\Sigma^\infty_M(P^{Ad}_+)^\bullet (M)$. We refer to $\mathcal{S}_P^\bullet (-)$ as the ``{\it manifold string topology}" structure on the principal bundle $P$.
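To fix degrees, recall that on a closed, oriented $n$-manifold the Chas-Sullivan loop product has the form $$ H_p(LM) \otimes H_q(LM) \longrightarrow H_{p+q-n}(LM), $$ so that after the regrading $\mathbb{H}_*(LM) := H_{*+n}(LM) \cong H_*(LM^{-TM})$ it becomes an associative graded ring whose product has degree zero; this is the degree convention implicit in the identification $H_*(LM^{-TM}) \cong H_{*+n}(LM)$ above.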
This perspective on the string topology spectrum, $LM^{-TM}$, or more generally $ (P^{Ad})^{-TM}$, in terms of the sections of a parameterized spectrum was particularly useful in \cite{cjgauge}, where the units of these ring spectra were studied. In particular it was shown that the gauge group $\mathcal{G} (P)$ of the principal bundle acts naturally on the string topology spectrum, and so there is a homomorphism, $$ \mathcal{G} (P) \to GL_1((P^{Ad})^{-TM}) $$ which was studied and computed in \cite{cjgauge}. \medskip The first goal of this paper is to show that there is a dual construction for the string topology of the classifying space of a compact Lie group, to investigate this duality using a stable homotopy theoretic version of compact Calabi-Yau algebras, and to compute some of its properties, including gauge symmetry. \medskip We now state the results more precisely. Let $G$ be a compact Lie group, and let $G \to P \to X$ be a principal $G$-bundle. Here $X$ can be any space of the homotopy type of a $CW$-complex. It need not be finite. In particular, an important example is the universal principal bundle $G \to EG \to BG$. As before, let $G \to P^{Ad} \to X$ be the corresponding adjoint bundle. Recall that in the case of the universal bundle, $EG^{Ad} \simeq LBG$. Consider the fiberwise suspension spectrum, $$ \Sigma^\infty (G_+) \to \Sigma^\infty_X(P^{Ad}_+) \to X, $$ and let $\mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))$ be the fiberwise Spanier-Whitehead dual, as in \cite{MS}. This is a parameterized spectrum over $X$, whose fibers are the Spanier-Whitehead duals of the fibers of $\Sigma^\infty_X(P^{Ad}_+)$: $$ G^\vee \to \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+)) \to X, $$ where $G^\vee = Map (\Sigma^\infty (G_+), \mathbb{S})$. Here $\mathbb{S}$ denotes the sphere spectrum. Notice that $G^\vee$ is a coalgebra spectrum, with coalgebra structure dual to the ring structure on $\Sigma^\infty (G_+)$.
We denote the twisted homology theory associated to this parameterized spectrum by $\mathcal{S}^P_\bullet : \mathcal{T}_X \to Spectra$. The following will be proved in section 1. \medskip \begin{theorem}\label{main} The parameterized spectrum $\mathcal{D} (\Sigma^\infty (G_+)) \to \mathcal{D}(\Sigma^\infty_X(P^{Ad}_+)) \to X$ is a weak fiberwise coalgebra spectrum satisfying the following properties. \begin{enumerate} \item Let $f : Y\to X$ be an object in $\mathcal{T}_X$. Then the induced twisted homology $ \mathcal{S}^P_\bullet (Y, f) $ is a weak coalgebra spectrum. \item There is an equivalence of spectra, $$ \alpha : (P^{Ad})^{-T_{vert}} \xrightarrow{\simeq} \mathcal{S}^P_\bullet (X) $$ where $ (P^{Ad})^{-T_{vert}}$ is the Thom spectrum of minus the vertical tangent bundle $T_{vert} P^{Ad} \to P^{Ad}$. Furthermore a Pontrjagin-Thom construction gives $(P^{Ad})^{-T_{vert}}$ a natural coproduct which is taken by $\alpha$ to the coproduct in $\mathcal{S}^P_\bullet (X) $. \item If one takes the cohomology of the coalgebra spectrum, $H^*(\mathcal{S}^P_\bullet (Y, f) ; k )$ (here the coefficients are in a field $k$), one obtains a graded algebra, $$ H^*(\mathcal{S}^P_\bullet (Y) ; k) \otimes H^*(\mathcal{S}^P_\bullet (Y); k ) \to H^*(\mathcal{S}^P_\bullet (Y) ; k) $$ which we call the ``Lie group string topology algebra of $f^*(P)$". Using the equivalence in part 2, when the vertical tangent bundle $T_{vert} \to P^{Ad}$ is orientable one obtains a graded algebra whose product has degree $-d$, where $d = dim \, G$: $$ H^p(P^{Ad}) \otimes H^q(P^{Ad}) \to H^{p+q-d}(P^{Ad}). $$ \item In the case of the universal principal bundle $G \to EG \to BG$, this algebra is isomorphic to the algebra structure in the string topology of the classifying space $BG$ as described by Chataur and Menichi \cite{chataurmenichi}: $$ H^*(\mathcal{S}^{EG}_\bullet (BG)) \cong H^*(LBG).
$$ \end{enumerate} \end{theorem} \bigskip \noindent {\bf Comments:} \begin{enumerate} \item The notion of a ``weak" fiberwise coalgebra spectrum will be defined in section 1. \item We refer to the coalgebra spectrum $\mathcal{S}^P_\bullet (X) \simeq (P^{Ad})^{-T_{vert}}$ as the ``{\it Lie group string topology spectrum of the principal bundle $P$}". \item The equivalence $\alpha : (P^{Ad})^{-T_{vert}} \xrightarrow{\simeq} \mathcal{S}^P_\bullet (X) = \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))/X $ can be viewed as a fiberwise Atiyah duality, which on the level of fibers is the classical Atiyah equivalence \cite{atiyahdual}, $$\alpha : G^{-TG} \simeq \Sigma^{-\mathfrak{g}}(G_+) \xrightarrow{\simeq} G^\vee $$ where $G^{-TG}$ is the Thom spectrum of minus the tangent bundle, which is equivariantly equivalent to the desuspension of $\Sigma^\infty(G_+)$ by the adjoint representation of $G$ on the Lie algebra $\mathfrak{g}$. \item The fact that the cohomology algebra $H^*(LBG^{-T_{vert}}) \cong H^{*+d}(LBG)$ is the string topology of classifying spaces was proved by Gruher in \cite{gruher}. \end{enumerate} \bigskip Once this theorem is established we restrict to the situation where we have a principal $G$-bundle over a closed manifold: $G \to P \to M$. In this case we can study both the ``manifold string topology structure" of $P$, that is, the twisted cohomology theory $$\mathcal{S}_P^\bullet (M) = \Gamma_M(\Sigma^\infty_M(P^{Ad}_+)) \simeq (P^{Ad})^{-TM}$$ as well as the ``Lie group string topology structure" of $P$, which is to say the twisted homology theory $$ \mathcal{S}^P_\bullet (M) \xrightarrow{\simeq} (P^{Ad})^{-T_{vert}}.$$ The following is a consequence of Theorem \ref{main} as well as Gruher's work \cite{gruher}. \medskip \begin{corollary}\label{frobenius} Let $G \to P \to M$ be a principal bundle where $G$ is a compact Lie group of dimension $d$ and $M$ is a closed manifold of dimension $n$.
The string topology spectra $\mathcal{S}_P^\bullet (M) \simeq (P^{Ad})^{-TM}$ and $ \mathcal{S}^P_\bullet (M) \simeq (P^{Ad})^{-T_{vert}}$ are Spanier--Whitehead dual to each other, with the algebra structure of the former corresponding to the coalgebra structure of the latter under this duality. When one applies homology, this gives $H_*(P^{Ad})$ the structure of a Frobenius algebra of dimension $n-d$. The multiplication in this Frobenius algebra comes from the manifold string topology, and the comultiplication comes from the Lie group string topology. \end{corollary} \medskip In section 2 we will define the notion of a ``twisted compact Calabi-Yau" ring spectrum (``twisted cCY"), which can be viewed as a strengthened, derived version of a Frobenius algebra in the category of spectra. This definition is adapted from the notion of a ``compact Calabi-Yau algebra" defined by Kontsevich and Soibelman \cite{ks}, as a way of studying two dimensional topological field theories. (We note that Kontsevich and Soibelman used different terminology for this concept.) Related notions were defined by Costello \cite{costello} and Lurie \cite{lurie}. In these definitions, the algebra (or ring spectrum) involved must satisfy a finiteness condition called ``compactness". In the spectrum setting this means that the spectrum is a perfect module over the sphere spectrum. In our definition of this structure in the setting of spectra, a key role is played by a ``twisting bimodule" over the compact ring spectrum. The following is the main result of this section. \medskip \begin{theorem}\label{Frob} Let $G \to P \to M$ be a principal bundle with compact Lie group fiber and closed manifold base. Then the manifold string topology $\mathcal{S}_P^\bullet (M)$ naturally admits the structure of a twisted, compact Calabi-Yau ring spectrum of dimension $n-d$. The Lie group string topology spectrum $ \mathcal{S}^P_\bullet (M)$ is the twisting bimodule spectrum in this structure.
Moreover if $E_*$ is a generalized homology theory with respect to which both the vertical tangent bundle $T_{vert} \to P^{Ad}$ and the tangent bundle $TM \to M$ are oriented, then the Calabi-Yau structure on $\mathcal{S}_P^\bullet (M)$ induces a Frobenius algebra structure on the homology of the manifold string topology, $E_*(\mathcal{S}_P^\bullet (M))$, whose dual is the homology of the Lie group string topology spectrum, $E_*(\mathcal{S}^P_\bullet (M))$. \end{theorem} \medskip In \cite{cjgauge} an action of the gauge group $\mathcal{G} (P)$ of the principal bundle $G \to P \to M$ on the manifold string topology spectrum $\mathcal{S}_P^\bullet (M) = (P^{Ad})^{-TM}$ was described and computed. In section 3 we use Theorems \ref{main} and \ref{Frob} to describe a similar action of $\mathcal{G} (P)$ on the Lie group string topology spectrum $\mathcal{S}^P_\bullet (M) = (P^{Ad})^{-T_{vert}}$. We also show that this gauge symmetry respects the Calabi-Yau structure. See Theorem \ref{gaugesym} below for a precise statement. We then compute some explicit examples of this gauge symmetry. In section 4 we introduce the related notion of twisted \sl smooth \rm Calabi-Yau ring spectra. Smoothness is a different smallness property from compactness. A ring spectrum $A$ is \sl smooth \rm if it is perfect as a bimodule over itself. That is, it is perfect as a left $(A \wedge A^{op})$-module spectrum. The spectrum notion of a ``twisted sCY" structure is adapted from the notion of ``sCY" algebras and categories, that was first defined by Kontsevich and Vlassopoulos \cite{kv} and used by the first author and Ganatra in \cite{cg} to compare the string topology topological field theory to the Floer symplectic field theory of cotangent bundles. In the spectral theory a twisting bimodule spectrum plays an important role. We show that this structure occurs in certain Thom spectra of virtual bundles over the based loop space of a manifold, $\Omega M$.
That is, we prove the following theorem: \begin{theorem}\label{smooth} Let $M$ be a closed manifold, and $f : M \to BBO$ be a map to a delooping of $BO$. Here, by Bott periodicity we may take $BBO$ to be the infinite homogeneous space $SU/SO$. Consider the induced map of loop spaces, $ \Omega f : \Omega M \to BO$. Then its Thom spectrum, which we denote by $(\Omega M)^{\Omega f}$, naturally admits the structure of a twisted, smooth Calabi-Yau ring spectrum. \end{theorem} \medskip \noindent \bf Remark. \rm When $f : M \to BBO$ is the constant map, this theorem implies that the suspension spectrum $\Sigma^\infty (\Omega M_+)$ has the structure of a twisted sCY ring spectrum. This strengthens a result of the first author and Ganatra in \cite{cg} saying that the singular chain complex $C_*(\Omega M)$ admits the structure of a smooth Calabi-Yau differential graded algebra. \medskip Also in section 4, we describe how these ring spectra arise naturally in the study of Lagrangian immersions. In particular, for the case of spheres, we combine the results of Abouzaid and Kragh \cite{ak} with those of the first author, Blumberg, and Schlichtkrull \cite{BCS} to prove the following (see Theorem \ref{lagTHH} for a more precise statement). \medskip \begin{theorem} Associated to a Lagrangian immersion $\phi : S^n \to T^*S^n$ there is a loop map $\Omega \alpha_\phi : \Omega S^n \to BU$. If the Lagrangian immersion $\phi$ is Lagrangian isotopic to a Lagrangian embedding, then there is an equivalence of topological Hochschild homology spectra, $$ THH ((\Omega S^n)^{\Omega \alpha_\phi}) \simeq THH(\Sigma^\infty (\Omega S^n_+)). $$ \end{theorem} \medskip We then use this theorem, together with homotopy theoretic results about the image of the $J$-homomorphism, to recast results in \cite{ak} giving examples of Lagrangian immersions of spheres that are not Lagrangian isotopic to embeddings, but \sl are \rm smoothly isotopic to embeddings.
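For orientation, we recall the standard relationship between topological Hochschild homology and free loop spaces that underlies these statements: for a connected space $X$ there is an equivalence $$ THH(\Sigma^\infty (\Omega X_+)) \simeq \Sigma^\infty (LX_+), $$ the spectrum-level analogue of Goodwillie's identification of the Hochschild homology of $C_*(\Omega X)$ with $H_*(LX)$.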
\medskip \medskip Finally in section 5 we describe this structure from the perspective of topological Hochschild (co)homology. More specifically, let $G \to P \to M$ be a smooth principal bundle, where $G$ is a compact Lie group and $M$ is a smooth, closed manifold. Let $h : \Omega M \to G$ be the holonomy of a connection on $P$. This induces a map of ring spectra $h : \Sigma^\infty (\Omega M_+) \to \Sigma^\infty (G_+)$. Thus $h$ defines bimodule structures on $\Sigma^\infty (G_+)$ over $\Sigma^\infty (\Omega M_+)$. The main result of this section is the following. \medskip \begin{theorem}\label{hochschild} We have the following equivalences involving topological Hochschild homology $THH_\bullet$ and topological Hochschild cohomology $THH^\bullet$. \begin{enumerate} \item $THH_\bullet(\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq \Sigma^\infty(P^{Ad}_+)$ \\ \item $ THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq (P^{Ad})^{-TM} \simeq \mathcal{S}_P^\bullet (M). $ \quad \text{This equivalence is one of ring spectra.} \\ \item $THH_\bullet(\Sigma^\infty (\Omega M_+), G^\vee) \simeq (P^{Ad})^{-T_{vert}} \simeq \mathcal{S}_\bullet^P(M).$ \quad \text{This equivalence is one of coalgebra spectra.} \end{enumerate} \end{theorem} \medskip We end by describing the twisted Calabi-Yau structure on the string topology spectrum from the perspective of these topological Hochschild homology spectra. A consequence of the resulting duality properties is the following: \medskip \begin{corollary}\label{hochschild2} If $M$ is oriented, there is a nondegenerate bilinear form on Hochschild homology, $$HH_*(C_*(\Omega M), C_*(G)) \times HH_*(C_*(\Omega M), C_*(G)) \to k. $$ That is, this Hochschild homology space is self-dual. \end{corollary} \paragraph{Acknowledgments.} The second author would like to thank Arpon Raksit for illuminating conversations.
\section{A twisted homology theory representing Lie group string topology } The goal of this section is to describe Lie group string topology as a twisted generalized homology theory, and to prove Theorem \ref{main}. The main issue in proving this theorem is to describe a parameterized form of Atiyah duality. We begin by recalling the specific map yielding the Atiyah duality between the Thom spectrum of minus the tangent bundle of a closed manifold $M$, and the Spanier-Whitehead dual of $M$ \cite{atiyahdual}, \cite{cohenatiyah}. \medskip Let $M^n$ be a closed $n$-dimensional manifold and $e : M \hookrightarrow \mathbb{R}^k$ be an embedding into Euclidean space with normal bundle $\eta_e \to M$. By the tubular neighborhood theorem, for sufficiently small $\epsilon > 0$, the open set $\nu_\epsilon (e) \subset \mathbb{R}^k$ consisting of points within a distance of $\epsilon$ of $e(M)$ can be identified with the total space of $\eta_e$. Consider the map \begin{align}\label{alex} \alpha : \left( \mathbb{R}^k - \nu_\epsilon (e)\right) \times M & \to \mathbb{R}^k - B_\epsilon (0) \simeq S^{k-1} \notag \\ (v, y) &\longrightarrow v - e(y) \end{align} where $B_\epsilon (0)$ is the open ball of radius $\epsilon$. This map induces the Alexander duality isomorphism $$ \begin{CD} \tilde H_q(\mathbb{R}^k - e(M)) \cong \tilde H_q(\mathbb{R}^k - \nu_\epsilon (e)) @>\cong >> \tilde H^{k-q-1}(M). \end{CD} $$ Atiyah duality \cite{atiyahdual} is induced by the same map: \begin{align} M^{\eta_e} \wedge M_+ \cong (\mathbb{R}^k \times M )/\left( (\mathbb{R}^k - \nu_\epsilon (e)) \times M \right) &\longrightarrow \mathbb{R}^k /( \mathbb{R}^k - B_\epsilon (0)) \quad \cong S^k \notag \\ (v, y) &\longrightarrow v-e(y).
\end{align} The adjoint of this map gives a map from the Thom space of $\eta_e$ to the mapping space, $ \alpha : M^{\eta_e} \longrightarrow Map(M, S^k) $ which defines the Atiyah duality equivalence of spectra, \begin{equation}\label{atdual} \alpha : M^{-TM} \longrightarrow Map(M, \mathbb{S}). \end{equation} Here this notation refers to the mapping spectrum from the suspension spectrum $\Sigma^\infty (M_+)$ to the sphere spectrum $\mathbb{S}$. This is the Spanier-Whitehead dual of $M$, and will be denoted by $M^\vee$. Indeed in \cite{cohenatiyah} the first author constructed a symmetric ring spectrum (without unit), $M^{-TM}$. The $k^{th}$ space of this spectrum is equivalent, through a range of dimensions that increases with $k$, to the Thom space $M^{\eta_e}$ and is constructed by allowing the embeddings and the choices of $\epsilon$ to vary. The $k^{th}$ space of the mapping spectrum $Map(M,\mathbb{S})$ has the homotopy type of $Map(M, S^k)$. It was shown in \cite{cohenatiyah} that the map $\alpha$ induces an equivalence of symmetric ring spectra. We refer the reader to \cite{cohenatiyah} for details. We now pass to the parameterized setting. Our goal is to describe a parameterized form of this Atiyah duality equivalence. Let $G \to P \to X$ be a principal bundle with compact Lie group fiber. By the fiberwise duality theorem of May and Sigurdsson (Theorem 15.1.1 of \cite{MS}), the parameterized suspension spectrum $\Sigma^\infty (G_+) \to \Sigma^\infty_X(P^{Ad}_+) \to X$ is (fiberwise) dualizable because each fiber spectrum is dualizable. This in turn is because every fiber spectrum is equivalent to $\Sigma^\infty (G_+) $, which is dualizable since $G$ is compact. The parameterized Spanier-Whitehead dual is what we called $G^\vee \to \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+)) \to X$ in the introduction. The construction in \cite{MS} is quite general; in this particular case, however, we will describe this fiberwise dual explicitly.
The spectra we work with will be orthogonal spectra, and when we describe a group action, we use $RO(G)$-indexed orthogonal spectra. We refer the reader to \cite{MM} for details. Recall that $P^{Ad} = P \times_G G^{Ad}$. Let $V$ be a finite dimensional orthogonal representation of $G$, and let $S^V = V \cup \infty$ be the one-point compactification, where the $G$-action fixes $\infty$. The conjugation action of $G$ on itself defines an action of $G$ on $Map (G, S^V)$, \begin{align}\label{gact} g \cdot \phi : G &\to S^V\\ h &\to g\phi (g^{-1}hg). \notag \end{align} This defines an $RO(G)$-graded $G$-spectrum, which we call $G^\vee$. We define the parameterized spectrum $\mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))$ as an $RO(G)$-graded spectrum. For a representation $W$, the $W$-space is defined to be \begin{equation}\label{ksp} \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))_W = P \times_G Map(G, S^W) \end{equation} which fibers over $X = P/G$ with fiber $ Map(G, S^W) = Map (G, S^k)$, where $k = dim \, W$. The fiberwise suspension by a representation $U$ is given by $\Sigma^U_X( \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))_W ) = P\times_G \left( S^U\wedge Map(G, S^W)\right)$, and the structure map $ \epsilon_U : \Sigma^U_X( \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))_W ) \to \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))_{W\oplus U} $ is induced by the $G$-equivariant map \begin{align} \epsilon_U : S^U \wedge (Map (G, S^W)) &\to Map (G, S^{W\oplus U})\notag \\ \epsilon_U (t\wedge \phi)(g) &= \phi (g) \wedge t . \notag \end{align} \medskip Notice that since the multiplication map $G \times G \to G$ is equivariant with respect to the adjoint action (the action on $G\times G$ is diagonal), the induced comultiplication in the Spanier-Whitehead dual spectrum $G^\vee \to G^\vee \wedge G^\vee$ is also equivariant, and so induces a weak fiberwise coalgebra structure on the parameterized spectrum $\mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))$.
By a \sl ``weak fiberwise coalgebra" \rm structure on a parameterized spectrum, we simply mean the following. \begin{definition}\label{coalgebra} A parameterized spectrum $ E \to \mathcal{E} \to X$ is a ``weak fiberwise coalgebra" if there is a ``comultiplication" map $\gamma : \mathcal{E} \to \mathcal{E} \wedge_X \mathcal{E}$ and a ``counit" $\eta : \mathcal{E} \to \mathbb{S}_X$ in the category of parameterized spectra over $X$, that satisfy the usual co-associativity and co-unit properties up to homotopy. No coherence conditions on the homotopies are assumed. Here $\mathbb{S} \to \mathbb{S}_X \to X$ is the parameterized sphere spectrum. Namely the $n^{th}$ space of $\mathbb{S}_X$ is $X \times S^n$. \end{definition} Notice that given a fiberwise coalgebra spectrum $E \to \mathcal{E} \to X$, then for any object $f : Y\to X$ in $\mathcal{T}_X$, the twisted homology spectrum $\mathcal{E}_\bullet (Y, f) = \mathcal{E}/X$ is an ordinary coalgebra spectrum. \medskip The source of the parameterized Atiyah duality map is a parameterized Thom spectrum. More precisely, let $e : G \subset V$ be an equivariant embedding of $G$ with its conjugation action into a finite dimensional $G$-representation $V$. Let $k = dim \, V$. Let $\nu_\epsilon (e) $ be an equivariant tubular neighborhood as above. It is equivariantly diffeomorphic to the normal bundle $\eta_V\to G$. (We are suppressing the embedding $e$ from the notation.) We let $\eta_V^{vert} \to P^{Ad}$ be the vector bundle \begin{equation} \eta_V^{vert} = P \times_G \eta_V \to P\times_G G^{Ad} = P^{Ad} \end{equation} The fiberwise Thom space of this bundle is homeomorphic to the fiberwise one-point compactification of the tubular neighborhood, $$ P \times_G G^{\eta_V} \cong P \times_G (\nu_\epsilon (e) \cup \infty). 
$$ Notice also that there is a map from the fiberwise suspension, $$ \epsilon_W : \Sigma^W_X (P \times_G G^{\eta_V}) = P \times_G \left(S^W \wedge G^{\eta_V}\right) \xrightarrow{\simeq} P \times_G G^{\eta_{V\oplus W}}. $$ This data defines an $RO(G)$-graded parameterized spectrum $ P \times_G G^{-TG}$ over $X$ whose $W^{th}$ space is $\Omega^{V\oplus W} (P \times_G G^{\eta_{V\oplus W}})$. Here, for a representation $U$, $\Omega^U$ refers to the $U$-fold loop space, $Map_\bullet (S^U, - )$. Furthermore, the Atiyah duality map described above defines a map $\alpha : G^{\eta_V} \cong \nu_\epsilon (e) \cup \infty \to Map (G, S^V)$. This map is equivariant, and so defines Atiyah duality maps $\bar{\alpha_V} : P \times_G G^{\eta_V} \to P \times_G Map (G, S^V)$. These maps respect the spectrum structure maps and so prove the following: \medskip \begin{lemma}\label{paramAtiya} The maps $\bar{\alpha_V}$ define an equivalence of parameterized spectra over $X$, $$ \bar \alpha : P \times_G G^{-TG} \xrightarrow{\simeq} \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+)). $$ \end{lemma} \medskip The map $\bar \alpha$ therefore defines an equivalence of the generalized twisted homology theories these parameterized spectra represent. Given an object $f : Y \to X$ in $\mathcal{T}_X$, the twisted homology represented by the parameterized spectrum $\mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))$ is what we called $\mathcal{S}_\bullet^P(Y, f)$ in the introduction. The twisted homology represented by the parameterized spectrum $P \times_G G^{-TG}$ is given by the (ordinary) spectrum $f^*(P)_+ \wedge_G G^{-TG}$, which is the Thom spectrum of the virtual bundle $-f^*(T_{vert})$ over $f^*(P^{Ad})$, where $T_{vert} \to P^{Ad}$ is the vertical tangent bundle.
Applying this to $X$ itself, we have the equivalence of spectra, \begin{equation}\label{-tvert} \bar{\alpha} : (P^{Ad})^{-T_{vert}} \xrightarrow{\simeq}\mathcal{S}^P_{\bullet}(X). \end{equation} \medskip Now recall that given any map $\phi : M \to N$ between closed manifolds, the Pontrjagin-Thom construction defines a map $\tau_\phi : N^{-TN} \to M^{-TM}$ making the following diagram of spectra homotopy commute: \begin{equation}\label{atiyahcompat} \begin{CD} N^{-TN} @>\tau_\phi >> M^{-TM} \\ @V\alpha V\simeq V @V\simeq V\alpha V \\ N^\vee @>> \phi^\vee > M^\vee \end{CD} \end{equation} Applying this to the multiplication map $\mu : G \times G \to G$, we get a homotopy commutative diagram $$ \begin{CD} G^{-TG} @>\tau_\mu >> G^{-TG}\wedge G^{-TG} \\ @V\alpha V\simeq V @V\simeq V\alpha V \\ G^\vee @>> \mu^\vee > G^\vee \wedge G^\vee \end{CD} $$ Given the adjoint action of $G$ on itself, and the diagonal adjoint action of $G$ on $G \times G$, the multiplication map $\mu : G \times G \to G $ is equivariant. Therefore there is an induced fiberwise coproduct on the parameterized spectrum $P \times_G G^{-TG}$, as there is on $ \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))$. We now verify that the induced map $ \bar \alpha : P \times_G G^{-TG} \xrightarrow{\simeq} \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+))$ preserves these coproducts. We do this by studying the definition of the maps involved more carefully. Toward this end let $e : G \subset V$ be an equivariant embedding of $G$ with its conjugation action into a finite dimensional $G$-representation $V$, as above. Let $k = dim \, V$. We then have an induced composition of equivariant embeddings, \begin{equation} G \times G \xrightarrow{\mu \times e \times e} G \times V \times V \xrightarrow{e \times 1 \times 1} V \times V \times V. \end{equation} Recall that the tangent bundle of $G$ has an equivariant trivialization $TG \cong G \times \frak{g}$ where $\frak{g}$ is the Lie algebra with its adjoint action.
Differentiating $e : G \hookrightarrow V$ at the identity gives a linear equivariant embedding $\frak{g} \hookrightarrow V$. We let $\frak{g}^\perp$ be the orthogonal complement with its induced action. The total space of the normal bundle of $G \times V \times V \xrightarrow{e \times 1 \times 1} V \times V \times V$ is clearly equivariantly isomorphic to $G \times \frak{g}^\perp \times V \times V$. We perform the Pontrjagin-Thom construction on the induced (equivariant) embedding of the restriction of the total space $$ (\mu \times e \times e)^* (G \times \frak{g}^\perp \times V \times V) \hookrightarrow G \times \frak{g}^\perp \times V \times V. $$ This is a codimension $2k-d$ embedding. The Pontrjagin-Thom construction gives an equivariant map $$ \tau_\mu^{V\times V\times V} : G_+ \wedge S^{\frak{g}^\perp} \wedge S^V \wedge S^V \longrightarrow (G\times G)_+ \wedge S^{\frak{g}^\perp} \wedge S^{\frak{g}^\perp} \wedge S^V, $$ or equivalently, $$ \tau_\mu^{V\times V\times V} : G^{\eta_e} \wedge S^V \wedge S^V \longrightarrow G^{\eta_e} \wedge G^{\eta_e} \wedge S^V. $$ This defines the map $$ \tau_\mu^{V\times V\times V} : P \times_G \left(G^{\eta_e} \wedge S^V \wedge S^V \right) \longrightarrow P \times_G \left( G^{\eta_e} \wedge G^{\eta_e} \wedge S^V\right). $$ Similarly, the Atiyah duality map, which as discussed above is defined via a Pontrjagin-Thom collapse, is an equivariant map $$ \alpha : G^{\eta_e} \wedge S^V \wedge S^V \to Map (G, S^V) \wedge S^V \wedge S^V, $$ which induces a map $$ \alpha : P \times_G \left(G^{\eta_e} \wedge S^V \wedge S^V \right) \longrightarrow P \times_G \left(Map (G, S^V) \wedge S^V \wedge S^V\right). 
$$ The compatibility of these Pontrjagin-Thom maps implies that the following diagram commutes: \begin{equation} \begin{CD} P \times_G \left(G^{\eta_e} \wedge S^V \wedge S^V \right) @>\tau_\mu^{V\times V\times V}>> P \times_G \left( G^{\eta_e} \wedge G^{\eta_e} \wedge S^V\right) \\ @V\alpha VV @VV\alpha V \\ P \times_G \left(Map (G, S^V) \wedge S^V \wedge S^V\right) @>>\mu^\vee > P \times_G \, \left(Map (G \times G, S^V \wedge S^V) \wedge S^V\right). \end{CD} \end{equation} Passing to spectra, this says that the following diagram of parameterized spectra over $X$ homotopy commutes: \begin{equation} \begin{CD} P \times_G G^{-TG} @>\tau_\mu >> P \times_G G^{-TG}\wedge G^{-TG} \\ @V\alpha V\simeq V @V\simeq V\alpha V \\ P\times_G G^\vee @>> \mu^\vee > P\times_G (G^\vee \wedge G^\vee) \end{CD} \end{equation} Or, written with the notation used above, the following diagram of parameterized spectra over $X$ homotopy commutes: \begin{equation} \begin{CD} P \times_G G^{-TG} @>\tau_\mu >> P \times_G G^{-TG} \wedge_X P \times_G G^{-TG} \\ @V\bar \alpha V\simeq V @V\simeq V\bar \alpha V \\ \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+)) @>> \mu^\vee > \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+)) \wedge_X \mathcal{D} (\Sigma^\infty_X(P^{Ad}_+)). \end{CD} \end{equation} In other words, the induced map $$\bar \alpha : (P^{Ad})^{-T_{vert}} \xrightarrow{\simeq} \mathcal{S}^P_\bullet (X) $$ respects coproducts up to homotopy. This completes the proof of parts (1) and (2) of Theorem \ref{main}. Part (3) of Theorem \ref{main} follows from part (1) and the Thom isomorphism applied to the vertical tangent bundle $T_{vert} \to P^{Ad}$. The algebra structure on $H^*(P^{Ad})$ was first discovered by Gruher in \cite{gruher}. The main point of part (1) of Theorem \ref{main} is that it realizes the work of Gruher on the level of parameterized spectra and the induced twisted homology theory.
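\medskip \noindent {\bf Remark.} To make the degree bookkeeping in part (3) explicit (a routine check, valid with field coefficients, or whenever the Thom isomorphism for $T_{vert}$ applies): since $T_{vert} \to P^{Ad}$ has rank $d$, the Thom isomorphism gives
$$
H^*\left((P^{Ad})^{-T_{vert}}\right) \cong H^{*+d}(P^{Ad}),
$$
so the coproduct on $\mathcal{S}^P_\bullet(X)$ dualizes to a product of the form
$$
H^i(P^{Ad}) \otimes H^j(P^{Ad}) \longrightarrow H^{i+j-d}(P^{Ad}).
$$
\medskip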
In \cite{gruher} it was shown that in the case of the universal bundle $G \to EG \to BG$, the algebra structure on $H^*(P^{Ad}) \cong H^*(LBG)$ (or equivalently the coalgebra structure on $H_*(LBG)$) is isomorphic to the Lie group string topology algebra of Chataur-Menichi \cite{chataurmenichi}. This completes the proof of Theorem \ref{main}. \section{Twisted, compact Calabi-Yau ring spectra and the duality between manifold and Lie group string topology} The goal of this section is to study duality phenomena in the string topology of a principal bundle $G \to P \to M$, where $G$ is a compact, $d$-dimensional Lie group, and $M$ is a closed, $n$-dimensional manifold. More specifically, our goal is to study the duality between the manifold string topology and the Lie group string topology in this setting. To do this we describe the notion of ``twisted, compact Calabi-Yau ring spectra" and show how the string topology of such a principal bundle has this structure. This notion is a lifting, to the category of spectra, of the notion of ``Calabi-Yau" algebras and categories as defined by Costello \cite{costello}, Kontsevich and his collaborators \cite{ks}, \cite{kv}, Lurie \cite{lurie}, and the author and Ganatra \cite{cg}. Our first result is the following: \begin{theorem}\label{SWdual} For a principal bundle $G \to P \to M$, where $G$ is a compact Lie group and $M$ is a closed manifold, the manifold string topology spectrum $\mathcal{S}_P^\bullet (M)$ and the Lie group string topology spectrum $\mathcal{S}^P_\bullet (M)$ are Spanier-Whitehead dual. Under this duality the ring spectrum structure of $\mathcal{S}_P^\bullet (M)$ corresponds to the coalgebra structure of $\mathcal{S}^P_\bullet (M)$.
\end{theorem} \begin{proof} Recall from \cite{cjgauge} (restated in the introduction above) that the manifold string topology $\mathcal{S}_P^\bullet$ is the twisted cohomology theory corresponding to the fiberwise suspension spectrum $\Sigma^\infty (G_+) \to \Sigma^\infty_M (P^{Ad}_+) \to M$. Using a particular version of Poincar\'e duality proven by Klein in \cite{klein} (called ``Atiyah duality" in this paper), in \cite{CohenKlein} the author and Klein showed that $$ \mathcal{S}_P^\bullet (M) = \Gamma_M(\Sigma^\infty_M(P^{Ad}_+)) \simeq (P^{Ad})^{-TM}, $$ and the ring structure comes from a generalized cup product in this (twisted) cohomology theory arising from the fiberwise ring structure of this parameterized spectrum. Furthermore, Theorem \ref{main} above states that the Lie group string topology $\mathcal{S}^P_\bullet$ is the twisted homology theory corresponding to the fiberwise Spanier-Whitehead dual spectrum, $G^\vee \to \mathcal{D}(\Sigma^\infty_M(P^{Ad}_+)) \to M$. It was also shown that this is a fiberwise coalgebra spectrum whose coalgebra structure is (fiberwise) Spanier-Whitehead dual to the ring structure of the parameterized spectrum $ \Sigma^\infty (G_+) \to \Sigma^\infty_M (P^{Ad}_+) \to M$. Finally, it was shown that there is a coproduct-preserving equivalence of spectra, $$ (P^{Ad})^{-T_{vert}} \xrightarrow{\simeq} \mathcal{S}^P_\bullet (M). $$ We remark that the fact that the Thom spectra $(P^{Ad})^{-TM} \simeq \mathcal{S}_P^\bullet (M)$ and $(P^{Ad})^{-T_{vert}} \simeq \mathcal{S}^P_\bullet (M)$ are Spanier-Whitehead dual follows from classical Atiyah duality \cite{atiyahdual}. This completes the proof. \end{proof} \medskip \noindent \bf Remark.
\rm When $M$ is oriented, one can apply the two Thom isomorphisms, $$H_*(P^{Ad}) \cong H_{*-n}((P^{Ad})^{-TM}) \quad \text{and} \quad H_*(P^{Ad}) \cong H_{*-d}((P^{Ad})^{-T_{vert}}).$$ The Spanier-Whitehead duality above then yields a Frobenius algebra structure on $H_*(P^{Ad})$ as discovered by Gruher \cite{gruher}. \bigskip We now strengthen this result by proving that in this situation, i.e., for a principal bundle $G \to P \to M$, where $G$ is a compact Lie group and $M$ is a closed, smooth manifold, the spectrum $(P^{Ad})^{-TM}$ is a ``twisted, compact Calabi-Yau ring spectrum". The notions of Calabi-Yau differential graded algebras or $A_\infty$ algebras or (higher) categories were introduced in \cite{costello}, \cite{lurie}, \cite{ks}, \cite{kv} because of their connections with two-dimensional topological field theories. This notion can be viewed as a derived version of a Frobenius algebra. This will be made precise in Proposition \ref{cCYFrob} below. In this paper we lift these ideas to the category of spectra, where we must deal with ``twisted" versions of these notions in order to get many interesting examples. We actually introduce two versions of twisted Calabi-Yau ring spectra: a compact version and a smooth version. This follows the ideas of Kontsevich and his collaborators \cite{ks} and \cite{kv}, who worked with $A_\infty$ algebras over a field of characteristic zero, and of the author and Ganatra \cite{cg}, who worked with $A_\infty$-algebras or categories over arbitrary fields. \medskip We begin with the notion of a \sl ``twisted, compact, Calabi-Yau" \rm ring spectrum. \bigskip \medskip Recall that a compact $E_1$-ring spectrum $R$ is one that is perfect as an $\mathbb{S}$-module.
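\medskip \noindent {\bf Remark.} A basic example, recorded here only to fix ideas: for $M$ a closed manifold, the dual $M^\vee$ is compact. Indeed, choosing an embedding $M \subset \mathbb{R}^N$ with normal bundle $\nu$, Atiyah duality gives
$$
M^\vee \simeq M^{-TM} \simeq \Sigma^{-N} M^{\nu},
$$
and the Thom space $M^\nu$ is a finite CW complex, so $M^\vee$ is perfect as an $\mathbb{S}$-module. \medskip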
\begin{definition} A \bf ``twisted, compact Calabi-Yau ring spectrum" \sl (twisted $cCY$) \rm of dimension $n$ is a triple $(R, Q, t)$, where $R$ is a compact ring spectrum, $Q$ is an $R$-bimodule which is compact as an $\mathbb{S}$-module, and has the same $\mathbb{Z}/2$-homology as $R$: $$H_*(Q; \mathbb{Z}/2) \cong H_*(R; \mathbb{Z}/2).$$ We refer to $Q$ as the ``twisting" bimodule. If $Q = R$ we say that $R$ has \sl trivial \rm twisting. $$t : THH(R; Q) \to \Sigma^{-n}\mathbb{S}$$ is a map of spectra we call the ``$n$-dimensional \sl trace map" \rm that has the following duality property: The pairing defined by the composition $$ \langle \, , \, \rangle : R \wedge Q \xrightarrow{\mu} Q \hookrightarrow THH (R; Q) \xrightarrow{t} \Sigma^{-n}\mathbb{S} $$ is nondegenerate in the sense that the adjoint $R \to \Sigma^{-n}Q^\vee$ is an equivalence of $R$-bimodule spectra. Here $\mu : R\wedge Q \to Q$ is the module structure, $Q \hookrightarrow THH (R; Q)$ is the inclusion of the spectrum of zero simplices, and $Q^\vee$ is the Spanier-Whitehead dual of $Q$, which exists because of the compactness assumption. \end{definition} \medskip The following observation is an immediate consequence of the definition. \begin{proposition}\label{coalg} Let $(R, Q, t)$ be a twisted compact Calabi-Yau ring spectrum. Then the duality between $R$ and $Q$ defined by the nondegenerate pairing $\langle \, , \, \rangle $ defines a coalgebra structure on the twisting bimodule $Q$, whose co-product is Spanier-Whitehead dual to the product in the ring structure $R$. \end{proposition} \medskip The main applications of compact Calabi-Yau ring spectra occur in the presence of orientations. We now define what we mean by this. \medskip \begin{definition} \rm Let $(R, Q, t)$ be a twisted, $cCY$ ring spectrum of dimension $n$, and let $E$ be a ring spectrum representing a homology theory $E_*$. 
An \sl ``$E_*$-orientation" \rm of $(R, Q, t)$ is a pair $(u, \tilde t_E)$, where $$ u : Q \wedge E \xrightarrow{\simeq} R \wedge E $$ is an equivalence of the $E_*$-homology spectra as $R$-bimodules. Here $R$ acts trivially on $E$. $$\tilde t_E : THH(R; \, R)_{hS^1}\wedge E \to \Sigma^{-n} E $$ is an $E$-module map from the homotopy orbit spectrum of the $S^1$-action induced by the cyclic structure, which factors the trace map $t$ in $E$-homology. That is, the induced trace map $t_E = t \wedge 1 : THH(R; \, Q)\wedge E \to \Sigma^{-n}\mathbb{S}\wedge E$ is homotopic to the composition \begin{align} t_E : THH(R; \, Q)\wedge E \xrightarrow{u} THH(R ; \, R)\wedge E &\xrightarrow{project} THH(R; \, R)_{hS^1}\wedge E \\ &\xrightarrow{\tilde t_E} \Sigma^{-n}E. \end{align} \end{definition} \medskip When $E = Hk$, the Eilenberg-MacLane spectrum for a field $k$, a twisted, compact Calabi-Yau ring spectrum $(R, Q, t)$ together with an $Hk$-orientation $(u, \tilde t_{Hk})$ defines a compact Calabi-Yau algebra structure on the singular chains with $k$-coefficients, $C_*(R; k)$, as defined in \cite{kv} and in \cite{cg}. \medskip The following gives a precise relation between twisted $cCY$-ring spectra and Frobenius algebras. \medskip \begin{proposition}\label{cCYFrob} Let $(R, Q, t) $ be a twisted $cCY$ ring spectrum of dimension $n$, and let $E$ be a ring spectrum representing a homology theory $E_*$ with respect to which $(R, Q, t)$ has orientation $(u, \tilde t_E)$. Then $R\wedge E$ is a Frobenius algebra over $E$ of dimension $n$. That is, the pairing \begin{align} \langle \, , \, \rangle : (R\wedge E) \wedge (R\wedge E) &\xrightarrow{multiply} R\wedge E \xrightarrow{\iota} THH(R\wedge E, R\wedge E) \notag \\ &\xrightarrow{project}THH(R\wedge E, R\wedge E)_{hS^1} \xrightarrow{\tilde t_E} \Sigma^{-n}E \end{align} is a nondegenerate pairing of $E$-modules.
Here $\iota : R\wedge E \hookrightarrow THH(R\wedge E, R\wedge E)$ is the inclusion of the spectrum of $0$-simplices. ``Nondegeneracy" means that the adjoint of this pairing, $$ R\wedge E \to Rhom_E(R\wedge E, \Sigma^{-n}E) $$ is an equivalence of $E$-modules. \end{proposition} \begin{proof} It is easily checked from the definition of orientation that the pairing $\langle \, , \, \rangle$ defined above is homotopic to the composition $$ (R\wedge E) \wedge (R\wedge E) \xrightarrow {1\wedge u^{-1}} (R\wedge E) \wedge (Q\wedge E) \xrightarrow{\mu} Q \wedge E \hookrightarrow THH (R; Q)\wedge E \xrightarrow{t_E = t\wedge 1} \Sigma^{-n}\mathbb{S} \wedge E. $$ But this pairing is nondegenerate by the definition of the twisted Calabi-Yau structure. \end{proof} \medskip We now give two important examples of twisted $cCY$ ring spectra. \medskip \noindent \bf Example 1. \rm The first example shows how ordinary Poincar\'e or Atiyah duality fits the definition of twisted compact Calabi-Yau. \medskip \begin{proposition} Let $M$ be a closed $n$-dimensional manifold. Then its Spanier-Whitehead dual, $M^\vee$, which, by Atiyah duality, is equivalent to $M^{-TM}$, comes naturally equipped with the structure of a twisted $cCY$ ring spectrum of dimension $n$. \end{proposition} \begin{proof} The suspension spectrum $\Sigma^\infty (M_+)$ can be viewed as an $M^\vee$-bimodule in the usual way. Notice that since, by Atiyah duality, $M^\vee$ is equivalent to $M^{-TM}$, the Thom isomorphism gives $$ H_*(\Sigma^\infty (M_+); \mathbb{Z}/2) \cong H_{*-n}(M^\vee ; \mathbb{Z}/2). $$ So we let $R = M^\vee$ and let the twisting bimodule $Q = \Sigma^{-n}\Sigma^\infty (M_+)$, which we simply denote $\Sigma^{-n}(M_+)$. In order to define the $n$-dimensional trace map on $THH(R; Q)$, we first study its homotopy type. This is a simplicial spectrum of finite type. That is, for each $k$, the spectrum of $k$-simplices is a finite spectrum.
For such a simplicial spectrum $\mathbb{X}_\bullet$ we define its Spanier-Whitehead dual $\mathbb{X}^\vee$ to be the totalization of the cosimplicial spectrum whose spectrum of $k$-simplices is the Spanier-Whitehead dual $\mathbb{X}_k^\vee = Map (\mathbb{X}_k, \mathbb{S}).$ We then have the following result. \medskip \begin{lemma} For $M$ a closed $n$-manifold, $R = M^\vee$, and $Q = \Sigma^{-n}(M_+)$, the Spanier-Whitehead dual of $THH(R; Q)$ is given by $$ THH(R; Q)^\vee \simeq \Sigma^n LM^{-TM}. $$ \end{lemma} \medskip \begin{proof} Note that $$ THH(R;Q)_k = R^{(k)} \wedge Q = (M^k)^\vee \wedge \Sigma^\infty(M_+)\wedge S^{-n}. $$ Therefore in the cosimplicial spectrum $THH(R; Q)^\vee$, the spectrum of $k$-simplices is given by $$ THH(R;Q)^\vee_k = \Sigma^\infty (M^k_+)\wedge M^\vee \wedge S^{n}. $$ The coface maps are determined by the coalgebra structure of $\Sigma^\infty (M_+)$ defined by the diagonal map of $M$, as well as the bi-comodule structure of $M^\vee$, which up to homotopy can be described by the maps $$ M^\vee \simeq M^{-TM} \to M_+ \wedge M^{-TM} \simeq M_+ \wedge M^\vee \quad \text{and} \quad M^\vee \simeq M^{-TM} \to M^{-TM} \wedge M_+ = M^\vee \wedge M_+. $$ These maps are the maps of Thom spectra induced by the diagonal $M \to M \times M$. This cosimplicial spectrum is the $n$-fold suspension of the cosimplicial spectrum studied in \cite{cohenjones}, where it was shown to have totalization equivalent to $LM^{-TM}$. \end{proof} \medskip \noindent {\bf Remark.} Notice that the inclusion of the spectrum of zero simplices, $$ \Sigma^{-n}(M_+) \hookrightarrow THH(R; Q) $$ is Spanier-Whitehead dual to the map $$ \Sigma^n LM^{-TM} \xrightarrow{eval.} \Sigma^nM^{-TM} \simeq \Sigma^n M^\vee $$ induced on Thom spectra by the usual evaluation fibration $LM \to M$.
\medskip One way of thinking of the $n$-dimensional trace map $t : THH(R; Q) \to \Sigma^{-n}\mathbb{S}$ is that it is Spanier-Whitehead dual to the $n$-fold suspension of the unit map in the ring structure of $LM^{-TM}$: $$ \Sigma^n \mathbb{S} \to\Sigma^n LM^{-TM}. $$ More concretely, notice that the augmentation map of $R$, $$\epsilon : R = M^\vee \to \mathbb{S}$$ and the map induced by sending all of $M$ to the non-basepoint $$p: \Sigma^{-n}(M_+) \to \Sigma^{-n}\mathbb{S}$$ define a map $$ t : THH(R; Q) = THH (M^\vee; \Sigma^{-n}(M_+)) \xrightarrow{(\epsilon, \, p)} THH(\mathbb{S}; \Sigma^{-n}\mathbb{S}) = \Sigma^{-n}\mathbb{S}. $$ The reader can now check that the composition $$ M^\vee \wedge \Sigma^{-n}(M_+) \xrightarrow{\mu} \Sigma^{-n}(M_+) \to THH (M^\vee; \Sigma^{-n} (M_+)) \xrightarrow{t} \Sigma^{-n}\mathbb{S} $$ is simply the $n$-fold desuspension of the duality map, and therefore is nondegenerate. This proves that $(M^\vee, \Sigma^{-n}(M_+), t)$ is a twisted, compact Calabi-Yau ring spectrum of dimension $n$. \end{proof} \medskip We now consider orientations. Let $E$ be any ring spectrum representing a generalized homology theory with respect to which $M$ is oriented. The Thom isomorphism then defines an equivalence $$ u : \Sigma^{-n}(M_+) \wedge E \xrightarrow{\simeq} M^{-TM} \wedge E \simeq M^\vee \wedge E $$ which is clearly an equivalence of $M^\vee$-bimodules. Again, consider the augmentation map $\epsilon : M^\vee \to \mathbb{S}$. Now the orientation induces a Thom class map $\tau : M^\vee \simeq M^{-TM} \to \Sigma^{-n}E$. These maps define a composition $$ \tilde t_E: THH(M^\vee; M^\vee)_{hS^1}\wedge E \xrightarrow{(\epsilon, \tau)} THH(\mathbb{S}; \mathbb{S})_{hS^1}\wedge \Sigma^{-n}E \simeq \Sigma^{-n}(BS^1_+)\wedge E \xrightarrow{p\wedge 1} \Sigma^{-n} E $$ where $p : BS^1_+ \to S^0$ is the projection map.
We leave it to the reader to check that the composition $$ THH (M^\vee, \Sigma^{-n} (M_+))\wedge E \xrightarrow{u} THH (M^\vee, M^\vee) \wedge E \xrightarrow{projection} THH (M^\vee, M^\vee)_{hS^1} \wedge E \xrightarrow{\tilde t_E} \Sigma^{-n}E $$ is equivalent to $t\wedge 1 : THH (M^\vee, \Sigma^{-n} (M_+))\wedge E \to \Sigma^{-n}\mathbb{S} \wedge E$. This proves that the pair $(u, \tilde t_E)$ defines an orientation of the twisted $cCY$ structure on $M^\vee$ with respect to $E$. \medskip \noindent {\bf Remark.} The above discussion together with Proposition \ref{cCYFrob} implies that if $M^n$ is an oriented closed manifold, $M^\vee \wedge H\mathbb{Z}$ is a Frobenius algebra over the Eilenberg-MacLane spectrum $H\mathbb{Z}$. Using the Atiyah duality equivalence $M^\vee \simeq M^{-TM}$ we see that $M^{-TM} \wedge H\mathbb{Z} \simeq \Sigma^{-n}(M_+ \wedge H\mathbb{Z})$ is a Frobenius algebra. The multiplication reflects the classical intersection product on the level of chains, $C_{*+n}(M; \mathbb{Z})$. The comultiplication comes from the diagonal, $M \to M\times M$. \medskip \noindent \bf Example 2. \rm The following example supplies the main ingredient for the proof of Theorem \ref{Frob} as stated in the introduction. \medskip \begin{proposition}\label{ptmcy} Let $G \to P \to M$ be a principal bundle where $G$ is a compact Lie group of dimension $d$ and $M$ is a closed manifold of dimension $n$. Then the manifold string topology ring spectrum $R = \mathcal{S}^\bullet_P(M) \simeq (P^{Ad})^{-TM}$ naturally admits the structure of a twisted, compact Calabi-Yau ring spectrum of dimension $n-d$. \end{proposition} \begin{proof} We need to produce the twisting module $Q$ and a trace map $t : THH (R; Q) \to \Sigma^{d-n}\mathbb{S}$. For the twisting module we take the Lie group string topology spectrum $Q = \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \simeq \Sigma^{d-n}(P^{Ad})^{-T_{vert}}$.
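With this choice, the mod $2$ homology comparison required by the definition is a routine degree count, which we record for convenience: the (unoriented) Thom isomorphism gives
$$
H_*\left((P^{Ad})^{-TM}; \mathbb{Z}/2\right) \cong H_{*+n}(P^{Ad}; \mathbb{Z}/2)
$$
and
$$
H_*\left(\Sigma^{d-n}(P^{Ad})^{-T_{vert}}; \mathbb{Z}/2\right) \cong H_{*-(d-n)+d}(P^{Ad}; \mathbb{Z}/2) = H_{*+n}(P^{Ad}; \mathbb{Z}/2).
$$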
The fact that $R$ and $Q$ have isomorphic mod-$2$ homology follows from the Thom isomorphism. The fact that $Q$ is indeed an $R$-bimodule follows from the Spanier-Whitehead duality of $R = \mathcal{S}^\bullet_P(M) $ and $\Sigma^{n-d}Q = \mathcal{S}^P_\bullet (M)$ established in Theorem \ref{SWdual}, reflecting Gruher's work \cite{gruher}. The bimodule structure of $Q$ over $R$ is then the dual of the bimodule structure of $R$ over itself. Notice also that $R = \mathcal{S}^\bullet_P (M) \simeq (P^{Ad})^{-TM}$ has an augmentation $\epsilon : R \to \mathbb{S}$. To see this consider the following diagram, which we view as a map of principal bundles. The right vertical sequence is thought of as a principal bundle with trivial group. $$ \begin{CD} G @>>> \{id \} \\ @VVV @VVV \\ P @>>> M \\ @VVV @VV=V \\ M @>>=> M \end{CD} $$ This defines a map of twisted cohomology ring spectra $$ \mathcal{S}_P^\bullet (M) \longrightarrow \mathcal{S}_M^\bullet (M) $$ or equivalently, $$ \Gamma_M(\Sigma^\infty_M(P^{Ad}_+)) \longrightarrow Map(\Sigma^\infty (M_+), \mathbb{S}) = M^\vee. $$ The augmentation is then given by $$ \epsilon : R = \mathcal{S}_P^\bullet (M) \to M^\vee \to \mathbb{S} $$ where the second map in this composition is the augmentation of $M^\vee \to \mathbb{S}$. Notice that the above diagram also defines a map of bimodules, $$ Q = \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \simeq \Sigma^{d-n}(P^{Ad})^{-T_{vert}} \to \Sigma^{d-n}\mathcal{S}^M_\bullet (M) = \Sigma^{d-n} (M_+). $$ Composing this map with the projection $p : \Sigma^{d-n}(M_+) \to \Sigma^{d-n} \mathbb{S}$ defines a map $u : Q = \Sigma^{d-n} \mathcal{S}^P_\bullet (M) \to \Sigma^{d-n}\mathbb{S}$. Putting these maps together gives a map of topological Hochschild homologies, $$ t : THH(R, Q) \xrightarrow{(\epsilon, u)} THH(\mathbb{S}, \Sigma^{d-n}\mathbb{S}) = \Sigma^{d-n}\mathbb{S}. 
$$ We leave it to the reader to verify that the pairing defined by the composition \begin{equation}\label{pairing} \langle \, , \, \rangle : R\wedge Q = \mathcal{S}_P^\bullet (M) \wedge \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \xrightarrow{\mu} Q = \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \hookrightarrow THH(R, Q) \xrightarrow{t} \Sigma^{d-n}\mathbb{S} \end{equation} is the duality map given by Theorem \ref{SWdual} above. It is therefore nondegenerate. This proves that the triple $(\mathcal{S}_P^\bullet (M) , \Sigma^{d-n}\mathcal{S}^P_\bullet (M), t)$ is a twisted compact Calabi-Yau ring spectrum of dimension $n-d$. \end{proof} \medskip Notice that Propositions \ref{ptmcy}, \ref{coalg}, and \ref{cCYFrob} imply both Corollary \ref{frobenius} and Theorem \ref{Frob} as stated in the introduction. \section{Gauge symmetry} In this section we continue considering a principal bundle $G \to P \xrightarrow{p} M$, where $G$ is a compact Lie group of dimension $d$, and $M^n$ is a closed manifold of dimension $n$. Recall that the \sl gauge group \rm $\mathcal{G} (P)$ of the bundle $P$ is the group of $G$-equivariant bundle automorphisms of $P$ living over the identity of $M$. Said another way, let $G \to \mathcal{A} ut^G (P) \to M$ be the fibration whose fiber over $x \in M$ is the group of $G$-equivariant automorphisms of the fiber $p^{-1}(x)$. This bundle is a fiberwise group, and the gauge group is the group of sections $$ \mathcal{G} (P) = \Gamma_M(\mathcal{A} ut^G(P)). $$ Now a standard exercise shows that the bundle $\mathcal{A} ut^G(P)$ is isomorphic to the adjoint bundle $G \to P^{Ad} \to M$. Thus we may identify $$ \mathcal{G} (P) = \Gamma_M(P^{Ad}). $$ In \cite{cjgauge} a fiberwise stabilization map was defined and studied: \begin{equation}\label{stab1} \rho : \Sigma^\infty (\mathcal{G} (P)_+) = \Sigma^\infty (\Gamma_M(P^{Ad})) \to \Gamma_M(\Sigma^\infty(P^{Ad}_+)) \simeq (P^{Ad})^{-TM} = \mathcal{S}^\bullet_P(M). 
\end{equation} $\rho$ is a map of ring spectra and also defines a map to the group of units of the (manifold) string topology ring spectrum \begin{equation}\label{stab2} \rho : \mathcal{G} (P) \to GL_1(\mathcal{S}^\bullet_P(M)). \end{equation} In \cite{cjgauge} this map was studied and computed in several important cases. Now recall from Proposition \ref{ptmcy} that in the twisted compact Calabi-Yau structure of $ \mathcal{S}^\bullet_P(M)$, the twisting bimodule is given by a suspension of the Lie group string topology spectrum, $Q = \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \simeq \Sigma^{d-n}(P^{Ad})^{-T_{vert}}$. In particular the Lie group string topology spectrum $\mathcal{S}^P_\bullet (M)$ inherits a coalgebra structure. One of the goals of this section is to show that there is a similarly defined and compatible gauge symmetry on this spectrum. We also show how these actions are related, and describe different perspectives on this action. We then compute two examples of this gauge symmetry. \medskip \begin{theorem}\label{gaugesym} The twisting bimodule structure on the Lie group string topology spectrum, $Q = \Sigma^{d-n}\mathcal{S}^P_\bullet (M) $ has a natural action of the gauge group $\mathcal{G} (P)$. That is, $\Sigma^{d-n}\mathcal{S}^P_\bullet (M) $ is a module spectrum over the ring spectrum $\Sigma^\infty (\mathcal{G} (P)_+)$. Furthermore this action is compatible with the gauge symmetry on the manifold string topology spectrum $R = \mathcal{S}^\bullet_P (M)$ via its twisted Calabi-Yau duality pairing $$ \langle \, , \, \rangle : R\wedge Q = \mathcal{S}^\bullet_P (M) \wedge \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \to \Sigma^{d-n}\mathbb{S} $$ as defined in the proof of Proposition \ref{ptmcy} (see Equation \ref{pairing}). That is, the adjoint equivalence $$ \mathcal{S}^\bullet_P (M) \xrightarrow{\simeq} \mathcal{S}^P_\bullet (M)^\vee $$ is equivariant with respect to the gauge symmetry of these spectra. 
\end{theorem} \begin{proof} Recall that $\mathcal{S}^P_\bullet (M)$ is the generalized homology associated to the parameterized spectrum $$ G^\vee \to \mathcal{D} (\Sigma^\infty_M(P^{Ad}_+)) \to M. $$ We may take $\mathcal{D} (\Sigma^\infty_M(P^{Ad}_+))$ to be the parameterized spectrum whose $k^{th}$ space over $M$ is the fibration $$ Map (G, S^k) \to P \times_G Map (G, S^k) \to M $$ where the action of $G$ on $Map (G, S^k)$ is the dual of the adjoint action as described in (\ref{gact}). This is because the spaces $Map(G,S^k)$ with this action form the underlying naive $G$-spectrum of $G^\vee$, on which the homotopy theory of $G^\vee$ as a $\Sigma^\infty(G_+)$-module is determined. This fibration has a canonical section $$ \sigma : M = P \times_G point = P \times_G \epsilon \hookrightarrow P \times_G Map (G, S^k).$$ Here $\epsilon : G \to S^k$ is the constant map at the basepoint $(1, 0, \cdots, 0) \in S^k$. Then the $k^{th}$-space of the generalized homology spectrum $$ \mathcal{S}^P_\bullet (M) = \mathcal{D} (\Sigma^\infty_M(P^{Ad}_+)) / \sigma (M) $$ is given by $$ P_+\wedge_G Map (G, S^k). $$ The structure maps are given by $$ \Sigma (P_+\wedge_G Map (G, S^k)) = P_+\wedge_G \Sigma(Map (G, S^k)) \xrightarrow{1 \wedge s} P_+\wedge_G Map (G, S^{k+1}) $$ where $s : \Sigma(Map(G, S^k)) \to Map(G, \Sigma S^k)$ is given by $s(t, \phi)(g) = t\wedge\phi (g)$. Now the bundle $p : P \times_G Map (G, S^k) \to M$ is $\mathcal{G} (P)$-equivariant with respect to the following action. Let $\phi \in \mathcal{G} (P) = \Gamma_M(P^{Ad})$, and let $(y, \theta) \in P \times Map (G, S^k)$ represent an element in $P \times_G Map (G, S^k)$. Then \begin{equation}\label{act} \phi \cdot (y, \theta) = (y, h\cdot \theta) \end{equation} where $h \in G$ is the unique element so that $\phi (p(y, \theta)) \in P \times_G G^{Ad}$ is represented by $(y, h) \in P \times G$. 
The reader can check that this action is well-defined, and that the section $\sigma(M = P \times_G\epsilon)$ consists of fixed points of this action. It therefore descends to a $\mathcal{G} (P)$-action on $P_+\wedge_G Map (G, S^k)$. These actions (one for each $k$) clearly respect the structure maps and therefore define an action of $\mathcal{G} (P)$ on the spectrum $\mathcal{D} (\Sigma^\infty_M(P^{Ad}_+)) / \sigma (M) = \mathcal{S}^P_\bullet (M)$. Now, as seen in Corollary \ref{frobenius}, the Lie group string topology spectrum $\mathcal{S}^P_\bullet (M)$ is Spanier-Whitehead dual to the manifold string topology spectrum $\mathcal{S}_P^\bullet (M) = \Gamma_M(\Sigma^\infty_M(P^{Ad}_+))$. The action of the gauge group on this ring spectrum is given by the stabilization representation (\ref{stab1}), (\ref{stab2}), and it is immediate that the gauge symmetry defined on the Lie group string topology spectrum $\mathcal{S}^P_\bullet (M)$ in (\ref{act}) is the dual action. This implies that, with respect to the twisted Calabi-Yau duality pairing $$ \langle \, , \, \rangle : R\wedge Q = \mathcal{S}^\bullet_P (M) \wedge \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \to \Sigma^{d-n}\mathbb{S} $$ the corresponding adjoint equivalence, $$ \mathcal{S}^\bullet_P (M) \xrightarrow{\simeq} \mathcal{S}^P_\bullet (M)^\vee $$ is equivariant with respect to the gauge symmetry of these spectra. \end{proof} \bigskip We now study examples of this gauge symmetry and describe this symmetry from different perspectives. \medskip \noindent {\bf Example 1}. Consider the $U(1)$ Hopf bundle $$ U(1) \to P= S^{2n+1} \to \mathbb{C}\mathbb{P}^n. $$ Since $U(1)$ is abelian, the adjoint bundle $P^{Ad}$ is trivial, $U(1) \to \mathbb{C}\mathbb{P}^n \times U(1) \to \mathbb{C}\mathbb{P}^n.$ Therefore the gauge group is given by the mapping group, $$ \mathcal{G} (P) = Map (\mathbb{C}\mathbb{P}^n, U(1)).
$$ Also, the fiberwise suspension spectrum $\Sigma^\infty (U(1)_+) \to \Sigma^\infty_{\mathbb{C}\mathbb{P}^n}(P^{Ad}_+) \to \mathbb{C}\mathbb{P}^n$ is given by the trivially parameterized spectrum $$ \Sigma^\infty (U(1)_+) \to \bc \bp^n_+ \wedge \Sigma^\infty (U(1)_+) \to \bc \bp^n. $$ (By the ``trivially parameterized spectrum $ \bc \bp^n_+ \wedge \Sigma^\infty (U(1)_+)$" we mean the parameterized spectrum whose $k^{th}$ space is the trivial fibration $ \bc \bp^n \times \Sigma^k(U(1)_+) \to \bc \bp^n$.) The twisted cohomology theory this parameterized spectrum represents is therefore actually untwisted, and so the defining spectrum of sections is the mapping spectrum, $$ R = \mathcal{S}^\bullet_P(\bc \bp^n) = Map(\Sigma^\infty(\bc \bp^n_+), \Sigma^\infty(U(1)_+)). $$ This is an $E_\infty$-ring spectrum because the source of the mapping spectrum is an $E_\infty$-coalgebra spectrum and the target is an $E_\infty$-ring spectrum. The action of the gauge group $\mathcal{G} (P) = Map (\mathbb{C}\mathbb{P}^n, U(1))$ is then given by the map of ring spectra \begin{equation}\label{stab} \Sigma^\infty (\mathcal{G} (P)_+) = \Sigma^\infty (Map (\mathbb{C}\mathbb{P}^n, U(1))_+) \xrightarrow{\sigma} Map(\Sigma^\infty(\bc \bp^n_+), \Sigma^\infty(U(1)_+)) = \mathcal{S}^\bullet_P(\bc \bp^n) = R \end{equation} where $\sigma$ is the obvious stabilization map. The role of stabilization in understanding gauge symmetry on manifold string topology spectra was studied in general in \cite{cjgauge}. Now consider the gauge symmetry on the bimodule $Q = \Sigma^{1-2n}\mathcal{S}^P_\bullet(\bc \bp^n) = \Sigma^{1-2n}(S^{2n+1}_+\wedge_{U(1)}U(1)^\vee)$, where the action of $U(1)$ on the Spanier-Whitehead dual $U(1)^\vee$ is (the dual of) the conjugation action. Again, since $U(1)$ is abelian this action is trivial, so $$ Q = \Sigma^{1-2n}(\bc \bp^n_+ \wedge U(1)^\vee).
$$ Of course $U(1) \cong S^1$, so $U(1)^\vee \simeq \Sigma^\infty (S^{-1} \vee S^0).$ By Spanier-Whitehead duality, the action of the gauge group $\mathcal{G} (P)$ is given by composing the stabilization map described above (\ref{stab}) $$ \sigma : \Sigma^\infty (\mathcal{G} (P)_+) \to R $$ with the $R$-bimodule action on the desuspension of its dual, $Q$, as described in the proof of Proposition \ref{ptmcy}. \bigskip Before we move on to another example, we consider the action of the gauge group on the level of Thom spectra. The point is that the Calabi-Yau ring spectrum in question, $R \simeq (P^{Ad})^{-TM}$, and the twisting bimodule, $Q \simeq \Sigma^{d-n}(P^{Ad})^{-T_{vert}}$, are both Thom spectra. To understand the induced gauge symmetry on these Thom spectra, we first observe that the gauge group actually acts on the space $P^{Ad}$, and the actions on the Thom spectra are induced from it. Let $G \to P \xrightarrow{p} M$ be a principal bundle with $M$ a closed $n$-manifold and $G$ a compact Lie group of dimension $d$. By abuse of notation we also write $p : P^{Ad} \to M$ for the projection map of the induced bundle. Let $\phi \in \mathcal{G} (P) = \Gamma_M(P^{Ad})$. Since $\phi$ is a section of $P^{Ad}$, for $y \in P^{Ad}$, $\phi (p(y))$ and $y$ live in the same fiber over $M$. That is, $p(\phi (p(y))) = p(y) \in M$. Thus the pair $(\phi (p(y)), y)$ lies in the fiber product $P^{Ad} \times_M P^{Ad}$. Since $P^{Ad}$ is a fiberwise group, we can compose with the fiberwise multiplication $\mu : P^{Ad} \times_M P^{Ad} \to P^{Ad}$ to produce an element $\phi \cdot y = \mu (\phi (p(y)), y) \in P^{Ad}$. The map \begin{align} \mathcal{G} (P) \times P^{Ad} &\to P^{Ad} \notag \\ (\phi, y) &\mapsto \phi \cdot y \notag \end{align} defines an action of the gauge group on $P^{Ad}$. This in fact makes the fiber bundle $G \to P^{Ad} \to M$ a $\mathcal{G} (P)$-equivariant bundle.
That is, the following diagram commutes: \begin{equation}\label{commutes} \begin{CD} \mathcal{G} (P) \times P^{Ad} @>\cdot >> P^{Ad} \\ @VVV @VVpV \\ M @>>=> M \end{CD} \end{equation} where the left vertical arrow composes the projection map $\mathcal{G} (P) \times P^{Ad} \to P^{Ad}$ with the bundle map $P^{Ad} \to M$. Therefore this action induces an action on any (virtual) vector bundle over $P^{Ad}$ that is pulled back from a bundle over $M$. In particular, on the level of Thom spectra, there is an induced action $$ \mathcal{G}(P)_+ \wedge (P^{Ad})^{-TM} \to (P^{Ad})^{-TM}. $$ This is easily seen to be equivalent to the action of $\mathcal{G} (P)$ on $\mathcal{S}^\bullet_P(M) \simeq (P^{Ad})^{-TM}$ described above. \medskip We now observe that the gauge symmetry on the Lie group string topology spectrum $\mathcal{S}^P_\bullet (M) \simeq (P^{Ad})^{-T_{vert}}$ described above can also be viewed in terms of the space level action of $\mathcal{G} (P)$ on $P^{Ad}$. This is a consequence of the following observation. \medskip \begin{proposition} Let $Act : \mathcal{G} (P) \times P^{Ad} \to P^{Ad}$ be the action map described above. Then there is an isomorphism of virtual bundles over $\mathcal{G} (P) \times P^{Ad}$, $$ \mathcal{G} (P) \times -T_{vert} P^{Ad} \xrightarrow{\cong} Act^*(-T_{vert} P^{Ad}). $$ \end{proposition} \begin{proof} We first observe that the commutativity of diagram (\ref{commutes}) says that there is an isomorphism of vector bundles over $\mathcal{G} (P) \times P^{Ad}$, $$ \mathcal{G} (P) \times p^*(TM) \cong Act^*(p^*(TM)). $$ Notice also that there is an isomorphism of vector bundles, $D : \mathcal{G} (P) \times TP^{Ad} \xrightarrow{\cong} Act^*(TP^{Ad})$, where $TP^{Ad} \to P^{Ad}$ is the tangent bundle. The isomorphism is given by differentiation of the action. Now notice that there is an induced isomorphism of virtual bundles, which by abuse of notation we call $-D : \mathcal{G} (P) \times -TP^{Ad} \xrightarrow{\cong} Act^*(-TP^{Ad})$. 
This is defined by the composition of isomorphisms \begin{align} Act^*(-TP^{Ad}) &= -Act^*(TP^{Ad}) \notag \\ &\cong -(\mathcal{G} (P) \times TP^{Ad}) \quad \text{by the above}, \notag \\ &= \mathcal{G} (P) \times -TP^{Ad}. \notag \end{align} Now, using the fact that $$ -T_{vert} P^{Ad} \cong -TP^{Ad} \oplus p^*(TM) $$ we have that \begin{align} Act^*(-T_{vert} P^{Ad}) &\cong Act^*(-TP^{Ad}) \oplus Act^*(p^*(TM)) \notag \\ &\cong \mathcal{G} (P) \times \left(-TP^{Ad} \oplus p^*(TM)\right) \quad \text{by the above,} \notag \\ &\cong \mathcal{G} (P) \times -T_{vert} P^{Ad}. \end{align} \end{proof} \medskip The last explicit example of this gauge symmetry will be one that was studied initially in \cite{cjgauge}. \bigskip \noindent {\bf Example 2.} Consider the principal $SU(2)$-bundle over an oriented $4$-dimensional sphere, $$ SU(2) \to P_k \to S^4 $$ having second Chern class $c_2(P_k) = k \in H^4(S^4) \cong \mathbb{Z}.$ In this case we restrict our attention to the \sl based \rm gauge group $\mathcal{G}^b(P_k)$ which is defined to be the kernel of the homomorphism, \begin{align} \mathcal{G}(P_k) &\to SU(2) \notag \\ \phi &\mapsto \phi (\infty). \notag \end{align} Here we are thinking of $S^4$ as the one point compactification, $\mathbb{R}^4 \cup \infty$. \medskip In the case $k=1$ the (based) gauge symmetry on the manifold string topology ring spectrum $\mathcal{S}^\bullet_{P_k}(S^4)$ was studied in \cite{cjgauge}. We now observe that the argument presented in \cite{cjgauge} quickly extends to $P_k$ for all $k$, and we then show how it gives an understanding of the gauge symmetry of the Lie group string topology spectrum $\mathcal{S}^{P_k}_\bullet (S^4)$ as well. \medskip As has become standard notation, given a ring spectrum $R$, let $GL_1(R)$ denote the ``group of units" of $R$.
More precisely, $GL_1(R)$ is defined so that the following diagram of spaces is homotopy cartesian: \medskip \begin{equation}\label{units} \begin{CD} GL_1(R) @>>> \Omega^\infty (R) \\ @VVV @VVcomponents V \\ \pi_0 (R)^\times @>>\hookrightarrow > \pi_0(R) \end{CD} \end{equation} Here $ \pi_0(R)$ is the discrete ring of components and $ \pi_0 (R)^\times$ is its group of units. \medskip In other words, $GL_1(R)$ consists of those path components of the zero space $\Omega^\infty (R)$ consisting of homotopy invertible elements. An action of a group $G$ on a ring spectrum $R$ (i.e.\ a $\Sigma^\infty (G_+)$-module structure on $R$) is induced by an $A_\infty$-morphism (``representation") $$ \rho : G \to GL_1(R). $$ (See, for example, \cite{lind}.) To understand the gauge symmetry on the manifold string topology spectrum $\mathcal{S}^\bullet_{P_k}(S^4)$ we therefore want to describe the representation \begin{equation}\label{rep} \mathcal{G} (P_k) \to GL_1(\mathcal{S}^\bullet_{P_k}(S^4)). \end{equation} \medskip Now as observed in \cite{cjgauge}, the group-like monoid $GL_1(\mathcal{S}^\bullet_{P_k}(S^4))$ is equivalent to the group-like monoid $hAut(\Sigma^\infty_{S^4}((P_k)_+))$ of homotopy automorphisms of the parameterized spectrum $\Sigma^\infty_{S^4}((P_k)_+)$. To understand this monoid of homotopy automorphisms, note that given any ring spectrum $R$ and parameterized $R$-line bundle $\mathcal{E}$ over $M$, there is a fibration sequence \begin{equation}\label{based} hAut^b(\mathcal{E}) \to hAut (\mathcal{E}) \xrightarrow{ev} hAut^R(\mathcal{E}_{x_0}) = GL_1(R) \end{equation} where the map $ev$ evaluates an automorphism on the fiber over the basepoint $x_0 \in M$. The fiber is $hAut^b(\mathcal{E})$, the $A_\infty$ group-like monoid of based homotopy automorphisms. This is the subgroup of $hAut (\mathcal{E})$ consisting of those homotopy automorphisms that are equal to the identity on the fiber spectrum at the basepoint $\mathcal{E}_{x_0}$.
Putting these facts together yields a fibration sequence of group-like monoids, \begin{equation}\label{haut} hAut^b(\Sigma^\infty_{S^4}((P_k)_+)) \to GL_1(\mathcal{S}^\bullet_{P_k}(S^4)) \to GL_1(\Sigma^\infty (SU(2)_+)). \end{equation} As was done in \cite{cjgauge}, we observe that since $SU(2) \cong S^3$, the defining diagram (\ref{units}) becomes, in the case of $R = \Sigma^\infty (SU(2)_+)$, $$ \begin{CD} GL_1(\Sigma^\infty (SU(2)_+)) @>>> Q(S^3_+) \\ @VVV @VVcomponents V \\ \pm 1 @>>\hookrightarrow > \mathbb{Z} \end{CD} $$ That is, $ GL_1(\Sigma^\infty (SU(2)_+))$ consists of two path components of the infinite loop space $Q(S^3_+) $ corresponding to the units $\pm 1 \in \mathbb{Z} \cong \pi_0(Q(S^3_+))$. We denote this space by $Q_{\pm 1}(S^3_+)$. Therefore fibration (\ref{haut}) has base space $Q_{\pm 1}(S^3_+)$. We now examine the homotopy type of the fiber, $hAut^b(\Sigma^\infty_{S^4}((P_k)_+))$. By one of the main results of \cite{cjgauge} (Theorem 3), there is an equivalence $$ hAut^b(\Sigma^\infty_{S^4}((P_k)_+)) \xrightarrow{\simeq} \Omega Map^b_k(S^4, BGL_1(\Sigma^\infty (SU(2)_+))) = \Omega^4_k GL_1(\Sigma^\infty (SU(2)_+)) $$ where $Map^b_k$ denotes the path component of the based mapping space corresponding to $k \in \mathbb{Z} = \pi_0(Map^b(S^4, BGL_1(\Sigma^\infty (SU(2)_+))))$. Similarly $\Omega^4_k$ denotes the corresponding path component in $\Omega^4GL_1(\Sigma^\infty (SU(2)_+)).$ Now since $\Omega^4GL_1(\Sigma^\infty (SU(2)_+))$ is a group-like monoid, all of its path components are homotopy equivalent. So we therefore have the following result, which gives a good understanding of the group of units of the manifold string topology spectrum of the principal bundle $SU(2) \to P_k \to S^4$.
\begin{lemma}\label{fibgauge} For any $k$, there is an equivalence of group-like monoids, $ \phi_k : hAut^b(\Sigma^\infty_{S^4}((P_k)_+)) \xrightarrow{\simeq} \Omega^4Q(S^3_+).$ Furthermore there are homotopy fibration sequences of group-like monoids $$ \Omega^4Q(S^3_+) \xrightarrow{\iota_k} GL_1(\mathcal{S}^\bullet_{P_k}(S^4)) \xrightarrow{q_k} Q_{\pm 1}(S^3_+). $$ \end{lemma} \medskip In order to understand the representation $\rho_k : \mathcal{G}^b(P_k) \to GL_1(\mathcal{S}^\bullet_{P_k}(S^4))$ describing the gauge symmetry of the manifold string topology spectrum, we now consider the homotopy type of the based gauge group $\mathcal{G}^b(P_k)$. Again, for $k=1$ this was done in \cite{cjgauge}, and we simply adapt the argument there to apply to all $k$. By a basic result on the topology of gauge groups proved by Atiyah and Bott in \cite{atiyahbott}, we have that $$ \mathcal{G}^b(P_k) \simeq \Omega Map^b_k(S^4, BSU(2)), $$ where, as above, $Map^b_k$ denotes the path component of degree $k$ based maps. This (based) loop space is equivalent to $\Omega \Omega^3_kSU(2) = \Omega \Omega^3_kS^3$. Since $\Omega^3S^3$ is a group-like monoid, all of its path components are equivalent, so we have the following. \begin{lemma} For any $k$, there is an equivalence of group-like monoids, $\psi_k : \mathcal{G}^b(P_k) \xrightarrow{\simeq} \Omega^4S^3$. \end{lemma} \medskip By Proposition 5 of \cite{cjgauge}, one knows that given any principal bundle over a manifold, $G \to P \to M$, the action of the gauge group (and therefore the based gauge group) on the manifold string topology spectrum $\mathcal{S}^\bullet_P(M)$ is defined by the representation given by the stabilization map \begin{align} \mathcal{G}^b(P) &\xrightarrow{\rho} GL_1(\mathcal{S}^\bullet_P(M)) \notag \\ \Omega Map^b_P(M, BG) &\xrightarrow{\sigma} \Omega Map^b_P (M, BGL_1(\Sigma^\infty (G_+))) \notag \end{align} where $\sigma$ is induced by the natural inclusion $G \hookrightarrow GL_1(\Sigma^\infty (G_+))$.
Here $Map^b_P$ denotes the path component of the based mapping space that classifies the bundle $P$. In the case of $SU(2) \to P_k \to S^4$, Lemma \ref{fibgauge} says that the representation $\rho_k : \mathcal{G}^b(P_k) \to GL_1(\mathcal{S}^\bullet_{P_k}(S^4))$ is given by the stabilization map \begin{equation}\label{act} \Omega^4S^3 \xrightarrow{\sigma} \Omega^4Q(S^3_+) \xrightarrow{\iota_k} GL_1(\mathcal{S}^\bullet_{P_k}(S^4)), \end{equation} where $\sigma$ is induced by the map $ u_k : S^3 \to Q(S^3_+) \simeq Q(S^3) \times QS^0$ that sends $S^3$ to a generator of $\pi_3Q(S^3) \cong \mathbb{Z}$, crossed with the basepoint of the component $Q_k(S^0)$. \bigskip Finally, notice that given any compact ring spectrum $R$, the group of units $GL_1(R)$ acts on its Spanier-Whitehead dual $R^\vee$ by the dual action of $GL_1(R)$ on $R$. Given the Spanier-Whitehead duality between the manifold string topology spectrum $\mathcal{S}^\bullet_{P_k}(S^4) \simeq (P_k^{Ad})^{-TS^4}$ and the Lie group string topology spectrum $\mathcal{S}^{P_k}_\bullet(S^4) \simeq (P_k^{Ad})^{-T_{vert}}$, (\ref{act}) describes the action of the based gauge group $\mathcal{G}^b(P_k)$ on the Lie group string topology spectrum as well. \medskip To end this section we point out that Proposition \ref{cCYFrob} and the above analysis of gauge symmetry imply the following. \begin{theorem}\label{frobgauge} Let $G \to P \to M$ be a principal bundle over a closed $n$-manifold $M$, with $G$ a $d$-dimensional compact Lie group. Let $E$ be any ring spectrum with respect to which the compact Calabi-Yau structure on $\mathcal{S}^\bullet_P(M)$ given in Proposition \ref{ptmcy} is oriented. Then the homology $E_*(\mathcal{S}^\bullet_P(M))$ is a Frobenius algebra over the homology of the gauge group, $E_*(\mathcal{G}(P))$.
That is, the following conditions hold: \begin{itemize} \item The homology of the manifold string topology ring spectrum, $E_*(\mathcal{S}^\bullet_P(M))$, carries the structure of an algebra over $E_*(\mathcal{G} (P)).$ \\ \item The homology coalgebra of the Lie group string topology spectrum, $E_*(\mathcal{S}_\bullet^P(M))$, is a module over $E_*(\mathcal{G} (P))$, and \\ \item The duality homomorphism defined by the Frobenius algebra structure induced from the compact Calabi-Yau structure, $$ E_*(\mathcal{S}^\bullet_P(M)) \xrightarrow{\cong} E_{n-d-*}(\mathcal{S}_\bullet^P(M))^* $$ is an isomorphism of $E_*(\mathcal{G} (P))$-modules. \end{itemize} \end{theorem} \section{Twisted, smooth Calabi-Yau ring spectra, Thom ring spectra, and Lagrangian immersions of spheres} \bigskip We now turn to the notion of twisted Calabi-Yau structures for \sl smooth \rm ring spectra. Recall that a \sl smooth \rm ring spectrum $A$ is one that is perfect as an $A$-bimodule. That is, it is perfect as a left $A\wedge A^{op}$-module. Given a smooth ring spectrum $A$, let $A^!$ be its ``bimodule dual". That is, $$ A^! = Rhom_{A \wedge A^{op}} (A, A \wedge A^{op}). $$ Now recall that given any $A$-bimodules $P$ and $Q$, there is a cap product pairing $$ \cap : Rhom_{A \wedge A^{op}}(A, P) \quad \wedge \quad A \wedge^L_{A \wedge A^{op}}Q \quad \longrightarrow \quad P \wedge^L_{A \wedge A^{op}} Q. $$ When $P = A \wedge A^{op}$, one can take a cap product with respect to a map $\rho : \mathbb{S} \to A \wedge^L_{A \wedge A^{op}}Q $ to obtain a map \begin{equation}\label{cap} \cap \rho : A^! \to Q.
\end{equation} \begin{definition} A \bf ``twisted, smooth Calabi-Yau ring spectrum" (twisted $sCY$) \rm of dimension $n$ is a triple $(A, P, \sigma)$, where $A$ is a smooth ring spectrum and $P$ is a smooth $A$-bimodule that has the same $\mathbb{Z}/2$-homology as $A$: $$H_*(P; \mathbb{Z}/2) \cong H_*(A; \mathbb{Z}/2).$$ We refer to $P$ as the ``twisting" bimodule. If $P = A$ we say that $A$ has \sl trivial \rm twisting. $$\sigma : \Sigma^n \mathbb{S} \to THH(A, \, P)$$ is a map of spectra we call the ``$n$-\sl dimensional cotrace map" \rm which has the following duality property: The induced cap product pairing $$ \cap \sigma : A^! = Rhom_{A \wedge A^{op}} (A, A \wedge A^{op}) \longrightarrow \Sigma^{-n} P $$ is an equivalence of $A$-bimodule spectra. \end{definition} \noindent \bf Note: \rm Given a graded module $P$ over a ring $R$, let $P[-n]$ denote the desuspension $\Sigma^{-n}(P)$. \medskip As in the compact case, in most applications a twisted, smooth Calabi-Yau spectrum comes with a homology theory with respect to which the twisting becomes trivialized, or ``oriented". We now make this precise. \begin{definition} Let $(A, P, \sigma)$ be a twisted, $sCY$ ring spectrum of dimension $n$, and let $E$ be a ring spectrum representing a homology theory $E_*$. An \sl ``$E_*$-orientation" \rm of $(A, P, \sigma)$ is a pair $(u, \tilde \sigma_E)$, where $$ u : P \wedge E \xrightarrow{\simeq} A \wedge E $$ is an equivalence of $E$-module spectra as $A$-bimodules. Here $A$ acts trivially on $E$. $\tilde \sigma_E : \Sigma^{n} E \to THH(A , \, A)^{hS^1}\wedge E $ is a map to the homotopy fixed point spectrum of the $S^1$-action induced by the cyclic structure, smashed with $E$, which factors the cotrace map $\sigma$ in $E_*$-homology. That is, the following diagram homotopy commutes: $$ \begin{CD} \Sigma^nE @> \tilde \sigma_E >> THH(A, \, A)^{hS^1} \wedge E \\ @V\sigma \wedge 1 VV @VV j V \\ THH(A, \, P)\wedge E @>\simeq >u_*> THH(A, A)\wedge E.
\end{CD} $$ Here $j$ is the natural inclusion of the homotopy fixed points. \end{definition} \medskip Notice that when $E =Hk$, the Eilenberg-MacLane spectrum for a field $k$, then a twisted, smooth Calabi-Yau spectrum $(A, P, \sigma)$ together with an $Hk$-orientation $(u, \tilde \sigma_{Hk})$ defines a smooth Calabi-Yau algebra structure on the singular chains with $k$-coefficients, $C_*(A; k)$, as defined in \cite{kv} and in \cite{cg}. \subsection{Thom spectra of virtual bundles over the loop space of a manifold} We now consider important examples of twisted, smooth Calabi-Yau spectra. These are Thom ring spectra of virtual bundles over $\Omega M$, for $M$ a closed manifold. We begin by studying the Thom spectrum of the trivial bundle over $\Omega M$, namely the suspension spectrum $\Sigma^\infty (\Omega M_+)$. The following generalizes the chain complex analogue proven by the first author and Ganatra in \cite{cg}. \medskip \begin{theorem}\label{OmegaM} Let $M$ be a closed manifold of dimension $n$. Then the suspension spectrum of its based loop space, $\Sigma^\infty (\Omega M_+)$ can be given the structure of a twisted, smooth Calabi-Yau ring spectrum of dimension $n$. \end{theorem} \begin{proof} In order to give $ A = \Sigma^\infty (\Omega M_+)$ a twisted $sCY$ structure, we need to define a twisting bimodule and a cotrace map. Consider the virtual bundle $-TM$ over $M$. The associated virtual spherical fibration is classified by a map $B_{-\tau_M} : M \to BGL_1(\mathbb{S})$. By looping this map and applying suspension spectra we get a map of ring spectra \begin{equation}\label{tau} -\tau_M : A = \Sigma^\infty (\Omega M_+) \to \Sigma^\infty (GL_1(\mathbb{S})_+). \end{equation} This defines a $\Sigma^\infty (\Omega M_+)$-bimodule structure on the sphere spectrum.
We let $\bs_{-\tau_M}$ be the sphere spectrum with this bimodule structure, and we define $P = \bs_{-\tau_M} \wedge \Sigma^\infty (\Omega M_+) = \bs_{-\tau_M} \wedge A$ to be the induced bimodule. Here $\bs_{-\tau_M} \wedge \Sigma^\infty (\Omega M_+)$ is given the diagonal $A$-bimodule structure. $P$ will be the twisting bimodule. To describe the $n$-dimensional cotrace map, we first need the following observation. Replace $\Omega M$ by its Kan loop group, which by abuse of notation we still write as $\Omega M$. Consider the adjoint action of $\Omega M$ on $\Omega M \times \Omega M$ defined by $$ g \cdot (h_1, h_2) = (gh_1, h_2g^{-1}). $$ If $E$ is the homotopy orbit space of this action, $E = pt \times^L_{\Omega M} (\Omega M \times \Omega M)^{Ad}$, then we have a homotopy fiber sequence (i.e.\ each successive three terms form a homotopy fibration) \begin{equation}\label{Delta} \Omega M \xrightarrow{\tilde \Delta} \Omega M \times \Omega M \to E \xrightarrow{\pi} M, \end{equation} where $\tilde \Delta (g) = (g, g^{-1}).$ From this sequence it immediately follows that $E \simeq \Omega M$ and $\pi : E \to M$ is null homotopic. The Thom spectrum of the pull-back virtual bundle $\pi^*(-TM)$, which we denote by $E^{-TM}$, is given by $$ \Sigma^{-n} \mathbb{S}_{-\tau_M} \wedge^L_{\Omega M} \Sigma^\infty (\Omega M_+) \wedge \Sigma^\infty (\Omega M_+) = \Sigma^{-n} \ \mathbb{S}_{-\tau_M}\wedge^L_A (A\wedge A^{op})^{Ad}. $$ However, since $\pi : E \to M$ is null homotopic, $\pi^*(-TM)$ is trivial, and there is an equivalence \begin{equation}\label{etm} h : E^{-TM} = \Sigma^{-n} \ \mathbb{S}_{-\tau_M}\wedge^L_A (A\wedge A^{op})^{Ad} \xrightarrow{\simeq} \Sigma^{-n} \mathbb{S}_{-\tau_M} \wedge A = P[-n]. \end{equation} Furthermore one can check that this equivalence can be taken to be one of $A$-bimodules.
We therefore have an equivalence $$ \begin{CD} A \wedge^L_{A\wedge A^{op}} (\mathbb{S}_{-\tau_M}\wedge_A (A\wedge A^{op})^{Ad}) @>h >\simeq > A \wedge^L_{A\wedge A^{op}} (\mathbb{S}_{-\tau_M}\wedge A) = THH(A, \, P). \end{CD} $$ Now the map $\tilde \Delta : \Omega M \to \Omega M \times \Omega M$ defines a ring map on the level of suspension spectra, which by abuse of notation we still call $\tilde \Delta$, $$ \tilde \Delta : A \to A \wedge A^{op}. $$ We then get a change-of-rings equivalence, $$ \phi: A^{Ad} \wedge^L_A \mathbb{S}_{-\tau_M} \xrightarrow{\simeq} A \wedge^L_{A\wedge A^{op}} ((A\wedge A^{op})^{Ad}\wedge_A^L \mathbb{S}_{-\tau_M} ). $$ Consider the unit map $u : \mathbb{S} \to \Sigma^\infty (\Omega M_+) = A$. This defines a map $$ u : \mathbb{S} \wedge^L_A \mathbb{S}_{-\tau_M} \to A^{Ad} \wedge^L_A \mathbb{S}_{-\tau_M}. $$ Now $ \mathbb{S} \wedge^L_A \mathbb{S}_{-\tau_M} $ is the Thom spectrum $\Sigma^n(M^{-TM})$. Thus there is a Pontrjagin-Thom map $\gamma : \Sigma^n \mathbb{S} \to \Sigma^n(M^{-TM}) = \mathbb{S} \wedge^L_A \mathbb{S}_{-\tau_M}$. This can be viewed as the $n$-fold suspension of the unit map of the Spanier-Whitehead dual, $$\mathbb{S} \to M^\vee \simeq M^{-TM}.$$ \medskip We can now define the $n$-dimensional cotrace map $\sigma : \Sigma^n \mathbb{S} \to THH(A, \, P) $ to be the composition \begin{align} \sigma : \Sigma^n \mathbb{S} \xrightarrow{\gamma} \mathbb{S} \wedge^L_A \mathbb{S}_{-\tau_M} \xrightarrow{u} A^{Ad} \wedge^L_A \mathbb{S}_{-\tau_M} &\xrightarrow{\phi}A \wedge^L_{A\wedge A^{op}} ((A\wedge A^{op})^{Ad}\wedge_A^L \mathbb{S}_{-\tau_M} ) \notag \\ &\xrightarrow{h} A \wedge^L_{A\wedge A^{op}} (\mathbb{S}_{-\tau_M}\wedge A) = THH(A, \, P). \end{align} \medskip To show $\sigma$ is a valid cotrace map, we need to check that it satisfies the required duality condition. Namely, we need to show that the cap product, \begin{equation}\label{cap} \cap \sigma : A^!
= Rhom_{A \wedge A^{op}}(A, \, A \wedge A^{op}) \longrightarrow \Sigma^{-n}\mathbb{S}_{-\tau_M}\wedge A = P[-n] \end{equation} is an equivalence of $A$-bimodules. This map was constructed to be an $A$-bimodule map, so it suffices to check that it is an ordinary weak equivalence. In order to do this, we study the homotopy type of $Rhom_{A \wedge A^{op}}(A, \, A \wedge A^{op})$, where $A = \Sigma^\infty (\Omega M_+)$. Notice that since $A$ is a connective Hopf algebra (in the weak sense), we have an equivalence $$ Rhom_{A \wedge A^{op}}(A, \, A \wedge A^{op}) \simeq Rhom_A (\mathbb{S}, \, (A \wedge A^{op})^{Ad}). $$ Consider again the homotopy fibration $\Omega M \times \Omega M \to E \to M$, and its fiberwise suspension spectrum. This is the parameterized spectrum $$ \Sigma^\infty (\Omega M_+) \wedge \Sigma^\infty (\Omega M_+) \to \Sigma^\infty_M(E_+) \to M. $$ The spectrum of sections of this parameterized spectrum is the spectrum whose homotopy type we are trying to compute: $$ \Gamma_M(\Sigma^\infty_M(E_+)) \simeq Rhom_A (\mathbb{S}, \, (A \wedge A^{op})^{Ad}) \simeq Rhom_{A \wedge A^{op}}(A, \, A \wedge A^{op}). $$ If we let $\mathcal{E}^\bullet$ be the twisted equivariant cohomology theory represented by the parameterized spectrum $\Sigma^\infty_M(E_+)$, then what we are trying to compute is $\mathcal{E}^\bullet (M)$. But by the twisted Poincar\'e duality theorem of Klein \cite{klein}, as described in \cite{CohenKlein}, this twisted cohomology spectrum is given by the Thom spectrum $$ \mathcal{E}^\bullet (M) = \Gamma_M(\Sigma^\infty_M(E_+)) \simeq E^{-TM}. $$ Now as seen in (\ref{etm}) above, there is an equivalence $$ h : E^{-TM} \xrightarrow{\simeq} \Sigma^{-n}\mathbb{S}_{-\tau_M} \wedge A. $$ Putting these together gives an equivalence, $$ A^! = Rhom_{A \wedge A^{op}}(A, \, A \wedge A^{op}) = \mathcal{E}^\bullet (M) \simeq E^{-TM} \xrightarrow{h} \Sigma^{-n}\mathbb{S}_{-\tau_M} \wedge A = P[-n].
$$ We leave it to the reader to check that the cap product map $$ \cap \sigma : A^! \to P[-n] $$ induces such an equivalence. Given this, the proof that $(\Sigma^\infty (\Omega M_+), P, \sigma)$ is a twisted, smooth Calabi-Yau ring spectrum is complete. \end{proof} Orientations on this twisted, smooth Calabi-Yau ring spectrum will be addressed in a more general context, in the setting of Thom spectra of virtual bundles over $\Omega M$. \medskip We now generalize the above to the setting of Thom ring spectra. Namely, let $\Omega f: \Omega M \to BGL_1(\mathbb{S})$ be a loop map. That is, it is obtained by applying the based loop functor to a map $f : M \to B(BGL_1(\mathbb{S}))$. Let $\Omega M ^{\Omega f}$ denote the Thom spectrum of $\Omega f$. By a theorem of Lewis, $\Omega M ^{\Omega f}$ is a ring spectrum. Let $E$ be any commutative ($E_\infty$) ring spectrum. As usual, we say that a virtual bundle $\omega : X \to BGL_1(\mathbb{S})$ is $E$-orientable if there is a ``Thom class" $\tau : X^\omega \to E$ so that the composition \begin{equation}\label{diag} \theta_\tau : X^\omega \wedge E \xrightarrow{\Delta \wedge 1} X_+ \wedge X^\omega \wedge E \xrightarrow{1 \wedge \tau \wedge 1} X_+ \wedge E \wedge E \xrightarrow{1 \wedge mult.} X_+ \wedge E \end{equation} is an equivalence (the ``Thom isomorphism"). Here $ \Delta : X^\omega \to X_+ \wedge X^\omega$ is the map of Thom spectra induced by the diagonal map $X \to X \times X$. Again, as usual, we define an $E$-orientation of a manifold $M$ to be an $E$-orientation of its tangent bundle $\tau_M$ or equivalently of $-\tau_M$. An $E$-orientation of a loop map $\Omega f : \Omega Y \to BGL_1(\mathbb{S})$ has the additional requirement that the Thom class $\tau : (\Omega Y)^{\Omega f} \to E$ be a map of ring spectra. In this case notice that the orientation equivalence $\theta_\tau$ in (\ref{diag}) is an equivalence of ring spectra.
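\medskip For example, every virtual bundle $\omega : X \to BGL_1(\mathbb{S})$ is $H\mathbb{Z}/2$-orientable, with Thom class the mod $2$ Thom class $\tau : X^\omega \to H\mathbb{Z}/2$. In this case the equivalence $\theta_\tau$ of (\ref{diag}) recovers, on homotopy groups, the classical Thom isomorphism $$ H_*(X^\omega ; \mathbb{Z}/2) \cong H_*(X ; \mathbb{Z}/2). $$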
\begin{theorem}\label{prop-thom-scy} $A= \Omega M ^{\Omega f}$ naturally has the structure of a twisted sCY ring spectrum of dimension $n$. Furthermore, suppose $E$ is a commutative ring spectrum. Then an $E$-orientation $\tau : M^{-TM} \to E$ of $M$ and an $E$-orientation $A = \Omega M ^{\Omega f} \to E$ together induce an $E$-orientation on the $sCY$ structure on $A$. \end{theorem} \noindent {\bf Remark. } This theorem readily generalizes to the setting of generalized Thom spectra of maps to $BGL_1(R)$, where $R$ is a commutative ring spectrum. \medskip \begin{proof} Let $\bs_{-\tau_M}$ denote the sphere spectrum viewed as a $\Sigma^\infty (\Omega M_+)$-module as above. We first observe that by the twisted Poincar\'e duality theorem of Klein \cite{klein} as described in \cite{CohenKlein} (see also \cite{DGI} and \cite{malm}), there is an equivalence of $\Sigma^\infty (\Omega M_+)$-modules \begin{equation} \label{kleinPD} Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, \Sigma^\infty(\Omega M _+)) \simeq \Sigma^{-n} \bs_{-\tau_M}. \end{equation} Here $\Sigma^\infty (\Omega M_+)$ acts on itself by left multiplication. The reason this equivalence holds is the following. The left-hand side describes the section spectrum of the fiberwise suspension spectrum of the path-loop fibration, $\Omega M \to P(M) \to M$. By Klein's theorem on Poincar\'e duality for parametrized spectra, the section spectrum (i.e.\ the twisted cohomology spectrum) associated to this parametrized spectrum is equivalent to a twisting by $-TM$ of the homology spectrum, $ \Sigma^\infty (\Omega M_+) \wedge^L_{\Sigma^\infty (\Omega M_+)} \Sigma^{-n}\bs_{-\tau_M} \simeq \Sigma^{-n} \bs_{-\tau_M}. $ This equivalence is given by cap product with the Pontrjagin-Thom class $t_M: \mathbb{S} \to M^{-TM} = \Sigma^{-n} (\mathbb{S} \wedge^L _{\Sigma^\infty (\Omega M_+)} \bs_{-\tau_M})$.
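\medskip As a simple consistency check on (\ref{kleinPD}), take $M = S^1$. Then $\Omega S^1 \simeq \mathbb{Z}$, so $\Sigma^\infty (\Omega S^1_+)$ is the spherical group ring $\mathbb{S}[\mathbb{Z}]$, and since $TS^1$ is trivial, $\bs_{-\tau_{S^1}}$ is the sphere spectrum with trivial action. Resolving $\mathbb{S}$ by the cofibration sequence of $\mathbb{S}[\mathbb{Z}]$-modules $\mathbb{S}[\mathbb{Z}] \xrightarrow{t-1} \mathbb{S}[\mathbb{Z}] \to \mathbb{S}$, one computes directly that $$ Rhom_{\mathbb{S}[\mathbb{Z}]} (\mathbb{S}, \, \mathbb{S}[\mathbb{Z}]) \simeq \Sigma^{-1} \mathbb{S}, $$ the spectrum level analogue of the classical computation $Ext^*_{\mathbb{Z}[t, t^{-1}]}(\mathbb{Z}, \mathbb{Z}[t, t^{-1}]) \cong \mathbb{Z}$, concentrated in degree $1$. This agrees with (\ref{kleinPD}), since here $n = 1$ and the twisting is trivial.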
\medskip Consider now the map of ring spectra $\Sigma^\infty (\Omega M_+) \to A \wedge A^{op}$, induced on Thom spectra by the map $\tilde{\Delta}: \Omega M \to \Omega M \times \Omega M$ of (\ref{Delta}). The source of this map is indeed $\Sigma^\infty(\Omega M_+)$, as the composite $$\xymatrix{ \Omega M \ar[r]^-{\tilde{\Delta}} & \Omega M \times \Omega M \ar[r]^-{\Omega f \times \Omega f} & BGL_1(\mathbb{S}) \times BGL_1(\mathbb{S}) \ar[r]^-{multiply} & BGL_1(\mathbb{S}) }$$ is null homotopic. The following is a result of the second author (Theorem 5.1 of \cite{klang}). \begin{theorem}\label{thm5.1} Under this action of $\Sigma^\infty(\Omega M _+)$ on $A \wedge A^{op}$, there is an equivalence of $A \wedge A^{op}$-modules \begin{equation} A \simeq \mathbb{S} \wedge^L _{\Sigma^\infty( \Omega M_+)} (A \wedge A^{op}) \end{equation} Here the module structure on the right hand side is by right action on $A \wedge A^{op}$. \end{theorem} Notice that $\mathbb{S}$ is a perfect $\Sigma^\infty(\Omega M_+)$-module since $M$ is assumed to be compact. That is, $\mathbb{S}$ is equivalent to a retract of a finite $\Sigma^\infty(\Omega M_+)$-module. Applying $(-) \wedge^L _{\Sigma^\infty(\Omega M_+)}(A \wedge A^{op})$, we can conclude from Theorem \ref{thm5.1} that $A$ is a retract of a finite $A \wedge A^{op}$-module, hence a perfect $A \wedge A^{op}$-module. That is, $A$ is a smooth ring spectrum. Also because $\mathbb{S}$ is perfect as a $\Sigma^\infty(\Omega M_+)$-module, we can apply $(-) \wedge^L _{\Sigma^\infty ( \Omega M_+)} (A \wedge A^{op})$ to both sides of the equivalence (\ref{kleinPD}) to obtain \begin{equation}\label{kleq} Rhom_{\Sigma^\infty ( \Omega M_+)}(\mathbb{S}, A \wedge A^{op}) \simeq \Sigma^{-n} (\bs_{-\tau_M} \wedge^L _{\Sigma^\infty (\Omega M_+)} A \wedge A^{op}). \end{equation} We now take our twisting bimodule to be $P = \bs_{-\tau_M} \wedge^L _{\Sigma^\infty ( \Omega M_+)} A \wedge A^{op}$.
Notice that if the original map $f$ is null homotopic, this agrees with the twisting bimodule given in the proof of Theorem \ref{OmegaM}. Furthermore, by the Thom isomorphism for $-TM$, $$H\mathbb{Z}/2 \wedge P \simeq H\mathbb{Z}/2 \wedge A.$$ This is an equivalence of $A$-bimodules because the equivalence in Theorem \ref{thm5.1} is. Moreover, by the above theorem, $ Rhom_{A \wedge A^{op}} (A, A \wedge A^{op}) \simeq Rhom_{A \wedge A^{op}}((A \wedge A^{op}) \wedge^L_{\Sigma^\infty (\Omega M_+)}\mathbb{S}, \, A \wedge A^{op}) \simeq Rhom_{\Sigma^\infty (\Omega M_+)}(\mathbb{S}, A \wedge A^{op}).$ So the equivalence (\ref{kleq}) becomes \begin{equation}\label{bimoddual}A^! = Rhom_{A \wedge A^{op}}(A, A \wedge A^{op}) \simeq \Sigma^{-n} P. \end{equation} Our goal is to show that this equivalence is given by taking the cap product with an appropriate $n$-dimensional cotrace map $\sigma : \Sigma^n\mathbb{S} \to THH(A,P)$, which we now define. \medskip Since $A$ is smooth, \begin{align}\label{thhap} THH(A, P) &= \Sigma^n Rhom_{A \wedge A^{op}}(A, A \wedge A^{op}) \wedge^L_{A \wedge A^{op}} A \simeq \Sigma^nRhom_{A \wedge A^{op}} (A, A) \\ &\simeq \Sigma^nTHH^{\bullet}(A, A) \notag \end{align} where the last quantity is the topological Hochschild cohomology. The inverse equivalence also has a very natural description. Consider the $\Sigma^\infty (\Omega M_+)$-module structure on $A = (\Omega M)^{\Omega f}$ given by the generalized conjugation action, defined to be the pull-back of the $A\wedge A^{op}$-action on $A$ to $\Sigma^\infty (\Omega M_+)$ via the ring map $\Sigma^\infty (\Omega M_+) \to A \wedge A^{op}$ defined above. This action was studied in detail by the second author in \cite{klang}.
In \cite{klang} the second author showed that there are equivalences, \begin{align}\label{ac} THH (A, A) &\simeq \mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} A^c \\ THH^{\bullet}(A, A) &\simeq Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, A^c) \notag \end{align} where $A^c$ denotes the algebra $A$ with this generalized conjugation action of $\Sigma^\infty(\Omega M_+)$. Recall the cap product operation \begin{equation} \left(Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, A^c) \right) \wedge \left( \mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} \bs_{-\tau_M} \right) \xrightarrow{\cap} \bs_{-\tau_M} \wedge^L_{\Sigma^\infty(\Omega M_+)} A^c \simeq THH(A, P). \end{equation} Now $\mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} \bs_{-\tau_M} = \Sigma^nM^{-TM}$, so there is a Pontrjagin-Thom map $S^n \xrightarrow{t_M} \Sigma^nM^{-TM}$, which corresponds to the unit $\mathbb{S} \to M^\vee$ under the Atiyah equivalence between the Spanier-Whitehead dual $M^\vee$ of the manifold and its Thom spectrum $M^{-TM}$. We then get an induced equivalence \begin{align}\label{inverse} &THH^{\bullet}(A, A) \wedge S^n \simeq \left(Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, A^c) \right) \wedge S^n \xrightarrow{1 \wedge t_M} \\ & \left(Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, A^c) \right) \wedge \left( \mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} \bs_{-\tau_M} \right) \xrightarrow{\cap} \bs_{-\tau_M} \wedge^L_{\Sigma^\infty(\Omega M_+)} A^c \simeq THH(A, P). \notag \end{align} This is the inverse to the equivalence above (\ref{thhap}). Now $THH^\bullet(A,A)$ is an $E_2$-ring spectrum. Let $\iota : \mathbb{S} \to THH^\bullet(A,A)$ be the unit. Alternatively, recall that $THH^\bullet(A,A)$ is the spectrum of $A$-bimodule maps $A \to A$; $\iota$ corresponds to the identity map $id: A \to A$, a characterization which does not rely on the multiplicative structure on $THH^\bullet(A,A)$.
The map $\iota$ allows us to define our $n$-dimensional cotrace map as \begin{equation}\label{sigma} \sigma : \Sigma^n\mathbb{S} \xrightarrow{\Sigma^n\iota } \Sigma^nTHH^\bullet (A,A) \simeq THH(A,P). \end{equation} Clearly, taking the cap product $$ \cap \sigma : A^! = Rhom_{A \wedge A^{op}}(A, A \wedge A^{op}) \to \Sigma^{-n} P $$ defines the equivalence given in (\ref{bimoddual}). This then proves that $(A, P, \sigma)$ is a twisted, smooth Calabi-Yau ring spectrum. \medskip Let $E$ be any commutative ring spectrum satisfying the hypotheses of the theorem. An $E_*$-orientation of $M$ is given by a Thom class $\tau : M^{-TM} \to E$ and induces an equivalence $\theta_\tau : \Sigma^n M^{-TM} \wedge E \xrightarrow{\simeq} M_+ \wedge E$ as in (\ref{diag}). This can also be written as an equivalence $$ \theta_\tau : (\bs_{-\tau_M} \wedge^L_{\Sigma^\infty (\Omega M_+)} \mathbb{S} ) \wedge E \xrightarrow{\simeq} (\mathbb{S} \wedge^L_{\Sigma^\infty (\Omega M_+)} \mathbb{S}) \wedge E. $$ Given such an orientation $\tau : M^{-TM} \to E$ and an orientation $\nu : A = (\Omega M)^{\Omega f} \to E$ we describe a resulting $E_*$-orientation $(u, \tilde \sigma_E)$ of the $sCY$ structure on $A = (\Omega M)^{\Omega f}$. \medskip Notice that the $E_*$-orientation $\tau$ of $M$ induces an equivalence $$ \theta_{\tau, R} : (\bs_{-\tau_M} \wedge^L_{\Sigma^\infty (\Omega M_+)} R)\wedge E \xrightarrow{\simeq} (\mathbb{S} \wedge^L_{\Sigma^\infty (\Omega M_+)} R)\wedge E $$ for any left $\Sigma^\infty (\Omega M_+)$-module $R$. Now take $R = A \wedge A^{op}$ with the $\Sigma^\infty (\Omega M_+)$-action defined by the ring homomorphism $\Sigma^\infty (\Omega M_+) \to A \wedge A^{op}$ described above. We then define \begin{equation}\label{udef} u = \theta_{\tau, A \wedge A^{op}} : P \wedge E = (\bs_{-\tau_M} \wedge^L_{\Sigma^\infty (\Omega M_+)} A \wedge A^{op})\wedge E \xrightarrow{\simeq} (\mathbb{S} \wedge^L_{\Sigma^\infty (\Omega M_+)} A \wedge A^{op})\wedge E = A\wedge E.
\end{equation} \medskip We now define the map $\tilde \sigma_E : \mathbb{S} \wedge E \to THH (A, A)^{hS^1} \wedge E$ needed for the orientation of the $sCY$-structure. Again, let $\tau : M^{-TM} \to E$ and $\nu : A = (\Omega M)^{\Omega f} \to E$ be orientations of $M$ and $\Omega f$ respectively. Consider the following homotopy commutative diagram. $$ \begin{CD} \Sigma^n \mathbb{S} \wedge E @>\iota >> \Sigma^n THH^\bullet(A, A) \wedge E @>\simeq >> THH(A, P) \wedge E \\ &&& & @V\tau V\simeq V \\ &&@V= VV THH (A, A) \wedge E \\ & & & & @VV= V \\ @A= AA \Sigma^n Rhom_{\Sigma^\infty(\Omega M_+)}(\mathbb{S}, A^c) \wedge E @>\simeq >> \left(\mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} A^c\right)\wedge E \\ & & @V \nu V \simeq V @V \simeq V \nu V \\ & &\Sigma^n Rhom_{\Sigma^\infty(\Omega M_+)}(\mathbb{S}, (\Sigma^\infty(\Omega M_+))^c) \wedge E @>\simeq >> \left(\mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} (\Sigma^\infty(\Omega M_+))^c\right)\wedge E \\ && @AA\iota \wedge 1 A @AA\iota \wedge 1 A \\ \Sigma^n \mathbb{S} \wedge E @>\iota >> \Sigma^n Rhom_{\Sigma^\infty(\Omega M_+)}(\mathbb{S}, \mathbb{S}) \wedge E @>>\simeq > \left(\mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} \mathbb{S}\right)\wedge E \\ & & @A= AA @AA = A \\ & & \Sigma^n M^\vee \wedge E @>>\simeq > M_+\wedge E \end{CD} $$ A few comments about this diagram: \begin{enumerate} \item The maps in this diagram that are labelled by a $\tau$ or a $\nu$ are induced by the respective orientations. The maps labeled by $\iota$ are induced by the units of the respective ring spectra. \item The reason the left side of this diagram homotopy commutes is because the vertical maps induced by both $\nu$ and $\iota$ are all maps of ring spectra. As pointed out above this is because the orientation map $\nu : A \to E$ is assumed to be a ring map. \item The bottom horizontal composition $\Sigma^n \mathbb{S} \wedge E \to M_+ \wedge E$ is the $E_*$-fundamental class. 
The right side of this diagram homotopy commutes by the naturality of the Atiyah-Klein equivalences. \item The homotopy orbit spectrum $\mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} (\Sigma^\infty(\Omega M_+))^c$ is equivalent to $\Sigma^\infty (LM_+)$, and the lower right hand vertical map $M_+ \wedge E \to \left(\mathbb{S} \wedge^L_{\Sigma^\infty(\Omega M_+)} (\Sigma^\infty(\Omega M_+))^c\right)\wedge E \simeq \Sigma^\infty (LM_+) \wedge E$ is homotopic to the inclusion of the constant loops, and therefore factors through the homotopy fixed points of the circle action, $M_+ \wedge E \to \Sigma^\infty (LM_+)^{hS^1} \wedge E$. \item The top horizontal composition is the cotrace map $\sigma \wedge 1 : \Sigma^n \mathbb{S} \wedge E \to THH(A, P)\wedge E$. \end{enumerate} We therefore define the map $\tilde \sigma_E: \Sigma^n \mathbb{S} \wedge E \to THH(A, A)^{hS^1} \wedge E$ to be the composition $$ \begin{CD} \tilde \sigma_E : \Sigma^n \mathbb{S} \wedge E @>[M]>> M_+\wedge E @>>> \Sigma^\infty (LM_+)^{hS^1} \wedge E \\ @>\simeq >> THH(\Sigma^\infty(\Omega M_+), \Sigma^\infty(\Omega M_+))^{hS^1} \wedge E @<\simeq <\nu < THH(A, A)^{hS^1}\wedge E. \end{CD} $$ By comments 3 and 4 above, the composite $ \Sigma^n \mathbb{S} \wedge E \xrightarrow{\tilde \sigma_E} THH(A, A)^{hS^1} \wedge E \hookrightarrow THH(A, A) \wedge E$ is obtained by starting at the lower left of the diagram, going horizontally to the lower right, and then going vertically up to $THH(A, A)\wedge E$. By comment 1 above and the commutativity of the diagram, we may conclude that the following diagram homotopy commutes: $$ \begin{CD} \Sigma^n \mathbb{S} \wedge E @>\sigma \wedge 1 >> THH(A, P) \wedge E @>\tau >\simeq > THH(A, A) \wedge E \\ @A= AA && @AAA \\ \Sigma^n \mathbb{S} \wedge E & @>>\tilde \sigma_E > & THH(A, A)^{hS^1}\wedge E. \end{CD} $$ This is what was required to show that $(u, \tilde \sigma_E)$ defines an orientation on the twisted Calabi-Yau structure on $A$.
\end{proof} \medskip \medskip \noindent {\bf Example.} Take $M = SU(m)/SO(m)$, and $f: M \to SU/SO \simeq B^2 O$ the natural map. Then $$\Omega f: \Omega (SU(m)/SO(m)) \to BO$$ is a loop map, and by Theorem \ref{prop-thom-scy}, $A = (\Omega M)^{\Omega f}$ has the structure of a twisted sCY ring spectrum. Note that $\Omega f: \Omega (SU(m)/SO(m)) \to BO$ induces a map of Thom spectra $A \to MO$. $\Omega f$ is an equivalence in a range, and therefore induces an equivalence in a range between $A$ and $MO$. Since $\pi_*(MO)$ is 2-torsion, $\pi_*(A)$ is 2-torsion in a range; equivalently, $A$ is highly connected when localized at an odd prime. This is not the case if $f$ is taken to be nullhomotopic, in which case the $sCY$ ring spectrum is $\Sigma^\infty (\Omega (SU(m)/SO(m))_+)$. \subsection{The image of $J$ and Lagrangian immersions of spheres} In this subsection we study in more detail the twisted smooth Calabi-Yau structure on Thom spectra of virtual bundles over spheres. These bundles arise naturally, from the homotopy-theoretic perspective, from the image of the $J$-homomorphism and, from the perspective of symplectic topology, from Lagrangian immersions of odd-dimensional spheres into their cotangent bundles. As discussed in \cite{ak}, Gromov's $h$-principle implies that the homotopy group $\pi_n(U)$ classifies Lagrangian immersions of $S^{n}$ into its cotangent bundle, $T^*S^{n}$, which are in the homotopy class of the zero section $S^n \hookrightarrow T^*(S^n)$. Assume that $n > 1$ and let $\alpha : S^n \to U$ represent such a homotopy class. Since $\pi_n(SU) \cong \pi_n(U)$, $\alpha$ lifts to a unique (up to homotopy) map that by abuse of notation we still call $\alpha : S^n \to SU$. Taking loop spaces we get a map of $A_\infty$ group-like monoids, $$ \Omega \alpha : \Omega S^n \to \Omega SU \simeq BU. $$ The last equivalence is given by Bott periodicity. By forgetting the almost complex structure we get an $A_\infty$-map $$ \Omega \alpha : \Omega S^n \to BO.
$$ By Theorem \ref{prop-thom-scy} above, the Thom spectrum $(\Omega S^n)^{\Omega \alpha}$ has the structure of a twisted, smooth Calabi-Yau ring spectrum. We begin with the following observation. \begin{lemma}\label{sorient} The twisted sCY ring spectrum $(\Omega S^n)^{\Omega \alpha}$ has a natural orientation with respect to stable homotopy theory (that is, the generalized homology theory $\mathbb{S}_*$ represented by the sphere spectrum $\mathbb{S}$). Furthermore, this induces an orientation with respect to any generalized homology theory $E_*$ represented by a commutative ring spectrum $E$. \end{lemma} \begin{proof} First note that $S^n$ has a canonical stable framing. That is, it has a canonical $\mathbb{S}$-orientation. This induces an orientation with respect to any theory $E_*$. Furthermore, by the construction of the twisted sCY structure in the proof of Theorem \ref{prop-thom-scy}, the twisting bimodule of this structure is $$ P = \bs_{-\tau_M} \wedge^L_{\Sigma^\infty (\Omega S^n_+)} A \wedge A^{op} $$ where $A = (\Omega S^n)^{\Omega \alpha}$. Now the $\mathbb{S}$-framing of $S^n$ defines an equivalence of bimodules, $\mathbb{S} \simeq \bs_{-\tau_M}$. Thus $$ P \simeq \mathbb{S}\wedge^L_{\Sigma^\infty (\Omega S^n_+)} A \wedge A^{op} $$ but by (\ref{thm5.1}) this last spectrum is equivalent to $A$ as $A$-bimodules. Hence we have an equivalence $$ u : P \wedge \mathbb{S} \xrightarrow{\simeq} A\wedge \mathbb{S}. $$ Using this identification, the cotrace element can be viewed as a class $\sigma \in THH(A, A)$. To complete the construction of the $\mathbb{S}_*$-orientation we must show that $\sigma$ lifts to an element in the homotopy fixed points, $THH(A, A)^{hS^1}$.
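For concreteness, we recall the standard source of this framing (nothing here is special to our situation): the embedding $S^n \subset \mathbb{R}^{n+1}$ has trivial normal bundle, so
$$
TS^n \oplus \underline{\mathbb{R}} \, \cong \, \underline{\mathbb{R}}^{n+1}, \qquad \text{and hence} \qquad -\tau_{S^n} = 0 \in \widetilde{KO}^0(S^n).
$$
It is this stable trivialization that gives the equivalence of bimodules $\mathbb{S} \simeq \bs_{-\tau_M}$ used above.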
By the main result of \cite{BCS}, the topological Hochschild homology of $A = (\Omega S^n)^{\Omega \alpha}$ is equivalent as a $\Sigma^\infty(S^1_+)$-module to the Thom spectrum of a virtual bundle over the free loop space $LS^n$: \medskip \begin{proposition}\label{BCS}\cite{BCS} $$ THH ((\Omega S^n)^{\Omega \alpha}) \simeq L(S^n)^{\ell (\alpha)},$$ where $\ell (\alpha )$ is the virtual bundle classified by the map $$ \ell (\alpha ) : L(S^n) \xrightarrow{L\alpha} LSU \simeq SU \times \Omega SU \xrightarrow{project} \Omega SU \simeq BU \to BO. $$ \end{proposition} In this composition, the equivalence $LSU \simeq SU \times \Omega SU$ is given by the trivialization of the fibration of infinite loop spaces $\Omega SU \to LSU \to SU$ defined by the canonical section $SU \to LSU$ given by the inclusion of constant loops, and the infinite loop structure of $LSU$. \medskip \noindent {\bf Remark. } In \cite{BCS} the map from $LSU$ to $BO$ was described by a composition $$ LSU \to L(SU/SO) \simeq SU/SO \times \Omega (SU/SO) \xrightarrow{\eta \times 1} \Omega (SU/SO) \times \Omega (SU/SO) \xrightarrow{multiply} \Omega (SU/SO) \simeq BO $$ where $\eta : SU/SO \to \Omega (SU/SO) \simeq BO$ was induced by the Hopf map $S^3 \to S^2$. However, the map $\eta$ becomes trivial when composed with the projection $SU \to SU/SO$, which allows the description of $\ell (\alpha )$ given in the proposition. \medskip Notice that the restriction to the constant loops, $$ S^n \xrightarrow{\iota} LS^n \xrightarrow{\ell (\alpha)} BO $$ is the constant map. That is, this virtual bundle is trivialized when restricted to the constant loops. But since constant loops are $S^1$-fixed points of $LS^n$, the inclusion naturally lifts to the homotopy fixed points, $$ \Sigma^\infty (S^n_+) \xrightarrow{\iota } (L(S^n)^{\ell (\alpha)})^{hS^1}.
$$ Composing with the equivalence given by Proposition \ref{BCS}, this defines a map $\tilde \sigma : \Sigma^\infty (S^n) \to THH(A, A)^{hS^1}$ that lifts the cotrace element $\sigma \in THH(A, A)$. This completes the construction of the $\mathbb{S}_*$-orientation of the sCY structure on $A = (\Omega S^n)^{\Omega \alpha}$. Given any other generalized homology theory $E_*$ represented by a commutative ring spectrum $E$, the $\mathbb{S}_*$-orientation of $(\Omega S^n)^{\Omega \alpha}$ induces an $E_*$-orientation by use of the unit $\mathbb{S} \to E$. This completes the proof of Lemma \ref{sorient}. \end{proof} The following recasts the results of \cite{ak} to show that topological Hochschild homology can be used as an obstruction to deforming a Lagrangian immersion of a sphere to a Lagrangian embedding. \medskip \begin{theorem}\label{lagTHH} Let $\alpha : S^n \to U$ represent a Lagrangian immersion $\phi_\alpha : S^n \to T^*S^n$ in the homotopy class of the zero section. Consider the associated twisted smooth Calabi-Yau ring spectrum $(\Omega S^n)^{\Omega (-\alpha)}$. (Here $-\alpha : S^n \to U$ is a map that represents the inverse of $\alpha$ in $\pi_nU$.) If $\phi_\alpha$ is Lagrangian isotopic to a Lagrangian embedding, then there is an equivalence of topological Hochschild homology spectra, $$ THH ((\Omega S^n)^{\Omega (-\alpha)}) \simeq THH(\Sigma^\infty (\Omega S^n_+)). $$ \end{theorem} \begin{proof} Let $Q$ and $N$ be smooth, closed manifolds of the same dimension. Given an exact Lagrangian embedding $j : Q \to T^*N$, Kragh in \cite{kr} defined a virtual Maslov bundle $\nu$ on $L_0Q$. Here $L_0$ denotes the path component of the free loop space that contains the constant loops. The construction, which uses notation that is different from ours, is described in section 2 of \cite{ak}. A map of spectra \begin{equation}\label{equiv} \psi : \Sigma^\infty(L_0N_+) \to L_0Q^{TN -TL\oplus \nu} \end{equation} was constructed and studied.
One of the main results of \cite{ak} is that the map $\psi$ is a homotopy equivalence of spectra. The Maslov bundle $\nu$ was defined as follows. (See section 2 of \cite{ak}.) The Lagrangian embedding $Q\to T^*N$ defines a map $\tau : Q \to U/O$. Then $-\nu$ was defined to be the restriction to $L_0Q$ of the map $$ LQ \xrightarrow{L\tau} L(U/O) \xrightarrow{\simeq} U/O \times \Omega U/O \xrightarrow{project}\Omega U/O \simeq \mathbb{Z} \times BO. $$ For a Lagrangian embedding or immersion $\phi_\alpha : S^n \to T^*S^n$ represented by $\alpha : S^n \to U$, Proposition \ref{BCS} shows that the Maslov bundle $\nu$ is just $-\ell (\alpha) : LS^n \to BU \to BO.$ Thus we may conclude from (\ref{equiv}) that if the Lagrangian immersion $\phi_\alpha$ is Lagrangian isotopic to a Lagrangian embedding, then the spectra $$ \Sigma^\infty(LS^n_+) \quad \text{and} \quad (LS^n)^{-\ell (\alpha)} = (LS^n)^{\ell(-\alpha)} $$ are equivalent. Now the spectrum $\Sigma^\infty(LS^n_+) $ is equivalent to the topological Hochschild homology $THH(\Sigma^\infty(\Omega S^n_+))$, whereas by Proposition \ref{BCS}, the spectrum $(LS^n)^{\ell (-\alpha)}$ is equivalent to the topological Hochschild homology $THH((\Omega S^n)^{\Omega(-\alpha)} )$. The statement of the theorem now follows. \end{proof} For $k > 1$, let $\alpha_k : S^{2k+1} \to U$ be a generator of $\pi_{2k+1}(U)$, which by Bott periodicity is isomorphic to the integers. In \cite{ak} it was proved that for $2k+1$ congruent to $1$, $3$, or $5$ modulo $8$, the Lagrangian immersion $\phi_k : S^{2k+1} \to T^*(S^{2k+1})$ represented by $\alpha_k$ is not Lagrangian isotopic to a Lagrangian embedding. We now see that this is detected by the fact that the twisted smooth Calabi-Yau ring spectra $\Sigma^\infty (\Omega S^{2k+1}_+)$ and $(\Omega S^{2k+1})^{\Omega (-\alpha_k)}$ have different topological Hochschild homologies.
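To illustrate this in the smallest case covered by the statement, take $k = 2$, so that $2k+1 = 5 \equiv 5$ modulo $8$, and let $\alpha_2$ generate $\pi_5(U) \cong \mathbb{Z}$. The associated attaching map constructed below lies in the image of $J$ in the third stable stem,
$$
\tilde \alpha_2 \in \mathrm{im}\left( J : \pi_3 O \to \pi_3(\mathbb{S}) \cong \mathbb{Z}/24 \right), \qquad \tilde \alpha_2 \neq 0,
$$
and, as the computation below shows, its nontriviality forces
$$
THH((\Omega S^{5})^{\Omega (-\alpha_2)}) \not\simeq THH(\Sigma^\infty (\Omega S^{5}_+)),
$$
so that, by Theorem \ref{lagTHH}, the Lagrangian immersion $\phi_2 : S^5 \to T^*S^5$ is not Lagrangian isotopic to a Lagrangian embedding.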
For this we again use the fact that $$ THH(\Sigma^\infty (\Omega S^{2k+1}_+)) \simeq \Sigma^\infty (LS^{2k+1}_+) \quad \text{and} \quad THH((\Omega S^{2k+1})^{\Omega (-\alpha_k)}) \simeq (LS^{2k+1})^{\ell (-\alpha_k)}. $$ In \cite{susp}, it was shown that the free loop space of the suspension of a connected, based space $X$ has a model $$ L\Sigma X \simeq \bigcup_n S^1 \times_{\mathbb{Z}/n} X^n / \sim $$ where the equivalence relation $\sim $ is a basepoint identification. This space is equivalent to the configuration space of points in $S^1$ with labels in $X$: \begin{equation}\label{cs1} C(S^1, X) = \bigcup_n F(S^1, n) \times_{\Sigma_n} X^n / \sim \quad \simeq L\Sigma X. \end{equation} where $ F(S^1, n)$ is the configuration space of $n$ ordered distinct points in $S^1$. It was also shown in \cite{susp} that the suspension spectrum of these filtered models stably splits: $$ \Sigma^\infty (L\Sigma X_+) \simeq \mathbb{S} \vee \Sigma^\infty\left( \bigvee_n S^1_+ \wedge_{\mathbb{Z}/n} X^{(n)}\right) $$ where $X^{(n)}$ denotes the $n$-fold smash product. Thus we have \begin{equation}\label{split} THH(\Sigma^\infty (\Omega S^{2k+1}_+)) \simeq \Sigma^\infty (LS^{2k+1}_+) \simeq \mathbb{S} \vee \Sigma^\infty \left(\bigvee_n S^1_+ \wedge_{\mathbb{Z}/n} S^{2kn}\right). \end{equation} On the other hand, applying a result of Lewis \cite{LMS} to (\ref{cs1}) shows that the Thom spectrum $ (LS^{2k+1})^{\ell (-\alpha_k)}$ has the homotopy type of the spectrum $C(S^1, (S^{2k})^{b\alpha_k})$, where $C(S^1, -)$ is viewed as a functor from the category of spectra $E$ equipped with a unit $\mathbb{S} \to E$ to spectra. $(S^{2k})^{b\alpha_k}$ is the Thom spectrum of the map $$ b\alpha_k : S^{2k} \xrightarrow{\iota} \Omega \Sigma S^{2k} = \Omega S^{2k+1} \xrightarrow{\Omega (-\alpha_k)} \Omega U \simeq \mathbb{Z} \times BU \to \mathbb{Z} \times BO. $$ Here $\iota : S^{2k} \to \Omega \Sigma S^{2k}$ is the adjoint of the identity on $\Sigma S^{2k}$.
$\iota$ generates $\pi_{2k}\Omega \Sigma S^{2k} \cong \mathbb{Z}$. Note also that, by the connectivity of $S^{2k}$, the image of $b\alpha_k$ lies in $\{0\} \times BO$. Think of $\Sigma_{m-1}$ as the subgroup of $\Sigma_m$ consisting of those permutations of $m$ letters that leave the last letter fixed. Then as in \cite{LMS}, the spectrum $ C(S^1, (S^{2k})^{b\alpha_k}) \simeq (LS^{2k+1})^{\ell (-\alpha_k)} $ is defined to be the homotopy coequalizer of the maps $$ \beta, \gamma : \bigvee_m F(S^1, m)_+ \wedge_{\Sigma_{m-1}} \left((S^{2k})^{b\alpha_k}\right)^{(m-1)} \longrightarrow \bigvee_n F(S^1, n)_+ \wedge_{\Sigma_{n}} \left((S^{2k})^{b\alpha_k}\right)^{(n)} $$ where for each $m$, $$\beta : F(S^1, m)_+ \wedge_{\Sigma_{m-1}} \left((S^{2k})^{b\alpha_k}\right)^{(m-1)} \to F(S^1, m-1)_+ \wedge_{\Sigma_{m-1}} \left((S^{2k})^{b\alpha_k}\right)^{(m-1)} $$ is defined by deleting the last coordinate of the configuration in $F(S^1, m)$, while $$ \gamma : F(S^1, m)_+ \wedge_{\Sigma_{m-1}} \left((S^{2k})^{b\alpha_k}\right)^{(m-1)} \to F(S^1, m)_+ \wedge_{\Sigma_{m}} \left((S^{2k})^{b\alpha_k}\right)^{(m)} $$ is given by smashing with the unit, $ \left((S^{2k})^{b\alpha_k}\right)^{(m-1)}\wedge \mathbb{S} \xrightarrow{1\wedge u} \left((S^{2k})^{b\alpha_k}\right)^{(m)}$. \medskip We make two observations about this construction. \begin{enumerate}\label{observe} \item Notice that the Thom spectrum $(S^{2k})^{b\alpha_k}$ is equivalent to the $CW$ spectrum $$ (S^{2k})^{b\alpha_k} \simeq \mathbb{S} \cup_{\tilde \alpha_k} D^{2k} $$ where the attaching map $\tilde \alpha_k : \Sigma^\infty S^{2k-1} \to \mathbb{S}$ is defined as follows. Consider the composition $S^{2k} \xrightarrow{b\alpha_k} BO \xrightarrow{BJ} BGL_1(\mathbb{S})$ where $BJ$ is the delooping of the $J$-homomorphism $J : O \to GL_1(\mathbb{S})$. Applying the loop space functor defines a map $S^{2k-1} \to GL_1(\mathbb{S})$.
Since $S^{2k-1}$ is connected, its image lies in a single component of $GL_1(\mathbb{S})$, which is equivalent to the component of the basepoint in $QS^0$. The adjoint of this map is the definition of the map $\tilde \alpha_k : \Sigma^\infty (S^{2k-1}) \to \mathbb{S}$. Notice that, by definition, it is in the image of the $J$-homomorphism, $J : \pi_{2k-1}O \to \pi_{2k-1}(\mathbb{S})$. \item $ C(S^1, (S^{2k})^{b\alpha_k}) \simeq C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k})$ is a naturally filtered spectrum, where the filtration is by the cardinality of the configuration of points in $S^1$. Let $\mathcal{F}_m(C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k}))$ be the $m^{th}$ filtration. We observe that the filtration $0$ part $\mathcal{F}_0$ is the unit $\mathbb{S} \hookrightarrow C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k})$, while the filtration $1$ part is $ \mathcal{F}_1(C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k})) = S^1_+ \wedge (\mathbb{S} \cup_{\tilde \alpha_k} D^{2k})$. \end{enumerate} \medskip By the inclusion of the filtration $0$ part, we have a cofibration sequence of spectra, \begin{equation}\label{cofib} \mathbb{S} \to C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k}) \simeq THH((\Omega S^{2k+1})^{\Omega (-\alpha_k)}) \to C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k}) /\mathbb{S}. \end{equation} In the case where $\alpha_k$ is replaced by the trivial homotopy class, this cofibration sequence becomes $$ \mathbb{S} \to C(S^1, \Sigma^\infty (S^{2k}_+)) \simeq THH( \Sigma^\infty (\Omega S^{2k+1}_+)) \to C(S^1, \Sigma^\infty (S^{2k}_+))/\mathbb{S}.
$$ By the splitting given by (\ref{split}) above, this cofibration splits, and $$C(S^1, \Sigma^\infty (S^{2k}_+))/ \mathbb{S} \simeq \Sigma^\infty \left(\bigvee_n S^1_+ \wedge_{\mathbb{Z}/n} S^{2kn}\right).$$ So to prove that $THH((\Omega S^{2k+1})^{\Omega (-\alpha_k)})$ is \sl not \rm equivalent to $ THH( \Sigma^\infty (\Omega S^{2k+1}_+))$, it suffices to show that the cofibration sequence (\ref{cofib}) \sl does not \rm split. But to do this, it suffices to check that the cofibration sequence one gets by restricting to filtration 1 of $C(S^1, \mathbb{S} \cup_{\tilde \alpha_k} D^{2k})$ does not split. That is, by the second observation above, it suffices to check that the cofibration sequence $$ \mathbb{S} \to S^1_+ \wedge \left( \mathbb{S} \cup_{\tilde \alpha_k} D^{2k}\right) \to \left(S^1_+\wedge S^{2k}\right) \vee S^1_+ $$ does not split. But this is true so long as we know that the attaching map $\tilde \alpha_k : \Sigma^\infty (S^{2k-1}) \to \mathbb{S}$ is nontrivial. This class is in the image of $J$, constructed out of $\alpha_k : S^{2k+1} \to U$ as described above. By standard calculations of Quillen and Adams, as described in \cite{ak}, for $2k+1$ congruent to $1$, $3$, or $5$ modulo $8$, this class is nontrivial. Hence the cofibration sequence (\ref{cofib}) does not split, which means that $THH((\Omega S^{2k+1})^{\Omega (-\alpha_k)})$ and $THH( \Sigma^\infty (\Omega S^{2k+1}_+))$ are not homotopy equivalent spectra. By Theorem \ref{lagTHH}, this implies that the Lagrangian immersion $\phi_k$ is \sl not \rm Lagrangian isotopic to a Lagrangian embedding. \section{A topological Hochschild (co)homology perspective} In this section we give a topological Hochschild homology and cohomology interpretation of the Calabi-Yau structures and the dualities between the manifold string topology ring spectrum $\mathcal{S}^\bullet_{P}(M)$ and the Lie group string topology coalgebra spectrum $\mathcal{S}^P_\bullet (M)$.
\medskip We continue to consider a principal bundle $G \to P \xrightarrow{p} M$ where $M$ is a closed manifold of dimension $n$ and $G$ is a compact Lie group of dimension $d$. A choice of connection on the bundle $P$ defines a \sl holonomy \rm map $$ h_P : \Omega M \to G. $$ This is a map of group-like $A_\infty$ spaces, and the induced map of classifying spaces, $Bh_P : M \simeq B(\Omega M) \to BG$ classifies the bundle $P$. We then have an induced map of ring spectra and differential graded algebras that by abuse of notation we still denote by $h_P$: $$ \Sigma^\infty (\Omega M_+) \xrightarrow{h_P} \Sigma^\infty (G_+) \quad \text{and} \quad C_*(\Omega M) \xrightarrow{h_P} C_*(G) $$ These holonomy maps therefore define bimodule structures of $\Sigma^\infty (G_+)$ over $\Sigma^\infty (\Omega M_+)$ and of $C_*(G)$ over $C_*(\Omega M)$. We can therefore study the (topological) Hochschild homology of these algebras with coefficients in these bimodules. In what follows we suppress the map $h_P$ from the notation regarding these bimodules. This abuse is somewhat justified because, given any two choices of holonomy maps, the induced module structures are equivalent. We also note that a choice of holonomy defines an inherited dual bimodule structure on the Spanier-Whitehead dual $G^\vee$ over $\Sigma^\infty (\Omega M_+)$, and similarly the cochains $C^*(G)$ inherit the dual bimodule structure over $C_*(\Omega M)$. One of the main results of this section is the following. \medskip \begin{theorem}\label{hochschild} We have the following equivalences involving topological Hochschild homology $THH_\bullet$ and topological Hochschild cohomology $THH^\bullet$. \begin{enumerate} \item $THH_\bullet(\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq \Sigma^\infty(P^{Ad}_+)$ \item $ THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq (P^{Ad})^{-TM} \simeq \mathcal{S}_P^\bullet (M).
$ \quad \text{This equivalence is one of ring spectra.} \\ \item $THH_\bullet(\Sigma^\infty (\Omega M_+), G^\vee) \simeq (P^{Ad})^{-T_{vert}} \simeq \mathcal{S}_\bullet^P(M).$ \quad \text{This equivalence is one of coalgebra spectra.} \end{enumerate} \end{theorem} \begin{proof} Given any homomorphism $\phi : H \to G$ of topological groups, one has $$THH_\bullet(\Sigma^\infty (H_+), \Sigma^\infty (G_+)) \simeq \Sigma^\infty (EH \times_H G^{Ad})$$ where $G^{Ad}$ represents the adjoint (conjugation) action of $H$ on $G$: $h\cdot g = \phi(h)g\phi(h)^{-1}$. This is because $THH_*(\Sigma^\infty (H_+), \Sigma^\infty (G_+)) $ is equivalent to the suspension spectrum of the cyclic bar construction $N^{cy}(H,G)$, which Waldhausen showed is equivalent to the homotopy orbit space of $H$ acting on $G$ via the conjugation action \cite{waldhausen}. In our case, we may think of $H$ as the based loop space $\Omega M$ by taking $H$ to be a topological group of the same $A_\infty$-homotopy type. (As we did earlier, by abuse of notation we still call this group $\Omega M$.) Then this observation says that $$ THH_\bullet(\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq \Sigma^\infty(E\Omega M \times_{\Omega M} G^{Ad}_+) \simeq \Sigma^\infty (P^{Ad}_+). $$ This proves part (1) of the theorem. For part (2), we use the similarly well-known fact that the topological Hochschild cohomology of the suspension spectrum of a group can be described as a homotopy fixed point spectrum. That is, as above, if $\phi : H \to G$ is a homomorphism of topological groups, then \begin{equation} THH^\bullet (\Sigma^\infty (H_+), \Sigma^\infty(G_+)) \simeq \Sigma^\infty(G_+)^{h\Sigma^\infty (H_+)} \end{equation} where $\Sigma^\infty (H_+)$ acts on $\Sigma^\infty (G_+)$ via the conjugation action. As above, we refer to this as the adjoint action and write it as $\Sigma^\infty (G_+)^{Ad}$. (See \cite{westerland} or section 4 of \cite{malm}.)
Now this homotopy fixed point spectrum is defined to be $$ \Sigma^\infty(G_+)^{h\Sigma^\infty (H_+)} = Rhom_{\Sigma^\infty (H_+)}(\mathbb{S}, \Sigma^\infty (G_+)^{Ad}). $$ So in our case we have that $$ THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq Rhom_{\Sigma^\infty (\Omega M_+)}(\mathbb{S}, \Sigma^\infty (G_+)^{Ad}). $$ Now notice that since the homotopy orbit spectrum of $\Sigma^\infty (\Omega M_+)$ acting on $ \Sigma^\infty (G_+)$ via the adjoint action is $\Sigma^\infty (P^{Ad})$, this spectrum of $\Sigma^\infty(\Omega M_+)$-equivariant morphisms is equivalent to the spectrum of sections of the parameterized spectrum $\Sigma^\infty (G_+) \to \Sigma^\infty_M ((P^{Ad})_+) \to M$: $$ Rhom_{\Sigma^\infty (\Omega M_+)}(\mathbb{S}, \Sigma^\infty (G_+)^{Ad}) \simeq \Gamma_M(\Sigma^\infty_M ((P^{Ad})_+)). $$ But this spectrum of sections is, by definition, the manifold string topology spectrum, $\mathcal{S}^\bullet_P(M)$. Furthermore it is clear that the ring spectrum structures coincide under this equivalence. Moreover, by the Atiyah-Poincar\'e duality theorem proved by Klein \cite{klein}, \cite{CohenKlein}, we have that $$ \Gamma_M(\Sigma^\infty_M ((P^{Ad})_+)) \simeq (P^{Ad})^{-TM} $$ as ring spectra. Putting these together, we conclude that $$ THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq \mathcal{S}^\bullet_P(M) \simeq (P^{Ad})^{-TM} $$ as ring spectra. This is the statement of part (2) of the theorem. We now consider part (3) of the theorem. The Spanier-Whitehead dual of the simplicial spectrum $THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)$ can, because of the compactness assumption on $G$, be described as the totalization of the cosimplicial spectrum given by taking the Spanier-Whitehead dual levelwise. This cosimplicial spectrum has as its spectrum of $k$-simplices, $Rhom_\mathbb{S} ( \Sigma^\infty (\Omega M_+)^{(k)}, \Sigma^\infty (G_+))$.
The coface maps and the codegeneracies are the duals of the face and degeneracy maps in the simplicial spectrum $THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)$. But this cosimplicial spectrum is exactly the cosimplicial spectrum defining the topological Hochschild cohomology spectrum, $THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+))$. That is, we have observed that $$ THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)^\vee = THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)). $$ This is a ring spectrum, so its Spanier-Whitehead dual, $ THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)$, inherits the structure of a coalgebra spectrum. Furthermore, we know from part (2) that $$ THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)^\vee = THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \simeq \mathcal{S}^\bullet_P(M) \simeq (P^{Ad})^{-TM} $$ as ring spectra. Thus, applying Spanier-Whitehead duality, Theorem \ref{SWdual}, and Theorem \ref{main}, we have $$ THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee) \simeq \mathcal{S}^P_\bullet (M) \simeq (P^{Ad})^{-T_{vert}} $$ as coalgebra spectra. \medskip Alternatively, as in the proof of part (1) of the theorem, we have an equivalence $$THH_\bullet(\Sigma^\infty(\Omega M _+), G^\vee) \simeq \mathbb{S} \wedge^L _{\Sigma^\infty(\Omega M_+)} (G^\vee)^{Ad}.$$ Now, $\mathcal{S}_\bullet^P(M)$ is the homology spectrum of the spectrum over $M$ whose fiber is $G^\vee$, on which $\Omega M$ acts via (the dual of the) conjugation action. Therefore, we see that $$\mathcal{S}_\bullet^P(M) \simeq \mathbb{S} \wedge^L _{\Sigma^\infty(\Omega M_+)} (G^\vee)^{Ad} \simeq THH_\bullet(\Sigma^\infty(\Omega M _+), G^\vee).$$ For all these spectra, the coproduct comes from dualizing the multiplication map $G \times G \to G$; see below for an explicit description of the coproduct on $\mathbb{S} \wedge^L _{\Sigma^\infty(\Omega M_+)} (G^\vee)^{Ad}$.
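For the reader's convenience, we also spell out, with one standard choice of conventions, the cyclic bar construction $N^{cy}(H, G)$ used in the proof of part (1). Its $k$-simplices and face maps are
$$
N^{cy}(H, G)_k = G \times H^{k}, \qquad
d_i(g; h_1, \dots, h_k) =
\begin{cases}
(g\,\phi(h_1);\; h_2, \dots, h_k) & i = 0,\\
(g;\; h_1, \dots, h_i h_{i+1}, \dots, h_k) & 0 < i < k,\\
(\phi(h_k)\,g;\; h_1, \dots, h_{k-1}) & i = k,
\end{cases}
$$
with degeneracies inserting identity elements. The first and last face maps together produce the conjugation action of $H$ on $G$ appearing in Waldhausen's identification of the realization with $EH \times_H G^{Ad}$.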
\end{proof} \medskip We end by observing how the twisted compact Calabi-Yau structure on $\mathcal{S}^\bullet_P(M)\simeq (P^{Ad})^{-TM}$ can be understood from this Hochschild perspective. \medskip The twisting bimodule in the twisted $cCY$ structure on $R = \mathcal{S}^\bullet_P(M)$ is $Q = \Sigma^{d-n}(\mathcal{S}_\bullet^P(M)) \simeq \Sigma^{d-n}(P^{Ad})^{-T_{vert}}$. We first observe that the duality pairing (\ref{pairing}) in the dimension $n-d$ twisted compact Calabi-Yau structure $$ \langle , \rangle : Q \wedge R \to \Sigma^{d-n}\mathbb{S} $$ can be described in terms of Hochschild theory as follows. As described above we have natural equivalences \begin{align} R = THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) &\simeq Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, \Sigma^\infty (G_+)^{Ad}), \quad \text{and} \notag \\ Q = \Sigma^{d-n}THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee) &\simeq \Sigma^{d-n} \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)} (G^\vee)^{Ad} \notag \end{align} We therefore have a cap product \begin{align} \cap : Q \wedge R = \left(\Sigma^{d-n} \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)} (G^\vee)^{Ad}\right) &\wedge \left(Rhom_{\Sigma^\infty (\Omega M_+)} (\mathbb{S}, \Sigma^\infty (G_+)^{Ad}\right) \\ &\longrightarrow \Sigma^{d-n}(\Sigma^\infty (G_+))^{Ad} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad} \notag \end{align} The evaluation map $ev: \Sigma^\infty (G_+) \wedge G^\vee \to \mathbb{S}$ is $\Sigma^\infty (\Omega M_+) $ - invariant with respect to conjugation, and so defines a map $$ ev : \Sigma^{d-n}(\Sigma^\infty (G_+))^{Ad} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad} \to \Sigma^{d-n} \mathbb{S}. 
$$ Composing these defines the duality pairing \begin{align} \langle , \rangle : Q \wedge R &\to \Sigma^{d-n}\mathbb{S} \notag \\ \left(\Sigma^{d-n}THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)\right)&\wedge \left(THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+))\right) \xrightarrow{\cap} \notag \\ \Sigma^{d-n}\left((\Sigma^\infty (G_+))^{Ad} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad}\right) &\xrightarrow{ev} \Sigma^{d-n} \mathbb{S}. \notag \end{align} \medskip Finally, we observe how the bimodule structure of $Q$ over $R$ can be understood at the topological Hochschild (co)homology level. $Q = \Sigma^{d-n}\mathcal{S}^P_\bullet (M) \simeq \Sigma^{d-n} THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)$. Now $\mathcal{S}^P_\bullet (M) \simeq THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee) \simeq \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad}$ is a coalgebra spectrum, and its coproduct $\psi$ can be seen on the $THH$-level as follows: $$\xymatrix{ \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad} \ar[r]^-{1 \wedge \mu^\vee} & \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}((G \times G)^\vee)^{Ad} & \ar[l]^-{\simeq} \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee \wedge G^\vee)^{Ad} \ar[d]^-{\Delta} \\ & & (\mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad}) \wedge (\mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad}) }$$ This needs some explanation. $\mu : G \times G \to G$ is the multiplication map. $\mu^\vee : G^\vee \to G^\vee \wedge G^\vee$ is its Spanier-Whitehead dual. It is equivariant with respect to the adjoint action of $\Sigma^\infty (\Omega M_+)$ since $\mu$ is equivariant with respect to the adjoint action.
The map $\Delta : \mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee \wedge G^\vee)^{Ad} \to \left(\mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad}\right) \wedge \left(\mathbb{S} \wedge_{\Sigma^\infty (\Omega M_+)}(G^\vee)^{Ad}\right)$ is induced by the inclusion of $\Omega M$ into $\Omega M \times \Omega M$ as the diagonal subgroup. The action map $Q \wedge R \to Q$ is then homotopic to the composition \begin{align} &Q\wedge R \simeq \left( \Sigma^{d-n} THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee)\right) \wedge THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \xrightarrow{\psi \wedge 1} \notag \\ &\Sigma^{d-n} THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee) \wedge THH_\bullet (\Sigma^\infty (\Omega M_+), G^\vee) \wedge THH^\bullet (\Sigma^\infty (\Omega M_+), \Sigma^\infty (G_+)) \xrightarrow{1 \wedge \langle , \rangle } \notag \\ &\Sigma^{d-n} THH_\bullet(\Sigma^\infty (\Omega M_+), G^\vee)\wedge \mathbb{S} = Q. \notag \end{align} The left module structure map $R \wedge Q \to Q$ is homotopic to the analogous composition.
\section{Introduction} \IEEEPARstart{O}{ptimal} power flow (\textsc{opf}\xspace) problems (or variants thereof) are employed in many power system contexts to ensure stable and economic system operation. In the presence of line congestion, for example, \textsc{opf}\xspace problems are used to determine or re-dispatch generator set points. In fact, in the German power grid, the number of these re-dispatch events increased drastically in recent years owing to the increasing penetration of renewables, the phase-out of nuclear plants, and liberalized energy markets \cite{BundesnetzagenturfuerElektrizitaet2017}. This trend illustrates the importance of efficient and reliable \textsc{opf}\xspace computations in daily grid operation. Whereas in the past the distribution grid level was often not considered in \textsc{opf}\xspace computations, nowadays this may lead to problems, as renewable generation might cause violations of voltage limits and line limits at the distribution grid level. Including the distribution grid in \textsc{opf}\xspace problems under \textsc{ac}\xspace conditions can help to resolve this, yet doing so increases the problem size. All of the above observations have triggered significant research activity on hierarchical or distributed algorithms for \textsc{opf}\xspace; i.e.
algorithms that split the overall problem into a number of smaller subproblems whose parallel solution may or may not be coordinated by a central entity \cite{Molzahn2017}.\footnote{We remark that the notion of \emph{distributed algorithm} is not unified in the context of numerical optimization for \textsc{opf}\xspace problems: While in the optimization literature distributed algorithms entail a central coordinating entity \cite{Bertsekas1989}, in the context of \textsc{opf}\xspace such schemes are referred to as \emph{hierarchical} \cite{Molzahn2017}.} Given the relevance of solving \textsc{opf}\xspace problems, it is not surprising that there exists a multitude of results on distributed algorithms for \textsc{opf}\xspace under \textsc{ac}\xspace conditions; we refer to \cite{Molzahn2017,Capitanescu2016} for recent overviews. One can distinguish three main lines of research: i) (ad-hoc) application of algorithms tailored to convex Nonlinear Programs (\textsc{nlp}s\xspace), thus in general losing convergence guarantees \cite{Erseghe2014a,Erseghe2015}; ii) convex relaxation of \textsc{opf}\xspace by either inner or outer approximation of the feasible set \cite{Low2014,DallAnese2013a}; and iii) application of distributed algorithms tailored to non-convex \textsc{nlp}s\xspace \cite{Engelmann2017,Hours2017}. The present paper follows line iii). Before we present our approach, we concisely review existing results for items i)-iii). With respect to i), the set of convex algorithms directly applied to \textsc{opf}\xspace ranges from the Auxiliary Problem Principle \cite{Kim1997,Hur2002} and the Predictor Corrector Proximal Multiplier Method \cite{Kim2000} to the popular Alternating Direction Method of Multipliers (\textsc{admm}\xspace) \cite{Kim2000,Erseghe2014a}.
A number of recent works discuss \textsc{admm}\xspace in more detail, each with a different focus: exhaustive simulation-based convergence analysis \cite{Erseghe2014a}, parameter update rules \cite{Erseghe2015}, and applicability to large-scale grids \cite{Guo2017}. All of these methods share the advantage that, usually, they exchange only primal variables between the subproblems, which correspond to linear consensus constraints. However, the convergence rate is at most linear \cite{Boyd2011,Bertsekas1989}. Recently, the authors of \cite{Hong2016} presented \textsc{admm}\xspace convergence results for problems with non-convex objective functions of a special form (consensus and sharing problems). However, it remains unclear whether \textsc{ac}\xspace-\textsc{opf}\xspace fits that form and, to the best of the authors' knowledge, there are no general convergence guarantees for \textsc{ac}\xspace-\textsc{opf}\xspace using \textsc{admm}\xspace. Another subbranch of i) proposes Optimality Condition Decomposition (\textsc{ocd}\xspace) to solve \textsc{ac}\xspace-\textsc{opf}\xspace problems, see \cite{Conejo2002, Conejo2006} and \cite{Nogales2003, Arnold2007,Hug-Glanzmann2009}. This method aims to solve the first-order necessary conditions, including the nonlinear coupling constraints, in a distributed fashion without any problem modification. In \textsc{ocd}\xspace, all subproblems receive primal and dual variables from neighboring regions and treat them as fixed parameters in each local optimization. In \cite{Conejo2002}, a necessary condition for convergence to a first-order stationary point is discussed. However, to the best of our knowledge, it remains unclear whether this condition holds for arbitrary \textsc{opf}\xspace problems \cite{Erseghe2014a}. Moreover, in \cite{Conejo2002}, the convergence rate of this method is shown to be linear.
With respect to ii), convex outer approximations of the feasible set via Semi-Definite Programming (\textsc{sdp}\xspace) are considered in \cite{Bai2008, DallAnese2013a,Molzahn2013a,Peng2017}; the \textsc{opf}\xspace problem is lifted to a higher-dimensional space wherein it becomes convex whenever a specific rank constraint is dropped. This relaxed and lifted problem can be solved using the above-mentioned convex algorithms, thereby obtaining convergence guarantees. The crux of \textsc{sdp}\xspace relaxations of \textsc{opf}\xspace problems is that the exactness of solutions (in terms of the original non-relaxed \textsc{opf}\xspace problem) can so far only be guaranteed via structural assumptions: either on technical equipment, such as small transformer resistances, or on the grid topology, e.g. radial grids \cite{Lavaei2012,Low2014,Low2014a,Christakou2017}. Finally, research line iii) considers algorithms with certain convergence guarantees for non-convex problems. This includes approaches based on trust region and alternating projection methods with convergence guarantees at linear rate \cite{Hours2017}. A distributed approach based on interior point algorithms is proposed in \cite{Lu2018}, where the authors (similar to works on optimal control \cite{Necoara2009,TranDinh2013}) decompose certain steps of a centralized optimization method. Hence, \cite{Lu2018} \mbox{obtains---due} to equivalence to the corresponding centralized \mbox{method---promising} numerical results even for very large grids. The present paper aims at investigating the potential of the recently proposed Augmented Lagrangian Alternating Direction Inexact Newton (\textsc{aladin}\xspace) method \cite{Houska2016} for \textsc{opf}\xspace problems. Similar to \textsc{admm}\xspace, \textsc{aladin}\xspace solves a sequence of local optimization problems combined with a coordination step. All computationally expensive operations (i.e.
non-convex minimizations as well as function and derivative evaluations) are performed locally. The coordination step entails solving an equality-constrained Quadratic Program (\textsc{qp}\xspace) in each iteration, which is computationally cheap since an equality-constrained \textsc{qp}\xspace reduces to a linear system of equations. Motivated by the locally quadratic convergence properties of \textsc{aladin}\xspace \cite{Houska2016}, we proposed its application to \textsc{opf}\xspace in a preceding conference paper \cite{Engelmann2017}, presenting results merely for a 5-bus problem without line limits. \comm{The contributions of the present paper are threefold: (a) We provide a detailed investigation of the prospects of \textsc{aladin}\xspace for \textsc{opf}\xspace problems. To this end, we present numerical results of \textsc{aladin}\xspace for a set of widely used (\textsc{ieee}\xspace) test systems ranging from 5 to 300 buses. We explicitly compare our findings to \textsc{admm}\xspace results presented in \cite{Erseghe2015}. (b) We show how inexact Hessians can be used to reduce the communication effort of \textsc{aladin}\xspace, and provide a detailed analysis for the test systems. (c) Finally, we prove quadratic convergence for the practically relevant case of suboptimal solution of the local \textsc{nlp}s\xspace, extending the convergence analysis of \cite{Houska2016}. } The remainder of the paper is structured as follows: Section \ref{sec:ProblemForm} states the \textsc{ac}\xspace-\textsc{opf}\xspace problem. Section \ref{sec:ALADIN} recalls the \textsc{ac}\xspace-\textsc{opf}\xspace problem in affinely coupled separable form and revisits the \textsc{aladin}\xspace algorithm. Extensive numerical case studies for \textsc{aladin}\xspace and \textsc{admm}\xspace are discussed in Section \ref{sec:results}.
Finally, Section \ref{sec::ALvsADM} compares \textsc{aladin}\xspace and \textsc{admm}\xspace in terms of their convergence properties and in terms of their communication effort. \emph{Notation:} Subscripts $(\cdot)_{k,l}$ describe nodal variables, subscripts $(\cdot)_{i,j}$ denote local variables, and superscripts $(\cdot)^k$ indicate \textsc{aladin}\xspace iterates. \section{Problem Statement} \label{sec:ProblemForm} \subsection{Optimal Power Flow Problem} \label{sec:ACOPF} Consider an electrical grid at steady state described by the triple $(\mathcal{N}^0,\mathcal{G},Y)$, where $\mathcal{N}^0=\{1, \hdots, N^0\}$ is the bus set, $\mathcal{G} \subseteq \mathcal{N}^0$ is the generator set and $Y=G+jB \in \mathbb{C}^{N^0\times N^0}$ is the bus admittance matrix. Neglecting shunts for simplicity, the entries $Y_{kl} = G_{kl} + j B_{kl}$ of the bus admittance matrix are given by \[ Y_{kl}= \left\{ \begin{aligned} \displaystyle \sum_{ m \in \mathcal{N}^0 \setminus \{k\}} &y_{km}, && \mbox{if} \quad k=l, \\ -&y_{kl}, && \mbox{if} \quad k \neq l, \end{aligned}\right. \] where $y_{kl} \in \mathbb{C}$ is the admittance of the transmission line connecting buses $k$ and $l$. One bus $r\in \mathcal{N}^0$ is designated as the reference bus for the voltage angles.
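As an illustration, the admittance matrix defined above can be assembled from the individual line admittances in a few lines. The following sketch assumes 0-based bus indices and a dictionary of bus-pair admittances as the input format (both assumptions made here for illustration only):

```python
import numpy as np

def build_admittance_matrix(n_bus, line_admittances):
    """Assemble the bus admittance matrix Y = G + jB, shunts neglected.

    line_admittances: dict mapping bus pairs (k, l) to the complex line
    admittance y_kl (0-based indices; a hypothetical input format).
    """
    Y = np.zeros((n_bus, n_bus), dtype=complex)
    for (k, l), y_kl in line_admittances.items():
        Y[k, k] += y_kl          # diagonal: sum of incident line admittances
        Y[l, l] += y_kl
        Y[k, l] -= y_kl          # off-diagonal: -y_kl
        Y[l, k] -= y_kl
    return Y
```

With shunts neglected, every row of the resulting $Y$ sums to zero, which is a quick sanity check on the assembled matrix.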
The \textsc{ac}\xspace-\textsc{opf}\xspace problem can be written as the following \textsc{nlp}\xspace \begin{subequations} \label{eq::ACOPF} \begin{align} &\min_{\theta,v,p,q} ~ \sum_{k \in \mathcal{G}} c_{1,k} p_{k}^2 + c_{2,k} p_{k} + c_{3,k}, \\ &\quad \text{subject to}\;\nonumber\\ \begin{split} &\;v_k\sum_{l \in \mathcal{N}^0} v_l(G_{kl}\cos(\theta_{kl})+B_{kl}\sin(\theta_{kl})) = p_k - p_{k}^d ,\label{eq::pf1}\\[0.1cm] &\;v_k\sum_{l \in \mathcal{N}^0} v_l(G_{kl}\sin(\theta_{kl})-B_{kl}\cos(\theta_{kl})) = q_k - q_{k}^d , \end{split} \\[0.1cm] \begin{split} & \; \underline{p}_k \leq p_k \leq \overline{p}_k, \quad \forall k \in \mathcal{G},\label{eq::box1} \\[0.1cm] &\; \underline{q}_k \leq q_k \leq \overline{q}_k, \quad \forall k \in \mathcal{G}, \\[0.1cm] &\; \underline{v}_k \leq v_k \leq \overline{v}_k, \quad \forall k \in \mathcal{N}^0, \end{split}\\[0.1cm] &\; v_r=1\;, \;\; \theta_r=0\; \label{eq::slack}, \end{align} \end{subequations} with $c_{1,k}>0$ and $\theta_{kl}=\theta_k-\theta_l$. In Problem~\eqref{eq::ACOPF} $v_k$ denotes the voltage magnitude, $\theta_k$ denotes the voltage angle, $p_k$ and $ q_k$ denote the active and reactive power injections, $p_k^d$ and $q_k^d$ denote the active and reactive power demands at bus $k$. Problem~\eqref{eq::ACOPF} aims to minimize the total generation cost subject to the power flow equations~\eqref{eq::pf1}, generation and voltage bounds~\eqref{eq::box1}, and the reference constraint \eqref{eq::slack}. \subsection{Separable Reformulation} \label{sec:ACOPF_separable} We recall the reformulation of the \textsc{ac}\xspace-\textsc{opf}\xspace Problem~\eqref{eq::ACOPF} in affinely coupled separable form amenable to distributed optimization \cite{Engelmann2017}. We begin by partitioning the bus set $\mathcal{N}^0$ into $\mathcal{R}=\{1,\dots,R\}$ (usually geographically motivated) distinct local bus sets $\mathcal{N}^0_i=\{n^{0,1}_i,\dots,n_i^{0,N^0_i}\}$. 
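For concreteness, the power flow equations~\eqref{eq::pf1} can be evaluated as residual functions that vanish at any consistent operating point. The following sketch assumes bus quantities passed as NumPy arrays (an illustrative interface, not the one used in the paper's implementation):

```python
import numpy as np

def power_flow_residuals(theta, v, p, q, G, B, p_d, q_d):
    """Residuals of the AC power flow equations: left-hand side of
    eq. (2) minus the net injections p - p_d and q - q_d."""
    n = len(theta)
    r_p = np.empty(n)
    r_q = np.empty(n)
    for k in range(n):
        dth = theta[k] - theta  # theta_kl for all buses l at once
        r_p[k] = v[k] * np.sum(v * (G[k] * np.cos(dth) + B[k] * np.sin(dth))) \
            - (p[k] - p_d[k])
        r_q[k] = v[k] * np.sum(v * (G[k] * np.sin(dth) - B[k] * np.cos(dth))) \
            - (q[k] - q_d[k])
    return r_p, r_q
```

At a flat start with zero injections and demands both residual vectors vanish, since the rows of $Y$ sum to zero when shunts are neglected.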
For each bus pair $(m,n)$ located at a boundary between two local bus sets (which means $m \in \mathcal{N}^0_i$ and $n \notin \mathcal{N}^0_i$), we introduce an auxiliary bus pair $(k,l)$ in the middle of the corresponding transmission line. Hence, the corresponding admittances coupling buses $m$ and $k$ ($n$ and $l$, respectively) are twice the original admittance, i.e. $y_{mk} = 2\,y_{mn}$, $y_{nl} = 2\,y_{mn}$. We couple the auxiliary buses only with buses in the interior of each region (i.e. not with each other). Thus we obtain decoupled local admittance matrices $Y_i \in \mathbb{C}^{N_i\times N_i}$ that contain all original buses and the newly introduced auxiliary buses. Furthermore, we define enlarged local bus sets $\mathcal{N}_i=\{n^{1}_i,\dots,n_i^{N_i}\}$ containing the original local bus sets $\mathcal{N}_i^0$ and their corresponding auxiliary buses. Fig. \ref{fig:grid} and Fig. \ref{fig:splitting2} illustrate the decomposition procedure and the corresponding sets for an example 5-bus system. All auxiliary bus pairs are collected in the set $\mathcal{A}$, and the enlarged local bus sets define the enlarged bus set $\mathcal{N}=\bigcup_{i \in \mathcal{R}} \mathcal{N}_i$. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{grid_new3.pdf} \caption{Decomposed 5-bus test case \cite{Li2010} with three local bus sets $\mathcal{N}_1=\{1,5,6,10,12\}$, $\mathcal{N}_2=\{2,3,7,8\}$, $\mathcal{N}_3=\{4,9,11,13\}$ (black), auxiliary bus pairs $\mathcal{A}=\{(6,7),(8,9),(10,11),(12,13)\}$ (green) and line limits depicted in red.} \label{fig:grid} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\linewidth]{splitting5.pdf} \caption{\comm{Coupling of two neighboring regions.}} \label{fig:splitting2} \end{figure} \comm{Every bus $k \in \mathcal{N}$ is represented by \begin{equation*}\label{eq:NodalState} \chi_{k} = \left [\, \theta_k \quad v_k\quad p_{k}\quad q_{k}\; \right]^\top \in \mathbb{R}^4.
\end{equation*} For each region $i \in \mathcal{R}$, we stack its bus variables $\chi_{k}$ in local vectors ${x_i= [\chi_{n_1^i}^\top\;\dots\;\chi_{n_{N_i}^i}^\top ]^\top \in \mathbb{R}^{n_i}}$, where $n_i=4N_i$. The local objective functions $f_i:\mathbb{R}^{n_i}\rightarrow\mathbb{R}$ are \begin{equation} \notag f_i(x_i) := \sum_{k \in \mathcal{G}_i} c_{1,k} p_{k}^2 + c_{2,k} p_{k} + c_{3,k}, \end{equation} where $\mathcal{G}_i=\mathcal{N}_i\,\cap\, \mathcal{G}$ denotes the local generator set. The power flow equations~\eqref{eq::pf1} and the slack constraints \eqref{eq::slack} are formulated as local nonlinear equality constraints $h_i:\mathbb{R}^{n_i}\rightarrow \mathbb{R}^{n_{hi}}$. } Summarizing the above, the \textsc{opf}\xspace Problem \eqref{eq::ACOPF} can be stated in affinely coupled separable form \begin{subequations} \label{eq:OPF} \begin{align} &\min_{x} && \sum_{i\in \mathcal{R}} f_i(x_i) \label{eq:SepProbObj} \\ &\;\;\text{s.t.}&&\sum_{i\in \mathcal{R}}A_i x_i=0\;\mid\lambda,\label{eq:consCnstr}\\[0.1cm] && &h_{i}(x_i) = 0 \quad \;\, \mid \kappa_i &\forall i \in \mathcal{R}, \label{eq:SepProbEqcnstr}\\[0.1cm] && &\underline{x}_i \leq x_i \leq \overline{x}_i\;\, \mid\eta_i &\forall i \in \mathcal{R}.\label{eq:SepProbIneqcnstr} \end{align} \end{subequations} Here, $x=[x_1^\top, \dots, x_R^\top]^\top \in \mathbb{R}^{n_x}$ stacks the local decision vectors $x_i$, and $\lambda,\;\kappa_i,\;\eta_i$ denote the dual variables (multipliers) of the respective constraints. At all auxiliary bus pairs \mbox{$(k,l) \in \mathcal{A}$}, we enforce consensus in the physical values \begin{equation} \label{eq::couplings} \theta_k = \theta_l,\;\;\; v_k = v_l,\;\;\; p_k = -p_l,\;\;\; q_k = -q_l, \end{equation} which leads to the affine consensus constraint \eqref{eq:consCnstr}. The box constraints~\eqref{eq:SepProbIneqcnstr} collect local bounds on active/reactive power injections and voltage magnitudes for all regions.
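To illustrate, the four rows of the consensus matrix corresponding to one auxiliary bus pair in \eqref{eq::couplings} can be constructed as follows; the contiguous per-bus ordering $[\theta_k, v_k, p_k, q_k]$ in the stacked decision vector and the global 0-based bus indices are assumptions made for this sketch:

```python
import numpy as np

def consensus_rows(n_x, idx_k, idx_l):
    """Four rows of the consensus matrix A for one auxiliary bus pair
    (k, l): A @ x = 0 then encodes theta_k - theta_l = 0, v_k - v_l = 0,
    p_k + p_l = 0 and q_k + q_l = 0 (sketch only)."""
    A = np.zeros((4, n_x))
    # angles and voltage magnitudes must match; power injections are opposite
    signs = [-1.0, -1.0, 1.0, 1.0]
    for r in range(4):
        A[r, 4 * idx_k + r] = 1.0
        A[r, 4 * idx_l + r] = signs[r]
    return A
```

Stacking such blocks over all pairs in $\mathcal{A}$ yields the full matrix $A=[A_1,\dots,A_R]$ of the affine constraint \eqref{eq:consCnstr}.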
\section{ALADIN-based Distributed OPF} \label{sec:ALADIN} We describe a variant of \textsc{aladin}\xspace for solving Problem \eqref{eq:OPF} in distributed fashion, cf. Algorithm~\ref{alg:ALADIN}. \textsc{aladin}\xspace consists of five steps: \begin{algorithm}[hbtp!] \small \caption{\textsc{aladin}\xspace-based Distributed \textsc{opf}\xspace} \textbf{Initialization:} Initial guess $(z^0,\lambda^0)$, choose $\Sigma_i,\rho^0,\mu^0,\epsilon$. \\ \textbf{Repeat:} \begin{enumerate} \item \comm{\textit{Parallelizable Step:} \label{step:parStep} Solve for each $i \in \mathcal{R}$ \begin{align} \begin{split} \label{eq::denlp} \underset{x_i\in[\underline x_i, \overline x_i]}{\min}& f_i(x_i) + (\lambda^k)^\top A_i x_i + \hspace{-0.1em} \frac{\rho^k}{2}\left\|x_i-z_i^k\right\|_{\Sigma_i}^2\\ \text{s.t.}\quad& h_i(x_i) = 0 \quad \mid \kappa_i^k \quad \end{split} \end{align} either \begin{enumerate}[label=\roman*)] \item exactly, obtaining $x_i^{\star{}k}$ and assigning $x_i^k:=x_i^{\star{}k}$ \label{step:exact}; or \item approximately, obtaining $\bar x_i^k$ and assigning $x_i^k:=\bar x_i^{k}$. \label{step:approx} \end{enumerate}} \item \textit{Termination Criterion:} If \begin{equation} \label{eq::stop} \left\|\sum_{i\in \mathcal{R}}A_ix^k_i\right\|\leq \epsilon \text{ and } \left\| x^k - z^k \right \|\leq \epsilon\;, \end{equation} return $x^\star = x^k$. \item \textit{Sensitivity Evaluations:} Compute and communicate local gradients~$g_i^k=\nabla f_i(x_i^k)$, Hessian approximations~{$B_i^k \approx \nabla^2\{f_i(x_i^k)+\kappa_i^\top h_i(x_i^k)\}$} and constraint Jacobians $C^k_i = \nabla h_i(x^k_i)$. 
\item \textit{Consensus Step:} Solve the coordination \textsc{qp}\xspace \begin{align} \notag &\underset{\Delta x,s}{\min}\;\;\sum_{i\in \mathcal{R}}\left\{\frac{1}{2}\Delta x_i^\top B^k_i\Delta x_i + {g_i^k}^\top \Delta x_i\right\} + (\lambda^k)^\top s + \frac{\mu^k}{2}\|s\|^2_2 \\ &\begin{aligned}\label{eq::conqp} \text{s.t.}\; &\quad \sum_{i\in \mathcal{R}}A_i(x^k_i+\Delta x_i) = s &&|\, \lambda^\mathrm{QP},\\[0.2cm] &\quad C^k_i \Delta x_i = 0 &&\forall i\in \mathcal{R},\\[0.2cm] &\quad (\Delta x_i)_j = 0 \quad j\in\mathbb{A}^k_i &&\forall i\in \mathcal{R}, \end{aligned} \end{align} \comm{obtaining $\Delta x^k$ and $\lambda^{\text{QP}}$ as the solution of \eqref{eq::conqp}}. \item \textit{Line Search:} Update primal and dual variables by \begin{eqnarray}\notag z^{k+1}&\leftarrow&z^k + \alpha^k_1(x^k-z^k) + \alpha_2^k\Delta x^k\;,\\[0.2cm]\notag \lambda^{k+1}&\leftarrow&\lambda^k + \alpha^k_3 (\lambda^\mathrm{QP}-\lambda^k), \end{eqnarray} with $\alpha^k_1,\alpha^k_2,\alpha^k_3$ from~\cite{Houska2016}. If the full step is accepted, i.e. $ \alpha_1^k=\alpha_2^k=\alpha_3^k=1, $ update $\rho^k$ and $\mu^k$ by \begin{align} \rho^{k+1}\;(\mu^{k+1}) = \begin{cases} r_\rho \rho^k\;(r_\mu \mu^k) &\text{if} \; \rho^k < \bar \rho\;\; (\mu^k < \bar \mu)\\ \nonumber \rho^k\;(\mu^k) &\text{otherwise} \end{cases} . \end{align} \end{enumerate} \label{alg:ALADIN} \end{algorithm} \begin{enumerate} \item \comm{Solve the decoupled \textsc{nlp}s\xspace~\eqref{eq::denlp} in parallel either \begin{enumerate}[label=\roman*)] \item exactly, which yields (modulo technical assumptions) global convergence guarantees and fast local convergence, cf. Theorem \ref{ConvTheoremALADIN}; or \item approximately, preserving fast local convergence, cf. Theorem \ref{the::localcon}. \end{enumerate}} \item If the solution satisfies the termination criterion~\eqref{eq::stop} ($\epsilon$ is chosen by the user), terminate with the solution from the local \textsc{nlp}s\xspace, $x^\star=x^k$.
\item If not, compute gradients~$g^k_i$, Hessian approximations $B_i^k$ and constraint Jacobians $C^k_i$. Note that these computations are fully parallelizable.\footnote{In case of derivative-based solvers, these sensitivities can be obtained from the local solvers for \eqref{eq::denlp}, avoiding explicit evaluation.}$^,$\footnote{\textsc{aladin}\xspace requires the Hessian approximations $B_i^k$ to be positive definite to ensure convergence \cite{Houska2016}. To ensure positive definiteness, we flip the sign of all negative eigenvalues of all $B_i^k$s and add a small positive constant to all zero eigenvalues.} \item Construct the consensus \textsc{qp}\xspace~\eqref{eq::conqp} based on local sensitivities and the active sets \[ \mathbb A_i^k =\{\;j\;\mid (x^k_i)_j = \underline{x}_i \text{ or } \overline{x}_i\;\}\, \] detected by the local \textsc{nlp}s\xspace. Note that there are no inequality constraints in the \textsc{qp}\xspace~\eqref{eq::conqp}. Thus, solving this problem is equivalent to solving a linear system of equations, which is a computationally cheap operation~\cite{Nocedal2006}. \item Apply the globalization strategy proposed in~\cite{Houska2016} to update $x^k$ and $\lambda^k$. In practice, full steps are often accepted and the line search can be omitted. Finally, update the parameters $\rho^k$ and $\mu^k$. \end{enumerate} Algorithm~\ref{alg:ALADIN} provides the technical details of \textsc{aladin}\xspace. We remark that instead of computing exact Hessians in Step~3), one can also use approximation techniques for $B_i^k$ based on previous gradient evaluations. Here, we use the blockwise and damped Broyden-Fletcher-Goldfarb-Shanno (\textsc{bfgs}\xspace) update. In contrast to standard \textsc{bfgs}\xspace, the damped version ensures positive definiteness of the $B_i^k$s to preserve the convergence properties of \textsc{aladin}\xspace, cf. \cite{Houska2016,Nocedal2006}.
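The eigenvalue-based positive definiteness fix described in the footnote above can be sketched as follows; the tolerance \texttt{delta} is an assumed implementation detail, not a value taken from the paper:

```python
import numpy as np

def regularize_hessian(B, delta=1e-6):
    """Make a symmetric Hessian approximation positive definite by
    flipping the sign of negative eigenvalues and lifting (near-)zero
    eigenvalues to a small positive constant delta (sketch)."""
    w, V = np.linalg.eigh(B)     # eigendecomposition of symmetric B
    w = np.abs(w)                # flip negative eigenvalues
    w[w < delta] = delta         # lift zero eigenvalues
    return V @ np.diag(w) @ V.T
```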
The \textsc{bfgs}\xspace update formula is given by \begin{equation} \label{eq:BFGS} B_i^{k+1} = B_i^k - \frac{B^k_i s_i^k s_i^{k\top} B_i^k}{s_i^{k\top} B_i^k s_i^k} + \frac{r_i^k r_i^{k\top}}{s_i^{k\top} r_i^k} \end{equation} with $s_i^k=x_i^{k+1}-x_i^k$ and $r_i^k=\theta^k(g_i^{k+1}(\lambda^{k+1})- g_i^k(\lambda^{k+1})) + (1-\theta^k)B_i^ks_i^k$, where $g_i^k(\lambda)=\rho^k(z_i^k-x_i^k)-A_i^\top\lambda$ are the gradients of the Lagrangians \cite{Nocedal2006}. The damping parameter $\theta^k$ is computed by the update rule given in \cite[p. 537]{Nocedal2006}. Notice that \textsc{bfgs}\xspace reduces the need for communication within \textsc{aladin}\xspace: instead of the full Hessian matrix, it suffices to communicate the gradients of the Lagrangians, and then update $B_i^k$ in the coordination Step 4). \comm{In contrast to \textsc{admm}\xspace, \textsc{aladin}\xspace provides convergence guarantees for non-convex optimization problems such as \textsc{ac}\xspace-\textsc{opf}\xspace. As we recall next, executing Step \ref{step:parStep}) \ref{step:exact} of \textsc{aladin}\xspace yields global convergence, i.e. convergence from arbitrary initialization.} \begin{assumption}[Problem data and \textsc{aladin}\xspace parameters] \label{ass:GlobalConvergence} \comm{ \begin{enumerate}[label=\roman*)] \item \label{item:ProofAss1} Problem~\eqref{eq:OPF} has a compact feasible set. Moreover, linear independence constraint qualification, strict complementarity conditions, as well as the second-order sufficient condition are satisfied at all local minimizers. \item \label{item:ProofAss3} For all $i\in\mathcal{R}$, the functions $f_i$ and $h_i$ are twice Lipschitz-continuously differentiable on the local feasible sets ${\mathcal{F}_i=\{ x_i \; | \; h_i(x_i) = 0,\; \underline{x} \leq x \leq \bar{x}\}}$. \item \label{item:ProofAss4} The matrices $\Sigma_i$ from \eqref{eq::denlp} are positive definite.
\item \label{item:ProofAss5} The parameters $\rho$ and $\mu$ are sufficiently large and the line search parameters are adjusted by the globalization strategy stated in \cite{Houska2016}. \hfill $\blacksquare$ \end{enumerate} } \end{assumption} \comm{Note that assumptions \ref{item:ProofAss1}-\ref{item:ProofAss3} are standard assumptions from optimization theory that are not very restrictive and are often satisfied in practice; see, e.g., \cite{Hauswirth18a} for a discussion of linear independence constraint qualifications in \textsc{opf}\xspace. Assumptions \ref{item:ProofAss4} and \ref{item:ProofAss5} can be satisfied by choosing appropriate parameters/matrices.} \begin{theorem}[Global convergence of \textsc{aladin}\xspace] ~\\ \label{ConvTheoremALADIN} \comm{If Assumption~\ref{ass:GlobalConvergence} holds, then Algorithm \ref{alg:ALADIN} executed with Step \ref{step:parStep}) \ref{step:exact} terminates for any user-specified tolerance $\epsilon > 0$ after a finite number of iterations. $\hfill \blacksquare$} \end{theorem} \comm{For the details of the proof we refer to \cite[Thm. 2]{Houska2016}. From Step 2) it follows that upon termination \textsc{aladin}\xspace returns a solution satisfying $\left\|\sum_{i\in \mathcal{R}}A_ix^k_i\right\|\leq \epsilon$. } Regarding the convergence rate, quadratic (respectively, superlinear for \textsc{bfgs}\xspace variants) convergence is shown for \textsc{aladin}\xspace in case the \textsc{nlp}s\xspace~\eqref{eq::denlp} are solved to optimality \cite{Houska2016}. However, in practice, due to finite precision arithmetic, numerical solvers do not return truly exact solutions. Next, we extend the results from~\cite{Houska2016} to cover this.
\begin{assumption}[Accuracy of local \textsc{nlp}\xspace solutions] \label{ass:TechnicalConditions}~\\ For all iterations $k\in \mathbb{N}$, the following holds: \comm{ \begin{enumerate}[label=\roman*)] \item \label{ass::inesol} The approximate solution $\bar x^k$ satisfies \begin{equation} \label{eq::inexact} \|\overline{x}^k - x^{k}\| \leq \zeta_1\|z^k - x^{k} \| \end{equation} with constant $\zeta_1>0$. \item \label{ass::rho} The penalty parameter $\rho^k>0$ in Problem~\eqref{eq::denlp} satisfies \begin{equation} \nabla^2 \{f_i(x_i^k) + {\kappa_i^k}^\top h_i(x_i^k) \} + \rho^k \Sigma_i \succ 0 \end{equation} for all $i=1,\dots,R$. \hfill $\blacksquare$ \end{enumerate}} \end{assumption} \comm{ Note that item~\ref{ass::inesol} of Assumption~\ref{ass:TechnicalConditions} can be satisfied, e.g., by choosing $\zeta_1=1$ and $\bar x^k = z^k$. In this case, \textsc{aladin}\xspace is equivalent to \textsc{sqp}\xspace as no local steps are computed. On the other hand, if we solve the local \textsc{nlp}\xspace{}s exactly, we obtain \textsc{aladin}\xspace in its pure form, cf. \cite{Houska2016}. From this perspective, approximating a minimizer of the \textsc{nlp}s\xspace yields an algorithm in-between \textsc{sqp}\xspace and (exact) \textsc{aladin}\xspace. Item~\ref{ass::rho} of Assumption~\ref{ass:TechnicalConditions} is not very restrictive as it can be satisfied by choosing $\rho^k$ sufficiently large. However, note that in case of approximate minimizers the global convergence result of Theorem~\ref{ConvTheoremALADIN} no longer applies.} \comm{ \begin{theorem}[Local quadratic convergence of \textsc{aladin}\xspace]~\\ \label{the::localcon} Let Assumption~\ref{ass:GlobalConvergence} hold and let $\rho^k>0$ and $\bar x^k$ satisfy Assumption~{\ref{ass:TechnicalConditions}}.
Suppose that Algorithm~\ref{alg:ALADIN}, executed with Step \ref{step:parStep}) \ref{step:approx}, \begin{itemize} \item is initialized with $(x^0,\lambda^0)$ close to $(x^\star,\lambda^\star)$; \item that Step~3) computes exact sensitivities $B_i^k=\nabla^2 \{f_i(x_i^k) + {\kappa_i^k}^\top h_i(x_i^k) \}$ and $C_i^k = \nabla h_i(x_i^k)$; \item and additionally, the update of $\mu^k$ in Step~5) satisfies \begin{equation} \label{eq::mu} \frac{1}{\mu^k}\leq \mathbf{O}(\|\overline{x}^k - x^\star\|)\;. \end{equation} \end{itemize} Then the iterates $(z^k,\lambda^k)$ converge locally to $(x^\star,\lambda^\star)$ at a quadratic rate. \hfill $\blacksquare$ \end{theorem}} The proof is given in Appendix~\ref{sec:LocConvProof}. We remark that \eqref{eq::mu} can be satisfied by choosing an appropriate update rule for $\mu^k$. \comm{\begin{remark}[Superlinear convergence for \textsc{aladin}\xspace-\textsc{bfgs}\xspace] \label{rema:Convergence} With minor modifications, the proof of Theorem \ref{the::localcon} can be extended to cover \textsc{aladin}\xspace-\textsc{bfgs}\xspace. In this case, one obtains a superlinear convergence rate provided that the Hessians and Jacobians converge to their optimal counterparts, i.e. $B_i^k \rightarrow \nabla^2\{f_i(x_i^\star)+\kappa_i^\top h_i(x_i^\star)\}$ and $C_i^k\rightarrow \nabla h_i(x_i^\star)$. \end{remark}} \section{Numerical Results} \label{sec:results} The presentation of our results is divided into three parts: We begin by showing considerable performance differences between \textsc{aladin}\xspace and \textsc{admm}\xspace for a motivating 5-bus example depicted in Fig.~\ref{fig:grid}. Moreover, we illustrate that \textsc{aladin}\xspace performs well for larger grids (30-bus, 57-bus) when inexact Hessians are used. Finally, we apply \textsc{aladin}\xspace to the 118-bus and 300-bus test cases, and compare our results to variants of \textsc{admm}\xspace published in the literature. All units are given in p.u. for a base power of 100\,MVA.
In all cases, we initialize with voltage magnitudes of $1$ p.u.; all other values are set to zero initially (flat start). The dual variables $\lambda$ are initialized with zero. We compare \textsc{aladin}\xspace and \textsc{admm}\xspace in terms of the number of iterations, as well as computation times and communication effort. Our implementation uses the CasADi toolbox \cite{Andersson2013b} running with MATLAB R2016a and IPOPT \cite{Waechter2006} as the solver for the local \textsc{nlp}s\xspace. The ``true'' minimizers $x^\star$ are obtained by solving problem \eqref{eq:OPF} with IPOPT centrally. \subsection{5-bus System with Line Limits} \begin{figure*}[h] \centering \adjustbox{trim={.07\width} {.0\height} {0.08\width} {.0\height},clip} {\includegraphics[width=1.18\linewidth]{ADMMcompNew.pdf}} \caption{Convergence of \textsc{aladin}\xspace (solid) and \textsc{admm}\xspace (dashed) for the 5-bus system with and without considering line limits.} \label{fig:Lcstr} \end{figure*} \begin{figure}[h] \centering \adjustbox{trim={.0\width} {.51\height} {0.08\width} {.05\height},clip} {\includegraphics[width=1\linewidth]{physVals2.pdf}} \caption{Power injections and selected limits (dashed) over the iteration index $k$ for the 5-bus system considering line limits.} \label{fig:physVals} \end{figure} Consider the 5-bus case with line limits as shown in Fig. \ref{fig:grid} in order to compare \textsc{aladin}\xspace and \textsc{admm}\xspace. We partition the grid into three regions in a way that is expected to be challenging for both distributed optimization algorithms. Specifically, there is a generation center in the west with cheap generators and no loads, which means that large amounts of power have to be transferred to the load centers located in the east. Moreover, line limits between these regions are active (between buses (1,\,2) and (4,\,5)).
Many works using \textsc{admm}\xspace for \textsc{opf}\xspace do not consider line limits \cite{Erseghe2014a,Erseghe2015} as they add additional nonlinear inequality constraints to the problems. The recently published work \cite{Guo2017} is one of the few that explicitly considers line limits. Here, they are considered as limits on the magnitude of the apparent power\footnote{\label{footn:lineLimits}\comm{In Algorithm \ref{alg:ALADIN}, these limits \eqref{eq:lineLimits} are considered by introducing additional decision variables $s_{kl}^l$ constrained by $s_{kl}^l = p_{kl}^2 + q_{kl}^2$ and $s_{kl}^l\leq |\bar s_{kl}|^2$, respectively.}} \begin{equation} \label{eq:lineLimits} p_{kl}^2+q_{kl}^2 \leq |\bar s_{kl}|^2, \end{equation} where \begin{align*} p_{kl}&= - v_k^2G_{kl} + v_{k}v_{l}(G_{kl}\cos(\theta_{k}-\theta_{l})+B_{kl}\sin(\theta_{k}-\theta_{l})), \\ q_{kl}&=\phantom{-} v_k^2B_{kl} - v_{k}v_{l}(B_{kl}\cos(\theta_{k}-\theta_{l})-G_{kl}\sin(\theta_{k}-\theta_{l})). \end{align*} Due to their non-convexity, these constraints are difficult to handle, especially when they are located on lines connecting regions. \comm{Applying \textsc{aladin}\xspace to the 5-bus system requires selecting the tuning parameters $\rho^k$ and $\mu^k$. Values for these parameters are determined by parameter sweeps for each grid, aiming for fast convergence. The results are shown in Table \ref{tab::prameters}. To obtain a similar scaling, the weighting matrices $\Sigma_i$ are chosen such that each diagonal entry is inversely proportional to the range of its corresponding decision variable. Therefore, entries corresponding to the power injections are set to 1; entries corresponding to voltage magnitudes and voltage angles are set to 100.} \comm{Fig. \ref{fig:physVals} shows active/reactive power injections and line flows $s_{kl}$ over the iteration index $k$ computed by \textsc{aladin}\xspace for the 5-bus system with line limits.
\textsc{aladin}\xspace reaches the final (and optimal) values in around 15 iterations and satisfies active/reactive power limits (dashed). } \comm{ Next, we compare the performance of \textsc{admm}\xspace and \textsc{aladin}\xspace in terms of the following convergence criteria:} \comm{ \begin{itemize} \item The consensus violation $\|Ax^k\|_\infty$ with $A = [ A_1,\dots,A_R ]$ indicates the maximum mismatch of voltages/powers at auxiliary buses. \item The distance to the minimizer $\|x^k-x^\star\|_\infty$ is the maximum distance of the current power/voltage iterates to their optimal values, where $x^\star$ is the ``true'' minimizer obtained by solving \eqref{eq:OPF} in centralized fashion. \item The inf-norm $\|r^k \|_{\infty}$ of the dual residual \begin{equation*} r^k = \sum_{i\in\mathcal{R}}\left\{\nabla f_i(x_i^k) + A_i^\top \lambda^k +\nabla h_i(x_i^k) \kappa_i^k + \eta_i^k\right\} \end{equation*} measures violation of the first-order optimality conditions. \item The suboptimality gap $f(x^k)-f(x^\star)$. \end{itemize} We remark that for \textsc{aladin}\xspace \emph{and} \textsc{admm}\xspace the generated iterates always satisfy the nonlinear equality/inequality constraints \eqref{eq:SepProbEqcnstr} and \eqref{eq:SepProbIneqcnstr} as they are explicitly considered in the local \textsc{nlp}s\xspace \eqref{eq::denlp}. Hence, it is sufficient to show the consensus violation to ensure satisfaction of the power flow equations and limits (feasibility). Optimality is indicated by the remaining criteria, i.e. the suboptimality gap and the distance to the minimizer. } Fig. \ref{fig:Lcstr} shows the convergence criteria for \textsc{aladin}\xspace and \textsc{admm}\xspace when applied to the 5-bus system in two settings: in the first setting, line limits are neglected, while in the second setting, apparent power limits of 240\,MVA and 180\,MVA are imposed on the lines (1,\,2) and (4,\,5), respectively.
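For illustration, the apparent-power limit \eqref{eq:lineLimits} can be checked directly from the branch flow expressions $p_{kl}$, $q_{kl}$ above; the following sketch uses illustrative names and returns a negative margin when the limit is satisfied:

```python
import numpy as np

def apparent_power_violation(v, theta, G, B, k, l, s_max):
    """Branch flows p_kl, q_kl and the margin p_kl^2 + q_kl^2 - s_max^2
    of the apparent-power limit (negative means satisfied; sketch)."""
    dth = theta[k] - theta[l]
    p_kl = -v[k] ** 2 * G[k, l] + v[k] * v[l] * (
        G[k, l] * np.cos(dth) + B[k, l] * np.sin(dth))
    q_kl = v[k] ** 2 * B[k, l] - v[k] * v[l] * (
        B[k, l] * np.cos(dth) - G[k, l] * np.sin(dth))
    return p_kl ** 2 + q_kl ** 2 - s_max ** 2
```

At a flat start both flow components vanish, so the margin equals $-|\bar s_{kl}|^2$.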
To enable a fair comparison, the penalty parameters $\rho$ for \textsc{admm}\xspace are chosen based on parameter sweeps aiming at fast convergence. Without line limits, \textsc{aladin}\xspace converges around 3--5 times faster than \textsc{admm}\xspace. However, if a slight loss of optimality and consensus accuracy is acceptable, usable solutions can be obtained via \textsc{admm}\xspace in around 50 iterations, assuming that underlying frequency controllers account for the remaining power mismatch. In case of active line limits, \textsc{aladin}\xspace takes around 30 iterations to converge to the exact solution, whereas \textsc{admm}\xspace requires around 1500 iterations to reach the same medium level of accuracy as above. Observe that \textsc{aladin}\xspace seems to converge at a quadratic rate, which is in line with Theorem~\ref{the::localcon}. For \textsc{admm}\xspace we expect at most a linear convergence rate (as this is the rate achieved by \textsc{admm}\xspace for convex problems), which is consistent with the slow convergence observed especially in case of binding line limits, cf. Fig. \ref{fig:Lcstr}. \subsection{30-bus and 57-bus with Inexact Hessians} \begin{figure*}[h] \centering \adjustbox{trim={.06\width} {.5\height} {0.08\width} {.03\height},clip} {\includegraphics[width=1.15\textwidth]{allCharBFGS.pdf}} \caption{Convergence behavior of \textsc{aladin}\xspace for the \textsc{ieee}\xspace 30-bus and 57-bus test cases using exact Hessians (solid) and inexact Hessians (dashed, here \textsc{bfgs}\xspace).} \label{fig:BFGS} \end{figure*} Next, we compare the performance of \textsc{aladin}\xspace with exact Hessians to \textsc{aladin}\xspace with inexact Hessians for larger grids. Specifically, we use approximations based on the \textsc{bfgs}\xspace formula~\eqref{eq:BFGS}. Inexact Hessians reduce the per-step communication effort, which is advantageous.
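For illustration, a generic damped \textsc{bfgs}\xspace update can be sketched as follows. This is a minimal example of the quasi-Newton idea on a toy quadratic problem and does not reproduce the exact variant of \eqref{eq:BFGS} used in Algorithm \ref{alg:ALADIN}; the damping constant and test problem are illustrative assumptions:

```python
import numpy as np

def bfgs_update(B, s, y, damping=0.2):
    """One damped BFGS update of the Hessian approximation B.

    s = x_new - x_old, y = difference of (Lagrangian) gradients.
    Powell damping keeps B positive definite for non-convex problems.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < damping * sBs:  # curvature too small: interpolate y towards B s
        theta = (1 - damping) * sBs / (sBs - sy)
        y = theta * y + (1 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

# Toy usage on f(x) = 0.5 x'Qx: B approximates the true curvature Q over time.
Q = np.diag([2.0, 10.0])
B = np.eye(2)
x = np.array([1.0, 1.0])
for _ in range(50):
    g = Q @ x
    x_new = x - np.linalg.solve(B, g)  # quasi-Newton step
    s, y = x_new - x, Q @ x_new - g
    if np.linalg.norm(s) < 1e-12:
        break
    B = bfgs_update(B, s, y)
    x = x_new
print(np.round(B, 2))
```

In the distributed context, the appeal of such updates is that only gradient vectors (instead of full Hessian matrices) need to be communicated.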
The grid partitionings employed for the considered \textsc{ieee} 30-bus and 57-bus test cases are taken from \cite{Erseghe2014a}, and listed in Table~\ref{tab::partitioning} in the Appendix for self-containment. To foster numerical convergence, we add a quadratic regularization for the reactive power injection to the local objective functions \begin{equation*} \tilde f_i(x_i) = f_i(x_i) + \gamma \sum_{k \in \mathcal{N}_i} q_k^2 \end{equation*} with a non-negative weight $\gamma$ in the rest of the paper. This regularization is motivated by the technical goal of keeping reactive power injections small. We choose $\gamma = 10\, \frac{\$}{hr\cdot(p.u.)^2}$, which is around 10\,\% of the quadratic coefficient $c_{1,k}$ of the active power injections. Fig. \ref{fig:BFGS} depicts the convergence behavior of \textsc{aladin}\xspace with exact and inexact Hessians.\footnote{The centralized minimizer $x^\star$ is computed here including the regularization in the objective of \eqref{eq:OPF}.} In both cases, \textsc{aladin}\xspace converges in less than 40 iterations to high accuracy (at least 10\textsuperscript{\,-\,4} for all convergence criteria). Furthermore, Fig.~\ref{fig:BFGS} shows that \textsc{aladin}\xspace with inexact Hessians needs only slightly more iterations than \textsc{aladin}\xspace with exact Hessians. One can observe that the convergence rate of \textsc{aladin}\xspace with inexact Hessians seems to be faster than linear. This observation is consistent with Theorem \ref{the::localcon}. \subsection{118-bus and 300-bus \textsc{aladin}\xspace vs.
\textsc{admm}\xspace} \label{sec:118and300} For the \textsc{ieee} 118-bus and 300-bus test cases, we compare \textsc{aladin}\xspace with exact Hessians to the \textsc{admm}\xspace results documented in the literature \cite{Erseghe2014a,Erseghe2015}, \comm{supposing that the authors thereof chose the parameters and their update rules optimally to facilitate fast convergence.} We also adopt the grid partitioning from \cite{Erseghe2014a} for the 118-bus case. Unfortunately, the partitioning for the 300-bus case is not given in \cite{Erseghe2014a}. Hence, we choose the partitioning given in Table~\ref{tab::partitioning} in the Appendix. \begin{figure*}[h] \centering \adjustbox{trim={.06\width} {.04\height} {0.08\width} {.48\height},clip} {\includegraphics[width=1.15\textwidth]{allCharCompNew.pdf}} \caption{\comm{Convergence behavior of \textsc{aladin}\xspace for the \textsc{ieee}\xspace 118-bus and 300-bus test cases.}} \label{fig:allADMMlit} \end{figure*} Using \textsc{aladin}\xspace, we obtain the numerical results for the 118-bus and 300-bus systems shown in Fig. \ref{fig:allADMMlit}. In both cases, \textsc{aladin}\xspace shows fast convergence to a high level of accuracy for all convergence criteria. In \cite{Erseghe2014a,Erseghe2015}, the main convergence criterion is taken to be the infinity norm of the primal gap $\|Ax^k\|_{\infty}< \epsilon$. Adopting this criterion allows a direct comparison between the \textsc{aladin}\xspace and \textsc{admm}\xspace results from \cite{Erseghe2014a,Erseghe2015}; for $\epsilon = 10^{-4}$ the results are summarized in Table~\ref{tab::comparison}.\footnote{We remark that primal feasibility does not ensure convergence to a minimizer, cf. \cite[Sec. 3.3.1]{Boyd2011} for an \textsc{admm}\xspace-specific discussion. This lack of optimality guarantees can be observed in the numerical results in \cite{Erseghe2014a,Erseghe2015}. However, in practice small optimality gaps are often accepted.
Nonetheless, one has to bear in mind that using $\| Ax^k\| \leq \epsilon$ does not imply convergence of the reactive power injections to the optimal ones, since the sensitivity of the objective function with respect to the reactive power is much smaller than its sensitivity to the active power. This can be verified by comparing the dual variables for active and reactive power injections, cf. \cite[Chap. 3.2.3]{Bertsekas1999}.} \textsc{aladin}\xspace converges around one order of magnitude faster, while achieving much higher accuracy in terms of the optimality gap and the dual residual. \begin{table} \centering \caption{Comparison of tuned \textsc{admm}\xspace from \cite{Erseghe2014a,Erseghe2015} and \textsc{aladin}\xspace employing $\|Ax^k\|_\infty\leq 10^{-4}$ as the sole convergence criterion. } \renewcommand{\arraystretch}{1.7} \begin{tabular}{c|cccc} \toprule & \multicolumn{2}{c}{\textsc{admm}\xspace} & \multicolumn{2}{c}{\textsc{aladin}\xspace} \\ \midrule Test Case & \#Iter & $\left |\frac{f(x)-f(x^\star)}{f(x^\star)}\right|$ & \#Iter & $\left |\frac{f(x)-f(x^\star)}{f(x^\star)}\right |$ \\ 30 & 110 & 0.14\phantom{0} \,\% & 6 & 4.50 $\cdot$ 10\textsuperscript{\,-\,3}\, \% \\ 57 & 144 & 0.002\,\% & 13 & 2.91 $\cdot$ 10\textsuperscript{\,-\,4}\, \% \\ 118 & 186 & 0.25\phantom{0}\,\% & 11 & 3.86 $\cdot$ 10\textsuperscript{\,-\,5}\, \% \\ 300 & 216 & 0.23\phantom{0}\,\% & 26 & 4.26 $\cdot$ 10\textsuperscript{\,-\,5}\, \% \\ \bottomrule \end{tabular} \label{tab::comparison} \end{table} \begin{table} \caption{Comparison of algorithmic properties.
The number of worst case forward (backward) communications in terms of floats is denoted by $\hat{N}_{\textsc{fw}}$ ($\hat{N}_{\textsc{bw}}$).} \centering \renewcommand{\arraystretch}{1.7} \begin{tabular}{p{1.5cm}ccc} \toprule & \textsc{admm}\xspace & \textsc{aladin}\xspace & \textsc{aladin}\xspace-\textsc{bfgs}\xspace \\ \midrule Convergence guarantee & no & yes & yes \\ Convergence rate & (linear) & quadratic & superlinear \\ $\hat{N}_{\textsc{fw}} $ &$ \displaystyle \sum_{i\in \mathcal{R}} n_i$& $ \displaystyle \sum_{i\in \mathcal{R}} \frac{n_{i} (2n_{i}+3)}{2}$ & $ \displaystyle \sum_{i\in \mathcal{R}} \frac{ n_i(n_{i}+4)}{2}$\\ $\hat{N}_{\textsc{bw}} $& $\displaystyle \sum_{i\in \mathcal{R}} n_i$ & $ \displaystyle \sum_{i\in \mathcal{R}} 2n_i$ & $ \displaystyle \sum_{i\in \mathcal{R}} 2n_i$ \\ \bottomrule \end{tabular} \label{tab::comparisonAll} \end{table} \section{Discussion---\textsc{aladin}\xspace vs. \textsc{admm}\xspace} \label{sec::ALvsADM} Our numerical results from Section~\ref{sec:results} using \textsc{aladin}\xspace seem promising. However, compared with \textsc{admm}\xspace, there is an increased per-step communication effort when employing \textsc{aladin}\xspace. Thus, we discuss how to trade off convergence behavior and convergence guarantees against per-step communication effort. \begin{table*}[t] \caption{Worst case computation times (in s) and worst case forward communication effort (in floats).
\label{tab::Comm&timing}} \scriptsize \centering \renewcommand{\arraystretch}{1.7} \begin{tabular}{p{0.5cm}|c|cccc|ccccc|ccccc} \toprule & & \multicolumn{4}{c}{\textsc{admm}\xspace } & \multicolumn{5}{c}{\textsc{aladin}\xspace }& \multicolumn{5}{c}{\textsc{aladin}\xspace-\textsc{bfgs}\xspace} \\ Test Case & $\hat{T}_{\text{\nlp}}$ & $\hat{T}_{\textsc{wc}}$ & $N_{\textsc{fw}}$ &$\hat{N}_{\textsc{fw}}$ & $N_{\textsc{fw}} \cdot \text{\#Iter}$ &$ \hat{T}_{\textsc{wc}} $& $\hat{T}_{\text{\qp}}$ & $ N_{\textsc{fw}}$ & $ \hat{N}_{\textsc{fw}}$&$ N_{\textsc{fw}} \cdot \text{\#Iter}$ & $ \hat{T}_{\textsc{wc}} $ & $\hat{T}_{\text{\qp}}$ & $ N_{\textsc{fw}}$ & $\hat{N}_{\textsc{fw}}$ & $N_{\textsc{fw}} \cdot \text{\#Iter}$ \\ \midrule 30 & 0.03 & 3.30 & 32&184 &3,520 & 0.2 & 0.004 & 2,213 & 8,916 &13,278& 0.28 & 0.005 & 1,012 & 4,688 & 8,096\\ 57& 0.04 & 5.76 & 96 &420 &13,824 &0.66& 0.011 & 5,527 & 23,814 &71,851 &0.98 &0.012 & 2,225 & 12,432& 42,275\\ 118& 0.05 & 9.30 & 52&576& 9,672 &0.76& 0.019 & 14,412 & 86,208 &158,532 & - & - & - & - & -\\ 300& 0.15 & 32.40 & 244&1,688 &52,704 & 14& 0.39 & 129,664 & 955,652 &3,371,264 & - & - & - & - & -\\ \bottomrule \end{tabular} \normalsize \end{table*} \subsubsection{Convergence Properties} \textsc{admm}\xspace and \textsc{aladin}\xspace exhibit differences in convergence guarantees. In case of \textsc{admm}\xspace, a linear convergence rate can be achieved for strictly convex problems under rather mild assumptions, such as Lipschitz continuity of the gradient and regularity assumptions on the affine constraints \cite{Deng2016}. In case of convex problems, sublinear convergence is achieved \cite{Deng2016}. For the non-convex case, convergence can only be guaranteed for special problem classes, and---to the best of the authors' knowledge---it is not clear whether \textsc{ac}\xspace-\textsc{opf}\xspace belongs to them \cite{Hong2016}.
However, this does not mean that \textsc{admm}\xspace does not work for non-convex \textsc{opf}\xspace. Yet, one has to be aware that \textsc{admm}\xspace does not necessarily converge to a local minimizer, or converge at all. Nevertheless, \textsc{admm}\xspace often works well in practice, but it shows slow practical convergence rates, especially if high accuracies are needed \cite{Boyd2011}. This is in accordance with the simulation results from Section~\ref{sec:results} and the results of \cite{Erseghe2014a,Erseghe2015}. \comm{As shown in Section \ref{sec:ALADIN}, convergence of \textsc{aladin}\xspace can be guaranteed without relying on convexity assumptions on the objective or the constraints (Theorem \ref{ConvTheoremALADIN}). Only mild assumptions on the penalty parameter as well as Lipschitz continuity are required. In case of Hessian approximation via \textsc{bfgs}\xspace updates, superlinear convergence can be achieved, while in case of exact Hessians quadratic convergence is guaranteed (Theorem \ref{the::localcon}). This comes at the cost of an increased per-step communication and the need for a central coordinating entity that has to solve the coupling \textsc{qp}\xspace. Furthermore, \textsc{aladin}\xspace requires a communication link to this coordinator.} \subsubsection{Worst Case Communication Effort} \comm{The main conceptual difference between \textsc{aladin}\xspace and \textsc{admm}\xspace is that \textsc{aladin}\xspace uses second-order information, whereas \textsc{admm}\xspace only communicates local primal solutions. More specifically, \textsc{aladin}\xspace relies on communicating local sensitivities and the active sets, i.e., $g_i$, $B_i$, $C_i$, and $\mathbb{A}_i$ for all regions $i\in\mathcal{R}$, cf. Step 3) of Algorithm \ref{alg:ALADIN}. The gradients $g_i$ are of dimension $n_i$, the (symmetric) Hessians $B_i$ of dimension $n_i \times n_i$, and the Jacobians of the power flow equations collected in $C_i$ are of dimension $(n_i/2) \times n_i$.
Recall that $n_i = 4 N_i$, where $N_i$ is the number of buses in region $i$, cf. Section~\ref{sec:ACOPF_separable}. Additionally, the vector of binaries indicating the active bounds $\mathbb{A}_i$, which is of dimension $\frac{3 n_i}{4}$, has to be communicated (bounds on power injections and voltages). Hence, the worst case forward communication need for \textsc{aladin}\xspace comprises $ \sum_{i\in \mathcal{R}} {n_{i} + \frac{n_{i} (n_{i}+1)}{2}+ \frac{n_{i}^2}{2}} = \sum_{i\in \mathcal{R}} \frac{n_i (2 n_i + 3)}{2} $ floats and $\sum_{i\in \mathcal{R}}\frac{3 n_i}{4}$ binaries. } \comm{The block-\textsc{bfgs}\xspace update described in Section~\ref{sec:ALADIN} reduces the total communication need as follows. Instead of having to communicate the Hessians $B_i$, which leads to the quadratic term $\frac{n_{i} (n_{i}+1)}{2}$, \textsc{bfgs}\xspace requires communicating only the $n_i$-dimensional gradients of the Lagrangian. Hence, the worst-case forward communication need reduces to $ \sum_{i\in \mathcal{R}} {n_{i} + n_i + \frac{n_{i}^2}{2}} = \sum_{i\in \mathcal{R}} \frac{n_i (n_i +4)}{2} $ floats and $\sum_{i\in \mathcal{R}}\frac{3 n_i}{4}$ binaries.} \comm{After solving \textsc{qp}\xspace \eqref{eq::conqp}, primal and dual steps for the consensus constraint are broadcast to the subproblems. The number of consensus constraints should typically be smaller than the number of decision variables, since otherwise the original problem might be infeasible. Hence, the number of Lagrange multipliers is upper-bounded by $\sum_{i \in \mathcal{R}}n_i$, and we obtain an upper bound for the backward communication effort of $ \sum_{i \in \mathcal{R}} 2 n_{i} $ floats.} For \textsc{admm}\xspace, only the minimizers of the local problems have to be communicated in both directions. As a result, we obtain an equal worst case forward and backward communication need of $ \sum_{i\in \mathcal{R}} n_{i} $ floats.
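The float counts above are easily tabulated per region. A small Python sketch reproducing the formulas for an illustrative three-region partition (the region bus counts below are hypothetical):

```python
# Worst case forward/backward communication (in floats) per the formulas above.
# Region bus counts N_i are illustrative; n_i = 4 * N_i variables per region.
N = [6, 9, 15]                       # hypothetical partition of a 30-bus grid
n = [4 * Ni for Ni in N]

fw_admm   = sum(n)                                   # local minimizers only
fw_aladin = sum(ni * (2 * ni + 3) // 2 for ni in n)  # g_i, Hessian, Jacobian
fw_bfgs   = sum(ni * (ni + 4) // 2 for ni in n)      # g_i, Lagrangian gradient, Jacobian
bw_admm   = sum(n)                                   # local minimizers back
bw_aladin = sum(2 * ni for ni in n)                  # primal and dual steps

print(fw_admm, fw_aladin, fw_bfgs, bw_admm, bw_aladin)
```

Note how the Hessian term makes the \textsc{aladin}\xspace count grow quadratically in $n_i$, while the \textsc{bfgs}\xspace variant removes this term in favor of one additional gradient.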
\comm{Table~\ref{tab::comparisonAll} summarizes the results of this section, comparing convergence properties, convergence rates, and communication effort in terms of floats for \textsc{admm}\xspace and both variants of \textsc{aladin}\xspace. Table~\ref{tab::comparisonAll} introduces the short-hand notations $\hat{N}_{\textsc{fw}}$ ($\hat{N}_{\textsc{bw}}$) for the worst case forward (backward) communication effort in terms of floats.} \begin{remark}[Floats vs. Binaries] Observe that communicating a binary value is much cheaper than communicating a float {(1~bit~vs.~32~or~64 bits)}. Hence, counting the floats is usually sufficient to approximately determine the communication effort. \end{remark} \subsubsection{Communication Effort in Practice}\label{sec:CommEffPrac} \comm{ In practice, the Hessian and Jacobian approximations often contain many structural zeros. If the central coordinator knows the sparsity pattern, these zeros do not have to be communicated. Table~\ref{tab::Comm&timing} compares the upper bounds derived above with the worst case per-step communication effort occurring in our simulations, counting the maximum number of non-zero floats over all iterations. One can observe that the communication effort in practice is approximately a factor of four smaller than the corresponding upper bounds. More precisely, in Table~\ref{tab::Comm&timing} we observe $ N_{\textsc{fw}} < \hat{N}_{\textsc{fw}}$, where $ N_{\textsc{fw}}$ is the forward communication effort in our simulations and $\hat{N}_{\textsc{fw}}$ is the upper bound. Furthermore, the communication overhead for \textsc{aladin}\xspace is larger compared with \textsc{admm}\xspace---both per step and in the total communication effort. The use of \textsc{bfgs}\xspace reduces the communication effort by at least a factor of two.
Generally, the reduction factor gained by \textsc{bfgs}\xspace grows with increasing problem size, since the number of Hessian entries grows quadratically with the number of decision variables. } \subsubsection{Worst Case Computation Time} \comm{ Next, we assess worst case computation times for \textsc{aladin}\xspace and \textsc{admm}\xspace. Note that structurally the local \textsc{nlp}s\xspace are the same for \textsc{aladin}\xspace and \textsc{admm}\xspace. Let $\hat{T}_{\text{\nlp}}$ denote the worst-case time to solve any of the local~\textsc{nlp}s\xspace in any iteration using \textsc{aladin}\xspace. For the coordination step, \textsc{aladin}\xspace requires additional time to solve the \textsc{qp}\xspace (denoted by $\hat{T}_{\text{\qp}}$), while we assume that the averaging time for \textsc{admm}\xspace is negligible. In order to enable a fair comparison, we introduce the worst case computation time as follows: \begin{equation} \label{eq:WorstCaseComputationTime} \hat{T}_{\textsc{wc}} {=} \begin{cases} \text{\#Iterations} \cdot \hat{T}_{\text{\nlp}}, & \text{for \textsc{admm}\xspace}, \\ \text{\#Iterations} \cdot (\hat{T}_{\text{\nlp}} + \hat{T}_{\text{\qp}}), & \text{for \textsc{aladin}\xspace}. \end{cases} \end{equation} We bound the time needed by \textsc{admm}\xspace to solve the local \textsc{nlp}s\xspace by the $\hat{T}_{\text{\nlp}}$ obtained via our numerical \textsc{aladin}\xspace experiments. This way, we intend to focus on the algorithmic differences between \textsc{aladin}\xspace and \textsc{admm}\xspace and not on the details of specific implementations.} \comm{ Hence, \textsc{aladin}\xspace needs more time per iteration, but---given its faster convergence, cf. Table \ref{tab::comparison}---\textsc{aladin}\xspace still outperforms \textsc{admm}\xspace in terms of the worst case computation time.
In fact, the total worst case computation time for \textsc{aladin}\xspace is at least a factor of two smaller compared with \textsc{admm}\xspace. Table~\ref{tab::Comm&timing} shows the worst case computation times for the test cases.} \section{Conclusion \& Outlook} This paper investigated the potential of applying the Augmented Lagrangian Alternating Direction Inexact Newton (\textsc{aladin}\xspace) method to distributed \textsc{ac}\xspace-\textsc{opf}\xspace problems. The presented numerical results for grids of different sizes illustrate the potential of \textsc{aladin}\xspace for \textsc{ac}\xspace-\textsc{opf}\xspace. In comparison with \textsc{admm}\xspace, \textsc{aladin}\xspace is able to reduce the number of iterations by at least one order of magnitude. This comes at the cost of an increased per-step communication effort, which can be reduced by using inexact Hessians, for example via \textsc{bfgs}\xspace updates. Doing so slightly increases the number of iterations, but \textsc{aladin}\xspace remains faster and more accurate than \textsc{admm}\xspace. While the present paper focused primarily on comparing \textsc{aladin}\xspace with \textsc{admm}\xspace, a detailed comparison with other distributed schemes will be of interest. Moreover, future work will consider multi-stage \textsc{opf}\xspace problems including storage and generator ramp constraints. From an algorithmic and communication point of view, it seems promising to reduce the communication effort even further, e.g., by formulating the coordination \textsc{qp}\xspace in the coupling variables only. \comm{The development of improved (distributed) line search strategies and tests on larger grids, including sensitivity analyses with respect to grid topology and load patterns, are subject of ongoing and future work.} \newpage
\section{Introduction} With applications to fields such as agriculture~\cite{dale2013}, ecology~\cite{wang2010}, mining~\cite{van2012}, forestry~\cite{ghiyamat2010}, urban planning~\cite{wentz2014}, defense~\cite{yuen2010}, and space exploration~\cite{pilorget2014}, hyperspectral imaging is a powerful remote sensing modality for studying the chemical properties of scene materials. Hyperspectral imaging, also known as imaging spectroscopy, captures the reflected or emitted electromagnetic energy from a scene over hundreds of narrow, contiguous spectral bands, from visible to infrared wavelengths~\cite{eismann2012}. Each pixel in a hyperspectral image is composed of a vector of hundreds of elements measuring the reflected or emitted energy as a function of wavelength, known as the spectrum. Hence, a hyperspectral image can be interpreted as a three-dimensional data structure with two spatial axes, carrying the information about the location of objects, and one spectral axis, carrying the information about the objects' chemical composition. The spectrum captures the chemical information because the interaction between light at different wavelengths and the material is governed by the material's atomic or molecular structure. Hyperspectral sensors mounted on aircraft and satellites can image and map the land cover and land use, detect and locate objects, or characterize the physical properties of materials over a large geographical area. Since hyperspectral images contain hundreds of bands, not just three bands as in color photography, they are rarely analyzed by visual inspection; instead, algorithms are developed to extract meaningful information from the images. Machine learning and pattern recognition based methods have been very successful for this purpose, as they are able to automatically learn the relationship between the spectrum captured at each pixel of the image and the information that is desired to be extracted.
They are also more robust in handling the noise and uncertainties in the measurements than traditional methods, such as manually designed normalized indices and physics-based models. The remote sensing community has shown a great deal of interest in machine learning recently. Many journals have published special issues on machine learning for remote sensing~\cite{tuia2014_sp,chi2015_sp,alavi2016_sp,camps2016_sp}, numerous articles have been published on the rise of machine learning in remote sensing~\cite{valls2009_intro,lary2016_intro}, and all of the winning methods of the recent annual remote sensing GRSS data fusion competitions~\cite{debes2014_df,liao2015_df,moser2015_df,tuia2016_df} and the top performing methods on the ISPRS benchmark tests~\cite{ISPRSBenchmarkServer} have been based on machine learning. The hyperspectral remote sensing community has been particularly active in this field and has produced a great number of new methods. This survey paper aims to provide a broad coverage of both the hyperspectral image analysis tasks and the machine learning algorithms, unlike previous surveys and tutorials, which have either focused on a single task~\cite{bioucas2012,lu2007,manolakis2002,valls2014,cheng2016survey} or on a particular machine learning algorithm~\cite{valls2016,zhang2016,mountrakis2011,belgiu2016random}. All the methods reviewed here were published in peer-reviewed journals. The reviewed methods are able to analyze both radiance and reflectance images, unless otherwise stated. The hyperspectral data analysis tasks are categorized as land cover classification~\cite{valls2014}, target detection~\cite{nasrabadi2014}, unmixing~\cite{bioucas2012}, and physical parameter estimation~\cite{treitz1999}.
The machine learning algorithms covered are Gaussian models~\cite{tong2012}, linear regression~\cite{montgomery2015}, logistic regression~\cite{menard2002}, support vector machines~\cite{scholkopf2002}, Gaussian mixture models~\cite{bilmes1998}, latent linear models~\cite{jolliffe2014_llm}, sparse linear models~\cite{mairal2014}, ensemble learning~\cite{rokach2010}, directed graphical models~\cite{wainwright2008}, undirected graphical models~\cite{blake2011}, clustering~\cite{jain2010}, Gaussian processes~\cite{williams2006}, Dirichlet processes~\cite{teh2011}, and deep learning~\cite{bengio2013}. The main contributions of this paper are: \begin{enumerate*}[label=\itshape\alph*\upshape)] \item an extensive review of recently published hyperspectral analysis methods, \item a categorization of each method by remote sensing task and machine learning algorithm (which is neatly summarized in Table~\ref{tab:summary}), and \item an exploration of current trends and problems along with future directions. \end{enumerate*} This paper is organized as follows. Section~\ref{dataanalysis_task_section} discusses the various types of analysis that can be performed on hyperspectral data, Section~\ref{machine_learning_section} provides a brief background on common machine learning techniques and terminology, Section~\ref{challenges_section} explains the problem of the high dimensionality of hyperspectral data, and Section~\ref{section_algo} reviews various machine learning-based hyperspectral image analysis methods found in the recent literature. Section~\ref{section_open_issues} discusses open challenges in the field of hyperspectral data analysis, and Section \ref{section_conclusion} summarizes the paper.
\section{Taxonomy of hyperspectral data analysis tasks} \label{dataanalysis_task_section} \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{images/image_analysis_tasks_image.pdf} \caption{Hyperspectral image analysis tasks.\protect\footnotemark} \label{fig:hyperspectral_tasks_image} \end{figure} \footnotetext{The image is obtained from \cite{NEON_data}.} We discuss methods developed for reflective hyperspectral remote sensing in this survey. The reflective regime contains the visible, the near infrared, and the shortwave infrared wavelengths (\SI{350}{\nano\metre} to \SI{2500}{\nano\metre}), and the radiance reaching the sensor is dominated by the solar energy reflected off the objects in the scene. The material property that determines the magnitude of the reflected radiance is the directional reflectance of the object. However, the radiance reaching the sensor also contains contributions due to scattering from the atmosphere, which can be removed using atmospheric compensation algorithms~\cite{gao2009} to estimate the surface reflectance. Hence, the pixels in a reflective hyperspectral image are measured in radiance or reflectance. Reflectance images are generally preferred for hyperspectral image analysis tasks because reflectance is a surface property, and they tend to produce better results since the atmospheric interference is reduced. The majority of the algorithms discussed in this paper are agnostic to the units (radiance or reflectance) in which the image is measured. However, many methods require the image and the ground truth to be converted to the same units before application. The data analysis tasks for reflective hyperspectral images can be divided into four distinct groups: land cover classification, target detection, spectral unmixing, and physical parameter estimation, as shown in Figure~\ref{fig:hyperspectral_tasks_image}. These are further divided into sub-tasks.
\subsection{Land cover classification} \label{land_cover_classification_subsection} Land cover classification~\cite{valls2014}, also called land cover mapping and land cover segmentation, is the process of identifying the material under each pixel of a hyperspectral image. The goal is to create a map showing how different materials are distributed over the geographical area imaged by the hyperspectral sensor. Common applications of land cover mapping are plant species classification~\cite{dalponte2013}, urban scene classification~\cite{dell2004}, mineral identification~\cite{murphy2012}, and change analysis~\cite{pu2008_change1}. The main advantage of using hyperspectral images to produce land cover maps is that they allow for the discrimination of land covers into finer classes than other modalities, such as multispectral images, because hyperspectral images capture more information about the chemistry of the materials. Many land cover mapping methods require prior knowledge of the types of materials present in the scene along with examples of the spectra belonging to them. This information is generally provided from the image pixels by an expert, collected using a spectrometer during a field campaign in the study area, or adapted from a third-party spectral library. However, there are also many land cover mapping techniques that require no prior information about the materials in the scene. Change detection~\cite{hussain2013} is the task of determining how the land cover and the land use of an area have changed over time by using hyperspectral images of that area taken at different times. Applications of change detection include urban growth mapping~\cite{nemmour2006}, forest monitoring~\cite{desclee2006}, wetland degradation study~\cite{schmid2004}, invasive vegetation growth monitoring~\cite{pu2008_change1}, and post-fire vegetation regeneration mapping~\cite{riano2002}.
Often, a change detection study is performed by comparing a set of land cover maps or fractional land cover maps of an area, generated by applying a land cover classification or unmixing algorithm to georegistered hyperspectral images of the area taken at different times~\cite{pu2008_change1,pu2008_change2,mucher2000}. \subsection{Target detection} \label{target_detection_subsection} Target detection~\cite{manolakis2002} is the task of finding and localizing target objects in a hyperspectral image, given a reference spectrum of the object. The target occurs very sparsely in the image, and can be composed of a few pixels or even be smaller than a single pixel. Targets smaller than the size of a pixel are called sub-pixel targets. The reference spectrum is generally obtained from a spectral library. Generally, only one or a few samples of the target's reflectance spectrum are available. Anomaly detection~\cite{matteoli2010} is a related task with the objective of labeling anomalous objects in a hyperspectral image, without prior knowledge of the object's spectrum. The size of the anomalous objects can also be at sub-pixel scale. Target and anomaly detection are widely used for reconnaissance and surveillance, and also in other areas such as the detection of particular species in agriculture and rare minerals in geology~\cite{chang2013,dehnavi2017,makki2017}. \subsection{Spectral unmixing} \label{unmixing_subsection} The energy captured by a pixel of a hyperspectral sensor is rarely reflected from a single surface of a single material. In airborne and space-borne imaging, the instantaneous field of view of a pixel (the area covered by a pixel) on the ground is on the order of meters, and it is highly likely that this area is covered by more than one material. For example, when imaging agricultural land, the area under the pixel may contain vegetation and bare soil. Therefore, the measured spectrum is a combination of the spectra of different materials in the scene.
This can be modeled as a linear mixture of spectra, simply called linear mixing. Each material reflects energy in proportion to its coverage of the pixel area. Hence, the spectrum observed at the sensor is a linear combination of the spectra of the individual materials, weighted by their areal coverage. However, due to multiple scatterings of the light in the scene, the observed spectrum is rarely a linear combination but, in fact, a nonlinear combination~\cite{heylen2014}. Two common types of non-linear mixing models are bilinear mixing and intimate mixing. Bilinear mixing occurs when there are multiple reflections of the incident light on different materials in the scene. For instance, in forests the energy from the sun could get reflected from a leaf onto bare soil and then get reflected to the sensor. Intimate mixing occurs in fine mixtures, such as minerals, due to multiple scatterings from the particles in the mixture. Hyperspectral unmixing is the process of recovering the proportions of pure materials (called abundances) at each pixel of the image. The ``pure'' material spectra are called endmembers. The endmembers present in a scene may be known a priori, obtained from the image using an endmember extraction algorithm~\cite{chang2006,zortea2009}, or jointly estimated with the abundances. The methods that require endmembers to be supplied are referred to as supervised unmixing, and the methods that estimate endmembers simultaneously with the abundances are referred to as unsupervised. Applications of hyperspectral unmixing include mapping of green vegetation, non-photosynthetic vegetation and soil cover~\cite{meyer2015}, mineral exploration~\cite{rogge2014}, urbanization study~\cite{michishita2012}, fire disaster severity study~\cite{robichaud2007}, and water quality mapping~\cite{olmanson2013}.
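In compact form, the linear mixing model described above reads $\mathbf{y} = \mathbf{E}\mathbf{a} + \mathbf{n}$, where the columns of $\mathbf{E}$ are the endmember spectra, $\mathbf{a}$ holds the abundances, and $\mathbf{n}$ is noise. A minimal Python sketch of supervised unmixing on synthetic, purely illustrative spectra (the endmember matrix and abundances below are invented for the example; a full method such as fully constrained least squares would additionally enforce nonnegativity and sum-to-one):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic endmember matrix E: each column is a "pure" material spectrum
# (bands x endmembers); values are illustrative, not real reflectances.
bands, materials = 50, 3
E = np.abs(rng.normal(0.5, 0.2, size=(bands, materials)))

# True abundances: nonnegative areal coverage fractions summing to one.
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + rng.normal(0, 1e-3, size=bands)  # observed mixed pixel

# Unconstrained least-squares abundance estimate.
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
print(np.round(a_hat, 2))
```

With low noise the estimate recovers the areal fractions closely; in practice the abundance constraints matter precisely when noise and endmember variability are significant.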
\subsection{Physical parameter estimation}
\label{physical_parameter_estimation_subsection}
Physical (or biophysical) parameter estimation is the process of predicting material properties, such as the contents of chemicals or the granularity and density of particles, from reflectance spectra. The chemical or mineral composition of the material is manifested as spectral absorption features in the reflectance spectra. The depth and the width of the absorption features are correlated with the amount or concentration of the corresponding constituents. Hence, regression models can be applied to relate spectra with the parameters of interest. Some examples of applications of physical parameter estimation are the prediction of leaf biochemistry~\cite{zhao2013_intro}, sand and snow grain size~\cite{painter2003,ghrefat2007}, vegetation biomass and structural parameters~\cite{cho2007,verrelst2012}, plant water stress~\cite{suarez2008}, and soil nutrients~\cite{anne2014}.

\section{Machine learning approaches}
\label{machine_learning_section}
Machine learning algorithms attempt to predict variables of interest by learning a model from data. This section provides a brief background on machine learning techniques and terminology. As general references, the books by Kevin Murphy~\cite{murphy2012_book} and Christopher Bishop~\cite{bishop2006} provide complete and detailed coverage of machine learning techniques.

\subsection{Types of learning}
Based on the type of learning, machine learning methods can be broadly categorized into five groups: supervised learning, unsupervised learning, semi-supervised learning, active learning, and transfer learning. In supervised learning~\cite{hastie2009supervised}, the relationship between the input and the output variables is established using a set of labeled examples, i.e., examples for which the corresponding output variable values are known.
The problem is called regression if the output variable is real-valued and classification if the output variable is discrete. Unsupervised learning~\cite{hastie2009unsupervised} discovers the structure or the characteristics of the input data using unlabeled examples (examples for which the corresponding output values are unavailable). For instance, k-means is an unsupervised learning algorithm that clusters the input data into homogeneous groups. Principal component analysis (PCA) is another unsupervised learning algorithm that can be used to find an uncorrelated, linear, low-dimensional representation of the input data. Semi-supervised learning~\cite{zhu2009} utilizes unlabeled data along with labeled data to build the relationship between the input and the output variables. The unlabeled examples are used to learn the structure of the input variables, so that this information can be exploited to better learn the input-output relationship from the labeled data. Active learning~\cite{settles2010} iteratively selects examples from the unlabeled data for manual labeling and adds them to the labeled training set. The selected examples are the ones deemed most important for improving the input-output predictive performance. This cycle is repeated until the model exhibits the desired performance. In this way, the goal of active learning is to produce results similar to supervised or semi-supervised learning methods with fewer training examples. Transfer learning~\cite{pan2010} utilizes the information learned from one problem to solve another problem. The tasks (output variables), the domains (input variables), or both can be different between the source and the destination problems. Hence, compared to traditional learning, transfer learning allows the task, the domain, or their distributions to be different during training and testing. Domain adaptation is a subset of transfer learning in which the source and the destination have different domains.
Similarly, multitask learning is a type of transfer learning where multiple related tasks are learned simultaneously with the objective of improving the performance on all tasks.

\subsection{Non-probabilistic and probabilistic models}
\subsubsection{Non-probabilistic}
Non-probabilistic models produce point estimates of the output and do not model the probability distribution of the output. These models have a decision or a regression function that estimates the value of the output based on the value of the input. These functions have free parameters which are estimated by minimizing a cost function during training, with the goal of learning the input-output relationship. Certain penalties might be enforced on the possible values of the parameters via regularization, to control the complexity of the model or to encourage certain properties in the solution.

\subsubsection{Probabilistic}
Probabilistic models infer the probability distribution of the output. Generative probabilistic models consider both the input and the output as random variables, and model a joint distribution of the input and the output variables. In contrast, discriminative probabilistic models consider the input to be deterministic and the output to be random, and model the distribution of the output variables as a function of the input variables, i.e., they model the conditional distribution of the output given the input. A generative model learns the process by which the input and the output are generated, while a discriminative model only learns how to predict the output when the input is given. In probabilistic terms, a generative model learns $p(x,y)=p(y|x)p(x)$ while a discriminative model only learns $p(y|x)$, where $x$ and $y$ are the input and the output, respectively.
Generative models have the advantage of being able to generate samples of the data; however, discriminative models typically perform better than generative models for classification and regression, as learning a generative model requires a larger number of samples (because it needs to model $p(x)$ along with $p(y|x)$ to learn $p(x,y)$). The parameters in probabilistic models can also be considered random variables, whose point values or distributions are to be inferred from the data. The parameters can be learned by maximum likelihood estimation, maximum a posteriori estimation, or Bayesian inference.
\begin{itemize}
\item Maximum likelihood (ML) estimation makes point estimates for the parameters by maximizing the likelihood (probability) of observing the data given the parameters ($p(\mathcal{D}|\theta)$, where $\theta$ represents the parameters and $\mathcal{D}$ the data). Maximum a posteriori (MAP) estimation finds point estimates for the parameters by maximizing the probability of the parameters given the data (the posterior probability of the parameters, $p(\theta|\mathcal{D})$). In non-probabilistic settings, ML estimation is equivalent to minimizing a cost function, and MAP estimation is equivalent to minimizing a cost function with regularization.
\item Bayesian inference finds the probability distribution of the parameters using the Bayes theorem, rather than just making point estimates. Exact Bayesian inference is intractable for many models, so different approximate inference techniques, such as the Laplace approximation, variational inference, and Markov chain Monte Carlo sampling, have been developed. The main advantage of Bayesian inference over ML and MAP estimates is that Bayesian inference can properly model the prior beliefs about the model and handle uncertainties in the data and the model.
\end{itemize}

\subsection{Bias and variance}
The error in supervised learning models can generally be attributed to two sources: bias and variance.
Bias is the error resulting from the model making wrong assumptions about the data. For instance, if a linear equation is used to model data whose input-output relationship is quadratic, the model will underfit and have high bias. High bias is characterized by high training error and high generalization (or testing) error. Underfitting occurs when the complexity of the model (the space of all the functions that the model can represent) is insufficient to represent the data. On the other hand, variance is the error resulting from the sensitivity of the model to small changes in the training data. It occurs when the model tries to perfectly fit all the training data points, rather than generalizing the trend. High variance arises when a model overfits the data, such that it has low training error but high generalization error. Overfitting typically occurs in complex models with a large number of parameters when the amount of training data is small.

\subsection{Parametric and non-parametric models}
Models with a fixed number of parameters have fixed complexity and are called parametric models. These models will underfit if the data complexity is greater than the model's complexity. For instance, a linear regression model, which has a fixed number of parameters (equal to the number of input features), will have high bias if fitted on quadratic data. In contrast, a non-parametric model can increase its number of parameters (and hence its complexity) with the available training data. An example of a non-parametric model is the nearest neighbor classifier. In nearest neighbor classification, the predicted class of a test sample is the class of the most similar training sample. In this model, the training data themselves are the parameters. Hence, increasing the number of training samples increases the number of parameters and the complexity of the decision boundary that can be modeled.
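The non-parametric idea can be made concrete with a minimal 1-nearest-neighbor classifier; the two-band training "spectra" below are toy values invented for illustration. Note that the stored training samples themselves play the role of the parameters.

```python
import numpy as np

def nn_classify(x, train_x, train_y):
    """1-nearest-neighbor: predict the label of the closest training
    sample. Model capacity grows with the training set size."""
    dists = np.linalg.norm(train_x - x, axis=1)  # Euclidean distance to every training sample
    return train_y[np.argmin(dists)]

# Toy 2-band "spectra" from two classes
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]])
train_y = np.array([0, 0, 1, 1])
label = nn_classify(np.array([0.2, 0.1]), train_x, train_y)
```

Adding more training samples here directly enlarges the "model": every new sample can reshape the decision boundary, which is what makes the method non-parametric.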
\subsection{Model selection and performance evaluation}
Model selection is the process of choosing the best model for a task from a set of candidates. Performance on the training data is not a good metric for choosing the best model, as it does not capture generalization performance. The goal of learning is to accurately make predictions on new data, not on the training data. Therefore, models are evaluated on a separate set of independent samples called the validation set. In model selection, the set of candidate models do not have to be entirely different algorithms, but could be the same algorithm under different hyperparameter settings. Hyperparameters are the variables that control the properties of the model and are typically set before training. For instance, the number of layers in a neural network and the number of clusters in the k-means algorithm are hyperparameters. When there is not enough data to build separate training and validation sets, the k-fold cross-validation technique can be applied. In this technique, the whole dataset is randomly divided into k disjoint subsets (folds), and one subset is used for validation while the remaining k-1 are used for training. This process is repeated k-1 more times until each subset has been chosen once for validation. Then, the results from all the folds are accumulated. In the extreme case, the value of k could be set as high as the number of training examples to obtain leave-one-out cross-validation, which is useful when the number of training examples is very small. If the validation set is used to tune the hyperparameters of a model, the performance on the validation set cannot be used as a proxy for the generalization performance of the method, because the hyperparameters were fitted to maximize performance on the validation set. In this case, a third independent, separate set of samples, called the testing set, should be used for performance evaluation to obtain an unbiased estimate of the generalization performance.
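The k-fold procedure described above can be sketched in a few lines of Python. The mean-predictor and squared-error score below are placeholder choices for illustration only; \texttt{fit} and \texttt{score} stand in for any model and metric.

```python
import numpy as np

def cross_validate(x, y, k, fit, score, seed=0):
    """k-fold cross-validation: each fold serves once as the
    validation set while the remaining k-1 folds train the model;
    the per-fold scores are accumulated by averaging."""
    idx = np.random.default_rng(seed).permutation(len(x))
    folds = np.array_split(idx, k)        # k disjoint subsets
    scores = []
    for i in range(k):
        val = folds[i]                    # current validation fold
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[trn], y[trn])
        scores.append(score(model, x[val], y[val]))
    return float(np.mean(scores))

fit_mean = lambda x, y: y.mean()                      # "model" = mean of training outputs
mse = lambda m, x, y: float(np.mean((y - m) ** 2))    # squared-error score
x = np.arange(10, dtype=float)
y = np.full(10, 3.0)                                  # constant target
err = cross_validate(x, y, k=5, fit=fit_mean, score=mse)
```

Setting \texttt{k} equal to \texttt{len(x)} turns the same routine into leave-one-out cross-validation.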
\section{High dimensionality of hyperspectral datasets}
\label{challenges_section}
Hyperspectral datasets are high dimensional and suffer from scarcity of ground truth data. The dimensionality of spectra is equal to the number of bands, with each band representing a dimension in the feature space. Typically, there are hundreds of bands in a hyperspectral image, and consequently the dimensionality of spectral data is in the hundreds. Additionally, most hyperspectral datasets come with very little ground truth information. This is due to the difficulties in collecting ground truth measurements, as it may be necessary to travel large distances from one location to another to obtain them. Moreover, for unmixing and physical parameter prediction tasks, the process of determining the ground truth chemical contents of samples in the laboratory is expensive and time consuming, limiting the amount of ground truth data. This unfortunate combination of high dimensional input spectra and a low number of labeled examples causes model overfitting and low generalization performance in the algorithms. This is referred to as the curse of dimensionality or the Hughes phenomenon. Since collecting more training examples is often infeasible, one possible solution is to reduce the dimensionality of the input spectra while preserving the information content. This is possible because the information content of a hyperspectral image lies in a lower dimensional subspace, due to the high correlations between the reflectances of neighboring bands. Consequently, dimensionality reduction techniques such as feature extraction~\cite{hasanlou2012} (transformations that generate a lower dimensional spectral representation) and band or feature selection~\cite{bajcsy2004} (methods that select a subset of the most significant bands) are commonly used as pre-processing for hyperspectral data analysis.
The principal component analysis~\cite{jolliffe2014_llm} and stepwise regression are examples of commonly used feature extraction and band selection techniques, respectively. An open challenge in hyperspectral algorithm development is designing methods that can efficiently exploit the underlying low dimensional nature of hyperspectral datasets. Similar to the correlations across the spectral dimension, there are also spatial dependencies between neighboring pixels in a hyperspectral image. This is due to the fact that material properties in a natural scene vary smoothly in space, and the presence of a material can increase or decrease the likelihood of the occurrence of another material in its vicinity. The spatial information can be exploited to build more robust models, as seen in the success of spatial-spectral classification~\cite{fauvel2013} and unmixing algorithms~\cite{shi2014}.

\section{Machine learning for hyperspectral analysis}
\label{section_algo}
In this Section, we review recently published machine learning based hyperspectral image analysis methods. Each Subsection discusses methods that utilize a particular type of machine learning algorithm. The categorization of the machine learning algorithms is loosely based on the one used in Kevin Murphy's book~\cite{murphy2012_book}. Within each Subsection, we divide the methods by the type of hyperspectral data analysis task they solve.

\subsection{Gaussian models}
Multivariate Gaussian models are the basis for most classical algorithms for land cover classification and target detection. A popular hyperspectral land cover classification algorithm is quadratic discriminant analysis, also known as the Gaussian maximum likelihood classifier or simply the maximum likelihood classifier~\cite{dalponte2013}.
It is a generative model in which the class-conditional distribution of the data is assumed to be a multivariate Gaussian distribution, with the mean vectors and the covariance matrices estimated using maximum likelihood. A special case where all of the class covariance matrices are assumed to be the same is called linear discriminant analysis~\cite{bandos2009,li2011_kernel}. Gaussian models have also been extensively used in hyperspectral anomaly and target detection. The Mahalanobis distance detector~\cite{chang2002} models the pixel values of a hyperspectral image using a multivariate Gaussian, and labels the pixels having low likelihood under this distribution as anomalies. The Reed-Xiaoli (RX) detector~\cite{chang2002,matteoli2014} extends this by modeling only the neighborhood around the test pixel with a Gaussian distribution, rather than the entire image. Common target detection algorithms, such as the spectral matched filter and the adaptive cosine detector~\cite{manolakis2003,truslow2014}, also assume Gaussian distributions for the target and the background pixels. Gaussian models can also be found as components in more advanced algorithms. For instance, the classification method by Persello et al.~\cite{persello2012}, which performs both active learning and transfer learning, utilizes Gaussian models. In this method, the class probabilities of the data were modeled by Gaussian distributions, and query functions defined over the class probabilities were used to iteratively remove examples belonging to the source dataset from the training set and add examples belonging to the target dataset to the training set.

\subsection{Linear regression}
Linear regression is a widely used method for hyperspectral data analysis. It has been applied to physical parameter estimation and unmixing problems.
Linear regression is a supervised method that learns a linear relationship between a set of real-valued input variables and an output variable by modeling the output variable as a weighted sum of the input variables plus a constant. In physical parameter estimation, it is used to relate the parameter of interest with the spectral reflectance values or with features derived from the spectra~\cite{wang2008}. Some of the common spectral features used are spectral derivatives~\cite{treitz1999}, tied spectra~\cite{kokaly2009}, and continuum removed spectra~\cite{kokaly1999}. Most of the studies use the step-wise regression technique to select bands that have a higher correlation with the parameter of interest. In step-wise regression, bands are added to or removed from the predictive model one by one, depending on whether their presence increases or decreases the predictive performance. When used for linear unmixing, the reflectance of the observed spectrum at each band is modeled as a weighted sum of the reflectances of the endmembers at that band, with the weights being constant across bands and corresponding to the abundances~\cite{heinz2001}. Using data transformations, non-linear unmixing problems (such as bilinear mixtures and Hapke mixtures) can be solved using the linear unmixing framework~\cite{heylen2015}.

\subsection{Logistic regression}
Logistic regression is a discriminative model primarily used for land cover classification in remote sensing. It models the class probability distribution as the logistic function of a weighted sum of the input features. It has been primarily used for pixel-wise classification but, as we will discuss later, logistic regression also serves as a building component for more sophisticated algorithms that use ensemble learning, random fields, and deep learning. Logistic regression can perform classification with band selection using a step-wise learning procedure~\cite{cheng2006} or using a sparsity regularizer on the weights~\cite{zhong2008_logistic,pant2014,wu2015}.
The sparsity regularizer forces many weights to be equal to zero during training, thus removing the corresponding bands from the model and keeping only the relevant bands. For improved performance, logistic regression has been trained on features derived from hyperspectral data. In \cite{khodadadzadeh2014}, the squared projections on subspaces derived from class-specific spectral correlation matrices were used with logistic regression. Qian et al.~\cite{qian2013} have proposed using 3D discrete wavelet transforms to obtain texture features from the hyperspectral data cube for classification, and using a mixture of subspace sparse logistic classifiers to build a non-linear classifier. The 3D discrete wavelet transform based features have the advantage of capturing both the spatial and the spectral contextual information of the scene. Spatial context can also be incorporated into logistic regression by using morphological features~\cite{huang2014}. Semi-supervised methods using logistic regression have also been proposed. These methods label the unlabeled data using heuristics and augment the training set with them. In \cite{dopido2014}, unlabeled pixels within the 4-neighborhood of labeled pixels were assigned the class of the labeled pixel and added to the training set, and in \cite{li2013}, the class labels of the unlabeled pixels were predicted by a Markov random field based segmentation technique~\cite{li2012a} and added to the training set.

\subsection{Support vector machines}
Support vector machines (SVMs) are among the most widely used algorithms for hyperspectral data analysis~\cite{mountrakis2011support}. They have been successfully applied to all of the data analysis tasks (land cover classification, target detection, unmixing, and physical parameter estimation). An SVM generates a decision boundary with the maximum margin of separation between the data samples belonging to different classes. The decision boundary can be linear, or be made non-linear through the use of kernels~\cite{scholkopf2002}.
Using kernels, the data can be projected into a higher dimensional space where a linear decision hyperplane is fitted, which in turn is equivalent to fitting a non-linear decision surface in the original feature space. The Gaussian radial basis function kernel is used by the majority of hyperspectral SVM algorithms; however, several kernels specifically designed for modeling hyperspectral data~\cite{mercier2003support, fauvel2006evaluation, schneider2010gaussian, gewali2016novel} have been proposed. Since their introduction to hyperspectral remote sensing in \cite{huang2002,melgani2004}, SVMs have been considered state-of-the-art classifiers for land cover mapping. The most accurate SVM-based land cover mapping methods utilize spatial-spectral features, such as extended morphological profile (EMP) features~\cite{benediktsson2005, fauvel2008}. The EMP features are generated by applying a series of morphological opening and closing operations, with structural elements of different sizes, on the principal component bands of the hyperspectral image. It has been shown in \cite{fauvel2008} that appending features generated by discriminant analysis to the morphological features can further increase the accuracy. Feature selection has also been incorporated into hyperspectral classification with SVMs. For instance, genetic algorithms can be utilized to select the bands and optimize the kernel parameters~\cite{bazi2006}. Similarly, a step-wise feature selection can be performed on the SVM~\cite{pal2010}. Semi-supervised SVMs that can utilize unlabeled data for training have also been developed~\cite{chi2007}. The relevance vector machine (RVM)~\cite{tipping2001}, a Bayesian probabilistic classification algorithm related to the SVM, has also been applied to hyperspectral classification~\cite{braun2012,demir2007,mianji2011}. Multiple kernel learning tries to find a convex linear combination of an optimized set of kernel functions, with optimized parameters, that best describes the data.
It has been shown that SVMs with multiple kernels can outperform SVMs with a single kernel for hyperspectral classification~\cite{tuia2010learning, gu2012representative, wang2016discriminative}. Using EMP features, the multiple kernel learning framework can be used for spatial-spectral classification~\cite{gu2016nonlinear, liu2016class, li2015multiple}. Kernels defined over the spectra of neighboring regions (square blocks of pixels~\cite{camps2006composite} or superpixels obtained via segmentation~\cite{fang2015classification} around the test pixel) have also been combined with the kernel defined over the spectrum of the test pixel to perform spatial-spectral classification with SVMs. A multiple kernel learning based transfer learning/domain adaptation approach that simultaneously minimizes the maximum mean discrepancy between the source and the destination datasets along with the structural risk functional of the SVM was proposed for classification in \cite{sun2013}. This method was found to be better than regular SVMs and other SVM-based transfer learning schemes. Similarly, an active learning based domain adaptation method with reweighting and possible removal of samples from the source dataset was introduced in \cite{persello2013}. The pixels in the source dataset misclassified by the SVM in each iteration were removed, while the target dataset pixels with the most uncertain class assignments (based on the votes of binary SVMs trained in a one-vs-all approach) were manually labeled and added to the training set. In \cite{wang2009,gu2013}, binary SVM classification was used for unmixing by assuming the pixels lying on or separated by the maximum-margin hyperplanes to be pure pixels, and the pixels occurring within the margin to be mixed pixels. The abundances of impure pixels were then given by the ratio of the distance from the margin to the margin width. Using a one-vs-all scheme, this method was extended to scenes with more than two endmembers.
Another approach for SVM-based unmixing is to generate an artificial mixed-spectra dataset with known abundances, and learn an SVM model to classify the proportion of each endmember present in a test spectrum at single percentage increments~\cite{mianji2011b}. The artificial dataset can be generated by calculating randomly weighted sums of spectra belonging to a set of classes chosen at random from a list of endmembers. In another study~\cite{villa2011}, a probabilistic SVM was used to generate the per-pixel probability of the pixel belonging to each endmember. The pixels with a high probability of belonging to any one endmember were considered to be pure pixels, while the remaining pixels were considered mixed. The abundances in the mixed pixels were calculated using linear unmixing. The mixed pixels were further divided into subpixels, with the subpixels arbitrarily assigned to one of the endmember classes in numbers proportional to the class abundances. Simulated annealing was then used to arrange the subpixels in each mixed pixel to achieve spatial smoothness. This produced a sub-pixel mapping of the scene. Anomaly and target detection have been performed using an SVM-related algorithm called support vector data description~\cite{tax2004}. This method generates a minimum enclosing hypersphere containing all the training data. The kernel trick can be used to find the minimum enclosing hypersphere in a transformed domain. For anomaly detection, any pixel falling outside of the hypersphere enclosing all the pixels in the image is considered an anomaly~\cite{banerjee2006,khazai2011,gurram2011}. For target detection, an artificial training set of target pixels can be created by adding multivariate Gaussian noise to the target reference spectrum, and any test pixel falling inside the hypersphere enclosing the artificial dataset can be labeled as a target~\cite{sakla2011}.
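The enclosing-hypersphere idea for target detection can be illustrated with a much-simplified NumPy sketch: an artificial target set is generated by adding Gaussian noise to a made-up reference spectrum, and test pixels are labeled by whether they fall inside a sphere around that set. Unlike support vector data description, the sphere here is simply centered at the sample mean (no quadratic program, no kernels, no slack variables), so this is only a conceptual illustration, not the method of~\cite{sakla2011}.

```python
import numpy as np

rng = np.random.default_rng(0)
target_ref = np.array([0.2, 0.6, 0.4])   # hypothetical 3-band reference spectrum

# Artificial training set: reference spectrum plus Gaussian noise
train = target_ref + 0.01 * rng.standard_normal((200, 3))

# Enclosing sphere (simplification: centered at the mean; true SVDD
# solves an optimization problem for the minimum enclosing sphere)
center = train.mean(axis=0)
radius = np.linalg.norm(train - center, axis=1).max()

def is_target(pixel):
    """Label a pixel as target if it lies inside the sphere."""
    return np.linalg.norm(pixel - center) <= radius
```

A pixel close to the reference spectrum falls inside the sphere, while a spectrally dissimilar pixel falls outside and is rejected.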
Nemmour et al.~\cite{nemmour2006} performed change detection in an area by training an SVM classifier on the concatenation of spectra from images collected at multiple dates to predict the change in the land cover. Previously, SVM regression was applied to predict biophysical parameters from multi-spectral imagery~\cite{bruzzone2005,camps_valls2006,bazi2007}. While these methods are also applicable to hyperspectral data, some newer methods have been developed specifically for hyperspectral data. In \cite{camps_valls2009}, a semi-supervised method that uses a kernel matrix deformed by labeled and unlabeled data was proposed. Active learning approaches for biophysical parameter estimation that select new samples based on the distance from the support vectors and the disagreement among a pool of SVM regressors trained on different subsets of the training data have been proposed in \cite{pasolli2012}. The idea of learning related biophysical parameters simultaneously, exploiting the relationships between them using multitask SVMs, was introduced in \cite{tuia2011}. The multitask SVMs were found to be more accurate than individual SVMs in predicting biophysical parameters.

\subsection{Gaussian mixture models}
\label{GMM_subsection}
Gaussian mixture models~\cite{bilmes1998} represent the probability density of the data with a weighted summation of a finite number of Gaussian densities with different means and covariances. They are commonly used to model data that are non-Gaussian in nature or to group data into a finite number of Gaussian clusters. A Gaussian mixture model is a good choice for modeling the class-conditional probability in the maximum likelihood classifier when the image spectra do not show Gaussian characteristics~\cite{dundar2002_gmm,li2012_gmm,li2014_gmm}. The same is the case with anomaly and target detection algorithms, which traditionally utilized the Gaussian distribution to model the pixel and background probability densities~\cite{tarabalka2009_gmm}.
Gaussian mixture models have also been used to cluster hyperspectral data. \cite{tarabalka2009_gmmb} used a Gaussian mixture model followed by connected component analysis to segment the hyperspectral image into homogeneous areas. A related method called the independent component analysis mixture model, which models the cluster densities with non-Gaussian densities, has also been applied for unsupervised classification of hyperspectral data~\cite{shah2007_gmm}. The popular clustering algorithm k-means~\cite{arthur2007} is a special case of Gaussian mixture model clustering~\cite{bishop2006}. K-means starts with initial guesses for the cluster centers, assigns each data point to a cluster based on its distance to the cluster centers, calculates the mean of the data in each cluster, and updates each cluster center with the mean of that cluster. The process of grouping the data, calculating the means and updating the cluster centers is repeated until convergence. The biggest issue with k-means is that it requires the number of clusters in the data to be known a priori. ISODATA~\cite{ball1965} is a method based on k-means that does not require the number of clusters to be known a priori; it works by merging and splitting the clusters in every k-means iteration on the basis of the distance between the clusters and the standard deviation of the data in each cluster. K-means and ISODATA are widely used for unsupervised classification of hyperspectral data~\cite{baldeck2013_gmm,narumalani2006_gmm}. Unsupervised classification maps produced by them have been fused with the results of pixel-wise supervised classification to perform spatial-spectral classification~\cite{yang2010_gmm,tarabalka2009_gmmb}. K-means has also been used for anomaly detection and dimensionality reduction.
\cite{duran2007_gmm} performed anomaly detection by labeling the pixels that were distant from the cluster centers found by k-means or ISODATA as anomalous, and \cite{su2011_gmm} used the cluster centers obtained from k-means as features for classification.

\subsection{Latent linear models}
Latent linear models find a latent representation of the data by performing a linear transform. The common latent linear models used in hyperspectral image analysis are principal component analysis (PCA)~\cite{jolliffe2014_llm} and independent component analysis (ICA)~\cite{hyvarinen2004_llm}. The PCA linearly projects the data onto an orthogonal set of axes such that the projections onto the axes are uncorrelated. The projection onto the first axis captures the largest portion of the variance in the data, the projection onto the second axis captures the second largest portion, and so on, such that the axes towards the end capture little variance and mainly represent noise. On the other hand, the ICA linearly projects the data onto a non-orthogonal set of axes such that the projections onto the axes are as statistically independent as possible. For the PCA, the number of data samples has to be greater than or equal to the dimensionality of the data. Similarly, for the ICA, the number of axes onto which the data is projected has to be smaller than or equal to the number of data samples. The PCA is primarily used for dimensionality reduction in hyperspectral images: the PCA is applied to the spectra in the image and only the projections which explain a significant proportion of the variance are kept~\cite{shaw2002_llm}. Reducing the dimensionality makes models less likely to overfit and also removes noise. Hence, the PCA is widely used as a preprocessing tool for hyperspectral analysis~\cite{xia2014,chen2015,monteiro2007_llm,farrell2005_llm}.
The minimum noise fraction (MNF) transform~\cite{lee1990_llm}, which whitens the noise in the image before applying PCA, is generally preferred over PCA when reducing the dimensionality of highly noisy images. PCA and MNF can be used to perform non-linear dimensionality reduction via their kernelized variants~\cite{fauvel2009_llm,nielsen2011_llm}. Spatial-spectral features can be obtained by applying morphological operations after PCA or MNF~\cite{plaza2009_llm}. Partial least squares (PLS)~\cite{vinzi2010_llm} regression is a widely used method for physical parameter estimation from hyperspectral data and is closely related to PCA. PLS fits a linear regression by projecting the input and the output onto separate linear subspaces such that the covariance between their projections is maximized. \cite{carrascal2009_llm} showed that PLS performs better than the combination of PCA and linear regression for hyperspectral data. It has been successfully applied to predict physical quantities such as soil organic carbon~\cite{gomez2008_llm}, biomass~\cite{cho2007_llm}, nitrogen~\cite{hansen2003_llm}, and water stress~\cite{bei2011_llm}. ICA can also be used to reduce the dimensionality of hyperspectral data. \cite{wang2006_llm} observed that better classification results can be obtained if the dimensionality of a hyperspectral image is reduced using ICA rather than PCA or MNF. Apart from dimensionality reduction, ICA has also been used for unmixing~\cite{nascimento2005_llm} and unsupervised classification~\cite{du2006_llm}. These methods assume that the ICA axes are the endmembers and the projections are the abundances. A mixture model using ICA has also been proposed for unsupervised classification~\cite{shah2007_gmm}. Similar to PCA and MNF, spatial-spectral features have been generated by computing morphological profiles on the image after ICA~\cite{dalla2011_llm}.
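The PLS idea of projecting inputs onto directions of maximal covariance with the output can be illustrated with a small NIPALS-style PLS1 sketch for a single scalar output (an illustrative implementation, not the one used in the cited studies):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 (NIPALS): extract directions of maximal covariance with y,
    deflate, then assemble the final regression coefficients."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # max-covariance input direction
        t = Xc @ w                      # scores along that direction
        p = Xc.T @ t / (t @ t)          # X loadings
        q.append(t @ yc / (t @ t))      # y loading
        Xc = Xc - np.outer(t, p)        # deflate X and y
        yc = yc - t * q[-1]
        W.append(w)
        P.append(p)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    beta = W @ np.linalg.solve(P.T @ W, q)   # regression coefficients
    return beta, X.mean(0), y.mean()

def pls1_predict(X, beta, x_mean, y_mean):
    return (X - x_mean) @ beta + y_mean
```

With only a few components, the fitted direction already tracks a linear target closely, which is the property exploited when regressing physical parameters on many collinear bands.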
\subsection{Sparse linear models} Linear sparse models~\cite{mairal2014} model the observed output as a weighted linear combination of the elements (atoms) of a large dictionary, with the restriction that most of the weights are equal to zero while the remaining few weights have significant magnitudes. Sparsity on the weights is imposed by a sparse prior in a probabilistic setting and by a sparse regularizer in a non-probabilistic setting. The dictionary can be supplied manually or learned from the data itself; when learned from the data, it automatically captures the data statistics. The linear sparse model is widely used for unmixing because its formulation resembles the linear mixing model, with the abundances being the weights and the endmembers being the dictionary elements. Iordache et al.~\cite{iordache2011} proposed the use of a sparse linear model with a spectral library as the dictionary to linearly unmix images using an $L_1$ regularizer on the weights. This method is able to automatically select a subset of spectra in the spectral library as endmembers for each pixel of the image. Note that regular least squares cannot be used in this setting, since the number of spectra in a spectral library is much larger than the number of bands in a hyperspectral image; here, sparsity imposes additional constraints on an ill-conditioned problem to make it solvable. A modified spatial-spectral version of this method~\cite{iordache2012} additionally imposes spatial contextual information by applying total variation regularization, i.e., minimizing the $L_1$ norm of the endmember-wise abundance differences between neighboring pixels.
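The $L_1$-regularized non-negative unmixing problem can be sketched with a simple proximal-gradient (ISTA) solver (a minimal sketch for illustration; the cited works use more elaborate solvers):

```python
import numpy as np

def sparse_unmix(y, D, lam=0.01, n_iters=2000):
    """Minimize 0.5*||D a - y||^2 + lam*||a||_1 subject to a >= 0,
    where D holds library spectra as columns and a are abundances."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1/Lipschitz constant
    for _ in range(n_iters):
        grad = D.T @ (D @ a - y)                  # least-squares gradient
        # non-negative soft-threshold: proximal step for lam*||a||_1, a >= 0
        a = np.maximum(0.0, a - step * grad - step * lam)
    return a
```

Even when the library has far more atoms than the image has bands, the solver places its weight on the few library spectra that actually compose the pixel.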
A multitask spatial-spectral extension of the same method, in which sparsity is imposed on all the pixels of an image simultaneously to force neighboring pixels to be composed of the same endmembers, using an $L_{2,1}$ norm on the abundance matrix, was proposed in \cite{iordache2014,iordache2014b}. In all these methods, the non-negativity constraint on the abundances was imposed during optimization, but the sum-to-one constraint was not applied. A hierarchical Bayesian approach to sparse unmixing was introduced in \cite{themelis2012}. A zero-mean Laplace prior, approximated by a truncated Gaussian distribution, was used as the prior on the abundances for the sparsity and non-negativity constraints, and a deterministic heuristic to impose the sum-to-one constraint was suggested. In \cite{castrodad2011}, multiple spectra belonging to each endmember class were added one by one to the dictionary until there was no gain in reconstruction accuracy. The authors also proposed using a non-local coherence regularizer, which promotes coherence between coefficients corresponding to atoms belonging to the same endmember class, along with a local neighborhood coherence regularizer. There is also a non-regularization based sparse endmember extraction and unmixing technique in the literature. It first performs fully constrained least squares unmixing, and then greedily removes the endmembers with the smallest abundances until the required accuracy and sparsity are obtained~\cite{greer2012}. This method was shown to perform better than $L_1$ regularizer based sparse unmixing methods in the experiments. Manifold based regularizers have also been used for sparse unmixing, with the assumption that hyperspectral data lies on a lower-dimensional non-linear manifold in a high-dimensional space~\cite{lu2013}.
Classification can be performed with linear sparse models either by using the sparse representation as features for a classifier~\cite{charles2011,du2015} or by selecting the class that has the minimum class-wise reconstruction error~\cite{haq2012,chen2011,chen2013,castrodad2011}. In \cite{charles2011}, dictionary learning was used to infer a sparse representation of the spectra, which was used as features for SVM classification. In the experiments, it was found that using sparse features performed better than using the raw spectra or the principal component analysis (PCA) transformed spectra. A compressed sensing based deblurring method to reconstruct the hyperspectral signal from multispectral measurements was also introduced in this work. Similarly, in \cite{du2015} dictionary learning with a total variation regularizer was used to learn a discriminative spatial-spectral sparse representation jointly with the sparse multinomial logistic regression trained on it. The sparse reconstruction based classification algorithms learn the sparse representation of a test example using a dictionary containing examples of all classes, and then reconstruct that example using only the dictionary atoms belonging to a specific class. The class with the minimum reconstruction error is the predicted class for the test example. These methods use basis pursuit~\cite{castrodad2011}, greedy pursuit~\cite{chen2011}, or homotopy based algorithms~\cite{haq2012} to obtain the $L_1$ regularized sparse coefficients. Kernelized versions of the sparse reconstruction algorithm can be used to construct the dictionary in a higher dimensional kernel space~\cite{chen2013,liu2013}. In \cite{chen2011,chen2013}, spatial contextual information was utilized by enforcing that the Laplacian operation on the reconstructed image be equal to zero, and by optimizing a joint sparsity that promotes neighboring pixels to be composed of the same dictionary elements.
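The class-wise reconstruction idea can be sketched with a small greedy pursuit (orthogonal matching pursuit) followed by per-class residual comparison (an illustrative sketch with hypothetical data, not the exact algorithms of the cited works):

```python
import numpy as np

def omp(y, D, n_nonzero):
    """Greedy pursuit: repeatedly pick the atom most correlated with the
    residual, then refit the selected atoms by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(y, D, atom_labels, n_nonzero=4):
    """Code the test spectrum over the joint dictionary, then reconstruct
    it class by class; predict the class with the smallest residual."""
    x = omp(y, D, n_nonzero)
    errs = {}
    for c in set(atom_labels):
        xc = np.where(atom_labels == c, x, 0.0)   # keep only class-c atoms
        errs[c] = np.linalg.norm(y - D @ xc)
    return min(errs, key=errs.get)
```

A test spectrum drawn near one class's training spectra is reconstructed well by that class's atoms and poorly by the others.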
Similarly, a spatial-spectral classification method in which the surrounding pixels were weighted based on their similarity to the test pixel and reconstructed jointly with the test pixel, using a sparse model with a spatial coherence regularizer, was introduced in \cite{zhang2014}. A new set of sparse codes can be learned from previously learned sparse codes; repeating this process multiple times, with a new set of sparse codes learned from the codes of the previous step, yields deep sparse codes. A superpixel guided deep spatial-spectral sparse code learning method was published in \cite{fan2017_sparse}. In this method, the image was first segmented into superpixels and a sparse code was computed for the spectrum at every pixel of the image. The learned sparse codes were averaged over the superpixels and assigned to all the pixels inside the superpixels. This process was repeated multiple times to generate a stack of features, which were concatenated and classified using a support vector machine. Chen et al.~\cite{chen2011b} performed target detection by formulating the problem as a two-class classification problem and using a previously proposed spatial-spectral sparse classification technique~\cite{chen2011}. Their method generated target training examples by running MODTRAN~\cite{berk1987} with randomly varying parameters on the reference target reflectance spectra, and used randomly selected pixels from the test image as training examples for the background. This method was shown to outperform spectral matched filters, matched subspace detectors, adaptive subspace detectors, and SVM based binary classification. \subsection{Ensemble learning} Ensemble learning~\cite{rokach2010} is a supervised learning technique that merges the results from multiple base predictors to produce a more accurate result. The outputs of the base predictors must be diverse and uncorrelated for ensemble learning methods to produce superior results.
It has been applied successfully to hyperspectral classification. There are three main types of ensemble learning approaches: bagging, boosting, and random forests. Bagging (bootstrap aggregating) combines the results from multiple predictors trained on randomly sampled subsets of the training set, whereas boosting combines the results from multiple weak predictors trained on the whole training set. A random forest is a bagging of decision/regression trees with random selection of features (also called feature bagging). AdaBoost~\cite{freund1995}, an adaptive boosting technique, has been widely used to build robust hyperspectral classifiers. It has been used with support vector machines (SVMs) trained on clusters of bands~\cite{ramzi2014}, multiple kernel support vector machines trained on screened training samples~\cite{gu2016}, support cluster machines with different numbers of clusters~\cite{chi2009}, and linear and quadratic decision stumps trained on randomly selected features~\cite{kawaguchi2007}. Random forests and other random feature subspace based ensemble learning methods are considered more attractive for hyperspectral data because they use a reduced feature set to learn each ensemble member, which makes them less prone to overfitting. A bias-variance analysis in \cite{merentitis2014} found that random forests with embedded feature selection and Markov random field (MRF) based post-processing are best suited for hyperspectral data. A random subspace SVM, which trains multiple SVM classifiers on random subsets of bands and combines the result of each SVM on the test spectrum via majority voting, was proposed in \cite{waske2010}. This method performed better than the regular SVM and the random forest, particularly when very few training examples were available. It was further improved in \cite{chen2014} by optimizing every random subspace, selecting the optimal bands using a genetic algorithm with the Jeffries-Matusita (JM) distance as the fitness function.
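The random subspace idea (train one base classifier per random subset of bands, then take a majority vote) can be sketched as follows; as a placeholder base classifier we use a nearest-centroid rule rather than the SVMs of the cited works:

```python
import numpy as np

def fit_subspace_ensemble(X, y, n_members=15, n_bands=4, seed=0):
    """Train one nearest-centroid classifier (stand-in for an SVM)
    per random subset of bands."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    members = []
    for _ in range(n_members):
        bands = rng.choice(X.shape[1], size=n_bands, replace=False)
        centroids = np.array([X[y == c][:, bands].mean(axis=0) for c in classes])
        members.append((bands, centroids))
    return classes, members

def predict_subspace_ensemble(X, classes, members):
    """Majority vote over the ensemble members."""
    votes = np.zeros((len(X), len(classes)), dtype=int)
    for bands, centroids in members:
        d = np.linalg.norm(X[:, bands][:, None, :] - centroids[None], axis=2)
        votes[np.arange(len(X)), d.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]
```

Because each member sees only a few bands, the members are diverse, which is the property the vote exploits.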
An adaptive boosting technique for the random subspace SVM, which jointly learns the kernel parameters of the SVMs and the coefficients of adaptive boosting, was formulated in \cite{gurram2013}. Ham et al.~\cite{ham2005} introduced the random forest algorithm to hyperspectral classification, and also proposed a novel idea of optimally selecting the bands in each random subspace projection using simulated annealing. Their method was extended to perform transfer learning by using the class hierarchy learned on the source image when making predictions in the test (destination) image with few or no labeled data~\cite{rajan2006}. In \cite{matasci2012}, the transfer learning version of the AdaBoost algorithm, TrAdaBoost~\cite{dai2007}, was used to reweight the samples from the source image in the training set of an SVM classifier after unlabeled pixels in the target image were manually labeled and added to the training set. Extreme learning machines (ELMs)~\cite{huang2011} are three-layer feed-forward neural networks in which the weights from the input layer to the hidden layer are assigned at random and only the weights from the hidden layer to the output layer are learned. They were introduced to hyperspectral classification as base classifiers for bagging and AdaBoost in \cite{samat2014}. They were found to be very fast compared to the methods using SVMs as base classifiers, while providing similar accuracy. The authors also proposed using extended morphological profile (EMP) features for spatial-spectral classification with ELMs. Algorithms that dynamically select a separate subset of classifiers from an ensemble for each test pixel, considering the validation performance of pixels similar and near to the test pixel, have also been proposed~\cite{damodaran2015}. This method used the SVM and the ELM as base classifiers, and was more accurate than similar methods with a fixed set of ensemble members.
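The ELM training rule described above (random, fixed input-to-hidden weights; hidden-to-output weights solved in closed form) can be sketched as follows (a minimal sketch, not the implementation of the cited works):

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=30, seed=0):
    """Extreme learning machine: input-to-hidden weights are random and
    fixed; only hidden-to-output weights are learned, by least squares."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)              # random hidden-layer features
    W_out = np.linalg.pinv(H) @ y_onehot   # closed-form output weights
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return (np.tanh(X @ W_in + b) @ W_out).argmax(axis=1)
```

The closed-form solve is what makes ELMs much faster to train than iteratively optimized base classifiers such as SVMs.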
A rotation forest~\cite{rodriguez2006} is a classifier that builds ensemble members by dividing the features of the training data into random groups, performing a rotational transform on each group, and then collecting all the rotation vectors into a single transformation matrix, which is used to rotate the entire training set before training a decision tree. The study by Xia et al.~\cite{xia2014} applied rotation forests to hyperspectral data, with rotational transformations generated by principal component analysis (PCA), minimum noise fraction (MNF), independent component analysis (ICA), and Fisher's discriminant analysis, and showed their method to outperform bagging, random forests, and AdaBoost. The authors later proposed improving the classification performance by exploiting spatial context using Markov random fields~\cite{xia2015b} and extended morphological profile features~\cite{xia2015}. Peerbhay et al.~\cite{peerbhay2015} applied a random forest classifier to identify anomalies by training a binary classifier on a synthetic dataset that considered the image pixels to be the training samples of the non-anomaly class and samples from the empirical marginal distribution of the image pixels to be the training samples of the anomaly class. Though primarily used for classification, random forests have also been used for physical parameter estimation. In previous studies, random forest regression was used for band selection and retrieval of biophysical parameters, such as biomass~\cite{adam2014,mutanga2012}, nitrogen~\cite{rahman2013}, and water stress~\cite{ismail2010}. \subsection{Directed graphical models} Directed graphical models~\cite{wainwright2008}, also known as Bayesian networks, define a factorization of the joint probability of a set of variables over the structure of a directed graph. Each variable is represented by a node and the directed edges encode the conditional independence properties.
Each variable in the joint distribution is considered to be conditionally independent of its non-descendants given its parents in the graph, and the joint distribution is defined by the product of the conditional distributions of all the variables given their parents. Bayesian networks have primarily been applied to hyperspectral unmixing. They have the advantages of providing uncertainty about the abundance estimates and of probabilistically modeling the endmember spectral variability. Most methods follow a similar hierarchical Bayesian framework: they start with a linear mixing model or one of its nonlinear transforms, assume prior distributions over the abundances and the endmembers, and then use non-informative priors on the hyperparameters. The likelihood, or noise model, used is mostly Gaussian. Since exact inference is not possible in these models, they use a Markov chain Monte Carlo (MCMC)~\cite{andrieu2003} method to estimate the posterior distribution of the abundances and sometimes also of the endmembers. Different characteristics are obtained depending on the distributions used for the priors and hyperpriors and on the inference algorithm selected. A method to linearly unmix pixels when the endmembers of the scene are known was proposed in \cite{dobigeon2008}. The prior used on the abundances was a uniform distribution on the simplex, enforcing the non-negativity and sum-to-one constraints. This model was later extended to bilinear unmixing~\cite{halimi2011} and post-nonlinear unmixing~\cite{altmann2012}. In \cite{themelis2012}, the authors proposed the use of a zero-mean Laplace prior on the abundances to promote non-negativity and sparsity of the retrieved abundance coefficients. For computational purposes, the Laplace prior was approximated by a truncated Gaussian distribution. This model lacks the sum-to-one constraint in its formulation, so the authors suggested a heuristic to enforce this constraint.
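A minimal sketch of this framework, assuming known endmembers, a uniform prior on the simplex (enforcing non-negativity and sum-to-one), a Gaussian likelihood, and a simple independence Metropolis sampler (the cited works use more elaborate Gibbs samplers):

```python
import numpy as np

def bayesian_unmix(y, M, sigma=0.1, n_samples=20000, seed=0):
    """Sample abundances a for a linear mixing model y = M a + noise,
    with a uniform prior on the simplex and a Gaussian likelihood."""
    rng = np.random.default_rng(seed)
    k = M.shape[1]
    loglik = lambda a: -0.5 * np.sum((y - M @ a) ** 2) / sigma ** 2
    a = np.full(k, 1.0 / k)                  # start at the simplex center
    ll = loglik(a)
    chain = []
    for _ in range(n_samples):
        prop = rng.dirichlet(np.ones(k))     # uniform draw on the simplex
        ll_prop = loglik(prop)
        # uniform prior and uniform proposal: accept on likelihood ratio
        if np.log(rng.random()) < ll_prop - ll:
            a, ll = prop, ll_prop
        chain.append(a)
    return np.array(chain[n_samples // 2:])  # discard burn-in
```

Every sample lies on the simplex by construction, and the posterior mean serves as the abundance estimate with the chain's spread giving its uncertainty.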
The variability in endmember spectra was addressed in \cite{eches2010}, where the endmember spectra were considered to be Gaussian distributed vectors, with the means set to the endmember spectra extracted from the image and the covariance matrix learned from the data. In \cite{moussaoui2006}, joint estimation of the endmember and abundance distributions was performed using a linear mixing model with independent Gamma priors on the mixing coefficients and the endmember reflectance values; the Gamma priors enforced positivity constraints on both. Dobigeon et al.~\cite{Dobigeon2009} later added the sum-to-one constraint to this model by replacing the independent Gamma prior on the mixing coefficients with a Dirichlet prior. \cite{schmidt2010} proposed hardware and software implementation strategies to scale these methods up. Methods have also been devised to jointly estimate the abundances and the endmembers in bilinear and post-nonlinear models~\cite{altmann2014}. \subsection{Undirected graphical models} \label{subsection_UGM} The strong dependencies between neighboring pixels in hyperspectral images can be exploited for classification and unmixing using undirected graphical models (also called Markov random fields)~\cite{gewali2018tutorial}. Undirected graphical models (UGMs)~\cite{nowozin2011structured} are probabilistic models that define a joint probability distribution of a set of random variables using the structure of an undirected graph, such that the joint distribution factorizes over maximally connected sub-graphs, called cliques. Similar to directed graphical models, the nodes of the graph represent the variables while the undirected edges encode the conditional independence properties. In UGMs, each node is conditionally independent of all other nodes given its neighbors.
The joint probability is equal to the normalized product of positive functions, called potential functions, defined over all the cliques in the graph. The most common type of undirected graphical model used in hyperspectral image analysis is the grid-structured pairwise model. Such models have been widely used since their introduction to hyperspectral land cover mapping in \cite{jackson2002adaptive}. The graph structure used in these models is a grid, with the pixel labels representing the nodes and an edge between every pair of 4-connected or 8-connected neighbors. Two types of potential functions are defined for these models, namely, the unary potential function and the pairwise potential function. The unary potential function is defined for each node and captures spectral information, while the pairwise potential function is defined for each edge and captures spatial information by promoting similar neighboring pixels to be assigned the same class. It is common to use a pixel-wise classifier, such as the Gaussian maximum likelihood classifier~\cite{jackson2002adaptive}, logistic regression~\cite{bores2011}, probabilistic SVM~\cite{tarabalka2010,xu2015}, Gaussian mixture model~\cite{li2014_gmm}, or an ensemble method~\cite{merentitis2014}, to derive the unary potentials. Ising/Potts based models~\cite{zhong2010,bores2011,tarabalka2010,xu2015,li2015} are primarily used for the pairwise potential. In these methods, two types of inference are generally performed: (a) maximum a posteriori (MAP) inference using algorithms such as iterated conditional modes~\cite{jackson2002adaptive}, simulated annealing~\cite{tarabalka2010}, and graph cuts~\cite{bores2011,xu2015,li2015}, and (b) probabilistic inference (also called marginal inference) using loopy belief propagation~\cite{zhong2010,li2013spectral}. Tarabalka et al.~\cite{tarabalka2010} introduced a novel Potts-based spatial energy function that only creates dependencies between neighboring pixels if there is no intensity edge between them.
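MAP inference with iterated conditional modes on a grid-structured Potts model can be sketched as follows (an illustrative sketch with hypothetical unary costs derived from a pixel-wise classifier, not the formulation of any single cited work):

```python
import numpy as np

def icm_potts(unary, beta=1.0, n_iters=5):
    """Iterated conditional modes: each pixel label greedily minimizes its
    unary energy plus a Potts penalty for disagreeing with its
    4-connected neighbors."""
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)             # pixel-wise initialization
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                energies = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts pairwise term: penalize label disagreement
                        energies += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = energies.argmin()
    return labels
```

On a two-region label map whose unary costs are corrupted at a few isolated pixels, the Potts term lets the neighbors outvote each corrupted unary and restores the clean map.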
They found that an SVM followed by an edge-based Markov random field performed better than an SVM followed by a non-edge based Markov random field, which in turn was better than an SVM followed by majority voting. Conditional random fields (CRFs) are Markov random fields that model the conditional distribution by having their potential functions parameterized by input features~\cite{sutton2011}. Studies~\cite{zhong2010,li2015} have successfully applied grid-structured pairwise CRFs to hyperspectral classification. CRFs are generally preferred over MRFs because, being discriminative models, they can better utilize the training data for classification. However, training a full CRF requires a large number of training examples because CRFs have many more parameters than MRFs. This is a problem for hyperspectral image analysis tasks, which are well known for having limited ground truth. To tackle this problem, simpler formulations for the pairwise energy functions, based on the similarity of the neighboring pixels, have been proposed~\cite{zhang2012simplified,zhong2014support,zhao2015detail}. Band selection can also be performed jointly with the learning of a CRF to model land cover, by applying a Laplace prior on the CRF parameters as in \cite{zhong2008}. This reduces the complexity of the CRF model and is hence better suited to cases with limited training examples. All the methods discussed so far have only unary and pairwise potentials, and cannot express the higher level relationships occurring between different regions in the image. Higher-order MRFs/CRFs have potential functions defined over sets of more than two nodes. Zhong and Wang proposed using the robust $P^n$ model~\cite{kohli2009robust} for hyperspectral classification~\cite{zhong2011}. In this method, higher order potentials were defined over the pixels inside each segment obtained by unsupervised segmentation of the hyperspectral image with the mean-shift algorithm.
This potential encouraged every pixel inside each segment to be assigned to the same class and was used along with the unary and the pairwise potential functions. A heuristic alternative is to use pairwise models to model the segment labels directly and assign all the pixels in a segment to the same class. This approach is more rigid than the $P^n$ model, in which all the pixels in a segment do not necessarily have to be assigned to the same class. Similarly, \cite{golipour2015} integrated information from a hierarchical segmentation map into the pairwise energy function to incorporate higher order information into the pairwise MRF. A semi-supervised classification method that utilized an MRF with semi-supervised graph priors on the parameters was introduced in \cite{li2010}. Li et al.~\cite{li2011} proposed an active learning based classification method consisting of a pipeline of logistic regression followed by an MRF, in which the unlabeled pixels having the largest difference in class probabilities between the top two most probable classes were selected for manual labeling and added to the training set. Similarly, another heuristic approach for active learning, which selected unlabeled pixels that were assigned to different classes by logistic regression alone and by the combination of logistic regression and MRF, was proposed in \cite{sun2015_UGM}. Eches et al.~\cite{eches2011} proposed using a Markov random field and hierarchical Bayesian linear unmixing for simultaneous spatial-spectral unmixing and classification of a hyperspectral image. They modeled each pixel as the result of a linear mixing of the endmembers with added zero-mean Gaussian white noise. A Potts-Markov model was used as the prior for the class distribution over the image, and a separate set of abundance priors was defined for each class.
The abundances had a softmax logistic regression prior to enforce the additivity and non-negativity constraints, and the coefficients of the logistic regression had a Gaussian prior with the mean and the variance distributed as normal and inverse gamma, respectively. Bayesian inference on this model was performed using the Metropolis-within-Gibbs algorithm. In the experiments, the proposed method performed better than pixel-wise Bayesian unmixing~\cite{dobigeon2008} and fully constrained least squares unmixing, but was more computationally intensive. The authors addressed this problem in \cite{eches2013} by proposing an adaptive MRF that establishes relationships between homogeneous regions in the image, rather than between all the pixels, to reduce the computational complexity. The new method used a self-complementary area filter to segment the image into homogeneous regions as pre-processing. A Potts-Markov model was used as the prior to define the class probability over these regions. Instead of using a softmax logistic regression prior over the class-conditional abundances, this method used a Dirichlet distribution with a uniform prior on the parameters. It performed as well as the previous method~\cite{eches2011} at a lower computational cost. A hierarchical Bayesian method for non-linear unmixing was proposed in \cite{altmann2014_UGM}. Each pixel in the image was modeled as the sum of a linear mixture of endmembers, a nonlinear term, and Gaussian noise. All the pixels in the image were segmented into different classes by an MRF, and the class-conditional non-linear term was modeled by Gaussian processes. In the follow-up paper~\cite{altmann2015}, the authors proposed to model the nonlinear term by a gamma MRF instead, to enforce the non-linear term to be non-negative and not limited to finite levels. Sub-pixel mapping is the process of creating a classification map at a scale smaller than the size of the pixel.
MRFs and CRFs have been used to create sub-pixel maps by first predicting the content of the coarse pixels, using methods like Gaussian likelihood estimated probability~\cite{wang2013} and linear unmixing~\cite{zhao2015}, then using this information to generate unary potentials for the sub-pixels inside all the coarse pixels, and finally using the MRF/CRF to globally assign labels to all sub-pixels in the image while promoting spatial smoothness. \subsection{Clustering} Clustering algorithms have primarily been applied to unsupervised land cover classification of hyperspectral images~\cite{villa2013}; however, they have also been used for band selection, semi-supervised classification, dimensionality reduction, and unmixing. The commonly used clustering algorithms are k-means~\cite{bishop2006}, ISODATA~\cite{ball1965}, meanshift~\cite{cheng1995}, affinity propagation~\cite{frey2007}, graph-based clustering~\cite{vonLuxburg2007}, and Dirichlet process mixture models~\cite{teh2011}. The k-means and ISODATA methods were discussed earlier in Section \ref{GMM_subsection}. They are the most popular clustering algorithms used in remote sensing and have found use in unsupervised classification~\cite{baldeck2013_gmm,narumalani2006_gmm}, anomaly detection~\cite{duran2007_gmm}, and dimensionality reduction~\cite{su2011_gmm}. Meanshift is an algorithm that can be used to find the locations of the modes in multi-modal distributions of data. It iteratively updates the estimates of the modes with the mean of all the data points weighted by a kernel function placed at the locations of the mode estimates. Unlike k-means, meanshift does not require prior knowledge of the number of clusters, which is an advantage. \cite{huang2008} proposed using meanshift for unsupervised classification of a hyperspectral image.
Then, they classified the mean spectrum of each cluster using a supervised classifier and assigned the predicted class to all the pixels in the cluster. Affinity propagation is a clustering technique that uses message passing between data points and also does not require prior knowledge of the number of clusters. It has been used primarily for band selection, by clustering the bands in an image and selecting a representative band for each cluster~\cite{jia2012,yang2013feature}. Graph based clustering techniques represent the structure of the data as a graph, with the data points being the nodes and the similarities between them being the edge weights, and express clustering as a graph partitioning problem. Graph based clustering was used in \cite{camps2007} to perform semi-supervised land cover classification. Dirichlet process mixture models will be discussed in Section \ref{DP_subsection}; they have been used for semi-supervised land cover classification~\cite{Jun2013}, unmixing~\cite{mittelman2012}, and endmember extraction~\cite{zare2010,zare2013}. \subsection{Gaussian processes} Gaussian processes (GPs)~\cite{williams2006} are non-parametric models that assume all the observed and unobserved data are jointly distributed according to a multivariate normal distribution. The mean vector of the multivariate normal distribution is typically assumed to be a zero vector, and the covariance matrix is computed using a covariance function (the same as a kernel function). Predictions are made using Bayesian inference. A popular covariance function for hyperspectral data analysis is the squared exponential function, which is called the Gaussian radial basis function in the context of support vector machines. GPs are primarily used for supervised learning, but can also be used for unsupervised learning. Most uses of GPs in hyperspectral data analysis have been in non-linear physical parameter estimation.
The advantages of GPs over the traditional approaches are that, being probabilistic, they provide a prediction confidence map; being Bayesian, they handle uncertainties better and are less likely to overfit; and, being non-parametric, their complexity can grow with the amount of data to model highly non-linear functions. GPs have been applied to predict leaf chlorophyll, fractional vegetation cover, and leaf area index in \cite{verrelst2012}. This study first showed that using GPs to predict biophysical parameters from common vegetation indices is more accurate than the standard technique of using linear regression. The authors then used a GP to relate the raw spectrum to the biophysical parameter and used step-wise backward elimination to select the four bands whose reflectance could best predict the biophysical parameters. It was observed that the GP with band selection performed best. In the follow-up study~\cite{verrelst2013}, an automatic relevance determination (ARD) squared exponential covariance function was used, instead of stepwise regression, to select the bands for predicting chlorophyll. Both these techniques assume that the noise in each pixel is the same and independent of the signal. This assumption was relaxed in \cite{gredilla2014} by allowing the noise power to vary smoothly over the observations using variational heteroscedastic Gaussian processes; this approach showed better performance than the previous methods. The tutorial by Camps-Valls et al.~\cite{valls2016} covers biophysical parameter estimation from hyperspectral imagery using GPs in detail. In a different application of Gaussian process regression, Murphy et al.~\cite{murphy2014} used a GP to accurately locate a feature in ferric iron spectra, which occurs at around \SI{900}{\nano\metre}, from only the shortwave infrared spectra (\SI{2000}{\nano\metre}-\SI{2500}{\nano\metre}), a commonly used spectral region in mineralogy.
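GP regression with the squared exponential covariance function mentioned above can be sketched in a few lines (a minimal sketch with fixed hyperparameters; the cited studies learn the hyperparameters by marginal likelihood optimization):

```python
import numpy as np

def gp_regress(X_train, y_train, X_test, length=1.0, sigma_n=0.1):
    """GP regression with a squared exponential covariance function;
    returns the posterior predictive mean and variance."""
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = kern(X_train, X_train) + sigma_n ** 2 * np.eye(len(X_train))
    Ks = kern(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                       # posterior mean
    var = np.diag(kern(X_test, X_test) - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var
```

The returned variance is what yields the per-pixel prediction confidence map: it is small near the training data and grows far from it.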
Gaussian processes can be employed for classification by using discrete likelihood functions. However, with these likelihoods Bayesian inference has no analytical solution, and approximate inference methods have to be used. In \cite{bazi2010}, Gaussian process classification was used to classify pixels in the image using squared exponential and neural network covariance functions, and a logistic likelihood with Laplace and expectation propagation inference methods. In the experiments, their method outperformed support vector machines. It was seen that the runtime of the Gaussian process classifier grew rapidly as the number of training samples increased, and the authors proposed using approximate sparse GP methods, such as informative vector machines, for faster operation at the cost of some performance in applications where runtime is critical. An active learning based classification method that introduces three heuristics for Gaussian process classification of hyperspectral images was proposed in \cite{sun2015}. A spatial-spectral classification scheme using GPs was introduced in \cite{jun2011}. In this method, a set of GPs was used to model the per-class spatial variations in the reflectance in each band of the image pixels, and for each class the mean reflectance estimate at each pixel was used as the mean of the multivariate class-conditional normal distribution at that pixel. The observation angle dependent (OAD) function is a covariance function designed primarily for classifying minerals from hyperspectral data with GPs~\cite{melkumyan2009,schneider2014}. Five parsimonious Gaussian processes for modeling class-conditional distributions in a quadratic discriminant classifier, when limited training samples are available, were introduced in \cite{fauvel2015}. Parsimony was enforced through constraints on the eigendecomposition of the covariance matrices, assuming that the discriminating information lies in a lower-dimensional subspace.
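A minimal sketch of the Laplace-approximation route to GP classification described above: with a logistic likelihood, Newton iterations find the mode of the latent posterior, and the latent mean at test points is squashed through the sigmoid. The kernel, toy data, and iteration count are illustrative assumptions, not the setup of \cite{bazi2010}.

```python
import numpy as np

def rbf(X1, X2, ls=0.5):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :])**2 / ls**2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gp_classify(X, y, X_test, ls=0.5, n_iter=50):
    """Binary GP classification (labels y in {-1,+1}) via the Laplace
    approximation: Newton iterations locate the mode of the latent
    posterior under a logistic likelihood."""
    K = rbf(X, X, ls) + 1e-8 * np.eye(len(X))
    f = np.zeros(len(X))
    t = (y + 1) / 2                          # labels mapped to {0, 1}
    for _ in range(n_iter):
        pi = sigmoid(f)
        grad = t - pi                        # d log p(y|f) / df
        W = pi * (1 - pi)                    # negative Hessian (diagonal)
        B = np.eye(len(X)) + K * W           # I + K W
        f = np.linalg.solve(B, K @ (W * f + grad))   # Newton step
    grad = t - sigmoid(f)
    f_test = rbf(X_test, X, ls) @ grad       # posterior latent mean
    return sigmoid(f_test)                   # class probabilities

# toy 1-D separable problem
X = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([-1, -1, -1, 1, 1, 1])
p = gp_classify(X, y, X)
```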
The Gaussian process latent variable model (GP-LVM)~\cite{lawrence2004} can be used for non-linear unmixing and endmember extraction~\cite{altmann2013}. This approach uses a GP-LVM to perform non-linear dimensionality reduction to obtain abundances at each pixel of the image while enforcing positivity and sum-to-one constraints. Then, a GP regression model is trained to predict the pixel spectra from the estimated abundances. Finally, the endmember spectra are predicted by that GP regression model using pure abundances as input. This method works in reverse order compared to most other combined endmember and abundance estimation methods, in that the abundances are estimated before the endmembers are extracted. \subsection{Dirichlet processes} \label{DP_subsection} The Dirichlet process (DP)~\cite{teh2011} is a Bayesian non-parametric model typically used to cluster data by modeling the data as a mixture of a possibly infinite number of components. A random distribution is said to be drawn from a Dirichlet process if its marginals over every finite partition of the space follow a Dirichlet distribution. When used for clustering, the number of clusters modeled by a Dirichlet process can grow with the data and need not be set beforehand, unlike in most clustering methods. Dirichlet processes have been applied to classifying and unmixing hyperspectral data. In \cite{Jun2013}, a spatially adaptive, semi-supervised DP based classification algorithm was proposed. This algorithm modeled the data by a DP based infinite mixture of Gaussians, with the Gaussian means given by spatially varying Gaussian processes. This method is capable of discovering new classes in the unlabeled samples in the training set, which is an uncommon feature among hyperspectral classification algorithms. A DP based joint endmember extraction and linear unmixing algorithm was proposed in \cite{mittelman2012}.
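The property that DP clusters can grow with the data can be illustrated with the Chinese restaurant process, the predictive partition rule underlying DP clustering. This is a toy simulation with an assumed concentration parameter, not any of the cited algorithms.

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from the Chinese restaurant process:
    item i joins an existing cluster with probability proportional to its
    size, or starts a new cluster with probability proportional to alpha."""
    counts = []
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                 # open a new cluster
        else:
            counts[k] += 1                   # join an existing cluster
    return counts

rng = np.random.default_rng(0)
small = crp_partition(50, alpha=2.0, rng=rng)
large = crp_partition(5000, alpha=2.0, rng=rng)
# the expected number of clusters grows roughly like alpha * log(n),
# so it need not be fixed in advance
```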
This method used a sticky hierarchical DP as a spatial prior for the abundances in a Bayesian linear unmixing framework and Gibbs sampling to infer the posterior distributions of the endmembers and the abundances. The piecewise convex endmember (PCE) detection algorithm, proposed in \cite{zare2010}, models hyperspectral data as a collection of disjoint convex regions. A DP was used to determine the number and location of these convex regions, and a maximum a posteriori estimation method was used to estimate the Gaussian distributed endmembers in those regions. A stochastic expectation maximization algorithm was used to iteratively refine the extracted abundances and endmembers. A fully Bayesian version of this algorithm that uses Metropolis-within-Gibbs sampling for inference was introduced by the authors in \cite{zare2013}. \subsection{Deep learning} Deep learning~\cite{Goodfellow-et-al-2016,bengio2013} methods apply a hierarchy of non-linear transforms to the data with the goal of generating an abstract, useful representation. The growth in the development of graphical processing units (GPUs), the availability of large datasets, and innovations in training deep networks, such as dropout, rectified linear units, residual learning, batch normalization, and dense connections, have led to state-of-the-art performance in computer vision, speech recognition, natural language processing, and many other engineering disciplines. Remote sensing researchers have also developed numerous deep learning based data analysis methods that have produced top performance. Currently, the main focus is on the land cover classification task, but in the future we can expect deep learning to be used for other tasks as well. Due to its popularity, a tutorial~\cite{zhang2016} was recently published to introduce deep learning to remote sensing researchers. In this section, we review the development of deep learning methods for hyperspectral analysis.
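As a minimal illustration of the "hierarchy of non-linear transforms" idea, the sketch below stacks three affine-plus-ReLU layers that map a toy 64-band spectrum to increasingly abstract, lower-dimensional features; all sizes and weights are illustrative assumptions.

```python
import numpy as np

def layer(x, W, b):
    """One non-linear transform: an affine map followed by a ReLU."""
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
spectrum = rng.random(64)                     # a toy 64-band pixel spectrum

# a hierarchy of three transforms: 64 bands -> 32 -> 16 -> 8 features
sizes = [64, 32, 16, 8]
params = [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

h = spectrum
for W, b in params:
    h = layer(h, W, b)                        # increasingly abstract features
```

In a trained network, the weights of each layer would be learned from data; here they are random placeholders that only demonstrate the flow of the representation through the hierarchy.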
Chen et al.~\cite{chen2014_deep} proposed classifying the features learned by stacked autoencoders with logistic regression for hyperspectral classification. Autoencoders~\cite{vincent2010} are neural networks that map the input to a hidden layer smaller than the input and then try to reconstruct the input signal at the output from the hidden layer activations, so that the activations in the hidden layer provide a compact, non-linear representation of the input. Autoencoders can be stacked, such that the hidden representation of one is the input of the next, and fine-tuned to learn deep features of the input. Chen et al.\ showed that the features generated from hyperspectral images by stacked autoencoders were better than those generated by commonly used feature extraction methods, such as principal component analysis (PCA), kernel PCA (kPCA), independent component analysis (ICA), non-negative matrix factorization, and factor analysis. They extended this work by using a deep belief network for learning deep representations in \cite{chen2015}. Deep belief networks are formed by stacking a kind of generative probabilistic neural network called the restricted Boltzmann machine~\cite{le2008}. In both papers, three kinds of features were tested--spectral, spatial-dominated, and spatial-spectral. The spectral features were learned using the raw spectra taken from the pixels of the hyperspectral image as input. The spatial-dominated features used the dominant principal components (PCA components) of the pixel neighborhood as the input to the feature extractor, and the spatial-spectral approach used the concatenation of the raw pixel spectrum and the dominant PCA components of the pixel neighborhood as the input. They found that the spatial-spectral features performed better than the spatial-dominated features, which in turn performed better than the spectral features.
Their method with spatial-dominated and spatial-spectral features also outperformed widely used spatial-spectral support vector machines (SVMs). It was seen that the accuracy generally grew with the number of PCA components used up to a limit, and then remained constant. They found that accuracy is highly dependent on the depth of the network and suggested using schemes like cross-validation to learn the best depth. Liu et al.~\cite{liu2015} combined a stacked denoising autoencoder (an autoencoder trained on noisy input) and super-pixel segmentation~\cite{li2012} for spatial-spectral classification. Their method used a three-layered stacked denoising autoencoder trained on the pixels of an image to generate features for a classification map. This classification map was then segmented using a super-pixel approach, and all pixels in a segment were assigned a common class based on majority voting. Stacked autoencoders that jointly learn spatial-spectral features have also been proposed. Ma et al.~\cite{ma2015} proposed using the PCA bands of the patch around a pixel together with the pixel's spectrum as joint input to stacked autoencoders for generating spatial-spectral features. They also showed in \cite{ma2016} that promoting pixels with smaller spectral angles to have the same hidden layer representation in autoencoders trained on pixels, and spatially averaging the hidden layer representation of each autoencoder before stacking the subsequent autoencoder, can produce good spatial-spectral features. A popular deep learning architecture for vision tasks is the convolutional neural network~\cite{krizhevsky2012} (CNN). Inspired by the mammalian visual system, these neural networks contain a series of convolution layers, non-linearity layers, and pooling layers for learning low-level to high-level features.
In the convolution layers, rather than being fully connected to the input, each hidden layer unit is connected via shared weights to a local receptive field (a pixel neighborhood for images) of the input. The non-linearity layers make the activations a non-linear function of the input. In the pooling layers, the responses at several input locations are summarized, via max operations, to build invariance to input translations. Networks with one dimensional~\cite{hu2015}, two dimensional~\cite{romero2015}, and three dimensional~\cite{chen2016} convolutional layers have been developed for analyzing hyperspectral data. Methods with one dimensional convolution layers take spectra as input and learn features that capture only spectral information. A new five layered convolutional neural network trained on the spectra for classification was proposed in \cite{hu2015}. This architecture contains an input layer, a convolution layer, a max pooling layer, a fully connected layer, and an output layer. It was tested on multiple hyperspectral datasets and outperformed the SVM and the existing shallow and deep neural networks, namely, a two layer fully connected neural network, LeNet-5~\cite{lecun1995}, and the stacked autoencoder based method~\cite{chen2014_deep}. Li et al.~\cite{li2017_deep} devised a scheme to perform spatial-spectral classification using a 1D CNN. They trained a 1D CNN on pairs of spectra to predict the common class if the spectra in a pair belong to the same class and a 0-th class otherwise. During testing, the test pixel and its neighboring pixels were paired one-by-one and fed to the network, and the class of the test pixel was found by majority voting. Generally, methods that use two dimensional convolution layers first reduce the number of bands in the image using a dimensionality reduction technique, such as PCA, and then apply a two dimensional convolutional network on image patches to extract features.
These methods reduce the number of bands to control the size of the network, as larger networks require more training data, which is limited. In \cite{zhao2016}, the feature vector extracted from the last layer of a convolutional neural network, trained on the principal component bands of image patches, was concatenated with a spectral feature vector obtained by applying a manifold based dimensionality reduction technique to the spectra, yielding spatial-spectral features. Yue et al.~\cite{yue2015} generated spectral feature maps by taking outer products between different segments of the pixel's spectrum and stacked them with spatial feature maps consisting of the dominant PCA bands of the image patch around the pixel to form a spatial-spectral input, which was used to train a six layered convolutional neural network. In \cite{romero2015}, features generated by a sparse unsupervised convolutional neural network, trained using the Enforcing Population and Lifetime Sparsity (EPLS)~\cite{romero2015b} algorithm, showed better classification performance than features generated by PCA and kernel PCA with a radial basis function kernel. Ghamisi et al.~\cite{ghamisi2016_deep} proposed a band selection method that chose the bands maximizing the accuracy of a 2D CNN over a validation set using particle swarm optimization. Liang et al.~\cite{liang2016_deep} integrated a 2D CNN and dictionary learning by learning a sparse code from the last layers of a supervised 2D CNN and using the reconstruction loss to predict the class. They found that using dictionary learning was better than using support vector machines. Since the scale of the spatial features in a hyperspectral image is highly dependent on the instantaneous field of view and the geometry of the imaging system, Zhao et al.~\cite{zhao2015_deep} proposed a multiscale convolutional autoencoder operating on three PCA bands to build scale invariance for classification.
Their model passes the Laplacian pyramid of the three dominant PCA bands of an image through two layers of a convolutional autoencoder to generate spatial features, which are concatenated with the spectra and fed to logistic regression. Because of its scale invariance, their method performed better than state-of-the-art spatial-spectral classification algorithms, namely the extended morphological profile method~\cite{benediktsson2005} and the multilevel logistic based method~\cite{li2010}. Aptoula et al.~\cite{aptoula2016_deep} proposed using attribute profiles~\cite{mura2010_deep} as input to a 2-D CNN for hyperspectral images, as these profiles can capture structural information in an image at various scales. Chen et al.~\cite{chen2016} proposed a three dimensional end-to-end convolutional neural network to predict the material class from the image patch around the test pixel. The term ``end-to-end'' denotes networks that take the raw input signal at the input layer and produce the final output in the output layer, without any pre-processing or post-processing. Three dimensional convolutional networks have an advantage over two dimensional ones in that they can directly learn spatial-spectral features, as their filters span both the spatial and spectral axes. They also proposed augmenting the training data with image patches generated by modeling illumination changes in, and linear mixing between, the training examples. By augmenting the training data, they were able to train the proposed network without any dimensionality reduction of the image bands, whereas previous hyperspectral deep learning methods had relied on dimensionality reduction to decrease the number of bands in the image and hence limit the number of parameters in the model. The same idea of using a 3D CNN for hyperspectral classification was also presented in \cite{li2017_deep_classification}.
This work showed that the 3D CNN outperformed baseline stacked autoencoders, deep belief networks, and 2D CNNs. Similarly, a 3D residual~\cite{he2016_deep} CNN was proposed in \cite{zhong2017_deep}. Residual networks have residual blocks which learn the difference between the target function and the input instead of learning the whole target function, which makes them more robust against overfitting when there are a large number of layers in the network. Recently, generative adversarial networks (GANs)~\cite{goodfellow2014generative} were proposed for semi-supervised classification by Zhi et al.~\cite{zhi2017_deep}. GANs are generative models that learn to generate samples from the data distribution using two competing neural networks, one called the generator and the other called the discriminator. Zhi et al.\ trained a generator and a discriminator on spatial-spectral features obtained from a hyperspectral image. Then, they appended a softmax layer to the discriminator network and fine-tuned the network to perform classification. Their method was found to be effective when the number of training examples was small. An architecture with two processing streams, a 1-D CNN stream for spectral information (trained on the spectrum) and a 2-D CNN stream for spatial information (trained on the image patch around the pixel), was proposed in \cite{zhang2017_deep}. It differs from methods that concatenate spectral and spatial features before the classifier in that it jointly optimizes the spectral and spatial feature extractors. Similarly, Yang et al.~\cite{Yang2017_deep} performed transfer learning using a two stream spatial-spectral network. They trained the network using two fully annotated source images, fixed the weights of the earlier layers, and fine-tuned the final layers on the test images. They found that transferring weights produces better results than training the entire network from scratch on the test image.
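The residual-learning idea above can be sketched in a few lines: the block outputs its input plus a learned correction, so with zero weights it is exactly the identity, which is what keeps very deep residual networks easy to optimize. The function below is a generic illustration, not the architecture of \cite{zhong2017_deep}.

```python
import numpy as np

def residual_block(x, W1, W2):
    """A minimal residual block: the layers learn only the correction
    F(x) to the input, and the identity shortcut adds x back."""
    h = np.maximum(x @ W1, 0.0)      # stand-in for a conv/fc layer + ReLU
    return x + h @ W2                # identity skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

# with zero-initialized weights the block is exactly the identity map,
# so stacking many such blocks cannot degrade the signal at initialization
W1 = np.zeros((16, 16))
W2 = np.zeros((16, 16))
out = residual_block(x, W1, W2)
```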
Santara et al.~\cite{santara2017_deep} proposed an end-to-end network that selects groups of non-overlapping bands, processes them in identical parallel processing streams, and combines them to produce the final output. A fully convolutional network (FCN) is an end-to-end deep network with only convolutional layers and no fully connected layers; it is trained to map an arbitrarily sized image directly to its classification map. The method by Jiao et al.~\cite{jiao2017_deep} used the FCN-8~\cite{shelhamer2017_deep} network, pretrained on RGB ground based images, to generate multi-scale features, which were then weighted and concatenated with the spectrum to form spatial-spectral features for classification. Since FCN-8 is trained on 3 bands (red, green, and blue), this method uses only the 3 dominant PCA bands of the hyperspectral image as input. In \cite{lee2017_deep}, a novel fully convolutional architecture was proposed for hyperspectral images. The network was trained with hyperspectral image patches as input and the corresponding ground truth maps of the patches as output. This network used a multi-scale convolutional filter bank in the first layer to extract features at multiple scales. The network also contained skip connections for residual learning~\cite{he2016_deep}, and data augmentation (mirroring along the horizontal, vertical, and diagonal axes) was used to make optimization easier. Due to the fully convolutional nature of the network, the entire test image could be passed into the network at test time to generate the classification map of the whole image simultaneously. Tao et al.~\cite{tao2015} proposed using two layered stacked sparse autoencoders (autoencoders which promote sparsity of the hidden layer activations during training) to learn multiscale spatial-spectral features in an unsupervised manner. Their method applied PCA to a hyperspectral image and extracted random square patches of different sizes to learn a set of autoencoders from the dominant principal component bands at different scales.
The results from all the autoencoders were concatenated to obtain a final feature vector, which was used for classification with a linear SVM. This was the first study that investigated the transferability of features in hyperspectral images. They found that features learned on a separate source image can be as good as features learned on the test image itself. Similarly, Kemker and Kanan~\cite{kemker2017_deep} proposed using multi-scale ICA and stacked convolutional autoencoders to learn unsupervised spatial-spectral representations from image patches. Compared to \cite{tao2015}, they used a larger training set from multiple sensors for feature learning. On the other hand, studies~\cite{Yang2017_deep, mei2017_deep} have investigated the idea of pretraining a CNN on an annotated source hyperspectral image and then fine-tuning only the final layers on the test hyperspectral image to learn spatial-spectral representations in a supervised manner. A domain adaptation based method that learns features invariant to the difference in the distributions of the source image and the destination image was proposed in \cite{elshamli2017_deep}. Their method uses an end-to-end domain-adversarial neural network~\cite{ganin2016_deep} to learn features that maximize the separability of material classes but minimize the separability based on whether the features came from the source image or the destination image. These features were found to be better than features learned by applying a denoising autoencoder, PCA, or KPCA to the source image, and better than baseline deep networks trained on the source and destination images. Even though all the algorithms discussed in this section have been classification algorithms, deep learning can also be used for other tasks. A deep and transfer learning based anomaly detector was recently published in \cite{li2017_deep_anomaly}. In this method, a two-class one dimensional CNN was trained to detect the dissimilarity between two spectra.
The training set was generated by selecting pairs of spectra from a fully annotated separate training image. A pair of spectra was assigned to the 0-class if the spectra belonged to the same material and to the 1-class if they belonged to different materials. During testing, each pixel in the test image was compared to all its neighbors and the scores obtained from the network were averaged. Anomalies were then detected by thresholding this score. Recurrent neural networks (RNNs) are popular architectures for modeling sequential data. They contain feedback loops in their computation, allowing the current output to depend on the current input as well as the previous inputs. This is different from all the networks discussed previously, which used feedforward computations to produce the output. Mou et al.~\cite{mou2017_deep} proposed using an RNN to model the pixel spectra in a hyperspectral image as 1-D sequences for classification. They experimented with architectures based on two kinds of recurrent units, namely, the long short-term memory~\cite{graves2005_deep} (LSTM) and the gated recurrent unit~\cite{cho2014_deep} (GRU). They found that the GRU worked better than the LSTM for modeling hyperspectral data, and both recurrent networks outperformed traditional approaches and a baseline CNN. Similarly, Wu et al.~\cite{wu2017_deep} showed that a convolutional RNN~\cite{zuo2015_deep} (a network with a few convolutional layers followed by an RNN) is a better choice for spectrum classification than the LSTM and a baseline CNN. \section{Open Issues and Future Challenges} \label{section_open_issues} \subsection{Curse of dimensionality} The high dimensionality of hyperspectral data is a well-documented problem in remote sensing. Some of the techniques proposed to tackle it are dimensionality reduction~\cite{shaw2002_llm}, fusion of spatial information~\cite{fauvel2013}, transfer and multitask learning~\cite{Yang2017_deep,tuia2011}, and supplementation of the training data with synthetic examples~\cite{chen2016}.
However, the problem of high dimensionality is far from solved. Because hyperspectral images are oversampled in the spectral dimension, redundancies exist between the bands of a hyperspectral image. A big question is the intrinsic dimensionality of hyperspectral data~\cite{chang2018review}, i.e., the minimum number of bands required to carry all the information in an image. Quantifying the intrinsic dimensionality would not only lead to the development of more efficient analysis algorithms, but also aid the design of efficient hyperspectral sensors with an optimized number of bands and optimized sampling. \subsection{Robustness and reliability of models} The grand challenge of hyperspectral data analysis is to build models that are invariant to differences in the time and season of image acquisition, the site, the platform, the spatial resolution, the spectral resolution and sampling intervals, and the sensor technology. This is an ambitious task, as there is massive variability in the spectra when any one of those factors is even slightly altered. Therefore, studies so far have mostly concentrated on developing models that work for a particular image. These models are typically trained using the labeled pixels in the test image itself, or using ground spectra and the chemical analysis of samples collected over the imaged site. However, the eventual goal is to build universal models that, once trained, can be applied, directly or with slight modifications, to other images taken under conditions different from the original. Interest in such models is beginning to grow in the remote sensing community, as seen by the contest problem in the 2017 IEEE Geoscience and Remote Sensing Society (GRSS) data fusion competition\footnote{\url{http://www.grss-ieee.org/community/technical-committees/data-fusion/}}.
The task in the competition is to map the land use of new test cities using ground truth land use information of separate cities in the training images, with multispectral data collected by different satellites in different seasons. Once satellite based hyperspectral imaging becomes more mature, such datasets will become available for hyperspectral images as well, making it possible to classify land covers into finer classes than is currently possible with multispectral images. \subsection{Big data without ground truth} After the initial cost of the instruments, it is often cheaper to obtain large quantities of hyperspectral images than to collect ground truth information for even a small area. For some kinds of ground truth information, such as the chemical composition of materials needed for unmixing and physical parameter estimation, it is only practical to obtain ground truth for a few samples collected from the scene. This has led to the availability of large databases of images without ground truth or with very limited ground truth. With the increasing quality and miniaturization of sensors and decreasing costs, this amount is only expected to grow exponentially as unmanned aerial vehicle-based and satellite-based imaging become more popular. The availability of this massive amount of ground truth-less images raises an interesting question: is it possible to utilize the huge quantity of unlabeled images to improve data analysis models? The current unsupervised, semi-supervised, and active learning algorithms may not be able to handle this huge volume of data, so large-scale algorithms that can utilize unlabeled images need to be developed. \subsection{Lack of standardized datasets and experiments} There is a lack of benchmarking datasets for hyperspectral analysis.
Without a standard procedure to evaluate methods under real-world scenarios, researchers and practitioners cannot make an educated choice in picking the right method for their problem. Since researchers use different datasets under different experimental conditions, it is often impossible to compare two methods proposed in two different papers. On top of that, it is often difficult to reproduce a study because the implementation of the method is not readily available. These factors stifle the possibilities of follow-up work and the adoption of the methods. The IEEE Geoscience and Remote Sensing Society's (IEEE GRSS) Data and Algorithm Standard Evaluation website\footnote{\url{http://dase.ticinumaerospace.com/}}, the IEEE GRSS annual data fusion contest, and the Rochester Institute of Technology's target detection blind test website\footnote{\url{http://dirsapps.cis.rit.edu/blindtest/}} are a few of the current efforts to provide benchmarks for comparing different hyperspectral analysis methods. There are only a few publicly available standard datasets for hyperspectral analysis. Datasets for land cover classification are the most common, though even they are only a handful in number, and almost all new classification methods are tested on them. However, these test images are small compared to real-life images and lack variability and diversity, and most of them were captured by the same sensor. This raises the question of whether current methods that work well on these few small images will generalize to large real-world images. There are even fewer datasets for other tasks. Most unmixing algorithms demonstrate their efficacy on simulated data, as there are almost no publicly available unmixing datasets with ground truth abundances (one available dataset can be found online\footnote{\url{http://www.planetary.brown.edu/relabdocs/LSCCsoil.html}}).
Researchers working on target detection and physical parameter estimation tend to use their own data rather than standard datasets, even though a few standard datasets are available for these tasks. Hyperspectral change detection seems to have no standard datasets. This is among the reasons why more methods have been developed for land cover classification than for any other data analysis task, as evidenced by the sheer number of land cover classification methods published compared to the others. More datasets related to target detection, unmixing, and physical parameter estimation need to be prepared. We also need more diverse datasets for land cover classification. The development of newer datasets should be one of the top priorities of the hyperspectral remote sensing community. \begin{sidewaystable}[ph!] \fontsize{6}{4}\selectfont \caption{Summary of all reviewed methods.} \label{tab:summary} \centering \begin{tabularx}{\paperwidth}{@{}lXXXXXXX@{}} \toprule Methods & \multicolumn{2}{c}{Classification} & \multicolumn{2}{c}{Target Detection} & \multicolumn{2}{c}{Unmixing} & Parameter Estimation\tabularnewline & Pixel-wise & Spatial-spectral & Target & Anomaly & Linear & Non-linear & \tabularnewline \midrule Gaussian Models & \cite{dalponte2013,bandos2009,li2011}, transfer learning\cite{persello2012} & & \cite{manolakis2003} & \cite{chang2002} & & & \tabularnewline Linear Regression & & & & & \cite{heinz2001} & \cite{heylen2015} & band selection \cite{wang2008,treitz1999,kokaly2009,kokaly1999}\tabularnewline Logistic Regression & \cite{khodadadzadeh2014}, band selection\cite{cheng2006,zhong2008_logistic,pant2014,wu2015} & \cite{qian2013,huang2014}, semi-supervised\cite{li2013,dopido2014} & & & & & \tabularnewline Support Vector Machines & \cite{huang2002,melgani2004,braun2012,demir2007,mianji2011,tuia2010learning, gu2012representative, wang2016discriminative}, band selection\cite{bazi2006,pal2010}, semi-supervised\cite{chi2007}, transfer
learning\cite{sun2013,persello2013}, change detection\cite{nemmour2006} & \cite{benediktsson2005, fauvel2008, gu2016nonlinear, liu2016class, li2015multiple, camps2006composite, fang2015classification} & \cite{sakla2011} & \cite{banerjee2006,khazai2011,gurram2011} & \cite{mianji2011b}, endmember extraction and sub-pixel mapping\cite{villa2011} & \cite{wang2009,gu2013} & semi-supervised\cite{camps_valls2009}, active learning\cite{pasolli2012}, multi-output\cite{tuia2011}\tabularnewline Gaussian Mixture Models & \cite{dundar2002_gmm,li2012_gmm,li2014_gmm}, unsupervised \cite{tarabalka2009_gmmb,shah2007_gmm}, band selection \cite{su2011_gmm} & \cite{yang2010_gmm,tarabalka2009_gmmb} & & \cite{tarabalka2009_gmm,duran2007_gmm} & & & \tabularnewline Latent Linear Models & dimensionality reduction \cite{shaw2002_llm,lee1990_llm,fauvel2009_llm,nielsen2011_llm,wang2006_llm}, unsupervised \cite{du2006_llm,shah2007_gmm} & feature extraction \cite{plaza2009_llm,dalla2011_llm} & & & \cite{nascimento2005_llm} & & \cite{carrascal2009_llm,gomez2008_llm,cho2007_llm,hansen2003_llm,bei2011_llm}\tabularnewline Sparse Linear Models & \cite{haq2012,castrodad2011}, feature extraction \cite{charles2011} & \cite{liu2013,chen2011,chen2013,zhang2014}, feature extraction \cite{du2015,fan2017_sparse} & spatial-spectral \cite{chen2011b} & & \cite{iordache2011,themelis2012,greer2012,lu2013}, spatial-spectral\cite{iordache2012,iordache2014,iordache2014b,castrodad2011} & & \tabularnewline Ensemble Learning & \cite{waske2010,chen2014,gurram2013,ramzi2014,chi2009,kawaguchi2007,ham2005,damodaran2015,xia2014}, transfer learning \cite{rajan2006}, transfer and active learning \cite{matasci2012} & \cite{merentitis2014,samat2014,xia2015b,xia2015} & & \cite{peerbhay2015} & & & band selection \cite{adam2014,mutanga2012,rahman2013,ismail2010}\tabularnewline Directed Graphical Models & & & & & \cite{dobigeon2008,themelis2012,eches2010,moussaoui2006,Dobigeon2009,schmidt2010} &
\cite{halimi2011,altmann2012,altmann2014} & \tabularnewline Undirected Graphical Models & & \cite{zhong2010,bores2011,tarabalka2010,xu2015,merentitis2014,li2015,zhong2011,golipour2015,li2014_gmm,li2013spectral,jackson2002adaptive,zhang2012simplified,zhong2014support,zhao2015detail}, band selection \cite{zhong2008}, semi-supervised \cite{li2010}, active learning \cite{li2011,sun2015_UGM}, sub-pixel mapping \cite{wang2013} & & & spatial-spectral \cite{eches2011,eches2013}, sub-pixel mapping \cite{zhao2015} & spatial-spectral \cite{altmann2014_UGM,altmann2015} & \tabularnewline Clustering & unsupervised \cite{villa2013,huang2008,baldeck2013_gmm,narumalani2006_gmm,Jun2013}, band selection \cite{jia2012,yang2013feature}, dimensionality reduction \cite{su2011_gmm}, semi-supervised \cite{camps2007} & & & \cite{duran2007_gmm} & & & \tabularnewline Gaussian Processes & \cite{bazi2010,melkumyan2009,fauvel2015}, active learning \cite{sun2015} & \cite{jun2011} & & & & endmember extraction\cite{altmann2013} & \cite{verrelst2012,verrelst2013,gredilla2014,murphy2014}\tabularnewline Dirichlet Processes & & \cite{Jun2013} & & & endmember extraction \cite{Jun2013,mittelman2012,zare2010,zare2013} & & \tabularnewline Deep Learning & supervised feature learning\cite{mou2017_deep,wu2017_deep}, unsupervised feature learning\cite{vincent2010,chen2014_deep,chen2015} & unsupervised feature learning \cite{chen2014_deep,chen2015,liu2015,zhao2015_deep}, supervised feature learning \cite{hu2015,romero2015,chen2016,zhao2016,yue2015,ma2015,ma2016,li2017_deep_classification,aptoula2016_deep,zhang2017_deep,jiao2017_deep, lee2017_deep, zhong2017_deep, santara2017_deep, li2017_deep, liang2016_deep}, semi-supervised\cite{zhi2017_deep}, supervised transfer learning \cite{Yang2017_deep,mei2017_deep,elshamli2017_deep}, unsupervised transfer learning \cite{tao2015,kemker2017_deep,elshamli2017_deep}, supervised band selection\cite{ghamisi2016_deep} & & \cite{li2017_deep_anomaly} & & & \tabularnewline 
\bottomrule \end{tabularx} \end{sidewaystable} \section{Summary} \label{section_conclusion} Over the years, machine learning has become the primary tool for the analysis of hyperspectral images. With the literature booming with new methods, choosing the right method for a problem can become a daunting task. This paper addressed this issue by creating a catalog of published methods. We have summarized all of the discussed methods in Table \ref{tab:summary}. The current hot topics in hyperspectral data analysis are deep learning and data fusion. Deep learning has been primarily tested for land cover classification. Hyperspectral datasets are infamous for having small amounts of ground truth data, while deep learning methods are infamous for requiring large amounts of it. Current methods get around this problem by training the network on labeled pixels or small patches of an image and testing on the remaining unlabeled pixels in the same image, instead of training and testing on separate images. They additionally employ regularization schemes, such as data augmentation and early stopping, to prevent overfitting. Since the current models are learned on training sets consisting of spectra from a single image, which are typically small, these models cannot capture the spectral variability that occurs due to differences in factors such as illumination, atmospheric conditions, sun angle, viewing geometry, and resolution. If there were large-scale datasets with labeled images covering diverse scenes for training, it is highly likely that the networks would be able to learn a representation of the data resistant to spectral variability, and would perform well when tested on unseen new images. This highlights the urgent need for large-scale datasets for land cover classification.
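To make the single-image protocol described above concrete, the following is a minimal sketch of the usual pixel-level train/test split (the synthetic cube, label map, patch size, and helper names are illustrative placeholders, not taken from any of the surveyed works):

```python
import numpy as np

# Sketch of the common evaluation protocol: train on a fraction of the
# labeled pixels of ONE hyperspectral image, test on the remaining labeled
# pixels of the SAME image. All data here is synthetic and illustrative.

rng = np.random.default_rng(0)
H, W, B = 64, 64, 100                       # height, width, spectral bands
cube = rng.normal(size=(H, W, B))           # synthetic hyperspectral cube
labels = rng.integers(0, 5, size=(H, W))    # 0 = unlabeled, 1..4 = classes

def split_labeled_pixels(labels, train_frac=0.1, rng=None):
    """Randomly split the labeled pixel coordinates into train/test sets."""
    rng = rng or np.random.default_rng()
    rows, cols = np.nonzero(labels)         # ignore unlabeled (0) pixels
    order = rng.permutation(len(rows))
    n_train = int(train_frac * len(rows))
    tr, te = order[:n_train], order[n_train:]
    return (rows[tr], cols[tr]), (rows[te], cols[te])

def extract_patches(cube, coords, size=5):
    """Cut size x size spatial patches centred on the given pixels."""
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    return np.stack([padded[r:r + size, c:c + size, :]
                     for r, c in zip(*coords)])

train_xy, test_xy = split_labeled_pixels(labels, train_frac=0.1, rng=rng)
X_train = extract_patches(cube, train_xy)   # inputs for a patch-based network
y_train = labels[train_xy]                  # per-pixel class labels
```

A network trained on `X_train` is then evaluated on patches around `test_xy`. Since both sets come from the same image, the resulting accuracies say little about robustness to cross-image spectral variability, which is why large-scale multi-scene datasets are needed.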
The arrival of such datasets could have an impact on the field of hyperspectral image analysis as powerful as that of ImageNet~\cite{deng2009imagenet} on the growth of computer vision and deep learning. While the development of robust supervised deep learning methods is contingent on the development of large-scale datasets, unsupervised deep learning can learn robust representations of hyperspectral data from the vast amount of already available unlabeled images. Unsupervised learning is also better suited for other hyperspectral image analysis tasks, for which large-scale datasets are unlikely to be available due to the cost associated with the collection of ground truth. Studies have already investigated the transferability of learned unsupervised features between images of diverse scenes~\cite{tao2015,kemker2017_deep}. Generative adversarial networks (GANs) and variational autoencoders (VAEs) look very promising for modeling unlabeled hyperspectral data. GANs and VAEs could characterize the spectral variability by modeling the generative distribution of the spectra. Such generative models, in turn, can be used as priors for classification algorithms to make them invariant to spectral variability. If such generative models are conditioned on the set of biophysical parameters of the material, they could be used as data-driven forward models in lieu of physics-based radiative transfer models. These models could find usage in non-linear unmixing and physical parameter estimation. Similarly, generative models conditioned on the material class could also serve as a spectral library. GANs and VAEs could also model the spatial prior of the land covers (similar to how GANs were used in \cite{luc2016semantic}). A deep learning based spatial prior could turn out to be better than Markov random field based priors for land cover classification.
GAN and VAE models for hyperspectral images should also prove useful for image processing tasks, such as pansharpening, superresolution, denoising, and inpainting. Multi-sensor, multi-resolution, multi-temporal, multi-modal, multi-platform, and multi-site data fusion still remains an open problem in remote sensing. In our opinion, deep learning methods are best suited for problems where multiple sources of imaging data have to be combined. Deep networks, in particular convolutional neural networks, can be trained to generate features which combine relevant aspects of all sources at each pixel location, which can be further processed for the desired task. Such an architecture has been investigated for the fusion of hyperspectral imagery with lidar data~\cite{ghamisi2017hyperspectral}, but similar architectures can be used to combine multi-sensor, multi-resolution, multi-temporal, multi-modal, and multi-platform images of the same scene. Another kind of data fusion would be to incorporate non-imaging data with imaging data. For instance, information from digital maps and geotagged user data could be used alongside information from remotely observed hyperspectral images for land cover mapping of urban areas. Due to their ability to model semantic relationships between different variables, conditional random fields (CRFs) could be best suited for this kind of information fusion. In fact, a higher order CRF was recently used to model the dependencies between the land use and land cover of a remotely imaged area~\cite{albert2017higher}. \bibliographystyle{plain}
\section{Introduction} The conifold transition is an important topic in the study of Calabi-Yau geometries and string theory. By the discoveries of \cite{Reid,GH,Wang}, the moduli spaces of three-dimensional CY complete intersections in a product of projective spaces are connected by conifold transitions. It was found by Friedman \cite{Friedman} and Smith-Thomas-Yau \cite{STY} that there exist non-trivial topological obstructions to conifold transitions. In the direction of mirror symmetry, HMS for the local resolved and deformed conifolds was proved by Chan-Pomerleano-Ueda \cite{CPU}. The mirror of the Atiyah flop was constructed in a joint work of the author with Fan-Hong-Yau \cite{FHLY} using a noncommutative mirror of the conifold. More generally, geometric transitions involve singularities of deeper levels than conifolds. Singularities worse than conifolds may occur when several Lagrangian spheres vanish simultaneously. For instance, generalized conifolds and orbifolded conifolds are two natural generalizations of the conifold. In \cite{AKLM}, Aganagic-Karch-L\"ust-Miemiec deduced that these two classes of singularities are mirror to each other by gauge-theoretical methods. In a joint work of the author with Kanazawa \cite{KL}, mirror symmetry for these local singularities was realized using the SYZ program. SYZ for general local Gorenstein singularities was derived in a previous work of the author \cite{L13} using the Lagrangian fibrations constructed by Gross \cite{Gross-eg} and Goldstein \cite{Goldstein}. A natural question is how to realize these geometric transitions and mirror symmetry in the compact setting. For compact manifolds, Castano-Bernard and Matessi \cite{CM2} gave a beautiful construction of conifold transitions and their mirrors using a symplectic version of the Gross-Siebert program that they developed earlier \cite{CM1}.
They studied the local affine geometries of the base of Lagrangian fibrations for local conifold transitions, and glued the local models of Lagrangian fibrations into a global geometry. In this paper we construct compact generalized and orbifolded conifolds by extending the method of \cite{CM2}. We construct the local affine geometries modeling the base of the SYZ fibrations on generalized and orbifolded conifold singularities (and also their smoothings and resolutions). Then we formulate global geometries that contain both of these singularities. We call them orbi-conifolds. The discriminant locus of the global structure naturally contains orbifolded edges and orbifolded positive or negative vertices. This gives an orbifold generalization of simple and positive tropical manifolds in the Gross-Siebert program. Schoen's Calabi-Yau threefold \cite{Schoen} provides an excellent source of examples of orbi-conifolds. The threefold is a resolution of the fiber product of two rational elliptic surfaces over the base $\mathbb{P}^1$. \cite{CM2} constructed compact conifolds which are degenerations of Schoen's CY, together with their mirrors, by using fan polytopes of toric blow-ups of $\mathbb{P}^2$. In this paper we treat all the reflexive polygons uniformly and construct compact orbi-conifolds and their mirrors. The result is the following. \begin{theorem}[Orbi-conifold degeneration of Schoen's CY] \label{thm:main} Each pair of reflexive polygons $(P_1,P_2)$ (\emph{where $P_1$ and $P_2$ are not necessarily dual to each other}) corresponds to an orbifolded conifold degeneration $O^{(P_1,P_2)}$ of Schoen's Calabi-Yau threefold, and also corresponds to a generalized conifold degeneration $G^{(P_1,P_2)}$ of a mirror of Schoen's Calabi-Yau. A resolution of $O^{(P_1,P_2)}$ is mirror to a smoothing of $G^{(\check{P}_1,\check{P}_2)}$ in the sense of Gross-Siebert, and vice versa. ($\check{P}$ denotes the dual polygon of $P$.)
\end{theorem} The connections between Calabi-Yau geometries and affine geometries play a key role. Gross \cite{Gross-topo} found a beautiful topological realization of mirror symmetry using affine geometries with polyhedral decompositions. Independently, Haase-Zharkov \cite{HZ1,HZ2,HZ3} found a brilliant construction of affine structures on spheres by using a pair of dual reflexive polytopes. These are useful for studying the geometries of Calabi-Yau complete intersections. Based on these pioneering works, Gross-Siebert \cite{GS1,GS2,GS07} developed their celebrated program of toric degenerations and formulated an algebraic reconstruction of mirror pairs. A symplectic version of the reconstruction was found by Castano-Bernard and Matessi \cite{CM1}. The construction in this paper is a natural generalization of the work of \cite{CM2}. To construct orbi-conifold degenerations of Schoen's CY, we first study degenerations of rational elliptic surfaces via affine geometry. The connection between affine surfaces and symplectic four-folds is very well understood and dates back to the early works of Symington \cite{Symington} and Leung-Symington \cite{LS}. They defined the notion of an almost toric fourfold, which is a symplectic fourfold with a Lagrangian fibration satisfying certain topological constraints. They classified almost toric fourfolds by using the affine bases of Lagrangian fibrations. Rational elliptic surfaces form a subclass of almost toric fourfolds and hence are well understood by \cite{Symington,LS}. Here we are concerned with singular surfaces with $A_n$-orbifold singularities, which will be important for the construction of orbi-conifolds. We construct the following. \begin{prop}[Rational elliptic surfaces with singularities] \label{thm:ell-intro} Each reflexive polygon $P$ corresponds to two rational elliptic surfaces $S_P$ and $S_{P}'$ with type $A$ singularities, where the configurations of singularities depend on the integral properties of $P$.
$S_P - D_{S_P}$ and $S_{\check{P}}' - D_{S_{\check{P}}'}$ are mirror to each other in the sense that they are discrete Legendre dual to each other, where $D$ denotes an anti-canonical divisor. \end{prop} The topology of elliptic surfaces is closely related to the `12 Property' for a reflexive polygon $P$. Namely, the sum of the affine lengths of the edges and the orders of the vertices of $P$ must be $12$. These numbers correspond to the number of singular fibers (counted with multiplicities) of a rational elliptic surface. Indeed, the `12 Property' holds for more general objects called legal loops \cite{PR}. We find that they correspond to topological (possibly non-Lagrangian) torus fibrations $M$ over $\mathbb{S}^2$, which have been extensively studied by \cite{Matsumoto,Iwase}. The multiple of $12$ corresponds to $-3\sigma/2 = - p_1 /2$, where $\sigma$ is the signature and $p_1$ is the first Pontryagin number of the real four-fold $M$. The organization is as follows. We review the SYZ construction for the related local singularities in Section \ref{sec:SYZ}. Then we construct the local affine geometries modeling the singularities in Section \ref{sec:loc-aff}. In Section \ref{sec:surf} we focus on the relation between rational elliptic surfaces and the `12 Property' for polygons. In Section \ref{sec:Schoen} we formulate the notion of an orbi-conifold and construct orbi-conifold transitions of Schoen's CY. \subsection*{Acknowledgment} The author is grateful to Ricardo Casta\~no-Bernard for introducing his beautiful joint work with Diego Matessi on Schoen's Calabi-Yau threefold. He expresses his gratitude to Atsushi Kanazawa for useful discussions and comments. He appreciates the continuous encouragement of Naichung Conan Leung and Shing-Tung Yau.
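As a concrete aside, the `12 Property' mentioned above can be verified by direct computation for standard reflexive polygons. The following sketch is an illustration added here for concreteness; in it, the order of a vertex is taken to be the absolute determinant of the primitive direction vectors of its two incident edges (an assumption of this illustration, matching the examples below):

```python
from math import gcd

# Check the `12 Property': for a reflexive polygon, the sum of the lattice
# (affine) lengths of the edges and the orders of the vertices equals 12.

def primitive(v):
    """Primitive integer vector in the direction of v."""
    g = gcd(abs(v[0]), abs(v[1]))
    return (v[0] // g, v[1] // g)

def twelve_sum(verts):
    """Sum of lattice edge lengths and vertex orders; vertices in cyclic order."""
    n = len(verts)
    edges = [(verts[(i + 1) % n][0] - verts[i][0],
              verts[(i + 1) % n][1] - verts[i][1]) for i in range(n)]
    lengths = sum(gcd(abs(e[0]), abs(e[1])) for e in edges)
    orders = sum(abs(primitive(edges[i - 1])[0] * primitive(edges[i])[1]
                     - primitive(edges[i - 1])[1] * primitive(edges[i])[0])
                 for i in range(n))
    return lengths + orders

square  = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # fan polygon of P^1 x P^1
diamond = [(1, 0), (0, 1), (-1, 0), (0, -1)]     # its dual polygon
p2      = [(1, 0), (0, 1), (-1, -1)]             # fan polygon of P^2
```

All three sums equal $12$; the vertex orders of $P$ computed this way are the edge lengths of the dual polygon $\check{P}$, recovering the statement that the numbers of boundary lattice points of $P$ and $\check{P}$ total $12$.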
\section{A quick review on the SYZ mirrors for local orbifolded conifolds} \label{sec:SYZ} In \cite{KL}, we studied SYZ mirror symmetry for the local generalized conifold $G_{k,l}$ and orbifolded conifold $O_{k,l}$, following the construction of \cite{auroux07,CLL,AAK}. For $k=l=1$, it reduces to mirror symmetry for the local conifold (which is self-mirror). In this section we recall the geometries of $G_{k,l}$ and $O_{k,l}$ and their SYZ mirror symmetry. \subsection{Lagrangian fibrations and SYZ mirrors} \begin{defn} A local generalized conifold is given by the equation $$xy=(1+z)^k(1+w)^l$$ in $\mathbb{C}^4$. \end{defn} It is a toric Gorenstein singularity whose fan is the three-dimensional cone spanned by the primitive vectors $(0,0,1),(k,0,1),(0,1,1),(l,1,1)$. In other words it is a cone over the trapezoid with vertices $(0,0,1),(k,0,1),(0,1,1),(l,1,1)$ contained in the affine plane in height 1. When $k,l \geq 2$, the set of singularities is the union of $\{x=y=0,z=-1\}$ and $\{x=y=0,w=-1\}$; when $k=1$ and $l\geq 2$, the set of singularities is $\{x=y=0,w=-1\}$; when $k=l=1$, the point $x=y=1+z=1+w=0$ is the only singularity. A smoothing is given by changing the polynomial $(1+z)^k(1+w)^l$ (while keeping the highest order) such that its zero set contains no critical point. A toric crepant resolution is given by subdividing the trapezoid into standard lattice triangles whose areas achieve the smallest value $1/2$. The resolution is small in the sense that the exceptional locus is a rational curve. For the purpose of SYZ mirror symmetry, we remove the anti-canonical divisor $\{zw=0\}$ and denote the resulting space as $G_{k,l}$. Its smoothing and resolution are denoted as $\tilde{G}_{k,l}$ and $\hat{G}_{k,l}$ respectively. \begin{defn} A local orbifolded conifold is given by the equations $$u_1v_1=(1+z)^k, \, u_2v_2=(1+z)^l$$ in $\mathbb{C}^5$. 
\end{defn} It is another toric Gorenstein singularity whose fan is the three-dimensional cone spanned by the primitive vectors $(0,0,1),(k,0,1),(0,l,1),(k,l,1)$. In other words it is a cone over the corresponding rectangle in the affine plane in height 1. When $k,l \geq 2$, the set of singularities is the union of $\{u_1=v_1=0,z=-1\}$ and $\{u_2=v_2=0,z=-1\}$; when $k=1$ and $l\geq 2$, the set of singularities is $\{u_2=v_2=0,z=-1\}$; when $k=l=1$, the point $u_1=v_1=u_2=v_2=1+z=0$ is the only singularity. A smoothing is given by changing the polynomials $(1+z)^k$ and $(1+z)^l$ such that they do not have multiple roots. A toric crepant resolution is given by subdividing the rectangle into standard lattice triangles. We remove the anti-canonical divisor $\{z=0\}$ and denote the resulting space by $O_{k,l}$. Its smoothing and resolution are denoted as $\tilde{O}_{k,l}$ and $\hat{O}_{k,l}$ respectively. In \cite{KL} we showed the following. \begin{theorem}[\cite{KL}] \label{localSYZ} The local generalized conifold $G_{k,l}$ is SYZ mirror to the orbifolded conifold $O_{k,l}$. Namely, the deformed generalized conifold $\tilde{G}_{k,l}$ is SYZ mirror to the resolved orbifolded conifold $\hat{O}_{k,l}$; the resolved generalized conifold $\hat{G}_{k,l}$ is SYZ mirror to the deformed orbifolded conifold $\tilde{O}_{k,l}$. It is summarized by the following diagram. $$ \xymatrix{ \tilde{G}_{k,l}\ar@{<->}[d]_{SYZ} &\ar@{~>}[l] G_{k,l} \ar@{<->}[d]^{SYZ} & \ar@{->}[l] \hat{G}_{k,l} \ar@{<->}[d]^{SYZ} \\ \hat{O}_{k,l} \ar@{->}[r] & O_{k,l}\ar@{~>}[r] & \tilde{O}_{k,l}. } $$ \end{theorem} The SYZ program realizes a mirror pair as dual Lagrangian torus fibrations. On $O_{k,l}$, we consider the Hamiltonian $T^2$-action given by $u_i \mapsto \lambda_i u_i$, $v_i \mapsto \lambda_i^{-1} v_i$ for $i=1,2$, leaving $z$ unchanged. 
We also have the corresponding action on its resolution and smoothing, and let us denote the moment map to $\mathbb{R}^2$ by $\nu_{\mathbb{T}^2}$ (which is simply given by $(|u_1|^2 - |v_1|^2,|u_2|^2 - |v_2|^2)$ on $O_{k,l}$). Then one can verify that \begin{equation} \label{eq:fib_O} (\log|z|,\nu_{\mathbb{T}^2}) \end{equation} gives a Lagrangian fibration on $O_{k,l}$, $\hat{O}_{k,l}$ and $\tilde{O}_{k,l}$. On $G_{k,l}$, we have the Hamiltonian $\mathbb{S}^1$-action given by $x \mapsto \lambda x$, $y \mapsto \lambda^{-1} y$, leaving $z,w$ unchanged. We also have the corresponding action on its resolution and smoothing, and let us denote the moment map to $\mathbb{R}$ by $\mu_{\mathbb{S}^1}$. ($\mu_{\mathbb{S}^1}$ is simply given by $|x|^2 - |y|^2$ on $G_{k,l}$.) Then we have the torus fibration $$ (\mu_{\mathbb{S}^1},\log|z|,\log|w|) $$ on $G_{k,l}$, $\hat{G}_{k,l}$ and $\tilde{G}_{k,l}$. The torus fibration is Lagrangian for $\hat{G}_{k,l}$, but not for $\tilde{G}_{k,l}$. By the result of \cite{AAK} using the Moser argument, the torus fibration can be modified by an isotopy to a Lagrangian fibration (whose fibration map is piecewise smooth near the discriminant locus) with the first coordinate $\mu_{\mathbb{S}^1}$ remaining unchanged. The issue is that the reduced symplectic space of $\tilde{G}_{k,l}$ by $\mathbb{S}^1$ is only isotopic, but not exactly equal, to $(\mathbb{C}^2,\omega_{\mathrm{std}})$, and so one needs the Moser argument to connect the two by symplectomorphisms. From the Lagrangian fibrations on $\hat{G}_{k,l}$ and $\tilde{G}_{k,l}$, we constructed $\tilde{O}_{k,l}$ and $\hat{O}_{k,l}$ as their SYZ mirrors respectively; the reverse direction is also true. The key ingredient in the construction is the holomorphic discs emanating from singular Lagrangian fibers (which are of Maslov index zero). It is summarized as follows.
\begin{lemma}[Discriminant locus] \label{lem:disc-locus} For the Lagrangian fibration on $G_{k,l}$ or the Lagrangian fibration on $O_{k,l}$, the discriminant locus is given by $\left(\{0\} \times \mathbb{R} \times \{0\}\right) \cup \left(\{0\}\times \{0\} \times \mathbb{R}\right)$. On $\hat{G}_{k,l}$ or $\tilde{O}_{k,l}$, it becomes $$\left(\bigcup_{i=1}^k \left(\{s_i\} \times \mathbb{R} \times \{0\}\right)\right) \cup \left(\bigcup_{i=1}^l \left(\{t_i\} \times \{0\} \times \mathbb{R} \right)\right)$$ where $s_i$ and $t_i$ are certain constants related to the symplectic sizes of spheres in the exceptional curve of $\hat{G}_{k,l}$. For $\tilde{G}_{k,l}$ or $\hat{O}_{k,l}$, the discriminant locus is contained in the plane $\{0\} \times \mathbb{R} \times \mathbb{R}$ and is homotopic to the dual graph of the triangulation of the rectangle with vertices $(0,0),(k,0),(0,l),(k,l)$ in the definition of the resolution $\hat{O}_{k,l}$. \end{lemma} Denote by $B_0$ the complement of the discriminant locus in the base $B$ of a Lagrangian fibration with a Lagrangian section. $B_0$ has an induced tropical affine structure (namely it has an atlas with transition maps being elements of $\mathrm{GL}(n,\mathbb{Z}) \ltimes \mathbb{R}^n$). Denote by $\Lambda \subset TB_0$ the corresponding local system, and $\Lambda^* \subset T^*B_0$ the dual. Then the inverse image of $B_0$ is symplectomorphic to $T^*B_0 / \Lambda^*$. The complex manifold $TB_0 / \Lambda$ is called the semi-flat mirror, which serves as a first-order approximation. Then the generating functions of holomorphic discs of Maslov index zero give corrections to the complex structure of the semi-flat mirror. \begin{prop}[Wall] Given a Lagrangian fibration, let $H \subset B$ (called the wall) be the locus of regular Lagrangian torus fibers which bound non-constant holomorphic discs of Maslov index zero.
For the Lagrangian fibration on $G_{k,l}$ or $\hat{G}_{k,l}$, $H$ is given by $\left(\mathbb{R} \times \mathbb{R} \times \{0\}\right) \cup \left(\mathbb{R} \times \{0\} \times \mathbb{R}\right)$. On $\tilde{G}_{k,l}$, $H$ is given by $\mathbb{R} \times \Delta$, where $\Delta$ denotes the discriminant locus contained in the horizontal plane. For the Lagrangian fibration on $O_{k,l}$ or $\hat{O}_{k,l}$, $H$ is given by $\{0\} \times \mathbb{R}^2$. On $\tilde{O}_{k,l}$, $H$ becomes $\{s_1,\ldots,s_k,t_1,\ldots,t_l\} \times \mathbb{R}^2$, where $s_i$ and $t_i$ are the constants appearing in Lemma \ref{lem:disc-locus}. \end{prop} The discriminant loci and walls are shown in Figure \ref{fig:local-walls-gencfd}. \begin{theorem}[Slab function] To each component of $H$ is attached a function, defined by the wall-crossing of the open Gromov-Witten potential. For $\hat{G}_{k,l}$, the function attached to $\mathbb{R} \times \mathbb{R} \times \{0\}$ is given by $(1+Z)(1+q_1Z)\ldots(1+q_1\ldots q_{k-1}Z)$, and that attached to $\mathbb{R} \times \{0\} \times \mathbb{R}$ is given by $(1+cZ)(1+q_1'cZ)\ldots(1+q_1'\ldots q_{l-1}'cZ)$, where $q_1,\ldots,q_{k-1},q_1',\ldots,q_{l-1}',c$ are given in the form $e^{- \int_C \omega}$ for certain rational curves $C$, and $Z$ is the semi-flat complex coordinate corresponding to the first coordinate $\mu_{\mathbb{S}^1}$ of the base of the Lagrangian fibration. For $\tilde{G}_{k,l}$ the function is $1+Z$. For $\tilde{O}_{k,l}$, the slab function attached to each $\{s_i\} \times \mathbb{R}^2$ is $(1+Z)$, and that attached to each $\{t_i\} \times \mathbb{R}^2$ is $(1+W)$, where $Z,W$ are the semi-flat complex coordinates corresponding to the base directions $\nu_{\mathbb{T}^2}$ of the Lagrangian fibration.
For $\hat{O}_{k,l}$, the slab function is of the form $$\sum_{i=0}^k \sum_{j=0}^l (1+\delta_{ij}(q)) q^{C_{ij}} Z^i W^j$$ where for each $i,j$, $\delta_{ij}(q)$ is a certain generating function of open Gromov-Witten invariants, and $C_{ij}$ is a certain rational curve in $\hat{O}_{k,l}$. \end{theorem} $\delta_{ij}(q)$ can be explicitly computed from the mirror map \cite{CCLT13}. The discriminant loci, walls and slab functions are shown in Figure \ref{fig:local-walls-gencfd}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{local-walls-gencfd.pdf} \caption{The discriminant loci, walls and slab functions for the local generalized and orbifolded conifold transitions.} \label{fig:local-walls-gencfd} \end{center} \end{figure} By gluing local pieces of semi-flat mirrors using the slab functions, we obtain the SYZ mirrors given in Theorem \ref{localSYZ}. We refer to \cite{CLL,KL} for details. In Section \ref{sec:loc-aff}, we shall follow \cite{GS07,CM1,CM2} and use tropical geometry to encode the data. The Gross-Siebert program has the important advantage that it can handle global geometries by combinatorics. The walls and generating functions of the local geometry given above serve as initial input data to the Gross-Siebert program. \begin{remark} In an ongoing work we shall construct the quiver mirror of $\tilde{O}_{k,l}$ by using the construction of \cite{CHL2} (see Figure \ref{fig:ncgencfd}), which is useful in studying stability conditions and flops along the lines of \cite{FHLY}. It is a noncommutative resolution of $G_{k,l}$ (which was extensively studied by \cite{Nagao,MN}, and can be derived from the construction of Bocklandt \cite{Bocklandt}).
\end{remark} \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{ncgencfd.pdf} \caption{A noncommutative resolution of the generalized conifold $G_{k,l}$, which also serves as a mirror of the orbifolded conifold.} \label{fig:ncgencfd} \end{center} \end{figure} In summary, we have SYZ mirror symmetry for the resolutions and smoothings of the local generalized and orbifolded conifold transitions near large volume limits. There is also a noncommutative mirror construction near the conifold limit. It is schematically depicted by Figure \ref{fig:gen-con-MS-mod}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{gen-con-MS-mod.pdf} \caption{The moduli spaces. Around a large complex structure limit we have the SYZ construction using Lagrangian torus fibrations. Around a conifold limit we have the noncommutative mirror construction in \cite{CHL2} using certain Lagrangian cycles.} \label{fig:gen-con-MS-mod} \end{center} \end{figure} \subsection{Monodromy computation for local generalized and orbifolded conifolds} We have reviewed Lagrangian fibrations and SYZ for the local generalized conifold $G_{k,l}$ and orbifolded conifold $O_{k,l}$. Now we compute the monodromies of the fibrations for later use. Recall that $B_0$ denotes the complement of the discriminant locus in the base $B$ of the Lagrangian fibration. For $G_{k,l}=\{xy=(1+z)^k(1+w)^l\}$, fix two contractible open sets $U_+$ and $U_-$ covering $B_0$, where $U_+ = B_0 - \mathbb{R}_{\leq 0} \times \Delta$ and $U_- = B_0 - \mathbb{R}_{\geq 0} \times \Delta$. The torus bundle over $B_0$ trivializes over $U_+$ and $U_-$. Then fix a basis of the fundamental group of each fiber at $(a,b,c) \in U_+$ as follows.
\begin{align*} \gamma_1(t) &\textrm{ defined by } z=e^b,w=e^c,x= r e^{2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+;\\ \gamma_2(t) &\textrm{ defined by } z=e^b e^{2\pi\mathbf{i}\, t},w=e^c,x \in \mathbb{R}_+;\\ \gamma_3(t) &\textrm{ defined by } z=e^b,w=e^c e^{2\pi\mathbf{i}\, t},x \in \mathbb{R}_+ \end{align*} under the constraints $|x|^2-|y|^2=a$ and $xy=(1+z)^k(1+w)^l$. (Note that $x\not=0$ over $U_+$.) $[\gamma_i]$ for $i=1,2,3$ defines a basis of the fundamental group. Similarly we fix a basis over $U_-$ (where $y\not=0$) by taking \begin{align*} \tilde{\gamma}_1(t) &\textrm{ defined by } z=e^b,w=e^c,y= r e^{-2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+;\\ \tilde{\gamma}_2(t) &\textrm{ defined by } z=e^b e^{2\pi\mathbf{i}\, t},w=e^c,y \in \mathbb{R}_+;\\ \tilde{\gamma}_3(t) &\textrm{ defined by } z=e^b,w=e^c e^{2\pi\mathbf{i}\, t},y \in \mathbb{R}_+. \end{align*} In the pre-image of $U_-$, $y \not=0$, and so the above is well-defined. \begin{prop}[Monodromy for $G_{k,l}$] \label{prop:mon-gencfd} For the Lagrangian fibration on $G_{k,l}$, the monodromy around $\{0\} \times \mathbb{R} \times \{0\}$ is $[\gamma_1] \mapsto [\gamma_1]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3] + l [\gamma_1]$; the monodromy around $\{0\} \times \{0\} \times \mathbb{R}$ is $[\gamma_1] \mapsto [\gamma_1]$, $[\gamma_2] \mapsto [\gamma_2]- k [\gamma_1]$ and $[\gamma_3] \mapsto [\gamma_3]$. \end{prop} \begin{proof} $U_+ \cap U_-$ consists of four connected components, which we call chambers. We have the following by considering the winding numbers of the variables $x,y,z,w$ with the constraint $xy=(1+z)^k(1+w)^l$.
For a base point in a chamber, \begin{align*} [\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2],[\gamma_3]=[\tilde{\gamma}_3] \textrm{ in } \mathbb{R} \times \mathbb{R}_- \times \mathbb{R}_-;\\ [\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2]-k[\tilde{\gamma}_1],[\gamma_3]=[\tilde{\gamma}_3] \textrm{ in } \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_-;\\ [\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2],[\gamma_3]=[\tilde{\gamma}_3]-l[\tilde{\gamma}_1] \textrm{ in } \mathbb{R} \times \mathbb{R}_- \times \mathbb{R}_+;\\ [\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2]-k[\tilde{\gamma}_1],[\gamma_3]=[\tilde{\gamma}_3]-l[\tilde{\gamma}_1] \textrm{ in } \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_+. \end{align*} Now take a loop around $\{0\} \times \mathbb{R} \times \{0\}$. It goes from the chamber $\mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_+$ to $\mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_-$ in $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, and then goes back to the original chamber in $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$. In $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$ we use the basis $[\gamma_i]$; in $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$ we use the basis $[\tilde{\gamma_i}]$ for $i=1,2,3$. Then the monodromy around the loop is $[\gamma_1] \mapsto [\gamma_1]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3] + l [\gamma_1]$. The computation for the monodromy around $\{0\} \times \{0\} \times \mathbb{R}$ is similar. 
\end{proof} For $O_{k,l}=\{u_1v_1=(1+z)^k,u_2v_2=(1+z)^l\}$, we take four contractible open subsets $U_{\pm\pm}$ covering $B_0$, where \begin{align*} U_{++} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{>0} \times \mathbb{R}_{>0});\\ U_{+-} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{>0} \times \mathbb{R}_{<0});\\ U_{-+} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{<0} \times \mathbb{R}_{>0});\\ U_{--} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{<0} \times \mathbb{R}_{<0}). \end{align*} We take a basis over $(a,b,c) \in U_{++}$ to be: \begin{align*} \gamma_1(t)=\gamma^{++}_1(t) &\textrm{ defined by } z=e^a e^{2\pi\mathbf{i}\, t},u_1, u_2 \in \mathbb{R}_+;\\ \gamma_2(t)=\gamma^{++}_2(t) &\textrm{ defined by } z=e^a,u_1=r e^{2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+,u_2 \in \mathbb{R}_+;\\ \gamma_3(t)=\gamma^{++}_3(t) &\textrm{ defined by } z=e^a,u_1\in \mathbb{R}_+,u_2=r e^{2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+ \end{align*} under the constraints $|u_1|^2-|v_1|^2=b$ and $|u_2|^2-|v_2|^2=c$. It is well-defined since $u_1,u_2 \not= 0$ over $U_{++}$. We replace $(u_1,u_2)$ by $(u_1,v_2^{-1})$ for $U_{+-}$, by $(v_1^{-1},u_2)$ for $U_{-+}$, and by $(v_1^{-1},v_2^{-1})$ for $U_{--}$. Then we have a basis $\{\gamma^{\pm\pm}_i:i=1,2,3\}$ over each open set. \begin{prop}[Monodromy for $O_{k,l}$] \label{prop:mon-orbcfd} For the Lagrangian fibration on $O_{k,l}$, the monodromy around $\{0\} \times \mathbb{R} \times \{0\}$ is $[\gamma_1] \mapsto [\gamma_1] - l [\gamma_3]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3]$; the monodromy around $\{0\} \times \{0\} \times \mathbb{R}$ is $[\gamma_1] \mapsto [\gamma_1] + k [\gamma_2]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3]$.
\end{prop} \begin{proof} The intersection of all the open sets is the union of the two disjoint open subsets $\mathbb{R}_\pm \times \mathbb{R} \times \mathbb{R}$. In the chamber $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$, the classes $[\gamma_i^{\pm\pm}]$ all coincide for each $i$. In the chamber $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, the classes $[\gamma_i^{\pm\pm}]$ all coincide for $i=2,3$; for $i=1$, $[\gamma_1^{++}]=[\gamma_1^{-+}]-k[\gamma_2^{-+}]=[\gamma_1^{+-}]-l[\gamma_3^{+-}]=[\gamma_1^{--}]-k[\gamma_2^{--}]-l[\gamma_3^{--}]$. Now consider the monodromy around $\{0\} \times \mathbb{R} \times \{0\}$. We start with the basis $[\gamma_i^{++}]$ in the chamber $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, change to the basis $[\gamma_i^{+-}]$, move to the chamber $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$ in $U_{+-}$, change back to the original basis, and move back to the original chamber in $U_{++}$. Clearly $[\gamma_2^{++}]$ and $[\gamma_3^{++}]$ remain the same. We have $[\gamma_1^{++}]\mapsto [\gamma_1^{++}] - l [\gamma_3^{++}]$. The monodromy computation around $\{0\} \times \{0\} \times \mathbb{R}$ is similar. \end{proof} \section{Local affine geometries} \label{sec:loc-aff} In this section, we formulate the local singularities, their resolutions and smoothings in the language of affine geometry. Such a formulation has the great advantage that it can be easily globalized, thanks to the groundbreaking works of Gross-Siebert \cite{GS1,GS2,GS07}. The notion of a tropical manifold is central to the Gross-Siebert program. We briefly recall it below. \begin{defn} A polarized tropical manifold is a triple $(B,\mathcal{P},\phi)$, where $B$ is an integral affine manifold with singularities, $\mathcal{P}$ is a toric polyhedral decomposition of $B$ (where the singularities occur in facets of polyhedra in $\mathcal{P}$), and $\phi$ is a strictly convex multivalued piecewise linear function on $B$.
\end{defn} In their reconstruction program they also assume that $(B,\mathcal{P},\phi)$ is positive and simple. In dimension three this means that $\Delta$ is a trivalent graph whose vertices are of either positive or negative type (with certain simple monodromies of the affine connection around each edge); this is the case $d=1$ in Section \ref{sec:orb_vert}. Gross-Siebert defined the discrete Legendre transform, which associates to a polarized tropical manifold $(B,\mathcal{P},\phi)$ another one $(\check{B},\check{\mathcal{P}},\check{\phi})$, with the property that the discrete Legendre transform of $(\check{B},\check{\mathcal{P}},\check{\phi})$ recovers $(B,\mathcal{P},\phi)$. Their groundbreaking work constructs a toric degeneration of a Calabi-Yau manifold $X$ from each compact positive and simple polarized tropical manifold. The Calabi-Yau manifolds associated to a polarized tropical manifold and its discrete Legendre transform form a mirror pair. By gluing local models of Lagrangian fibrations with prescribed discriminant loci, Castano-Bernard and Matessi \cite{CM1} constructed a symplectic manifold (equipped with a Lagrangian fibration and a Lagrangian section) which contains $T^* B_0 / \Lambda^*$ as an open subset. Here $B_0 = B - \Delta$ and $\Lambda^* \subset T^* B_0$ is a fiberwise lattice given by the affine structure. In \cite{CM2}, they performed conifold transitions for the symplectic manifolds constructed from polarized tropical manifolds. In this paper we follow the method of Castano-Bernard and Matessi \cite{CM1,CM2} to construct orbi-conifold transitions from an affine manifold with singularities. First we construct the affine structures for the local generalized conifolds, orbifolded conifolds, and their resolutions and smoothings. Then in Section \ref{sec:Schoen} we consider global affine structures with generalized or orbifolded conifold points.
The discriminant locus has four-valent vertices, and we also relax the simplicity condition (which allows orbifolded singularities). Throughout this section $\{e_i:i=1,\ldots,n\}$ denotes the standard basis of $\mathbb{R}^n$. \subsection{Local affine $A_{k-1}$ singularity in dimension two} Let's begin with the local $A_{k-1}$ singularity in dimension two, where $k>1$. \begin{defn}[Affine $A_{k-1}$ singularity] A singular point in an oriented affine surface is called an affine $A_{k-1}$ singularity (for $k>1$) if in a certain oriented basis, it has monodromy $\left(\begin{array}{cc} 1 & k \\ 0 & 1 \end{array}\right)$. A singular point with such monodromy (even when $k\leq 1$) is said to have multiplicity $k$. When $k=\pm 1$ it is called simple. \end{defn} We can cook up two tropical manifolds with an affine $A_{k-1}$ singularity. They are related to each other by the discrete Legendre transform. This reflects the fact that the $A_{k-1}$ singularity is self-mirror. Define the tropical manifold $(B,\mathcal{P},\phi)$ as follows. Take the lattice triangle $$\mathrm{Conv}\{(0,0),(0,k),(-1,k)\}$$ and the rectangle $$\mathrm{Conv}\{(0,0),(0,k),(1,0),(1,k)\}.$$ Glue them along the edge $\mathrm{Conv}\{(0,0),(0,k)\}$. Then we have a manifold $B$ with corners and a polyhedral decomposition $\mathcal{P}$. See the top middle of Figure \ref{fig:affA_n}. The fan structures at the vertices are given as follows. Denote the standard basis of $\mathbb{R}^2$ by $\{e_1,e_2\}$. At both vertices we use the fan generated by the primitive vectors $e_1,e_2,-e_1$. At the vertex $(0,0)$, the tangent vectors $(1,0),(0,1),(-1,k)$ of the polytopes are mapped to the primitive generators $e_1,e_2,-e_1$ respectively. At the vertex $(0,k)$, the tangent vectors $(-1,0),(0,-1),(1,0)$ are mapped to the primitive generators $e_1,e_2,-e_1$ respectively.
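These chart identifications can be sanity-checked numerically: each fan structure determines a linear transition map on each side of the singular edge, and composing the transitions along a loop around the singular point recovers the monodromy matrix $\left(\begin{array}{cc} 1 & 0 \\ k & 1 \end{array}\right)$ computed below. The following is a minimal Python sketch (not part of the construction; the helper names are ours, and the chart data is read off from the identifications above):

```python
# Numerical check of the affine A_{k-1} monodromy.  Each chart is the
# linear map sending two tangent vectors of a polytope to the
# corresponding fan generators.

def solve_chart(pairs):
    """Return the 2x2 matrix M with M*t = f for two (tangent, fan) pairs."""
    (t1, f1), (t2, f2) = pairs
    det = t1[0] * t2[1] - t1[1] * t2[0]
    # M = F * T^{-1}, written out by hand for 2x2 matrices.
    return [[(f1[0] * t2[1] - f2[0] * t1[1]) / det,
             (f2[0] * t1[0] - f1[0] * t2[0]) / det],
            [(f1[1] * t2[1] - f2[1] * t1[1]) / det,
             (f2[1] * t1[0] - f1[1] * t2[0]) / det]]

def mul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def monodromy(k):
    e1, e2, m1 = (1, 0), (0, 1), (-1, 0)
    # Charts at the vertex (0,0): rectangle side and triangle side.
    A_rect = solve_chart([((1, 0), e1), ((0, 1), e2)])
    A_tri = solve_chart([((0, 1), e2), ((-1, k), m1)])
    # Charts at the vertex (0,k): rectangle side and triangle side.
    B_rect = solve_chart([((1, 0), m1), ((0, -1), e2)])
    B_tri = solve_chart([((-1, 0), e1), ((0, -1), e2)])
    # Loop: (0,0) -> (0,k) through the rectangle, back through the triangle.
    return mul(A_tri, mul(inv(B_tri), mul(B_rect, inv(A_rect))))

for k in range(1, 6):
    assert [[round(x) for x in row] for row in monodromy(k)] == [[1, 0], [k, 1]]
```

The loop here passes from $(0,0)$ to $(0,k)$ through the rectangle and returns through the triangle, exactly as in the proof below.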
The multivalued piecewise linear function $\phi$ is defined by \begin{equation} \label{eq:phi1} \phi(x,y) = \left\{ \begin{array}{ll} x & x \geq 0 \\ 0 & x \leq 0 \end{array}\right. \end{equation} in the fan generated by $\{e_1,e_2,-e_1\}$ at each of the two vertices. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{affA_n.pdf} \caption{Affine structures of the $A_{k-1}$ singularity, its resolution and smoothing.} \label{fig:affA_n} \end{center} \end{figure} The discriminant locus is a point $\Delta=\{p\}$ in the edge $\mathrm{Conv}\{(0,0),(0,k)\}$. The complement $B-\Delta$ has an affine structure induced from the fans at the vertices. The monodromy around $p$ is given as follows. \begin{prop} For the tropical manifold $(B,\mathcal{P},\phi)$ given above, the monodromy matrix around $p$ in the standard basis $\{e_1,e_2\}$ at $(0,0)$ is $\left(\begin{array}{cc} 1 & 0 \\ k & 1 \end{array}\right).$ \end{prop} \begin{proof} Consider a loop starting from the vertex $(0,0)$, going to the vertex $(0,k)$ in the rectangle, and going back to the vertex $(0,0)$ in the triangle. It is easy to see that the vector $e_2$ of the fan at the vertex $(0,0)$ is monodromy invariant. Consider the vector $e_1$. Transporting it to the vertex $(0,k)$ in the rectangle, it is identified with $-e_1$ in the fan at the vertex $(0,k)$. Transporting it back to the vertex $(0,0)$ in the triangle, it is identified with $e_1+ke_2$ in the fan at the vertex $(0,0)$ (since $(1,0) = -(-1,k)+k(0,1)$). Thus $e_1$ is sent to $e_1+ke_2$ under the monodromy. \end{proof} When $k=1$ the singularity is removable, and the affine manifold is simple in the sense of Definition 1.60 of \cite{GS1}. However the monodromy is no longer simple for $k>1$. Next, we define $(\check{B},\check{\mathcal{P}})$ by gluing two squares $$\mathrm{Conv}\{(0,0),(1,0),(0,1),(1,1)\} \textrm{ and } \mathrm{Conv}\{(0,0),(1,0),(0,-1),(1,-1)\}$$ along the edge $\mathrm{Conv}\{(0,0),(1,0)\}$.
See the bottom middle of Figure \ref{fig:affA_n}. The fan at the vertex $(0,0)$ is given by mapping the tangent vectors $(0,-1),(1,0),(0,1)$ to $-ke_1-e_2, e_1, e_2$ respectively. The fan at the vertex $(1,0)$ is given by mapping the tangent vectors $(0,1),(-1,0),(0,-1)$ to $e_2,-e_1,-e_2$ respectively. The discriminant locus is a point $\Delta=\{p\}$ in the edge $\mathrm{Conv}\{(0,0),(1,0)\}$. One can similarly check that the monodromy is given as follows; the proof is omitted. \begin{prop} For $(\check{B},\check{\mathcal{P}})$, the monodromy matrix around $p$ in the standard basis $\{e_1,e_2\}$ at $(0,0)$ is $\left(\begin{array}{cc} 1 & -k \\ 0 & 1 \end{array}\right).$ \end{prop} \begin{remark} This is just equivalent to the affine structure of $(B,\mathcal{P})$ if we switch to the basis $(-e_2,e_1)$. \end{remark} Define the multivalued piecewise linear function $\check{\phi}$ by $$ \check{\phi}(x,y) = \left\{ \begin{array}{ll} 0 & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,-k e_1 - e_2\} \\ ky & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,e_2\} \\ \end{array}\right. $$ on the fan at the vertex $(0,0)$, and $$ \check{\phi}(x,y) = \left\{ \begin{array}{ll} 0 & \textrm{ in } \mathbb{R}_{\geq 0}\{-e_1,-e_2\} \\ ky & \textrm{ in } \mathbb{R}_{\geq 0}\{-e_1,e_2\} \\ \end{array}\right. $$ on the fan at the vertex $(1,0)$. (The differentials give the corners of the polytopes of $(B,\mathcal{P})$; this is the discrete Legendre transform.) It is easy to check the following. \begin{prop} $(B,\mathcal{P},\phi)$ and $(\check{B},\check{\mathcal{P}},\check{\phi})$ given above are discrete Legendre dual to each other. \end{prop} \begin{proof} One can directly check that the polytopes in $(\check{B},\check{\mathcal{P}})$ are Legendre dual polytopes of the piecewise linear function $\phi$ around vertices of $(B,\mathcal{P})$ (by taking the differential of $\phi$ restricted on each cone), and vice versa.
Moreover the fan structure at each vertex of $(\check{B},\check{\mathcal{P}})$ is given by the normal fan of the corresponding polytope in $(B,\mathcal{P})$, and vice versa. \end{proof} \begin{remark} \label{rem:edge} By taking a product of the affine $A_{k-1}$ singularity $(B,\mathcal{P})$ (or $(\check{B},\check{\mathcal{P}})$) with the affine line $\mathbb{R}$, one obtains a tropical threefold with discriminant locus being a line. The multiplicity of such a discriminant locus is defined to be $k$. (As in \cite{CM1,CM2}, one can perturb the discriminant locus to be a curve.) \end{remark} \subsection{Smoothing and resolution of a local affine $A_{k-1}$ singularity} The tropical manifolds corresponding to the resolution and smoothing of an $A_{k-1}$ singularity are given by the left and right sides of Figure \ref{fig:affA_n}. The singularity in the affine base (which has multiplicity $k$) separates into $k$ simple singularities. In a smoothing the simple singularities lie in the same monodromy-invariant affine hyperplane (which is formed by some edges of the polytopes in the decomposition). A resolution is Legendre dual to a smoothing. The reader can easily read off the polyhedral decompositions and fan structures from the figures, so we omit the detailed descriptions. The monodromy around each critical point is simple, namely it equals $\left(\begin{array}{cc} 1 & \pm 1 \\ 0 & 1 \end{array}\right)$ up to conjugation. For the fan generated by $\{e_1,e_2,-e_1\}$, the restriction of the multivalued piecewise linear function is given by Equation \eqref{eq:phi1}; for that generated by $\{e_1,e_2,-e_1,-e_2\}$ in the top left of Figure \ref{fig:affA_n}, it is given by $$ \phi(x,y) = \left\{ \begin{array}{ll} x+y & x, y \geq 0 \\ x & x \geq 0 \textrm{ and } y \leq 0 \\ y & y \geq 0 \textrm{ and } x \leq 0 \\ 0 & x, y \leq 0; \end{array}\right.
$$ for that generated by $\{e_1,e_2,-e_1,-j e_1 - e_2\}$ in the bottom right of Figure \ref{fig:affA_n}, it is given by $$ \phi(x,y) = \left\{ \begin{array}{ll} 0 & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,-j e_1 - e_2\} \\ \frac{(j+k)(j+k-1)y}{2} & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,e_2\} \\ -x+\frac{(j+k)(j+k-1)y}{2} & \textrm{ in } \mathbb{R}_{\geq 0}\{e_2,-e_1\} \\ -x+j y & \textrm{ in } \mathbb{R}_{\geq 0}\{-e_1,-j e_1 - e_2\} \\ \end{array}\right. $$ The restrictions of $\phi$ to the other strata are similar. \begin{remark} \label{rem:subdivide} At the vertex $(-1,k)$ of $(B,\mathcal{P})$ for the local $A_{k-1}$ singularity, we have an orbifolded fan structure (see the top middle of Figure \ref{fig:affA_n}). The same happens for its local $A_{k-1}$ resolution (top right of Figure \ref{fig:affA_n}). It can be easily resolved by a subdivision of the polyhedral decomposition. See Figure \ref{fig:affA_nres}. The fan structures at the additional vertices are taken to be trivial. \end{remark} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{affA_nres.pdf} \caption{A subdivision to resolve the orbifold fan. It shows the case for $k=2$, and it is similar for general $k$.} \label{fig:affA_nres} \end{center} \end{figure} \begin{remark} \label{rem:edge-sm} Again by taking a product with $\mathbb{R}$, one obtains a smoothing or a resolution of the singularity given in Remark \ref{rem:edge}. \end{remark} \subsection{Lagrangian fibration on local $A_{k-1}$ singularity} The $A_{k-1}$ surface singularity is toric. Its fan is the cone in $\mathbb{R}^2$ generated by $(0,1)$ and $(k,1)$. Let $X$ be the corresponding toric variety. Picking a toric K\"ahler form, one has the moment map to $\mathbb{R}^2$ whose image is a (non-compact) polyhedral set. Let $\mu_1$ be the horizontal component of the moment map.
Then \begin{equation} \label{eq:fib_An} (\mu_1,\log|\nu-1|) \end{equation} gives a Lagrangian fibration on $X - \{\nu=1\}$ where $\nu$ is the toric holomorphic function corresponding to the $(0,1)$ lattice point. The affine base of the fibration is isomorphic to the top middle of Figure \ref{fig:affA_n}. See \cite{LLW} for more detail. Alternatively we can take its mirror variety $\{(u,v,z)\in \mathbb{C}^2 \times \mathbb{C}^\times:uv=(1+z)^k\}$, which again has an $A_{k-1}$ singularity at $z=-1$, $u=v=0$. The map \begin{equation} \label{eq:fib_An'} (|u|^2-|v|^2,\log|z|) \end{equation} gives a Lagrangian fibration whose affine base is isomorphic to the bottom middle of Figure \ref{fig:affA_n}. The affine smoothings and resolutions in Figure \ref{fig:affA_n} are the bases of Lagrangian fibrations on the resolution and smoothing of the $A_{k-1}$ singularity. We have the toric resolution whose fan is generated by $(j,1)$ for $j=0,\ldots,k$. A smoothing is given by deforming the right hand side of the equation $uv=(1+z)^k$ such that all roots are simple. Their Lagrangian fibrations are given by the same equations as above. By taking a product with $\mathbb{C}^\times$, we obtain Lagrangian fibrations whose bases are isomorphic to the affine threefolds in Remarks \ref{rem:edge} and \ref{rem:edge-sm}. These Lagrangian fibrations serve as local models which can be glued to give more interesting geometries. \subsection{Affine models for local generalized and orbifolded conifolds} \label{sec:aff-loc} Now we construct tropical manifolds modeling the base of the Lagrangian fibrations in Section \ref{sec:SYZ}. They will have the same discriminant loci and monodromies as in the last subsection. The singularities of the local generalized and the orbifolded conifolds are said to be negative and positive respectively, according to the Euler characteristics of the corresponding singular Lagrangian fibers. For $k=l=1$, this is the local affine geometry studied by \cite{CM2}.
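As a quick numerical illustration of the fibration \eqref{eq:fib_An'}, one can check that a loop of the form $u=re^{2\pi\mathbf{i}\, t}$ with $z$ fixed stays inside a single fiber: both components of the fibration map are constant along it. A small Python sketch (the sample values of $k$, $z$, $r$ are arbitrary choices of ours):

```python
import cmath
import math

def fiber_value(u, z, k):
    """The fibration (|u|^2 - |v|^2, log|z|) on {u v = (1+z)^k}."""
    v = (1 + z) ** k / u          # solve for v on the hypersurface
    return (abs(u) ** 2 - abs(v) ** 2, math.log(abs(z)))

k, z, r = 3, 0.5 + 0.2j, 1.3
# Sample the loop u = r e^{2 pi i t}, t in [0, 1), with z fixed.
values = [fiber_value(r * cmath.exp(2j * math.pi * t / 20), z, k)
          for t in range(20)]
b0, c0 = values[0]
# All sampled points lie in the same fiber of the Lagrangian fibration.
assert all(abs(b - b0) < 1e-9 and abs(c - c0) < 1e-9 for b, c in values)
```

This is the same computation that makes the loops $\gamma_2,\gamma_3$ of the previous section well-defined fiberwise classes.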
First we consider the local generalized conifold. Take the triangular prisms $$\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k),(0,-1,k),(l,-1,k)\}$$ and $$\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k),(0,1,0),(0,1,k)\}$$ and glue them together along the rectangle $\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k)\}$. See the left of Figure \ref{fig:loc-orb-cfd}. Take the fan given by the product of $\mathbb{R}_{\geq 0}\{e_1\}$ with the fan generated by $\{e_2,e_3,-e_2\}$ in the plane. Then the fan at the vertex $(0,0,0)$ is given by mapping the tangent vectors $(1,0,0),(0,1,k),(0,0,1),(0,-1,0)$ to the generators $e_1,e_2,e_3,-e_2$ respectively. The fans at the other three vertices $(l,0,0),(0,0,k),(l,0,k)$ are similar, and the reader can read them off from the left of Figure \ref{fig:loc-orb-cfd}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{loc-orb-cfd.pdf} \caption{Affine structures of the local generalized conifold and orbifolded conifold.} \label{fig:loc-orb-cfd} \end{center} \end{figure} The discriminant locus $\Delta$ is a union of the lines $$\left([0,l] \times \{0\} \times \{k/2\}\right) \cup \left(\{l/2\} \times \{0\} \times [0,k] \right)$$ in the rectangle $\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k)\}$. This gives an affine manifold with singularities and polyhedral decomposition $(B,\mathcal{P})$. The monodromy is given as follows. \begin{prop} \label{prop:mono-match} For the tropical manifold $(B,\mathcal{P})$ given above, the monodromy matrix around the component $[0,l] \times \{0\} \times \{k/2\}$ of the discriminant locus in the standard basis $\{e_1,e_2,e_3\}$ at $(0,0,0)$ equals $$\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & k & 1\\ \end{array}\right)$$ and the monodromy matrix around the component $\{l/2\} \times \{0\} \times [0,k]$ equals $$\left(\begin{array}{ccc} 1 & -l & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{array}\right).$$ This matches the monodromy in Proposition \ref{prop:mon-orbcfd}.
\end{prop} \begin{proof} $e_1$ and $e_3$ are obviously monodromy invariant. Let's consider the monodromy of $e_2$ around $\{l/2\} \times \{0\} \times [0,k]$; the other case is similar. First we transport $e_2$ from the vertex $(0,0,0)$ to $(l,0,0)$ in the prism on the right, where it is identified with $l e_1-e_2$ in the fan at $(l,0,0)$ (since $(0,1,0)=l(-1,0,0)-(-l,-1,0)$). It is identified with the vector $(-l,0,0)+(0,1,k)$ in the prism on the left. Transporting it back to $(0,0,0)$, it is identified with $-l e_1 + e_2$ in the fan at $(0,0,0)$. Thus the monodromy maps $e_2$ to $-l e_1 + e_2$. \end{proof} The multivalued piecewise linear function $\phi$ is defined by \begin{equation} \label{eq:phi-gc} \phi(x,y,z) = \left\{ \begin{array}{ll} y & y \geq 0 \\ 0 & y \leq 0 \\ \end{array}\right. \end{equation} on the fan generated by $e_1,e_2,e_3,-e_2$. $(B,\mathcal{P},\phi)$ defines a tropical manifold. The discrete Legendre transform of $(B,\mathcal{P},\phi)$ is a union of four rectangles as shown in the right of Figure \ref{fig:loc-orb-cfd}. The reader can work out the multivalued piecewise linear function $\check{\phi}$ from the figure, and check that the monodromy matches with that in Proposition \ref{prop:mon-gencfd}. \begin{remark} Proposition 6.4 of \cite{CM2} extends to this case by the same proof using action coordinates. Namely, the affine structure induced on the base of the Lagrangian fibration \eqref{eq:fib_O} on $O_{k,l}$ is isomorphic to the affine manifold $(\check{B},\check{\mathcal{P}})$. The monodromy of $(B,\mathcal{P})$ matches with that of the Lagrangian fibration on the orbifolded conifold, while the monodromy of $(\check{B},\check{\mathcal{P}})$ matches with that of the Lagrangian fibration on the generalized conifold. It may look confusing that the roles of $(B,\mathcal{P})$ and $(\check{B},\check{\mathcal{P}})$ get switched. This is due to the fact that $T^*B_0 \cong T\check{B}_0$ and $T^*\check{B}_0 \cong TB_0$.
(Away from singular fibers a Lagrangian fibration is modeled by $T^*B_0 / \Lambda^*$.) $TB_0/\Lambda$ and $T^*B_0 / \Lambda^*$ are related by discrete Legendre transform. \end{remark} \subsection{Smoothings and resolutions of generalized and orbifolded conifolds} The affine generalized conifolds are not simple in the sense of \cite[Definition 1.60]{GS1}. They can be smoothed or resolved as shown by the left and right sides of Figure \ref{fig:loc-orb-con-res} (for the case $k=l=2$). Smoothing and resolution of the orbifolded conifold are given by the Legendre transform. There are different choices of smoothing corresponding to different ways of refining the rectangle $[0,k]\times [0,l]$ into standard triangles; the discriminant locus is given by taking the dual graph of the triangulation. (There are further choices of extending the triangulation to a refinement of the polyhedral decomposition, but it does not affect the affine structure.) Similarly there are different choices of resolution corresponding to the different orders of horizontal and vertical line components of the discriminant locus. In the description below we have fixed a particular choice. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{loc-orb-con-res2.pdf} \caption{Affine base of the smoothing and resolution of the generalized conifold. In the figure $k=l=2$.} \label{fig:loc-orb-con-res} \end{center} \end{figure} For the smoothed generalized conifold, the polyhedral decomposition is shown in the left figure. The non-trivial affine structure comes from the assignment of a fan structure at each lattice point in the rectangle spanned by the directions $(1,0,0),(0,0,1)$. Let's take the coordinate system such that these lattice points are given by $(j,0,i)$ for $i=0,\ldots,k$ and $j=0,\ldots,l$.
The key point is that, at the lattice point $(j,0,i)$, the vector $e_2$ in the local chart induced by the fan is identified with $(0,-1,k-i)$ in the polyhedral decomposition, and the vector $-e_2$ is identified with $(l-j,1,0)$. Other directions are trivially identified. The triangulation of the rectangle $[0,k]\times [0,l]$ is associated with a dual graph. To fix the positions of the vertices in the dual graph, we fix an integral piecewise linear function supported on the triangulation. Then we define $\phi$ to be the sum of this function and that given by Equation \eqref{eq:phi-gc}. This completes the definition of the tropical manifold $(B,\mathcal{P},\phi)$ corresponding to a deformed generalized conifold. Its Legendre dual corresponds to a resolved orbifolded conifold. The resolved affine generalized conifold is shown in the right figure. The discriminant locus consists of vertical lines contained in the planes $y=1,\ldots,l$ and horizontal lines contained in the planes $y=0,\ldots,-k+1$. Let's fix the coordinates such that the polyhedral decomposition contains the vertices $((j-1)j/2,j,0)$, $((j-1)j/2,j,k(k+1)/2)$ for $j=1,\ldots,l$, and $(0,-i+1,(i-1)i/2)$ or $(l(l+1)/2,-i+1,(i-1)i/2)$ for $i=1,\ldots,k$. At the vertex $((j-1)j/2,j,0)$ or $((j-1)j/2,j,k(k+1)/2)$, $e_2$ is identified with $(j,1,0)$ and $-e_2$ is identified with $(-j+1,-1,0)$. At the vertex $(0,-i+1,(i-1)i/2)$ or $(l(l+1)/2,-i+1,(i-1)i/2)$, $e_2$ is identified with $(0,1,-i+1)$ and $-e_2$ is identified with $(0,-1,i)$. The multivalued piecewise linear function $\phi$ is defined by requiring that in the fan of each vertex, $\phi(e_2)=1$ and $\phi(-e_2)=\phi(\pm e_3)=\phi(\pm e_1)=0$. This defines the tropical manifold $(B,\mathcal{P},\phi)$ corresponding to a resolved generalized conifold, whose Legendre dual corresponds to a deformed orbifolded conifold. One can directly verify the following. \begin{prop} The tropical manifolds given above are positive and simple in the sense of \cite{GS1}.
\end{prop} \subsection{Orbifolded trivalent vertex} \label{sec:orb_vert} In this subsection we introduce an orbifolded version of the trivalent vertex in the Gross-Siebert program. This will be used in Definition \ref{def:orbi-cfd}. See \cite[Example 2.2 and 2.3]{CM2} for the usual trivalent vertex. Take a lattice triangle $T \subset \mathbb{R}^2$ with vertices $v_0, v_1, v_2 \in \mathbb{Z}^2$ (labeled clockwise, and the indices are taken in $\mathbb{Z}/3\mathbb{Z}$). We may simply take $v_0=0$. It gives orbifolded positive and negative vertices as follows. The negative vertex consists of the prism $T \times [0,1] \subset \mathbb{R}^3$ and the simplex $$\mathrm{Conv}\{(v_0,0),(v_1,0),(v_2,0),(v_0,-1)\} \subset \mathbb{R}^3.$$ They are glued along the face $T \times \{0\}$ as shown in Figure \ref{fig:orb_vert}. The fan structure at $(v_0,0)$ is generated by $(u_1,0),(-u_3,0),e_3,-e_3$ (which can be orbifolded) where $u_i$ are the primitive vectors in the directions of $v_i-v_{i-1}$, and they are mapped to the tangent vectors of the polytopes at $(v_0,0)$ trivially. The fan structure at $(v_1,0)$ is generated by $(u_2,0),(-u_1,0),e_3,-e_3$ where $(u_2,0),(-u_1,0)$ are mapped to the tangent vectors of the polytopes trivially, while $e_3$ is mapped to $(0,0,1)$ and $-e_3$ is mapped to $(v_0-v_1,-1)$. Similarly the fan structure at $(v_2,0)$ is generated by $(u_3,0),(-u_2,0),e_3,-e_3$ where $(u_3,0),(-u_2,0)$ are mapped to the tangent vectors of the polytopes trivially, $e_3$ is mapped to $(0,0,1)$ and $-e_3$ is mapped to $(v_0-v_2,-1)$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{orb_vert.pdf} \caption{The negative (the left) and positive (the right) orbifolded trivalent vertices.} \label{fig:orb_vert} \end{center} \end{figure} The discriminant locus is a Y-shape which is the dual graph of $T$ embedded in $T\times\{0\}$. It is easy to verify the following. \begin{prop} $\mathbb{R}^2\times\{0\}$ is monodromy invariant.
The monodromy around the leg dual to the edge $\{v_{i-1},v_i\}$ of $T$ sends $e_3$ to $e_3 + (v_i-v_{i-1})$. ($v_3:=v_0$.) In particular the multiplicity of the leg is equal to the affine length of $v_i-v_{i-1}$. \end{prop} The piecewise linear function $\phi$ is given by Equation \eqref{eq:phi-gc}. This finishes the definition of an orbifolded negative vertex as a tropical manifold. The orbifolded positive vertex associated to $T$ is the following. Let $$R = \left( \begin{array}{ll} 0 & -1\\ 1 & 0 \end{array} \right).$$ $-R\cdot (v_i-v_{i-1})$ are outward normal vectors of the triangle $T$. The polyhedral decomposition consists of the three polyhedral sets $\left(\mathbb{R}_{\geq 0} \cdot \{-R\cdot (v_i-v_{i-1}),-R\cdot (v_{i+1}-v_i)\}\right) \times [0,1]$ for $i=1,2,3$, and they are glued as shown on the right of Figure \ref{fig:orb_vert}. The fan at $(0,1)$ is generated by the vectors $-R \cdot u_1,-R \cdot u_2, -R \cdot u_3, -e_3$. The fan at $(0,0)$ is generated by the directions $-R \cdot u_1,-R \cdot u_3, -R \cdot (v_2-v_1) - d e_3, e_3$ for $d = \det (v_1, v_2)$ (two times the area of $T$), where $-R \cdot u_1,-R \cdot u_3, e_3$ are mapped trivially to the tangent vectors of the polytopes and $-R \cdot (v_2-v_1) - d e_3$ is mapped to $-R \cdot (v_2-v_1)$. The discriminant locus is the same Y-shape, which is the union of the intersections of the facets of the polytopes with $\mathbb{R}^2 \times \{0\}$. The multivalued piecewise linear function restricts to the fan at each of the two vertices as $v_0,v_1,v_2$ (regarded as linear functions on the dual vector space) on the three maximal cones of the fan respectively. This gives an orbifolded positive vertex as a tropical manifold. \begin{prop} Take the leg in the direction $-R \cdot u_i$. The plane $\mathbb{R} \cdot \{(-R \cdot (v_i-v_{i-1}),0), (0,1)\}$ is monodromy invariant. Let $w \in \mathbb{Z}^2\times\{0\}$ be such that $\det (u_i,w) =1$.
Then the monodromy sends $-R \cdot w$ to $-R \cdot w - h e_3$ where $h$ is the affine length of $v_i-v_{i-1}$. In particular the multiplicity is equal to the affine length of $v_i-v_{i-1}$. The orbifolded positive and negative vertices defined above form a Legendre dual pair. \end{prop} \begin{proof} It is obvious that the plane $\mathbb{R} \cdot \{(-R \cdot (v_i-v_{i-1}),0), (0,1)\}$ is monodromy invariant, and the monodromy sends $(-R \cdot (v_{i+1}-v_i),0)$ to $(-R \cdot (v_{i+1}-v_i),-d)$. Write $v_{i+1}-v_i = -a u_i + b w$ for some integers $a,b$. Also $v_i-v_{i-1}= h u_i$. We have $d = \det (v_{i+1}-v_i,-(v_i-v_{i-1})) = bh$. Then the monodromy sends $-R \cdot w = -R\cdot (v_{i+1}-v_i)/b - a R \cdot u_i / b$ to $-R\cdot (v_{i+1}-v_i)/b - d e_3/b - a R \cdot u_i / b = -R \cdot w - he_3$. The remaining statements are easy to check. \end{proof} The orbifolded positive vertex corresponds to the toric CY orbifold $\mathbb{C}^3/G$ for a finite group $G$, whose fan is the cone over $T \times \{1\} \subset \mathbb{R}^3$. Its Lagrangian fibration is again given by Equation \eqref{eq:fib_An}, where in this case $\mu_1$ consists of the first two components of the moment map (with respect to a fixed toric K\"ahler form). The orbifolded negative vertex corresponds to the mirror variety $\{(u,v,z_1,z_2) \in \mathbb{C}^2 \times (\mathbb{C}^\times)^2: uv=\sum_{i=1}^3 z_1^{a^{(i)}_1}z_2^{a^{(i)}_2}\}$ where $v_i=(a^{(i)}_1,a^{(i)}_2)$ are the vertices of the triangle $T$. One can cook up a piecewise smooth Lagrangian fibration using the construction of \cite{AAK} and a Moser argument. \subsection{Local Gorenstein singularity} \label{sec:Gor} A natural further generalization is the tropical manifold corresponding to a toric Gorenstein singularity and its mirror. Orbifolded trivalent vertices and conifolds are included as special cases. SYZ mirror symmetry for geometric transitions associated to toric Gorenstein singularities was studied in \cite{L13}.
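The identity $d=bh$ used in the proof above can be checked on a concrete example. The following Python sketch (the sample triangle $T$ and the choices of $u_i$, $w$, $a$, $b$ are ours) verifies it, together with the fact that the third coordinate of $-R\cdot w$ drops by exactly $h$:

```python
# Sanity check of d = b*h on the sample triangle T with vertices
# v0 = (0,0), v1 = (3,0), v2 = (0,2) (our own choice of example).

def det2(p, q):
    return p[0] * q[1] - p[1] * q[0]

def rot(p):
    """The 90-degree rotation R: R(x, y) = (-y, x)."""
    return (-p[1], p[0])

v1, v2 = (3, 0), (0, 2)
u, h = (1, 0), 3                # leg dual to {v0, v1}; affine length h = 3
w = (0, 1)                      # any w with det(u, w) = 1
assert det2(u, w) == 1
d = det2(v1, v2)                # twice the area of T
a, b = 3, 2                     # v2 - v1 = -a*u + b*w
e = (v2[0] - v1[0], v2[1] - v1[1])
assert e == (-a * u[0] + b * w[0], -a * u[1] + b * w[1])
assert d == b * h               # the identity d = bh from the proof
# The monodromy sends (-R(v2-v1), 0) to (-R(v2-v1), -d); dividing by b and
# subtracting (a/b) R(u) recovers -R(w), whose third coordinate drops by
# d/b = h, matching the statement of the proposition.
lhs = (-rot(w)[0], -rot(w)[1])
rhs = ((-rot(e)[0] - a * rot(u)[0]) / b, (-rot(e)[1] - a * rot(u)[1]) / b)
assert all(abs(lhs[i] - rhs[i]) < 1e-9 for i in range(2))
assert d // b == h
```

Here $d=6$ is twice the area of $T$, and the leg dual to $\{v_0,v_1\}$ has multiplicity $h=3$.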
The construction is very similar to the last subsection, and so we will not go into detail. The lattice triangle $T$ in the last subsection is replaced by a lattice polygon $P \subset \mathbb{R}^2$ with vertices $v_0,\ldots,v_{m-1}$. Then the discriminant locus consists of legs in directions $-R\cdot (v_i-v_{i-1})$ emanating from a single vertex. The integer $d$ is defined to be two times the area of $P$ here. We obtain the Gorenstein positive and negative vertices. See the example in Figure \ref{fig:Gor-sing}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{Gor-sing.pdf} \caption{An example of affine structure for toric Gorenstein singularity. The right hand side (positive vertex) is the toric Gorenstein singularity and the left hand side is its mirror.} \label{fig:Gor-sing} \end{center} \end{figure} An interesting class of geometries is the hyperconifold studied by physicists \cite{Davies1,Davies2}. It gives an instance in which a conifold transition (at several nodes simultaneously) is mirror not to a reverse conifold transition but to a hyperconifold transition. In this case $P$ is taken to be a parallelogram spanned by two vectors $v,w$. It includes orbifolded conifolds as special cases. \begin{remark} A Minkowski decomposition of the polytope $P$ gives a smoothing of the corresponding toric Gorenstein singularity \cite{altmann} (or a resolution of its mirror). However such a decomposition may not exist in general. (In contrast a triangulation of $P$ into standard triangles, which gives a resolution of the toric Gorenstein singularity, always exists.) \end{remark} \section{Rational elliptic surfaces and the `12' Property} \label{sec:surf} In this section we study tropical surfaces corresponding to rational elliptic surfaces with $A_n$ singularities. A large part of this section is well-known to experts. It is a reformulation of the works of \cite{Symington,LS} using the terminologies of \cite{GS07,CM1}.
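Before turning to the precise statements, the identity of Proposition \ref{prop:12} can be verified numerically on small examples. A short Python sketch computing $2\,\mathrm{Area}(P)+\sum_i \det(u_{i-1},u_i)$ for three standard complete fans (those of $\mathbb{P}^2$, $\mathbb{P}^1\times\mathbb{P}^1$, and a one-point blow-up of $\mathbb{P}^2$; the function names are ours):

```python
from math import gcd

def det2(p, q):
    return p[0] * q[1] - p[1] * q[0]

def primitive(p):
    g = gcd(abs(p[0]), abs(p[1]))
    return (p[0] // g, p[1] // g)

def twelve(vs):
    """2*Area(P) + sum of det(u_{i-1}, u_i) for primitive vectors vs,
    labeled counterclockwise."""
    m = len(vs)
    # 2*Area(P) as the sum of determinants over the simplices {0, v_i, v_{i+1}}.
    area2 = sum(det2(vs[i], vs[(i + 1) % m]) for i in range(m))
    # u_i is the primitive vector in the direction of v_{i+1} - v_i.
    us = [primitive((vs[(i + 1) % m][0] - vs[i][0],
                     vs[(i + 1) % m][1] - vs[i][1])) for i in range(m)]
    return area2 + sum(det2(us[i - 1], us[i]) for i in range(m))

assert twelve([(1, 0), (0, 1), (-1, -1)]) == 12              # P^2
assert twelve([(1, 0), (0, 1), (-1, 0), (0, -1)]) == 12      # P^1 x P^1
assert twelve([(1, 0), (1, 1), (0, 1), (-1, -1)]) == 12      # Bl_1 P^2
```

The three fans correspond, via the proof below, to the toric surfaces whose invariant $3m+\sum_i D_i^2$ equals $12$.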
First we introduce an easy generalization of the well-known `12' Property to non-convex polygons. It is a special case of the `legal loops' in \cite{PR}. Then we construct the corresponding tropical surfaces and obtain mirror pairs of symplectic rational elliptic surfaces with singularities from dual reflexive polygons. We also generalize the construction to legal loops and relate it to elliptic surfaces. Using the classical result of Matsumoto \cite{Matsumoto}, this gives a topological proof of the generalized `12' property. The significant work of Gross-Hacking-Keel \cite{GHK} developed mirror symmetry for log Calabi-Yau surfaces whose anti-canonical divisor is a nodal cycle of holomorphic spheres. For rational elliptic surfaces the corresponding anti-canonical divisor is a smooth elliptic curve. \subsection{The `12' Property} The following is a special case of the `12' Property for legal loops in \cite[Section 9.1]{PR} when the winding number is $1$ and there is no clockwise move in the legal loop. (See Theorem \ref{thm:12-legal}.) \begin{prop} \label{prop:12} Let $v_1, \ldots, v_m \in \mathbb{Z}^2-\{0\}$ be distinct primitive vectors, labeled counterclockwise, such that $\mathbb{R}_{\geq 0}\cdot\{v_1,\ldots,v_m\}=\mathbb{R}^2$, and the simplex $\mathrm{Conv}\{0,v_i, v_{i+1}\}$ does not contain any interior lattice point for every $i \in \mathbb{Z}/m\mathbb{Z}$. Let $P$ be the union of all these simplices. Then $$ 2 \, \mathrm{Area}(P) + \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det (u_{i-1},u_{i}) = 12 $$ where $u_i$ is the primitive vector in the direction of $v_{i+1}-v_i$. \end{prop} \begin{proof} First we consider the simplest case where $v_1=(1,0)$, $v_2=(0,1)$ and $v_3=(-1,-1)$. It is easy to check the equality directly. To get a better geometric understanding, we consider the complete fan generated by these vectors, which gives the toric manifold $\mathbb{P}^2$.
Then $2 \, \mathrm{Area}(P) = \sum_{i=1}^m\det(v_i,v_{i+1})=m=3$, the number of irreducible toric divisors. Moreover $\det (u_{i-1},u_{i}) = c_1 \cdot D_i$ where $c_1 = \sum_{i=1}^m D_i$ and $D_i$ denotes the irreducible toric divisor corresponding to $v_i$. We have $c_1 \cdot D_i = 2+D_i\cdot D_i$ where $2$ comes from the Euler characteristic of each irreducible toric divisor (which is topologically a sphere). Hence the LHS of the above equals $$ m + \sum_{i=1}^m (2+D_i^2) = 3m + \sum_{i=1}^m D_i^2 $$ which is equal to $12$ for $\mathbb{P}^2$. In general, by further subdividing $P$, we can always assume that $\det (v_i, v_{i+1}) = 1$. Then the LHS always equals $3m + \sum_{i=1}^m D_i^2$ of the corresponding toric manifold. We want to prove that it is always $12$. Since every toric manifold can be obtained from $\mathbb{P}^2$ by a sequence of blow-ups and blow-downs at toric fixed points, it suffices to prove that $3m + \sum_{i=1}^m D_i^2$ is invariant under blow-up. Blowing up at a toric point increases $m$ by $1$, adds a new exceptional curve of self-intersection $-1$, and decreases the self-intersection numbers of the two adjacent divisors by $1$. Thus the total change is $3-1-2=0$, and $3m + \sum_{i=1}^m D_i^2$ is always $12$. \end{proof} \begin{defn} Assume the notation of Proposition \ref{prop:12}. The integer $\det (u_{i-1},u_{i})$ is called the order of the vertex $v_i$. \end{defn} Suppose $P$ in Proposition \ref{prop:12} is a convex polygon. Regarding $P$ as the moment polygon of a toric orbifold, $\det (u_{i-1},u_{i})$ is the order of the isotropy group of the orbifold point corresponding to $v_i$. \subsection{Affine rational elliptic surfaces with singularities} \begin{prop}[Affine rational elliptic surfaces] \label{prop:not-conv} Define $P$ (which is not necessarily convex) as in Proposition \ref{prop:12}. There exist two tropical surfaces with singularities $(\mathcal{A},\mathcal{P})$ and $(\mathcal{A}',\mathcal{P}')$ associated to $P$ with the following properties.
\begin{enumerate} \item Both $\mathcal{A}$ and $\mathcal{A}'$ are discs whose boundaries are affine circles, with the affine length being the number of lattice points contained in the boundary of $P$. (An affine circle is a topological circle in $\mathcal{A}$ whose intersection with $\mathcal{A}-\Delta$ is an affine submanifold.) \item The interior of $(\mathcal{A},\mathcal{P})$ contains an affine circle $C$ formed by some edges in $\mathcal{P}$. The interior of $(\mathcal{A}',\mathcal{P}')$ contains the polygon $P$ whose edges are affine line segments which form a part of $\mathcal{P}'$. \item The affine circle $C \subset \mathcal{A}$ contains singular points (called `outer') which are in one-to-one correspondence with the edges of $P$. The boundary of the polygon $P \subset \mathcal{A}'$ contains singular points (called `inner') which are in one-to-one correspondence with the edges of $P$. The multiplicity of such a singular point is equal to the affine length of the corresponding edge of $P$. \item The open disc bounded by $C \subset \mathcal{A}$ contains singular points (called `inner') which are in one-to-one correspondence with the corners of $P$. The complement of the polygon $P \subset \mathcal{A}'$ contains singular points (called `outer') which are in one-to-one correspondence with the corners of $P$. The multiplicity of such a singular point equals the determinant of the two primitive tangent vectors of $P$ at the corresponding corners (which is negative when the corresponding corner is non-convex). \item The total number of singular points (counted with multiplicities) in $\mathcal{A}$ or $\mathcal{A}'$ equals $12$. \end{enumerate} \end{prop} \begin{proof} $(\mathcal{A},\mathcal{P})$ is defined as follows. Take the triangles with corners $0,v_i,v_{i+1}$, and the parallelograms with corners $0,v_i,v_{i-1}-v_i,v_{i-1}$. The triangles are glued to give the polygon $P$. For the parallelograms the sides $\{0,v_i\}$ are glued to $\{v_{i}-v_{i+1},v_{i}\}$. 
The sides $\{v_i,v_{i-1}\}$ of the triangles are glued to the sides $\{0,v_{i-1}-v_i\}$ of the parallelograms. See the left of Figure \ref{fig:ell-non-Fano-eg} for an example. The fan structure at the vertex $0$ of a triangle is trivial. At the vertex $v_i$ of a triangle, the fan is generated by $e_1,-e_1,e_2,-e_2$ and they are mapped to the primitive vector in the direction $v_{i-1}-v_i$, the primitive vector in the direction $v_{i+1}-v_i$, $v_i$ and $-v_i$ respectively. At the vertex $v_i$ of a parallelogram, the fan is generated by $e_1,-e_1,-e_2$ and they are mapped to the primitive vector in the direction $v_{i-1}-v_i$, the primitive vector in the direction $v_{i+1}-v_i$ and $-v_i$ respectively. This gives $(\mathcal{A},\mathcal{P})$. Each edge of a triangle contains a singular point. The singular points in the edges $\{0,v_i\}$ are called inner and those in $\{v_i,v_{i+1}\}$ are called outer. It is a direct computation that the multiplicities are as stated in (3) and (4). The sides $\{v_i,v_{i-1}\}$ of the parallelograms form the boundary of $\mathcal{A}$ which is an affine circle. The affine length is the sum of affine lengths of $v_i-v_{i-1}$, which is equal to the number of lattice points in the boundary of $P$. This gives (1). For (2), $C$ is given by the union of the sides $\{v_i,v_{i+1}\}$ of the triangles. (5) follows from Proposition \ref{prop:12}. $(\mathcal{A}',\mathcal{P}')$ is defined as follows. Take the polygon $P$ with corners $v_i$, and the parallelograms with corners $0,v_i,v_{i-1}-v_i,v_{i-1}$. For the parallelograms the sides $\{0,v_i\}$ are glued to $\{v_{i}-v_{i+1},v_{i}\}$. The sides $\{v_i,v_{i-1}\}$ of $P$ are glued to the sides $\{0,v_{i-1}-v_i\}$ of the parallelograms. See the right of Figure \ref{fig:ell-non-Fano-eg} for an example. The fan at the vertex $v_i$ of $P$ is generated by the three vectors: $v_i$, the primitive vector along $v_{i+1}-v_i$ and that along $v_{i-1}-v_i$. 
The fan at the vertex $v_i$ of a parallelogram is generated by $e_1,-e_1,-e_2$, and they are mapped to the primitive vector in the direction $v_{i-1}-v_i$, the primitive vector in the direction $v_{i+1}-v_i$ and $-v_i$ respectively. Each edge of $P$ contains a singular point which is called inner. Each edge $\{0,v_i\}$ of a parallelogram contains a singular point which is called outer. Similarly one can verify the properties (1)-(5) for $(\mathcal{A}',\mathcal{P}')$. \end{proof} $(\mathcal{A},\mathcal{P})$ and $(\mathcal{A}',\mathcal{P}')$ are affine manifolds with toric polyhedral decompositions. We call them affine rational elliptic surfaces. They violate the positivity condition in Definition 1.54 of \cite{GS1} if $P$ is not convex. Indeed $\mathcal{A}$ and $\mathcal{A}'$ are related by inversion; the inner singular points of $\mathcal{A}$ correspond to the outer singular points of $\mathcal{A}'$, and vice versa. Figure \ref{fig:ell-non-Fano-eg} shows two examples for Proposition \ref{prop:not-conv} where $P$ is not convex. One can glue local Lagrangian fibrations with monodromy $\left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right)$ and $\left(\begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array}\right)$ to obtain a Lagrangian fibration on a symplectic $4$-manifold. Note that negative multiplicities can only occur for a Lagrangian fibration but not for a holomorphic fibration. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{ell-non-Fano-eg.pdf} \caption{} \label{fig:ell-non-Fano-eg1} \end{subfigure} \hspace{10pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{ell-non-Fano-eg2.pdf} \caption{} \label{fig:ell-non-Fano-eg2} \end{subfigure} \caption{Two examples of non-convex polygons. (A) is constructed from the fan polytope of the Hirzebruch surface $\mathbb{F}_3$.
The `12' property still holds.} \label{fig:ell-non-Fano-eg} \end{figure} Note that the dual of a non-convex polygon in Proposition \ref{prop:12} is not a polygon. It is a legal loop, which is discussed in Section \ref{sec:legal}. For this reason let us return to the important case where $P$ is convex. In this case $P$ is a reflexive polygon, i.e.\ it has exactly one interior lattice point. Its dual is again a reflexive polygon $\check{P}$. Without loss of generality we assume that $v_i$ for $i=1,\ldots,m$ are the vertices of $P$. Figure \ref{fig:ell-tor} lists all the affine surfaces constructed in this way. (For simplicity we only list $\mathcal{A}$ but not $\mathcal{A}'$.) Each polygon in $(\mathcal{A}'_{\check{P}},\mathcal{P}'_{\check{P}})$ gives a piecewise linear function (unique up to addition of a linear function) supported on the dual fan at the corresponding vertex in $(\mathcal{A},\mathcal{P})$, and vice versa. This gives a multivalued piecewise linear function $\phi$ on $(\mathcal{A},\mathcal{P})$. Thus for a reflexive polygon we have a tropical manifold $(\mathcal{A},\mathcal{P},\phi)$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{ell-tor.pdf} \caption{Affine rational elliptic surfaces with $A_{k-1}$ singularities coming from reflexive polygons. The dual polygons give mirror pairs. The four in the last row are self-dual.} \label{fig:ell-tor} \end{center} \end{figure} By gluing in the Lagrangian fibrations on $A_{k-1}$ singularities, one obtains the following, which is a more precise version of Proposition \ref{thm:ell-intro}. \begin{prop}[Symplectic rational elliptic surfaces] \label{thm:ell} Each reflexive polygon $P$ corresponds to two symplectic rational elliptic surfaces with $A_k$ singularities $S$ and $S'$ together with Lagrangian fibrations. Both $S$ and $S'$ satisfy the following properties. \begin{enumerate} \item The base of each fibration is topologically a closed disc. The inverse image of the boundary is a symplectic torus.
\item The total number of interior singular fibers (counted with multiplicities) equals $12$. \item The $A_k$ singularities can be divided into two groups, which are in one-to-one correspondence with non-standard corners of $P$ and non-standard simplices formed by $\{v_i,v_{i+1}\}$ respectively. One has $k = A-1$ where $A$ is the area of the parallelogram spanned by primitive tangent vectors at the corner in the first case, or the area of the parallelogram spanned by $v_i,v_{i+1}$ in the second case. \item The singular fibers of the Lagrangian fibration can be divided into two groups, which are in one-to-one correspondence with corners of $P$ and simplices formed by $\{v_i,v_{i+1}\}$ respectively. The multiplicity equals $A$ defined above. \end{enumerate} For a pair of dual reflexive polygons $(P,\check{P})$, we have the mirror pairs $(S_P-D_{S_P},S_{\check{P}}'-D_{S_{\check{P}}'})$ and $(S_P'-D_{S_P'},S_{\check{P}}-D_{S_{\check{P}}})$ in the sense that the corresponding tropical geometries are Legendre dual to each other, where $D_{S_P},D_{S_{\check{P}}},D_{S_P'},D_{S_{\check{P}}'}$ denote the symplectic tori defined in (1). \end{prop} \begin{proof} $T^*(\mathcal{A}-\partial \mathcal{A}-\Delta)/\Lambda^*$ gives a symplectic manifold, where $\Delta \subset \mathcal{A}$ is the collection of singular points and $\Lambda\subset T(\mathcal{A}-\partial \mathcal{A}-\Delta)$ is the local system induced from the integral affine structure. The Lagrangian fibration given in \eqref{eq:fib_An} has monodromy $\left(\begin{array}{cc} 1 & k \\ 0 & 1 \end{array}\right)$ around the singular point $(0,0)$ in the base. Thus its base is affine isomorphic to an affine $A_{k-1}$ singularity. Using action-angle coordinates, the Lagrangian fibration over a punctured neighborhood of $(0,0)$ is isomorphic to the fibration on $T^*(\mathcal{A}-\Delta)/\Lambda^*$ around an affine $A_{k-1}$ singularity in $\Delta$.
Thus the local model can be glued in to give a Lagrangian fibration over $\mathcal{A}-\partial \mathcal{A}$. $\partial \mathcal{A}$ is an affine circle with length $L$. We have the standard Lagrangian fibration $\pi_0:(\mathbb{C}^\times / L\mathbb{Z}) \times \mathbb{C} \to (\mathbb{R}/L\mathbb{Z}) \times \mathbb{R}_{\geq 0}$ (with the standard symplectic form). A neighborhood of $(\mathbb{R}/L\mathbb{Z}) \times \{0\}$ in the base of $\pi_0$ is affine isomorphic to a neighborhood of $\partial \mathcal{A} \subset \mathcal{A}$. Thus we can glue in a neighborhood of $\mathbb{S}^1 \times (\mathbb{R}/L\mathbb{Z})$ in $\pi_0$ and get a Lagrangian fibration over $\mathcal{A}$. This gives $S$. The construction for $S'$ is similar. The boundary divisor is the symplectic torus $\mathbb{S}^1 \times (\mathbb{R}/L\mathbb{Z})$. Properties (2)-(4) follow easily from Proposition \ref{prop:not-conv}. It is easy to see that $\mathcal{A}_P-\partial\mathcal{A}_P$ and $\mathcal{A}_{\check{P}}'-\partial\mathcal{A}_{\check{P}}'$ are Legendre dual to each other. Hence they form a mirror pair by the Gross-Siebert reconstruction program. \end{proof} \begin{remark} We can take two such affine rational elliptic surfaces whose boundaries are affine circles with certain lengths, and glue them together (rescaling if necessary to match the boundary lengths) to obtain an affine sphere with singularities. By the construction of \cite{CM1} it gives a symplectic K3 surface with $A_k$ singularities. A pair of reflexive polygons $(P_1,P_2)$ and the dual $(\check{P}_1,\check{P}_2)$ produces a mirror pair of K3 surfaces with $A_k$ singularities. In \cite{HZ1} a pair of dual reflexive polygons is used to construct an affine manifold corresponding to a Calabi-Yau manifold. In the above construction $P_1$ and $P_2$ are not necessarily dual to each other.
Such a splitting is related to the conjecture of Doran-Harder-Thompson \cite{DHT} on mirror construction by gluing the Landau-Ginzburg mirrors of the components in a Tyurin degeneration. See \cite[Section 7]{Kanazawa}. \end{remark} In Section \ref{sec:Schoen} we glue the product of an interval with an affine rational elliptic surface to another such product, obtaining a symplectic Calabi-Yau threefold with orbi-conifold singularities. \subsection{Resolution and smoothing of rational elliptic surfaces with $A_k$ singularities} \label{sec:A_k-res} Now we construct resolutions and smoothings of rational elliptic surfaces with $A_k$ singularities. We shall stick with the example of a mirror pair given on the left of Figure \ref{fig:ell-tor-smoothing}. All other rational elliptic surfaces associated to reflexive polygons admit a similar construction. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{ell-tor-smoothing.pdf} \caption{An example of a mirror pair of elliptic surfaces with $A_n$ singularities. The top left and bottom left figures show the mirror pair (in this case the reflexive polygon is self-dual). The top right and bottom right figures are their smoothings respectively. The mirror of a smoothing of the top left figure is given by a resolution of the bottom left figure, which can be deduced by taking the Legendre dual.} \label{fig:ell-tor-smoothing} \end{center} \end{figure} Consider the second graph of the bottom row of Figure \ref{fig:ell-tor}, and compare with the bottom left of Figure \ref{fig:ell-tor-smoothing}. In this example there are four inner and four outer singular points, and they have multiplicities $2,2,1,1$ respectively. First we take a smoothing of all the inner singular points, meaning that we separate each inner singular point with multiplicity $k>1$ into $k$ points with multiplicity $1$, with the $k$ points lying on the original monodromy-invariant affine line.
This is done by a certain refinement of the polyhedral decomposition, together with taking a suitable fan structure at each new vertex to match the monodromy. There are several choices, and we have fixed one choice in the bottom left of Figure \ref{fig:ell-tor-smoothing}. Then we take the Legendre dual, which gives a resolution of all the inner singular points for the dual polygon. (In the example the polygon is self-dual.) Note that each inner singular point with multiplicity $k>1$ is separated into $k$ points with multiplicity $1$ lying on distinct parallel monodromy-invariant affine lines. See the top left of Figure \ref{fig:ell-tor-smoothing}. Now we either resolve or smooth out all the outer singular points. Since resolutions can be obtained by taking the Legendre dual of smoothings, we just show the smoothings here. For the bottom left of Figure \ref{fig:ell-tor-smoothing}, we refine the reflexive polygon by taking cones over the lattice points lying in the relative interior of each boundary edge, and take a suitable fan structure at all the boundary lattice points to obtain the bottom right of Figure \ref{fig:ell-tor-smoothing}. For the mirror shown in the top left of Figure \ref{fig:ell-tor-smoothing}, we refine the outer polygons if necessary and modify the fan structures at the vertices of the outer polygons to obtain the top right of Figure \ref{fig:ell-tor-smoothing}. All other affine rational elliptic surfaces with $A_k$ singularities can be treated similarly. We summarize with the following proposition. \begin{prop} All rational elliptic surfaces with $A_k$ singularities $S$ and $S'$ associated with reflexive polygons can be resolved and smoothed out in the symplectic category. For each pair of dual reflexive polygons $P$ and $\check{P}$, a resolution of the rational elliptic surface $S_P$ (or $S_P'$) is mirror to a smoothing of $S'_{\check{P}}$ (or $S_{\check{P}}$) in the sense of Gross-Siebert.
\end{prop} \subsection{Legal loops} \label{sec:legal} Note that the dual of a non-convex polygon defined in Proposition \ref{prop:12} may not be a polygon. A natural notion which is closed under duality is the legal loop defined in \cite{PR}. The `12' Property holds for a general legal loop; the proof given in \cite{PR} uses modular forms. \begin{defn} A legal loop is a finite sequence of primitive vectors $\{v_1,\ldots,v_m\} \subset \mathbb{Z}^2$ with $v_i \not= v_{i+1}$ such that each triangle $\mathrm{Conv}\{0,v_i,v_{i+1}\}$, for $i=1,\ldots,m$ where $v_{m+1}=v_1$, contains no interior lattice point. A legal loop is said to be directed if $\det (v_i,v_{i+1})$ has the same sign for all $i$. \end{defn} The loop is formed by the union of edges connecting consecutive $v_i$. Figures \ref{fig:legal-loop} and \ref{fig:legal-loop-2} show some examples. For simplicity let us assume $(v_{i+1}-v_i) \nparallel (v_i - v_{i-1})$ for all $i$. It is easy to see the following. \begin{prop} The dual (formed by taking the outward primitive normal vectors of the facets $\mathrm{Conv}\{v_i,v_{i+1}\}$ of the triangles $\mathrm{Conv}\{0,v_i,v_{i+1}\}$) of a legal loop is again a legal loop. \end{prop} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{legal-loop.pdf} \caption{A dual pair of legal loops. The bottom figure shows a stitched affine manifold which is dual to the affine manifold given on the left of Figure \ref{fig:ell-non-Fano-eg1}.} \label{fig:legal-loop} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{legal-loop-2.pdf} \caption{An example of a legal loop which is directed and has winding number $2$. It is associated with an integral affine manifold (with the origin removed).} \label{fig:legal-loop-2} \end{center} \end{figure} \begin{remark} As shown in Figure \ref{fig:legal-loop}, the dual of a directed legal loop could be non-directed. \end{remark} The `12' Property for legal loops involves winding numbers.
It is stated as follows. \begin{theorem}[\cite{PR}] \label{thm:12-legal} For a legal loop with winding number $w \in \mathbb{Z}$ around the origin, $$ \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det(v_i,v_{i+1}) + \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det (u_{i-1},u_{i}) = 12 w $$ where $u_i$ is the outward primitive normal vector of the triangle $\{0,v_i,v_{i+1}\}$. \end{theorem} Given a legal loop, we can associate it with a `folded' affine space. \begin{defn} A folded integral affine space is a smooth manifold $B_0$ (of dimension $n$) equipped with the following: \begin{enumerate} \item A codimension-1 submanifold $S \subset B_0$ (which may not be connected); \item An integral affine structure on the closure of each connected component of $B_0-S$ (which is a manifold with boundary); \item An isomorphism between the two integral affine structures along $S$ (which locally divides $B_0$ into two half-spaces). \end{enumerate} \end{defn} Note that each of the integral affine structures at a point of $S$ in the above definition is the intersection of a lattice with a half-space, and the isomorphisms of lattices are required to preserve the half-spaces (and hence `fold' the half-spaces to each other). To construct $B_0$, we perform a construction similar to that in the proof of Proposition \ref{prop:not-conv}: take the triangles with corners $0,v_i,v_{i+1}$, and the parallelograms with corners $0,v_i,v_{i-1}-v_i,v_{i-1}$. Take the \emph{disjoint union} of these polygons, and then identify the sides $\{0,v_i\}$ among the triangles, identify the sides $\{0,v_i\}$ and $\{v_{i}-v_{i+1},v_{i}\}$ of the parallelograms, and identify the sides $\{v_i,v_{i-1}\}$ of the triangles and the sides $\{0,v_{i-1}-v_i\}$ of the parallelograms. Finally we remove the points $0$ of the triangles, as well as the mid-points of the sides $\{0,v_i\}$ and $\{0,v_{i-1}-v_i\}$ of the parallelograms. This gives $B_0$ as an oriented surface.
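Before proceeding with the construction, we note that Theorem \ref{thm:12-legal} can be checked numerically on small loops. The sketch below (our own illustration, with hypothetical loop data) verifies the identity for the $\mathbb{P}^2$ loop, which has winding number $1$, and for its double traversal, which has winding number $2$:

```python
from math import gcd

def det(a, b):
    # 2x2 determinant of the integer vectors a, b.
    return a[0] * b[1] - a[1] * b[0]

def primitive(v):
    # Scale a nonzero integer vector to its primitive form.
    g = gcd(abs(v[0]), abs(v[1]))
    return (v[0] // g, v[1] // g)

def outward_normal(vi, vj):
    # Outward primitive normal of the triangle {0, vi, vj} along the edge {vi, vj}:
    # a primitive normal n of vj - vi with <n, vi> = <n, vj> > 0.
    d = (vj[0] - vi[0], vj[1] - vi[1])
    n = primitive((d[1], -d[0]))
    if n[0] * vi[0] + n[1] * vi[1] < 0:
        n = (-n[0], -n[1])
    return n

def legal_loop_sum(vs):
    # LHS of the `12' identity: sum det(v_i, v_{i+1}) + sum det(u_{i-1}, u_i).
    m = len(vs)
    s1 = sum(det(vs[i], vs[(i + 1) % m]) for i in range(m))
    us = [outward_normal(vs[i], vs[(i + 1) % m]) for i in range(m)]
    s2 = sum(det(us[i - 1], us[i]) for i in range(m))
    return s1 + s2

loop = [(1, 0), (0, 1), (-1, -1)]
print(legal_loop_sum(loop))       # winding number 1 -> 12
print(legal_loop_sum(loop * 2))   # winding number 2 -> 24
```

Both values agree with $12w$, as the theorem predicts.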
$S$ is defined to be the union of those sides $\{0,v_i\}$ of the triangles and $\{0,v_i\}$ of the parallelograms where $\det(v_{i-1},v_i)$ and $\det(v_i,v_{i+1})$ have different signs. In particular if the legal loop is directed, $S = \emptyset$ and we do not have any folding. As in Proposition \ref{prop:not-conv}, we have a similar local structure at each vertex, namely edges incident to the vertex are identified with rays in $\mathbb{R}^2$, and faces incident to the vertex are identified with cones in $\mathbb{R}^2$. However, these cones do not form a fan in general: some of them can overlap in their interiors. The rays at the vertex $v_i$ of the triangles are generated by the four vectors: $v_i$, $-v_i$, the primitive vector along $v_{i+1}-v_i$ and that along $v_{i-1}-v_i$. At the vertex $v_i$ of a parallelogram, the vector $-v_i$ is mapped to $-e_2$; the primitive vector in the direction $v_{i-1}-v_i$ and that in the direction $v_{i+1}-v_i$ are mapped to $\mathrm{sgn}(\det(v_{i-1},v_i)) e_1$ and $-\mathrm{sgn}(\det(v_{i-1},v_i)) e_1$ respectively. This gives the structure of a folded integral affine space. For directed legal loops this gives ordinary integral affine surfaces. The proof of the following is similar to that of Proposition \ref{prop:not-conv} and hence omitted. \begin{lemma} The monodromy around $0$ is trivial. The multiplicities of the singular points on the sides $\{0,v_i\}$ and $\{0,v_{i-1}-v_i\}$ of the parallelograms are given by $\det(u_{i-1},u_i)$ and $\det(v_{i-1},v_i)$ respectively, where $u_i$ is the outward primitive normal vector of the triangle $\{0,v_i,v_{i+1}\}$. \end{lemma} \begin{prop}[Torus fibration associated with legal loop] Each legal loop is associated with a smooth torus fibration over $\mathbb{S}^2$ with simple nodal singular fibers (with signs $\pm 1$).
Each vertex $v_i$ corresponds to $|\det(u_{i-1},u_i)|$ singular fibers of sign $\mathrm{sgn}(\det(u_{i-1},u_i))$ where $u_i$ is the outward primitive normal vector of the triangle $\{0,v_i,v_{i+1}\}$. Each edge $\{v_i,v_{i+1}\}$ corresponds to $|\det(v_i,v_{i+1})|$ singular fibers of sign $\mathrm{sgn}(\det(v_i,v_{i+1}))$. These are all the singular fibers. \end{prop} \begin{proof} As in Section \ref{sec:A_k-res}, we can resolve or smooth out the $A_k$ singularities such that all of them become simple. Denote the corresponding folded integral affine space by $(\tilde{B}_0,\tilde{S})$. Take the torus bundle $T^*U/\Lambda^*$ where $U$ is a component of $\tilde{B}_0-\tilde{S}$ and $\Lambda$ denotes the integral structure in the tangent bundle. The torus bundles for various $U$ are glued together according to the isomorphism along $\tilde{S}$. This gives a torus bundle over $\tilde{B}_0$. Then at each singular point with positive (or negative) sign, we glue in the fibration given by Equation \eqref{eq:fib_An'} (or the fibration obtained by multiplying the first component of Equation \eqref{eq:fib_An'} by $-1$). This matches the monodromies of the singular points. Now we have a torus fibration over an annulus. Finally, since the monodromies around $0$ and around the outer boundary are trivial, we can glue in a trivial fibration around $0$ and $\infty$ to obtain a fibration over $\mathbb{S}^2$. This gives the fibration with the specified properties. \end{proof} \begin{proof}[A topological proof of the `12' Property (Theorem \ref{thm:12-legal})] By a classical result of Matsumoto \cite[Theorem 1.1']{Matsumoto}, the total space of a good torus fibration is topologically either $V_k \# l (\mathbb{S}^2 \times \mathbb{S}^2)$ or $\bar{V}_k \# l (\mathbb{S}^2 \times \mathbb{S}^2)$, where $V_k$ is an elliptic surface with Euler characteristic $12k$. The signature is $-8k$ and $8k$ respectively.
On the other hand, the signature is equal to $(-2/3) (k_+ - k_-)$ where $k_+$ (resp.\ $k_-$) is the total number of positive (resp.\ negative) simple singular fibers. Thus we see that $\sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det(v_i,v_{i+1}) + \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det (u_{i-1},u_{i}) = 12 k$. To prove that $k$ equals the winding number $w$, it suffices to see that $k=0$ when $w=0$ (since we can concatenate legal loops). When $w=0$, the torus fibration can be identified with the pullback of a fibration over a contractible space, and hence is homotopic to a trivial bundle. It follows that the signature is zero. \end{proof} \section{Orbi-conifold transitions of Schoen's Calabi-Yau mirror pairs} \label{sec:Schoen} Schoen's Calabi-Yau threefold \cite{Schoen} is (a resolution of) a fiber product of two rational elliptic surfaces over $\mathbb{P}^1$. In Section 9 of \cite{CM2}, Castano-Bernard and Matessi constructed conifold transitions of Schoen's Calabi-Yau from the fan polytopes of toric blow-ups of $\mathbb{P}^2$ (whose dual polytopes have standard vertices). In this section we give a generalization of their construction. Namely we construct orbi-conifold transitions of Schoen's Calabi-Yau threefold and its mirror. (Orbi-conifold points naturally occur when the vertices of a polygon are not standard.) This gives compact examples of mirror pairs for orbi-conifolds. Conceptually Schoen's Calabi-Yau degenerates to orbifolded conifolds as follows. Take two elliptic surfaces with $A$-type singularities, and take their fiber product over $\mathbb{P}^1$. Suppose there is a point in $\mathbb{P}^1$ such that both elliptic surfaces have singularities over that point. Correspondingly the fiber product has an orbifolded conifold singularity. In the following we use affine geometry to understand it more clearly. The Gross-Siebert mirror (obtained by taking the discrete Legendre transform) has generalized conifold singularities.
\subsection{General formulation} We have explained local generalized and orbifolded conifolds in Section \ref{sec:SYZ}. Below we define a notion which includes both types of singularities in a global geometry. \begin{defn} A complex (or symplectic) orbi-conifold is a topological space $X$ together with a discrete subset $S \subset X$ such that $X-S$ is a complex (or symplectic) orbifold, and for each $p \in S$ we have a homeomorphism of a neighborhood of $p$ to a neighborhood of the singular point in a local orbifolded or generalized conifold given in Section \ref{sec:SYZ}, and the homeomorphism is orbi-complex (or orbi-symplectomorphic) away from $p$. A topological orbi-conifold is defined similarly without the complex (or symplectic) structure. \end{defn} \begin{defn} A symplectic orbi-conifold transition of a symplectic manifold $X$ is another symplectic manifold $Y$ which is a resolution of a topological orbi-conifold $X_0$, where $X_0$ is a degeneration of $X$. (The reverse process is also called an orbi-conifold transition.) \end{defn} \begin{remark} A more natural definition should require $X_0$ to be a symplectic orbi-conifold. Below we only construct $X_0$ as a topological orbi-conifold. To make it symplectic, we should use the technique of \cite{AAK}, based on a Moser argument, to construct a Lagrangian fibration on the local generalized conifold. \end{remark} Now we consider the affine setting. The following definition is an orbifold generalization of \cite[Definition 6.3]{CM2}. \begin{defn} \label{def:orbi-cfd} An affine orbi-conifold is a polarized tropical threefold $(B,\mathcal{P},\phi)$ with the following properties. \begin{enumerate} \item The discriminant locus $\Delta$ is a graph with vertices of valency 3 or 4. \item Each edge of $\Delta$ has monodromy given by the matrix $\left(\begin{array}{ccc} 1 & 0 & 0 \\ k & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}\right)$ in a suitable basis.
\item Every trivalent vertex of $\Delta$ has a neighborhood which is integral affine isomorphic to the orbifolded positive or negative vertex given in Section \ref{sec:orb_vert}. \item Every 4-valent vertex of $\Delta$ has a neighborhood which is integral affine isomorphic to the affine generalized or orbifolded conifolds given in Section \ref{sec:aff-loc}. \end{enumerate} \end{defn} Since the affine generalized conifold point and the affine orbifolded conifold point are locally dual to each other (see Figure \ref{fig:loc-orb-cfd}), we immediately have the following. \begin{prop} \label{prop:gen-dual-orb} Let $(B,\mathcal{P},\phi)$ be an affine orbi-conifold, and $(\check{B},\check{\mathcal{P}},\check{\phi})$ its Legendre dual. Then the affine generalized conifold points of $(B,\mathcal{P},\phi)$ are in one-to-one correspondence with the affine orbifolded conifold points of $(\check{B},\check{\mathcal{P}},\check{\phi})$, and vice versa. \end{prop} By gluing the local models of fibrations given in Sections \ref{sec:SYZ} and \ref{sec:loc-aff}, we obtain a global orbi-conifold. Combining with Proposition \ref{prop:gen-dual-orb}, we have the following. \begin{prop}[Mirror symmetry of singularities] \label{prop:top-orb-cfd} Given an affine orbi-conifold $(B,\mathcal{P},\phi)$, there exists a topological orbi-conifold $X$ and a torus fibration to $B$ with a section, whose discriminant locus and monodromies agree with those of $B$. Let $\check{X}$ be the topological orbi-conifold corresponding to the Legendre dual $(\check{B},\check{\mathcal{P}},\check{\phi})$. Then the generalized conifold points of $X$ are in one-to-one correspondence with the orbifolded conifold points of $\check{X}$, and vice versa.
\end{prop} \begin{defn} Given an affine orbi-conifold $(B,\mathcal{P},\phi)$, a smoothing of $(B,\mathcal{P},\phi)$ is a simple and positive polarized tropical threefold $(\tilde{B},\tilde{\mathcal{P}},\tilde{\phi})$ such that the corresponding manifold $\tilde{X}$ is a (topological) smoothing of the orbi-conifold $X$. A resolution of $(B,\mathcal{P},\phi)$ is a simple and positive polarized tropical threefold such that its Legendre dual is a smoothing of the Legendre dual $(\check{B},\check{\mathcal{P}},\check{\phi})$. \end{defn} Given a simple and positive polarized tropical threefold $(B,\mathcal{P},\phi)$, \cite{CM2} constructed a symplectic manifold with a Lagrangian fibration to $B$. In particular an affine smoothing or resolution of an affine orbi-conifold corresponds to a symplectic manifold. This gives a way to produce orbi-conifold transitions via affine geometry. \begin{remark} Similarly to Definition \ref{def:orbi-cfd}, we can allow vertices with higher valencies and define the notion of an affine variety with Gorenstein singularities by using the local models given in Section \ref{sec:Gor}. The same technique can be applied to construct more general geometric transitions. \end{remark} \subsection{Examples of compact CY orbi-conifolds} The following is a more precise formulation of the first part of Theorem \ref{thm:main}. \begin{theorem} Each pair of reflexive polygons $(P_1,P_2)$ corresponds to a compact orbifolded conifold $O^{(P_1,P_2)}$ and a compact generalized conifold $G^{(P_1,P_2)}$. They have the following properties. \begin{enumerate} \item $O^{(P_1,P_2)}$ and $G^{(P_1,P_2)}$ contain orbifolded loci which are locally modeled by (an open subset of) $\mathbb{C}^2/\mathbb{Z}_k \times \mathbb{C}^\times \supset \{0\} \times \mathbb{C}^\times$.
\item The components of the orbifolded loci of $O^{(P_1,P_2)}$ (or $G^{(P_1,P_2)}$) are in one-to-one correspondence with the non-standard corners of $P_i$ and edges of $P_i$ with affine length greater than one ($i=1,2$). The multiplicity $k$ equals the index of the non-standard corner in the first case and equals the affine length of the edge in the second case. \item $O^{(P_1,P_2)}$ is a degeneration of a Schoen's CY. $G^{(P_1,P_2)}$ is a blow-down of a mirror Schoen's CY. \end{enumerate} \end{theorem} Note that $O^{(P_1,P_2)}$ is mirror to $G^{(\check{P}_1,\check{P}_2)}$ (rather than $G^{(P_1,P_2)}$) where $\check{P}$ denotes the dual polygon of $P$. The construction goes as follows. The two reflexive polygons $P_1$ and $P_2$ are associated with two affine rational elliptic surfaces $\mathcal{A}_r'$, $r=1,2$, with $A_k$ singularities (see Proposition \ref{prop:not-conv} for the notations). Let $L_r$ ($r=1,2$) be the affine length of the boundary of $P_r$. We shall glue the affine threefolds $\mathcal{A}_1' \times (\mathbb{R}/L_2\mathbb{Z})$ and $(\mathbb{R}/L_1\mathbb{Z})\times \mathcal{A}_2'$ along their boundaries $\partial \mathcal{A}_1' \times (\mathbb{R}/L_2\mathbb{Z}) \cong (\mathbb{R}/L_1\mathbb{Z})\times \partial\mathcal{A}_2' \cong (\mathbb{R}/L_1\mathbb{Z}) \times (\mathbb{R}/L_2\mathbb{Z})$. Then we get an affine $\mathbb{S}^3$ with singularities, which corresponds to an orbifolded conifold degeneration of the Schoen's Calabi-Yau threefold. \begin{remark} If we glue them in the other way, namely $\partial \mathcal{A}_1' \times (\mathbb{R}/\mathbb{Z}) \cong \partial\mathcal{A}_2'\times (\mathbb{R}/\mathbb{Z})$ with rescaling if necessary, then we get an affine $\mathbb{S}^2 \times \mathbb{S}^1$ which corresponds to $(K3 \textrm{ with } A_k \textrm{ singularities}) \times \mathbb{T}^{2}$.
\end{remark} For instance, take $P_1$ to be the polygon given in the second graph of the last row of Figure \ref{fig:ell-tor}, and $P_2$ the second graph of the first row of Figure \ref{fig:ell-tor}. The gluing is given in Figure \ref{fig:SchoenCY-whole}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{SchoenCY-whole-4sides.pdf} \caption{An orbifolded conifold degeneration of Schoen's Calabi-Yau. Each thick dot represents an orbifolded conifold singularity.} \label{fig:SchoenCY-whole} \end{center} \end{figure} The polyhedral decomposition consists of the following polytopes. Let $P_r$ ($r=1,2$) consist of $m_r$ edges with affine lengths $l_k^{(r)}$ for $k=1,\ldots,m_r$ (the labeling of edges is in counterclockwise order). Then $\sum_{k=1}^{m_r} l_k^{(r)} = L_r$. We have the polytopes \begin{equation} \label{eq:hatP1} \hat{P}_j^{(1)}:=[0,l_j^{(2)}] \times P_1 \end{equation} for $j=1,\ldots,m_2$, \begin{equation} \label{eq:hatP2} \hat{P}_i^{(2)}:=[0,l_i^{(1)}] \times P_2 \end{equation} for $i=1,\ldots,m_1$, and the cubes \begin{equation} \label{eq:C} C_{ij}:=[0,l_i^{(1)}] \times [0,l_j^{(2)}] \times [0,1] \end{equation} for $i=1,\ldots,m_1,j=1,\ldots,m_2$. The facet $\{0\} \times P_r$ (for $r=1,2$) of $\hat{P}_k^{(r)}$ is glued with the facet $\{l_{k-1}^{(r)}\} \times P_r$ of $\hat{P}_{k-1}^{(r)}$ for $k\in\mathbb{Z}/m_r\mathbb{Z}$. The facet $\{0\} \times [0,l_j^{(2)}] \times [0,1]$ of $C_{ij}$ is glued with the facet $\{l_{i-1}^{(1)}\} \times [0,l_j^{(2)}] \times [0,1]$ of $C_{i-1,j}$ for $i\in\mathbb{Z}/m_1\mathbb{Z}$; the facet $[0,l_i^{(1)}] \times \{0\} \times [0,1]$ of $C_{ij}$ is glued with the facet $[0,l_i^{(1)}] \times \{l_{j-1}^{(2)}\} \times [0,1]$ of $C_{i,j-1}$ for $j\in\mathbb{Z}/m_2\mathbb{Z}$. 
The facet $[0,l_i^{(1)}] \times [0,l_j^{(2)}] \times \{0\}$ of $C_{ij}$ is glued with the facet $[0,l_j^{(2)}] \times \partial_i P_1$ of $\hat{P}_j^{(1)}$ (where $\partial_i P_1$ denotes the $i$-th edge of $P_1$), and the facet $[0,l_i^{(1)}] \times [0,l_j^{(2)}] \times \{1\}$ of $C_{ij}$ is glued with the facet $[0,l_i^{(1)}] \times \partial_j P_2$ of $\hat{P}_i^{(2)}$. The resulting total space is $\mathbb{S}^3$ topologically. Now we describe the fan structure at each vertex. Every vertex of the polyhedral decomposition is either a vertex of $\hat{P}_j^{(1)}$ (for $j=1,\ldots,m_2$) or that of $\hat{P}_i^{(2)}$ (for $i=1,\ldots,m_1$). It is always in the shape of a product of the fan of $\mathbb{P}^1$ and that of a weighted projective plane. Then the fan structure is taken to be the product of the fan of $\mathbb{P}^1$ and the fan at the corresponding vertex of the affine rational elliptic surface $\mathcal{A}_r'$. The discriminant locus $\Delta$ is taken to be the union of the lines \begin{equation} \label{eq:Delta_1} [0,l_j^{(2)}]\times\{p\} \subset \partial \hat{P}_j^{(1)} \end{equation} for $j=1,\ldots,m_2$ and $p$ being the mid-point of an edge of $P_1$, \begin{equation} \label{eq:Delta_2} [0,l_i^{(1)}]\times\{p\} \subset \partial \hat{P}_i^{(2)} \end{equation} for $i=1,\ldots,m_1$ and $p$ being the mid-point of an edge of $P_2$, \begin{equation} \label{eq:Delta_3} \{0\} \times [0,l_j^{(2)}] \times \{1/2\} \subset \partial C_{ij} \end{equation} and \begin{equation} \label{eq:Delta_4} [0,l_i^{(1)}] \times \{0\} \times \{1/2\} \subset \partial C_{ij} \end{equation} for $i=1,\ldots,m_1$ and $j=1,\ldots,m_2$. This gives $(B,\mathcal{P})$. One can directly verify the following; we omit the standard computations here. \begin{lemma} The polyhedral decomposition and the fan structures at vertices define an affine structure on $B-\Delta$.
The multiplicities of the discriminant loci defined by Equation \eqref{eq:Delta_1},\eqref{eq:Delta_2} are the lengths of the edges of $P_1$ and $P_2$ respectively; the multiplicities for Equation \eqref{eq:Delta_3}, \eqref{eq:Delta_4} are the orders of the $i$-th vertex of $P_1$ and the $j$-th vertex of $P_2$ respectively. (The $i$-th vertex is adjacent to the $(i-1)$-th and $i$-th edges.) \end{lemma} The discriminant loci defined by Equation \eqref{eq:Delta_1} and \eqref{eq:Delta_2} correspond to the inner singular points of the affine surfaces $\mathcal{A}_r'$. (One can smooth out the inner singular points so that all of them are simple (having multiplicity $1$), see the top left of Figure \ref{fig:ell-tor-smoothing}. Here we simply leave them as orbifolded singularities.) The discriminant loci defined by Equation \eqref{eq:Delta_3} and \eqref{eq:Delta_4} correspond to the outer singular points of $\mathcal{A}_r'$. They intersect at $m_1\cdot m_2$ points, and these are the orbifolded conifold singularities. For instance in Figure \ref{fig:SchoenCY-whole} there are $12$ orbifolded conifold points, $6$ of which are not the usual conifold points. By Proposition \ref{prop:top-orb-cfd}, we have a topological orbifolded conifold $O^{(P_1,P_2)}$ corresponding to $(B,\mathcal{P})$. We have the following compact affine generalized conifold $(\check{B},\check{\mathcal{P}})$. It is glued from $\mathcal{A}^{\check{P}_1} \times (\mathbb{R}/\check{L}_2\mathbb{Z})$ and $(\mathbb{R}/\check{L}_1\mathbb{Z}) \times \mathcal{A}^{\check{P}_2}$, where $\check{P}_r$ is the dual polytope of $P_r$ and $\check{L}_r$ is the affine length of its boundary. See Figure \ref{fig:SchoenCYmir-whole-4sides}. (In this example $\check{P}_1$ is the same as $P_1$, and $\check{P}_2$ is given by the first graph of the first row of Figure \ref{fig:ell-tor}.) \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{SchoenCYmir-whole-4sides.pdf} \caption{The mirror obtained by taking Legendre dual.
Each thick dot represents a generalized conifold singularity.} \label{fig:SchoenCYmir-whole-4sides} \end{center} \end{figure} $(\check{B},\check{\mathcal{P}})$ is obtained by gluing the prisms \begin{equation} \label{eq:hatT1} \hat{T}^{(1)}_{ij}:=[0,\check{l}_j^{(2)}] \times T_i^{(1)} \end{equation} and \begin{equation} \label{eq:hatT2} \hat{T}^{(2)}_{ij}:=[0,\check{l}_i^{(1)}] \times T_j^{(2)} \end{equation} along their boundaries, where $T_i^{(r)}$ are the triangles formed by the adjacent corners of $\check{P}_r$ for $r=1,2$. The facet $\{\check{l}_j^{(2)}\} \times T_i^{(1)}$ of $\hat{T}^{(1)}_{ij}$ is glued with the facet $\{0\} \times T_i^{(1)}$ of $\hat{T}^{(1)}_{i,j+1}$; the facet $\{\check{l}_i^{(1)}\} \times T_j^{(2)}$ of $\hat{T}^{(2)}_{ij}$ is glued with the facet $\{0\} \times T_j^{(2)}$ of $\hat{T}^{(2)}_{i+1,j}$. Denote the edges of the triangle $T_i^{(r)}$ by $\partial_0 T_i^{(r)}, \partial_1 T_i^{(r)},\partial_2 T_i^{(r)}$ corresponding to the $i$-th side of $\check{P}_r$ and its adjacent vertices (in counterclockwise order) respectively. The facet $[0,\check{l}_j^{(2)}] \times \partial_0 T_i^{(1)}$ of $\hat{T}^{(1)}_{ij}$ is glued with the facet $[0,\check{l}_i^{(1)}] \times \partial_0 T_j^{(2)}$ of $\hat{T}^{(2)}_{ij}$ by $\partial_0 T_i^{(1)} \cong [0,\check{l}_i^{(1)}]$ and $\partial_0 T_j^{(2)} \cong [0,\check{l}_j^{(2)}]$; the facet $[0,\check{l}_j^{(2)}] \times \partial_2 T_i^{(1)}$ of $\hat{T}^{(1)}_{ij}$ is glued with the facet $[0,\check{l}_j^{(2)}] \times \partial_1 T_{i+1}^{(1)}$ of $\hat{T}^{(1)}_{i+1,j}$; the facet $[0,\check{l}_i^{(1)}] \times \partial_2 T_j^{(2)}$ of $\hat{T}^{(2)}_{ij}$ is glued with the facet $[0,\check{l}_i^{(1)}] \times \partial_1 T_{j+1}^{(2)}$ of $\hat{T}^{(2)}_{i,j+1}$. The resulting space is again $\mathbb{S}^3$ topologically.
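The combinatorics of this prism decomposition can be tracked with a short bookkeeping script. The following sketch is our own illustrative code (the function name and the toy edge lengths are not from the paper): it counts the $2m_1m_2$ prisms $\hat{T}^{(1)}_{ij}$, $\hat{T}^{(2)}_{ij}$, the total affine boundary lengths $\check{L}_r$, and the $m_1\cdot m_2$ candidate generalized conifold points, one for each pair $(i,j)$.

```python
# Illustrative bookkeeping (our own names, not from the paper): enumerate the
# prisms hatT1[i][j] = [0, l2[j]] x T_i^{(1)} and hatT2[i][j] = [0, l1[i]] x T_j^{(2)}
# and count the cells and the m1*m2 points, one per pair (i, j).
def dual_decomposition_counts(l1, l2):
    """l1, l2: lists of affine edge lengths of the dual polygons."""
    m1, m2 = len(l1), len(l2)
    prisms = [("T1", i, j) for i in range(m1) for j in range(m2)]
    prisms += [("T2", i, j) for i in range(m1) for j in range(m2)]
    return {
        "cells": len(prisms),        # 2 * m1 * m2 prisms in total
        "conifold_points": m1 * m2,  # one candidate point per pair (i, j)
        "L1": sum(l1),               # total affine boundary length of P^check_1
        "L2": sum(l2),               # total affine boundary length of P^check_2
    }

counts = dual_decomposition_counts([1, 1, 2], [1, 2, 1, 2])  # toy edge lengths
```

For these toy lengths ($m_1=3$, $m_2=4$) the script returns $24$ prisms and $12$ points; the actual values for a given pair of reflexive polygons are read off from their edge data.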
The dual discriminant locus $\check{\Delta}$ is given by the union of the lines \begin{align} [0,\check{l}_j^{(2)}]\times\{p\} \subset \partial \hat{T}_{ij}^{(1)}, &\textrm{ $p$ is the mid-point of $\partial_1T_i^{(1)}$ or $\partial_2T_i^{(1)}$}, \label{eq:Deltacheck_1} \\ [0,\check{l}_i^{(1)}]\times \{p\} \subset \partial \hat{T}_{ij}^{(2)}, &\textrm{ $p$ is the mid-point of $\partial_1T_j^{(2)}$ or $\partial_2T_j^{(2)}$}, \label{eq:Deltacheck_2} \\ [0,\check{l}_j^{(2)}]\times\{p\} \subset \partial \hat{T}_{ij}^{(1)}, &\textrm{ $p$ is the mid-point of $\partial_0T_i^{(1)}$}, \label{eq:Deltacheck_3} \\ [0,\check{l}_i^{(1)}]\times \{p\} \subset \partial \hat{T}_{ij}^{(2)}, &\textrm{ $p$ is the mid-point of $\partial_0T_j^{(2)}$}. \label{eq:Deltacheck_4} \end{align} The discriminant loci defined by Equation \eqref{eq:Deltacheck_1} and \eqref{eq:Deltacheck_2} correspond to the inner singular points of the affine surfaces $\mathcal{A}^{\check{P}_r}$. The discriminant loci defined by Equation \eqref{eq:Deltacheck_3} and \eqref{eq:Deltacheck_4} correspond to the outer singular points of $\mathcal{A}^{\check{P}_r}$. They intersect at $m_1\cdot m_2$ points, and these are the generalized conifold singularities. By Proposition \ref{prop:top-orb-cfd}, we have a corresponding topological generalized conifold $G^{(\check{P}_1,\check{P}_2)}$. From the definition of $(\check{B},\check{\mathcal{P}})$, one can write down a multivalued piecewise-linear function $\phi$ on $B$ such that $(\check{B},\check{\mathcal{P}})$ is the Legendre dual of $(B,\mathcal{P},\phi)$. \begin{remark} If we take the polytope $P$ to be the moment map polytope of a toric blow-up of $\mathbb{P}^2$ (or equivalently $\check{P}$ to be the fan polytope of a toric blow-up of $\mathbb{P}^2$), then $B$ is a usual affine conifold. This was constructed in \cite[Section 9.4]{CM2}. 
\end{remark} \subsection{Orbi-conifold transitions and mirror symmetry} From the last subsection, we have a compact orbifolded conifold $O^{(P_1,P_2)}$ and a compact generalized conifold $G^{(P_1,P_2)}$ for each pair of reflexive polygons $(P_1,P_2)$. In this subsection we construct their smoothings and resolutions in affine geometry. By the Gross-Siebert reconstruction, they give a mirror pair of toric degenerations. By the discrete Legendre transform, a resolution of $O^{(P_1,P_2)}$ is mirror to a smoothing of $G^{(\check{P}_1,\check{P}_2)}$, and vice versa. This completes the proof of Theorem \ref{thm:main}. Since a resolution of the affine orbifolded conifold corresponding to $(\check{P}_1,\check{P}_2)$ is Legendre dual to a smoothing of the affine generalized conifold corresponding to $(P_1,P_2)$, and vice versa, it suffices to construct smoothings for $O^{(P_1,P_2)}$ and $G^{(P_1,P_2)}$. \subsubsection{Smoothing of $O^{(P_1,P_2)}$.} Recall that we have constructed a smoothing $\tilde{\mathcal{A}}_{P_r}'$ for the affine elliptic surfaces $\mathcal{A}_{P_r}'$, see the top right of Figure \ref{fig:ell-tor-smoothing}. A smoothing of $O^{(P_1,P_2)}$ is given by gluing the two products of an affine circle with $\tilde{\mathcal{A}}_{P_r}'$ for $r=1,2$. See Figure \ref{fig:SchoenCY-whole-4sides-smoothing}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{SchoenCY-whole-4sides-smoothing.pdf} \caption{A smoothing of an orbifolded conifold, which gives a Schoen's CY. Its Legendre transform gives a resolution of the mirror generalized conifold (which is an orbi-conifold transition of the mirror Schoen's CY shown in Figure \ref{fig:SchoenCYmir-whole-4sides-smoothing}).} \label{fig:SchoenCY-whole-4sides-smoothing} \end{center} \end{figure} Let $K_r$, $r=1,2$, be the maximal multiplicity of the outer singular points of $\mathcal{A}_{P_r}'$.
The polyhedral decomposition consists of $\hat{P}_j^{(1)}$ for $j=1,\ldots,m_2$ defined by Equation \eqref{eq:hatP1}, $\hat{P}_i^{(2)}$ for $i=1,\ldots,m_1$ defined by Equation \eqref{eq:hatP2}, and $(K_1+K_2)$ copies of $C_{ij}$ for $i=1,\ldots,m_1,j=1,\ldots,m_2$ defined by Equation \eqref{eq:C}. They are glued as shown in Figure \ref{fig:SchoenCY-whole-4sides-smoothing}. The fan structure at each vertex is taken to be the product of the fan of $\mathbb{P}^1$ and the fan at the corresponding vertex of $\tilde{\mathcal{A}}_{P_r}'$. The discriminant locus is shown in the figure and we omit the detailed descriptions. This gives a smoothing of $O^{(P_1,P_2)}$. Its Legendre dual gives a resolution of $G^{(P_1,P_2)}$. \subsubsection{Smoothing of $G^{(P_1,P_2)}$.} First we glue the products of an affine circle with a smoothing $\tilde{\mathcal{A}}_{P_r}$ of $\mathcal{A}_{P_r}$ (recall the bottom right of Figure \ref{fig:ell-tor-smoothing}) for $r=1,2$. See Figure \ref{fig:SchoenCYmir-whole-4sides-smoothing}. It consists of prisms $[0,1] \times \tilde{T}_i^{(1)}$ and $[0,1] \times \tilde{T}_j^{(2)}$, where $\tilde{T}_i^{(r)}$ for $r=1,2$ are the standard triangles formed by (the origin and) the adjacent boundary lattice points of $\check{P}_r$. The resulting space is an affine conifold (with negative nodes). It can be smoothed out by subdividing the rectangular facets containing the conifold singularities, and refining the polyhedral decomposition correspondingly. Since all these negative nodes are contained in an affine plane, they can be simultaneously smoothed out by \cite[Theorem 8.7]{CM2}. This gives a smoothing of $G^{(P_1,P_2)}$. Its Legendre dual gives a resolution of $O^{(P_1,P_2)}$. As a result, we obtain orbi-conifold transitions of the Schoen's CY and its mirror. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{SchoenCYmir-whole-4sides-smoothing.pdf} \caption{A smoothing of the mirror. 
The figure has (negative) conifold singularities which can be easily smoothed out by subdividing the $18$ rectangles containing the conifold singularities. Its Legendre transform gives an orbi-conifold transition of Schoen's CY shown in Figure \ref{fig:SchoenCY-whole-4sides-smoothing}.} \label{fig:SchoenCYmir-whole-4sides-smoothing} \end{center} \end{figure} \begin{remark} For convenience, in the above we first smooth out to a conifold, and then take further smoothing of the conifold. There are other choices of subdivision which do not go through a conifold. For instance see the left of Figure \ref{fig:loc-orb-con-res}. In general orbi-conifold degenerations do not factor through ordinary conifold degenerations. \end{remark} \bibliographystyle{amsalpha}
\section{Introduction} Chemical and topographic high-throughput patterning of surfaces by lithographic stamping is key to the preparation of a broad range of functional materials and components\cite{CL_Nie2008}. Nanoimprint lithography\cite{CL_Traub2016,CL_Guo2007} involves embossing of plastically deformable surfaces or surface coatings, often consisting of polymeric materials, with hard stamps. Classical soft lithography with elastomeric stamps \cite{CL_Xia1998a,CL_Qin2010} including approaches such as microcontact printing \cite{CL_Perl2009,CL_Kaufmann2010} and polymer pen lithography \cite{CL_Huo2008,CL_Braunschweig2009,CL_Carbonell2017} involves the transfer of molecules adsorbed on the stamp surface to a counterpart surface, on which consequently thin ink layers are deposited. Lithographic approaches that combine deposition of materials and topographic patterning may involve different types of capillary force lithography \cite{CL_Suh2009,CL_Ho2015}, insect-inspired capillary submicron stamping \cite{IICN_Han2018a}, wet lithography \cite{CL_Cavallini2009}, and electrochemical lithography\cite{CL_Simeone2009}. The stamping of functional materials characterized by complex molecular and/or mesoscopic architectures using classical soft lithography has remained demanding. Preformed nanoparticles having the desired functionality may be assembled on a first substrate, then transferred to a stamp and finally stamped onto a second substrate \cite{CL_Lee2007,CL_Cucinotta2009}. In the case of nanoparticles characterized by complex mesoscopic morphologies, such as mesoporous silica nanoparticles, deposition onto surfaces typically requires complex bonding chemistry \cite{NP_arrays_Kehr2010,NP_arrays_Wang2010,NP_arrays_Yoon2007,NP_arrays_Ruiz2006,NP_arrays_Lee2005}; spatial distribution and ordering of the mesoporous silica nanoparticles are difficult to control. 
Block copolymers (BCPs) are a particularly interesting class of materials because they may combine the specific properties of their chemically distinct blocks. In the case of the BCP polystyrene-\textit{block}-poly(2-vinylpyridine) (PS-\textit{b}-P2VP) the nonpolar PS blocks may serve as rigid glassy scaffold. The polar P2VP blocks can be functionalized taking advantage of the presence of a pyridyl group in each P2VP repeat unit. Also, reversible swelling of the P2VP domains can be controlled \textit{via} the pH value \cite{IICN_Wang2007}. BCPs themselves have been employed as structure-directing agents \cite{NP_arrays_Bang2009,NP_arrays_Lohmuller2011}. Also, either solid or mesoporous BCP nanoparticles are accessible by different solution-based preparative approaches\cite{NP_arrays_Deng2012,NP_arrays_Jin2014,NP_arrays_Fan2014,NP_arrays_Higuchi2017}. Lithographic deposition of BCPs by a parallel lithographic process would be a powerful approach to pattern and to functionalize surfaces but has not yet been established. Here, we report sacrificial stamping to generate arrays of nanostructured submicron dots consisting of PS-\textit{b}-P2VP. Sacrificial stamping involves the lithographic transfer of the material of which the sacrificial stamp itself consists to a counterpart surface. A sacrificial PS-\textit{b}-P2VP stamp topographically patterned with contact elements is brought close to a counterpart surface (Figure \ref{scheme_stamping}a). The PS-\textit{b}-P2VP stamp is then pressed against the counterpart surface so that tight adhesive contact between contact elements and counterpart surface forms (Figure \ref{scheme_stamping}b). Upon retraction of the sacrificial PS-\textit{b}-P2VP stamp, the contact elements rupture so that the layer of the contact elements in intimate contact with the counterpart surface remains attached to the latter (Figure \ref{scheme_stamping}c).
As a result, parts of the sacrificial PS-\textit{b}-P2VP stamp are lithographically deposited onto the counterpart surface. The deposited submicron PS-\textit{b}-P2VP dots can be further functionalized (Figure \ref{scheme_stamping}d). \begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{figs/stamping.jpg} \caption{Sacrificial stamping of arrays of submicron PS-\textit{b}-P2VP dots. a) A monolithic nanoporous PS-\textit{b}-P2VP stamp topographically patterned with truncated-pyramidal contact elements (blue) is pressed against a counterpart surface (black) located on a balance. b) The contact elements of the stamp form contact with the counterpart surface under a load controlled \textit{via} the mass displayed by the balance. c) Upon retraction of the nanoporous PS-\textit{b}-P2VP stamp, the contact elements rupture in such a way that their tips remain attached to the counterpart surface as submicron PS-\textit{b}-P2VP dots (blue). The PS-\textit{b}-P2VP, which the residual PS-\textit{b}-P2VP stamp consists of, can be recycled. d) The submicron PS-\textit{b}-P2VP dots thus deposited onto the counterpart surface can be further modified so that functionalized submicron PS-\textit{b}-P2VP dots (red) are obtained.} \label{scheme_stamping} \end{figure} \section{Results and discussion} \subsection{Preparation of sacrificial PS-\textit{b}-P2VP stamps} Sacrificial PS-\textit{b}-P2VP stamps were prepared as schematically displayed in Figure \ref{stamp_prep.jpg}. To topographically pattern the surfaces of the sacrificial PS-\textit{b}-P2VP stamps forming contact to the counterpart surfaces to be stamped, we molded molten PS-\textit{b}-P2VP against macroporous Si (mSi) \cite{SI_Lehmann1990,SI_Birner1998}. The mSi contained hexagonal arrays of macropores with a center-to-center distance of 1.5 $\mu$m (Figure \ref{stamp_prep.jpg}a). 
The inverse-pyramidal mouths of the mSi macropores (Figure \ref{stamp_prep.jpg}b) resulted from etch pits formed by wet-chemical pattern transfer following photolithographic prepatterning of silicon wafers. The positions of the etch pits defined the positions of the mSi macropores (pore depth $\sim$1.8 $\mu$m) generated by photoelectrochemical etching. The mSi macropores had a neck with a diameter of $\sim$530 nm directly below the inverse-pyramidal pore mouths. Below the neck, the mSi macropores widened and reached a diameter of $\sim$710 nm (Figure \ref{stamp_prep.jpg}b). The surface of the mSi -- initially consisting of a thin native silica layer -- was coated with 1H,1H,2H,2H-perfluorodecyltrichlorosilane (PFDTS) following procedures reported elsewhere \cite{SI_Fadeev2000}. We melted PS-\textit{b}-P2VP sandwiched between surface-modified mSi and self-ordered anodic aluminum oxide (AAO) \cite{W_Masuda1998} (Figure \ref{stamp_prep.jpg}c). The self-ordered AAO containing arrays of nanopores with a pore diameter of 300 nm, a lattice period of 500 nm and a pore depth of 1.0 $\mu$m reinforced the PS-\textit{b}-P2VP specimens. In this way, bending of the PS-\textit{b}-P2VP specimens and formation of undulations during the preparation of the sacrificial PS-\textit{b}-P2VP stamps were prevented. During the annealing of the PS-\textit{b}-P2VP, the low surface energy of the surface-modified mSi prevented complete infiltration of the mSi macropores; only the inverse-pyramidal pore mouths were filled with PS-\textit{b}-P2VP. Furthermore, the neck of the mSi macropores is an entropic barrier to infiltration; overcoming this barrier would require entropically unfavorable stretching of the PS-\textit{b}-P2VP chains (Figure \ref{stamp_prep.jpg}d). After cooling to room temperature, the surface-modified mSi was non-destructively detached from the vitrified PS-\textit{b}-P2VP specimens.
As a result, PS-\textit{b}-P2VP films with arrays of truncated pyramids at the initial positions of the mSi macropores were obtained, while the mSi could be reused as mold (Figure \ref{stamp_prep.jpg}e). \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{figs/stamp_prep.jpg} \caption{Preparation of PS-\textit{b}-P2VP stamps. a) Top-view SEM image and b) cross-sectional SEM image of mSi in which the cross-section of a contact element of a sacrificial PS-\textit{b}-P2VP stamp is indicated by a red line. c)-g) Schematic diagram displaying the preparation of PS-\textit{b}-P2VP stamps. c) Molten PS-\textit{b}-P2VP (blue) is placed between self-ordered nanoporous AAO (red, on the top) and mSi (grey, at the bottom) modified with a perfluorinated silane. d) The PS-\textit{b}-P2VP melt partially infiltrates the macropores of the surface-modified mSi. e) After vitrification of the PS-\textit{b}-P2VP, the surface-modified mSi is non-destructively detached from the PS-\textit{b}-P2VP that is patterned with arrays of truncated pyramids at the initial positions of the mSi macropores. f) Continuous nanopore systems are generated in the PS-\textit{b}-P2VP by swelling-induced pore generation. The self-ordered AAO reinforces the PS-\textit{b}-P2VP specimens during swelling-induced pore generation and prevents bending and emergence of macroscopic waviness. The truncated pyramids obtained by molding the PS-\textit{b}-P2VP against surface-modified mSi are the contact elements of the obtained sacrificial PS-\textit{b}-P2VP stamps forming contact to the counterpart surfaces during sacrificial stamping. 
g) Optionally, the self-ordered AAO can be selectively etched.} \label{stamp_prep.jpg} \end{figure} In the next step, we formed continuous nanopore systems in the topographically patterned PS-\textit{b}-P2VP specimens still attached to self-ordered AAO by swelling-induced pore generation with hot ethanol, which is a solvent selective to P2VP \cite{IICN_Wang2010,IICN_Wang2011,IICN_Eichler-Volf2016}. We applied a protocol established for the PS-\textit{b}-P2VP used here that results in the formation of continuous nanopore systems (Figure \ref{stamp_prep.jpg}f) characterized by a mean pore diameter of $\sim$40 nm, a specific surface area of 10 m$^2$/g, and a total pore volume of 0.05 cm$^3$/g \cite{IICN_Eichler-Volf2016}. Osmotic pressure drives the ethanol into the P2VP minority domains. The volumes of the P2VP minority domains increase because the P2VP blocks tend to maximize favorable interactions with ethanol molecules by assuming stretched conformations. The glassy PS matrix in turn undergoes structural reconstruction to accommodate the increased volumes of the P2VP minority domains swollen with ethanol. Bending and the development of macroscopic waviness related to volume expansion during swelling-induced pore generation in the PS-\textit{b}-P2VP specimens were prevented by the reinforcement with self-ordered AAO. Removal of the ethanol by evaporation results in entropic relaxation of the expanded P2VP blocks that transform to coils, while the glassy PS matrix fixates the reconstructed morphology. Consequently, nanopores with walls consisting of coiled P2VP blocks form in place of the expanded P2VP domains swollen with ethanol. To ensure that an isotropic pore network inside a sufficiently stable spongy-continuous scaffold forms, we used asymmetric PS-\textit{b}-P2VP containing PS as matrix component. The nanoporous PS-\textit{b}-P2VP specimens were then subjected to oxygen plasma. 
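As a rough plausibility check (our own estimate, not part of the reported protocol), the quoted specific surface area and total pore volume can be combined into a hydraulic pore diameter $d_h = 4V/S$, the value expected for idealized cylindrical pores:

```python
# Consistency estimate (illustrative, not from the paper): hydraulic diameter
# d_h = 4 * V_pore / S implied by the reported specific surface area
# (10 m^2/g) and total pore volume (0.05 cm^3/g), assuming cylindrical pores.
def hydraulic_diameter_nm(surface_area_m2_per_g, pore_volume_cm3_per_g):
    v = pore_volume_cm3_per_g * 1e-6          # cm^3/g -> m^3/g
    return 4 * v / surface_area_m2_per_g * 1e9  # m -> nm

d_h = hydraulic_diameter_nm(10.0, 0.05)
```

This yields 20 nm, the same order of magnitude as the reported mean pore diameter of $\sim$40 nm; a deviation of a factor of about two is unsurprising for an interconnected spongy network rather than straight cylindrical pores.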
The sacrificial PS-\textit{b}-P2VP stamps obtained in this way, which are still connected to self-ordered AAO, may be used for sacrificial stamping and then recovered by again molding them against mSi and repeating steps c)--f) displayed in Figure \ref{stamp_prep.jpg}. Optionally, the self-ordered AAO can be selectively etched (Figure \ref{stamp_prep.jpg}g) to facilitate the loading of the sacrificial stamps with additional components that may further modify and/or functionalize the stamped submicron PS-\textit{b}-P2VP dots. For the sacrificial stamping experiments reported here, we used freestanding and nanoporous sacrificial PS-\textit{b}-P2VP stamps with a thickness of $\sim$240 $\mu$m (Figure \ref{stamp_SEM}a-c). \begin{figure}[htbp] \centering a) \includegraphics[width=0.4\textwidth]{figs/stamp_101_29_qiemian13_Kopie.jpg} b) \includegraphics[width=0.4\textwidth]{figs/101_29_w23_Plasma_0002_Kopie.jpg} c) \includegraphics[width=0.4\textwidth]{figs/melt_ps101_p2vp29_plasma_2h79_Kopie.jpg} d) \includegraphics[width=0.4\textwidth]{figs/used_stamp_08.jpg} \caption{SEM images of PS-\textit{b}-P2VP stamps. a) Cross-section and b) single contact element prior to oxygen plasma treatment. c) Single contact element after oxygen plasma treatment (4 minutes at 100 W). d) Contact elements of a PS-\textit{b}-P2VP stamp after sacrificial stamping.} \label{stamp_SEM} \end{figure} \subsection{Sacrificial stamping} We carried out sacrificial stamping by gluing sacrificial PS-\textit{b}-P2VP stamps onto steel cylinders that were pressed against counterpart surfaces located on a balance. The load was controlled \textit{via} the displayed mass.
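As an order-of-magnitude illustration, the displayed mass can be converted into a nominal pressure on the flat tops of the contact elements. The lattice period (1.5 $\mu$m) and the top-face edge length ($\sim$550 nm) are taken from the text; the stamp footprint and the displayed mass in the sketch below are assumed values, not reported ones:

```python
import math

# Illustrative load-to-pressure conversion (assumed footprint and mass, not
# reported values). Lattice period and top-face edge length are from the text.
g = 9.81                 # m/s^2
lattice = 1.5e-6         # m, period of the hexagonal array of contact elements
edge = 550e-9            # m, edge length of the square top face of an element
footprint = (5e-3) ** 2  # m^2, assumed 5 mm x 5 mm stamp footprint
mass = 0.100             # kg, assumed 100 g displayed by the balance

density = 2 / (math.sqrt(3) * lattice ** 2)     # elements per m^2 (hex lattice)
n_elements = density * footprint                # ~1.3e7 contact elements
pressure = mass * g / (n_elements * edge ** 2)  # nominal Pa on the top faces
```

For these assumed numbers the nominal pressure is a few tenths of a MPa; the actual loads applied in the experiments are not implied here.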
Using sacrificial PS-\textit{b}-P2VP stamps, we stamped hexagonal arrays of submicron PS-\textit{b}-P2VP dots with a lattice constant of 1.5 $\mu$m (Figure \ref{BCP_dots}; Supporting Figures S1 and S2) corresponding to the lattice constant of the mSi template initially used to topographically pattern the sacrificial PS-\textit{b}-P2VP stamps. Sacrificial stamping requires at first the formation of adhesive contact between the contact elements of the sacrificial PS-\textit{b}-P2VP stamps and the counterpart surfaces (Figure \ref{scheme_stamping}b). We enforced the formation of adhesive contact by controlled application of load. Moreover, formation of adhesive contact was promoted as follows. i) The outer surfaces of the sacrificial PS-\textit{b}-P2VP stamps consisted of P2VP \cite{BCP-NP_Wang2008,BCP-NP_Wang2011}. The counterpart surfaces used here, silicon wafers covered by a native silica layer and glass slides, had hydroxyl-terminated surfaces. It was previously reported that strong attractive interactions between the pyridyl groups of P2VP and hydroxyl groups on the surface of silica substrates exist \cite{W_Roth2007}. Hence, specific chemical interactions between the P2VP surfaces of the contact elements of the sacrificial PS-\textit{b}-P2VP stamps and the hydroxyl-terminated counterpart surfaces enhance adhesion. ii) The sacrificial PS-\textit{b}-P2VP stamps were topographically patterned with hexagonal arrays of contact elements having the shape of truncated pyramids (cf. Figure \ref{stamp_SEM}a-c). The truncated pyramids with a height of $\sim$670 nm had flat, square-shaped upper surfaces with edge lengths of $\sim$550 nm (Figure \ref{stamp_SEM}b and c) that were congruent to the cross-sectional areas of the macropore necks of the mSi (Figure \ref{stamp_prep.jpg}a). These flat contact surfaces of the contact elements facilitated formation of tight adhesive contact to counterpart surfaces as compared to, for example, contact elements with hemispherical tips.
Thus, the quadratic shape of the contact surfaces of the contact elements was reproduced by the stamped PS-\textit{b}-P2VP dots. iii) While the counterpart surfaces can be considered as rigid and non-deformable, the spongy sacrificial PS-\textit{b}-P2VP stamps are deformable. As discussed above, the contour of the submicron PS-\textit{b}-P2VP dots was roughly rectangular, like the contact surfaces of the contact elements of the sacrificial PS-\textit{b}-P2VP stamps. However, the edge lengths of the submicron PS-\textit{b}-P2VP dots amounted to $\sim$900 nm and exceeded, therefore, the edge lengths of the contact surfaces of the contact elements, as apparent from Figure \ref{stamp_SEM}b and c, by $\sim$65\%. Hence, in the course of sacrificial stamping the contact elements were compressed. Sacrificial stamping converted the open spongy morphology of the sacrificial PS-\textit{b}-P2VP stamps into a more densified morphology (Figure \ref{stamp_SEM}d). The areas of the densified contact elements matched those of the stamped submicron PS-\textit{b}-P2VP dots. The deformability of the contact elements increases the actual contact area between a contact element and the counterpart surface, which in turn results in enhanced adhesion per contact element. \begin{figure}[htbp] \centering a) \includegraphics[width=0.6\textwidth]{figs/particle_transfer_014_Kopie.jpg} b) \includegraphics[width=0.6\textwidth]{figs/particle_transfer_001_Kopie.jpg} c) \includegraphics[width=0.6\textwidth]{figs/5_particle_transfer_5um-5um_7_Kopie.jpg} \caption{Arrays of nanostructured submicron PS-\textit{b}-P2VP dots deposited on silicon wafers by sacrificial stamping. a), b) Scanning electron microscopy images; c) AFM topography image.} \label{BCP_dots} \end{figure} Secondly, the contact elements must rupture upon retraction of the sacrificial PS-\textit{b}-P2VP stamps.
As a result, the parts of the contact elements in adhesive contact with the counterpart surfaces remain attached to the latter after removal of the sacrificial PS-\textit{b}-P2VP stamps (Figure \ref{scheme_stamping}c). The height of the stamped submicron PS-\textit{b}-P2VP dots ranged from $\sim$100 nm to $\sim$200 nm (Figure \ref{BCP_dots}c and Supporting Figure S3). The roughness of the surface topography of the submicron PS-\textit{b}-P2VP dots reflects the nanoporous-spongy nature of the sacrificial PS-\textit{b}-P2VP stamps. The crazing behavior of thin PS-\textit{b}-P2VP films has been reported to be complex \cite{BCP-NP_Lee2005,BCP-NP_Gurmessa2015}. Nevertheless, we assume that the following aspects promote the rupture of the contact elements. i) The spongy-nanoporous morphology of the sacrificial PS-\textit{b}-P2VP stamps reduces their tensile strength owing to a confinement-induced reduction in the volume density of intermolecular entanglements. Both the viscosity of polymers and their behavior when subjected to stress are crucially related to the presence of intermolecular entanglements. The molecular weight of the PS blocks forming the majority component of the PS-\textit{b}-P2VP amounts to 101000 g/mol and therefore lies well above the threshold value $M_c$ of PS for the occurrence of intermolecular entanglements ($M_c$ $\approx$ 31200 g/mol at 490 K) \cite{W_Fetters1999}. However, PS chains in thin PS homopolymer films have been reported to be less entangled than in the bulk, corresponding to a confinement-induced increase in $M_c$ \cite{W_Brown1996,W_Si2005}. The nanopore walls in the sacrificial PS-\textit{b}-P2VP stamps have thicknesses in the 100 nm range and below, so that geometric restrictions similar to those in thin PS films apply. Moreover, the high incompatibility of the PS and P2VP blocks imposes additional constraints on the formation of intermolecular entanglements.
Hence, the volume density of intermolecular entanglements within the PS domains in the PS-\textit{b}-P2VP stamps should be lower than in bulk PS-\textit{b}-P2VP. ii) Prior to sacrificial stamping, the sacrificial PS-\textit{b}-P2VP stamps were subjected to oxygen plasma treatment. The oxygen plasma treatment increased the areas of the nanopore openings at the surface of the sacrificial PS-\textit{b}-P2VP stamps, as obvious from a comparison of Figure \ref{stamp_SEM}b and Figure \ref{stamp_SEM}c. The oxygen plasma also cleaves some PS-\textit{b}-P2VP chains close to the surface of the sacrificial PS-\textit{b}-P2VP stamps into shorter segments. It is a straightforward assumption that cleavage of PS-\textit{b}-P2VP chains close to the surface of the contact elements and, therefore, close to the contact interface between the contact elements and the counterpart surface facilitates the rupture of the contact elements. \subsection{Modification of PS-\textit{b}-P2VP dots} Since the outer surfaces of the submicron PS-\textit{b}-P2VP dots consist of P2VP blocks, the pyridyl groups of the latter are exposed to the environment and can be used for further functionalization. As an example, we attached citrate-stabilized gold nanoparticles (AuNPs) with a diameter of 35 nm to the submicron PS-\textit{b}-P2VP dots by immersing arrays of submicron PS-\textit{b}-P2VP dots on Si wafers into AuNP suspensions with a pH value of 3.5. The negatively charged AuNPs adhered to the partially protonated pyridyl moieties by van der Waals interactions and ionic interactions (P2VP is protonated at pH $\leq$ 4.1 \cite{IICN_Wang2007}). Evaluation of 10 submicron PS-\textit{b}-P2VP dots revealed that on average 69 $\pm$ 13 AuNPs were bound to a submicron PS-\textit{b}-P2VP dot (Figure \ref{modifications}a).
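As a rough, illustrative consistency estimate (not part of the reported analysis), these counting results can be translated into an areal density, assuming square dots with the $\sim$900 nm edge length quoted above:

```python
import math

# Illustrative estimate; the square-dot geometry and the values below
# are taken from the text, everything else is an assumption.
n_aunp_mean, n_aunp_std = 69, 13     # AuNPs per dot (mean +/- std)
dot_edge_um = 0.9                    # dot edge length in micrometers
dot_area_um2 = dot_edge_um ** 2      # ~0.81 square micrometers

areal_density = n_aunp_mean / dot_area_um2   # AuNPs per square micrometer

# Compare the observed spread with pure Poisson counting statistics.
rel_std_observed = n_aunp_std / n_aunp_mean             # ~0.19
rel_std_poisson = math.sqrt(n_aunp_mean) / n_aunp_mean  # ~0.12

print(f"{areal_density:.0f} AuNPs per um^2")
print(f"observed rel. spread {rel_std_observed:.2f} vs Poisson {rel_std_poisson:.2f}")
```

The observed relative spread ($\sim$19\%) somewhat exceeds pure Poisson counting statistics ($\sim$12\%), hinting at additional dot-to-dot variability, for example in the number of accessible pyridyl sites.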
It is also possible to incubate the sacrificial PS-\textit{b}-P2VP stamp with a functional material prior to sacrificial stamping and to transfer the latter along with the submicron PS-\textit{b}-P2VP dots to a counterpart surface. As an example, we filled a sacrificial PS-\textit{b}-P2VP stamp with a rhodamine B solution and let the solvent evaporate. Then, arrays of submicron PS-\textit{b}-P2VP dots containing rhodamine B were deposited on a glass slide by sacrificial stamping. The presence of rhodamine B was evidenced by total internal reflection fluorescence microscopy (TIRFM), which imaged the fluorescence emission of rhodamine B (Figure \ref{modifications}b). \begin{figure}[htbp] \centering a) \includegraphics[width=0.6\textwidth]{figs/particle_transfer_Au18_Kopie.jpg} b) \includegraphics[width=0.6\textwidth]{figs/Image_21_Kopie.jpg} \caption{Functionalized submicron PS-\textit{b}-P2VP dots obtained by sacrificial stamping. a) Submicron PS-\textit{b}-P2VP dot on a Si wafer functionalized with gold nanoparticles. b) TIRFM image (edge length 54.5 $\mu$m) of the fluorescence of submicron PS-\textit{b}-P2VP dots containing rhodamine B stamped on a glass slide.} \label{modifications} \end{figure} \section{CONCLUSIONS} In conclusion, we reported the ink-free lithographic transfer of parts of a sacrificial stamp to a counterpart surface. Using sacrificial stamps consisting of the block copolymer PS-\textit{b}-P2VP penetrated by spongy-continuous nanopore systems, with truncated pyramids as contact elements, we stamped arrays of spongy submicron PS-\textit{b}-P2VP dots on silicon wafers and glass slides. Sacrificial stamping requires that the adhesion of the stamp's contact elements to the counterpart surface is stronger than the cohesion within the stamp. The enhanced deformability of the sacrificial PS-\textit{b}-P2VP stamps originating from their spongy-nanoporous morphology facilitated the formation of adhesive contact between contact elements and counterpart surfaces.
Suppression of intermolecular entanglements induced by the confinement of the thin nanopore walls inside the sacrificial PS-\textit{b}-P2VP stamps, as well as chain cleavage by oxygen plasma treatment, supported the rupture of the contact elements upon stamp retraction. Taking into account that the underlying polymer physics is generic, we assume that the methodology reported here is applicable to any polymeric stamp with spongy morphology. The PS-\textit{b}-P2VP of which the sacrificial stamps consist is recyclable. The pyridyl groups of the P2VP blocks forming the outer surface of the submicron PS-\textit{b}-P2VP dots facilitated further functionalization of the latter, for example, with gold nanoparticles or with dyes. Arrays of submicron PS-\textit{b}-P2VP dots stamped onto glass slides can be used as substrates for advanced optical microscopy, such as TIRF microscopy, as well as for Raman microscopy. Potential applications of block copolymer dot arrays generated by sacrificial stamping may include locally resolved sensing of analytes preconcentrated at the submicron block copolymer dots and locally resolved monitoring of cellular interactions. Moreover, sacrificial stamping may yield lab-on-chip configurations consisting of arrays of microreactors for locally confined chemical reactions. \section{MATERIALS AND METHODS} \textit{Materials.} Macroporous silicon (product number 620514-W23) was provided by SmartMembranes GmbH (Halle (Saale), Germany). Self-ordered AAO with a pore diameter of 300 nm, a lattice period of 500 nm and a pore depth of 1.0 $\mu$m was prepared by anodizing aluminum chips with a diameter of 4 cm (Goodfellow, purity >99.99 \%) following procedures reported elsewhere \cite{W_Masuda1998}. The self-ordered AAO layer was connected to a $\sim$1000 $\mu$m thick Al substrate.
Asymmetric PS-\textit{b}-P2VP (M$_n$(PS) = 101000 g/mol; M$_n$(P2VP) = 29000 g/mol; M$_w$/M$_n$(PS-\textit{b}-P2VP) = 1.60; volume fraction of P2VP 21\%; bulk period $\sim$51 nm) was obtained from Polymer Source Inc., Canada. Tetrachloroauric(III) acid (HAuCl$_4$), trisodium citrate and rhodamine B were purchased from Sigma-Aldrich. 1H,1H,2H,2H-perfluorodecyltrichlorosilane (PFDTS, 97 \%, stabilized with copper) was supplied by ABCR GmbH, Germany. The AuNPs were synthesized following procedures reported elsewhere \cite{NP_arrays_Xie2009,IICN_Xue2017}. 1 mL of an aqueous 3.88 $\cdot$ 10$^{-2}$ M citrate solution was added to 50 mL of a boiling solution of 0.01 wt\% HAuCl$_4$ in water. The aqueous mixture with a pH value of 3.5 was boiled for 20 min under vigorous stirring and then cooled to room temperature. A silicon wafer stamped with submicron PS-\textit{b}-P2VP dots was dipped into the thus-obtained aqueous AuNP suspension for 1 h, followed by three washing steps with deionized water. \textit{Preparation of sacrificial PS-\textit{b}-P2VP stamps.} To modify the surface of the mSi with PFDTS, the mSi was first treated with a boiling mixture containing 98\% H$_2$SO$_4$ and 30\% H$_2$O$_2$ at a volume ratio of 7:3 for 30 minutes, followed by rinsing with deionized water and drying in an argon flow. Then, the mSi was coated with PFDTS by vapor deposition for 2 h at 85$^\circ$C and for 3 h at 130$^\circ$C following procedures reported elsewhere \cite{SI_Fadeev2000}. About 240 $\mu$m thick PS-\textit{b}-P2VP films were prepared by dropping a solution of 100 mg PS-\textit{b}-P2VP per mL tetrahydrofuran (THF) onto a silicon wafer.
After the complete evaporation of THF, the PS-\textit{b}-P2VP films were detached by exposure to ethanol for 24 h at room temperature and sandwiched between self-ordered AAO and surface-modified mSi in such a way that the macropore openings of the self-ordered AAO and the surface-modified mSi were in contact with the PS-\textit{b}-P2VP (Figure \ref{stamp_prep.jpg}c). The PS-\textit{b}-P2VP was infiltrated at 220$^\circ$C for 4 h under vacuum while a pressure of $\sim$160 mbar was applied. The PS-\textit{b}-P2VP was cooled to room temperature at a rate of 1 K/min and immersed into ethanol for $\sim$30 min. Swelling-induced pore generation was carried out in ethanol at 60$^\circ$C for 4 h. The Al substrate connected to the self-ordered AAO layer was selectively etched with a solution of 100 mL 37\% HCl and 3.4 g CuCl$_2\cdot$2H$_2$O in 100 mL deionized water at 0$^\circ$C. Finally, the self-ordered AAO was removed by etching with 3 M NaOH$_{(aq)}$ followed by washing with deionized water. The sacrificial PS-\textit{b}-P2VP stamps were finally subjected to oxygen plasma at 100 W for 4 minutes using a plasma cleaner Femto (Diener electronic, Ebhausen, Germany). \textit{Sacrificial stamping.} Silicon wafers were cut into small pieces with areas of 1 cm $\times$ 1 cm. The thus-obtained Si wafer pieces and the glass slides used as substrates for sacrificial stamping were washed with acetone, dried in an argon flow and treated with oxygen plasma (100 W; 10 min) using a plasma cleaner Femto (Diener electronic, Ebhausen, Germany). The sacrificial PS-\textit{b}-P2VP stamp used to produce the sample shown in Figure \ref{modifications}b was immersed into a solution of 50 mmol/L rhodamine B in ethanol for 1 h followed by three washing steps with deionized water and drying at 35$^\circ$C for 12 h. Prior to sacrificial stamping, the PS-\textit{b}-P2VP stamps were attached to a home-made stamp holder made of stainless steel (length: 55 mm, diameter: 12 mm, mass: 40 g) using double-sided adhesive tape.
Sacrificial stamping was carried out manually while applying a pressure of $\sim$50 bar. The pressure was adjusted by carrying out sacrificial stamping on a balance and calculated from the displayed mass. The contact time amounted to $\sim$1 s. \textit{Characterization.} SEM investigations were carried out on a Zeiss Auriga microscope operated at an accelerating voltage of 3 kV. For SEM, the samples were sputter-coated with a 5 nm thick iridium layer. AFM measurements were conducted in semicontact mode using a NT-MDT NTEGRA device. The cantilevers had a nominal length of 95 $\mu$m, force constants of 3.1--37.6 N m$^{-1}$, and a resonance frequency of 256 kHz (within the range of 140--390 kHz). The tip radius was 10 nm. The AFM images were processed using the software Nova Px. TIRFM was performed using an inverted microscope (IX71, Olympus) equipped with a motorized 4-line TIRF condenser (cellTIRF 4-Line system, Olympus), a 150-fold oil immersion TIRF objective (UAPON 150xTIRF, NA 1.45, Olympus) and a 561 nm diode-pumped solid state laser (max. power 200 mW; Cobolt Jive 561, Cobolt, Sweden). Images were acquired by an electron multiplying back-illuminated frame transfer CCD camera (iXon Ultra 897, Andor). A fluorescence filter cube containing a polychroic beamsplitter (R405/488/561/647, Semrock) and a quad-band emission/blocking filter (FF01 446/523/600/677, Semrock) was used. For each sample, 500 frames were recorded with an exposure time of 31 ms, a cycle time of 67 ms and a laser power of 5 mW (power density approx. 150 W/cm$^2$). \section{ASSOCIATED CONTENT} \begin{suppinfo} Figure S1: Large-area SEM image of an array of submicron PS-\textit{b}-P2VP dots on a Si wafer generated by sacrificial stamping. Figure S2: SEM image at intermediate magnification of an array of submicron PS-\textit{b}-P2VP dots on a Si wafer generated by sacrificial stamping.
Figure S3: AFM topography line profile of submicron PS-\textit{b}-P2VP dots deposited on a Si wafer by sacrificial stamping. \end{suppinfo} \section{AUTHOR INFORMATION} \subsection{Corresponding Author} *E-mail: martin.steinhart@uos.de (M. S.). \subsection{ORCID} Martin Steinhart: 0000-0002-5241-8498 \subsection{Notes} The authors declare no competing financial interest. \begin{acknowledgement} The authors thank the European Research Council (ERC-CoG-2014, project 646742 INCANA) for funding. The preparation of self-ordered AAO by C. Hess, I. Hanemann and C. Schulz-K\"olbel is gratefully acknowledged. \end{acknowledgement}
\section{Introduction} All strong and electromagnetic decays of the $\eta$ meson are either suppressed or forbidden to first order. The $\eta$ meson is, in addition, a $C$ and $P$ eigenstate of the strong and electromagnetic interactions. This makes it well suited for the study of rare processes and the search for forbidden ones. The subject of this letter is the process $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ via the single-photon intermediate state $\eta\rightarrow\pi^0+\gamma^*$, which would violate $C$ parity conservation. The background for this process would be a two-photon process with an expected branching ratio not larger than $10^{-8}$ according to theoretical calculations \cite{Cheng1967,Smith1968,Ng1993}. The present experimental upper limit for the branching ratio of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ dates back to the 1970s and amounts to $4.5\times10^{-5}$ (CL = $90\,\%$) \cite{Jane1975}. A more stringent upper limit for the decay channel $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ has been determined in the analysis presented in this paper. The data have been collected using the WASA-at-COSY facility and also constituted the basis for studies of other $\eta$ meson decay channels already published in Ref.~\cite{Adlarson2016}. \section{Experiment} The WASA-at-COSY experiment was an internal experiment operated at the COSY accelerator of the For\-schungs\-zentrum J\"ulich, Germany from 2006 to 2014 \cite{Hoistad2004}. For the measurements discussed here, a proton beam was accelerated to a kinetic beam energy of $T_\text{p} = 1\,\mathrm{GeV}$ and collided with deuterium pellets provided by the internal pellet target. The $\eta$ mesons were produced in the reaction $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$.
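As a cross-check of the production kinematics (an illustration added here, not taken from the paper), the collision energy available at $T_\text{p} = 1\,\mathrm{GeV}$ can be computed from two-body kinematics with a fixed deuteron target; the masses below are standard PDG values:

```python
import math

# Masses in GeV/c^2 (standard PDG values; illustrative inputs)
m_p, m_d, m_he3, m_eta = 0.938272, 1.875613, 2.808391, 0.547862

T_p = 1.0                          # proton kinetic energy in GeV
E_p = T_p + m_p                    # total beam energy
p_p = math.sqrt(E_p**2 - m_p**2)   # beam momentum

# Invariant mass of the p + d system (deuteron target at rest)
s = m_p**2 + m_d**2 + 2.0 * m_d * E_p
sqrt_s = math.sqrt(s)

# Excess energy above the 3He + eta production threshold
Q = sqrt_s - (m_he3 + m_eta)
print(f"sqrt(s) = {sqrt_s:.3f} GeV, Q = {Q*1e3:.0f} MeV")
```

This yields an excess energy of order 60 MeV above the $^3$He$\,\eta$ threshold, i.e., the beam energy sits comfortably but not far above threshold, so the $^3$He nuclei are emitted into a narrow forward cone covered by the forward detector.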
\par The WASA detector setup consists of two main parts: the central detector, which was used for the reconstruction of the produced mesons and their decay particles, and the forward detector, which was used for the measurement of the four-momenta of the forward scattered $^3\text{He}$ nuclei. A more detailed description of the WASA-at-COSY experimental setup can be found in Refs.~\cite{Adlarson2016,Hoistad2004,Bargholtz2008}. \par The data for the studies presented here were obtained in two measurement periods, one of four weeks in 2008 and one of eight weeks in 2009. For data acquisition, the trigger required a large energy loss in subsequent scintillator elements of the forward detector. Since the $^3\text{He}$ nucleus stemming from the reaction $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$ is stopped in the first layer of the WASA forward range hodoscope, a veto on the signals from the second layer was used in addition. Since the trigger relied on information from the forward detector only, it was unbiased with respect to the decay mode of the $\eta$ meson. In total, about $3\times10^7$ events containing an $\eta$ meson were recorded, with $1\times10^7$ events originating from the 2008 period and $2\times10^7$ events from the 2009 period \cite{Adlarson2016}. \section{Data analysis} The analysis of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ was based on a common analysis chain for $\eta$ decay studies described in Ref.~\cite{Adlarson2016}. Since only very few events were expected to remain in the analysis after the event selection, an optimal choice of the selection conditions is important for the best possible result. These conditions were determined with the aid of an optimization algorithm based on Monte Carlo simulations.
\paragraph{Preselection} Before the selection conditions for the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ were determined, the data collected in 2008 and 2009 were preselected with conditions common to all recorded reactions. For instance, conditions on time correlations of the measured particles were used, as presented in Ref.~\cite{Adlarson2016}. Furthermore, to reject hits from particles that were wrongly identified as secondary particles (so-called split-offs) and electron-positron pairs from conversion of photons at the COSY beam pipe, two-dimensional cuts were utilized. More details of these conditions were published in Ref.~\cite{Adlarson2016}. \par Besides these general preselection conditions, a cut on the signature of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ was applied, requiring at least one positively and one negatively charged particle detected in the central detector, as well as at least two neutral particles originating from the $\pi^0$ meson decay $\pi^0\rightarrow\gamma+\gamma$. The last condition applied for data preselection required the momenta of the charged decay particles to be below $p = 250\,\mathrm{MeV}/c$, since the momenta of the leptons of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ are expected to be below this value. \paragraph{Monte Carlo simulations} In order to determine optimal selection conditions for the search for the decay channel $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$, $1.8\times10^8$ Monte Carlo events of all non-signal $\eta$ decays observed so far were generated according to their relative branching ratios \cite{Patrignani2016}, as well as two million events for the signal decay. These simulations were generated with the \textsc{pluto++} software package \cite{Frohlich2007}, considering the angular distribution of $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$ at $T_\text{p} = 1\,\mathrm{GeV}$ according to Ref.~\cite{Adlarson2014}.
For the various $\eta$ decay channels, the physics models included in \textsc{pluto++} were used. The reader is referred to Ref.~\cite{Adlarson2016} for further details. \par In addition to the simulations of $\eta$ decays, about $4.3\times10^9$ events for the direct pion production were created, with most events for the production reactions $\text{p}+\text{d}\rightarrow{^3\text{He}}+\pi^0+\pi^0$ and $\text{p}+\text{d}\rightarrow{^3\text{He}}+\pi^++\pi^-$, as these contribute most to the non-$\eta$ background at the given kinetic beam energy. For these two-pion productions the ABC effect was incorporated into the simulations according to the model discussed in Ref.~\cite{Adlarson2015}. \par The simulations for the signal decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ were generated with two different model assumptions. The first one is a decay according to pure three-particle phase space. The second is based on the vector meson dominance (VMD) model for the intermediate virtual photon. The direct decay $\eta\rightarrow\pi^0+\gamma$ violates $C$ parity as well as angular momentum conservation and global gauge invariance. Thus, there is no $\eta\rightarrow\pi^0+\gamma$ on-shell contribution to the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$, and the transition form factor for the off-shell contribution vanishes at zero virtuality, such that the single-photon pole is completely removed \cite{Bernstein1965,Barrett1966,Bazin1968}. In Fig.~\ref{fig:etapi0eeIMee} the invariant mass of the $\text{e}^+\text{e}^-$ pair produced in the decay is plotted according to three-particle phase space (shaded in orange) and according to the decay via $\eta\rightarrow\pi^0+\gamma^*$ within the discussed model. A more detailed calculation of the model can be found in Ref.~\cite{Bergmann2017}.
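Schematically, the pole removal described above can be summarized as follows; the monopole shape chosen here for the transition form factor is an illustrative assumption rather than the exact parametrization of Ref.~\cite{Bergmann2017}:
\begin{equation*}
\mathcal{M}\left(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\right)\propto\frac{F_{\eta\pi^0}(q^2)}{q^2}\,,\qquad F_{\eta\pi^0}(q^2)\sim\frac{q^2}{m_V^2-q^2}\,,
\end{equation*}
where $q^2 = m_{\text{ee}}^2$ is the virtuality of the intermediate photon and $m_V$ denotes a vector meson mass. Since $F_{\eta\pi^0}(0)=0$, the $1/q^2$ photon pole is canceled, which suppresses small $\text{e}^+\text{e}^-$ invariant masses relative to pure three-particle phase space.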
\par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{etapi0eeIMee_model.pdf} } \caption{Invariant mass of $\text{e}^+\text{e}^-$ pairs for the simulated decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$. Black line: decay via $\eta\rightarrow\pi^0+\gamma^*$ considering VMD. Shaded in orange: decay according to three-particle phase space.} \label{fig:etapi0eeIMee} \end{figure} \par To simulate the WASA detector responses, the WASA Monte Carlo package \textsc{wmc} was used, which is based on \textsc{geant3} \cite{Geant1994}. The spatial, timing and energy resolutions in \textsc{wmc} were set to agree with the resolutions observed in data. \par Due to the high luminosities of the WASA-at-COSY experiment, detector responses from one event can overlap with those from another event. This effect was considered in the simulations, and the amount of event overlap was left as a free parameter for the fit of the simulations to data (see next paragraph). \par All Monte Carlo simulations were preselected with conditions identical to those for data preselection. \paragraph{Data description} The choice of the selection conditions with regard to the decay channel $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ is based on Monte Carlo simulations. For an optimal choice, it is necessary to know the contributions of the various reactions to the collected data. Therefore, the 2008 and 2009 data sets were fitted separately in distributions of selected quantities by template distributions of the aforementioned Monte Carlo simulations to determine the contributions of the individual reactions to the data.
In detail, these distributions are: \begin{itemize} \item the missing mass $m_\text{X}$, which corresponds to the invariant mass of the proton beam and the deuteron target remaining after the $^3\text{He}$ four-momentum has been subtracted and which peaks at the $\eta$ mass for the reaction $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$, \item the invariant mass $m_{\text{e}\text{e}\gamma\gamma}$ of an electron-positron pair candidate and two photons, which peaks at the $\eta$ mass for the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ with $\pi^0\rightarrow\gamma+\gamma$, \item the invariant mass $m_{\gamma\gamma}$ of two photons, which peaks at the $\pi^0$ mass for reactions with $\pi^0$ mesons produced, \item the invariant mass $m_{\text{e}\text{e}}$ of an electron-positron pair candidate, \item the smallest invariant mass $m_{\text{e}\gamma}$ of all four possible combinations of an electron or positron candidate and a photon and \item the missing mass squared $m_\text{Xee}^2$, which is the invariant mass squared of the proton beam and the deuteron target remaining after the $^3\text{He}$ four-momentum and the electron-positron pair candidate momentum have been subtracted and which peaks at the $\pi^0$ mass squared for the reaction of interest. \end{itemize} \par Under the assumption of a branching ratio of the decay below the current upper limit of $4.5\times10^{-5}$ (CL = $90\,\%$) \cite{Jane1975}, fewer than 150 events are expected from the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ in the combined data sets after preselection, considering the preselection efficiency for the signal decay. A fit by Monte Carlo simulations including the simulated decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ is consistent with zero events from this signal decay channel. Therefore, the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ was excluded from the fit.
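The missing-mass quantity in the first item can be sketched with simple four-vector arithmetic. In the toy example below (PDG masses, collinear two-body kinematics; an illustration added here, not the WASA reconstruction code), the $^3$He four-momentum is constructed exactly, so the missing mass reproduces the $\eta$ mass by construction:

```python
import math

# Masses in GeV/c^2 (standard PDG values; illustrative inputs)
m_p, m_d, m_he3, m_eta = 0.938272, 1.875613, 2.808391, 0.547862

E_p = 1.0 + m_p                      # beam energy for T_p = 1 GeV
p_p = math.sqrt(E_p**2 - m_p**2)     # beam momentum along z

# Total four-momentum of the p + d initial state (target at rest)
E_tot, pz_tot = E_p + m_d, p_p
sqrt_s = math.sqrt(E_tot**2 - pz_tot**2)

# Two-body CM kinematics for p + d -> 3He + eta
s = sqrt_s**2
p_star = math.sqrt((s - (m_he3 + m_eta)**2) * (s - (m_he3 - m_eta)**2)) / (2.0 * sqrt_s)
E_he_cm = math.sqrt(m_he3**2 + p_star**2)

# Boost the (forward-going) 3He into the lab frame
beta, gamma = pz_tot / E_tot, E_tot / sqrt_s
E_he = gamma * (E_he_cm + beta * p_star)
pz_he = gamma * (p_star + beta * E_he_cm)

# Missing mass m_X = |P_p + P_d - P_3He|
m_x = math.sqrt((E_tot - E_he)**2 - (pz_tot - pz_he)**2)
print(f"m_X = {m_x:.4f} GeV/c^2")
```

In data, the measured $^3$He four-momentum carries detector resolution, so $m_\text{X}$ forms a peak at the $\eta$ mass on top of the multi-pion continuum rather than a sharp value.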
While the differential distribution for the reaction $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$ is well known \cite{Adlarson2014}, the differential distributions for direct multi-pion production are known only with large uncertainties or not at all. Hence, the data were divided into ten bins in angular ranges of $\cos{\vartheta_{^3\text{He}}^\text{cms}}$\footnote{ $\vartheta_{^3\text{He}}^\text{cms}$ is the polar scattering angle of the $^3\text{He}$ nucleus relative to the beam axis in the center of mass system.}. Monte Carlo simulations were fitted to data in the eight angular bins ranging from $-1$ to $0.6$. The angular range $0.6 < \cos{\vartheta_{^3\text{He}}^\text{cms}} \leq 1$ was excluded because of the lower energy resolution of the forward detector for these forward scattered $^3\text{He}$ nuclei. Moreover, the relative amount of background from the direct pion production is larger in this angular range, whereas less than $3\,\%$ of all $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$ events have a $\cos{\vartheta_{^3\text{He}}^\text{cms}} > 0.6$. \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{DataFit_mX_Bin3.pdf} } \caption{Missing mass $m_\text{X} = \left|\mathbb{P}_\text{p}+\mathbb{P}_\text{d}-\mathbb{P}_{^3\text{He}}\right|$ after preselection for a data sample of the 2008 period fitted by Monte Carlo simulations. Only the most common contributions of the various reactions to the fit are plotted separately.} \label{fig:DataFit_mX} \end{figure} \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{DataFit_meegg_Bin3.pdf} } \caption{Invariant mass of $\text{e}^+\text{e}^-\gamma\gamma$ after preselection for a data sample of the 2008 period fitted by Monte Carlo simulations.
For the legend see Fig.~\ref{fig:DataFit_mX}.} \label{fig:DataFit_meegg} \end{figure} \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{DataFit_mgg_Bin3.pdf} } \caption{Invariant mass of $\gamma\gamma$ after preselection for a data sample of the 2008 period fitted by Monte Carlo simulations. For the legend see Fig.~\ref{fig:DataFit_mX}.} \label{fig:DataFit_mgg} \end{figure} \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{DataFit_mee_Bin3.pdf} } \caption{Invariant mass of $\text{e}^+\text{e}^-$ after preselection for a data sample of the 2008 period fitted by Monte Carlo simulations. For the legend see Fig.~\ref{fig:DataFit_mX}.} \label{fig:DataFit_mee} \end{figure} \par The fit of the Monte Carlo simulations to the data was performed simultaneously for all angular ranges and distributions, with identical scaling parameters for the simulations for all distributions within one angular range. Furthermore, the ratios for the various $\eta$ decays were constrained to the branching ratios according to Ref.~\cite{Patrignani2016} within the given uncertainties; these ratios were set to be identical for all angular ranges. Similarly, the amount of event overlap was included as one global fit parameter. In Figs.~\ref{fig:DataFit_mX}, \ref{fig:DataFit_meegg}, \ref{fig:DataFit_mgg} and \ref{fig:DataFit_mee}, the resulting Monte Carlo fits to the 2008 data are plotted for $m_\text{X}$, $m_{\text{e}\text{e}\gamma\gamma}$, $m_{\gamma\gamma}$ and $m_{\text{e}\text{e}}$ for the angular range $0.2 < \cos{\vartheta_{^3\text{He}}^\text{cms}} \leq 0.4$. According to this fit, most events remaining after preselection originate from the $\eta$ decay $\eta\rightarrow\pi^++\pi^-+\pi^0$, the direct $\text{p}+\text{d}\rightarrow{^3\text{He}}+\pi^++\pi^-+\pi^0$ production and the direct two-pion production reactions. A collection of all fits is available in Ref.~\cite{Bergmann2017}.
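The template-fit idea described above can be illustrated in miniature: the data histogram is modeled as a weighted sum of Monte Carlo template histograms, and the weights are fitted. The least-squares toy version below uses two invented templates and is only a sketch of the principle; the actual WASA fit additionally uses Poisson statistics, many templates, and parameters shared across angular bins:

```python
# Toy "Monte Carlo templates": binned shapes of two background reactions
template_a = [10.0, 40.0, 80.0, 40.0, 10.0]   # peaking component
template_b = [30.0, 30.0, 30.0, 30.0, 30.0]   # flat component

# Toy "data": a known mixture of the two templates
true_a, true_b = 2.0, 1.5
data = [true_a * a + true_b * b for a, b in zip(template_a, template_b)]

# Least-squares fit of the scale factors via the 2x2 normal equations
saa = sum(a * a for a in template_a)
sbb = sum(b * b for b in template_b)
sab = sum(a * b for a, b in zip(template_a, template_b))
sad = sum(a * d for a, d in zip(template_a, data))
sbd = sum(b * d for b, d in zip(template_b, data))

det = saa * sbb - sab * sab
scale_a = (sad * sbb - sbd * sab) / det   # recovers 2.0
scale_b = (sbd * saa - sad * sab) / det   # recovers 1.5
print(scale_a, scale_b)
```

Because the toy data are an exact linear combination of the templates, the fit recovers the true scale factors; with statistical fluctuations and overlapping template shapes, the fitted contributions acquire the uncertainties propagated into the analysis.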
\paragraph{Selection conditions} The selection conditions for the search for the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ were based on the following quantities: \begin{itemize} \item the missing mass $m_\text{X}$ to identify the production reaction $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$, \item the invariant mass $m_{\text{e}\text{e}\gamma\gamma}$ of an electron-positron pair candidate and two photons to select the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\rightarrow\gamma+\gamma+\text{e}^++\text{e}^-$, \item the invariant mass $m_{\gamma\gamma}$ of two photons to ascertain the decay $\pi^0\rightarrow\gamma+\gamma$, \item the invariant mass $m_{\text{e}\text{e}}$ of an electron-positron pair candidate, \item the $\chi^2$ probability of a kinematic fit with the hypothesis $\text{p}+\text{d}\rightarrow{^3\text{He}}+\gamma+\gamma+\text{e}^++\text{e}^-$ and \item the energy loss $E_\text{dep}^\text{SEC}$ of the charged particles in the central detector scintillator electromagnetic calorimeter (SEC) and their momentum $p$ to discriminate $\text{e}^\pm$ and $\pi^\pm$ (particle identification, PID). \end{itemize} \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{PIDall.pdf} } \caption{Energy loss of charged particles in the SEC plotted against their momentum times charge for the preselected data sets. A graphical cut around the electron and positron band is indicated by black lines.} \label{fig:PID2009} \end{figure} \par The choice of the cut conditions was performed with $40\,\%$ of the generated Monte Carlo simulations, whereas the remaining Monte Carlo data sample was used later for the selection efficiency determination. While the graphical cut for the particle identification was chosen beforehand (see Fig.~\ref{fig:PID2009}), as it is a common cut utilized for PID independent of the analyzed reaction, the selection conditions for the other five quantities were determined by an optimization algorithm.
This algorithm is based on the relative amount of simulated signal events $S_\text{R} = N_\text{S}^\text{cut} / N_\text{S}^\text{pres}$ remaining after all cuts ($N_\text{S}^\text{cut}$) compared to the number after preselection ($N_\text{S}^\text{pres}$) and the relative amount of all simulated background events $B_\text{R} = N_\text{B}^\text{cut} / N_\text{B}^\text{pres}$ remaining after all cuts ($N_\text{B}^\text{cut}$) in relation to the number after preselection ($N_\text{B}^\text{pres}$). In the case of the background reactions, the contributions as obtained in the data description were used to downscale the Monte Carlo simulations and to extract the numbers. \par The cut optimization algorithm maximizes the evaluation function \begin{equation} \label{eq:evaluationfunction} G = S_\text{R} \cdot \frac{S_\text{R}}{B_\text{R}} \end{equation} by varying the selection conditions for all chosen quantities. \par With the aid of the cut optimization algorithm the following selection conditions were determined: \begin{align} \label{eq:selection} 0.5414\,\text{GeV}/c^2 &\leq &&m_\text{X} &&\leq 0.5561\,\text{GeV}/c^2\,, \\ 0.507\,\text{GeV}/c^2 &\leq &&m_{\text{ee}\gamma\gamma} &&\leq 0.646\,\text{GeV}/c^2\,, \\ 0.0923\,\text{GeV}/c^2 &\leq &&m_{\gamma\gamma} &&\leq 0.1574\,\text{GeV}/c^2\,, \\ & &&m_\text{ee} &&\geq 0.096\,\text{GeV}/c^2\,\text{ and} \\ & &&\chi^2 \text{prob.} &&\geq 0.05\,. \end{align} \section{Results} After applying the selection conditions to the data, three events were left, whereas two events were expected to remain from the direct two-pion production $\text{p}+\text{d}\rightarrow{^3\text{He}}+\pi^0+\pi^0$ according to Monte Carlo simulations. All other background reaction channels were found to give no sizeable contribution after applying the cuts. The invariant mass $m_{\text{ee}\gamma\gamma}$ for these events is plotted in Fig.~\ref{fig:Resultmeegg} together with simulated data.
Note that the generated Monte Carlo events were scaled according to the fit to data after preselection and that the sum of all Monte Carlo events remaining after all cuts is equal to two events. \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{Result_meegg.pdf} } \caption{Invariant mass of $\text{e}^+\text{e}^-\gamma\gamma$ after all cuts for the 2008 and 2009 data sets (black) and for the simulations scaled to data according to the fit to data after preselection (red). The blue dashed lines indicate the chosen selection conditions.} \label{fig:Resultmeegg} \end{figure} \par The overall reconstruction efficiency for the signal decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ was determined to be \begin{equation} \label{eq:effvitual} \varepsilon_\text{S}^\text{virtual} = 0.02331(7) \end{equation} for a decay via $\eta\rightarrow\pi^0+\gamma^*$ assuming VMD, whereas the assumption of a decay according to pure three-particle phase space results in \begin{equation} \label{eq:effphase} \varepsilon_\text{S}^\text{phase} = 0.01844(7). \end{equation} The given uncertainties are purely statistical. \par In order to calculate the upper limit for the branching ratio $\Gamma(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-) / \Gamma(\eta\rightarrow\text{all})$, the decay channel $\eta\rightarrow\pi^++\pi^-+\pi^0$ with $\pi^0\rightarrow\gamma+\gamma$ was utilized for normalization. This is a reasonable choice as this decay channel has the same signature as the signal decay and, thus, possible systematic effects introduced by differences in signature are avoided. According to the data description by Monte Carlo simulations, there were \begin{equation} \label{eq:netaprodced} N_{\eta\rightarrow\pi^+\pi^-\pi^0_{\gamma\gamma}}^\text{produced} = (6.509 \pm 0.018) \times 10^6 \end{equation} events in data, already including the efficiency correction determined by Monte Carlo studies. 
In order to determine a final upper limit for the branching ratio of $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$, all uncertainties have to be considered and incorporated into the calculations. \paragraph{Systematics} The systematic and statistical uncertainties, which need to be considered for the upper limit determination, can be separated into uncertainties from multiplicative effects and uncertainties from offset effects. The former include an uncertainty in the reconstruction efficiency of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ and an uncertainty in the number of $\eta\rightarrow\pi^++\pi^-+\left(\pi^0\rightarrow\gamma+\gamma\right)$ events in data. The latter are the uncertainties in the number of background events remaining after all cuts. \par To determine the systematic uncertainty for the signal reconstruction efficiency, the resolution settings for the Monte Carlo simulations were varied within the uncertainties of the individual detector resolutions observed in data. The extracted square root of the relative variance of the reconstruction efficiency was found to be \begin{equation} \label{eq:varvirtual} \sqrt{\text{Var}_\text{rel}^\text{virtual}} = 0.059 \end{equation} for a decay via $\eta\rightarrow\pi^0+\gamma^*$ assuming VMD, whereas for a decay according to pure three-particle phase space one finds \begin{equation} \label{eq:varphase} \sqrt{\text{Var}_\text{rel}^\text{phase}} = 0.057. \end{equation} In the following analysis the square root of the variance was considered as the systematic uncertainty. \par The uncertainty for the efficiency corrected number of $\eta\rightarrow\pi^++\pi^-+\left(\pi^0\rightarrow\gamma+\gamma\right)$ events in data was obtained by a comparison to the efficiency corrected number determined utilizing less strict preselection conditions, namely no cuts to reject conversion or split-off events, no cut on the momentum of charged decay particles, and less strict cuts on the particles' energies. 
In this way, a systematic uncertainty of $2.3\,\%$ was determined. \par The uncertainties for the number of background events remaining after all cuts can be separated into a statistical uncertainty due to the finite number of Monte Carlo simulations and systematic uncertainties introduced by uncertainties of the fit of Monte Carlo simulations to data. The latter are dominated by differences between the Monte Carlo fit parameters for the 2008 and 2009 data sets, leading to asymmetric uncertainties. Such different fit parameters for the two data sets originated mainly from different experimental settings, which affected, e.g., the event overlap due to different luminosities. To determine the overall systematic uncertainty for the number of remaining background events, the probability density functions (pdf) of the individual uncertainties were folded. The resulting pdf for the nuisance parameters $\lambda_{2008}$ and $\lambda_{2009}$ corresponds to the overall relative systematic uncertainty for the 2008 and 2009 data sets and was incorporated into the upper limit calculations. In Fig.~\ref{fig:Nuisance} the distribution of the nuisance parameter is illustrated for the 2008 data set. \par \begin{figure} \centering \resizebox{0.45\textwidth}{!}{ \includegraphics{Nuisance2008.pdf} } \caption{Nuisance parameter $\lambda_{2008}$ for the systematic uncertainty of the number of background events remaining after all cuts in the 2008 data set.} \label{fig:Nuisance} \end{figure} \par In order to investigate further possible systematic effects, the selection conditions used for the analysis were varied and the expectations according to simulations were compared to the number of events seen in data. Since the expected number of events agreed with the number of events seen in data within the statistical uncertainties, no additional systematic effect needs to be considered. \par A detailed description of the uncertainty investigations is available in Ref.~\cite{Bergmann2017}. 
\paragraph{Upper limit} The upper limit for the relative branching ratio of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ was calculated with the formula: \begin{equation} \label{eq:brformula} \frac{\Gamma\left(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\right)} {\Gamma\left(\eta\rightarrow\pi^++\pi^-+\pi^0\right)} < \frac{N_\text{S,up}}{N_{\eta\rightarrow\pi^+\pi^-\pi^0}^\text{produced} \cdot \varepsilon_\text{S}} \end{equation} where $N_\text{S,up}$ is the upper limit for the number of signal events, which depends on the number of observed events and the number of expected background events. For the calculation of $N_\text{S,up}$ a Bayesian approach was chosen as given in Ref.~\cite{Zhu2007} with a flat prior pdf and incorporating the determined uncertainties and the pdfs for the nuisance parameters. \par As a result, the relative branching ratio of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ via $\eta\rightarrow\pi^0+\gamma^*$ and assuming VMD was found to be \begin{align} \label{eq:brrelvirtual} \frac{\Gamma\left(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\right)_\text{virtual}} {\Gamma\left(\eta\rightarrow\pi^++\pi^-+\pi^0\right)} &< 3.28\times10^{-5} \nonumber \\ &\qquad (\text{CL} = 90\,\%) \end{align} whereas the assumption of a pure three-particle phase space distribution of the ejectiles results in \begin{align} \label{eq:brrelphase} \frac{\Gamma\left(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\right)_\text{phase}} {\Gamma\left(\eta\rightarrow\pi^++\pi^-+\pi^0\right)} &< 4.14\times10^{-5} \nonumber \\ &\qquad (\text{CL} = 90\,\%). 
\end{align} Considering the branching ratio of the decay $\eta\rightarrow\pi^++\pi^-+\pi^0$ of $\Gamma\left(\eta\rightarrow\pi^++\pi^-+\pi^0\right) / \Gamma\left(\eta\rightarrow\text{all}\right) = 0.2292(28)$ \cite{Patrignani2016}, the new upper limit for the branching ratio of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ via $\eta\rightarrow\pi^0+\gamma^*$ results in \begin{align} \label{eq:brvirtual} \frac{\Gamma\left(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\right)_\text{virtual}} {\Gamma\left(\eta\rightarrow\text{all}\right)} &< 7.5\times10^{-6} \nonumber \\ &\qquad (\text{CL} = 90\,\%). \end{align} For comparison, the assumption of a pure three-particle phase space distribution of the ejectiles would lead to \begin{align} \label{eq:brphase} \frac{\Gamma\left(\eta\rightarrow\pi^0+\text{e}^++\text{e}^-\right)_\text{phase}} {\Gamma\left(\eta\rightarrow\text{all}\right)} &< 9.5\times10^{-6} \nonumber \\ &\qquad (\text{CL} = 90\,\%). \end{align} \par These values are smaller than the previous upper limit of $4.5\times10^{-5}$ (CL = $90\,\%$) \cite{Jane1975} by factors of six and five, respectively. \section{Summary} We have presented new studies with the WASA-at-COSY experiment on the $C$-parity-violating $\eta$ meson decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$. The obtained upper limit for the branching ratio of the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ is smaller than the previously available upper limit by a factor of five to six \cite{Jane1975}. The results of the analysis are consistent with no events seen in data and thus give no hint of $C$ violation in an electromagnetic process. Similarly, no processes from physics beyond the Standard Model are required to explain the results. 
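As a quick numerical cross-check (our own illustration, not part of the original analysis), the absolute upper limits and the quoted improvement factors follow directly from the relative limits, the normalization branching ratio, and the previous limit:

```python
# Relative upper limits (CL = 90%) for the VMD and phase-space assumptions
br_rel_virtual = 3.28e-5
br_rel_phase = 4.14e-5

# Branching ratio Gamma(eta -> pi+ pi- pi0) / Gamma(eta -> all) used for normalization
br_3pi = 0.2292

# Absolute upper limits: multiply by the normalization branching ratio
br_virtual = br_rel_virtual * br_3pi  # about 7.5e-6
br_phase = br_rel_phase * br_3pi      # about 9.5e-6

# Improvement over the previous upper limit of 4.5e-5 (CL = 90%)
factor_virtual = 4.5e-5 / br_virtual  # about 6
factor_phase = 4.5e-5 / br_phase      # about 5
```

Multiplying out reproduces the rounded values and the factor-of-six (VMD) and factor-of-five (phase space) improvements quoted above.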
\par In order to further decrease this value and to continue the search for a $C$ parity violation in an electromagnetic process, additional data were collected with WASA-at-COSY utilizing the production reaction $\text{p}+\text{p}\rightarrow\text{p}+\text{p}+\eta$. Over three periods in 2008, 2010, and 2012, in total about $5\times10^8$ such events were recorded; they are currently being analyzed with regard to the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$. \par Besides a decay via one virtual photon, the decay $\eta\rightarrow\pi^0+\text{e}^++\text{e}^-$ could possibly occur via a hypothetical $C$-violating dark boson U with $m_\text{U} < 413\,\text{MeV}/c^2$, where the pertinent form factor is even further suppressed (i.e., the second term in its Taylor expansion vanishes) compared with the single-photon mechanism \cite{Kupsc2011}. Investigations with regard to this decay process are currently ongoing for the presented $\text{p}+\text{d}\rightarrow{^3\text{He}}+\eta$ data sets and the $\text{p}+\text{p}\rightarrow\text{p}+\text{p}+\eta$ data sets recorded with WASA-at-COSY. \section*{Acknowledgements} This work was supported in part by the EU Integrated Infrastructure Initiative HadronPhysics Project under contract number RII3-CT-2004-506078; by the European Commission under the 7th Framework Programme through the Research Infrastructures action of the Capacities Programme, Call: FP7-INFRASTRUCTURES-2008-1, Grant Agreement N.~227431; by the Polish National Science Centre through grant 2016/23/B/ST2/00784; and by the Foundation for Polish Science (MPD), co-financed by the European Union within the European Regional Development Fund. We gratefully acknowledge the support given by the Swedish Research Council, the Knut and Alice Wallenberg Foundation, and the For\-schungs\-zen\-trum J\"u\-lich FFE Funding Program. This work is based on the PhD thesis of Florian Sebastian Bergmann. 
Finally we thank all former WASA-at-COSY collaboration members for their contribution to the success of the measurements, as well as the crew of the COSY accelerator for their support during both measurement periods. \section*{References}
\section{Iterative refinement} \label{sec:alg} We now present an iterative refinement strategy that, when given a lifted abstract transition system, generates the commutativity and the non-commutativity conditions. We then discuss soundness and relative completeness and, in Secs.~\ref{sec:impl} and~\ref{sec:eval}, challenges in generating precise \emph{and} useful commutativity conditions. The refinement algorithm symbolically searches the state space for regions where the operations commute (or do not commute) in a conjunctive manner, adding on one predicate at a time. We add each subregion $H$ (described conjunctively) in which commutativity always holds to a growing disjunctive description of the commutativity condition $\prop$, and each subregion $H$ in which commutativity never holds to a growing disjunctive description of the non-commutativity condition $\hatprop$. \ifARXIV \begin{figure} \else \begin{wrapfigure}[21]{r}{7.1cm} \fi \centering \figboxS{\begin{program}[style=sf,number=true] {\sc Re}\tab $\text{\sc fine}^m_n(\Halg, \preds)$ \{\\ if\tab\ valid($\Halg \;\Rightarrow\; m\ \Lbowtie\ n$) then\label{ln:valid1}\\ $\varphi$ := $\varphi \vee \Halg$;\untab\\ el\tab se if valid($\Halg \;\Rightarrow\; m\ \NLbowtie\ n$) then\label{ln:valid2}\\ $\hatvarphi$ := $\hatvarphi \vee \Halg$;\untab\\ el\tab se\\ let $(\cexC,\cexNC)$ = counterexs. 
to $\Lbowtie$ and $\NLbowtie$ \\ let\tab\ $p$ = \Choose($\Halg,\preds,\cexC,\cexNC$) in \label{ln:choose}\\ {\sc Refine}$^m_n$($\Halg\wedge p$, $\preds\setminus\{p\}$);\\ {\sc Refine}$^m_n$($\Halg\wedge \neg p$, $\preds\setminus\{p\}$);\untab \untab \untab\\ \}\\ main \{\; \tab $\varphi$ := $\FALSE$;\;\; $\hatvarphi$ := $\FALSE$;\\ try \{ $\text{\sc Refine}^m_n(\TRUE, \preds)$; \}\\ catch (InterruptedExn e) \{ skip; \} \\ return($\varphi,\hatvarphi$); \} \untab\\ \end{program}} \caption{\label{fig:alg} Algorithm for generating commutativity $\varphi$ and non-commutativity $\hatvarphi$.} \ifARXIV \end{figure} \else \end{wrapfigure} \fi The algorithm in Fig.~\ref{fig:alg} begins by setting $\varphi=\FALSE$ and $\hatvarphi=\FALSE$. {\sc Refine} begins a symbolic binary search through the state space $H$, starting from the entire state space: $H=\TRUE$. It may also use a collection of predicates $\preds$ (discussed later). At each iteration, {\sc Refine} checks whether the current $H$ represents a region of space for which $m$ and $n$ always commute: $H \Rightarrow m\ \Lbowtie\ n$ (described below). If so, $H$ can be disjunctively added to $\varphi$. It may, instead, be the case that $H$ represents a region of space for which $m$ and $n$ never commute: $H \Rightarrow m\ \NLbowtie\ n$. If so, $H$ can be disjunctively added to $\hatvarphi$. If neither of these cases holds, we have two counterexamples. $\cexC$ is the counterexample to commutativity, returned if the validity check on Line~\ref{ln:valid1} fails. $\cexNC$ is the counterexample to \emph{non}-commutativity, returned if the validity check on Line~\ref{ln:valid2} fails. We now need to subdivide $H$ into two regions. This is accomplished by selecting a new predicate $p$ via the \Choose\ method. For now, let the method \Choose\ and the choice of predicate vocabulary $\preds$ be parametric. {\sc Refine} is sound regardless of the behavior of \Choose. 
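To make the recursion concrete, the following self-contained Python sketch (our own illustration, not the \Tool\ implementation) instantiates the refinement loop on a toy two-key hash table with $m = \texttt{put}(x,v)$ and $n = \texttt{get}(y)$; the SMT validity checks are replaced by brute-force enumeration over the finite state space, and \Choose\ simply takes the first untried predicate:

```python
from itertools import product

# Toy ADT: a hash table over keys {0, 1} and values {0, 1}.
STATES = [dict(zip((0, 1), vals)) for vals in product((0, 1), repeat=2)]
POINTS = [(s, x, v, y) for s in STATES
          for x, v, y in product((0, 1), repeat=3)]

def run(order, state, x, v, y):
    s, ret = dict(state), None
    for op in order:
        if op == 'put':
            s[x] = v          # put(x, v) mutates the state
        else:
            ret = s[y]        # get(y) observes the current value of key y
    return s, ret

def commutes(state, x, v, y):
    return run(('put', 'get'), state, x, v, y) == \
           run(('get', 'put'), state, x, v, y)

# Predicate vocabulary (normally produced by PGen)
p_eq  = lambda s, x, v, y: x == y
p_val = lambda s, x, v, y: s[y] == v
PREDS = [p_eq, p_val]

def sat(H, pt):               # does point pt lie in conjunctive region H?
    return all(p(*pt) == pol for p, pol in H)

def valid(H, pred):           # brute-force stand-in for the SMT check
    return all(pred(*pt) for pt in POINTS if sat(H, pt))

phi, phi_hat = [], []         # commutativity / non-commutativity conditions

def refine(H, preds):
    if valid(H, commutes):
        phi.append(H)
    elif valid(H, lambda *pt: not commutes(*pt)):
        phi_hat.append(H)
    else:
        assert preds, "predicate vocabulary exhausted"
        p, rest = preds[0], preds[1:]   # Choose: first untried predicate
        refine(H + [(p, True)], rest)
        refine(H + [(p, False)], rest)

refine([], PREDS)
```

On this toy example the recursion terminates with $\varphi$ covering the regions $x\neq y$ and $x=y \wedge s[y]=v$ (the put overwrites the key with the value it already holds) and $\hat\varphi$ covering $x=y \wedge s[y]\neq v$, illustrating how the disjuncts accumulate one conjunctive region at a time.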
Below we give the conditions on \Choose\ that ensure relative completeness, and in Sec.~\ref{sec:eval} we discuss our particular strategy. Regardless of what $p$ is returned by \Choose, two recursive calls are made to {\sc Refine}, one with argument $H \wedge p$, and the other with argument $H\wedge\neg p$. The algorithm is exponential in the number of predicates. In Sec.~\ref{sec:impl} we discuss prioritizing predicates. The refinement algorithm generates commutativity conditions in disjunctive normal form. Hence, any finite logical formula can be represented. This logical language is more expressive than previous commutativity logics that, because they were designed for run-time purposes, were restricted to conjunctions of inequalities~\cite{KNPSP:PLDI11} and boolean combinations of predicates over finite domains~\cite{DBLP:conf/pldi/DimitrovRVK14}. \smartpar{Checking a candidate $H_m^n$.} Our algorithm involves checking whether $(H_m^n \Rightarrow m\ \Lbowtie\ n)$ or $(H_m^n \Rightarrow m\ \NLbowtie\ n)$. As shown in Sec.~\ref{sec:noAlt}, we can check whether $H_m^n$ specifies conditions under which $m\ \bowtie\ n$ via an SMT query that does not introduce quantifier alternation. For brevity, we define: \[ \begin{array}{l}\textsf{valid}(H_m^n \;\Rightarrow\;m\ \Lbowtie\ n) \;\equiv\; \textsf{valid}\!\!\left( \begin{array}{l} \forall \Lsigma_0\ \Xs\ \Ys\ \Rms\ \Rns.\;\;\; H_m^n(\Lsigma_0,\Xs,\Ys,\Rms,\Rns) \;\Rightarrow\; \\ \;\;\;\;\; m(\Xs)/\Rms\;\;n(\Ys)/\Rns\;\;\Lsigma_0 = n(\Ys)/\Rns\;\;m(\Xs)/\Rms\;\;\Lsigma_0 \end{array} \right) \end{array} \] Above we assume as a black box an SMT solver providing \textsf{valid}. Here we have lifted the universal quantification within $\Lbowtie$ outside the implication. We can similarly check whether $H_m^n$ is a condition under which $m$ and $n$ \emph{do not} commute. 
First, we define negative analogs of commutativity: \[\begin{array}{rl} \alpha_1\ \NLbowtie\ \alpha_2 \;\equiv& \forall \Lsigma_0.\; \Lsigma_0 \neq \Err{} \;\Rightarrow\; \LGammaOf{\alpha_2}\ \LGammaOf{\alpha_1}\ \Lsigma_0 \neq \LGammaOf{\alpha_1}\ \LGammaOf{\alpha_2}\ \Lsigma_0\\ m\ \NLbowtie\ n \;\equiv& \forall \Xs\ \Ys\ \Rms\ \Rns.\; m(\Xs)/\Rms\ \NLbowtie\ n(\Ys)/\Rns \end{array}\] We thus define a check for when $\varphi_m^n$ is a \emph{non}-commutativity condition with: \[ \begin{array}{l}\textsf{valid}(H_m^n \;\Rightarrow\;m\ \NLbowtie\ n) \;\equiv\; \textsf{valid}\!\!\left( \begin{array}{l} \forall \Lsigma_0\ \Xs\ \Ys\ \Rms\ \Rns. \; H_m^n(\Lsigma_0,\Xs,\Ys,\Rms,\Rns) \;\Rightarrow\; \Lsigma_0 \neq \Err{} \;\Rightarrow\;\\ \;\;\;\;\; m(\Xs)/\Rms\;\;n(\Ys)/\Rns\;\;\Lsigma_0 \neq n(\Ys)/\Rns\;\;m(\Xs)/\Rms\;\;\Lsigma_0 \end{array} \right) \end{array} \] \section{Data Structure Representations} \label{apx:ds} \lstset{numbers=left,numberstyle=\tiny,numbersep=5pt,language=Lisp, stringstyle=\ttfamily\small,basicstyle=\ttfamily\footnotesize, showstringspaces=false,breaklines,frame=none} \input{raw.tex} \section{BlockKing: YML representation} \label{apx:blockking} \input{listings/blockking.tex} \section{BlockKing Fixed: YML representation} \label{apx:blockking_fixed} \input{listings/blockking_fixed.tex} \end{document} \section{Soundness and Relative Completeness} \begin{theorem}[Soundness]\label{thm:sound} For each $\text{\sc Refine}^m_n$ iteration: $\varphi \Rightarrow m\ \Lbowtie\ n$, and $\hatvarphi \Rightarrow m\ \NLbowtie\ n$. \ifARXIV \smartpar{Theorem~\ref{thm:sound}.} \begin{proof}\normalfont By induction. Initially, $\FALSE$ is a suitable condition for when commutativity holds. $\FALSE$ is also a suitable condition under which commutativity does not hold. At each iteration, $\varphi$ or $\hatvarphi$ may be updated (not both, but for soundness this does not matter). Consider $\varphi$. 
It must also be the case that $(\varphi\vee H)\Rightarrow m\ \Lbowtie\ n$ because we know that $\varphi\Rightarrow m\ \Lbowtie\ n$ (from the previous iteration) and that $H\Rightarrow m\ \Lbowtie\ n$ (from the \textsf{valid} check at Line~\ref{ln:valid1}). Analogous reasoning applies for $\hatvarphi$. \end{proof} \fi \end{theorem} \noindent \tacasOnly{All proofs available in~\cite{arxiv}.} Soundness holds regardless of what \Choose\ returns and even when the theories used to model the underlying data structure are incomplete. Next we show termination implies completeness: \begin{lemma}\label{lemma:term} If {\sc Refine}$^m_n$ terminates, then $\prop \vee \hatvarphi$ is valid. \ifARXIV \begin{proof}\normalfont The recursive calls of the {\sc Refine} algorithm induce a \emph{binary} tree $T$, where nodes are labeled by the conjunction of predicates. If {\sc Refine} terminates, then $T$ is finite, and each node is labeled with a finite conjunction $p_0 \wedge...\wedge p_n$. \emph{Claim.} The disjunction of all leaf node labels is valid. \emph{Pf.} By induction on the tree. Base case: a single-node tree has label $\TRUE$. Inductive case: for every new node created, labeled with a new conjunct $...\wedge p$, there is a sibling node with label $...\wedge \neg p$. Each leaf node of tree $T$, labeled with conjunction $\gamma$, arises from {\sc Refine} reaching a base case where, by construction, the conjunction $\gamma$ is disjunctively added to either $\prop$ or $\hatvarphi$. Since {\sc Refine} terminates, \emph{all} conjunctions are added to either $\prop$ or $\hatvarphi$, and thus $\prop\vee\hatvarphi$ must be valid. \end{proof} \fi \end{lemma} \begin{theorem}[Conditions for Termination]\label{thm:rc} {\sc Refine}$^m_n$ terminates if 1. 
{\bf (expressiveness)} the state space $\Sigma$ is partitionable into a finite set of regions $\Sigma_1,...,\Sigma_N$, each described by a finite conjunction of predicates $\psi_i$, such that either $\psi_i\Rightarrow m\ \Lbowtie\ n$ or $\psi_i\Rightarrow m\ \NLbowtie\ n$; and 2. {\bf (fairness)} for every $p\in\preds$, \Choose\ eventually picks $p$ (note that this does not imply that $\preds$ is finite). \ifARXIV \begin{proof}\normalfont By contradiction. As in the proof for Lemma \ref{lemma:term}, we represent the algorithm's execution as a binary tree $T$, induced by the recursive {\sc Refine} calls, whose nodes are labeled by the conjunction of predicates (Lines 9 and 10 in Fig.~\ref{fig:alg}). Assume there exists an infinite path along $T$, and let its respective labels be $\pi = p_0, p_0\wedge p_1, p_0\wedge p_1\wedge p_2,...$. \emph{Claim.} There is no finite prefix of $\pi$ that contains all the predicates $\psi_i$. \emph{Pf.} Had there been such a prefix $\varpi$, by the expressiveness assumption the running condition $H$ would satisfy one of the validity checks at lines 2 and 4 within, or immediately after, $\varpi$. This is because $H$ would be equal to, or stronger than, the conjunction of the predicates $\psi_i$. This would have made $\pi$ finite, as $\pi$ is extended only if both of the validity checks fail, contradicting the assumption that $\pi$ is infinite. By the above claim, at least one of the predicates $\psi_i$ is not contained in any finite prefix of $\pi$. This contradicts the fairness assumption, whereby any predicate $p \in \preds$ is chosen after finitely many \Choose\ invocations (provided the algorithm has not terminated). \end{proof} \fi \end{theorem} \noindent Note that while these conditions ensure termination, the bound on the number of iterations depends on the predicate language and behavior of \Choose. 
\section{Conclusions and future work} This paper demonstrates that it is possible to automatically generate sound and effective commutativity conditions, a task that has so far been done manually or without soundness. Our commutativity conditions are applicable in a variety of contexts including transactional boosting~\cite{ppopp08}, open nested transactions~\cite{DBLP:conf/ppopp/NiMAHHMSS07}, and other non-transactional concurrency paradigms such as race detection~\cite{DBLP:conf/pldi/DimitrovRVK14}, parallelizing compilers~\cite{rinard,DBLP:conf/oopsla/TrippYFS11}, and, as we show, robustness of Ethereum smart contracts~\cite{sergeyhobor}. It has been shown that understanding the commutativity of data-structure operations provides a key avenue to improved performance~\cite{DBLP:journals/tocs/ClementsKZMK15} or ease of verification~\cite{DBLP:conf/popl/KoskinenPH10,DBLP:conf/pldi/KoskinenP15}. This work opens several avenues of future research. For instance, leveraging the internal state of the SMT solver (beyond counterexamples) in order to generate new predicates~\cite{HBG04}; automatically building abstract representations or making inferences such as the one we made for the stack example; and exploring strategies to compute commutativity conditions directly from the program's code, without the need for an intermediate abstract representation~\cite{DBLP:conf/oopsla/TrippYFS11}. \section{Case studies} \label{sec:eval} \smartpar{Common Data-Structures.} We applied \Tool\ to Set, HashTable, Accumulator, Counter, and Stack. The generated commutativity conditions for these data structures typically combine multiple theories, such as sets, integers and arrays. We used the quantifier-free integer theory in SMTLIB to encode the abstract state and contracts for the Counter and Accumulator ADTs. For Set, we used the theory of finite sets~\cite{BRBT16} for tracking elements along with integers to track size; for HashTable, finite sets to track keys, and arrays for the HashMap itself. 
For Stack, we observed that for the purpose of pairwise commutativity it is sufficient to track the behavior of boundedly many top elements. Since two operations can \emph{at most} either pop the top two elements or push two elements, tracking four elements is sufficient. All evaluation data is available on our website~\cite{servoishomepage}. Depending on the pair of methods, the number of predicates generated by {\sc PGen} was (count after filtering in parentheses): \input{numpreds}. We did not provide any hints to the algorithm for this case study. On all our examples, the \emph{simple} heuristic terminated with precise commutativity conditions. In Fig.~\ref{fig:res}, we give the number of solver queries and total time (in paren.) consumed by this heuristic. The experiments were run on a 2.53 GHz Intel Core 2 Duo machine with 8 GB RAM. The conditions in Fig.~\ref{fig:res} are those generated by the \emph{poke} heuristic, and the interested reader may compare them with those from the simple heuristic in~\cite{B16}. On the theoretical side, our \Choose\ implementation is fair (satisfies condition 2 of Thm.~\ref{thm:rc}, as Lines 9-10 of the algorithm remove from $\mathcal{P}$ the predicate being tried). From our experiments we conclude that our choice of predicates satisfies condition 1 of Thm.~\ref{thm:rc}. Although our algorithm is sound, we manually validated the implementation of \Tool\ by examining its output and comparing the generated commutativity conditions with those reported by prior studies. In the case of Accumulator and Counter, our commutativity conditions were identical to those given in~\cite{KR:PLDI11}. For the Set data structure, the work of~\cite{KR:PLDI11} used a less precise Set abstraction, so we instead validated against the conditions of~\cite{KNPSP:PLDI11}. As for HashTable, we validated that our conditions match those by Dimitrov {\it et al.}~\cite{DBLP:conf/pldi/DimitrovRVK14}. 
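The bounded-tracking observation for Stack can itself be sanity-checked by brute force. The following sketch (our own illustration, not part of \Tool) enumerates short stacks and confirms, for instance, that push(a) and pop commute exactly when the stack is non-empty and its top element equals a:

```python
from itertools import product

def run(stack, ops):
    """Apply ops in order; observations are return values plus final state."""
    s, rets = list(stack), []
    for name, arg in ops:
        if name == 'push':
            s.append(arg)
            rets.append(None)
        else:  # 'pop' returns the top element (None on an empty stack)
            rets.append(s.pop() if s else None)
    return s, rets

def commute(stack, m, n):
    sm, rm = run(stack, [m, n])
    sn, rn = run(stack, [n, m])
    # Return values must match per operation, not per position in the schedule
    return sm == sn and rm[0] == rn[1] and rm[1] == rn[0]

stacks = [list(t) for k in range(4) for t in product((0, 1), repeat=k)]
for st in stacks:
    for a, b in product((0, 1), repeat=2):
        # push/push commute iff they push equal values
        assert commute(st, ('push', a), ('push', b)) == (a == b)
        # push(a)/pop commute iff the stack is non-empty with top == a
        assert commute(st, ('push', a), ('pop', None)) == \
               (len(st) > 0 and st[-1] == a)
```

Both conditions depend only on the top of the stack and the pushed values, consistent with the claim that tracking a bounded number of top elements suffices for pairwise commutativity.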
\newcommand{\sort}[1]{\texttt{#1}} \section{The \Tool\ tool and practical considerations} \label{sec:impl} \vspace{-10pt} \newcommand{\specfont}[1]{\textsf{#1}} \smartpar{Input.} We use an input specification language building on YAML (which has parser and printer support for all common programming languages) with SMTLIB as the logical language. This can be generated automatically with relative ease, thus enabling integration with other tools~\cite{Hoare02,modula,misra,Barnett,Meyer92,Leino06}. \tacasOnly{In~\cite{arxiv}, } \arxivOnly{In Apx.~\ref{yml:counter}, } we show the Counter ADT specification, which was derived from the $\Pre{}$ and $\Post{}$ conditions used in earlier work~\cite{KR:PLDI11}. The states of a transition system describing an ADT are encoded as a list of variables (each as a name/type pair), and each method specification requires a list of argument types, return type, and $\Pre{}$/$\Post{}$ conditions. Again, the Counter example can be seen in \tacasOnly{\cite{arxiv}}. \arxivOnly{\ref{yml:counter}}. \smartpar{Implementation.} We have developed the open-source \Tool\ tool~\cite{servois}, which implements {\sc Refine}, {\sc Lift}, predicate generation, and a method for selecting predicates ({\sc Choose}) discussed below. \Tool\ uses CVC4~\cite{cvc4} as a backend SMT solver. \Tool\ begins by performing some pre-processing on the input transition system. It checks that the transition system is deterministic. Next, in case the transition system is partial, \Tool\ performs the {\sc Lift} transformation (Sec.~\ref{sec:noAlt}). An example of {\sc Lift} applied to Counter is \tacasOnly{in~\cite{arxiv}.} \arxivOnly{in Apx.~\ref{yml:counterauto}.} Next, \Tool\ automatically generates the predicate language ({\sc PGen}) in addition to user-provided hints. If the predicate vocabulary is not sufficiently expressive, then the algorithm would not be able to converge on precise commutativity and non-commutativity conditions (Sec.~\ref{sec:alg}). 
We generate predicates by using terms and operators that appear in the specification, and generating well-typed atoms that are not trivially true or false. As we demonstrate in Sec.~\ref{sec:eval}, this strategy works well in practice. Intuitively, $\Pre{}$ and $\Post{}$ formulas suffice to express the footprint of an operation. So, the atoms comprising them are an effective vocabulary to express when operations do or do not interfere. \smartpar{Predicate selection ({\sc \Choose}).} Even though the number of computed predicates is relatively small, since our algorithm is exponential in the number of predicates it is essential to identify \emph{relevant} predicates for the algorithm. To this end, in addition to filtering trivial predicates, we prioritize predicates based on the \emph{two} counterexamples generated by the validity checks in {\sc Refine}. Predicates that distinguish between the given counterexamples are tried first (call these \emph{distinguishing} predicates). \Choose\ must return a predicate such that $\cexC \Rightarrow\; H \wedge p$ and $\cexNC \Rightarrow\; H \wedge \neg p$. This guarantees progress on both recursive calls. When combined with a heuristic to favor less complex atoms, this ensured timely termination on our examples. We refer to this as the \emph{simple} heuristic. Though this produced precise conditions, they were not always very concise; conciseness is desirable for human understanding and inspection purposes. We thus introduced a new heuristic that significantly improves the \emph{qualitative} aspect of our algorithm. We found that doing a lookahead (recursing on each predicate one level deep, or \emph{poke}) and computing the number of distinguishing predicates for the two branches is a good indicator of the importance of a predicate. More precisely, we pick the predicate with the lowest sum of the numbers of distinguishing predicates remaining in the two recursive calls. 
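A minimal sketch of the distinguishing-predicate test (our own illustration; the predicates and counterexample points are toy stand-ins for those produced by the SMT solver, and either orientation of a separating predicate is acceptable since {\sc Refine} recurses on both $p$ and $\neg p$):

```python
# Counterexamples returned by the two failed validity checks,
# here simply points (x, y) in a toy state space.
cex_c  = (1, 2)   # a point where the operations do commute
cex_nc = (3, 3)   # a point where they do not

# Toy predicate vocabulary, ordered from simpler to more complex atoms
preds = [
    ('x == y',    lambda x, y: x == y),
    ('x < y',     lambda x, y: x < y),
    ('x + y > 5', lambda x, y: x + y > 5),
]

def choose(preds, cex_c, cex_nc):
    """Prefer a predicate that separates the two counterexamples,
    so that both recursive calls of Refine make progress."""
    for name, p in preds:
        if p(*cex_c) != p(*cex_nc):
            return name, p
    # Fall back to the first predicate if none distinguishes
    return preds[0]

name, p = choose(preds, cex_c, cex_nc)
```

Here `choose` picks `x == y`, the simplest atom separating the commuting point from the non-commuting one; the poke heuristic would additionally look one level ahead before committing to a predicate.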
As an aside, those familiar with decision-tree learning might see a connection with the notion of entropy gain. The poke heuristic requires more calls to the SMT solver at each step, but it cuts down the total number of branches to be explored. Also, all individual queries were relatively simple for CVC4. The heuristic converges much faster to the relevant predicates, and produces smaller, more concise conditions. \section{Introduction} Reasoning about the conditions under which data-structure operations commute is an important problem. The ability to derive sound yet effective commutativity conditions unlocks the potential of multicore architectures, including parallelizing compilers~\cite{rinard,DBLP:conf/oopsla/TrippYFS11}, speculative execution (\eg\ transactional memory~\cite{ppopp08}), peephole partial-order reduction~\cite{peephole}, futures, etc. Another important application domain that has emerged recently is Ethereum~\cite{ethereum} smart contracts: efficient execution of such contracts hinges on exploiting their commutativity~\cite{Dickerson:2017:ACS:3087801.3087835} and block-wise concurrency can lead to vulnerabilities~\cite{sergeyhobor}. Intuitively, commutativity is an important property because linearizable data-structure operations that commute can be executed concurrently: their effects do not interfere with each other in an observable way. When using a linearizable HashTable, for example, knowledge that \texttt{put(x,'a')} commutes with \texttt{get(y)} provided that $\texttt{x}\neq \texttt{y}$ enables significant parallelization opportunities. Indeed, it is important for the commutativity condition to be sufficiently granular so that parallelism can be exploited effectively~\cite{DBLP:journals/tocs/ClementsKZMK15}. At the same time, to make safe use of a commutativity condition, it must be sound~\cite{DBLP:conf/popl/KoskinenPH10,DBLP:conf/pldi/KoskinenP15}. Achieving both of these goals using manual reasoning is burdensome and error prone. 
In light of that, researchers have investigated ways of verifying user-provided commutativity conditions~\cite{KR:PLDI11}, as well as synthesizing such conditions automatically, \eg\ based on random interpretation~\cite{aleen}, profiling~\cite{TrippJanus}, or sampling~\cite{DBLP:conf/cav/GehrDV15}. None of these approaches, however, meets the goal of computing a commutativity condition that is both \textit{sound} and \textit{granular} in a \textit{fully automated} manner. In this paper, we present a refinement-based technique for synthesizing commutativity conditions. Our technique builds on well-known descriptions and representations of abstract data types (ADTs) in terms of logical $(\Pre{m},\Post{m})$ specifications~\cite{Hoare02,modula,misra,Barnett,Meyer92,Leino06} for each method $m$. Our algorithm iteratively relaxes under-approximations of the commutativity \emph{and} non-commutativity conditions of methods $m$ and $n$, starting from {\sf false}, into increasingly precise versions. At each step, we conjunctively subdivide the symbolic state space into regions, searching for areas where $m$ and $n$ commute and areas where they do not. Counterexamples to both the positive side and the negative side are used in the next symbolic subdivision. Throughout this recursive process, we accumulate the commutativity condition as a growing disjunction of these regions. The output of our procedure is a logical formula $\varphi_m^n$, which specifies when method $m$ commutes with method $n$. We have proven that the algorithm is sound and that it can be aborted at any time to obtain a partial, yet useful~\cite{TrippJanus,ppopp08}, commutativity condition. We show that, under certain conditions, termination is guaranteed (relative completeness). We address several challenges that arise in using an iterative refinement approach to generate precise and useful commutativity conditions.
First, we show how to pose the commutativity question in a way that does not introduce additional quantifiers. We also show how to generate the predicate vocabulary for expressing the condition $\varphi_m^n$, as well as how to choose the predicates throughout the refinement loop. A further question that we address is how predicate selection impacts the conciseness and readability of the generated commutativity conditions. Finally, we have generalized our algorithm to left-/right-movers~\cite{lmrm}, a more fine-grained notion of commutativity. We have implemented our approach as the \Tool\ tool, whose code and documentation are available online~\cite{servoishomepage}. \Tool\ is built on top of the CVC4 SMT solver~\cite{cvc4}. We evaluate \Tool\ through two case studies. First, we generate commutativity conditions for a collection of popular data structures, including Set, HashTable, Accumulator, Counter, and Stack. The conditions typically combine multiple theories, such as sets, integers, and arrays. We show the conditions to be comparable in granularity to manually specified conditions~\cite{KR:PLDI11}. Second, we consider \textsf{BlockKing}~\cite{sergeyhobor}, an Ethereum smart contract with a known vulnerability. We demonstrate how a developer can be guided by \Tool\ to create a more robust implementation. \smartpar{Contributions.} In summary, this paper makes the following contributions: \begin{itemize} \item The first sound and precise technique to automatically generate commutativity conditions (Sec.~\ref{sec:alg}). \item Proof of soundness and relative completeness (Sec.~\ref{sec:alg}). \item An implementation that takes an abstract code specification and automatically generates commutativity conditions using an SMT solver (Sec.~\ref{sec:impl}). \item A novel technique for selecting refinement predicates that improves scalability and the simplicity of the generated formulae (Sec.~\ref{sec:impl}).
\item Demonstrated efficacy for several key data structures (Sec.~\ref{subsec:ds}) as well as the \textsf{BlockKing} Ethereum smart contract~\cite{sergeyhobor} (Sec.~\ref{subsec:blockking}). \end{itemize} \noindent \arxivOnly{This is an extended version of our paper~\cite{tacas18}.} \tacasOnly{An extended version of this paper can be found on arXiv~\cite{arxiv}.} \subsection{Lifting a Counter} \label{apx:liftcounter} \section{Right-/Left-movers} \label{sec:lmrm} We now describe how the formalism presented thus far can be extended to a more fine-grained notion of commutativity: an asymmetric version called left-movers and right-movers~\cite{lmrm}, where a method commutes in one direction and not the other. \begin{definition}[Action right-mover~\cite{lmrm}] We say that an action $\alpha_1$ \emph{moves to the right of} action $\alpha_2$, denoted $\alpha_1 \triangleright \alpha_2$, provided that $\GammaOf{\alpha_2} \circ \GammaOf{\alpha_1} \subseteq \GammaOf{\alpha_1} \circ \GammaOf{\alpha_2}$. \end{definition} \noindent Note that left-movers can be defined as right-movers, but with arguments swapped. \begin{definition}[Method right-mover] For $m$ and $n$, $$ m\ \triangleright\ n \;\;\equiv\;\; \forall \Xs\ \Ys\ \Rms\ \Rns.\;\; m(\Xs)/\Rms \triangleright n(\Ys)/\Rns $$ \end{definition} \newcommand\propRM{\vec{\Psi}_m^n} A \emph{logical right-mover condition} denoted $\propRM$ has the same type as a commutativity condition and, again, $\sem{\propRM}$ denotes interpretations of $\propRM$. Moreover, we say that $\propRM$ is a right-mover condition for $m$ and $n$ provided that $\forall \sigma_0\ \Xs\ \Ys\ \Rms\ \Rns.\ \sem{\propRM}\ \sigma_0\ (m(\Xs)/\Rms)\ (n(\Ys)/\Rns) = \TRUE \Rightarrow m(\Xs)/\Rms\ \triangleright\ n(\Ys)/\Rns $ and similarly for a \emph{non}-right-mover condition.
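To make the asymmetry concrete, the action-level check can be sketched over a finite state space, representing each partial transition function $\GammaOf{\alpha}$ as a Python dict (a hypothetical encoding for illustration, not the tool's implementation):

```python
def compose(g2, g1):
    """Partial-function composition g2 o g1: run g1 first, then g2;
    undefined wherever either step is undefined."""
    return {s: g2[g1[s]] for s in g1 if g1[s] in g2}

def right_mover(g1, g2):
    """alpha1 |> alpha2: every behavior of alpha1;alpha2 is also a
    behavior of alpha2;alpha1 (subset inclusion, not equality)."""
    lhs, rhs = compose(g2, g1), compose(g1, g2)
    return all(s in rhs and rhs[s] == lhs[s] for s in lhs)

# Two actions on states {0, 1}: "observe flag set" is partial (it blocks
# unless the flag is 1); "set flag" always sets the flag to 1.
observe = {1: 1}
set_flag = {0: 1, 1: 1}

assert right_mover(observe, set_flag)        # observe |> set_flag
assert not right_mover(set_flag, observe)    # but not the other way
```

The example shows why the relation is asymmetric: whenever \texttt{observe};\texttt{set\_flag} runs, \texttt{set\_flag};\texttt{observe} runs to the same state, but the converse fails from state $0$, where only the second order is defined.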
\smartpar{Checking whether $H_m^n \Rightarrow m\ \Lrm\ n$.} After performing the lifting transformation, we are again able to reduce the question of whether a formula $H_m^n$ is a right-mover condition to a validity check that does not introduce quantifier alternation. \[ \begin{array}{l} \textsf{valid}\\ \;\left( \begin{array}{l} \forall \Lsigma_0\ \Xs\ \Ys\ \Rms\ \Rns.\\ \;\; H_m^n(\Lsigma_0,\Xs,\Ys,\Rms,\Rns) \;\Rightarrow\\ \;\; \Lsigma_0 \neq \Err{} \;\Rightarrow\\ \;\; \LGammaOf{n(\Ys)/\Rns}\ \LGammaOf{m(\Xs)/\Rms}\ \Lsigma_0 \neq \Err{} \;\Rightarrow\\ \;\; \LGammaOf{n(\Ys)/\Rns}\ \LGammaOf{m(\Xs)/\Rms}\ \Lsigma_0 = \LGammaOf{m(\Xs)/\Rms}\ \LGammaOf{n(\Ys)/\Rns}\ \Lsigma_0. \end{array} \right) \end{array} \] Notice that this is a generalization of the validity check for commutativity. \section{Example}\label{sec:overview} Specifying commutativity conditions is generally nontrivial; more importantly, it is easy to miss subtle corner cases. Additionally, it has to be done pairwise for all methods. For ease of illustration, we will focus on the relatively simple \Set\ ADT, whose state consists of a single set $\Contents$ that stores an unordered collection of unique elements. Let us consider one pair of operations: (i) {\tt contains($\x$)/bool}, a side-effect-free check of whether the element $\x$ is in $\Contents$; and (ii) {\tt add($\y$)/bool}, which adds $\y$ to $\Contents$ and returns \TRUE\ if it is not already there, and otherwise returns \FALSE. {\tt add} and {\tt contains} clearly commute if they refer to different elements in the set. There is another case that is less obvious: {\tt add} and {\tt contains} commute if they refer to the same element $e$, as long as in the pre-state $e \in \Contents$. In this case, under both orders of execution, {\tt add} and {\tt contains} leave the set unmodified and return \FALSE\ and \TRUE, respectively.
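Before turning to how such a condition is derived automatically, the two cases above can be checked by brute force. A small Python sketch (a hypothetical model of the \Set\ ADT, not the tool's input format) runs {\tt add} and {\tt contains} in both orders, compares final states and return values, and confirms the condition $\x\neq\y \vee (\x=\y \wedge \x\in\Contents)$ over a tiny domain:

```python
def add(s, y):
    """Model of add(y)/bool: returns (new contents, return value)."""
    return (s | {y}, y not in s)

def contains(s, x):
    """Model of contains(x)/bool: side-effect free."""
    return (s, x in s)

def commute(s, x, y):
    """Run both orders from pre-state s; commuting means equal final
    states and equal per-operation return values."""
    s1, r_add1 = add(s, y)
    s1, r_con1 = contains(s1, x)
    s2, r_con2 = contains(s, x)
    s2, r_add2 = add(s2, y)
    return s1 == s2 and r_add1 == r_add2 and r_con1 == r_con2

states = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
for s in states:
    for x in (0, 1):
        for y in (0, 1):
            expected = x != y or x in s     # the condition from the text
            assert commute(s, x, y) == expected
```

This brute-force check only works because the domain is finite; the point of the paper's algorithm is to derive the same condition symbolically, for unbounded domains.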
The algorithm we describe in this paper takes 3.6s to automatically produce a precise logical formula $\prop$ that captures this commutativity condition, \ie\ the disjunction of the two cases above: $\prop \equiv \x\neq\y \vee (\x = \y \wedge \x \in \Contents)$. The algorithm also generates the conditions under which the methods \emph{do not} commute: $\hatprop \equiv \x=\y \wedge \x\notin \Contents$. These are precise, since $\prop$ is the negation of $\hatprop$. A more complicated example is the commutativity condition that our tool, \Tool, generated (in 1.4s) for the method \textsf{enter($\mvA$,$\msA$,$\bnA$...)} of \textsf{BlockKing} (Sec.~\ref{subsec:blockking}): it does not commute with a second invocation \textsf{enter($\mvB$,$\msB$,$\bnB$...)} \emph{iff}: \[\begin{array}{c} \bigvee \left\{ \begin{array}{l} \mvA \geq 50 \wedge \mvB \geq 50 \wedge \msA \neq \msB\\ \mvA \geq 50 \wedge \mvB \geq 50 \wedge \msA = \msB \wedge \mvA \neq \mvB\\ \mvA \geq 50 \wedge \mvB \geq 50 \wedge \msA = \msB \wedge \mvA = \mvB \wedge \bnA \neq \bnB\\ \end{array} \right. \end{array} \] This disjunction enumerates the non-commutativity cases and, as discussed in Sec.~\ref{subsec:blockking}, directly identifies a vulnerability. Capturing precise conditions such as these by hand, and doing so for many pairs of operations, is tedious and error prone. This paper instead presents a way to automate this. Our algorithm recursively subdivides the state space via predicates until, at the base case, regions are found that are either entirely commutative or else entirely non-commutative. Returning to our \Set\ example, the conditions we incrementally generate are denoted $\prop$ and $\hatprop$, respectively. The following diagram illustrates how our algorithm proceeds to generate the commutativity conditions for {\tt add} and {\tt contains}.
\begin{center} \includegraphics[width=4.8in]{diagram.eps} \end{center} In this diagram, each subsequent panel depicts a partitioning of the state space into regions of commutativity ($\prop$) or non-commutativity ($\hatprop$). The counterexamples $\cexC,\cexNC$ give values for the arguments $\x$, $\y$ and the current state $\Contents$. We denote by $H$ the logical formula that describes the current state space at a given recursive call. We begin with $H_0=\TRUE$, $\prop=\FALSE$, and $\hatprop=\FALSE$. There are three cases for a given $H$: ({\it i}) $H$ describes a precondition for $m$ and $n$ in which they \emph{always} commute; ({\it ii}) $H$ describes a precondition for $m$ and $n$ in which they \emph{never} commute; or ({\it iii}) neither of the above. The latter case drives the algorithm to subdivide the region by choosing a new predicate. We now detail the run of this refinement loop on our earlier \Set\ example. We elaborate on the other challenges that arise in later sections. At each step of the algorithm, we determine which case we are in via carefully designed validity queries to an SMT solver (Sec.~\ref{sec:noAlt}). For $H_0$, the solver returns the commutativity counterexample $\cexC = \{ \x=0,\y=0,\Contents=\emptyset \}$ as well as the non-commutativity counterexample $\cexNC = \{ \x=0,\y=1,\Contents=\{0\} \}$. Therefore, since $H_0=\TRUE$ is neither a commutativity nor a non-commutativity condition, we must refine $H_0$ into regions (or stronger conditions). In particular, we would like to perform a \emph{useful} subdivision: divide $H_0$ into an $H_1$ that allows $\cexC$ but disallows $\cexNC$, and an $H'_1$ that allows $\cexNC$ but not $\cexC$. So we must choose a predicate $p$ (from a suitable set of predicates $\preds$, discussed later), such that $\cexC \Rightarrow H_0 \wedge p$ while $\cexNC \Rightarrow H_0 \wedge \neg p$ (or vice versa). The predicate $\x=\y$ satisfies this property.
The algorithm then makes the next two recursive calls, adding $p$ as a conjunct to $H$, as shown in the second column of the diagram above: one with $H_1 \equiv \TRUE \wedge \x=\y$ and one with $H'_1 \equiv \TRUE \wedge \x\neq\y$. Taking the $H'_1$ case, our algorithm makes another SMT query and finds that $\x\neq \y$ implies that {\tt add} always commutes with {\tt contains}. At this point, it can update the commutativity condition $\prop$, letting $\prop := \prop \vee H'_1$, adding this $H'_1$ region to the growing disjunction. On the other hand, $H_1$ is neither a sufficient commutativity nor a sufficient non-commutativity condition, and so our algorithm, again, produces the respective counterexamples: $\cexC = \{ \x=0,\y=0,\Contents=\emptyset \}$ and $\cexNC = \{ \x=0,\y=0,\Contents=\{0\} \}$. In this case, our algorithm selects the predicate $\x\in \Contents$, and makes two further recursive calls: one with $H_2 \equiv \x=\y \wedge \x\in\Contents$ and another with $H'_2 \equiv \x=\y \wedge \x\notin\Contents$. In this case, it finds that $H_2$ is a sufficiently strong precondition for commutativity, while $H'_2$ is a strong enough precondition for non-commutativity. Consequently, $H_2$ is added as a new disjunct to $\prop$, yielding $\prop \equiv \x\neq\y \vee (\x=\y \wedge \x\in\Contents)$. Similarly, $\hatprop$ is updated to be: $\hatprop \equiv (\x=\y \wedge \x\notin\Contents)$. No further recursive calls are made, so the algorithm terminates and we have obtained a precise (complete) commutativity/non-commutativity specification: $\prop \vee \hatprop$ is valid (Lem.~\ref{lemma:term}). \smartpar{Challenges \& outline.} While the algorithm outlined so far is a relatively standard refinement loop, arriving at the conditions above was not immediate. We now discuss the challenges involved in generating sound \emph{and} useful conditions.
(Sec.~\ref{sec:noAlt}) A first question is how to pose the underlying commutativity queries for each subsequent $H$ in a way that avoids the introduction of additional quantifiers, so that we can remain in fragments for which the solver has complete decision procedures. Thus, if the data structure can be encoded using decidable theories, then the queries we pose to the SMT solver are guaranteed to be decidable as well. $\Pre{m}/\Post{m}$ specifications that are partial would introduce quantifier alternation, but we show how this can be avoided by instead transforming them into total specifications. (Sec.~\ref{sec:alg}) We have proved that our algorithm is sound even if it is aborted or the ADT description involves undecidable theories. We further show that termination implies completeness, and specify broad conditions that imply termination. (Sec.~\ref{sec:impl}) Another challenge is to prioritize predicates during the refinement loop. This choice impacts not only the algorithm's performance, but also the quality/conciseness of the resulting conditions. Our choice of the next predicate $p$ is governed by two requirements. First, for progress, $p$/$\neg p$ must eliminate the counterexamples to commutativity/non-commutativity from the last iteration. This may still leave multiple choices, and we propose two heuristics, called \emph{simple} and \emph{poke}, with different trade-offs to break ties. (Sec.~\ref{sec:eval}) We conclude with an evaluation on a range of popular data structures and a case study on boosting the security of an Ethereum smart contract. \section{Preliminaries} \label{sec:qf} \vspace{-10pt} \smartpar{States, actions, methods.} We will work with a state space $\Sigma$ with decidable equality, and a set of \emph{actions} $A$. For each $\alpha\in A$, we have a (possibly partial) transition function $\GammaOf{\alpha} : \Sigma \rightharpoondown \Sigma$.
We denote a single transition as $\sigma\xrightarrow{\alpha}\sigma'$. We assume that each such action arc completes in finite time. Let $\TT\equiv(\Sigma,A,\GammaOf{\bullet})$. We say that two \emph{actions} $\alpha_1$ and $\alpha_2$ \emph{commute}~\cite{DBLP:conf/pldi/DimitrovRVK14}, denoted $\alpha_1 \bowtie \alpha_2$, provided that $\GammaOf{\alpha_1} \circ \GammaOf{\alpha_2} = \GammaOf{\alpha_2} \circ \GammaOf{\alpha_1}$. Note that $\bowtie$ is with respect to $\TT=(\Sigma,A,\GammaOf{\bullet})$. Our formalism, implementation, and evaluation all extend to a more fine-grained notion of commutativity: an asymmetric version called left-movers and right-movers~\cite{lmrm}, where a method commutes in one direction and not the other. \tacasOnly{Details can be found in~\cite{arxiv}.} \arxivOnly{We return to this in Sec.~\ref{sec:lmrm}.} Also, in our evaluation (Sec.~\ref{sec:eval}) we show left-/right-mover conditions that were generated by our implementation. An action $\alpha \in A$ is of the form $m(\Xs)/\Rms$, where $m$, $\Xs$ and $\Rms$ are called a \emph{method}, \emph{arguments} and \emph{return values} respectively. As a convention, for actions corresponding to a method $n$, we use $\Ys$ for arguments and $\Rns$ for return values. The set of methods will be finite, inducing a finite partitioning of $A$. We refer to an action, say $m(\As)/\Vs$, as \emph{corresponding} to method $m$ (where $\As$ and $\Vs$ are vectors of values). The set of actions corresponding to a method $m$, denoted $A_m$, might be infinite as arguments and return values may be from an infinite domain. \begin{definition}\label{def:Mcommute} Methods $m$ and $n$ \emph{commute}, denoted $m\ \bowtie\ n$ provided that $ \forall \Xs\ \Ys\ \Rms\ \Rns.\;\; m(\Xs)/\Rms \bowtie n(\Ys)/\Rns$. \end{definition} The quantification $\forall \Xs \Rms$ above means $\forall m(\Xs)/\Rms \in A_m$, i.e., all vectors of arguments and return values that constitute an action in $A_m$. 
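Under Def.~\ref{def:Mcommute} and the action-level definition above, commutativity is just equality of the two compositions of partial transition functions. A toy sketch over a finite state space (dicts as partial functions; the Counter-like actions are hypothetical stand-ins for illustration):

```python
def compose(g2, g1):
    """g2 o g1 on partial functions: defined where both steps are."""
    return {s: g2[g1[s]] for s in g1 if g1[s] in g2}

def actions_commute(g1, g2):
    """alpha1 and alpha2 commute iff the two compositions are equal
    as partial maps (same domain of definition, same results)."""
    return compose(g1, g2) == compose(g2, g1)

# Counter-like actions on states {0,..,3}:
inc = {0: 1, 1: 2, 2: 3}            # undefined at the truncation boundary
reset = {s: 0 for s in range(4)}    # always resets the counter to 0

assert actions_commute(inc, inc)
assert not actions_commute(inc, reset)
```

The second assertion mirrors the generated Counter condition in the appendix (\CCmethod{increment} $\bowtie$ \CCmethod{reset} is \emph{false}): running \texttt{inc} before \texttt{reset} ends at $0$, while running it after ends at $1$.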
\smartpar{Abstract specifications.} We symbolically describe the actions of a method $m$ by a pre-condition $\Pre{m}$ and a post-condition $\Post{m}$. Pre-conditions are logical formulae over method arguments and the initial state: $\sem{\Pre{m}} : \Xs \rightarrow \Sigma \rightarrow \mathbb{B}$. Post-conditions are over method arguments, return values, the initial state, and the final state: $\sem{\Post{m}} : \Xs \rightarrow \Rms \rightarrow \Sigma \rightarrow \Sigma \rightarrow \mathbb{B}$. Given $(\Pre{m},\Post{m})$ for every method $m$, we define a transition system $\TT=(\Sigma,A,\GammaOf{\bullet})$ such that $\sigma \xrightarrow{m(\As)/\Vs} \sigma'$ \emph{iff} $\sem{\Pre{m}}\ \As\ \sigma$ and $\sem{\Post{m}}\ \As\ \Vs\ \sigma\ \sigma'$. Since our approach works on deterministic transition systems, we have implemented an SMT-based check (Sec.~\ref{sec:eval}) that ensures the input transition system is deterministic. Deterministic specifications were sufficient in our examples. This is unsurprising given the inherent difficulty of creating efficient concurrent implementations of nondeterministic operations, whose effects are hard to characterize. Reducing nondeterministic data-structure methods to deterministic ones through symbolic partial determinization~\cite{AbadiLamport,CookKoskinenPOPL11} is left as future work. \smartpar{Logical commutativity formulae.} We will generate a commutativity condition for methods $m$ and $n$ as a logical formula over initial states and the arguments/return values of the methods. We denote a logical commutativity formula as $\varphi$ and assume a decidable interpretation of formulae: $\sem{\varphi} : (\sigma,\Xs,\Ys,\Rms,\Rns) \rightarrow \mathbb{B}$. (We tuple the arguments for brevity.) The first argument is the initial state. Commutativity \emph{post}- and \emph{mid}-conditions can also be written~\cite{KR:PLDI11}, but here, for simplicity, we focus on commutativity \emph{pre}-conditions.
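Returning to the abstract specifications above, the induced transition system can be sketched for a finite universe: enumerate all arcs $\sigma \xrightarrow{m(\As)/\Vs} \sigma'$ permitted by $(\Pre{m},\Post{m})$ and check determinism. The spec below is a hypothetical Python encoding of the \Set\ {\tt add} method, not the tool's actual input format:

```python
# Pre/Post spec for add(x)/r over sets: always enabled; the post-state
# is s union {x} and the return value records whether x was absent.

def pre_add(x, s):
    return True

def post_add(x, r, s, s2):
    return s2 == (s | {x}) and r == (x not in s)

universe = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def gamma(pre, post, x):
    """All arcs (sigma, r, sigma') permitted by the spec for argument x."""
    return {(s, r, s2)
            for s in universe for s2 in universe for r in (True, False)
            if pre(x, s) and post(x, r, s, s2)}

arcs = gamma(pre_add, post_add, 0)
# The spec is deterministic: each pre-state has exactly one outgoing arc.
assert len({s for (s, _, _) in arcs}) == len(arcs)
assert (frozenset(), True, frozenset({0})) in arcs
```

The final assertion is the enumerated analogue of the determinism check mentioned above; the paper's implementation performs it symbolically with an SMT query rather than by enumeration.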
We may write $\sem{\varphi}$ as $\varphi$ when it is clear from context that $\varphi$ is meant to be interpreted. We say that $\varphi_m^n$ is a \emph{sound commutativity condition}, and $\hat{\varphi}_m^n$ a sound \emph{non}-commutativity condition resp., for $m$ and $n$ provided that \[\begin{array}{l} \forall \sigma \Xs \Ys \Rms \Rns.\ \sem{\varphi_m^n}\ \sigma\ \Xs\ \Ys\ \Rms\ \Rns \Rightarrow m(\Xs)/\Rms\ \bowtie\ n(\Ys)/\Rns, \text{ and}\\ \forall \sigma \Xs \Ys \Rms \Rns.\ \sem{\hat{\varphi}_m^n}\ \sigma\ \Xs\ \Ys\ \Rms\ \Rns \Rightarrow \neg(m(\Xs)/\Rms\ \bowtie\ n(\Ys)/\Rns), \text{ resp.} \end{array}\] \subsection{Counter} \scriptsize \label{yml:counter} \input{listings/counter.tex} \begin{itemize} \item \CCmethod{decrement} $\bowtie$\ \CCmethod{decrement} Simple: true Poke: true \item \CCmethod{increment} $\rhd$\ \CCmethod{decrement} Simple: [\CCequal{\CCvar{1}}{\CCvar{contents}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{contents}}} $\wedge$ \CCnot{\CCequal{\CCvar{0}}{\CCvar{contents}}}] Poke: \CCnot{\CCequal{\CCvar{0}}{\CCvar{contents}}} \item \CCmethod{decrement} $\rhd$\ \CCmethod{increment} Simple: true Poke: true \item \CCmethod{decrement} $\bowtie$\ \CCmethod{reset} Simple: false Poke: false \item \CCmethod{decrement} $\bowtie$\ \CCmethod{zero} Simple: \CCnot{\CCequal{\CCvar{1}}{\CCvar{contents}}} Poke: \CCnot{\CCequal{\CCvar{1}}{\CCvar{contents}}} \item \CCmethod{increment} $\bowtie$\ \CCmethod{increment} Simple: true Poke: true \item \CCmethod{increment} $\bowtie$\ \CCmethod{reset} Simple: false Poke: false \item \CCmethod{increment} $\bowtie$\ \CCmethod{zero} Simple: [\CCequal{\CCvar{1}}{\CCvar{contents}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{contents}}} $\wedge$ \CCnot{\CCequal{\CCvar{0}}{\CCvar{contents}}}] Poke: \CCnot{\CCequal{\CCvar{0}}{\CCvar{contents}}} \item \CCmethod{reset} $\bowtie$\ \CCmethod{reset} Simple: true Poke: true \item \CCmethod{reset} $\bowtie$\ \CCmethod{zero} Simple: \CCnot{\CCequal{\CCvar{1}}{\CCvar{contents}}} 
$\wedge$ \CCequal{\CCvar{0}}{\CCvar{contents}} Poke: \CCequal{\CCvar{0}}{\CCvar{contents}} \item \CCmethod{zero} $\bowtie$\ \CCmethod{zero} Simple: true Poke: true \end{itemize} \subsection{Counter (lifted, auto-generated)} \scriptsize \label{yml:counterauto} \input{listings/counterauto.tex} \subsection{Accumulator} \scriptsize \label{yml:accumulator} \input{listings/accumulator.tex} \begin{itemize} \item \CCmethod{increase} $\bowtie$\ \CCmethod{increase} Simple: true Poke: true \item \CCmethod{increase} $\bowtie$\ \CCmethod{read} Simple: [\CCequal{\CCvar{x1}}{\CCvar{contents}} $\wedge$ \CCequal{\CCplus{\CCvar{contents}}{\CCvar{x1}}}{\CCvar{contents}}] $\vee$ [\CCnot{\CCequal{\CCvar{x1}}{\CCvar{contents}}} $\wedge$ \CCequal{\CCplus{\CCvar{contents}}{\CCvar{x1}}}{\CCvar{contents}}] Poke: \CCequal{\CCplus{\CCvar{contents}}{\CCvar{x1}}}{\CCvar{contents}} \item \CCmethod{read} $\bowtie$\ \CCmethod{read} Simple: true Poke: true \end{itemize} \subsection{Set} \scriptsize \label{yml:set} \input{listings/set.tex} \begin{itemize} \item \CCmethod{add} $\bowtie$\ \CCmethod{add} Simple: [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{S}}] $\vee$ [\CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{S}}] $\vee$ [\CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] \item \CCmethod{add} $\bowtie$\ \CCmethod{contains} Simple: [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{S}}] $\vee$ [\CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCmember{\CCvar{x1}}{\CCvar{S}}] $\vee$ [\CCnot{\CCmember{\CCvar{x1}}{\CCvar{S}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] \item \CCmethod{add} $\bowtie$\ \CCmethod{getsize} Simple: \CCmember{\CCvar{x1}}{\CCvar{S}} Poke: \CCmember{\CCvar{x1}}{\CCvar{S}} \item \CCmethod{add} $\bowtie$\ \CCmethod{remove} Simple: \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}} Poke: \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}} \item \CCmethod{contains} $\bowtie$\ 
\CCmethod{contains} Simple: true Poke: true \item \CCmethod{contains} $\bowtie$\ \CCmethod{getsize} Simple: true Poke: true \item \CCmethod{contains} $\bowtie$\ \CCmethod{remove} Simple: [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCequal{\CCsetminus{\CCvar{S}}{\CCsingleton{\CCvar{x1}}}}{\CCsingleton{\CCvar{y1}}}] $\vee$ [\CCnot{\CCequal{\CCsetminus{\CCvar{S}}{\CCsingleton{\CCvar{x1}}}}{\CCsingleton{\CCvar{y1}}}} $\wedge$ \CCmember{\CCvar{y1}}{\CCsingleton{\CCvar{x1}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCnot{\CCequal{\CCsetminus{\CCvar{S}}{\CCsingleton{\CCvar{x1}}}}{\CCsingleton{\CCvar{y1}}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCsingleton{\CCvar{x1}}}}] \item \CCmethod{getsize} $\bowtie$\ \CCmethod{getsize} Simple: true Poke: true \item \CCmethod{getsize} $\bowtie$\ \CCmethod{remove} Simple: [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] Poke: \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}} \item \CCmethod{remove} $\bowtie$\ \CCmethod{remove} Simple: [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCequal{\CCsetminus{\CCvar{S}}{\CCsingleton{\CCvar{y1}}}}{\CCsingleton{\CCvar{x1}}}] $\vee$ 
[\CCnot{\CCequal{\CCsetminus{\CCvar{S}}{\CCsingleton{\CCvar{y1}}}}{\CCsingleton{\CCvar{x1}}}} $\wedge$ \CCmember{\CCvar{y1}}{\CCsingleton{\CCvar{x1}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{S}}}] $\vee$ [\CCnot{\CCequal{\CCsetminus{\CCvar{S}}{\CCsingleton{\CCvar{y1}}}}{\CCsingleton{\CCvar{x1}}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCsingleton{\CCvar{x1}}}}] \end{itemize} \subsection{HashTable} \scriptsize \label{yml:hashtable} \input{listings/hashtable.tex} \begin{itemize} \item \CCmethod{get} $\bowtie$\ \CCmethod{get} Simple: true Poke: true \item \CCmethod{get} $\bowtie$\ \CCmethod{haskey} Simple: true Poke: true \item \CCmethod{put} $\rhd$\ \CCmethod{get} Simple: [\CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{y1}}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{keys}}] $\vee$ [\CCnot{\CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{y1}}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCequal{\CCstore{\CCvar{H}}{\CCvar{x1}}{\CCvar{x2}}}{\CCvar{H}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{keys}}] $\vee$ [\CCnot{\CCequal{\CCstore{\CCvar{H}}{\CCvar{x1}}{\CCvar{x2}}}{\CCvar{H}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] \item \CCmethod{get} $\rhd$\ \CCmethod{put} Simple: [\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}}] $\vee$ [\CCnot{\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}}] $\vee$ [\CCnot{\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] \item \CCmethod{remove} $\rhd$\ \CCmethod{get} Simple: true Poke: true \item \CCmethod{get} $\rhd$\ \CCmethod{remove} Simple: [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}} \item \CCmethod{get} $\bowtie$\ \CCmethod{size} Simple: true 
Poke: true \item \CCmethod{haskey} $\bowtie$\ \CCmethod{haskey} Simple: true Poke: true \item \CCmethod{haskey} $\bowtie$\ \CCmethod{put} Simple: [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{keys}}] $\vee$ [\CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCmember{\CCvar{y1}}{\CCvar{keys}}] $\vee$ [\CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] \item \CCmethod{haskey} $\bowtie$\ \CCmethod{remove} Simple: [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}}] $\vee$ [\CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}}] $\vee$ [\CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCmember{\CCvar{x1}}{\CCvar{keys}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCnot{\CCmember{\CCvar{x1}}{\CCvar{keys}}}] \item \CCmethod{haskey} $\bowtie$\ \CCmethod{size} Simple: true Poke: true \item \CCmethod{put} $\bowtie$\ \CCmethod{put} Simple: [\CCequal{\CCvar{x2}}{\CCvar{y2}} $\wedge$ \CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{y1}}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{keys}}] $\vee$ [\CCequal{\CCvar{x2}}{\CCvar{y2}} $\wedge$ \CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{y1}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCequal{\CCvar{x2}}{\CCvar{y2}} $\wedge$ \CCnot{\CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{y1}}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCnot{\CCequal{\CCvar{x2}}{\CCvar{y2}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: [\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}} $\wedge$ \CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{x1}}} $\wedge$ \CCequal{\CCplus{\CCvar{size}}{\CCvar{1}}}{\CCvar{1}} $\wedge$ \CCmember{\CCvar{y1}}{\CCvar{keys}}] $\vee$ 
[\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}} $\wedge$ \CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{x1}}} $\wedge$ \CCequal{\CCplus{\CCvar{size}}{\CCvar{1}}}{\CCvar{1}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}} $\wedge$ \CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{x1}}} $\wedge$ \CCnot{\CCequal{\CCplus{\CCvar{size}}{\CCvar{1}}}{\CCvar{1}}} $\wedge$ \CCmember{\CCvar{x1}}{\CCvar{keys}}] $\vee$ [\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}} $\wedge$ \CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{x1}}} $\wedge$ \CCnot{\CCequal{\CCplus{\CCvar{size}}{\CCvar{1}}}{\CCvar{1}}} $\wedge$ \CCnot{\CCmember{\CCvar{x1}}{\CCvar{keys}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}} $\wedge$ \CCnot{\CCequal{\CCvar{x2}}{\CCselect{\CCvar{H}}{\CCvar{x1}}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCnot{\CCequal{\CCselect{\CCvar{H}}{\CCvar{y1}}}{\CCvar{y2}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] \item \CCmethod{put} $\bowtie$\ \CCmethod{remove} Simple: \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}} Poke: \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}} \item \CCmethod{put} $\bowtie$\ \CCmethod{size} Simple: \CCmember{\CCvar{x1}}{\CCvar{keys}} Poke: \CCmember{\CCvar{x1}}{\CCvar{keys}} \item \CCmethod{remove} $\bowtie$\ \CCmethod{remove} Simple: [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{x1}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{y1}}{\CCvar{x1}}}] Poke: 
[\CCequal{\CCsetminus{\CCvar{keys}}{\CCsingleton{\CCvar{x1}}}}{\CCsingleton{\CCvar{y1}}}] $\vee$ [\CCnot{\CCequal{\CCsetminus{\CCvar{keys}}{\CCsingleton{\CCvar{x1}}}}{\CCsingleton{\CCvar{y1}}}} $\wedge$ \CCmember{\CCvar{y1}}{\CCsingleton{\CCvar{x1}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCvar{keys}}}] $\vee$ [\CCnot{\CCequal{\CCsetminus{\CCvar{keys}}{\CCsingleton{\CCvar{x1}}}}{\CCsingleton{\CCvar{y1}}}} $\wedge$ \CCnot{\CCmember{\CCvar{y1}}{\CCsingleton{\CCvar{x1}}}}] \item \CCmethod{remove} $\bowtie$\ \CCmethod{size} Simple: [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCmember{\CCvar{x1}}{\CCvar{keys}}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCmember{\CCvar{x1}}{\CCvar{keys}}}] Poke: \CCnot{\CCmember{\CCvar{x1}}{\CCvar{keys}}} \item \CCmethod{size} $\bowtie$\ \CCmethod{size} Simple: true Poke: true \end{itemize} \subsection{Stack} \scriptsize \label{yml:stack} \input{listings/stack.tex} \begin{itemize} \item \CCmethod{clear} $\bowtie$\ \CCmethod{clear} Simple: true Poke: true \item \CCmethod{clear} $\bowtie$\ \CCmethod{pop} Simple: false Poke: false \item \CCmethod{clear} $\bowtie$\ \CCmethod{push} Simple: false Poke: false \item \CCmethod{pop} $\bowtie$\ \CCmethod{pop} Simple: \CCequal{\CCvar{nextToTop}}{\CCvar{top}} Poke: \CCequal{\CCvar{nextToTop}}{\CCvar{top}} \item \CCmethod{push} $\rhd$\ \CCmethod{pop} Simple: [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{top}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{x1}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{top}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{x1}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{top}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}} 
$\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{top}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{top}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCequal{\CCvar{1}}{\CCvar{size}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{top}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{0}}{\CCvar{size}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{0}}{\CCvar{size}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{0}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] $\vee$ [\CCnot{\CCequal{\CCvar{1}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{0}}{\CCvar{size}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}}} $\wedge$ \CCequal{\CCvar{top}}{\CCvar{x1}}] Poke: \CCnot{\CCequal{\CCvar{0}}{\CCvar{size}}} $\wedge$ 
\CCequal{\CCvar{top}}{\CCvar{x1}} \item \CCmethod{pop} $\rhd$\ \CCmethod{push} Simple: [\CCequal{\CCvar{nextToTop}}{\CCvar{y1}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{top}}] $\vee$ [\CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{y1}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{top}}] $\vee$ [\CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{y1}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{top}}] $\vee$ [\CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{y1}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{top}}] $\vee$ [\CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{y1}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{thirdToTop}}} $\wedge$ \CCnot{\CCequal{\CCvar{nextToTop}}{\CCvar{secondToTop}}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{top}}] Poke: \CCequal{\CCvar{y1}}{\CCvar{top}} \item \CCmethod{push} $\bowtie$\ \CCmethod{push} Simple: [\CCequal{\CCvar{thirdToTop}}{\CCvar{y1}} $\wedge$ \CCequal{\CCvar{thirdToTop}}{\CCvar{x1}}] $\vee$ [\CCnot{\CCequal{\CCvar{thirdToTop}}{\CCvar{y1}}} $\wedge$ \CCequal{\CCvar{y1}}{\CCvar{x1}}] Poke: \CCequal{\CCvar{y1}}{\CCvar{x1}} \end{itemize} \section{Commutativity without quantifier alternation} \label{sec:noAlt} Def.~\ref{def:Mcommute} requires showing equivalence between different compositions of potentially partial functions. 
That is, $ \GammaOf{\alpha_1} \circ \GammaOf{\alpha_2} = \GammaOf{\alpha_2} \circ \GammaOf{\alpha_1} $ if and only if: \[\begin{array}{c} \forall \sigma_0\ \sigma_1\ \sigma_{12}.\ \GammaOf{\alpha_1}\sigma_0 = \sigma_1 \wedge \GammaOf{\alpha_2}\sigma_1 = \sigma_{12} \;\; \Rightarrow\;\; \exists \sigma_3.\ \GammaOf{\alpha_2}\sigma_0 = \sigma_3 \wedge \GammaOf{\alpha_1}\sigma_3 = \sigma_{12}\\ \text{(\emph{and a symmetric case for the other direction})} \end{array}\] Even when the transition relation can be expressed in a decidable theory, the $\forall \exists$ quantifier alternation in the above encoding makes the check undecidable in general, so any procedure requiring such a check would be incomplete. SMT solvers are particularly poor at handling such constraints. We observe that when the transition system is specified as $Pre_m$ and $Post_m$ conditions, and the $Post_m$ condition is \emph{consistent} with $Pre_m$, then it is possible to avoid quantifier alternation. By consistent we mean that whenever $Pre_m$ holds, there is always some state and return value for which $Post_m$ holds. \[\forall \As\ \sigma.\;\;\; \Pre{m}(\As,\sigma) = \TRUE \;\Rightarrow\; \exists \sigma'\ \Rms.\ \Post{m}(\As,\Rms,\sigma,\sigma').\] This assumption holds for all of the specifications in the examples we considered (Sec.~\ref{sec:eval}). This allows us to perform a simple transformation on transition systems to a lifted domain, and enforce a definition of commutativity in the lifted domain $m\ \Lbowtie\ n$ that is equivalent to Def.~\ref{def:Mcommute}. This new definition requires only \emph{universal} quantification, and as such, is better suited to SMT-backed algorithms (Sec.~\ref{sec:alg}).
\begin{definition}[Lifted transition function]\label{def:lifted} For $\TT=(\Sigma,A,\GammaOf{\bullet})$, we lift $\TT$ to $\LTT=(\LSigma,A,\LGammaOf{\bullet})$ where $\LSigma = \Sigma \cup \{ \Err{} \}$, $\Err{} \notin \Sigma$, and $\LGammaOf{\alpha} : \LSigma \rightarrow \LSigma$, as: $$ \LGammaOf{\alpha} \Lsigma \equiv \begin{cases} \Err{} & \text{if } \Lsigma = \Err{}\\ \GammaOf{\alpha} \Lsigma & \text{if } \Lsigma \in \dom{\GammaOf{\alpha}}\\ \Err{} & \text{otherwise} \end{cases} $$ \end{definition} \noindent Intuitively, $\LGammaOf{\alpha}$ wraps $\GammaOf{\alpha}$ so that $\Err{}$ loops back to $\Err{}$, and the (potentially partial) $\GammaOf{\alpha}$ is made total by mapping elements to $\Err{}$ when they are undefined in $\GammaOf{\alpha}$. It is not necessary to lift the actions (or, indeed, the methods), but only the states and the transition function. Once lifted, for a given state $\Lsigma_0$, the question of \emph{some} successor state becomes equivalent to that of \emph{all} successor states because there is exactly one successor state. \newcommand\LPre[1]{\widehat{\text{\it Pre}}_{#1}} \newcommand\LPost[1]{\widehat{\text{\it Post}}_{#1}} \newcommand\err{\textsf{err}} \newcommand\nerr{\textsf{nerr}} \smartpar{Abstraction.} Pre-/post-conditions $(\Pre{m},\Post{m})$ are suitable for specifications of potentially partial transition systems. One can translate these into a new pair $(\LPre{m},\LPost{m})$ that induces a corresponding lifted transition system that is total and remains deterministic. These lifted specifications have types over lifted state spaces: $ \sem{\LPre{m}} : \Xs \rightarrow \LSigma \rightarrow \mathbb{B}$ and $ \sem{\LPost{m}} : \Xs \rightarrow \Rms \rightarrow \LSigma \rightarrow \LSigma \rightarrow \mathbb{B}$.
Our implementation performs this lifting via a translation denoted {\sc Lift} from $(\Pre{m},\Post{m})$ to: \[\begin{array}{rl} \LPre{m}(\Xs,\Lsigma) &\equiv\;\;\TRUE\\ \LPost{m}(\Xs,\Rms,\Lsigma,\Lsigma') &\equiv \;\bigvee \begin{cases} \Lsigma=\Err{} \wedge \Lsigma'=\Err{}\\ \Lsigma \neq \Err{} \wedge \Pre{m}(\Xs,\Lsigma) \wedge \Lsigma'\neq \Err{} \wedge \Post{m}(\Xs,\Rms,\Lsigma,\Lsigma')\\ \Lsigma \neq \Err{} \wedge \neg \Pre{m}(\Xs,\Lsigma) \wedge \Lsigma'=\Err{} \end{cases} \end{array}\] (We abuse notation, giving $\Lsigma$ as an argument to $\Pre{m}$, etc.) It is easy to see that the lifted transition system induced by this translation ($\LSigma,\LGammaOf{\bullet}$) is of the form given in Def.~\ref{def:lifted}. \tacasOnly{In~\cite{arxiv}, } \arxivOnly{In Apx.~\ref{yml:counterauto}, } we show how our tool transforms a counter specification into an equivalent lifted version that is total. We use the notation $\Lbowtie$ to mean $\bowtie$ but over the lifted transition system $\LTT$. Since $\Lbowtie$ is over total, deterministic transition functions, $ \alpha_1\ \Lbowtie\ \alpha_2 $ is equivalent to: \begin{equation} \forall \Lsigma_0.\; \Lsigma_0 \neq \Err{} \;\Rightarrow\; \LGammaOf{\alpha_2}\ \LGammaOf{\alpha_1}\ \Lsigma_0 = \LGammaOf{\alpha_1}\ \LGammaOf{\alpha_2}\ \Lsigma_0 \label{eqn:liftedcommu} \end{equation} The equivalence above is in terms of state equality. Importantly, this is a universally quantified formula that translates to a ground satisfiability check in an SMT solver (modulo the theories used to model the data structure). In our refinement algorithm (Sec.~\ref{sec:alg}), we will use this format to check whether candidate logical formulae describe commutative subregions. \begin{lemma}\label{lemma:foo} $m\ \bowtie\ n \text{ if and only if } m\ \Lbowtie\ n$. \ifARXIV \begin{proof} Follows from classical reasoning, functional extensionality, and case analysis on totality versus partiality.
\end{proof} \else (\emph{All proofs in~\cite{arxiv}.}) \fi \end{lemma}
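The lifting of Def.~\ref{def:lifted} and the purely universal check of (\ref{eqn:liftedcommu}) can be illustrated with a small executable sketch. This is our illustration, not the paper's tool: the bounded counter and all names in it (\texttt{lift}, \texttt{incr}, \texttt{decr}, \texttt{lifted\_commute}) are hypothetical, and the symbolic SMT check is replaced by finite enumeration over a toy state space.

```python
# Sketch: lifting partial transition functions to total ones over an
# explicit error state, then checking the purely universal lifted
# commutativity condition by enumeration on a toy bounded counter.

ERR = "err"  # the lifted error state, playing the role of Err

def lift(gamma):
    """Wrap a partial transition function (None when undefined) into a
    total function over states extended with ERR."""
    def lifted(state):
        if state == ERR:
            return ERR               # ERR loops back to ERR
        nxt = gamma(state)
        return ERR if nxt is None else nxt  # undefined inputs go to ERR
    return lifted

# Partial transitions for a counter confined to {0, 1, 2, 3}:
def incr(n):
    return n + 1 if n < 3 else None  # undefined at the upper bound

def decr(n):
    return n - 1 if n > 0 else None  # undefined at 0

def lifted_commute(g1, g2, states):
    """Universal check: both application orders agree from every
    non-error initial state."""
    l1, l2 = lift(g1), lift(g2)
    return all(l2(l1(s)) == l1(l2(s)) for s in states if s != ERR)

# incr and decr disagree at the boundaries: from state 0, decr-then-incr
# hits ERR while incr-then-decr returns 0, so the check fails globally.
print(lifted_commute(incr, decr, range(4)))  # False
print(lifted_commute(incr, incr, range(4)))  # True
```

Restricting the enumerated states to a candidate subregion (e.g. \texttt{range(1, 3)} here) would mirror checking commutativity under a precondition, which is what the refinement loop of Sec.~\ref{sec:alg} does symbolically.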
\section*{Abstract} The main purpose of this paper is to analyse the earliest work of L\'eon Rosenfeld, one of the pioneers in the search for Quantum Gravity, the putative theory unifying quantum theory and general relativity. We describe how and why Rosenfeld tried to face this problem in 1927, analysing the role of his mentors: Oskar Klein, Louis de Broglie and Th\'eophile De Donder. Rosenfeld asked himself how quantum mechanics should \textit{concretely} modify general relativity. In the context of a five-dimensional theory, Rosenfeld tried to construct a unifying framework for the gravitational and electromagnetic interaction and wave mechanics. Using a sort of ``general relativistic quantum mechanics'' Rosenfeld introduced a wave equation on a curved background. He investigated the metric created by what he called `quantum phenomena', represented by wave functions. Rosenfeld integrated the Einstein equations in the weak field limit, with wave functions as the source of the gravitational field. He performed a sort of semi-classical approximation, obtaining at first order the Reissner-Nordstr\"om metric. We analyse how Rosenfeld's work is part of the history of Quantum Mechanics, because in his investigation Rosenfeld was guided by Bohr's correspondence principle. Finally we briefly discuss how his contribution is connected with the task of finding out which metric can be generated by a quantum field, a problem that quantum field theory on curved backgrounds would start to address 35 years later. \begin{quotation}\small\selectlanguage{english} `A study of history of science [...] shows that the natural attitude of a scientist is to be inspired by their predecessors, but always taking the liberty of doubting when there are reasons for doubt.'
\begin{flushright} \textit{Oskar Klein} \end{flushright} \end{quotation} \tableofcontents \section{Introduction}\label{intro} In the physics community, the term quantum gravity (QG) is today associated with the task of quantizing gravity, directly or indirectly, in order to unravel a quantum structure of space and time. Despite many approaches, e.g. String Theory, Supergravity ($ N=8 $), Loop Quantum Gravity, non-commutative geometry and so on, a consistent theory is still lacking. From the point of view of History and Philosophy of Science: `QG, \textit{broadly construed}, is the physical theory (still ``under construction'') incorporating both the principles of general relativity (GR) and quantum theory' [emphasis added] (\cite{Stanford}). ``Broadly construed'' means that all the attempts in this direction have contributed to our modern understanding of the difficulties in constructing a consistent theory of QG, even those approaches that did not quantize the gravitational interaction. To name one, quantum field theory (QFT) on curved backgrounds increased our knowledge of the physics of Black Holes \cite{Hawking}. Furthermore, from the point of view of integrated History and Philosophy of Science (\&HPS), the fact that the theory is still under construction represents a unique opportunity for studying the process of a theory's formation from the inside (in Kuhnian words ``a revolution in progress''). Usually the history of QG starts in 1930 with the first attempts to reconcile the budding quantum field theory with gravity, made by L\'eon Rosenfeld \cite{Rosenfeld1} \cite{Rosenfeld2} (cf. English translation \cite{engl-transl} and the accompanying commentary \cite{commentary}). In the first paper Rosenfeld tried to find out what would be the gravitational field produced by light in a weak-field approximation. This paper marked the beginning of what is today called the \textit{covariant approach}.
In this work the quantization procedure was applied to the electromagnetic field only, the metric field being an operator because it is a function of the Maxwell field. In the second paper, conversely, he tried to apply the quantization procedure directly to the gravitational interaction, employing a tetrad gravitational field rather than the conventional metric. This paper marked the beginning of what is today called the \textit{canonical approach}. Before Rosenfeld's attempts, soon after the birth of GR in 1915, researchers tried to apply the theory of gravity to the microscopic world. The best-known example is Einstein's claim of 1916. When he discovered that a mass should emit gravitational waves, Einstein pointed out the need to modify GR \cite{Ein1}. What he had in mind was, of course, Bohr's earlier move of declaring classical electrodynamics inapplicable to the orbiting electrons of his atomic model. In a similar way GR had to be modified with respect to its application to the microscopic world. Einstein's suggestion was not an isolated episode. Recent developments in the history of QG show that in the fifteen years before Rosenfeld's attempts many authors tried to reconcile the old quantum theory or quantum mechanics (QM) with gravity \cite{Stachel1} \cite{Rickles-presentation} \cite{Rickles-sources} \cite{Hagar} \cite{Rocci-Lodge} \cite{Rocci-tesi}. For this reason the period between 1915 and 1930 could be called a prehistory era. Exploring this time frame, the term ``quantum gravity'' must necessarily be interpreted in a broad sense, because in the period between 1916 and 1930 the quantization procedure was a concept under construction. As far as we know, before 1930, there were no attempts to quantize the gravitational field directly. Before going on, we therefore briefly summarize the evolution of the quantization procedure during this period \cite{history-QM1}.
Between 1916 and 1924, the construction of atomic models was one of the main tasks of the old quantum theory. The quantization procedure of the atomic model was performed by applying the Epstein-Sommerfeld-Wilson rules. After 1925, with the birth of QM, the investigation of the atomic phenomena was pursued by wave mechanics (WM) and matrix mechanics (MM). In the first formulation of QM, electrons are represented by normalized wave functions. WM was born from the Hamilton-Jacobi (HJ) analogy between particles and waves \cite{Schroe1a}. The quantization procedure consisted in writing a wave equation and in imposing boundary conditions on the wave functions. The second formulation of QM focused on observable quantities. MM was born from the attempt to formulate a new theoretical technique for the determination of the intensities of quantum transitions, using the anharmonic oscillator as a toy model \cite{Blum2017}. The classical position and its conjugate momentum in the Hamiltonian formulation were treated as ``q-numbers'', which today are known as operators. The name ``q-numbers'' stands for quantum numbers, in contrast with ``c-numbers'', i.e. the usual classical variables, like e.g. the classical position and momentum of a particle \cite{Darrigol}. The quantization procedure consisted in imposing the commutation relations between these q-numbers. In 1926 Schr\"odinger pointed out the equivalence between the two formulations, but WM remained the preferred point of view in attempting to generalize Schr\"odinger's approach in the context of both Special and General Relativity \cite{Rocci-tesi}. In 1927 many new concepts were introduced: the description of spin with two-component wave functions, the statistical interpretation of the wave function, and the uncertainty relations. At the end of 1927 Oskar Klein and Pascual Jordan introduced for the first time the quantum commutation relations for the scalar field operators, but the general approach was developed by Heisenberg and Pauli at the end of 1929.
Rosenfeld was a protagonist of this early period as well. As stated in the introduction of a recent biography of Rosenfeld \cite{Rosenfeld-bio}, the Belgian physicist is a blank sheet in the history of science literature, `but he was at the centre of modern physics as one of the pioneers of quantum field theory and quantum electrodynamics in the late 1920s and the 1930s' (\cite{Rosenfeld-bio}; p. 1). In spite of the fact that he initiated two of the major research areas in the history of QG, the covariant and the canonical approaches, Rosenfeld never considered his early work as an important contribution \cite{Kuhn1}. The aim of this paper is to offer a historical analysis ``in context'' of the papers published by Rosenfeld at the beginning of his career: \cite{Ros1}, \cite{Ros2}, \cite{Ros3}, \cite{Ros4}, \cite{Ros5}. In particular we will focus on the aspects concerning the reconciliation of GR and WM, which produced a first attempt to find the metric generated by ``charged quantum matter'', using a wave-mechanical approach. Rosenfeld was persuaded, at that time, that he had found a quantum modification of the flat metric, using the correspondence principle. He performed a semi-classical approximation in order to compare his quantum metric with the external Reissner-Nordstr\"om (RN) metric. Aside from being important in itself, this attempt contained the seeds of his subsequent work \cite{Rosenfeld1}; nevertheless, Rosenfeld later became one of the opponents of any quantization of the gravitational field in the absence of experimental evidence for its necessity \cite{Rosenfeld63}. The paper is organized as follows. In section \ref{prehist} we briefly introduce Rosenfeld's life and we put it in the context of the prehistory of QG. In section \ref{5D} we review the work of the authors that influenced the professional training of the young Rosenfeld in 1927: Oskar Klein, Louis de Broglie and Th\'eophile De Donder.
In particular we will focus on the analogies and on the differences among these authors. In section \ref{Rosenfeld} we present Rosenfeld's attempt to reconcile GR with WM. At the beginning we shall focus on his first paper, discussing how Klein, de Broglie and De Donder influenced Rosenfeld's work. Then we shall review the papers written by Rosenfeld in 1927, where a general relativistic version of Bohr's correspondence principle emerged. We shall also analyse the role played by Klein, and indirectly by Bohr, in suggesting the first use of the correspondence principle in the context of QG. At the beginning of section \ref{Rosenfeld} we shall focus on what Rosenfeld wanted to achieve. In the last part of the section, i.e. \ref{discussion}, we briefly present a modern interpretation of his approach and a perspective on how the analysed papers would influence Rosenfeld's subsequent work on the search of a quantum theory of gravity. In section \ref{summary} we summarize the basic stages of our paper without entering into technical details. In the Appendices, section \ref{apps}, we describe with more details some calculations left out in the main text. \section{The prehistory of QG and the young Rosenfeld}\label{prehist} The prehistory of QG can be naturally divided into two parts. The first period from 1915 to 1924, was dominated by attempts to understand the role of GR in constructing planetary models of atoms \cite{Jaffe} \cite{Jeffery} \cite{Lodge} \cite{Vallarta1}. With the birth of QM in 1925-26 a new era began, because the classical concept of trajectory had become problematic in the atomic realm. In particular, the second period of the prehistory of QG from 1925 to 1930, was dominated by WM and by attempts which tried to generalize Schr\"odinger's approach in the context of Special Relativity (SR) and GR. 
In fact, between the two alternative formulations of QM, MM and WM, the second was preferred by the authors of the period who tried to find a unique framework describing quantum phenomena and the gravitational interaction \cite{Rocci-tesi}. In this respect, as we will see, L\'eon Rosenfeld was not an exception. The career of the young Belgian physicist had started with the accidental reading of Schr\"odinger's communications \cite{Schroe1a}, as he recollected during an interview with Thomas S. Kuhn and John L. Heilbron in 1963 \cite{Kuhn1}. After completing his studies, Rosenfeld left the University of Li\`ege and moved to Paris at the end of 1926 to meet Louis de Broglie, where, as he recollected in the interview, he spent most of his time learning what he had missed at Li\`ege \cite{Kuhn1}. Rosenfeld himself stressed that he attended a course on relativity in Li\`ege and that the lecturer was an opponent of the new theory. In Paris, he attended many lectures, e.g. Langevin's lectures at the Coll\`ege de France, and he studied many books, including Eddington's book on GR \cite{Eddington}: `I was anxious to do some research, and then the only research I did was in just combining my freshly acquired knowledge of relativity with wave mechanics [...]' \cite{Kuhn1}. A key ingredient of this second period in the prehistory of QG is the enlargement of the four-dimensional space-time by the introduction of a fifth space-like dimension in order to look for a unified picture of the gravitational force, the electromagnetic interaction and the quantum behaviour of particles, described by a wave function. The idea was not new. The founding father of this approach is Theodor Kaluza \cite{Kaluza}, who had noted that a five-dimensional theory of ``pure gravity'', i.e.
without any matter content but with the electromagnetic potentials represented by specific components of the metric field, seems to offer a unified framework to describe the usual four-dimensional gravitational and electromagnetic interactions\footnote{More precisely, Gunnar Nordstr\"om had tried a similar approach before Kaluza \cite{Nord}, but the Finnish physicist described the gravitational interaction using a scalar field instead of a tensor field.}. In 1927 many authors tried to harmonize Kaluza's picture with WM, and started explicitly from the German physicist's 1921 paper\footnote{Kaluza's approach was completely classical. He was afraid that quantum theory could invalidate his five-dimensional approach, as he explicitly stated at the end of his paper (\cite{Kaluza-trad}; p. 8).}. The best-known contribution was that of Oskar Klein\footnote{The modern multidimensional approach used by e.g. supergravity and string theory is called the Kaluza-Klein approach in honour of these two authors, but it differs from the original one. For a review of the modern approach and a comparison with the old one see \cite{Duff-86}.}, who developed his ideas from 1926 to 1927. Less well-known contributions were the papers written by Louis de Broglie \cite{deBroglie} and L\'eon Rosenfeld \cite{Ros1} \cite{Ros2} \cite{Ros3} \cite{Ros4} \cite{Ros5}. During the year spent in Paris, Rosenfeld started to interact frequently with de Broglie, discussing for example the problem of spin. It was the Belgian physicist who drew de Broglie's attention to the five-dimensional approach. As a consequence the French physicist published a paper, in 1927, on this topic \cite{deBroglie} \cite{Kuhn1}. During Kuhn's interview Rosenfeld also recollected that he was anxious to apply his newly acquired knowledge to relativity, and that the first goal he wanted to achieve was to develop `the wave equation in five dimensions' \cite{Kuhn1}.
On this subject Rosenfeld published two notes during his stay at the \'Ecole Normale in Paris: \cite{Ros1} and \cite{Ros2}. Why did Rosenfeld decide to embark on a five-dimensional adventure? What attracted him? What was Rosenfeld's point of view at that time? In the case of Klein's work the answer is well known, because the Swedish physicist himself answered the question. As we will see, Klein, de Broglie and Rosenfeld constructed their five-dimensional approaches starting from different perspectives, and we will try to make clear what considerations led each of the three authors to develop a five-dimensional picture. Another important role for the young Rosenfeld was played by Th\'eophile De Donder. Like Rosenfeld, De Donder was a Belgian researcher, older and more experienced. De Donder was an enthusiastic supporter of Einstein's theory. As we will see, soon after the birth of QM he tried to explain the existence of stable atomic orbits with the help of GR, but he always followed a classical approach \cite{DD1} \cite{DD2} \cite{DD3}. As Rosenfeld recollected: `I published a note which I sent to him to be presented to the Belgian Academy. De Donder was the least critical person you can imagine, he was enthusiastic about it. So he asked me then to come to Brussels, he wanted to have me in Brussels; I wanted to go abroad a bit more, but I worked for a month with him in Brussels.' \cite{Kuhn1}. As we shall see, one of the main consequences of the Rosenfeld-De Donder collaboration in 1927 was the physical interpretation of the assumptions made by Rosenfeld in his first paper, with the introduction of Bohr's correspondence principle in the context of QG, contained in \cite{Ros3} \cite{Ros5} \cite{DD-Ros}. In October 1927 the fifth Solvay conference took place in Brussels and on that occasion De Donder tried to attract attention to Rosenfeld's work.
This Solvay conference is well known to historians of physics, because it marks the start of the famous Einstein-Bohr debate. The young Belgian physicist was not officially admitted to attend the conference, but De Donder invited Rosenfeld to follow him. At the conference Rosenfeld met Max Born for the first time and asked him about the possibility of a stay in G\"ottingen. Born's positive answer permitted Rosenfeld to attend Hilbert's, Born's and Pascual Jordan's lectures (\cite{Rosenfeld-bio}, p. 18), and it would open the doors to his future collaborations with Pauli, Jordan and many others. All these facts show the crucial role played by De Donder in Rosenfeld's life. In the next section we will start with a brief summary of the history of Klein's work and its intersection with de Broglie's contribution to the construction of a five-dimensional Universe. Section 3 will end with an introduction to De Donder's four-dimensional approach, based on the lectures he gave at MIT in 1925, in order to understand, in section 4, how De Donder also influenced Rosenfeld's early work. \section{Oskar Klein's, Louis de Broglie's and Th\'eophile De Donder's role}\label{5D} \subsection{The five-dimensional Universe: Klein's approach}\label{section-Klein} Klein's investigation of the five-dimensional Universe started in 1926 with the purpose of unifying gravity, electromagnetism and WM \cite{Pais2}. As Klein himself recollected in \cite{Klein-life}, he was attracted by two facts. First, he knew that the Hamilton-Jacobi (HJ) equation offers a link between particle dynamics and the propagation of a wave front, in the limit of geometrical optics, suggesting a concrete realization for the wave-particle duality. Secondly, by writing the relativistic HJ equation for a particle moving in a combined gravitational and electromagnetic field, he noticed that the electric charge would play the role of an extra momentum component: `[...]
I gave a lecture course on electromagnetism, towards the end of which I derived the general relativistic Hamilton-Jacobi equation for an electric particle moving in a combined gravitational and electromagnetic field. Thereby, \textit{the similarity struck me between the ways the electromagnetic potentials and the Einstein gravitational potentials enter into this equation}, the electric charge in appropriate units - appearing as the analogue to a fourth momentum component, the whole looking like a wave front equation in a space of four dimensions. [emphasis added]'\footnote{It is worth noting that in the original paper Klein did not emphasize the role of the electric charge explicitly. Rosenfeld followed a similar reasoning in constructing his wave equation, but stated it explicitly: see the remark after equation (\ref{eq-cl2}).} (\cite{Klein-life}; p. 108)\footnote{The original reasoning runs backward with respect to the path followed by Klein in the paper, where the author presented his model in an axiomatic way.}. In the summer of 1925 he became `immediately very eager to see how far the mentioned analogy reached' (\cite{Klein-life}; p. 109) and he started to investigate the five-dimensional Riemann geometry to describe the gravitational and electromagnetic interactions in a unified framework, trying also to write a five-dimensional wave equation. In the long wavelength limit, the wave equation resembles the eikonal equation for the paths of light rays in geometric optics. These paths follow geodesic lines through a Riemannian space: Klein identified them with five-dimensional null-geodesics which reduce, on his assumptions, to four-dimensional trajectories for charged massive particles moving in a combined electromagnetic and gravitational field. Klein's original idea was to follow an analogy with light in five dimensions, even if he wanted to relate five-dimensional geometry with the stationary states of massive particles. 
Carrying on this work, the Swedish physicist convinced himself that his approach was only a first step towards the formulation of a theory able to reconcile GR with WM. But this conclusion was contained only in his last paper of the period \cite{Klein5}, a work that Rosenfeld would never cite. Now we briefly retrace the steps followed by Klein in his first paper \cite{Klein1} \cite{Klein1-trad}. Klein introduced the following five-dimensional line element\footnote{In our paper we consider many authors who introduced different notations. We decided to adopt the following conventions. Barred indices refer to the five-dimensional World, $ \bar\mu = 0, 1, 2, 3, 5$, where the zero component corresponds to a time-like dimension. We use the mostly-plus signature, i.e. $\eta_{\bar\mu\bar\nu}=\mathrm{diag}(-1,+1,+1,+1,+1) $. The unbarred Greek indices correspond to the usual four-dimensional space-time, $ \mu = 0,1,2,3 $, and Latin indices refer to the three-dimensional spatial coordinates, $ i = 1,2,3 $. We use the International System of Units.}: \begin{equation}\label{d-sigma} d\sigma^2 = \gamma_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}}\, , \end{equation} assuming that the metric tensor did not depend on the new fifth space-like component\footnote{Kaluza called this hypothesis the \textit{cylinder condition}. Using modern language, this means that translations in the fifth direction are isometries and hence that the five-dimensional space-time admits a space-like Killing vector field, namely $ \frac{\partial}{\partial x^5} $. Neither Klein nor de Broglie nor Rosenfeld mentioned this fact explicitly in their papers.\label{cylinder}} $ x^5 $.
It then follows that the allowed coordinate transformations are restricted to the following set: \begin{subequations} \begin{empheq} [left={\empheqlbrace} ]{align} x^{\mu} &= f^{\mu}\left( {x^0}^{\prime}, \, {x^1}^{\prime}, \, {x^2}^{\prime},\, {x^3}^{\prime}\right) \label{transform1}\\ x^5 &= x^{5\prime} + f_5\left( {x^0}^{\prime},\, {x^1}^{\prime}, \, {x^2}^{\prime},\, {x^3}^{\prime}\right) \, .\label{transform2} \end{empheq} \end{subequations} (\cite{Klein1-trad}; p. 11). After noting the invariance of $ \gamma_{55} $ under the coordinate transformations (\ref{transform1}) and (\ref{transform2}), Klein decided to set $ \gamma_{55}=\alpha $, where $ \alpha $ is a constant. In modern Kaluza-Klein theories $ \gamma_{55} $ is not a constant; it is a real scalar field depending on the transverse dimensions, called a dilaton field. As Lochlainn O'Raifeartaigh \cite{Raife2000} and other authors \cite{KK-rev} pointed out, Klein's choice is inconsistent, as we shall explain below after equations (\ref{eq-moto-penta}). Klein rewrote the line element (\ref{d-sigma}) in the following form: \begin{equation}\label{nota-d-sigma1} d\sigma ^2 = \alpha d\theta ^2 + ds^2 \, , \end{equation} \begin{center} where \end{center} \begin{equation}\label{nota-d-sigma2} d\theta = dx^5 + \frac{\gamma_{5\mu}}{\alpha}dx^{\mu}\quad ;\quad g_{\mu\nu} = \gamma_{\mu\nu} - \frac{\gamma_{5\mu}\gamma_{5\nu}}{\alpha}\quad ;\quad ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}\; \, . \end{equation} Citing Kramers' paper on stationary gravitational fields in four dimensions \cite{Kramers}, Klein noted that $ d\theta$, equation (\ref{nota-d-sigma2}), is invariant under the coordinate transformations (\ref{transform1}) and (\ref{transform2}).
In fact, following Kramers and remembering that $ \alpha=\gamma_{55} $, the invariance of $ d\theta $ is transparent if we rewrite it in the following way: $\displaystyle d\theta = dx^5 + \frac{\gamma_{5\mu}}{\gamma_{55}}dx^{\mu} = \frac{1}{\gamma_{55}}\gamma_{5\bar{\mu}}dx^{\bar{\mu}} $. As a consequence, Klein noted that the four components $ \gamma_{5\mu} $ transform as a four-vector of the four-dimensional space-time. Following Kaluza, Klein assumed that they would be proportional to the electromagnetic potentials $ A^\nu = (V;\vec{A}) $, introducing another parameter $ \beta $: \begin{equation}\label{def-beta} \frac{\gamma_{5\mu}}{\alpha} = \beta A_{\mu} \; , \end{equation} where we defined $ A_\mu = g_{\mu\nu}A^\nu $. We note that $ d\theta $ defined in equation (\ref{nota-d-sigma2}) is not an exact form and that it can be rewritten as: $ d\theta = dx^5 + \beta A_{\mu}dx^{\mu} $. Since both $ d\sigma^2 $ and $ d\theta^2 $ are invariant, it follows that $ ds^2 $ is invariant under the coordinate transformations (\ref{transform1}) and (\ref{transform2}). As a consequence $ g_{\mu\nu} $ can be interpreted as a four-dimensional metric. After having introduced the five-dimensional curvature scalar $ \tilde{R} $, defined in appendix \ref{app1}, Klein varied the five-dimensional Einstein-Hilbert action as usual in GR, with respect to the metric $ \gamma_{\bar{\mu}\bar{\nu}} $: \begin{equation}\label{variation} \delta_{\gamma}\mathcal{S}_5 = \delta_{\gamma} \int_{\Omega} \tilde{R}\sqrt{-\gamma} d^5x = \int_{\Omega} d^5x \frac{\delta\left( \tilde{R}\sqrt{-\gamma}\right) }{\delta \gamma_{\bar{\mu}\bar{\nu}}}\delta \gamma_{\bar{\mu}\bar{\nu}}\; , \end{equation} where the symbol $ \sqrt{-\gamma} $ represents the square root of the negative of the determinant of the metric and the integral is carried out over a closed region $ \Omega $, where boundary values of $ \gamma_{\bar{\mu}\bar{\nu}} $ are kept fixed.
From the principle of stationary action the five-dimensional Einstein equations follow: \begin{equation}\label{Klein-5-action} \delta_{\gamma}\mathcal{S}_5 = 0 \quad\Rightarrow\quad \tilde{R}_{\bar\mu\bar\nu}-\frac{1}{2}\gamma_{\bar\mu\bar\nu}\tilde{R}=0\;\, . \end{equation} It is worth noting that neither Klein nor any of the other authors we analysed considered the $ 55 $ component of equation (\ref{Klein-5-action}), because they fixed $ \alpha = \text{constant} $ before varying the action. Thanks to all the assumptions he made, equations (\ref{Klein-5-action}) are formally equivalent to the four-dimensional Einstein-like equations coupled to the four-dimensional Maxwell-like equations\footnote{See appendix \ref{app3c} for a detailed explanation of the formal equivalence in the context of Rosenfeld's work.}: \begin{subequations}\label{eq-moto-penta} \begin{empheq} [left={\empheqlbrace} ]{align} {R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R &= \frac{\alpha\beta^{2}}{2} T_{\mu\nu}^{em} \label{eq-kla} \\ \partial_{\mu}\left( \sqrt{-g}F^{\mu\nu} \right) &= 0 \label{eq-klb} \quad , \end{empheq} \end{subequations} where $ g $ is the determinant of $ g_{\mu\nu} $ defined in equation (\ref{nota-d-sigma2}). Choosing to set\footnote{In his following papers Klein would set $ \alpha = 1 $. In de Broglie's and Rosenfeld's paper both constants are present.} $ \alpha\beta^{2}=\frac{16\pi G}{c^4} $, where $ G $ and $ c $ are the Newton constant and the speed of light, respectively, Klein justified the identification of $ g_{\mu\nu} $ and $ F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu $ with our four-dimensional metric and with the electromagnetic tensor respectively. The electromagnetic stress-energy tensor that appears in (\ref{eq-kla}) is defined by: $ T_{\mu\nu}^{em}={F_\mu}^\alpha F_{\nu\alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta} $. The condition $ \alpha\beta^{2}=\frac{16\pi G}{c^4} $ implies $ \alpha > 0 $.
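From a modern perspective, the origin of the coupling constant in (\ref{eq-kla}) can be made transparent at the level of the action. With the cylinder condition and a constant $ \gamma_{55}=\alpha $, the five-dimensional Einstein-Hilbert Lagrangian reduces, up to total derivatives, to the standard Kaluza-Klein form (see e.g. \cite{KK-rev}; we quote this standard modern result only as a consistency check, it is not the route Klein followed): \begin{equation*} \tilde{R}\sqrt{-\gamma} = \sqrt{\alpha}\,\sqrt{-g}\left( R-\frac{\alpha\beta^{2}}{4}F_{\mu\nu}F^{\mu\nu}\right)\, , \end{equation*} so that the variation with respect to $ g_{\mu\nu} $ reproduces the Einstein equations (\ref{eq-kla}) with coupling $ \frac{\alpha\beta^{2}}{2} $, while the variation with respect to $ A_{\mu} $ reproduces the Maxwell equations (\ref{eq-klb}).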
This means that Klein introduced a space-like extra dimension motivated by the need to obtain the four-dimensional Einstein equations coupled with Maxwell's equations. Indeed, only a space-like coordinate, i.e. a positive constant $ \alpha $ in (\ref{eq-kla}), produces the correct coupling between electromagnetic and gravitational interactions. In this sense our four-dimensional World is a ``projection'' of a five-dimensional Universe. As indicated above, Klein's model is inconsistent if $ \alpha $ is constant. Indeed, if the dilaton is a non-trivial scalar function $ \alpha (x) $, the $ 55 $ component of equations (\ref{Klein-5-action}) is not trivial and it has the form $ \displaystyle \square \sqrt{\alpha}\sim \left( \sqrt{\alpha}\right)^3 F_{\alpha\beta}F^{\alpha\beta} $, where the four-dimensional operator $ \square $, when acting on a scalar function $ \alpha(x) $, is defined by $ \square \alpha =g^{\mu\nu}\nabla_{\mu}\partial_{\nu}\alpha $ for a curved four-dimensional space-time, where $ \nabla_{\mu} $ represents the covariant derivative. This means that a non-zero constant dilaton would imply the overly restrictive condition $ F_{\alpha\beta}F^{\alpha\beta} = 0 $, i.e. that the moduli of the electric and magnetic fields should be equal everywhere. As reported in \cite{KK-rev}, this inconsistency was noted by Pascual Jordan \cite{J1} and Yves Thiry \cite{T1} in 1947 and in 1948 respectively: all the authors of the period we are considering, including de Broglie and Rosenfeld, imposed the constancy of the dilaton and were not aware of this inconsistency. In order to reconcile this framework with WM, Klein's idea was to write a five-dimensional wave equation in a curved space-time, which was then to be connected with the classical four-dimensional Lorentz equation for a charged particle in the presence of gravitational and electromagnetic fields, in the so called geometrical optics limit.
The connection between the two equations, considered by all the authors that we shall analyse, is as follows\footnote{For a short review with some mathematical details see appendix \ref{geom-optic}. For a detailed technical explanation of Klein's approach see e.g. \cite{Raife2000}.}. In a geometrical optics approximation, the wave equation reduces to the classical HJ equation with a particular Hamiltonian function. After a Legendre transformation, the associated Lagrangian produces five equations of motion. The four equations transverse to the fifth coordinate can be reduced to the Lorentz equation for a charged massive particle. The Lagrangian approach shows that, in five dimensions, charged particles follow a geodesic motion. Klein himself explained this procedure in the introduction of his paper: `the equations of motion for the charged particles [..] take the form of equations of geodesic lines. If we explain these equations as wave equations because the matter is supposed to be a kind of wave propagation, we are almost naturally led to a partial differential equation of second order, which may be regarded as a generalization of the ordinary wave equation.' (\cite{Klein1-trad}; p. 10). This justifies Klein's idea, stated above, of connecting the wave equation with geodesic lines, and it also clarifies why WM had a prominent role in his approach to unifying GR with QM. In order to write an equation that generalizes Schr\"odinger's equation, Klein followed an analogy with light. The equation he found resembles a massless Klein-Gordon (KG) equation\footnote{Given a scalar field $ \phi $ of mass $ m $, the KG equation is $ \square\phi = \frac{m^2 c^2}{\hbar^2}\phi $.}, which the author called `our equations for the light wave' (\cite{Klein1-trad}; p. 17).
The Swedish physicist was forced to introduce a symmetric tensor $ a_{\bar\mu\bar\nu} $, whose contravariant components are fixed by the requirement of connecting the five-dimensional wave equation with the four-dimensional Lorentz equation for massive charged particles, as we shall see below. Klein's wave equation reads: \begin{equation}\label{KG1} a^{\bar{\mu}\bar{\nu}}\left( \delta^{\bar\sigma}_{\bar\nu}\frac{\partial}{\partial x^{\bar{\mu}}} -\Gamma_{\bar{\mu}\bar{\nu}}^{\bar{\sigma}}\right)\partial_{\bar{\sigma}}\Psi= a^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\Psi=0\; , \end{equation} where he introduced the covariant derivative $ \nabla_{\bar{\mu}} $ using the Christoffel symbols $ \Gamma_{\bar{\mu}\bar{\nu}}^{\bar{\sigma}}$, because Klein considered a wave function living on a curved five-dimensional Riemannian manifold. This means that Klein's wave function is different from Schr\"odinger's wave function, which lives in configuration space. In this respect, Klein's $ \Psi $ resembles a classical scalar field. From a modern point of view, the introduction of $ a^{\bar\mu\bar\nu} $ sounds strange, because the covariant derivative is usually contracted with the contravariant components of the metric $ \gamma^{\bar\mu\bar\nu} $, which are different from $ a^{\bar\mu\bar\nu} $, as we shall see below. It is worth noting that Klein did not start from a variational principle to obtain his wave equation. He simply wrote a light-like wave equation. The hypothesis that the wave function would be periodic with respect to the fifth coordinate $ x^5 $ makes it possible to ``project'' equation (\ref{KG1}) and to obtain the KG wave equation\footnote{Klein and all the authors we consider in the present paper were convinced, at that time, that the relativistic wave equation for the electron would be the KG equation, instead of Dirac's equation.
It is worth remembering that Pauli matrices were introduced in the same year \cite{Pauli} and that Dirac's equation would be published one year later \cite{Dirac}.}. See appendix \ref{app2} for an explanation of the use of the periodicity condition in the context of de Broglie's work. How did Klein justify the analogy with light? In \cite{Klein-life} the author recollected: `[...] for some time I had played with the idea that \textit{waves representing the motion of a free particle had to be propagated with constant velocity, in analogy with light waves} - but in a space of four dimensions - so that the motion we observe is a projection on our ordinary three-dimensional space of what is really taking place in four-dimensional space. [emphasis added]' (\cite{Klein-life}; p. 108). The introduction of the symmetric tensor $ a^{\bar{\mu}\bar{\nu}} $ served this specific purpose. Klein's conviction was reinforced by the fact that in the long wavelength limit equation (\ref{KG1}) reduces to the eikonal equation for light rays. As a consequence, Klein imposed that in the semi-classical limit the four-dimensional motion of charged particles with mass $ m $ in the presence of a gravitational and electromagnetic field should be described by five-dimensional null-geodesics of the following differential form: \begin{equation}\label{def-a} d\hat{\sigma}^2 = a_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} = \frac{1}{m^2c^2}d\theta^2 + ds^2 \end{equation} (\cite{Klein1-trad}; p. 17) and showed that the corresponding geodesic equation is equivalent to the four-dimensional Lorentz equation.
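The geometrical optics limit invoked here can be summarized in modern notation (a condensed version of the computation spelled out in appendix \ref{geom-optic}): inserting the Ansatz $ \Psi = C\, e^{\frac{i}{\hbar}S} $ into equation (\ref{KG1}) and keeping only the leading order in $ \hbar $, the wave equation reduces to \begin{equation*} a^{\bar{\mu}\bar{\nu}}\,\partial_{\bar{\mu}}S\,\partial_{\bar{\nu}}S = 0\, , \end{equation*} i.e. the HJ equation of a five-dimensional null ray, whose characteristics are precisely the null geodesics of the line element (\ref{def-a}).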
It seems that Klein introduced a different metric for the microscopic world, $ a_{\bar\mu\bar\nu} $, whose components can be obtained from equation (\ref{def-a}), namely: \begin{equation}\label{metric-K} a_{\mu\nu} = g_{\mu\nu}+\frac{e^2}{m^2c^4}A_\mu A_\nu \quad\quad a_{\mu 5}=\frac{e^{2}}{m^2c^3\beta}A_\mu \quad\quad a_{55}= \frac{e^2}{m^2c^4\beta^2} \;\, , \end{equation} and which is quite unlike the space-time metric $ \gamma_{\bar{\mu}\bar{\nu}} $, cf. eq. (\ref{metric-K}) with (\ref{nota-d-sigma2}) and (\ref{nota-d-sigma1}), but he made no comments on this choice. It is worth noting that the particle's mass $ m $ and its charge $ e $ are hidden in the expressions of the $ a_{\bar{\mu}\bar{\nu}} $ tensor. To show the correspondence between five-dimensional null-geodesics and four-dimensional motion of charged particles, Klein considered the corresponding Lagrangian picture, by projecting the equations of motion obtained by varying the Lagrangian $\displaystyle L = \frac{1}{2} a_{\bar\mu\bar\nu}\frac{dx^{\bar\mu}}{d\hat{\lambda}}\frac{dx^{\bar\nu}}{d\hat{\lambda}} $, where $ \hat{\lambda} $ is an arbitrary parameter. One of the five resulting Euler-Lagrange equations states that the momentum conjugate to the coordinate $ x^5 $ is conserved, while the other four equations are equivalent to the Lorentz equation for an electron\footnote{Technical details of the equivalence are given in appendix \ref{geom-optic}.} (charge $ q=-e $): \begin{equation}\label{eq-lorentz} mc\left( \frac{d}{d\tau}\left( g_{\mu\nu}u^{\nu}\right) -\frac{1}{2}\partial_\mu g_{\rho\nu}u^\rho u^\nu\right) =-\frac{e}{c}\left( \partial_\mu A_\nu-\partial_{\nu}A_{\mu}\right) u^\nu\, , \end{equation} where the four-dimensional proper time $ \tau $ is defined by $ d\tau= \sqrt{-ds^2}$, and the four-velocity of the particle is defined by $ \displaystyle u^{\mu}=\frac{dx^\mu}{d\tau} $.
The analogy with light forced Klein to look for a correspondence between five-dimensional null-geodesics and four-dimensional paths: this conclusion would be criticized by de Broglie. Before going on, it is worth noting that equation (\ref{eq-lorentz}) can be obtained, as Klein did, without fixing the constant\footnote{See appendix \ref{app2} for technical details in the context of de Broglie's work.} $ \beta $ introduced in (\ref{def-beta}). In his first paper, Klein decided to set $ \beta = \frac{e}{c}$ and consequently the value of $ \alpha $ had to be $ \alpha=\frac{16\pi G}{e^2c^2} $. In his second paper \cite{Klein2}, a brief communication to \textit{Nature}, it seems that Klein had changed his mind about the role of null-geodesics. In fact he explicitly referred to `the equation of geodetics' (\cite{Klein2}; p. 516) of the line element\footnote{In this brief communication Klein introduced a different notation and decided to set $ \alpha = 1 $ from the beginning and consequently $ \beta = \sqrt{\frac{16\pi G}{c^4}} $: this simply means that now the fifth coordinate has a dimension of length. \label{convention2}} $ d\sigma^2 $. Furthermore, he suggested starting from the new Lagrangian $\displaystyle L^{\prime} = \frac{m}{2} \gamma_{\bar\mu\bar\nu}\frac{dx^{\bar\mu}}{d\tau}\frac{dx^{\bar\nu}}{d\tau} $, where the $ a_{\bar\mu\bar\nu} $ tensor has disappeared, and the mass and the presence of the proper time $ \tau $ indicate that Klein did not refer to null-geodesics\footnote{From a modern point of view, even in the massive case, the Lagrangian $ L^{\prime} $ should be written by introducing the arbitrary parameter $ \hat{\lambda} $. The proper time $ \tau $ can be introduced because the ratio $ \frac{d\hat{\lambda} }{d\tau}$ is constant, as we shall show in appendix \ref{app2}, discussing de Broglie's approach. We take this as an implicit indication that Klein no longer considered null paths.}.
This brief communication is important, because Klein noted that the quantization of the momentum along the periodic fifth dimension\footnote{The momentum connected with the quantization of the electric charge is $ p_5 $, the momentum conjugate to the fifth dimension, namely $ \displaystyle p_5 = \frac{\partial L^{\prime}}{\partial \left( dx^5 /d\tau\right)}\; $.\label{nota-hat-lambda}} of finite size $ l $ could be connected with the quantization of the electric charge. In fact the momentum's quantization along the fifth dimension forces the size $ l $ to assume a precise value: \begin{equation}\label{Klein-l} l=\frac{hc\sqrt{2\kappa}}{e}, \end{equation} where $\displaystyle \kappa = \frac{8\pi G}{c^{4}}$. As we shall see, as far as we know, neither de Broglie nor Rosenfeld explicitly fixed both parameters, nor did they make any explicit considerations on the size of the fifth dimension. \subsection{De Broglie's contribution}\label{section-deBroglie} As mentioned in the introduction, during his stay in Paris Rosenfeld drew de Broglie's attention to Klein's approach. From de Broglie's point of view, the analogy with light was not the correct perspective to describe the path of massive particles. In order to explain the conclusion reached by de Broglie, we emphasize again that Klein, de Broglie and Rosenfeld developed the five-dimensional Universe for different reasons. De Broglie's paper analyses the features of the five-dimensional approach from two distinct perspectives: the classical and the quantum point of view. In the first part of de Broglie's paper, the author described how the most attractive advantage of the classical five-dimensional approach would reside in the fact that it allowed one to geometrize all the forces known at that time, i.e. the gravitational and the electromagnetic forces. The author made an analogy between Einstein's approach and the five-dimensional construction.
De Broglie interpreted Einstein's theory as a geometrical description of the gravitational force and Kaluza's approach as an extension of this geometrical description to Maxwell's theory\footnote{Here and in the following, we present an English translation of some parts of the original paper, written in French.}: `The main consequence of the introduction of the equivalence principle is that the metaphysic notion of force in the theory of gravitation disappears. The path followed by a point particle in a gravitational field can be defined, thanks to Einstein's conceptions, as the geodesic line of the space-time. [...] The success of this beautiful interpretation of the gravitational field temptingly suggests to throw out the concept of force from the Physics, in order to replace it with the concept of geometry.' (\cite{deBroglie}; p. 65). In the second part of the paper, de Broglie introduced the description of the quantum behaviour of matter using wave/particle duality. From this perspective, there are no forces associated with the particles' wave function, hence neither a geometrical description nor an analogy with light was needed. De Broglie explicitly stated that `With the present state of our knowledge it seems that all the forces of which we are aware can be reduced to only two: the gravitational and electromagnetic forces.' (\cite{deBroglie} p. 65). It is worth noting that the quantum force concept emerged with the introduction of quantum fields. Unlike Klein, de Broglie introduced a wave equation describing quantum particles' dynamics, i.e. the KG equation, in four dimensions: in the geometrical optics approximation the wave's rays would follow the classical trajectories for massive particles. Hence a five-dimensional generalization of the KG equation would not require any analogy with light. It is important to stress that de Broglie did not use any variational principle to describe the wave's dynamics.
With this premise in mind we first consider de Broglie's approach in more detail. De Broglie briefly reviewed Klein's approach and introduced the line element (\ref{d-sigma}) with Klein's Ansatz, which we rewrite here for convenience: \begin{eqnarray}\label{d-sigma2} d\sigma ^2 &=& \alpha d\theta ^2 + ds^2 \, , \label{nota-d-sigma1b}\\ &\text{where}&\nonumber\\ d\theta = dx^5 + \beta A_{\mu}dx^{\mu}\quad ;\quad g_{\mu\nu} &=& \gamma_{\mu\nu} - \frac{\gamma_{5\mu}\gamma_{5\nu}}{\alpha}\quad ;\quad ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}\; \, \label{nota-d-sigma2b} \end{eqnarray} (We have adapted de Broglie's notation, changing the symbols he used.) Let the values of $ \alpha $ and $ \beta $ be unfixed for the moment. De Broglie's choice will be analysed after equation (\ref{Iquadro}). At this point, de Broglie's and Klein's paths separate. As we said, de Broglie did not consider any analogy with light, hence he studied the geodesic equations in five dimensions for massive particles. As for Klein, the key idea is that our world would be a projection onto a four-dimensional manifold of what happens in the five-dimensional Universe. The four-dimensional geodesic equation is obtained by the following variational principle\footnote{Because of our mostly-plus signature, the four-dimensional action for a point particle involves the proper time $ \tau $.}: \begin{equation}\label{azione4} \delta S_4=0\quad\Rightarrow\quad\delta\, \int_{O}^{M} d\tau = 0\, , \end{equation} where $ O $ and $ M $ are `two fixed points of the world line' (\cite{deBroglie}; p. 69). De Broglie considered its natural generalization in five dimensions: \begin{equation}\label{azione5} \delta S_5=0\quad\Rightarrow\quad\delta\int_{O}^{M} d\hat{\tau} = 0\, , \end{equation} where we introduced the notation $ d\hat\tau = \sqrt{-d\sigma^{2}} $.
The geodesic equations following from (\ref{azione5}) are equivalent to the five-dimensional equations obtained by Klein with the help of the $ a_{\bar{\mu}\bar{\nu}} $ tensor he introduced in his first paper\footnote{See appendix \ref{app2} for a detailed explanation of the original derivation. As we said, Klein was certainly aware of this fact, because he changed his own approach to the geodesics in the brief communication to \textit{Nature}. It is worth noting that de Broglie never cited Klein's \textit{Nature} paper.}, and their four-dimensional projection reproduces equation (\ref{eq-lorentz}). In order to obtain the correct Lorentz equations, de Broglie set \begin{equation}\label{ratio} \alpha\frac{d\theta}{d\tau} = -\frac{e}{\beta c}\frac{1}{mc}\; , \end{equation} underlining the importance of this equation. Indeed, from de Broglie's point of view, equation (\ref{ratio}) suggests a geometrical interpretation of the ratio $ \frac{e}{m} $. Let us consider, following de Broglie, `a coordinate line $ x^5 $' (\cite{deBroglie}; p. 68) and using $ d\tau =\sqrt{-ds^2} $ and $ d\hat{\tau} = \sqrt{-d\sigma^2} $ we rewrite equation (\ref{nota-d-sigma1b}) as follows: \begin{equation}\label{d-sigma3} d\hat{\tau}^2 = d\tau^2 +\left| \alpha\right| d\theta^2\; . \end{equation} We use $\left| \alpha \right| $, because de Broglie set $ \alpha <0 $, a choice that we shall discuss after equation (\ref{Iquadro}). `Let us represent, on a point $ P $ of this coordinate line, a part of a plane $ \pi $ inclined with respect to the $ x^5 $ direction, which represents a little portion of the four-dimensional hypersurface $ x^5 = const. $ passing through the point $ P $. Let $ \overline{PQ} $ be an element of a world line of length $d\hat{\tau} $ and let $ \overline{PS} $ and $ \overline{PR} $ be its projections along the $ x^{5} $ direction and orthogonal to the $ x^5 $ direction respectively.
From equation (\ref{d-sigma3}) it follows that \begin{equation}\label{projection} \overline{PS} = \sqrt{\left| \alpha\right| } d\theta \;\text{;}\qquad\overline{PR} = d\tau\; . \end{equation} [...] the tangent of the angle $ \widehat{QPR} $, namely $\displaystyle \frac{\sqrt{\left| \alpha\right| } d\theta}{d\tau} $, is proportional to the ratio $ \frac{e}{m} $ where $ e $ and $ m $ are the charge and the mass of the particle of which $ \overline{PQ} $ is the element of the world line. Hence the world line of every moving object makes the same angle with the direction $ x^5 $ at each point, which angle is straight if the electric charge is zero.' (\cite{deBroglie}; p. 68)\footnote{With the choice $ \alpha > 0 $, the ratio would define the hyperbolic tangent of the angle.}. This result supported de Broglie's conviction that the five-dimensional Universe could provide a geometrical description for all of the known physical concepts. Rosenfeld would continue to use this idea, as we shall see in the discussion after equation (\ref{slope}). De Broglie asked himself what the exact form of the action $ S_5 $ to be varied would be in order to obtain a five-dimensional generalization of the four-dimensional massive particle's action. De Broglie stressed that he wanted to obtain, in the case of zero charge, the usual action $\displaystyle S_4 = -mc\int_{O}^{M} d\tau $ (\cite{deBroglie}; p. 70) and he proposed that the five-dimensional particle's action should be\footnote{We skip over some technical details. 
See appendix \ref{app2} for de Broglie's original proof that $ S_5 $ reduces to $ S_4 $ in the case of null charge.}: \begin{equation}\label{action-dB} S_5 =-\mathcal{I} \int_{O}^{M} d\hat\tau\, , \end{equation} where the quantity $ \mathcal{I} $ satisfies the following relations \begin{equation}\label{relazioni} \mathcal{I}\alpha \frac{d\theta}{d\hat\tau} = -\frac{e}{c\beta}\, , \quad\quad \mathcal{I}\frac{d\tau}{d\hat\tau} = mc \, , \end{equation} and has the following form: \begin{equation}\label{Iquadro} \mathcal{I}=\sqrt{m^2c^2 - \frac{e^2}{\alpha\beta^{2} c^{2}} }\, . \end{equation} The invariant $ \mathcal{I} $ requires some comments, connected with de Broglie's choice of the values of $ \alpha $ and $ \beta $. De Broglie implicitly set \begin{equation}\label{alfa-dB} \alpha\beta^{2}=-\frac{16\pi G}{c^{4}}\; , \end{equation} from the beginning of his paper. As a consequence, $\displaystyle \mathcal{I}_{dB} = \mathcal{I}\left( \alpha\beta^{2}=-\frac{16\pi G}{c^{4}}\right)$ is a real constant: \begin{equation}\label{Iquadro-dB} \mathcal{I}_{dB}=\sqrt{m^2c^2 + \frac{e^2c^2}{16\pi G} }\, , \end{equation} and comparing $ S_4 $ and $ S_5 $, de Broglie suggested that it should be interpreted as the modulus of the five-dimensional momentum $ P_{\bar{\mu}} $ for charged particles, defined in analogy with the four-dimensional momentum $\displaystyle p_{\mu}=mcg_{\mu\nu}\frac{dx^\nu}{d\tau} $ for uncharged particles in four dimensions, namely $\displaystyle P_{\bar{\mu}} = \gamma_{\bar{\mu}\bar{\nu}}\mathcal{I}_{dB}\frac{dx^{\bar{\nu}}}{d\hat{\tau}}$ . To be more explicit, referring to the geometrical picture discussed above, de Broglie asserted that relations (\ref{relazioni}) should be interpreted as the tangential and orthogonal components of the five-dimensional momentum $ P_{\bar{\mu}} $ with respect to the fifth direction $ x^5 $ (\cite{deBroglie}; p. 70, note $ (^1) $).
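It is instructive to attach numbers to these quantities. The following sketch (our own illustration in Gaussian units, not contained in the original papers) evaluates de Broglie's invariant (\ref{Iquadro-dB}) and the world-line slope of the geometrical picture above for an electron, together with Klein's length (\ref{Klein-l}); the slope formula $ \tan\widehat{QPR}=e/(m\sqrt{16\pi G}) $ follows from equation (\ref{ratio}) with $ |\alpha|\beta^{2}=16\pi G/c^{4} $:

```python
import math

# Gaussian (cgs) constants and electron data -- illustrative values only
h = 6.626e-27     # Planck constant [erg s]
c = 2.998e10      # speed of light [cm/s]
G = 6.674e-8      # Newton constant [cm^3 g^-1 s^-2]
e = 4.803e-10     # elementary charge [esu]
m = 9.109e-28     # electron mass [g]

# de Broglie's invariant  I_dB = sqrt(m^2 c^2 + e^2 c^2 / (16 pi G))
mass_term = (m * c) ** 2
charge_term = e**2 * c**2 / (16.0 * math.pi * G)
I_dB = math.sqrt(mass_term + charge_term)

# Slope of a charged world line with respect to x^5: tan = e / (m sqrt(16 pi G));
# consistency with the projection picture: I_dB = m c sqrt(1 + tan^2)
slope = e / (m * math.sqrt(16.0 * math.pi * G))
assert abs(I_dB - m * c * math.sqrt(1.0 + slope**2)) < 1e-9 * I_dB

# Klein's length  l = h c sqrt(2 kappa) / e  with  kappa = 8 pi G / c^4
kappa = 8.0 * math.pi * G / c**4
l = h * c * math.sqrt(2.0 * kappa) / e

print(f"charge/mass term ratio : {charge_term / mass_term:.1e}")  # ~ 1e41
print(f"I_dB / (m c)           : {I_dB / (m * c):.1e}")           # ~ 3e20
print(f"Klein length l         : {l:.1e} cm")                     # ~ 0.8e-30 cm
```

The charge term in (\ref{Iquadro-dB}) dominates by some forty orders of magnitude, so the effective five-dimensional ``mass'' $ \mathcal{I}_{dB}/c $ is some twenty orders of magnitude larger than the electron mass, and the electron's world line is almost parallel to the $ x^5 $ direction; the size of the fifth dimension comes out of order $ 10^{-30} $ cm, consistent with the estimate usually attributed to Klein.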
We will return to this interpretation when discussing Rosenfeld's work, see the discussion after equation (\ref{slope}). Equation (\ref{alfa-dB}) means that unlike Klein, de Broglie imposed that the fifth dimension would be a time-like coordinate, because from equation (\ref{alfa-dB}) it follows that $ \gamma_{55}=\alpha <0 $. De Broglie made no explicit comment on the time-like character of the fifth dimension. As we shall see, Klein noted that this choice was inconsistent with other demands of the model. Rosenfeld would be strongly influenced by de Broglie's ideas, but he was aware of this inconsistency. After having specified this fundamental difference between the two approaches, let us now return to de Broglie's considerations. After having established that the Lorentz equations (\ref{eq-lorentz}) can be obtained by varying\footnote{See appendix \ref{app2}.} $ \mathcal{S}_{5} $, de Broglie declared: `\textit{The notion of force has been banned completely from Mechanics.}' (\cite{deBroglie}; p.70), emphasizing his original aim. As a consequence he proposed the following wave equation as a generalization of the Schr\"odinger wave equation, instead of (\ref{KG1}), namely \begin{equation}\label{KG4} \gamma^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\Psi\,=\,\frac{4\pi^2}{h^2}\mathcal{I}_{dB}^{2} \Psi \; , \end{equation} where now the covariant derivative is correctly contracted with the metric. Equation (\ref{KG4}) could resemble a KG equation in five dimensions, where $ \frac{\mathcal{I}_{dB}}{c} $ plays the role of the mass in five dimensions, because it is a real quantity. It is worth noting that the identification of $ \Psi $ as a wave function prevents the identification of $ \mathcal{I}_{dB} $ with a mass term in the sense of modern field theory.
Using the fact that the action $ S_5 $ can be rewritten as follows \begin{equation}\label{action-dB1} S_5=-\int_O^M \frac{e}{c\beta}dx^5+ \frac{e}{c}\int_O^M A_\mu dx^{\mu} -mc\int_O^M d\tau \; , \end{equation} de Broglie could show that equation (\ref{KG4}) is equivalent to the four-dimensional KG equation for massive particles, which reduces to the Schr\"odinger equation in the non-relativistic limit. In order to demonstrate his claim, de Broglie introduced the geometrical optics approximation, writing the five-dimensional wave function $ \Psi $ as \begin{equation}\label{deB-optic} \Psi = Ce^{\frac{i}{\hbar}S_5} = f(x,y,z,t)e^{\frac{i}{\hbar}\frac{ex^5}{c\beta}} \end{equation} (\cite{deBroglie}; p. 72), where $ C $ is a constant and $ S_5 $ is the five-dimensional action defined in (\ref{action-dB1}). It is worth noting that de Broglie considered $ S_5 $ as a Hamiltonian action. This means that he interpreted the five-dimensional action as a ``Jacobi function''. As we will see, De Donder would be more explicit on this point. At this point, de Broglie expressed his opinion on the analogy with light introduced by the Swedish physicist: `O. Klein writes the equation (\ref{KG4}) without the second member, and he concludes that the world-lines must be null-geodesics; it is in our opinion that the second term of (\ref{KG4}) is fundamental and that the world-lines are still geodesics, but not null-geodesics' (\cite{deBroglie}, p. 72; we modified the number of the cited equation to match our numbering). Before going on we return to the question of the fifth dimension's size, which was never calculated by de Broglie. Indeed, the author commented on the size of the fifth dimension as follows: `The variations of the fifth coordinate completely escape our senses [...] two points that differ only for the value of the fifth coordinate are indistinguishable from our point of view' (\cite{deBroglie}; p.67).
But from these observations, de Broglie inferred, like Klein, that the components of the metric $ \gamma_{\bar{\mu}\bar{\nu}} $ must be independent from the fifth coordinate and that `the only humanly possible transformations have the following form: \begin{equation}\label{tir} {x'}^\mu = f^\mu\left( x^0,\, x^1,\, x^2,\, x^3 \right)\quad \text{'} \end{equation} (\cite{deBroglie}; p.67). Had de Broglie chosen $ \alpha\beta^2=2\kappa $, i.e. a space-like dimension, he would have been able to read off the size of the compact dimension. Indeed, after noting that\footnote{Remember that de Broglie chose a negative value for $ \alpha $. We suppose that for this reason he never noted this fact.} $\tilde{x}^5 = \sqrt{\alpha}x^5 $ has the dimension of a length, the dependence on the fifth dimension in (\ref{deB-optic}) can be rewritten as \begin{equation} \frac{i}{\hbar}\frac{ex^5}{c\beta}=\frac{i}{\hbar}\frac{e}{c\sqrt{\alpha}\beta}\sqrt{\alpha}x^{5}=\frac{i}{\hbar}\frac{e}{c\sqrt{2\kappa}}\sqrt{\alpha}x^{5}=i\frac{\tilde{x}^5}{\tilde{l}}\, , \end{equation} where $ \tilde{l}=\frac{\hbar c\sqrt{2\kappa}}{e} $ is Klein's length (\ref{Klein-l}) divided by $ 2\pi $, showing that Klein's length determines the periodicity. De Broglie was very impressed by equation (\ref{KG4}) and he concluded his paper with the following remark: `For studying the problem of matter and of its atomic structure deeply, it would be necessary to perform a systematic analysis of the five-dimensional Universe's point of view that seemed to be more promising than Weyl's approach. If we understand how to interpret correctly the role played by the constants $ e $, $ m $, $ c $, $ \hbar $ and $ G $ in equation (\ref{KG4}), we will have finally grasped one of the most mysterious secrets of Nature.' (\cite{deBroglie} p.73). Klein's answer to the question of null-geodesics arrived immediately \cite{Klein-risp-deBroglie}.
He noted that in equation (\ref{KG4}) de Broglie used the metric $ \gamma^{\bar{\mu}\bar{\nu}} $ instead of his ``artificial'' tensor $ a^{\bar{\mu}\bar{\nu}} $: inserting the components of $ a^{\bar{\mu}\bar{\nu}} $ in (\ref{KG4}), Klein showed that the equations (\ref{KG4}) and (\ref{KG1}) were equivalent. This fact is not surprising, because the particle's mass is hidden in the expression of the $ a^{\bar{\mu}\bar{\nu}} $ tensor\footnote{See discussion after equation (\ref{def-a}).}. Klein also noted that the condition on the parameters $ \alpha\beta^2=2\kappa $ was incompatible with the choice of a time-like fifth dimension\footnote{In appendix \ref{app1} we will analyse Klein's claims in more detail.}. But he concluded the brief communication with a positive comment on de Broglie's assertion: `...this \textit{error} has no influence on de Broglie's result [emphasis added]'\footnote{Klein's assertion referred to the fact that irrespective of the nature of the fifth coordinate, after having used the periodicity condition, the term with the Newton constant in (\ref{KG4}) disappears and it reduces to the KG equation. See appendix \ref{app2}, the discussion after equation (\ref{psi1}) for a detailed explanation.} (\cite{Klein-risp-deBroglie}; p. 243). It is worth noting that in his subsequent papers Klein would stress the need to introduce a space-like fifth dimension\footnote{See appendix \ref{app1} for technical details.} (\cite{Klein5}; p. 206, footnote *). Nevertheless, after de Broglie's paper, Klein explicitly abandoned the analogy with light. \subsection{De Donder's lectures on gravitation} In the works we have analysed so far, neither Klein nor de Broglie tried to obtain their wave equations from a unified variational principle.
In fact, they introduced only the particle's Lagrangian, in order to describe the classical particle's dynamics\footnote{As we shall note in the next section, Klein's last paper would contain a five-dimensional variational principle to derive WM (\cite{Klein5}; p. 201), which is slightly different from Rosenfeld's variational principle.}. The Belgian physicist Th\'eophile De Donder was an early supporter of variational principles, developing the purely formal parts of the calculus of variations and analysing e.g. the effect of transformations of coordinates and parameters upon what he called ``invariants'' and upon other expressions which occur in the theory of the variational calculus \cite{DD-var}. As we shall see, De Donder's ``invariants'' would correspond to our modern Lagrangian density. He also tried to derive WM from a unified variational principle. He did not consider a multidimensional world, because he was content to write a unified Lagrangian involving the gravitational field, the Maxwell field and a Lagrange function for the quantum particle. De Donder tried to present a coherent framework for relativistic Lagrangian dynamics in the context of curved spaces, and he was one of the first to note the role of the HJ equation as a constraint in this context. In his first paper, Rosenfeld mainly followed De Donder's approach to introduce the wave function in the five-dimensional Universe, as we shall see later. During the Spring of 1926, De Donder gave a series of lectures at MIT. In these lectures, which would be published the following year \cite{DeDonder2}, the Belgian physicist gathered together all the results he had just published in the \textit{Comptes Rendus} journal. The lectures contain all the original references, with one advantage: \textit{Comptes Rendus} publications were often brief communications, whereas the lectures gave a complete overview of De Donder's point of view. For this reason we will refer to his MIT lectures.
We stress that this paragraph is a brief analysis of the ideas that influenced Rosenfeld. A deeper understanding of De Donder's methods goes beyond the goals of the present paper. The Belgian physicist tried explicitly to apply GR to the microscopic world. At the end of the first lecture, the general introduction, De Donder wrote: `We then say a few words about the mysterious quantum. To shed some light on this obscure physical entity, we shall deduce at first from relativistic electrodynamics expressed by means of points in space-time, the dynamics of an atomic or molecular system of any number of degrees of freedom. We shall then devise a general method of quantization in space-time, which we shall apply to the quantization of the point electron and to that of \textit{continuous} systems: It will be shown that this quantization is a logical consequence of our gravific theory [...]'\footnote{De Donder used the old term `gravific theory' instead of `gravitational theory'.} (\cite{DeDonder2}; p. 8). This comment is important for two reasons. First, it emphasizes again that the problem of reconciling quantum physics and GR was considered early in the history of quantum physics. Secondly, De Donder developed his approach during the birth of QM and it is a ``spurious'' approach in the following sense. Before 1925 the quantization of a system was performed using the Epstein-Sommerfeld-Wilson rules, and a system like `the point electron', as De Donder referred to it, would follow a classical trajectory. He agreed with this interpretation and in this sense, from our point of view, his approach belongs to the old quantum theory. But De Donder knew Schr\"odinger's papers and he explicitly stated that he was looking for new quantization rules that should be compatible with the curved space-time of Einstein theory.
These rules would have to reproduce, in his opinion, the general relativistic generalization of Schr\"odinger's equation\footnote{Once again the reference was to the KG equation.}. This means that with the phrase `general method of quantization in space-time' De Donder meant a procedure to obtain a wave equation for the wave function $ \psi $, living on a curved background. As far as we know, De Donder never referred to $ \psi$ as a field. For this reason we could say that De Donder was looking for a ``General Relativistic Quantum Mechanics'' (GRQM). In WM a key ingredient of the quantization procedure was the imposition of boundary conditions for the wave function. As far as we know, De Donder never considered any boundary conditions explicitly. As we will see, his method was based on a unified variational principle, but De Donder's $ \psi $ was treated, from our point of view, classically. This means also that, from the modern field theoretic point of view, he did not consider any quantum feature of the fields. Lastly, it is worth noting that De Donder was not alone in believing that quantization rules could be derived in the context of some unknown classical theory. Einstein, for example, would look for a classical field theory (Einheitliche Feldtheorie) for the rest of his life \cite{Pais}. We do not know why De Donder was convinced of this idea, but because of the absence of a discussion on the wave function's boundary conditions, as we shall discuss after equation (\ref{quantDD}), the unified variational principle seemed not to require any modification of GR. For this reason, De Donder thought that the quantization rules should have been a consequence of GR principles, as he stated in the introduction cited above. This attitude is consistent with the claim that De Donder belongs to the group of authors who were convinced of GR's supremacy.
This conviction is confirmed by the last sentence of the general introduction to his MIT lectures: `Once more relativity unfolds the great physical drama of the universe clad in an immutable form bearing the stamp of eternal laws.' (\cite{DeDonder2}; p. 8). This means also that from a modern point of view, in his approach De Donder did not consider any quantum effect on the gravitational field. This fact was common to almost all the pre-1930 works: as far as we know Rosenfeld's approach was the only exception. We introduce some technical details in order to understand how De Donder tried to harmonize WM with GR. The tenth lecture is dedicated to the `Relativistic Quantization', and it started from the classical dynamics of a charged particle in GR, i.e. the `point-electron'. The dynamics is described by the Euler-Lagrange equations obtained using the following Lagrangian\footnote{The ``Lagrangian'' used by De Donder had the dimensions of a Lagrangian divided by a velocity and the same happens for the following ``Hamiltonian'' (\ref{hamil-DD}), but we will call them Lagrangian and Hamiltonian as well.} (\cite{DeDonder2}; p. 90): \begin{equation}\label{az-DD} L_{DD} \left( x ;\, u \right) = \frac{mc}{2}g_{\mu\nu} u^{\mu}u^{\nu}-\frac{e}{c}A_{\mu}u^{\mu} , \end{equation} where $ \displaystyle u^{\mu}= \,\frac{dx^{\mu}}{d\tau} $, $ \tau $ is the proper time, and the tangent vector satisfies the following constraint: \begin{equation}\label{constraint} g_{\mu\nu}u^{\mu}u^{\nu}=-1 . \end{equation} Using $ L_{DD} $, De Donder was able to define the conjugate momenta as $\displaystyle p_\mu=\frac{\partial L_{DD}}{\partial u^\mu}=mcg_{\mu\nu}u^{\nu}-\frac{e}{c}A_\mu $, and the Hamiltonian $ H =p_\mu u^\mu-L_{DD} $ reads: \begin{equation}\label{hamil-DD} H = \frac{1}{2mc}\left( p_\mu +\frac{e}{c}A_\mu\right)\left( p^\mu +\frac{e}{c}A^\mu\right) \, . \end{equation} The constraint $ g_{\mu\nu}u^{\mu}u^{\nu}=-1 $ is equivalent to the relation $H =-\frac{1}{2}mc $, i.e. 
the reduced HJ equation for a point particle, which De Donder called the `Jacobian equation'. Finally, by using equation (\ref{hamil-DD}), the constraint assumes the following form (\cite{DeDonder2}; p. 91, equation (10)): \begin{equation}\label{eq-cl} g^{\mu\nu}\left( \frac{\partial S}{\partial x^\mu} +\frac{e}{c}A_\mu\right) \left( \frac{\partial S}{\partial x^\nu} +\frac{e}{c}A_\nu\right) +m^2c^2 =0 \qquad ,\qquad \frac{\partial S}{\partial x^\mu}=p_\mu, \end{equation} where $ S $ is the Jacobi function of classical mechanics. Before going on, we point out that De Donder was aware of the following fact. Using $\displaystyle S_4 = -mc\int_O^M d\tau $ as action for the free point-particle, the Lagrangian approach could be performed by introducing an arbitrary parameter $ \hat{\lambda} $ and rewriting $ S_4 $ as follows: \begin{equation} S_4 = \int_O^M L\, d\hat{\lambda} = -mc\int_O^M \sqrt{-g_{\mu\nu}\frac{dx^{\mu}}{d\hat{\lambda}}\frac{dx^{\nu}}{d\hat{\lambda}}}\, d\hat{\lambda}\; . \end{equation} In this case, a Legendre transform would produce a null Hamiltonian, i.e. the constraint $ H=0 $. At this point De Donder introduced a wave function associated with the electron, namely $ \psi\left( \tau, x\right) $, a function of the spatial coordinates $ x $ and of the proper time $ \tau $. In the MIT lectures, the author explicitly discussed neither the mathematical features of the wave function nor its physical interpretation. He implicitly identified it with Schr\"odinger's wave function, when considering a single electron. In fact, De Donder imposed the following Ansatz for the wave function (\cite{DD1}; p. 91): \begin{equation}\label{DD1} \psi =e^{k\, S} \qquad \text{i.e.}\qquad S=\frac{1}{k}\, \text{log}\left( \psi\right) \, , \end{equation} where the Jacobi function $ S(\tau\, , x ) $ depends on the spatial coordinates and on the proper time.
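Before moving on, the passage from (\ref{az-DD}) to (\ref{hamil-DD}) can be spelled out explicitly (our reconstruction of the routine Legendre transform, which De Donder did not write out). From $ p_\mu=mcg_{\mu\nu}u^{\nu}-\frac{e}{c}A_\mu $ one has $ mc\, u^{\mu}=p^{\mu}+\frac{e}{c}A^{\mu} $, so that
\begin{eqnarray*}
H&=&p_\mu u^\mu-L_{DD}=\left( p_\mu +\frac{e}{c}A_\mu\right)u^{\mu}-\frac{mc}{2}g_{\mu\nu}u^{\mu}u^{\nu}\\
&=&\frac{1}{mc}\left( p_\mu +\frac{e}{c}A_\mu\right)\left( p^\mu +\frac{e}{c}A^\mu\right)-\frac{1}{2mc}\left( p_\mu +\frac{e}{c}A_\mu\right)\left( p^\mu +\frac{e}{c}A^\mu\right)\, ,
\end{eqnarray*}
which is (\ref{hamil-DD}); moreover, on the constraint surface (\ref{constraint}) one finds $ H=\frac{mc}{2}g_{\mu\nu}u^{\mu}u^{\nu}=-\frac{1}{2}mc $, which is the on-shell value quoted above.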
At the beginning $ k $ is an unknown constant, but in the end, in order to match his wave equation with Schr\"odinger equation, he would choose $ \displaystyle k = \frac{i}{\hbar} $. De Donder made no comment on the fact that with this choice both $ \psi $ and the log-function in equation (\ref{DD1}) turn into complex functions. As a consequence of the fact that he left $ k $ undetermined, he would not use the complex conjugate as we shall do in equation (\ref{functional}). De Donder would use the correct notation in his book on Variational Calculus \cite{DD-var}. If $ \displaystyle k = \frac{i}{\hbar} $, the Ansatz (\ref{DD1}) corresponds to the correct geometrical optics approximation. It is worth noting that this procedure is very similar to Klein's approach; indeed, it was the common way to introduce a wave equation for a ``quantum'' particle in the mid-1920s. Unlike Klein, De Donder did not consider it necessary to unify all forces with a five-dimensional Lagrangian: he was satisfied with a unified action principle, and he looked from the beginning for an action principle in four dimensions, with the help of relativistic Hamiltonian dynamics. After having introduced the Jacobi function $ S(\tau\, , x ) $, De Donder imposed the following condition, which guarantees that the reduced HJ equation $H =-\frac{1}{2}mc $ is recovered: \begin{equation}\label{DD2} \frac{\partial S}{\partial \tau}=\frac{1}{2}mc . \end{equation} Integrating (\ref{DD2}), De Donder wrote the Jacobi function in the following form: \begin{equation}\label{JacobiDD} S = \frac{1}{2}mc\tau + S_0\left( x^0,\, x^1,\, x^2,\, x^3\right) \, , \end{equation} which would play an important role for Rosenfeld, as we shall see in the next section.
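With the choice $ k=\frac{i}{\hbar} $, the Ansatz (\ref{DD1}) combined with (\ref{JacobiDD}) factorizes the proper-time dependence into a pure phase (our rewriting, not found in De Donder's text):
\[
\psi\left( \tau ,x\right) =e^{\frac{i}{\hbar}S}=e^{\frac{imc}{2\hbar}\tau}\, e^{\frac{i}{\hbar}S_0\left( x^0,\, x^1,\, x^2,\, x^3\right)}\, ,
\]
so that $ \displaystyle \frac{\partial \psi}{\partial \tau}=\frac{imc}{2\hbar}\psi $: the reducibility condition (\ref{DD2}) fixes the frequency of the oscillation of $ \psi $ in the proper time.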
Thanks to definition (\ref{DD1}) and using equation (\ref{DD2}), the author was able to write (\cite{DD1}; p.91): \begin{eqnarray} \frac{\partial S}{\partial \tau}&=&\frac{\hbar}{i}\frac{1}{\psi}\frac{\partial \psi}{\partial \tau}\, ,\label{DD3a}\\ \frac{\partial S}{\partial x^\mu}&=&\frac{\hbar}{i}\frac{\partial_\mu\psi}{\psi} \, ,\label{DD3b}\\ \psi &=& \frac{\hbar}{i}\frac{\frac{\partial \psi}{\partial \tau}}{\frac{\partial S}{\partial \tau}} = \frac{\hbar}{i}\frac{2}{mc}\frac{\partial \psi}{\partial \tau} \, .\label{DD3c} \end{eqnarray} The conjugate wave function $ \overline{\psi} $ satisfies the complex-conjugate versions of equations (\ref{DD3a}), (\ref{DD3b}) and (\ref{DD3c}). Inserting (\ref{DD3b}) and (\ref{DD3c}) into (\ref{eq-cl}), the HJ equation can be rewritten in the following form: \begin{equation}\label{functional} J\left(\psi \right) \equiv -g^{\mu\nu}\left( \frac{mc}{2}\partial_\mu\psi +\frac{e}{c}A_\mu\frac{\partial \psi}{\partial \tau}\right)\left( \frac{mc}{2}\partial_{\nu}\overline{\psi} -\frac{e}{c}A_{\nu}\frac{\partial \overline{\psi}}{\partial \tau}\right) - m^2c^2\frac{\partial \psi}{\partial \tau}\frac{\partial\overline{\psi}}{\partial \tau}=0\, . \end{equation} In De Donder's approach equation (\ref{functional}) defines a functional $ J\left(\psi \right) $, which is invariant under all changes of the variables $ x^0, \dots ,x^3 $ (\cite{DeDonder2}; p. 92). The $ J $ functional plays a fundamental role for the author. From his point of view, with the introduction of the wave function $ \psi $, the classical HJ equation (\ref{eq-cl}) becomes a constraint for the new functional $ J(\psi ) $, i.e. \begin{equation}\label{eq-Jac} J\left(\psi \right) = 0 \, , \end{equation} and using this new functional De Donder was able to introduce what he called the relativistic quantization rule for curved space-time.
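The equivalence between (\ref{eq-Jac}) and the classical constraint can be checked directly (our reconstruction, not spelled out in the lectures). Substituting $ \partial_\mu\psi=\frac{i}{\hbar}\psi\,\partial_\mu S $ and $ \frac{\partial \psi}{\partial \tau}=\frac{imc}{2\hbar}\psi $ into (\ref{functional}), each round bracket becomes proportional to $ \partial_\mu S+\frac{e}{c}A_\mu $, and one finds
\[
J\left( \psi\right) =-\frac{m^{2}c^{2}}{4\hbar^{2}}\,\psi\overline{\psi}\left[ g^{\mu\nu}\left( \partial_\mu S+\frac{e}{c}A_\mu\right)\left( \partial_\nu S+\frac{e}{c}A_\nu\right) +m^{2}c^{2}\right]\, ,
\]
so that, for $ \psi\neq 0 $, the condition $ J(\psi)=0 $ holds if and only if the HJ equation (\ref{eq-cl}) is satisfied.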
After defining the following functional derivative: \begin{equation}\label{der-funz} \frac{\delta}{\delta\psi}J(\psi )=\frac{\partial J}{\partial \psi}-\partial_\mu\frac{\partial J}{\partial\partial_\mu\psi}+\dots \, , \end{equation} the quantization rule reads: `the variational derivative of the left-hand member of the Jacobian equation (\ref{eq-Jac}), with respect to $ \psi $, shall vanish. Explicitly: \begin{equation}\label{quantDD} \frac{\delta}{\delta\psi}\left( \sqrt{-g} J\right) = 0\; \text{'} \end{equation} (\cite{DeDonder2}; p. 92). Before going on, let us consider De Donder's variational principles in more detail. Lecture 5 of the MIT lectures is dedicated to `The fundamental Equations of the Gravific Field'. In order to obtain Einstein equations, De Donder considered the following variational principle (\cite{DD1}; p. 47): \begin{equation} \frac{\delta\left[\left( aR+b + \mathcal{L}_m \right)\sqrt{-g}\right] }{\delta g^{\mu\nu}} = 0\; , \end{equation} where the functional derivative is defined as in equation (\ref{der-funz}) with $ \psi $ replaced by the metric, $ R $ is the four-dimensional curvature scalar, $ a $ and $ b $ are arbitrary constants (incidentally, the constant $ b $ plays the role of the Cosmological Constant $ \Lambda $, but De Donder did not comment on this fact), $ \mathcal{L}_m $ is an unspecified Lagrangian density for the matter part of the theory, and the functional $ \left( aR+b \right)\sqrt{-g} $, i.e. the Lagrangian density, is named `\textit{the characteristic gravific function}' (\cite{DD1}; p. 47). It seems that in these years De Donder preferred to introduce a variational principle using Lagrangian densities instead of action functionals. 
De Donder himself stressed this fact as follows, offering a precise justification for the choice he made: `The variational principle, as we have presented it, is evidently a generalization of Hamilton's principle, that is, equivalent to placing \begin{equation} \delta\int_{\Omega}\left( aR+b + \mathcal{L}_m \right)\sqrt{-g} d^4x = 0\; , \end{equation} $ \Omega $ being a region of space-time at the boundaries of which the variations must vanish. \textit{It is in order to avoid the use of four-dimensional space that we have preferred the above presentation}.' [emphasis added] (\cite{DD1}; p. 47). In his later works devoted to the development of variational principles and their applications \cite{DD-1930}, the author would use both forms. Let us now return to De Donder's quantization procedure. Why did De Donder call equation (\ref{quantDD}) `a quantization rule'? The functional derivative (\ref{der-funz}), introduced by De Donder, produces the usual equations of motion for a charged scalar field, and he showed that the resulting equation reduces to Schr\"odinger's equation in the non-relativistic limit and in the approximation of an electrostatic field. It is worth noting that De Donder's $ \psi $ would not have the correct dimensionality to be interpreted as Schr\"odinger's wave function, but De Donder made no comment on this fact. For this reason he considered equation (\ref{quantDD}) as a quantization rule. In this sense, for us, De Donder's approach belongs to the WM point of view: like Klein he believed that writing a wave equation was a sufficient condition to describe the quantum behaviour of a system. Why did De Donder assert in his general introduction that this quantization rule would be `a logical consequence of our gravitational theory'?
In order to answer this question, firstly we note that from a modern point of view, De Donder's approach is of course a classical approach, because it is equivalent to a classical variational principle for a field theory, though De Donder interpreted the ``field'' $ \psi $ as a wave function. The absence of the integral in (\ref{quantDD}) was compensated by an ad hoc choice of the functional derivative defined in (\ref{der-funz}). Secondly, we recall that the first authors who tried to quantize scalar fields were Klein and Jordan in 1927 \cite{Klein4}. This means that the concept of a quantum field had not yet been born, and, like other authors, De Donder was convinced that writing a wave equation for a system was sufficient to quantize it. De Donder was convinced that GR could explain where the quantization rules come from, because he obtained Schr\"odinger's wave equation through the use of a variational principle, just as Einstein's equations are obtained, only from a different action. Lastly, it is worth noting that by applying variational methods without imposing commutation relations for the fields, the apparatus of GR seems not to require any modification. For these reasons, De Donder made the following remark, in order to emphasize his interpretation of the approach: `We have thus shown that \textit{the quantization of the point electron can be deduced from Einstein's gravitational theory} by means of an absolute extremal.' (\cite{DeDonder2}; p. 95). Before going on, we make the following remark on De Donder's functional. Unlike Klein, who considered a real scalar field in five dimensions, De Donder wrote a sort of Lagrangian density for a charged scalar field.
More precisely, using relation (\ref{DD3c}) the $ J $ functional reads: \begin{equation}\label{lagr-DD} J(\psi) = \frac{m^{2}c^{2}}{4}\left[- g^{\mu\nu}\left( \partial_{\mu}\psi+\frac{i}{\hbar}\frac{e}{c}A_\mu\psi\right) \left( \partial_{\nu}\overline{\psi}-\frac{i}{\hbar}\frac{e}{c}A_\nu\overline{\psi}\right)-\frac{m^2c^2}{\hbar^2}\overline{\psi}\psi\right] \, . \end{equation} The expression in the square brackets resembles the Lagrangian density of a complex scalar field in the presence of an electromagnetic and a gravitational field, but neither $ \psi $ nor $ J $ would have the correct dimensionality to be interpreted as a scalar field and a Lagrangian density, respectively. Unlike Klein's functional, De Donder's functional (\ref{lagr-DD}) would have the correct sign in order to be interpreted as a Lagrangian density \cite{Rocci1}. \section{Rosenfeld's contributions}\label{Rosenfeld} Rosenfeld merged De Donder's and de Broglie's ideas using Klein's approach. He explicitly cited all the authors we discussed in the preceding section. Like De Donder, he considered the relativistic Jacobi function approach. Like de Broglie, he explicitly inserted a mass term in the KG equation. Like Klein, he was aware of the fact that the fifth dimension's character should be space-like. But the principal purpose of Rosenfeld was to try to understand concretely how quantum effects should modify the classical view in the presence of a gravitational field, at least in the weak field approximation. All of Rosenfeld's papers on this topic, \cite{Ros1,Ros2,Ros3}, are authored by Rosenfeld alone: to what extent were de Broglie and De Donder active collaborators in these articles? The influence of de Broglie and De Donder is stated explicitly by the author himself. At the end of the introduction of his first paper, Rosenfeld wrote: `This work was completed under the direction of Mr. L. de Broglie and Mr. Th.
De Donder, who have never ceased to assist me with their advice, and have been kind enough to communicate to me their works, even manuscripts; I am happy to be able to express my deep appreciation to them here.' (\cite{Ros1}; p. 305). From the observations that we make in the rest of this paper, we can infer that De Donder had an active part in Rosenfeld's paper. In particular, we shall see how Rosenfeld followed De Donder's approach to introduce the wave equation in the context of a curved space-time, which permitted him to find a natural explanation of De Donder's interpretation of the quantum wave amplitude. Furthermore, we shall infer what precisely De Donder found attractive in Rosenfeld's five-dimensional Universe. In his second and third communications, Rosenfeld supported his first paper with a physical explanation. Stimulated by De Donder's influence, Rosenfeld recognized that he was using Bohr's correspondence principle. Unlike Rosenfeld, De Donder thought that Rosenfeld's work was a proof of a new version of the correspondence principle, which could be derived from Einstein's theory, and stressed that this principle should have been a cornerstone of the `gravitational wave mechanics' (\cite{DD-Ros}; p. 506), i.e. a theory reconciling WM with Einstein's theory. Rosenfeld's first paper \cite{Ros1} is a long and technical work and it does not contain any physical interpretation of the choices he made. For this reason, in subsection \ref{calculation} we shall pay more attention to the technical details of Rosenfeld's approach, explaining his results from the author's point of view. The second and the third papers are shorter than his first contribution. In these articles the author clarified his technical choices from the physical point of view. We will analyse Rosenfeld's comments in subsection\footnote{The fourth of Rosenfeld's communications is an attempt to unify the preceding works.} \ref{corr-princ}.
Finally, in subsection \ref{discussion}, we shall emphasize how these first articles influenced Rosenfeld's future work and we shall interpret the author's results from a modern point of view. \subsection{The quantum origin of a space-time metric}\label{calculation} In the introduction to his first paper \cite{Ros1}, written during his stay in Paris at the ``\'Ecole normale sup\'erieure'', Rosenfeld formulated his main goals\footnote{We present an English translation of some parts of the original paper, written in French, and then we comment on it. We omit the references of the original work.}: \begin{quote} `The first part of this work is dedicated to the systematic study of the five-dimensional universe considered by O. Klein, Th. De Donder and L. de Broglie. We will show how the model of the five-dimensional universe is satisfactory [...]. Generalizing Gordon's and Schr\"odinger's papers, we will show how the introduction of the $ \Psi $ function of de Broglie-Schr\"odinger permits us to combine in a unique variational principle, into the five-dimensional universe, the gravitational force, the electromagnetic force and the quantum phenomena (the $ \Psi $ equation). [...] \textit{Finally, a formula will be established to calculate the gravitational and electromagnetic potentials, for a field slightly different from the Minkowskian field, as a function of} $ \Psi $. The calculation will be developed for the case of a stationary charge and for the case of a charge moving with constant speed. Comparing the values obtained with the classical potentials, we find that the amplitude of the $ \Psi $ function representing the charge must have a constant value inside a finite volume and it must be zero outside of that volume: these results can be well understood with the beautiful interpretation of the $ \Psi $ function recently proposed by Mr. De Donder; quite to the contrary it appears to be irreconcilable with the opinion of Mr.
de Broglie, who believed that the charge would be a point singularity of the $ \Psi $ function. [emphasis added]' (\cite{Ros1}; p. 304-5). \end{quote} We shall investigate only the first case proposed by Rosenfeld, i.e. the case of a stationary massive charge, represented by a wave function, in order to study the gravitational field produced by a quantum particle. Rosenfeld would consider a weak-field approximation, which he called `a field slightly different from a Minkowskian field'\footnote{Minkowskian field is the English translation of the French expression ``champ de Minkowski'' which was well understood and commonly used in that period for the vacuum space. See e.g. \cite{Solomon} or Lichnerowitz in \cite{Pauli-letter}.}. Rosenfeld would find that the quantum particle should be represented by a localized wave function, which is non-zero inside a finite volume, instead of a point-like object, in contrast with de Broglie's point of view. This fact would reinforce De Donder's interpretation of the wave function's amplitude as representing a sort of internal quantum force of matter. We will not discuss this interpretation, which was based on the application of Rosenfeld-De Donder's approach to multi-particle systems, because for this case Rosenfeld did not investigate the gravitational field. Why did Rosenfeld consider a five-dimensional framework? The answer seems now almost trivial: the author studied Klein's work with de Broglie and was fascinated by its capability of describing GR and Maxwell's theory in a unified framework. What was Rosenfeld's starting point? The answer is connected with his knowledge of De Donder's and de Broglie's works. Indeed, following De Donder, Rosenfeld started from the classical description of a single charged particle, and following Klein and de Broglie, he considered a five-dimensional space-time, with the usual coordinates $( x^0\, ,\, x^1\, ,\, x^2\, ,\, x^3\, ,\, x^5\, ) $.
The classical particle was described by a five-dimensional Jacobi function $ \bar{S} $, namely \begin{equation}\label{S5-Ros} \bar{S}\left( x \right) =-\frac{e}{c\beta}x^5 + S_0\left( x^0\, ,x^1\, , x^2\, , x^3 \right)\, , \end{equation} in analogy with De Donder's four-dimensional Jacobi function (\ref{JacobiDD}), which we rewrite here for convenience: \begin{equation}\label{JacobiDD1} S = \frac{1}{2}mc\tau + S_0\left( x^0,\, x^1,\, x^2,\, x^3\right) \, . \end{equation} Rosenfeld explicitly defined the fifth coordinate by putting: \begin{equation}\label{5d-Ros} \text{`}x^5 = -\frac{mc^2\beta}{2e}\tau\, \text{' (\cite{Ros1}; equation (5) p. 306),} \end{equation} specifying that `$ \beta $ is a \textit{universal constant}.' (\cite{Ros1}; p. 306). From our point of view, the introduction of the fifth coordinate simply follows from the comparison between De Donder's Jacobi function, equation (\ref{JacobiDD1}), and de Broglie's five-dimensional Hamiltonian action for the charged particle, equation (\ref{action-dB1}). Indeed, to obtain equation (\ref{action-dB1}), it is sufficient in (\ref{JacobiDD1}) to set $\displaystyle S_0 = - \int_O^M\frac{e}{c}A_{\mu}dx^\mu -mc\int_O^M d\tau $. About the size of the fifth dimension, Rosenfeld shared de Broglie's view. He observed that equation (\ref{S5-Ros}) implies the invariance of $ x^5 $ under the general coordinate transformations $ f(x^0,\, x^1,\, x^2, \, x^3) $ and concluded: `Its invariance with respect to the transformations that we are able to perform explains why this fifth dimension escapes direct observations.' (\cite{Ros1}; p. 307). Like de Broglie, Rosenfeld never explicitly discussed the size of the fifth dimension, though he could have extracted it\footnote{See the discussion after equation (\ref{tir}).}. The dynamics of classical charged particles is described by the HJ equation, and Rosenfeld introduced its five-dimensional analogue.
Following the author, we note first that the new Jacobi function $ \bar{S} $ satisfies\footnote{Note that the combination $ \frac{e}{c\beta}x^5 $ has the dimension of an action.} \begin{equation}\label{eq7} \partial_5\bar{S}=-\frac{e}{c\beta}\, . \end{equation} Secondly, Rosenfeld used Klein's five-dimensional metric $ \gamma_{\bar\mu\bar\nu} $ defined in the previous section, see equations (\ref{nota-d-sigma1b}) and (\ref{nota-d-sigma2b}), with the same convention, i.e. imposing the following choice for $ \alpha $ and $ \beta $: $ \alpha\beta^2 = 2\kappa $. Lastly, with the help of the components of the inverse metric $ \gamma^{\bar{\mu}\bar{\nu}} $, namely \begin{equation}\label{inverse} \gamma^{\mu\nu} = g^{\mu\nu}\; ,\qquad \gamma^{55} = \frac{1}{\alpha}+\beta^{2}A_\mu A^\mu\; ,\qquad \gamma^{5\mu}= -\beta A^{\mu}\; , \end{equation} the author was able to show how De Donder's four-dimensional HJ equation (\ref{eq-cl}), namely \begin{equation}\label{HJ-Ros1} g^{\mu\nu}\left(\partial_\mu S_0 + \frac{e}{c}A_\mu\right) \left( \partial_\nu S_0 +\frac{e}{c}A_\nu\right) +m^2c^2 =0\; , \end{equation} can be rewritten in the following compact form (\cite{Ros1}; p. 307): \begin{equation}\label{eq-cl2} \gamma^{\bar{\mu}\bar{\nu}}\partial_{\bar{\mu}}\bar{S}\partial_{\bar{\nu}}\bar{S} = -\left( m^2c^2 - \frac{e^2c^2}{16\pi G} \right)\, . \end{equation} It is worth noting that equation (\ref{eq7}) is the same relation that induced Klein to introduce a fifth coordinate: indeed, it suggests that the electric charge could play the role of an extra momentum component, as recollected by Klein (see the beginning of section \ref{section-Klein}), and permits one to translate into the five-dimensional language the relativistic HJ equation for a particle moving in a combined electromagnetic and gravitational field. Choosing $ \alpha\beta^2 = 2\kappa $, Rosenfeld implicitly imposed $ \alpha > 0 $.
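The compact form (\ref{eq-cl2}) can be verified by expanding the five-dimensional sum with the components (\ref{inverse}) and the relation (\ref{eq7}) (our reconstruction):
\begin{eqnarray*}
\gamma^{\bar{\mu}\bar{\nu}}\partial_{\bar{\mu}}\bar{S}\,\partial_{\bar{\nu}}\bar{S}&=&g^{\mu\nu}\partial_\mu S_0\,\partial_\nu S_0+2\left( -\beta A^{\mu}\right)\left( -\frac{e}{c\beta}\right)\partial_\mu S_0+\left( \frac{1}{\alpha}+\beta^{2}A_\mu A^\mu\right)\frac{e^{2}}{c^{2}\beta^{2}}\\
&=&g^{\mu\nu}\left( \partial_\mu S_0+\frac{e}{c}A_\mu\right)\left( \partial_\nu S_0+\frac{e}{c}A_\nu\right)+\frac{e^{2}}{c^{2}\alpha\beta^{2}}\, ,
\end{eqnarray*}
and, using (\ref{HJ-Ros1}) together with $ \alpha\beta^{2}=2\kappa=\frac{16\pi G}{c^{4}} $, the right-hand side reduces to $ -m^{2}c^{2}+\frac{e^{2}c^{2}}{16\pi G} $, i.e. to (\ref{eq-cl2}). Note also that the condition $ \alpha\beta^{2}=2\kappa>0 $ indeed requires $ \alpha>0 $.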
As noted in the previous section, this means that, like Klein, Rosenfeld correctly introduced a space-like fifth dimension. Hence, the quantity $ \mathcal{I}^{2} $, see equation (\ref{Iquadro}), assumes the following form: \begin{equation}\label{Iquadro-Ros} \mathcal{I}^2_{Ros} = m^2c^2 - \frac{e^2c^2}{16\pi G}\; , \end{equation} and it differs from de Broglie's $ \mathcal{I}_{dB} $, see equation (\ref{Iquadro-dB}), because of the presence of the minus sign. For an electron, the quantity $ \mathcal{I}^2_{Ros} $ is negative: indeed Rosenfeld did not use the symbol $ \mathcal{I}^{2}_{Ros} $, but he explicitly wrote its square root, cf. equation (\ref{RosA}) below. Hence, we introduced it in order to compare Rosenfeld's and de Broglie's works. As we shall see in a moment, Rosenfeld did not discuss the square root of the expression (\ref{Iquadro-Ros}), but he underlined that it has a geometrical meaning as follows. Parametrizing the five-dimensional path with $ \hat{\tau} $ and the particle's four-dimensional world line with the proper time $ \tau $, Rosenfeld wrote: `It is easy to calculate the five-dimensional trajectory's slope on the space-time. Indeed, if $ \bar{S} $ is a complete integral of equation (\ref{eq-cl2}), along the trajectory, from (\ref{eq-cl2}) it follows that \begin{equation}\label{RosA} \gamma^{\bar{\mu}\bar{\nu}}\partial_{\bar{\nu}}\bar{S}=\sqrt{m^2c^2 - \frac{e^2c^2}{16\pi G}}\cdot\frac{dx^{\bar{\mu}}}{d\hat{\tau}}\, , \end{equation} and from (\ref{eq7}), (\ref{HJ-Ros1}) and (\ref{inverse}) it follows that \begin{equation}\label{RosB} \gamma^{\mu\bar{\nu}}\partial_{\bar{\nu}}\bar{S}=mc\frac{dx^{\mu}}{d\tau}\, .
\end{equation} This means that the slope reads: \begin{equation}\label{slope} \frac{d\hat{\tau}}{d\tau}=\sqrt{1-\frac{1}{2\kappa\mu^2}} \end{equation} and therefore it is determined only by the ratio $ \mu $; this geometric interpretation of the ratio $ \mu $ was \textit{on the ground of de Broglie's reasoning}.'\footnote{See \cite{Landau-teo-campi} for an explanation of the four-dimensional case. Inserting equation (\ref{RosA}) into (\ref{eq-cl2}), it can be verified that (\ref{RosA}) is a complete integral of (\ref{eq-cl2}).} [emphasis added] (\cite{Ros1}; p. 308). The ratio $ \mu $ is defined by $ \mu=-\frac{mc^2}{e} $ and it encodes the characteristics of the particle, because it involves the particle's mass and charge. The emphasis added at the end of the citation underscores de Broglie's influence on Rosenfeld's approach. Firstly, Rosenfeld's equation (\ref{slope}) is equivalent to de Broglie's equation (\ref{relazioni}). Secondly, in the previous section we said that from de Broglie's point of view $ P_{\bar{\nu}}=\partial_{\bar{\nu}}\bar{S} $ should be interpreted as the five-dimensional generalization of $ p_\mu = mcg_{\mu\nu}\frac{dx^{\nu}}{d\tau} $. Rosenfeld referred to the fact that equations (\ref{RosA}) and (\ref{RosB}) made explicit this connection\footnote{In appendix \ref{app3a} we clarify the connection among equations (\ref{RosA}), (\ref{RosB}) and (\ref{slope}).}, because they implied that $ \gamma^{\mu\bar{\nu}}P_{\bar{\nu}}=g^{\mu\nu}p_\nu $. Furthermore, Rosenfeld agreed explicitly with de Broglie's idea that the particle's five-dimensional geodesics would be inclined with respect to the hyperplane that locally describes the four-dimensional hypersurface $ x^5 = const. $. See de Broglie's comments after equation (\ref{d-sigma3}). 
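Equation (\ref{slope}) follows from (\ref{RosA}) and (\ref{RosB}) in one step (our reconstruction): comparing the components $ \bar{\mu}=\mu $ of (\ref{RosA}) with (\ref{RosB}),
\[
\sqrt{m^{2}c^{2}-\frac{e^{2}c^{2}}{16\pi G}}\;\frac{dx^{\mu}}{d\hat{\tau}}=mc\,\frac{dx^{\mu}}{d\tau}\qquad\Longrightarrow\qquad\frac{d\hat{\tau}}{d\tau}=\sqrt{1-\frac{e^{2}}{16\pi G\, m^{2}}}=\sqrt{1-\frac{1}{2\kappa\mu^{2}}}\, ,
\]
where the last equality uses $ \mu^{2}=\frac{m^{2}c^{4}}{e^{2}} $ and $ 2\kappa=\frac{16\pi G}{c^{4}} $.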
After having introduced the five-dimensional Universe and its unified description of the gravitational and electromagnetic interaction, the author introduced what he called the `de Broglie-Schr\"odinger wave function' (\cite{Ros1}; p. 311). Following de Broglie and De Donder, equations (\ref{deB-optic}) and (\ref{DD1}), Rosenfeld's general Ansatz for the five-dimensional wave function reads: \begin{equation}\label{fun-onda} \Psi \left( x \right) =\mathcal{A}\left( x^0\, ,x^1\, , x^2\, , x^3 \right) e^{k\,\bar{S}} \; , \end{equation} where $ \bar{S} $ is the Jacobi function (\ref{S5-Ros}), $ k $ is a constant and the amplitude $\mathcal{A}$ is in general a complex function of the form $\mathcal{A}= A +iB $. Like De Donder, Rosenfeld made the choice $ k = \frac{i}{\hbar} $ and then he considered the case of real constant amplitude, in order to compare his five-dimensional functional with De Donder's $ J $ functional. But Rosenfeld assigned the value of $ k $ ab initio, therefore, as we pointed out in the discussion after equation (\ref{DD1}), both De Donder and Rosenfeld considered wave functions as complex objects. The periodicity condition is still contained in Rosenfeld's Ansatz (\ref{fun-onda}), because the wave function is periodic in the fifth coordinate, see equation (\ref{S5-Ros}). In the case of real constant amplitude $ A $, from equation (\ref{fun-onda}) it follows: \begin{equation} \frac{\partial \bar S}{\partial x^{\bar\mu}}=\frac{\hbar}{i}\frac{\partial_{\bar\mu}\Psi}{\Psi}\label{def-onda2}\, . \end{equation} Inserting (\ref{def-onda2}) into the HJ equation (\ref{eq-cl2}), Rosenfeld obtained the five-dimensional generalization of De Donder's functional equation (\ref{eq-Jac}), i.e. 
$ \displaystyle \mathcal{L} = 0 $, where the new functional is \begin{equation}\label{lagr-Ros} \mathcal{L}\left(\,\Psi \, ,\overline{\Psi}\,\right) = - \gamma^{\bar{\mu}\bar{\nu}}\partial_{\bar{\mu}}\overline{\Psi}\partial_{\bar{\nu}}\Psi- \frac{\mathcal{I}^{2}_{Ros} }{\hbar^2}\overline{\Psi}\Psi\; , \end{equation} where $ \overline{\Psi} $ is the complex conjugate of the five-dimensional wave function and we used the symbol $ \mathcal{I}^{2}_{Ros} $, equation (\ref{Iquadro-Ros}), for brevity. This means that from Rosenfeld's point of view the constant-amplitude case corresponded to the classical limit. Indeed, the author underlined: `In the general case, i.e. when $ \mathcal{A} $ is an arbitrary function, $ \mathcal{L} $ is no longer null along a trajectory.' (\cite{Ros1}; p. 312). As a consequence, $ \mathcal{L} $ is able to play a central role in the quantum dynamics. Following De Donder, the quantum picture would be described by a variational principle involving (\ref{lagr-Ros}): Rosenfeld applied De Donder's functional derivative (\ref{der-funz}) to $ \mathcal{L}\sqrt{-g} $ and obtained, by varying with respect to $ \overline{\Psi} $ and $ \Psi $ independently, the following wave equations: \begin{equation}\label{eomRos} \gamma^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\Psi =\frac{\mathcal{I}^{2}_{Ros}}{\hbar^2} \Psi \qquad\quad\text{and}\quad\qquad \gamma^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\overline{\Psi} =\frac{\mathcal{I}^{2}_{Ros} }{\hbar^2} \overline{\Psi}\, , \end{equation} which should be, as Rosenfeld wrote, `a generalization of the de Broglie-Schr\"odinger's equation' (\cite{Ros1}; p. 312), i.e. equation (\ref{KG4}). Having introduced a complex wave function ab initio, Rosenfeld wrote explicitly a wave equation both for $ \Psi $ and for $ \overline{\Psi} $. 
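The variational step is the standard Euler-Lagrange computation for a complex scalar; in modern notation (our rendering, assuming a metric-compatible connection), varying (\ref{lagr-Ros}) with respect to $ \overline{\Psi} $ gives
\begin{equation*}
\frac{\partial\mathcal{L}}{\partial\overline{\Psi}}-\nabla_{\bar{\mu}}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\bar{\mu}}\overline{\Psi}\right)}
= -\frac{\mathcal{I}^{2}_{Ros}}{\hbar^{2}}\,\Psi+\nabla_{\bar{\mu}}\left(\gamma^{\bar{\mu}\bar{\nu}}\partial_{\bar{\nu}}\Psi\right)=0
\quad\Longrightarrow\quad
\gamma^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\Psi=\frac{\mathcal{I}^{2}_{Ros}}{\hbar^{2}}\,\Psi\, ,
\end{equation*}
i.e. the first of equations (\ref{eomRos}); varying with respect to $ \Psi $ yields the conjugate equation in the same way.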
The author's functional $ \mathcal{L} $ is formally equivalent to the Lagrangian density of a complex scalar field, but, as for all the authors of this period, $ \Psi $ is treated as a wave function. This approach was conceived in a period that lies between the birth of QM and the birth of QFT, when scholars were looking for a ``relativistic quantum mechanics''. For this reason we could say that, like De Donder, Rosenfeld was looking for GRQM. The wave equation obtained by varying $ \overline{\Psi} $ in (\ref{lagr-Ros}) is formally equivalent to the five-dimensional wave equation suggested by de Broglie (\ref{KG4}). Rosenfeld used De Donder's variational derivative, but he was aware of the fact that this procedure is equivalent to the variational principle used in modern field theory, obtained by varying the integral of the Lagrangian density and imposing that the variations of the fields vanish at the boundary of the domain of integration. Indeed, Rosenfeld claimed that $ \mathcal{L} $ should be the generalization of the Lagrangian considered by Gordon in \cite{Gordon}, where Gordon himself suggested considering the wave function and its complex conjugate as independent variables with vanishing variations at the boundary. Unlike Klein's functional, Rosenfeld's $ \mathcal{L} $ functional had the correct sign to be interpreted as a Lagrangian density \cite{Rocci1}. This follows from the fact that Rosenfeld was influenced by De Donder's approach presented above. Unlike De Donder, Rosenfeld considered a general form for the wave function, admitting that its amplitude $ A $ could be a non-constant function of the four-dimensional coordinates. Rosenfeld noted that in the constant-amplitude case he recovered De Donder's results, which are connected with the classical HJ equations (\ref{eq-cl2}) as suggested by De Donder himself. How did Rosenfeld reconcile GR with QM? 
Like De Donder, after having used the wave-particle duality via the Hamiltonian dynamics, Rosenfeld supposed that, in the case of non-constant amplitude, $ \mathcal{L}$ should be the correct generalization of Schr\"odinger's Lagrangian \cite{Schroe2} in the sense of GRQM. Finally, Rosenfeld introduced a variational principle, based on the following five-dimensional action\footnote{In equation (\ref{az-glob2}) the determinant of the four-dimensional metric $ g $ appears, instead of $ \gamma $. In Rosenfeld's approach, the two determinants are related by the relation $ \gamma = \alpha g $, as explained in appendix \ref{app3b}. This means that the presence of $ g $ does not affect the equations obtained by varying (\ref{az-glob2}).} \begin{equation}\label{az-glob2} \mathcal{S}_{tot}\left(\, \gamma\, , \Psi\, ,\overline{\Psi}\,\right) =\int d^5x \sqrt{-g}\left[ -\tilde{R} +2\kappa\mathcal{L} \right] \, , \end{equation} where $ 2\kappa = \frac{16\pi G}{c^{4}} $. Rosenfeld did not specify the domain of integration; we suppose that the integral should be performed over an arbitrary portion $ \Omega $ of the five-dimensional space-time. By varying the action with respect to the metric, as in equation (\ref{variation}), he obtained the five-dimensional Einstein equations coupled with the complex field $ \Psi $, which are formally equivalent to a system with the four-dimensional Maxwell equations coupled to the scalar field and the four-dimensional Einstein equations coupled to the electromagnetic and the scalar fields. By varying the action with respect to $ \overline{\Psi} $ and $ \Psi $, using De Donder's functional derivative, Rosenfeld obtained the KG equations (\ref{eomRos}) for $ \Psi $ and $ \overline{\Psi} $, respectively, as before, because the curvature scalar depends neither on the wave function nor on its complex conjugate. This is the unified framework that should reconcile, from Rosenfeld's point of view, GR with WM. 
Did the five-dimensional formalism offer any additional insights beyond those that De Donder could have deduced in his four-dimensional context? As Rosenfeld stressed, the main advantage offered by the five-dimensional Universe was the opportunity to write a unified variational principle (\cite{Ros1}; p. 304). It is worth noting that the neutron would be discovered five years later \cite{Chadwick}. This means that all known elementary particles were charged particles, and the unified picture offered by the five-dimensional Universe seemed to be a way to describe the known physical phenomena. As we shall see in subsection \ref{corr-princ}, Rosenfeld's approach also made it possible to incorporate and, in a certain sense, to justify some of De Donder's ideas. It is not clear whether Rosenfeld considered his approach as a result or as a point of departure. But it is evident that he tried, for the first time, to investigate the geometry created by the wave function $ \Psi $. In fact, the equations obtained by varying action (\ref{az-glob2}) with respect to the metric are: \begin{equation}\label{eq-glob-5D} \tilde{R}_{\bar{\mu}\nu}-\frac{1}{2}\gamma_{\bar{\mu}\nu}\tilde{R} = \kappa T_{\bar{\mu}\nu} \; , \end{equation} where Einstein's and Maxwell's equations are coupled to the complex scalar field via the stress-energy tensor $ T_{\bar{\mu}\nu} $, defined by Rosenfeld as \begin{equation}\label{def-T} T_{\bar\mu\bar\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta \left(\sqrt{-g} \mathcal{L} \right) }{\delta \gamma^{\bar\mu\bar\nu}}\; , \end{equation} which has the usual form: \begin{equation}\label{tens-psi1} T_{\bar{\mu}\nu}=\partial_{\bar{\mu}}\overline{\Psi}\partial_{\nu}\Psi +\partial_{\nu}\overline{\Psi}\partial_{\bar{\mu}}\Psi + \gamma_{\bar{\mu}\nu} \mathcal{L} \; . \end{equation} Rosenfeld made no comments on the fact that in general the r.h.s. of equation (\ref{eq-glob-5D}) is a complex quantity. It is worth noting that the author investigated a particular case, i.e. 
when the wave function's amplitude is real. Hence, the energy-momentum tensor is a real quantity. Introducing the wave function on the right side of equations (\ref{eq-glob-5D}), Rosenfeld implicitly considered the wave function as representing the material part creating gravity. In this first paper, a long and technical one, Rosenfeld did not justify this choice, which seems to be in contrast with the probabilistic interpretation of the wave function, from a modern point of view. As we shall see in the next section, the author would clarify his choice in the following work, where he referred explicitly to Bohr's correspondence principle. Like Klein, Rosenfeld did not consider the $ 55 $ component of the equations of motion: the Belgian physicist explicitly stated that this equation can be neglected, because the constancy of $ \gamma_{55} $ implies $ \delta\gamma_{55}=0 $ (\cite{Ros1}; p. 314)\footnote{As we said in the previous section, this is not correct.}. Before going on, we briefly compare Rosenfeld's approach with that of his mentors. Though Rosenfeld started out generalizing De Donder's approach, the unitary variational principle is presented starting with the action functional (\ref{az-glob2}) instead of De Donder's invariants, i.e. Lagrangian densities. It is worth noting that in the same year Klein independently published a similar action, using a real scalar field. Klein coupled matter and geometry exactly like Rosenfeld did (\cite{Klein5}; p. 207). Unlike Rosenfeld, in \cite{Klein5}, Klein expressed explicitly some perplexities about this kind of approach, observing that a unified action principle, e.g. that based on (\ref{az-glob2}), was only a first step towards a unified theory that reconciles WM with GR (\cite{Klein5}; p. 190, footnote $ (^*) $ at the end of the introduction). In contrast, Rosenfeld, and De Donder with him, seemed to be convinced that the five-dimensional unified action principle would have some interesting features. 
Thanks to this conviction, the Belgian physicist investigated the quantum character of the metric produced by a quantum object, represented by the wave function $ \Psi $. To address this problem, Rosenfeld considered the weak-field approximation for the gravitational field, introduced by Einstein in 1916 to study the problem of gravitational waves, because it made it possible to integrate the Einstein equations. In this approximation the metric can be written in the following form (\cite{Ros1}; p. 319): \begin{equation}\label{metr-lin-a} \gamma_{\bar{\mu}\nu}=\eta_{\bar{\mu}\nu} + h_{\bar{\mu}\nu} \, , \end{equation} where $ \eta_{\bar{\mu}\nu} $ is the five-dimensional Minkowski metric and $ h_{\bar{\mu}\nu} $ represents the perturbation of the flat metric, which satisfies the condition $ \vert h_{\bar{\mu}\nu} \vert \ll 1 $. Rosenfeld contracted (\ref{eq-glob-5D}) with $ \gamma^{\nu\bar{\mu}} $ to obtain an expression for the five-dimensional curvature scalar $ \tilde{R} $, namely\footnote{See appendix \ref{app3d} for a detailed explanation.} \begin{equation}\label{formula-R-tilde} \tilde{R} = -\kappa\left[ \gamma^{\nu\bar{\mu}}T_{\bar{\mu}\nu} + \frac{F_{\sigma\lambda} F^{\sigma\lambda}}{2}-\gamma^{\mu\rho}A_\rho\nabla_\lambda\left( \gamma_{\mu\sigma}F^{\sigma\lambda}\right) \right] \; . \end{equation} After having inserted (\ref{formula-R-tilde}) into equation (\ref{eq-glob-5D}), Rosenfeld used the Ansatz (\ref{metr-lin-a}) for the metric and kept only the linear terms, obtaining: \begin{eqnarray}\label{Einst-lin-5D} \square h_{\bar{\mu}\nu} = -\kappa\left[ T_{\bar{\mu}\nu}-\frac{1}{2}\eta_{\bar{\mu}\nu} \eta^{\lambda\bar{\sigma}}T_{\bar{\sigma}\lambda} \right]\nonumber \\ = -\kappa \bar{T}_{\bar{\mu}\nu} \, , \end{eqnarray} where the $ \square $ operator acts only on the usual four dimensions, because the metric does not depend on the fifth coordinate. In this approximation we are considering the gravitational field strength far away from the source, i.e. 
the particle's wave function, and the second and third terms on the r.h.s. of equation (\ref{formula-R-tilde}) can be ignored in the case of a stationary charge\footnote{Rosenfeld did not write explicitly equation (\ref{Einst-lin-5D}); he referred to a `well known procedure' (\cite{Ros1}; p. 319) and wrote directly equation (\ref{sol-eq}).}. The stress-energy tensor appearing in (\ref{Einst-lin-5D}) has the same form as equation (\ref{tens-psi1}), but the curved metric $ \gamma_{\bar{\mu}\nu} $ has been substituted by the flat metric (\cite{Ros1}; p. 319). In particular, in this approximation the indices are raised and lowered by $ \eta_{\bar{\mu}\bar{\nu}} $. Rosenfeld was now able to integrate (\ref{Einst-lin-5D}), and obtained, in his original notation\footnote{Rosenfeld did not specify that the integration is carried over a three-dimensional hypersurface $ \Sigma $ at the retarded time. In appendix \ref{app3e} we express equation (\ref{sol-eq}) in a modern notation. In the rest of our paper we will continue to use Rosenfeld's original notation.} (\cite{Ros1}; p. 319, equation (71)): \begin{equation}\label{sol-eq} h_{\bar{\mu}\nu} = -\frac{\kappa}{2\pi}\int\left\lbrace \bar{T}_{\bar{\mu}\nu}\right\rbrace_{t-\frac{r}{c}}\frac{dxdydz}{r}\; , \end{equation} where, according to Rosenfeld, $ r $ represents the radial distance and the symbol $ \left\lbrace u\right\rbrace_{t-\frac{r}{c}} $ means that the function $ u $ has been calculated using the variable $ t-\frac{r}{c} $: for this reason the (\ref{sol-eq}) components are often called retarded potentials. In order to consider the case of a stationary mass, the author chose the following form\footnote{Remember that in our notation the combination $ \frac{e}{c\beta}x^{5} $ has the dimensions of an action.} for the Jacobi function $ \bar{S} $ (\cite{Ros1}; p. 
320): \begin{equation}\label{psi} \bar{S}= -\frac{e}{c\beta}x^{5} + mcx^{0}\, , \end{equation} which appears in (\ref{fun-onda}), where now the amplitude $ A $ is a real function of the four-dimensional coordinates. Using this Ansatz, Rosenfeld was able to calculate explicitly the retarded potentials. Introducing the following functions\footnote{The integration domain is the same as in equation (\ref{sol-eq}).} of $ \mathbf{x} $ and $ t $: \begin{eqnarray} \mathcal{F} &=&\frac{2mc^2}{\hbar^2} \int \left\lbrace A^{2}\right\rbrace_{t-\frac{r}{c}} \frac{dxdydz}{r}\label{F}\; ,\\ \mathcal{W}_{\mu\nu} &=& \int \left\lbrace \partial_\mu A\partial_\nu A\right\rbrace_{t-\frac{r}{c}} \frac{dxdydz}{r}\; ,\\ \mathcal{G} &=& \int \left\lbrace \partial_\mu A\partial^{\mu} A\right\rbrace_{t-\frac{r}{c}} \frac{dxdydz}{r}\; , \end{eqnarray} the perturbations of the flat metric are therefore\footnote{In equation (\ref{metr-quant-d}) we used explicitly that $ \alpha $ and $ \beta $ satisfy the constraint $ \alpha\beta^2 = 2\kappa $, as in Klein's approach.}: \begin{eqnarray} h_{5i} &=& 0\; , \qquad\qquad i = 1,2,3\; ,\label{metr-quant-a} \\ h_{50} &=& -\alpha\beta\left( \frac{e}{4\pi}\frac{\mathcal{F}}{c^2}\right) \; ,\label{metr-quant-b} \\ h_{\mu\nu} &=& \frac{8G}{c^4}\mathcal{W}_{\mu\nu}\qquad\qquad \mu\neq \nu\; ,\label{metr-quant-c}\\ h_{\mu\mu} &=& \frac{2mG}{c^{4}}\mathcal{F} +\frac{8G}{c^4}\mathcal{G}\; .\label{metr-quant-d} \end{eqnarray} It is worth noting that in (\ref{metr-quant-b}) and in (\ref{metr-quant-d}) the Planck constant appears via the definition of $ \mathcal{F} $ (\ref{F}). In this sense, Rosenfeld's result represents a quantum correction of the flat metric. This is not surprising, because these corrections are generated by the wave function $ \Psi $. In this respect, the result is the first attempt to describe a quantum metric using WM and GR. As far as we know, this is the first time that a quantum metric appears in the history of QG. 
Rosenfeld did not emphasize this feature of the metric he found. As we have said, in his first paper Rosenfeld did not make explicit comments on the physical meaning of the calculations performed. As we shall see, in his following papers he would advocate Bohr's correspondence principle in explaining his use of the wave function as the source of the gravitational field. From this perspective, it is easier to understand why Rosenfeld was more interested in analysing the metric in the case of a constant amplitude. Indeed, he considered a sort of semi-classical limit, comparing his ``quantum metric'' with its classical analogue. In this limit, equations (\ref{metr-quant-a}), (\ref{metr-quant-b}), (\ref{metr-quant-c}) and (\ref{metr-quant-d}) should match, at least in the weak-field limit, the metric produced by a classical source of mass $ m $ and charge $ e $ sitting at the origin \textbf{O} of the coordinates, known today as the RN solution. The classical metric is presented in appendix \ref{app3f}, equation (\ref{RN}). At asymptotically large distances from the source it can be written as $ \gamma^{RN}_{\bar{\mu}\bar{\nu}} = \eta_{\bar{\mu}\bar{\nu}} + h^{RN}_{\bar{\mu}\bar{\nu}} $, where the components of the perturbations of the flat metric are: \begin{eqnarray} h_{5i} &=& 0\; , \qquad i =1,\, 2,\, 3\; , \label{metr-cl-a}\\ h_{50} &=& \alpha\beta A_0 \quad\text{where}\quad A_0=\eta_{00}A^0=V=-\frac{e}{4\pi r_{0}}\; ,\label{metr-cl-b}\\ h_{\mu\nu} &=& 0\qquad \mu\neq \nu \; ,\label{metr-cl-c}\\ h_{\mu\mu} &=& \frac{2mG}{c^{2}r_{0}}\; ,\label{metr-cl-d} \end{eqnarray} where, according to Rosenfeld, $ r_0 $ represents `the distance between the origin \textbf{O} and an arbitrary point [of the five-dimensional space-time]' (\cite{Ros1}; p. 321). 
Equations (\ref{metr-cl-c}) and (\ref{metr-cl-d}) represent the components of the RN metric in the weak-field approximation expressed using isotropic Cartesian coordinates\footnote{See appendix \ref{app3f} for a detailed discussion.}, while (\ref{metr-cl-a}) and (\ref{metr-cl-b}) coincide with the $ \gamma_{5\mu} $ components (\ref{def-beta}) in the case of a stationary charge. As we shall see in a moment, in considering the matching between the classical metric and the ``quantum metric'' in the semiclassical limit, Rosenfeld did not consider a point-like charge; hence $ r_0=r_0(\vec{x}) $ should be a sort of ``mean distance'' from the charged body, sitting at the origin of the coordinates. In order to match (\ref{metr-quant-a})-(\ref{metr-quant-d}) with (\ref{metr-cl-a})-(\ref{metr-cl-d}), $ \mathcal{W}_{\mu\nu} $ and $ \mathcal{G} $ must be zero and, as a consequence, the following two conditions must hold: \begin{eqnarray} \partial_\mu A & = & 0 \quad , \label{cond-Ros-a}\\ \mathcal{F} & = & \frac{c^2}{r_0}\, . \label{cond-Ros-b} \end{eqnarray} Equation (\ref{cond-Ros-a}) follows directly from the condition $ \mathcal{W}_{\mu\nu}=0 $, while equation (\ref{cond-Ros-b}) can be obtained by comparing (\ref{metr-cl-b}) with (\ref{metr-quant-b}). Rosenfeld discussed both these relations: `The first condition tells us that a fixed charge can be represented by a wave with \textit{stationary} phase and \textit{constant} amplitude.' (\cite{Ros1}; p. 322). As stated above, though Rosenfeld did not emphasize this fact, the constancy of the amplitude, i.e. condition (\ref{cond-Ros-a}), emerged as a condition to ensure that the quantum description could contain, at least as a limiting case, the classical description, which in this context corresponds to the classical five-dimensional RN metric (\ref{metr-cl-a})-(\ref{metr-cl-d}). 
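The second of these conditions follows from a one-line comparison, which we spell out for convenience: equating the classical component (\ref{metr-cl-b}) with the quantum one (\ref{metr-quant-b}) gives
\begin{equation*}
-\alpha\beta\,\frac{e}{4\pi}\,\frac{\mathcal{F}}{c^{2}} \;=\; -\alpha\beta\,\frac{e}{4\pi r_{0}}
\qquad\Longrightarrow\qquad
\mathcal{F}=\frac{c^{2}}{r_{0}}\, ,
\end{equation*}
which is exactly condition (\ref{cond-Ros-b}).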
Besides this, the wave function of a fixed charge should have a fixed energy $ \mathcal{E} = mc^{2} $ and, because of Heisenberg's uncertainty principle, it should spread over the whole space. In a semi-classical approximation, instead, the wave packet is highly localized. Rosenfeld thus used a ``localized wave function'', in the sense that his wave function is non-zero only inside an arbitrary volume $ V $. Indeed, Rosenfeld continued: `The second condition is satisfied [...] if we imagine that the amplitude is non-zero inside a finite volume centred around \textbf{O}.' (\cite{Ros1}; p. 322). Finally, using the \textit{mean value theorem}, the author formally defined the ``mean distance''\footnote{See appendix \ref{app3f} for a definition of the mean distance using modern notation.} $ r_{0} $ (\cite{Ros1}; p. 322): \begin{equation}\label{def-r0} \frac{V}{r_0} =\int \frac{dxdydz}{r}\; . \end{equation} As usual, Rosenfeld did not specify the domain of integration. We suppose that it is the region where the wave function is non-zero, i.e. the volume $ V $. By using definition (\ref{def-r0}) and the definition of $ \mathcal{F} $, equation (\ref{F}), in the constant-amplitude approximation the condition (\ref{cond-Ros-b}) reads: \begin{eqnarray} \mathcal{F}=\frac{2mc^2}{\hbar^2} \int \left\lbrace A^{2}\right\rbrace_{t-\frac{r}{c}} \frac{dxdydz}{r} &=&\frac{c^2}{r_0}\; ,\nonumber\\ \frac{2m}{\hbar^2} A^{2}\int \frac{dxdydz}{r} &=&\frac{1}{r_0}\; ,\nonumber\\ \frac{2mA^2}{\hbar^2}\frac{V}{r_0} &=& \frac{1}{r_0}\; ,\nonumber \end{eqnarray} i.e. \begin{equation}\label{norm-Ros} \frac{2mA^2V}{\hbar^2}=1 \; . \end{equation} This condition is consistent from the point of view of dimensional analysis. To understand it, let us consider action (\ref{az-glob2}). The presence of the four-dimensional Einstein coupling $ \kappa $ produces a consequence for the length dimensions of the wave function $ \Psi $. 
We recall that the curvature scalar has dimensions $ \left[ \tilde{R}\right] =(length)^{-2} $ in any number of space-time dimensions, and we observe that from (\ref{az-glob2}) it follows that $ \kappa\mathcal{L} $ and $ \tilde{R} $ have the same dimensions. As a consequence, the squared wave-function amplitude $ A^{2} $ has dimensions $ [A^{2}]=\frac{(length)(mass)}{(time)^{2}} $, as it should, because of equation (\ref{norm-Ros}). It is worth noting that from Rosenfeld's point of view the wave function of a particle is not a point singularity: its amplitude is non-zero in a finite volume $ V $. This fact is in contrast with de Broglie's point of view, as Rosenfeld anticipated in the introduction of his paper. In this paper, Rosenfeld did not make any particular comment on (\ref{norm-Ros}) and on the whole calculation: he would discuss the physical meaning of the whole apparatus in the subsequent papers, which we will briefly analyse in the following section. However, for us, Rosenfeld's calculation acquired a fundamental importance. Indeed, with this derivation the author showed for the first time how in the semi-classical limit GRQM is able to reproduce the RN metric in the weak-field approximation. In particular, the condition (\ref{norm-Ros}) found by Rosenfeld can be interpreted as the normalization condition for the wave function. In this pre-second-quantized picture, the normalization condition of the wave function can be imposed using the definition of the Hamiltonian\footnote{In the weak-field limit, at the first order, the metric is flat.} (\cite{Landau-teo-quant-rel}) $ H $: \begin{equation} H = \int d^3x T_{00}\, , \end{equation} where $ T_{00} $ is the $ 00 $ component of the total stress-energy tensor (\ref{tens-psi1}). The integration is carried out over the three-spatial volume for the following reason. 
The stress-energy tensor defined by Rosenfeld is a four-dimensional object, because of the unusual coupling between matter and geometry in the action (\ref{az-glob2}). The presence of the four-dimensional constant $ \kappa $ means that the stress-energy tensor's components represent an energy density with respect to the three-dimensional volume, instead of a four-dimensional volume. Rosenfeld was aware of this peculiarity, even if he made no specific comment, because he noted that equations (\ref{eq-glob-5D}) imply a relation for the four-dimensional curvature scalar\footnote{See appendix \ref{app3d} for a detailed explanation.}, namely \begin{equation}\label{def-dens-mass} R=-\kappa\left[ \gamma^{\nu\bar{\mu}}T_{\bar{\mu}\nu} -\gamma^{\mu\rho}A_\rho\nabla_\lambda\left( \gamma_{\mu\sigma}F^{\sigma\lambda}\right) \right]\; , \end{equation} which permitted him to define a four-dimensional mass density\footnote{Remember that in GR the trace of the stress-energy tensor is proportional to the curvature scalar and it is the energy density at first order in $ v/c $.} (\cite{Ros1}; p. 318, equation (63)), i.e. the quantity between the square brackets on the r.h.s. of (\ref{def-dens-mass}). For a stationary charge, in the weak-field limit, the four-dimensional mass density defined by Rosenfeld in (\ref{def-dens-mass}) coincides with $ T_{00} $. Moreover, for a localized wave packet the Hamiltonian must correspond to the rest energy $\mathcal{E}=mc^2 $ of the classical particle. In the case of a constant amplitude, the $ T_{00} $ value can be easily read off using equations (\ref{lagr-Ros}), (\ref{fun-onda}) and (\ref{tens-psi1}), and the normalization condition for the wave function reads: \begin{equation}\label{norma} \int d^3x \frac{2m^2c^2A^2}{\hbar^2} = mc^2\quad\Rightarrow\quad \frac{2m^2c^2A^2}{\hbar^2} V=mc^2 \quad\Rightarrow\quad \frac{2mA^2V}{\hbar^2}=1 \, , \end{equation} where $ V $ is the three-volume of the localized wave packet. 
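The reading-off can be reconstructed as follows (the intermediate steps are ours): for $ \Psi=A\,e^{\frac{i}{\hbar}\bar{S}} $ with constant $ A $ and $ \bar{S} $ given by (\ref{psi}), one has $ \partial_{0}\overline{\Psi}\,\partial_{0}\Psi=\frac{A^{2}}{\hbar^{2}}\left(\partial_{0}\bar{S}\right)^{2}=\frac{m^{2}c^{2}A^{2}}{\hbar^{2}} $, while $ \mathcal{L}=0 $ in the constant-amplitude (classical) case, so that equation (\ref{tens-psi1}) yields
\begin{equation*}
T_{00}=2\,\partial_{0}\overline{\Psi}\,\partial_{0}\Psi+\gamma_{00}\,\mathcal{L}=\frac{2m^{2}c^{2}A^{2}}{\hbar^{2}}\, ,
\end{equation*}
which is precisely the integrand appearing in (\ref{norma}).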
The normalization condition is precisely Rosenfeld's condition (\ref{norm-Ros}). This normalization condition can also be obtained by considering the conserved current $ j^{\bar{\mu}} $. In the weak-field approximation the continuity equation is $ \partial_{\bar{\mu}}j^{\bar{\mu}} = 0 $. Using the wave function Ansatz (\ref{fun-onda}) with a real constant amplitude $ A $, namely $ \Psi = A\, \exp\left[ \frac{i}{\hbar}\left( -\frac{e}{c\beta}x^5+mcx^0\right) \right] $, the continuity equation reads $ \displaystyle \frac{\hbar}{i}\frac{\partial\rho}{\partial t} = 0 $, where $ \rho $, the squared modulus of the ``probability amplitude'', is $ \rho = \frac{2m}{\hbar^{2}}A^2 $. By integrating over a three-spatial volume, because of the unusual length dimensions of the scalar field $ \Psi $, the normalization condition reads $ \frac{2mA^2V}{\hbar^2}=1 $, which is the same result obtained using the stress-energy tensor. In the rest of his first paper, Rosenfeld tried to generalize his previous results to the case of a many-body system. This generalization process would continue in his following papers, where the author also analysed the role of the wave-function amplitude $ A $. Rosenfeld inspected the consequences produced by considering a non-constant amplitude. In particular, he would be interested in its interpretation as a `potential of the internal forces' (\cite{Ros1}; p. 325) that should emerge when considering a continuous system. This idea was also shared by de Broglie, but was introduced by De Donder\footnote{The original citations are not quoted.}, as Rosenfeld wrote: `Recently, Mr. De Donder has introduced in WM two important concepts: the notion of \textit{permanence} of a system and the interpretation of the amplitude $ A $ of the Schr\"odinger's function $ \Psi $ as a \textit{potential of the internal tensions} of the system.'\footnote{We will not deepen the concept of ``permanence''.} (\cite{Ros2}; p. 447). 
\subsection{The role of the correspondence principle in QG}\label{corr-princ} As noted in our previous section, the first communication was sent to De Donder, who asked Rosenfeld to work with him during the summer of 1927. Even if they did not publish a joint paper, they cited each other in the communications published by the \textit{Bulletin de l'Acad\'emie royale de Belgique} \cite{DD-Ros}, \cite{Ros2}, \cite{Ros3}. Rosenfeld acknowledged De Donder explicitly at the end of the introduction: `My warmest thanks to Mr. De Donder, who never ceased to take an active interest in my work.' (\cite{Ros2}; p. 448). At the end of the third paper's introduction, Rosenfeld underscored again: `Mr. De Donder played an essential role in this work, because he suggested to me the basic idea. I owe a lot to De Broglie, who kindly continued to have a correspondence with me of which I took greatest advantage.' (\cite{Ros3}; p. 574). The main result of the Rosenfeld-De Donder collaboration was the introduction of Bohr's correspondence principle as a physical interpretation of Rosenfeld's previous mathematical treatment. As far as we know, this is the first time that Bohr's principle was invoked in searching for a theory that could reconcile WM with GR. In particular, Rosenfeld and De Donder posed this principle as one of the founding principles of this new theory, which De Donder called `the gravitational wave mechanics' (\cite{DD-Ros}; p. 506). The purpose of this subsection is to discuss the role of the correspondence principle, presenting Rosenfeld's following works: \cite{Ros2}, \cite{Ros3} and \cite{Ros5}. 
In order to understand the role of the correspondence principle, we start by pointing out that Rosenfeld was impressed by the fact that the stress-energy tensor (\ref{tens-psi1}) resembled the stress-energy tensor for a system of particles, whose form was: \begin{equation}\label{tens-part} T_{\mu\nu}=\sigma_{(m)}g_{\mu\rho}g_{\nu\sigma}u^{\rho}u^\sigma +P_{\mu\nu}\; , \qquad\text{where}\qquad u^\rho = \frac{dx^\rho}{d\tau}\; , \end{equation} as it appears in De Donder's MIT lectures (\cite{DeDonder2}; p. 52), and where $ \sigma_{(m)} $ represents the mass density as measured by the observer $ u^{\mu} $. For a swarm of non-interacting particles $ P_{\mu\nu} = 0 $; for a perfect fluid with pressure $ p $, $ P_{\mu\nu} = p\left( u_\mu u_\nu + g_{\mu\nu} \right) $ (\cite{MTW}; p. 132); while if dissipative processes are considered its form is more complicated. The resemblance between the stress-energy tensor of a scalar field and that of a system of particles emerges as follows. Rosenfeld considered the following Ansatz for the wave function and for the Jacobi function: \begin{eqnarray} \Psi \left( x \right) &=& A\left( x^0\, ,x^1\, , x^2\, , x^3 \right) e^{\frac{i}{\hbar}\bar{S}} \label{fun-onda-gen}\\ \bar{S}\left( x\right) &=& -\frac{e}{\beta c} x^5+S\left( x^0\, ,x^1\, , x^2\, , x^3 \right) \; , \label{form-Jacobi} \end{eqnarray} where now $ S $ has an unspecified form and $ A $ is an arbitrary real function. The author inserted (\ref{fun-onda-gen}) into equation (\ref{tens-psi1}), and the stress-energy tensor components read: \begin{equation}\label{emt} T_{\bar{\mu}\nu}=2\frac{A^2}{\hbar^2}\partial_{\bar{\mu}}\bar{S}\partial_{\nu}\bar{S}+ 2\partial_{\bar{\mu}}A\partial_{\nu}A + \gamma_{\bar{\mu}\nu}\mathcal{L}\, , \end{equation} where the $ 55 $ component has been explicitly omitted, because Rosenfeld was not interested in the $ 55 $ component of the five-dimensional Einstein equations. 
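The step from (\ref{tens-psi1}) to (\ref{emt}) is a short computation, which we spell out because it also shows why the tensor is real although $ \Psi $ is complex: with $ \partial_{\bar{\mu}}\Psi=\left(\partial_{\bar{\mu}}A+\frac{i}{\hbar}A\,\partial_{\bar{\mu}}\bar{S}\right)e^{\frac{i}{\hbar}\bar{S}} $ and $ A $ real, one finds
\begin{equation*}
\partial_{\bar{\mu}}\overline{\Psi}\,\partial_{\nu}\Psi+\partial_{\nu}\overline{\Psi}\,\partial_{\bar{\mu}}\Psi
=2\,\partial_{\bar{\mu}}A\,\partial_{\nu}A+\frac{2A^{2}}{\hbar^{2}}\,\partial_{\bar{\mu}}\bar{S}\,\partial_{\nu}\bar{S}\, ,
\end{equation*}
the imaginary cross terms cancelling pairwise, which reproduces the first two terms of (\ref{emt}).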
Using the inverse components of the metric, equation (\ref{inverse}), Rosenfeld rewrote equation (\ref{RosB}), which we rewrite here for convenience, \begin{equation} \gamma^{\mu\bar{\nu}}\partial_{\bar{\nu}}\bar{S}=mc\frac{dx^{\mu}}{d\tau}\; , \end{equation} in the following form: \begin{equation}\label{RosB2} g^{\mu\nu}\partial_\nu S = mcu^\mu + \frac{e}{c}A^\mu\; . \end{equation} Equations (\ref{RosB2}) and (\ref{form-Jacobi}) imply that: \begin{eqnarray} \partial_{\mu}\bar{S} &=& \partial_\mu S = g_{\mu\nu} mcu^\nu + \frac{e}{c}A_\mu\; ,\label{S1}\\ \partial_5\bar{S} &=& -\frac{e}{\beta c}\; .\label{S2} \end{eqnarray} Inserting equations (\ref{S1}) and (\ref{S2}) into (\ref{emt}), the author obtained\footnote{Equation (\ref{emt-em}) was obtained by raising an index with the five-dimensional metric, $ \displaystyle \gamma^{\bar{\rho}\bar{\mu}}T_{\bar{\mu}\bar{\nu}} $, and then choosing $ \bar{\rho}=\rho $ and $ \bar{\nu}=5 $.} (\cite{Ros2}; p. 454): \begin{eqnarray} T_{\mu\nu} &=&\varrho_{(m)} g_{\mu\rho}g_{\nu\sigma}u^{\rho}u^\sigma + \Pi_{\mu\nu}\label{emt-spaz}\\ \beta {T_5}^{\nu} &=& \varrho_{(e)} u^\nu + \Lambda^\nu \label{emt-em}\, , \end{eqnarray} where we define, following Rosenfeld, a ``quantum'' mass density $ \varrho_{(m)} $ and a ``quantum'' charge density\footnote{Remember that ab initio we decided to consider the case of $ q=-e $.} $ \varrho_{(e)} $: \begin{equation}\label{density} \varrho_{(m)}= \frac{2m^2c^2}{\hbar^2}A^2\qquad\quad\qquad\varrho_{(e)}=-\frac{2em}{\hbar^2}A^2\, . \end{equation} Equations (\ref{emt-spaz}) and (\ref{emt-em}) require some comments, because, from Rosenfeld's and De Donder's point of view, they are the basis for invoking the correspondence principle. Firstly, the analogy between (\ref{tens-part}) and (\ref{emt-spaz}) is now evident, and this explains why $ \varrho_{(m)} $ could play the role of a mass density.
In order to understand why $ \varrho_{(e)} $ represents a charge density, we remember that the Maxwell equations on curved space-time for a classical charged system are \begin{equation}\label{def-e} \nabla_\mu F^{\nu\mu} = j^\nu\qquad \text{with} \qquad j^\nu = \sigma_{(e)}u^\nu\; , \end{equation} where $ \sigma_{(e)} $ represents the charge density of the system as measured by the observer $ u^{\mu} $. On the other hand, the Maxwell equations obtained from the five-dimensional Einstein equations coupled to the wave function stress-energy tensor (\ref{eq-glob-5D}) are\footnote{See the appendix \ref{app3c} for technical details.}: \begin{equation}\label{maxwell} \nabla_\mu F^{\nu\mu}=\beta{T_5}^\nu\, . \end{equation} Therefore, it is evident that $ \beta {T_5}^{\nu} $ could play the role of the current density $ j^{\nu} $ and, as a consequence, equation (\ref{emt-em}) defines a charge density $ \varrho_{(e)} $. Secondly, this is the point where Bohr's principle comes into play. At the end of the introduction of his communication, Rosenfeld underscored that the identification of $\varrho_{(m)}$ and $ \varrho_{(e)} $ with the mass and electric densities of the quantum system is `a particularly instructive aspect of the \textit{correspondence principle}' (\cite{Ros2}; p. 448): he stressed that this claim would deserve further analysis and that the connection between the above identification and the correspondence principle had been suggested by De Donder. At the end of the fifth section of the brief communication, Rosenfeld remarked that (we changed the original equation numbers in order to fit our numerical order): `equations (\ref{emt-spaz}) and (\ref{emt-em}) show that $\varrho_{(m)}$ and $ \varrho_{(e)} $ should be interpreted as a mass density and an electric density of the system, or, better$ (^*) $, \textit{corresponding} to the system [...]' (\cite{Ros2}; p. 454).
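Combining equations (\ref{maxwell}), (\ref{emt-em}) and (\ref{density}) makes the identification explicit (the following summary formula is ours, not one appearing in Rosenfeld's paper):
\[
\nabla_\mu F^{\nu\mu}\;=\;\beta {T_5}^{\nu}\;=\;\varrho_{(e)}\,u^\nu+\Lambda^\nu\; ,
\qquad
\varrho_{(e)}=-\frac{2em}{\hbar^2}A^2\; ,
\]
so that, wherever the amplitude $ A $ varies slowly enough for $ \Lambda^\nu $ to be negligible, the five-dimensional Maxwell equations take exactly the classical form (\ref{def-e}), with the classical density $ \sigma_{(e)} $ replaced by the ``quantum'' density $ \varrho_{(e)} $.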
The italics are Rosenfeld's own, and in the footnote corresponding to the symbol $ (^*) $ he underscored again that this remark had been suggested by De Donder. The term ``corresponding'' referred to the formal correspondence between a classical and a quantum system. Indeed, $ \varrho_{(m)} $ and $ \varrho_{(e)} $ depend on the wave function's amplitude. In the following papers, Rosenfeld would clarify how his approach was connected with Bohr's correspondence principle. Our last comment concerns the terms $ \Pi_{\mu\nu} $ and $ \Lambda_\nu $. Their precise form will not be discussed here, but it is worth noting that they contain the contribution due to the fact that the amplitude is not constant. From Rosenfeld's and De Donder's point of view, the $ \Pi_{\mu\nu} $ tensor would represent the contribution of the internal forces of the system, while $ \Lambda_\nu $ was called `quantum current' (\cite{Ros5}; p. 665) by Rosenfeld, maybe because it has no classical analogue. In the third communication Rosenfeld dedicated an entire section to enunciating his principle of correspondence, explicitly referring to Bohr's principle and also describing what he had in mind as a QG theory (we changed the original equation numbers in order to fit our numerical order): \begin{quote} `The wave mechanics obtained using the variational principle (\ref{az-glob2}) realizes \textit{formally} the fusion between Gravity and quantum theory. To the \textit{field equations} that describe gravitational and electromagnetic phenomena, we added the \textit{equation of quantization} (\ref{KG4}), that rules the quantum-energy exchanges.
In this last equation intervenes the \textit{fundamental quantity} $ \Psi $, and the fusion between the two theories is represented by the fact that the five-dimensional matter tensor that is present in the [gravitational] field equation is defined using the fundamental quantity $ \Psi $; on the contrary, in a \textit{pure} Einsteinian gravitational theory, this tensor is a function of different fundamental quantities of the system: the \textit{mass density} $ \sigma_{(m)} $ and the \textit{electric charge density} $ \sigma_{(e)} $.'\footnote{The term `\textit{pure} Einsteinian gravitational theory' seems to refer to the classical theory obtained without the introduction of the ``quantum field''. We introduced Rosenfeld's symbols $ \sigma_{(m)} $ and $ \sigma_{(e)} $ in equations (\ref{tens-part}) and (\ref{def-e}) respectively.\label{pure}} (\cite{Ros3}; p. 574). \end{quote} Rosenfeld used different letters for the mass and charge densities because he wanted to emphasize the difference between a classical system and the corresponding quantum system. The author continued: \begin{quote} `The new definition of the stress-energy tensor as a function of $ \Psi $, (\ref{tens-psi1}), implies a modification of our conception of the role of the fundamental quantities $ \sigma_{(m)} $ and $ \sigma_{(e)} $. In the Einsteinian theory these quantities intervene directly in the field equations in order to fix the gravitational and the electromagnetic potentials, corresponding to a given distribution $\left( \sigma_{(m)}\, ,\, \sigma_{(e)}\right) $. In Wave Mechanics, these quantities do not intervene directly, but through [..] the quantity $ \Psi $. [...] The material tensor $ T^{\bar{\mu}\bar{\nu}} $ as a function of $ \Psi $ should not necessarily be identical to the material tensor of \textit{pure} Gravity, which is defined as a function of $ \sigma_{(m)} $ and $ \sigma_{(e)} $.
It seems desirable to analyse, thenceforward, as soon as possible, the behaviour of the $ T^{\bar{\mu}\bar{\nu}} $ tensor, \textbf{in order to emphasize all possible modifications to Gravity produced by the introduction of the quantum quantity} $ \Psi $; this is the role of the \textit{principle of correspondence}. [bold form added]'\footnote{The term \textit{pure Gravity} can be interpreted as GR. See also footnote (\footref{pure}).} (\cite{Ros3}; p. 575). \end{quote} The bold text clearly emphasizes the physical meaning of the calculation presented in section \ref{calculation}. From Rosenfeld's point of view, the introduction of the wave function was responsible for the modifications of the ``pure'', i.e. classical, GR, because even in the case of constant amplitude it permits us to introduce two quantum quantities, corresponding to the classical quantities $ \sigma_{(m)} $ and $ \sigma_{(e)} $: through the new stress-energy tensor, the new quantities $ \varrho_{(m)} $ and $ \varrho_{(e)} $, defined by (\ref{density}), must be considered as the quantum sources of the gravitational and electromagnetic fields. Indeed Rosenfeld continued: \begin{quote} `The comparison between $ \varrho_{(m)} $ and $ \varrho_{(e)} $, and $ \sigma_{(m)} $ and $ \sigma_{(e)} $ will show us how the quantum objects will modify the gravitational and the electromagnetic phenomena. It will be possible to enunciate a more precise and \textit{general} correspondence principle; [...] there are some precise formulas that define, in a strict sense, the principle of correspondence and that establish the identification of the formal schema of wave mechanics with the gravitational schema of Th. De Donder, [...] showing how Wave Mechanics widens the picture of pure Gravity, in order to incorporate quantum phenomena.' (\cite{Ros3}; p. 575).
\end{quote} It is important to stress that, like Klein, de Broglie and De Donder, Rosenfeld never discussed the role of the boundary conditions of the wave function. Like De Donder, he referred to the introduction of the wave function as the `equation of quantization'. It is worth remembering that Heisenberg's uncertainty principle had been introduced only in February of the same year \cite{Heisenberg}; at that time, Rosenfeld considered it sufficient to introduce the wave function into Einstein's equations in order to describe correctly the coupling between gravity and quantum matter. Rosenfeld did not cite any of Bohr's papers, but the idea that the correspondence principle could be a theoretical argument to infer the behaviour of a quantum system with respect to the classical one is a consequence of Bohr's influence. Indeed, in the introduction of the third communication, Rosenfeld declared that his approach, i.e. the variational principle, is a `formal theory' (\cite{Ros3}; p. 573). Then he continued: `To put a physical interpretation [on the formal theory], we let ourselves be guided by the \textit{correspondence principle}, using the interpretation given by O. Klein \cite{Klein6} ...' (\cite{Ros3}; p. 573). In order to understand Bohr's role, we briefly analyse Klein's paper \cite{Klein6}. Klein's work is a cornerstone of the history of QM. Before that article, matrix mechanics was the only approach incorporating the correspondence principle\footnote{Heisenberg referred to the fact that classical results can be obtained, in the matrix-mechanics approach, in the limit of high quantum numbers.}, as Heisenberg himself reported in his review of matrix mechanics' successes in 1926 (\cite{history-QM}; p. xxxii). In this sense, the title of Klein's contribution was very revealing: \textit{Electrodynamics and Wave Mechanics from the point of view of the Correspondence Principle}.
As reported in \cite{history-QM}, Bohr was aware of the content of Klein's work and expressed an enthusiastic comment in a letter to Schr\"odinger (\cite{history-QM}; p. 176). In particular, Bohr was fascinated by the connection between Hamiltonian mechanics and the HJ dynamics of wave rays, which generated Klein's relativistic WM. Paraphrasing Bohr's words, he was interested in the fact that, thanks to this analogy, it is possible to build, on the basis of WM, a corresponding theory. Klein's main purpose was to investigate the possibilities of exploiting relativistic WM for understanding atomic processes involving discontinuities. In Klein's paper, the correspondence principle intervenes when the author tries to modify Maxwell's equations. Schr\"odinger also expressed the idea that the wave function `possesses the property to enter even the untouched [classical] Maxwell-Lorentz equations between the electromagnetic field vectors as a ``source'' of the latter' (\cite{history-QM}; p. 43). In 1927 Schr\"odinger also investigated the effect on the stress-energy tensor obtained by a unified variational principle involving Maxwell's Lagrangian and the complex scalar field Lagrangian, i.e. `the de Broglie wave' (\cite{Schroe2}; p. 265). Unlike Klein, de Broglie and Rosenfeld, Schr\"odinger declared explicitly that he would consider neither additional dimensions, nor gravitational field contributions.
Indeed, Schr\"odinger's Lagrangian $ \mathcal{L}_S $ is the sum of Maxwell's Lagrangian, $\displaystyle \mathcal{L}_{em} =-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} $, and $ \mathcal{L}_\psi $, the Lagrangian for material fields, which is related to De Donder's work, see (\ref{lagr-DD}); indeed, Schr\"odinger cited De Donder's contribution: \begin{equation} \mathcal{L}_S =\mathcal{L}_{em} + \mathcal{L}_\psi =-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} -\eta^{\mu\nu}\left( \partial_{\mu}\psi+\frac{i}{\hbar}\frac{e}{c}A_\mu\psi\right) \left( \partial_{\nu}\overline{\psi}-\frac{i}{\hbar}\frac{e}{c}A_\nu\overline{\psi}\right)+\frac{m^2c^2}{\hbar^2}\overline{\psi}\psi \; . \end{equation} $ \mathcal{L}_S $ can be obtained after a dimensional reduction from Rosenfeld's Lagrangian (\ref{az-glob2}) in the limit of a flat background. But Schr\"odinger did not investigate the role of $ \psi $ as a source of the electromagnetic field, because he explicitly asserted that the KG Lagrangian $ \mathcal{L}_\psi $ did not describe any real field. Klein, on the other hand, analysed precisely this aspect, inspired by the idea of using the correspondence principle. First he manipulated his scalar relativistic equation to define the four-vector $ j^\mu = \left( \rho\, ;\, j^i \right) $, where\footnote{The symbols have the usual meaning. We remember that the electromagnetic potentials are $ A^\mu = (A^0 ; A^i) $.} (\cite{Klein6}; p. 414, equations (20)): \begin{eqnarray} \rho &=& -\frac{e}{2mc^2}\left\lbrace -\frac{\hbar}{i}\left( \overline{\psi}\frac{\partial\psi}{\partial t}-\psi\frac{\partial\overline{\psi}}{\partial t}\right)+2e\overline{\psi}\psi A^0 \right\rbrace \label{j1}\\ j^i &=& -\frac{e}{2m}\left\lbrace \frac{\hbar}{i}\eta^{ij}\left( \overline{\psi}\partial_j\psi-\psi\partial_j\overline{\psi}\right)+2\frac{e}{c}\overline{\psi}\psi A^i \right\rbrace \; .
\label{j2} \end{eqnarray} Then he showed that, using the usual geometrical optics Ansatz $ \psi=e^{\frac{i}{\hbar}S} $ for the wave function, in the semiclassical limit $ \hbar\rightarrow 0 $, equations (\ref{j1}) and (\ref{j2}) reduce to the usual charge and current densities of a relativistic scalar charged particle, namely: \begin{eqnarray} \rho_{cl} &=& -\frac{e}{\sqrt{1-\left( v^2 / c^2 \right) }} \label{j1a}\\ j^i_{cl} &=& -\frac{ev^i}{\sqrt{1-\left( v^2 / c^2 \right) }}\; , \label{j2a} \end{eqnarray} where $ v^i $ is the three-velocity of the particle\footnote{The role of the analogy between Hamiltonian dynamics and the dynamics of wave rays is fundamental to obtain these relations.} and $ v $ its modulus. Finally, using the correspondence principle, Klein interpreted equations (\ref{j1}) and (\ref{j2}) as the source for the electromagnetic field, in order to investigate the quantum modifications of the Maxwell equations, namely: \begin{eqnarray} \partial_i E^i &=& 4\pi\rho \label{j1b}\\ \varepsilon^{ijk}\partial_j B_k - \frac{1}{c}\frac{\partial E^i}{\partial t} &=& \frac{4\pi}{c}j^i\; . \label{j2b} \end{eqnarray} Klein solved the Maxwell equations (\ref{j1b}) and (\ref{j2b}) using the advanced and the retarded potentials, in order to write an expression for the electric and the magnetic fields as functions of $ \psi $. Klein identified these electric and magnetic fields with the electromagnetic field produced by the bound electron\footnote{Unlike Rosenfeld, Klein also considered the full quantum treatment, introducing the eigenfunction expansion for the wave field.}, by means of the correspondence principle (\cite{Klein6}; p. 422, equations (41). See also equations (33), (28) and (18)). As we have seen, Rosenfeld followed the same path in order to obtain an expression for the metric components, explicitly referring to Klein's paper. In this sense, Rosenfeld was the first author to introduce the correspondence principle in the context of QG.
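The reduction of (\ref{j1}) to (\ref{j1a}) can be checked explicitly (the following steps are our modern reconstruction of the argument, not Klein's own notation). With $ \psi=e^{\frac{i}{\hbar}S} $ and $ S $ real, one has
\[
\overline{\psi}\,\frac{\partial\psi}{\partial t}-\psi\,\frac{\partial\overline{\psi}}{\partial t}
=\frac{2i}{\hbar}\,\frac{\partial S}{\partial t}\; ,
\qquad\text{so that}\qquad
\rho=-\frac{e}{mc^2}\left(-\frac{\partial S}{\partial t}+eA^0\right)\; .
\]
The relativistic Hamilton-Jacobi equation for a particle of charge $ -e $ gives $ -\partial S/\partial t+eA^0=mc^2/\sqrt{1-\left( v^2/c^2\right)} $, and (\ref{j1a}) follows at once; the spatial components (\ref{j2a}) are obtained in the same way from $ \partial_i S $.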
It is worth noting that in the five-dimensional picture the Maxwell equations are naturally coupled to the four-current, as Rosenfeld himself showed with relation (\ref{maxwell}). This seemed to be another advantage of the five-dimensional approach. \subsection{Back to the present}\label{discussion} In his last paper of the year\footnote{In a brief communication to the \textit{Comptes rendus} in June of the same year, Rosenfeld claimed that he was able to reproduce Epstein's description of `the magnetic electron of Uhlenbeck and Goudsmit' (\cite{Ros4}; p. 1541), i.e. the spinning electron, using the five-dimensional apparatus described in the previous section. We will not go into the reasons that could explain Rosenfeld's claim, because we postpone this analysis to a future work.}, written in October 1927, Rosenfeld gave a detailed and wider exposition of all the concepts introduced in his previous work. His idea was to formulate a sort of formal basis for the five-dimensional Universe as a unified framework for GR and WM. The foundations of the whole building are three principles: a variational principle, i.e. equation (\ref{az-glob2}); the principle of Schr\"odinger eigenfunctions, i.e. the usual `boundary conditions that must be imposed on $ \Psi $ and $ \overline{\Psi} $ in order to quantize the system' (\cite{Ros5}; p. 665); and the correspondence principle, which the author formulated with the help of De Donder. Rosenfeld also cited a paper written by De Donder, where the latter tried to give a more precise formulation of the principle \cite{DD-Ros}. Unlike Rosenfeld, De Donder would not abandon this idea in the future. Indeed, while Rosenfeld seemed to be convinced that quantum theory should modify GR, De Donder would continue to claim that GR and WM were compatible theories \cite{DD-1930}.
Rosenfeld confirmed the ideas proposed in the previous paper, claiming that the components of the new stress-energy tensor, as a function of the wave function $ \Psi $, should play the role of `quantum currents', i.e. of quantum sources for the right-hand side of the Maxwell and Einstein equations. The author wrote explicitly: `The \textit{correspondence principle} consists in stating that this analogy is not only a formal analogy, but also a physical analogy.' (\cite{Ros5}; p. 666). He also emphasized the particular nature of the correspondence principle: `There exist \textit{postulates} in the sense of formal logic, whilst the correspondence principle is a \textit{physical principle} [...]' (\cite{Ros5}; p. 667). Rosenfeld meant that the extension of the analogy from the formal plane to the physical plane is a sort of meta-sentence, different, in this sense, from a formal sentence of the ``basic language'' of the equations, like e.g. the variational principle. Rosenfeld's approach, as well as de Broglie's proposal, were briefly discussed at the Solvay conference. As stated above, in section 2, Rosenfeld was not officially admitted to the conference, but De Donder invited him to accompany him, so that he would have the possibility of meeting Pauli at the conference. The conference's proceedings showed once again how de Broglie, Rosenfeld and De Donder agreed on the meaning of the five-dimensional Universe. De Broglie asserted that De Donder succeeded in harmonizing Einstein's theory with WM (\cite{Crossroad}; p. 483); De Donder tried to draw attention to the MIT lectures we previously discussed, speculating on a connection between his correspondence principle and Bohr's reflections (\cite{Crossroad}; p. 483). Subsequently, De Donder stated that there is a connection between de Broglie's contributions, his own work and Rosenfeld's ideas (\cite{Crossroad}; p. 499 and 519). De Donder would try again to discuss his approach (\cite{Crossroad}; p.
470; 471; 510), but the questions raised by De Donder and de Broglie would not be taken up by the group of physicists. De Donder's approach to Hamiltonian dynamics discussed in section 2 is peculiar, because he systematically introduced the use of poly-momenta $ p^a_\mu $, obtained starting from a Lagrangian $ \mathcal{L}(y^a\, ,\, \partial_\mu y^a ) $, a function of some variables $ y^a $ and of their derivatives, by differentiating it with respect to all of the derivatives, $p^a_\mu = \frac{\partial\mathcal{L}}{\partial\left(\partial_\mu y^a\right)} $, instead of with respect to the time derivative only, as usual. This convention, sometimes called the De Donder-Weyl approach, and its generalization to a curved space-time have survived until recent years as an alternative approach to the quantization of gravity, today known as pre-canonical quantization \cite{Kanatchikov1}, \cite{Kanatchikov2}. At the end of 1929, after his stay in G\"ottingen, Rosenfeld moved to Z\"urich where, stimulated by Pauli, he tried to inspect what we today call the gravitational self-energy of a quantized electromagnetic field. In \cite{Rosenfeld1} he approached the problem in a way that resembles the work analysed here. As in his previous work, he integrated again the linearised Einstein equations, this time coupled with the Maxwell equations only. The quantized electromagnetic field played the role of the complex scalar field. Rosenfeld used the annihilation- and creation-operator approach for treating the electromagnetic field, hence the metric field $ h_{\mu\nu} $ itself was described by an operator. In this sense he obtained again a sort of quantum metric, because it was generated by a quantum field. Rosenfeld did not cite the previous papers we have analysed, but we must stress the importance of his early work, because of the affinity of the path followed by the author. The term quantum metric could be understood in a complementary way.
The quantum corrections to the classical gravitational field can be considered as the contribution to the classical effects produced by the quantization of the gravitational field. In the mid-Thirties, Matvei Bronstein \cite{Bronstein1} would quantize for the first time the gravitational field directly in the weak-field limit, in order to understand quantum deviations from the classical Newton law. Only 37 years later, after the development of perturbation theory, Michael Duff \cite{Duff-72} tried to understand the quantum corrections to the Schwarzschild metric. Duff used explicitly a classical source and quantized directly the gravitational field. At tree level, in the weak-field limit, he obtained the classical results, while the quantum corrections came from the one-loop contributions. Finally, we address the following question: what is the physical meaning of Rosenfeld's result from the modern point of view? Rosenfeld interpreted the particle's wave function as the source of the gravitational field. From a modern point of view, this approach treats the gravitational interaction as a classical phenomenon and the particle's description as fully quantized. This means that Rosenfeld's procedure gives a semi-classical result, even in the case of non-constant amplitude. From a modern point of view, Rosenfeld's results can be obtained as the non-relativistic limit of the so-called semi-classical Einstein equations, an approach formally suggested for the first time by M\o{}ller \cite{Moller}. These equations are obtained by replacing the stress-energy tensor, i.e. the r.h.s. of the Einstein equations, by the expectation value of the stress-energy operator $ \hat{T}_{\mu\nu} $ with respect to some quantum state $ \ket{\Psi} $. In four dimensions they have the following form: \begin{equation}\label{sEe} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = 8\pi G \bra{\Psi}\hat{T}_{\mu\nu}\ket{\Psi}\, .
\end{equation} The modern interpretation of equation (\ref{sEe}) is connected with the character of the coupling between gravity and matter. This character has not yet been clarified and it is an open problem in the QG research area. It is equivalent to the question of whether gravity should be quantized or not \cite{Treder}. This long-standing debate, see e.g. \cite{Carlip} and \cite{Kiefer}, divided the physics community into two groups and was, incidentally, initiated by Rosenfeld himself \cite{Rosenfeld63}. On one side are those who believe that the gravitational interaction must be quantized; on the other side, those who believe that the gravitational interaction must remain classical. As a consequence, for the first group equations (\ref{sEe}) can be derived approximately from canonical QG as a kind of mean-field equation \cite{Kiefer}. In this case, the metric obtained by integrating the linearised Einstein equations following Rosenfeld's procedure is a sort of ``mean metric'' $ \bra{\Psi}\hat{g}_{\mu\nu}\ket{\Psi} $, where the hat symbol means that the metric should be an operator. This perspective is also shared by those who investigate the behaviour of QFT on a curved background \cite{Birrel-Davies}, an approach that led to Hawking's results on black-hole entropy. For the second group, the coupling between quantum fields and classical gravity described by the Einstein equations should be understood as a fundamental description of nature. As a consequence, they interpret the l.h.s. of (\ref{sEe}) as evaluated using the classical metric. From this perspective, a possible starting point for reconciling WM with gravity is the so-called Schr\"odinger-Newton equation\footnote{The Schr\"odinger-Newton equation was introduced by Roger Penrose to provide a dynamical description of the quantum wave function's collapse \cite{Penrose}.} \cite{Grossart}, where the source of the gravitational field is represented by the squared modulus of the wave function.
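For completeness, the Schr\"odinger-Newton system for a single particle of mass $ m $ reads, in its standard form,
\[
i\hbar\,\frac{\partial\psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\psi+m\Phi\,\psi\; ,
\qquad
\nabla^2\Phi=4\pi G\,m\,\lvert\psi\rvert^{2}\; ,
\]
so that, exactly as in Rosenfeld's scheme, the squared amplitude of the wave function acts as the mass density sourcing the (here Newtonian) gravitational potential $ \Phi $.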
We do not enter the debate on which approach could be the fundamental one, because we believe that any extension of our conceptual framework for the description of nature would be of interest in itself. We observe that recently there has been a revival of Rosenfeld's ideas coming from the second group of physicists. Modern authors, \cite{Giulini} and \cite{Grossart}, with different scopes, used some of Rosenfeld's ideas, extended to the non-static case. More precisely, in \cite{Giulini} the authors studied the coupling between the KG field and gravity in the case of a non-static, spherically symmetric space-time, in the limit of the semi-classical and non-relativistic approximation, from the four-dimensional point of view. Following Kiefer's scheme for the non-relativistic and semi-classical approximation, the authors investigated the KG equation on a curved background, showing that it reduces, in this WKB-like scheme, to the Schr\"odinger-Newton equation at a certain order of the WKB expansion. The Einstein equations coupled with the KG stress-energy tensor reduce, in the same approximation, to the Poisson equation for the gravitational potential, where the wave function amplitude plays the role of the mass density. This means that, as in Rosenfeld's scheme, the wave function is the source of the metric. At the order chosen by the authors, the metric itself results as an expansion in powers of $ \frac{\hbar}{c^2} $ and it depends on the wave amplitude of the field. In the weak-field limit, the quantum-mechanical description can be derived from the field-theoretic approach with a well defined procedure, which allows one to use the wave function instead of Fock states \cite{QFT-QM}. In \cite{Grossart}, the authors refined their analysis using the second-quantised formalism and then applied the procedure to find the quantum-mechanical limit. Once again they found that the wave function is the source of the gravitational field, as in Rosenfeld's approach.
\section{Summary and Conclusions}\label{summary} In this paper we have described the earliest of Rosenfeld's contributions, those of 1927. From an historical point of view, Rosenfeld's work is interesting for various reasons. First, it contains many ingredients that the author would use in his future work. Second, it shows how Rosenfeld was influenced by his mentors: Oskar Klein, Louis de Broglie and Th\'eophile De Donder. Third, it offers a connection between the history of QM and the history of QG. We started by considering the main results achieved by his mentors at the time he started to write his first paper. Klein wrote a five-dimensional unified variational principle for the electromagnetic and the gravitational field. He introduced the relativistic wave equation on a curved background using the correspondence between Hamiltonian dynamics for point particles and the HJ equation in the geometrical optics limit. Following this correspondence, Klein tried to introduce a sort of massless KG equation, in analogy with light. De Broglie was pressed by Rosenfeld, who joined the French physicist in Paris, to investigate the features of the five-dimensional Universe. De Broglie showed that it is not necessary to consider null-geodesics, and that the four-dimensional geodesics can be represented as the projection of five-dimensional geodesics. De Broglie built his five-dimensional Universe using an inconsistent time-like extra dimension, as Klein himself would note in a subsequent paper. De Donder, the third character of our story, introduced the Lagrangian approach involving the wave function, treating it as a field, again using the correspondence between Hamiltonian particle dynamics and the HJ equation for wave rays. De Donder interpreted the introduction of a unified variational principle as the mathematical instrument responsible for the quantization of the system, because it produces the KG equation.
He was convinced that no modifications of GR were needed to describe quantum phenomena. De Donder played a fundamental role in Rosenfeld's work. Rosenfeld sent his first paper to De Donder, who presented it for publication in the \textit{Bulletin de l'Acad\'emie royale de Belgique}. Even though we have not analysed any De Donder-Rosenfeld correspondence, a collaboration between these authors emerges clearly. Furthermore, De Donder invited Rosenfeld to the fifth Solvay conference, where De Donder tried to draw attention to Rosenfeld's work and where Rosenfeld met Einstein and the physicists of the G\"ottingen school. After having introduced Klein's, de Broglie's and De Donder's approaches, we considered Rosenfeld's work. In his first paper, Rosenfeld tried to go one step further than his mentors. He decided to put De Donder's action model in a five-dimensional context, building upon the work of Klein and de Broglie. His second contribution, central in our analysis, was to address the task of understanding which metric could be generated by a quantum object, i.e. a localized electron's wave function. Rosenfeld also tried to understand which conditions must hold in order that WM and GR could reproduce, in a semi-classical approximation, a classical metric in the weak-field limit. Studying this problem, he presented for the first time a quantum modification of the flat metric, because of the appearance of $ \hbar $. In his following papers, thanks to De Donder's collaboration, Rosenfeld succeeded in giving a physical meaning to his mathematical treatment. De Donder recognized the idea of Bohr's correspondence principle in the use of the wave function's stress-energy tensor as a source of the gravitational field. In his third communication Rosenfeld himself explicitly recognized that his approach to QG was inspired by what Klein had done in the context of Maxwell's equations.
Thanks to De Donder, Rosenfeld started to interact with Pauli, Jordan, Bohr himself and many other physicists who would play, unlike de Broglie and De Donder, a fundamental role in constructing the new quantum theory of fields. After 1927, Rosenfeld would convince himself of the importance of quantizing these new objects and, stimulated by Pauli, he would study again the problem of a quantum metric, but using the new-born quantum theory of massless spin-1 fields \cite{Rosenfeld1}. From an historical point of view, this paper concluded what we called the prehistory era in the history of QG. Even if he never considered his early papers on QG an important work, Rosenfeld's contributions show how the search for a theory that could reconcile quantum phenomena with GR started early and reached interesting results, which would continue to be valid in the context of quantum field theory on a curved space-time. Even if Klein, de Broglie, De Donder and Rosenfeld were not a research group in the modern sense, in 1927 their works were related by a common purpose. The problem of finding a quantum theory of gravity has never been limited, and is not limited today, to the quantization of the gravitational interaction only. We now know that attempts to apply directly to the gravitational field the quantization procedures that have been successful in other contexts have failed. From the beginning of the prehistory of QG, the authors who tried to face the problem of reconciling quantum phenomena with gravity interpreted the idea of QG in the broadest sense. From an historical point of view, the following statement is particularly apt: `In the broadest sense, a quantum theory of gravitation would represent an extension of our conceptual framework for the description of nature: any such extension would be of interest in itself.' (\cite{Ashtekar}; p. 1213).
\section*{Acknowledgement} We express our gratitude to all anonymous referees who gave us the opportunity of improving the original manuscript. We are very grateful to Kurt Lechner for his invaluable comments and suggestions. \noindent{This work has been supported in part by the DOR 2016 funds of the University of Padua.} \section{Appendices}\label{apps} \subsection{Wave optics and null-geodesics in Klein's five-dimensional manifold}\label{geom-optic} Klein's original idea was to write a wave equation in analogy with light in the context of his five-dimensional Universe. This appendix follows Klein's original approach \cite{Klein1}. In a curved five-dimensional space-time, a relativistic generalization of the Schr\"odinger equation is represented by the following equation: \begin{equation}\label{KG1bis} a^{\bar{\mu}\bar{\nu}}\left( \delta^{\bar\sigma}_{\bar\nu}\frac{\partial}{\partial x^{\bar{\mu}}} -\Gamma_{\bar{\mu}\bar{\nu}}^{\bar{\sigma}}\right)\partial_{\bar{\sigma}}\Psi= a^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\Psi=0\; , \end{equation} where $ \Psi $ is the wave function and the covariant derivative $ \nabla_{\bar{\mu}} $ is defined using the Christoffel symbols $ \Gamma_{\bar{\mu}\bar{\nu}}^{\bar{\sigma}}$. As stated in the main text, Klein defined the Christoffel symbols using the space-time metric $ \gamma_{\bar{\mu}\bar{\nu}} $, which we rewrite here for convenience: \begin{equation}\label{dsigma1} d\sigma ^2 = \alpha d\theta ^2 + ds^2 \, , \end{equation} where \begin{equation}\label{dsigma2} d\theta = dx^5 + \beta A_\mu dx^{\mu}\quad ;\quad g_{\mu\nu} = \gamma_{\mu\nu} - \frac{16\pi G}{c^4}A_{\mu}A_{\nu}\quad ;\quad ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}\; \, . 
\end{equation} Equation (\ref{KG1bis}) resembles a massless equation for a scalar field, where the inverse of the space-time metric $ \gamma^{\bar{\mu}\bar{\nu}} $ has been replaced by the tensor $ a^{\bar{\mu}\bar{\nu}} $, whose covariant components are defined by equation (\ref{metric-K}). As stressed in sections \ref{section-Klein} and \ref{section-deBroglie}, this fact generated the ambiguity in Klein's approach, criticized by de Broglie. Following Klein's approach, we shall show how the wave equation (\ref{KG1bis}) is connected with five-dimensional null-geodesics that reduce to the four-dimensional equations of motion for charged massive particles in a combined electromagnetic and gravitational field. In the geometrical optics limit a wave front propagates locally as a plane-fronted wave. Therefore, the Ansatz for the wave function is \begin{equation}\label{psi-app} \Psi (x) = A e^{i\omega S(x)} \end{equation} where $ \omega $ is so large that only the term proportional to $ \omega^2 $ in equation (\ref{KG1bis}) needs to be taken into account. The function $ S=S(x) $ is termed the eikonal and it characterizes the phase of the wave. Substituting (\ref{psi-app}) into the wave equation, the term with two derivatives is proportional to $ \omega^2 $ and equation (\ref{KG1bis}) reads: \begin{equation}\label{geom-optik} a^{\bar{\mu}\bar{\nu}}\partial_{\bar{\mu}}S\partial_{\bar{\nu}}S = 0\,\,\, . \end{equation} The last equation resembles the eikonal equation of geometrical optics, which describes the propagation of the wave front $ S(x) $ of light rays. In the HJ approach, it can be derived from a particular Hamiltonian, whose Hamilton equations describe the dynamics of the particle associated with the wave by wave-particle duality. Klein defined the Hamiltonian as follows: \begin{equation} H = \frac{1}{2}a^{\bar{\mu}\bar{\nu}}p_{\bar{\mu}}p_{\bar{\nu}} \qquad\text{where}\qquad p_{\bar{\mu}} = \partial_{\bar{\mu}}S\;\,\, . 
\end{equation} Hence, equation (\ref{geom-optik}) now reads: \begin{equation} H = 0 \; , \end{equation} and parametrizing the five-dimensional particle's world line with an arbitrary parameter $ \hat{\lambda} $, the relativistic Hamilton equations are: \begin{equation} \frac{\partial H}{\partial p_{\bar{\mu}}}=\frac{dx^{\bar{\mu}}}{d\hat\lambda} \qquad ;\qquad -\frac{\partial H}{\partial x^{\bar{\mu}}}=\frac{dp_{\bar{\mu}}}{d\hat\lambda}\; . \end{equation} The analogy between equation (\ref{geom-optik}) and the usual eikonal equation suggests considering null-geodesics for the differential form $ a_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} $, as stated by Klein, where $ a_{\bar{\mu}\bar{\nu}} $ denotes the reciprocal (inverse) of $ a^{\bar{\mu}\bar{\nu}} $. As emphasized in the main text, after a Legendre transformation the Hamiltonian $ H $ is mapped into the following Lagrangian: \begin{equation} L= \frac{1}{2} a_{\bar{\mu}\bar{\nu}}\frac{dx^{\bar{\mu}}}{d\hat\lambda}\frac{dx^{\bar{\nu}}}{d\hat\lambda}\;\, , \end{equation} where the covariant components of the tensor $ a_{\bar{\mu}\bar{\nu}} $ are: \begin{equation}\label{metric-K1} a_{\mu\nu} = g_{\mu\nu}+\frac{e^2}{m^2c^4}A_\mu A_\nu \quad\quad a_{\mu 5}=\frac{e^{2}}{m^2c^4\beta}A_\mu \quad\quad a_{55}= \frac{e^2}{m^2c^4\beta^2}\; \, . \end{equation} Like all the quantities introduced by Klein, the components of $ a_{\bar{\mu}\bar{\nu}} $ also do not depend on the fifth coordinate. As we emphasized in the main text, $ a_{\bar{\mu}\bar{\nu}} $ and $ \gamma_{\bar{\mu}\bar{\nu}} $ are quite different, cf. equations (\ref{metric-K1}) and equations (\ref{dsigma2}). As we said, it seems that Klein introduced a new metric for the microscopic world, $ a_{\bar{\mu}\bar{\nu}} $: indeed, the null character of the paths refers to the tensor $ a_{\bar{\mu}\bar{\nu}} $ instead of $ \gamma_{\bar{\mu}\bar{\nu}} $. 
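For completeness, the Legendre transformation can be spelled out explicitly. The first Hamilton equation gives $ \displaystyle \frac{dx^{\bar{\mu}}}{d\hat\lambda} = a^{\bar{\mu}\bar{\nu}}p_{\bar{\nu}} $, hence
\[
L = p_{\bar{\mu}}\frac{dx^{\bar{\mu}}}{d\hat\lambda} - H = a^{\bar{\mu}\bar{\nu}}p_{\bar{\mu}}p_{\bar{\nu}} - \frac{1}{2}a^{\bar{\mu}\bar{\nu}}p_{\bar{\mu}}p_{\bar{\nu}} = \frac{1}{2}a^{\bar{\mu}\bar{\nu}}p_{\bar{\mu}}p_{\bar{\nu}} = \frac{1}{2} a_{\bar{\mu}\bar{\nu}}\frac{dx^{\bar{\mu}}}{d\hat\lambda}\frac{dx^{\bar{\nu}}}{d\hat\lambda}\; .
\]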
If, following Klein, we define $ \mu=a_{55} $, then $ a_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} = \mu d\theta^2 +ds^2 $. The tangent vector along the null-path, $\displaystyle V^{\mu} = \frac{dx^\mu}{d\hat{\lambda}} $, satisfies the condition $ \mu \left( \frac{d\theta}{d\hat{\lambda}}\right) ^2 +\left( \frac{ds}{d\hat{\lambda}}\right) ^2= 0 $. The Hamilton equations are equivalent to the Euler-Lagrange equations: \begin{equation}\label{eom-app1} \frac{d}{d\hat\lambda} \frac{\partial L}{\partial\left( dx^{\bar{\mu}} / d\hat\lambda\right) }- \frac{\partial L}{\partial x^{\bar{\mu}}}= 0\; . \end{equation} We now skip some technical details, because a similar derivation is proposed in appendix \ref{app2}, discussing de Broglie's approach. The equation for the fifth component is a conservation law, because the tensor $ a_{\bar{\mu}\bar{\nu}} $ does not depend on the fifth coordinate $ x^5 $. The conserved momentum $ p_5 $ reads $\displaystyle p_5 = \frac{\partial L}{\partial\left( dx^5 / d\hat\lambda\right) } = \mu\frac{d\theta}{d\hat{\lambda}} $. This conservation law can be used to reduce equations (\ref{eom-app1}), with $ \bar{\mu} = 0,1,2,3 $, to: \begin{equation}\label{Lorentz-Klein} mc\left( \frac{d}{d\hat{\lambda}}\left( g_{\mu\nu}V^{\nu}\right) -\frac{1}{2}\partial_\mu g_{\rho\nu}V^\rho V^\nu\right) =-\frac{e}{c}\left( \partial_\mu A_\nu-\partial_{\nu}A_{\mu}\right) V^\nu\, . \end{equation} Klein then introduced the particle's proper time $ \tau $ as follows. The constancy of $ p_5 $ and the condition for the null-like character of the paths imply that the ratio $ \displaystyle \frac{d\tau}{d\hat{\lambda}} $ is constant along the path. Hence, in the projected four-dimensional equation (\ref{Lorentz-Klein}), the arbitrary parameter can be substituted with the proper time, even though we started from null-geodesics\footnote{It is worth remembering that the proper time cannot be defined for null-geodesics.}. 
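This constancy can be verified directly: with the four-dimensional proper time defined by $ d\tau^2 = -ds^2 $, the null condition and the conservation of $ p_5 $ give
\[
\left( \frac{d\tau}{d\hat{\lambda}}\right)^2 = -\left( \frac{ds}{d\hat{\lambda}}\right)^2 = \mu\left( \frac{d\theta}{d\hat{\lambda}}\right)^2 = \frac{p_5^2}{\mu}\; ,
\]
which is indeed constant along the path, because $ p_5 $ is conserved and $ \mu = a_{55} $ is a constant.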
After some manipulation it can be shown that equation (\ref{Lorentz-Klein}) is equivalent to the Lorentz equation for a charged massive particle of mass $ m $ and charge $ -e $ in a combined electromagnetic and gravitational field (see appendix \ref{app2}): \begin{equation}\label{Lorentz} mc\left( \frac{du^{\lambda}}{d\tau} +\Gamma^\lambda_{\rho\nu} u^\rho u^\nu\right) =-\frac{e}{c}{F^\lambda}_{\nu} u^\nu\, , \end{equation} where now $\displaystyle u^{\mu} = \frac{dx^\mu}{d\tau} $ is the particle's four-velocity. We stress again the role of the tensor $ a_{\bar{\mu}\bar{\nu}} $. The mass of the particle is hidden in its definition, equation (\ref{metric-K1}). Therefore, the five-dimensional null-geodesics for the differential form $ a_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} $ are connected with four-dimensional geodesics of charged massive particles. \subsection{{On the inconsistency of a time-like compactified dimension}}\label{app1} One of the most important assertions we made in the text is that, unlike Klein, de Broglie considered a time-like fifth dimension. In order to understand the consequences of this choice we start again with the five-dimensional line element $ d\sigma^2 = \gamma_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} $. Using Klein's notation, which we rewrite here for convenience, we define $ \frac{\gamma_{5\mu}}{\alpha}=\beta A_\mu $ and the components of the five-dimensional metric are: \begin{equation}\label{metrica} \gamma_{\mu\nu} = g_{\mu\nu}+\alpha\beta^2A_\mu A_\nu\; , \qquad \gamma_{55} = \alpha\; ,\qquad \gamma_{5\mu} = \alpha\beta A_\mu\; . \end{equation} This metric has the following signature: $ \left( -\, ;+\, ;+\, ;+\, ;\epsilon\right) $, where $ \epsilon = + $ if $ \alpha > 0 $, i.e. if the fifth coordinate describes a space-like dimension, and $ \epsilon = - $ if $ \alpha < 0 $, i.e. in the case of a time-like coordinate. 
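Expanding (\ref{metrica}), one verifies directly that these components reproduce the line element in the form $ d\sigma^2 = \alpha d\theta^2 + ds^2 $:
\[
\gamma_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} = \alpha\left( dx^5\right)^2 + 2\alpha\beta A_\mu dx^5dx^\mu + \left( g_{\mu\nu}+\alpha\beta^2A_\mu A_\nu\right) dx^\mu dx^\nu = \alpha\left( dx^5+\beta A_\mu dx^\mu\right)^2 + g_{\mu\nu}dx^\mu dx^\nu\; .
\]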
We remember that the line element can be rewritten as $ d\sigma^2 = \alpha d\theta^2 +ds^2 $, where $ d\theta = dx^5 + \beta A_\mu dx^\mu $ and $ ds^2 = g_{\mu\nu}dx^\mu dx^\nu $. The components of the inverse metric are: \begin{equation}\label{metrica-inversa} \gamma^{\mu\nu} = g^{\mu\nu}\; , \qquad \gamma^{55} = \frac{1}{\alpha}+\beta^{2}A_\mu A^\mu\; ,\qquad \gamma^{5\mu} = -\beta A^\mu\; . \end{equation} Using the Ansatz that the metric does not depend on the fifth coordinate, we have calculated the components of the five-dimensional Ricci tensor, defined by \begin{equation}\label{def-R-tilde} \tilde{R}_{\bar{\mu}\bar{\nu}}= \partial_{\bar{\lambda}}\tilde{\Gamma}_{\bar{\mu}\bar{\nu}}^{\bar{\lambda}} - \partial_{\bar{\nu}}\tilde{\Gamma}_{\bar{\mu}\bar{\lambda}}^{\bar{\lambda}} + \tilde{\Gamma}_{\bar{\sigma}\bar{\lambda}}^{\bar{\lambda}}\tilde{\Gamma}_{\bar{\mu}\bar{\nu}}^{\bar{\sigma}} - \tilde{\Gamma}_{\bar{\sigma}\bar{\nu}}^{\bar{\lambda}}\tilde{\Gamma}_{\bar{\mu}\bar{\lambda}}^{\bar{\sigma}} \; . \end{equation} We need the following results: \begin{eqnarray} \tilde{R}_{55} &=& \frac{\alpha^{2}\beta^{2}}{4}F_{\mu\nu}F^{\mu\nu}\; ,\label{R1}\\ \tilde{R}_{5\sigma} &=& \frac{\alpha\beta}{2}\nabla_\lambda{F_\sigma}^{\lambda}+\frac{\alpha^{2}\beta^{3}}{4}A_{\sigma}F_{\mu\nu}F^{\mu\nu}\; ,\label{R2}\\ g^{\mu\nu}\tilde{R}_{\mu\nu} &=& R + \frac{\alpha^{2}\beta^{4}}{4}A_\sigma A^\sigma F_{\mu\nu}F^{\mu\nu} -\frac{\alpha\beta^{2}}{2}F_{\mu\nu}F^{\mu\nu} + \alpha\beta^{2}A_\mu\nabla_\lambda F^{\mu\lambda}\; ,\label{R3} \end{eqnarray} that lead to the following relation for the five-dimensional curvature scalar: \begin{eqnarray}\label{curv-scal-5} \tilde{R} &=& \gamma^{\bar{\mu}\bar{\nu}}\tilde{R}_{\bar{\mu}\bar{\nu}} = \gamma^{55}\tilde{R}_{55}+2\gamma^{5\mu}\tilde{R}_{5\mu} + \gamma^{\mu\nu}\tilde{R}_{\mu\nu} \nonumber\\ &=& R - \frac{\alpha\beta^{2}}{4}F_{\mu\nu}F^{\mu\nu}\; . 
\end{eqnarray} Equation (\ref{curv-scal-5}) shows that if the fifth dimension is space-like, $ \alpha > 0 $, we can identify $ \alpha\beta^{2} = 2\kappa $ and the electromagnetic kinetic term has the correct sign. On the contrary, if $ \alpha $ is negative this identification is not possible. This is the inconsistency connected with a compactified time-like dimension. As written in the main text, Klein inferred from this fact the need to introduce a space-like compact dimension. \subsection{{Geodesics in the de Broglie-Rosenfeld approach}}\label{app2} In this section we describe de Broglie's analysis of five-dimensional geodesics in some detail. After having introduced the five-dimensional metric, in the fifth paragraph of his paper de Broglie considered all five-dimensional geodesics, not only null-geodesics as suggested by Klein, with the following motivation: `Admitting the existence of a fifth dimension of the Universe, we could enunciate the following principle: <<\textit{In the five-dimensional universe, the World-line of every point particle is a geodesic}>>' (\cite{deBroglie}; p. 69). Given $ O $ and $ M $, `two fixed points of the World-line' (\cite{deBroglie}; p. 69), five-dimensional geodesics can be seen as world-lines of extremal ``five-dimensional proper time'' $ d\hat{\tau}=\sqrt{-d\sigma^{2}} $: \begin{equation} \delta\int_{O}^{M} d\hat{\tau} =0 \, . 
\end{equation} After introducing an arbitrary parameter $ \hat{\lambda} $, the geodesic equation can be obtained equivalently by the following variational principle: \begin{eqnarray} \frac{1}{2}\delta\int_{O}^{M} \left[ \gamma_{\bar{\mu}\bar{\nu}} \frac{dx^{\bar{\mu}}}{d\hat{\lambda}}\frac{dx^{\bar{\nu}}}{d\hat{\lambda}}\right] \, d\hat{\lambda} =\frac{1}{2}\delta\int_{O}^{M} \left[ \alpha \left( \frac{d\theta}{d\hat\lambda} \right)^2 +g_{\mu\nu}\frac{dx^{\mu}}{d\hat{\lambda}}\frac{dx^{\nu}}{d\hat{\lambda}} \right] d\hat{\lambda} = 0\quad &\text{i.e.}&\nonumber\\ \frac{1}{2}\delta\int_{O}^{M} \left[ \alpha \left( V^5+\beta A_\mu V^\mu\right)^2 +g_{\mu\nu}V^{\mu}V^{\nu} \right] d\hat{\lambda} &=& 0\; , \end{eqnarray} where we used $ d\sigma^2 = \gamma_{\bar{\mu}\bar{\nu}}dx^{\bar{\mu}}dx^{\bar{\nu}} = \alpha d\theta^2 +ds^2 $ and where $ V^5 $ and $ V^\mu $ are the five components of the five-velocity $ \displaystyle V^{\bar\mu}=\frac{dx^{\bar\mu}}{d\hat\lambda} $. Now de Broglie identified the quantity in the square brackets as a Lagrangian $ L(x\, ,\, V) $. Varying the action as a function of $ x^{\bar\mu} $ and $ V^{\bar\mu} $, de Broglie obtained the following Euler-Lagrange equations: \begin{subequations} \begin{eqnarray} \frac{d}{d\hat{\lambda}}\frac{\partial L}{\partial V^5} &=& \frac{\partial L}{\partial x^5}\; , \label{eq-moto1} \\ \frac{d}{d\hat{\lambda}}\frac{\partial L}{\partial V^\mu } &=& \frac{\partial L}{\partial x^{\mu}}\label{eq-moto2}\; . 
\end{eqnarray} \end{subequations} Remembering that there is no dependence on the fifth coordinate, equation (\ref{eq-moto1}) produces a conserved quantity: \begin{equation}\label{p5} \frac{d}{d\hat{\lambda}}\alpha\left( V^5+\beta A_{\mu}V^{\mu}\right) = 0\qquad \text{i.e.} \qquad \pi_5= \alpha \frac{d\theta}{d\hat\lambda} = \text{constant}\; , \end{equation} while equations (\ref{eq-moto2}) read\footnote{Remember that $ A_\mu $ is a function of the four-dimensional coordinates.}: \begin{equation} \frac{d}{d\hat\lambda}\left( \pi_5\beta A_\mu+g_{\mu\nu}V^\nu\right) = \frac{1}{2}\partial_\mu g_{\rho\sigma}V^\rho V^\sigma + \pi_5\beta \partial_\mu A_\nu V^\nu \; , \end{equation} and, rearranging the terms and inserting the expression (\ref{p5}) for $ \pi_5 $, its equivalent form is: \begin{equation}\label{eq-moto3} \frac{d}{d\hat\lambda}\left( g_{\mu\nu}\frac{dx^{\nu}}{d\hat{\lambda}}\right) = \frac{1}{2}\partial_\mu g_{\rho\sigma}\frac{dx^{\rho}}{d\hat{\lambda}} \frac{dx^{\sigma}}{d\hat{\lambda}} + \alpha \frac{d\theta}{d\hat\lambda} \beta F_{\mu\rho}\frac{dx^{\rho}}{d\hat{\lambda}}\, . \end{equation} We can now introduce the proper time $ d\tau = \sqrt{-ds^2} $, because we are considering non-null geodesics. The five-dimensional geodesic equation and the metricity condition imply that the derivative of $ \gamma_{\bar{\mu}\bar{\nu}}V^{\bar{\mu}}V^{\bar{\nu}} $ along the curve vanishes. Hence the ratio $ \frac{d\hat{\lambda}}{d\tau} $ is constant along the geodesic curve and in equation (\ref{eq-moto3}) $ \hat{\lambda} $ can be substituted by $ \tau $. 
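For completeness, the rearrangement leading to (\ref{eq-moto3}) works as follows: since $ \pi_5 $ is constant and $ \frac{dA_\mu}{d\hat\lambda} = \partial_\nu A_\mu V^\nu $ along the curve, the gauge-potential terms combine into the field strength,
\[
\pi_5\beta\,\partial_\mu A_\nu V^\nu - \frac{d}{d\hat\lambda}\left( \pi_5\beta A_\mu\right) = \pi_5\beta\left( \partial_\mu A_\nu - \partial_\nu A_\mu\right) V^\nu = \alpha\frac{d\theta}{d\hat\lambda}\,\beta F_{\mu\nu}V^{\nu}\; .
\]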
If we define the normalized four-dimensional vector $ \displaystyle u^{\mu} = \frac{dx^{\mu}}{d\tau} $ and if we set, following de Broglie, \begin{equation}\label{rel1} \alpha\frac{d\theta}{d\tau}= -\frac{e}{\beta c}\dfrac{1}{mc}\; , \end{equation} equation (\ref{eq-moto3}) reduces to \begin{equation}\label{eq-lorentz2} mc\left( \frac{d}{d\tau}\left( g_{\mu\nu}u^{\nu}\right) -\frac{1}{2}\partial_\mu g_{\rho\nu}u^\rho u^\nu\right) =-\frac{e}{c}F_{\mu\nu} u^\nu\, . \end{equation} As claimed in the main text, the parameter $ \beta $ disappears and remains undetermined. In order to obtain the Lorentz equations we rewrite the first term of the l.h.s. of equation (\ref{eq-lorentz2}) as follows: \begin{equation} \frac{d}{d\tau}\left( g_{\mu\nu}u^{\nu}\right) = u^{\rho}\partial_{\rho} \left( g_{\mu\nu }u^{\nu} \right) = g_{\mu\nu} \frac{du^{\nu}}{d\tau}+ \frac{1}{2}\left( \partial_{\rho} g_{\mu\nu}+\partial_{\nu} g_{\mu\rho} \right) u^{\rho}u^{\nu} \; . \end{equation} Finally, we insert the previous equation in (\ref{eq-lorentz2}) and we contract it with the inverse components of the metric $ g^{\lambda\mu} $ to get the Lorentz equations: \begin{equation}\label{eq-lorentz3} mc\left( \frac{du^{\lambda}}{d\tau} +\Gamma^{\lambda}_{\rho\nu}u^\rho u^\nu\right) =-\frac{e}{c}{F^{\lambda}}_{\nu} u^\nu\, . \end{equation} Unlike Klein, de Broglie's purpose was to show how the five-dimensional Universe approach permits the geometrization of the electromagnetic force. He stressed: `This means that with the geometric meaning we have attributed to the [electromagnetic] potentials and to the ratio $ e/m $, the five-dimensional World-lines of point particles are all geodesics. \textit{The notion of force has been completely banned from Mechanics}.' (\cite{deBroglie}; p. 70). This connection between geodesic lines and equation (\ref{eq-lorentz2}) convinced de Broglie that it was not necessary to consider null-geodesic lines only. 
De Broglie's investigation of five-dimensional geodesic lines continued with the question of what would be the correct particle's action in five dimensions. The author proposed (\cite{deBroglie}; p. 70): \begin{equation}\label{azione-dB} S_5 = -\mathcal{I} \int_O^M d\hat\tau \, , \end{equation} where \begin{equation} \mathcal{I}^{2} = m^{2}c^{2}-\frac{e^{2}}{\alpha\beta^{2}c^{2}}\; , \end{equation} because it reduces to the usual point-particle action in the case of zero charge. In order to understand this fact, following de Broglie, we point out that we can rewrite $ S_5 $ as follows: \begin{equation}\label{action-final1} S_5 = -\mathcal{I} \int_O^M d\hat\tau = \int_O^M\left(\mathcal{I}\alpha\frac{d\theta}{d\hat{\tau}} \right) d\theta -\int_O^M \left( \mathcal{I}\frac{d\tau}{d\hat{\tau}}\right) d\tau \; , \end{equation} where the second equality sign follows by inserting $\displaystyle 1 = \left( \frac{d\hat{\tau}}{d\hat{\tau}}\right) ^2 = \left( \frac{d\tau}{d\hat{\tau}}\right) ^2 - \alpha \left( \frac{d\theta}{d\hat{\tau}}\right) ^2 $, as a formal consequence of $d\hat{\tau}^2 = d\tau^2 - \alpha d\theta^2 $. We remember that the condition $ \partial_5\gamma_{\bar{\mu}\bar{\nu}}=0 $ is equivalent to asserting, in modern language, that space-time admits a Killing vector field, which is tangent to the fifth coordinate. The scalar product between the Killing field and the velocity field is constant along the geodesic. This result implies that the ratio $ \frac{d\theta}{d\hat{\tau}} $ must be constant. Hence, de Broglie chose: \begin{equation}\label{rel2} \mathcal{I}\frac{d\tau}{d\hat{\tau}} = mc \end{equation} and \begin{equation}\label{rel1b} \mathcal{I}\alpha\frac{d\theta}{d\hat{\tau}} = -\frac{e}{c\beta}\; , \end{equation} which are consistent with equation (\ref{rel1}). 
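Squaring (\ref{rel2}) and (\ref{rel1b}) and using again $ d\hat{\tau}^2 = d\tau^2 - \alpha d\theta^2 $, one can check that these two choices are consistent with the definition of $ \mathcal{I} $:
\[
m^{2}c^{2}-\frac{e^{2}}{\alpha\beta^{2}c^{2}} = \mathcal{I}^2\left[ \left( \frac{d\tau}{d\hat{\tau}}\right)^2 - \alpha\left( \frac{d\theta}{d\hat{\tau}}\right)^2\right] = \mathcal{I}^2\; .
\]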
Finally, using $ d\theta = dx^5 + \beta A_\mu dx^\mu $, $ S_5 $ assumes the following form: \begin{equation}\label{actionS5} S_5=-\int \frac{e}{c\beta}dx^5+ \frac{e}{c}\int A_\mu dx^{\mu} -mc\int d\tau \; . \end{equation} It is now evident that $ S_5 $ reduces to $\displaystyle S_4=-mc\int d\tau $ when we set $ e=0 $. Indeed, when $ e=0 $ the scalar product between the Killing field and the velocity field (\ref{rel1b}) (cf. de Broglie's comment on equation (\ref{projection})) is zero, and the geodesic projects onto the four-dimensional space-time. As a consequence, de Broglie convinced himself that the invariant $ \mathcal{I} $ should have been a five-dimensional generalization of the four-dimensional momentum\footnote{We remember that De Broglie's idea emerged comparing $ S_4 $ with $ S_5 $.} $ mc $. At this stage, we are able to explain the form of the invariant $ \mathcal{I} $. Equations (\ref{rel2}) and (\ref{rel1b}) and the identity $ d\hat{\tau}^2 = d\tau^2 - \alpha d\theta^2 $ imply that $ \mathcal{I} $ must have the following form: \begin{equation}\label{formaI} \mathcal{I}^2 = m^2c^2 - \frac{e^{2}}{\alpha\beta^{2}c^{2}} \; . \end{equation} As noted by Klein, de Broglie chose $ -\alpha\beta^{2} = 2\kappa $, because, if the fifth dimension is time-like, $ \alpha $ is negative and the invariant $ \mathcal{I}^{2} $ would be strictly positive. On the other hand, as we have said, this choice is not consistent with the requirement of obtaining the Maxwell Lagrangian in (\ref{curv-scal-5}). As we have said in the main text, Klein asserted that de Broglie's mistake did not affect his conclusions. Klein referred to the following fact. De Broglie proposed the five-dimensional wave equation: \begin{equation}\label{we} \gamma^{\bar{\mu}\bar{\nu}}\nabla_{\bar{\mu}}\partial_{\bar{\nu}}\Psi\,=\,\frac{4\pi^2}{h^2} \mathcal{I}^2\Psi \; . \end{equation} It is worth noting that $ S_5 $ depends linearly on $ x^5 $, as we can see integrating (\ref{actionS5}). 
Hence, using the geometrical-optics Ansatz $ \displaystyle\Psi=Ae^{\frac{i}{\hbar}S_5} $, the periodicity with respect to $ x^5 $ follows immediately. This means that the wave function can be written as: \begin{equation}\label{psi1} \Psi\left( x\right) = \psi\left( x^0\, ,x^1\, , x^2\, , x^3 \right)\cdot \text{exp}\left( \frac{i}{\hbar}\frac{e}{c\beta}x^5\right) \; , \end{equation} where $ \psi $ is the four-dimensional wave function. Using (\ref{psi1}), which implies $ \partial_5\Psi = \frac{i}{\hbar}\frac{e}{c\beta}\Psi $, and the components of the inverse metric (\ref{metrica-inversa}), we note, following Klein (\cite{Klein-risp-deBroglie}; p. 243), that $ \Psi $ satisfies \begin{equation}\label{rel3} \gamma^{55}\partial^2_5\Psi = -\frac{1}{\hbar^{2}}\left( \frac{1}{\alpha}+\beta^{2}A_{\mu}A^{\mu} \right) \frac{e^2}{c^2\beta^2}\Psi\; . \end{equation} This means that (\ref{we}) can be rewritten, in a flat space-time, in the following way (\cite{deBroglie}; p. 73): \begin{equation}\label{ultima1} g^{\mu\nu}\partial_\mu \partial_\nu\psi - \frac{2ie}{\hbar c}A^\mu\partial_\mu\psi - \frac{e^2}{\hbar^2 c^2}A_\mu A^\mu\psi = \frac{m^2c^2}{\hbar^2}\psi\; . \end{equation} We note that (\ref{ultima1}) corresponds to the KG equation for a complex scalar field in an external electromagnetic field, and can be written in the following compact way: \begin{equation} g^{\mu\nu}\left(\partial_{\mu}-\frac{i}{\hbar}\frac{e}{c}A_{\mu} \right)\left(\partial_{\nu}-\frac{i}{\hbar}\frac{e}{c}A_{\nu} \right)\psi = \frac{m^2c^2}{\hbar^2}\psi \; , \end{equation} if the Lorenz gauge, namely $\displaystyle\partial_{\mu} A^{\mu}=0 $, holds. As claimed by Klein, independently of the character of the fifth dimension, the term depending on $ \alpha\beta^{2} $ in the definition (\ref{formaI}) of $ \mathcal{I}^{2} $ disappears, and equation (\ref{ultima1}) reduces to de Broglie's equation (\cite{deBroglie}; p. 73, equation (40)), where the case of vanishing gravitational field is considered. 
\subsection{{On Rosenfeld's approach}} In this section we explain some technical details skipped in the main text. \subsubsection{{Five-dimensional versus four-dimensional momentum}}\label{app3a} Equations (\ref{RosB}) and (\ref{slope}) can be obtained as follows. First we note that if $ S_0 $ is a complete integral of the HJ equation in four dimensions, see equation (\ref{HJ-Ros1}), it follows that\footnote{See \cite{Landau-teo-campi}.} $ \displaystyle g^{\mu\nu}\left( \partial_\nu S_0 + \frac{e}{c}A_\nu \right) = mc \frac{dx^\mu}{d\tau} $. Then, using the inverse components of the metric tensor (\ref{inverse}) and equation (\ref{eq7}), we rewrite the l.h.s. of (\ref{RosB}) as follows: \begin{eqnarray} \gamma^{\mu\bar{\nu}}\partial_{\bar{\nu}}\bar{S} &=& \gamma^{\mu 5}\partial_{5}\bar{S} + \gamma^{\mu\nu}\partial_{\nu}\bar{S} = \left( -\beta A^{\mu}\right) \left( -\frac{e}{c\beta}\right) + g^{\mu\nu}\partial_\nu S_0\nonumber\\ &=& g^{\mu\nu}\left( \partial_\nu S_0 +\frac{e}{c}A_{\nu}\right) = mc \frac{dx^\mu}{d\tau}\, , \end{eqnarray} and we have finally obtained equation (\ref{RosB}). Since Rosenfeld explicitly introduced the quantity $ \sqrt{m^2c^2 - \frac{e^2c^2}{16\pi G}} $, we use for this quantity the symbol $ \mathcal{I}_{Ros} $ for brevity. From equation (\ref{RosA}) we get \begin{equation}\label{RosA1} \gamma^{\mu\bar{\nu}}\partial_{\bar{\nu}}\bar{S}=\mathcal{I}_{Ros}\frac{dx^{\mu}}{d\hat{\tau}} = \mathcal{I}_{Ros}\frac{dx^{\mu}}{d\tau}\frac{d\tau}{d\hat{\tau}} \, , \end{equation} and comparing equation (\ref{RosA1}) with (\ref{RosB}) we get equation (\ref{slope}). \subsubsection{{Modern five-dimensional action}}\label{app3b} In action (\ref{az-glob2}) Rosenfeld chose an unusual coupling between matter and gravity. Rosenfeld's coupling is unusual for the following reason. 
In a modern five-dimensional approach, the action would be: \begin{equation}\label{az-reale} \mathcal{S}_{tot} \left( \gamma\, ,\,\Phi\, ,\,\bar{\Phi}\right) = \int d^5x\sqrt{-\gamma}\left[ -\dfrac{1}{2\kappa_5}\tilde{R}+\tilde{\mathcal{L}}\right] \; , \end{equation} where $ \tilde{\mathcal{L}} $ is the Lagrangian density for a complex scalar field $ \Phi $, which has the expected length dimension $ \left[ \Phi\right] = (length)^{-\frac{3}{2}} $, in natural units $ \hbar = c = 1 $. Using the determinant definition and (\ref{metrica}) it can be proved that\footnote{We define $ \epsilon^{01235} = 1 $.} \begin{equation} \gamma = \epsilon^{\bar{\mu}\bar{\nu}\bar{\rho}\bar{\sigma}\bar{\lambda}}\gamma_{\bar{\mu}0} \gamma_{\bar{\nu}1}\gamma_{\bar{\rho}2}\gamma_{\bar{\sigma}3}\gamma_{\bar{\lambda}5} = \alpha g\; . \end{equation} Using $ \kappa_5 = 2\pi \tilde{l}\kappa$, where $ 2\pi \tilde{l} $ is the ``volume'' of the compact dimension, (\ref{az-reale}) can be rewritten as follows: \begin{equation}\label{az-reale2} \mathcal{S}_{tot} \left( \gamma\, ,\,\Phi\, ,\,\bar{\Phi}\right) = \dfrac{\sqrt{\alpha}}{2\kappa_5}\int d^5x\sqrt{-g}\left[ -\tilde{R}+\kappa \left( 2\pi\tilde{l}\tilde{\mathcal{L}}\right) \right] \; . \end{equation} Now the length $ \tilde{l} $ of the fifth dimension can be absorbed with the following field redefinition: $ \Psi = \sqrt{2\pi\tilde{l}}\,\Phi $. This shows that the equations obtained by varying (\ref{az-reale2}) are equivalent to Rosenfeld's equations of motion, but the new scalar field $ \Psi $ has length dimension $ \left[ \Psi\right] = (length)^{-1} $, like a four-dimensional scalar field. As a consequence, as stated in the main text, the stress-energy tensor defined by Rosenfeld is a four-dimensional object. 
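The dimensional bookkeeping behind this redefinition is elementary: in natural units $ \hbar = c = 1 $,
\[
\left[ \Phi\right] = (length)^{-\frac{3}{2}}\; ,\qquad \left[ \sqrt{2\pi\tilde{l}}\right] = (length)^{\frac{1}{2}}\; ,\qquad \left[ \Psi\right] = \left[ \sqrt{2\pi\tilde{l}}\,\Phi\right] = (length)^{-1}\; .
\]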
\subsubsection{{Einstein-Maxwell equations coupled with complex scalar field}}\label{app3c} The equations obtained by varying (\ref{az-reale2}) with respect to the metric are: \begin{equation}\label{eomA} \tilde{R}_{\bar{\mu}\bar{\nu}}-\frac{1}{2}\gamma_{\bar{\mu}\bar{\nu}}\tilde{R} = \kappa T_{\bar{\mu}\bar{\nu}} \; , \end{equation} and, as written in the main text, they are formally equivalent to the four-dimensional Einstein equations, coupled to the electromagnetic and matter stress-energy tensor, together with the Maxwell equations. In order to understand this fact, we first write the expression for $ \tilde{R}_{\mu\nu} $. After a lengthy calculation, from (\ref{def-R-tilde}) it follows: \begin{equation}\label{R3b} \tilde{R}_{\mu\nu} = R_{\mu\nu} + \frac{\alpha^{2}\beta^{4}}{4}A_\mu A_\nu F_{\sigma\lambda}F^{\sigma\lambda} -\frac{\alpha\beta^{2}}{2}F_{\mu\lambda}{F_\nu}^{\lambda} + \frac{\alpha\beta^{2}}{2}\left( A_\mu\nabla_\lambda {F_\nu}^{\lambda} + A_\nu\nabla_\lambda {F_\mu}^{\lambda}\right) \; . \end{equation} Let us consider the contravariant components of (\ref{eomA}), i.e. \begin{equation}\label{eomB} \gamma^{\bar{\lambda}\bar{\mu}}\gamma^{\bar{\sigma}\bar{\nu}}\tilde{R}_{\bar{\mu}\bar{\nu}}-\frac{1}{2}\gamma^{\bar{\lambda}\bar{\sigma}}\tilde{R} = \kappa\gamma^{\bar{\lambda}\bar{\mu}}\gamma^{\bar{\sigma}\bar{\nu}}T_{\bar{\mu}\bar{\nu}}\; . \end{equation} Using (\ref{metrica}), (\ref{metrica-inversa}), (\ref{R1}), (\ref{R2}) and (\ref{R3b}) the $ \lambda\sigma $ components of the l.h.s. 
of equation (\ref{eomB}) can be rewritten as follows: \begin{eqnarray} \gamma^{\lambda\bar{\mu}}\gamma^{\sigma\bar{\nu}}\tilde{R}_{\bar{\mu}\bar{\nu}}-\frac{1}{2}\gamma^{\lambda\sigma}\tilde{R} &=& \left[ g^{\lambda\mu}g^{\sigma\nu}\tilde{R}_{\mu\nu}+g^{\lambda\mu}\gamma^{\sigma 5}\tilde{R}_{5\mu} + \gamma^{\lambda 5}g^{\sigma \nu}\tilde{R}_{5\nu} + \gamma^{\lambda 5}\gamma^{\sigma 5}\tilde{R}_{55} \right] -\frac{1}{2}g^{\lambda\sigma}\tilde{R}\, , \nonumber\\ &=& R^{\lambda\sigma}-\frac{1}{2}g^{\lambda\sigma}R - \kappa g^{\lambda\mu}g^{\sigma\nu} T^{em}_{\mu\nu}\; . \end{eqnarray} Following Rosenfeld we define \begin{equation} T^{\lambda\sigma}=\gamma^{\lambda\bar{\mu}}\gamma^{\sigma\bar{\nu}}T_{\bar{\mu}\bar{\nu}}\; , \end{equation} and the $ \lambda\sigma $ components of (\ref{eomB}) read (\cite{Ros1}; p. 313): \begin{equation}\label{Einstein} R^{\lambda\sigma}-\frac{1}{2}g^{\lambda\sigma}R = \kappa \left( T_{em}^{\lambda\sigma} + T^{\lambda\sigma} \right) \; , \end{equation} that correspond to the Einstein equations coupled to the electromagnetic and the matter stress-energy tensor. Maxwell's equations emerge in turn as follows. If we contract (\ref{eomA}) with $ \gamma^{\bar{\rho}\bar{\mu}} $, we get: \begin{equation}\label{eomC} \gamma^{\bar{\rho}\bar{\mu}}\tilde{R}_{\bar{\mu}\bar{\nu}}-\frac{1}{2}{\delta^{\bar{\rho}}}_{\bar{\nu}}\tilde{R} = \kappa\gamma^{\bar{\rho}\bar{\mu}} T_{\bar{\mu}\bar{\nu}}\; . \end{equation} The $ \rho 5 $ components of the l.h.s. of equation (\ref{eomC}) now read\footnote{Remember that $ {\delta^{\bar{\rho}}}_{\bar{\nu}}=0 $ when $ \bar{\rho}\neq \bar{\nu} $.}: \begin{eqnarray} \gamma^{\rho\bar{\mu}}\tilde{R}_{\bar{\mu}5} &=& \gamma^{\rho\mu}\tilde{R}_{\mu 5} + \gamma^{\rho 5}\tilde{R}_{5 5}\; ,\nonumber\\ &=& \frac{\alpha\beta}{2}\nabla_\lambda F^{\rho\lambda}\; . 
\end{eqnarray} Remembering that $ \kappa = \frac{\alpha\beta^2}{2} $ and following Rosenfeld, we define \begin{equation} {T^\rho}_5 = \gamma^{\rho\bar{\mu}} T_{\bar{\mu}5}\; , \end{equation} and equation (\ref{eomC}) now reads: \begin{equation}\label{em} \nabla_\lambda F^{\rho\lambda} = \beta {T^\rho}_5 \; . \end{equation} Equations (\ref{em}) correspond to the Maxwell equations coupled to a current density, as written by Rosenfeld (\cite{Ros1}; p. 313). \subsubsection{{Four-dimensional and five-dimensional curvature scalar}}\label{app3d} In the main text, we have written that using (\ref{eomA}) Rosenfeld obtained a particular relation for the curvature scalars $ R $ and $ \tilde{R} $, namely \begin{eqnarray} R &=&-\kappa\left[ \gamma^{\nu\bar{\mu}}T_{\bar{\mu}\nu} -\gamma^{\mu\rho}A_\rho\nabla_\lambda\left( \gamma_{\mu\sigma}F^{\sigma\lambda}\right) \right]\qquad \text{and}\label{formula1}\\ \tilde{R} &=& -\kappa\left[ \gamma^{\nu\bar{\mu}}T_{\bar{\mu}\nu}+\frac{F_{\sigma\lambda}F^{\sigma\lambda}}{2} -\gamma^{\mu\rho}A_\rho\nabla_\lambda\left( \gamma_{\mu\sigma}F^{\sigma\lambda}\right) \right].\label{formula2} \end{eqnarray} In order to obtain these relations, we set $ \bar{\rho}=\bar{\nu}=\nu $ in equation (\ref{eomC}) and sum over the four-dimensional values of $ \nu $, obtaining: \begin{equation}\label{eomD} \gamma^{\nu\bar{\mu}}\tilde{R}_{\bar{\mu}\nu}-2\tilde{R} = \kappa {T^{\nu}}_\nu\; , \end{equation} where we have defined $\displaystyle {T^{\nu}}_\nu = \gamma^{\nu\bar{\mu}} T_{\bar{\mu}\nu} $. Using the definition of $ \tilde{R} $ (equation (\ref{curv-scal-5})), equation (\ref{eomD}) can be rewritten as \begin{equation}\label{eomD1} \tilde{R}-\gamma^{55}\tilde{R}_{55}-\gamma^{5\mu}\tilde{R}_{\mu 5}-2\tilde{R} = \kappa {T^{\nu}}_\nu\; . \end{equation} Inserting (\ref{metrica-inversa}), (\ref{R1}), (\ref{R2}) and (\ref{curv-scal-5}), and isolating $ R $, we obtain equation (\ref{formula1}); using again (\ref{curv-scal-5}) we obtain (\ref{formula2}). 
\subsubsection{{The retarded potentials}}\label{app3e} After having linearised the Einstein equations (\ref{eomA}), Rosenfeld integrated them and obtained the retarded potentials, equation (\ref{sol-eq}). Using modern notation, the retarded potentials read: \begin{equation} h_{\bar{\mu}\nu}(t; \mathbf{x}) = -\frac{\kappa}{2\pi}\int_\Sigma \bar{T}_{\bar{\mu}\nu}\left( t-\frac{| \mathbf{x}- \mathbf{y}|}{c}; \mathbf{y}\right) \frac{d^3y}{| \mathbf{x}- \mathbf{y}|} \quad , \end{equation} where the radial distance is defined by $ r= | \mathbf{x}- \mathbf{y}|$ and the integration is carried out on a three-dimensional hypersurface $ \Sigma $ at the retarded time $\displaystyle t-\frac{| \mathbf{x}- \mathbf{y}|}{c} $. The retarded potentials are functions of $ \mathbf{x} $ and $ t $. \subsubsection{{The isotropic coordinate system and the ``mean distance''}}\label{app3f} In this last appendix we show how Rosenfeld was inspired by his knowledge of Eddington's book on GR. Given a bounded charged matter distribution of radius $ \epsilon $, the RN metric is an exact solution of equations (\ref{Einstein}), with $ T^{\lambda\sigma} $ being the stress-energy tensor associated with the classical spherically symmetric mass distribution. In polar coordinates the line element has the following form: \begin{equation}\label{RN} ds^2 = -\left( 1- \frac{2mG}{c^2r} +\frac{GQ^2}{c^4r^2} \right)c^2dt^2 + \left( 1- \frac{2mG}{c^2r}+\frac{GQ^2}{c^4r^2}\right)^{-1}dr^2 + r^2\left( d\theta^2 + \sin^2\theta\, d\varphi^2\right) \, , \end{equation} where $ m $ and $ Q $ are the mass and the charge of the particle, respectively, and the coordinate $ r $ has the following range: $ \epsilon\leq r<+\infty $. If $ Q=0 $, the line element describes the so-called exterior Schwarzschild metric. Rosenfeld used the less well-known isotropic coordinate system. We do not know whether the author knew the RN metric in isotropic coordinates.
As stated in section \ref{prehist}, we know from Kuhn's interview \cite{Kuhn1} that Rosenfeld studied Eddington's book on GR. In \textit{The Mathematical Theory of Relativity} \cite{Eddington} the British physicist introduced isotropic coordinates for the Schwarzschild metric, using both its exact form and its limit at first order in $ \displaystyle \frac{1}{r} $. It is worth noting that at asymptotically large distances from the source, at first order in $ \displaystyle \frac{1}{r} $, both the Schwarzschild and RN metrics have the same form. This is true both in isotropic and in polar coordinates. In the so-called isotropic Cartesian coordinate system the line element of a spherically symmetric space-time has the following form: \begin{equation}\label{isotropic1} ds^2 = -A\left( r\right) dt^2 + B\left( r\right) \left( dx^2+dy^2+dz^2\right) \, , \end{equation} where $ r=\sqrt{x^2+y^2+z^2} $ is the distance from the origin. Following Eddington, at first order in $ \displaystyle \frac{1}{r} $, for a point-particle continually at rest we have (\cite{Eddington}; p. 101): \begin{equation} A\left( r\right)\approx 1-\frac{2mG}{c^2r}\qquad\text{and}\qquad B\left( r\right) \approx 1+\frac{2mG}{c^2r}\;\, , \end{equation} where the particle need not be at the origin, provided that $ r $ is the distance from the particle to the point considered. The line element now reads: \begin{equation}\label{isotropic2} ds^2 = -\left( 1-\frac{2mG}{c^2r}\right) dt^2 + \left( 1+\frac{2mG}{c^2r}\right) \left( dx^2+dy^2+dz^2\right) \, , \end{equation} showing that at large distances the particle's gravitational field is ``less different'' from the Minkowskian field, as stated by Rosenfeld. Line element (\ref{isotropic2}) and Rosenfeld's line element are different, see e.g. (\ref{metr-cl-a}). Rosenfeld used the ``mean distance'' $ r_0(\vec{x}) $ instead of $ r $: he replaced the distance to the single particle by the mean distance to the cloud.
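Eddington's first-order forms can be checked against the exact Schwarzschild metric in isotropic coordinates. The sketch below uses the standard exact isotropic factors (a textbook result, not taken from Rosenfeld's paper) and verifies numerically that the deviation from the first-order forms is of order $(mG/c^2r)^2$:

```python
mu = 1.0  # shorthand for m*G/c^2, in arbitrary units chosen for the check

def A_exact(r):
    """Exact Schwarzschild time factor in isotropic coordinates."""
    return ((1 - mu / (2 * r)) / (1 + mu / (2 * r))) ** 2

def B_exact(r):
    """Exact Schwarzschild spatial factor in isotropic coordinates."""
    return (1 + mu / (2 * r)) ** 4

# The deviation from Eddington's first-order forms 1 -/+ 2*mu/r is of
# order (mu/r)^2, so it vanishes quadratically at large distances:
for r in (1e2, 1e3, 1e4):
    assert abs(A_exact(r) - (1 - 2 * mu / r)) < 3 * (mu / r) ** 2
    assert abs(B_exact(r) - (1 + 2 * mu / r)) < 3 * (mu / r) ** 2
```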
In order to understand this fact, we recall that, in the semi-classical limit of his quantum metric, the particle is represented by a wave function that is zero outside a volume $ V $. For this reason, following Eddington, we consider the transition to continuous matter. Summing the fields of force of a number of particles, Eddington suggested the following form for the two functions $ A\left( r\right) $ and $ B\left( r\right) $: \begin{equation} A\left( r\right)\approx 1-\frac{2\Omega}{c^2}\qquad\text{and}\qquad B\left( r\right) \approx 1+\frac{2\Omega}{c^2}\;\, , \end{equation} where $ \Omega $ represents the Newtonian potential at the point considered, which in Eddington's notation reads: \begin{equation}\label{particles1} \Omega = \sum \frac{m}{r}\;\, . \end{equation} Let $ \vec{y}_i $, with $ i=1,\dots ,N $, be the position of the $i$-th particle, $ m_i $ its mass, and let $ \vec{x} $ be an arbitrary point of the space-time. Using modern notation, equation (\ref{particles1}) reads: \begin{equation}\label{particles2} \Omega = \sum_{i=1}^N \frac{m_i}{|\vec{y}_i-\vec{x}|}\;\, . \end{equation} For a homogeneous system of mass $ m $ with volume $ V $ the Newtonian potential reads: \begin{equation}\label{homo} \Omega = \frac{m}{V}\int_{V}\frac{dV}{|\vec{y}-\vec{x}|}\;\, , \end{equation} where $ \vec{y} $ is a point of the volume $ V $. The mean value theorem states that: \begin{equation}\label{media} \frac{1}{V}\int_V \frac{dV}{| \mathbf{x}- \mathbf{y}|} = \frac{1}{r_0( \mathbf{x})}\, , \end{equation} where $ r_0(\vec{x}) $ is the mean distance to the cloud.
Equation (\ref{media}) is equivalent to Rosenfeld's condition (\ref{def-r0}), namely $\displaystyle \frac{V}{r_0( \mathbf{x})} =\int_V \frac{dV}{| \mathbf{x}- \mathbf{y}|} $ and the line element to be compared with the semi-classical limit of the quantum metric reads: \begin{equation}\label{isotropic3} ds^2 = -\left( 1-\frac{2mG}{c^2r_0(\vec{x})}\right) dt^2 + \left( 1+\frac{2mG}{c^2r_0(\vec{x})}\right) \left( dx^2+dy^2+dz^2\right) \, . \end{equation}
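As a numerical illustration of the mean-distance relation (\ref{media}), the following sketch estimates $r_0(\mathbf{x})$ for a homogeneous spherical cloud by Monte Carlo integration; by Newton's shell theorem, for an exterior point it should equal the distance to the centre (all numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0                          # radius of the homogeneous cloud
x = np.array([3.0, 0.0, 0.0])    # an exterior field point, |x| = 3

# Uniform samples inside the sphere, by rejection from the bounding cube
pts = rng.uniform(-R, R, size=(400000, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= R]

# Monte Carlo estimate of (1/V) * int_V dV / |x - y| = 1 / r_0(x)
inv_r0 = np.mean(1.0 / np.linalg.norm(pts - x, axis=1))
r0 = 1.0 / inv_r0

# By Newton's shell theorem, the mean distance to a homogeneous sphere
# seen from an exterior point equals the distance to its centre, here 3
assert abs(r0 - 3.0) < 0.02
```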
\section{Introduction} \IEEEPARstart{T}{racking} for autonomous vehicles involves accurately identifying and localizing dynamic objects in the environment surrounding the vehicle. Tracking of surround vehicles is essential for many tasks crucial to truly autonomous driving, such as \textit{obstacle avoidance}, \textit{path planning}, and \textit{intent recognition}. To be useful for such high-level reasoning, the generated tracks should be accurate, long and robust to sensor noise. In this study, we propose a full-surround MOT framework to create such desirable tracks. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=0.6\linewidth]{motivation.pdf} \caption{} \end{subfigure}% \\ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.6\linewidth]{motivation_1.png} \caption{} \end{subfigure}% \caption{\textbf{(a)} Illustration of online MOT for autonomous vehicles. The surrounding vehicles (in red) are tracked in a right-handed coordinate system centered on the ego-vehicle (center). The ego-vehicle has full-surround coverage from vision and range sensors, and must fuse proposals from each of them to generate continuous tracks (dotted lines) in the real world. \\ \textbf{(b)} An example of images captured from a full-surround camera array mounted on our testbed, along with color coded vehicle annotations.} \label{fig:motivation} \end{figure} Traditional MOT techniques for autonomous vehicles can roughly be categorized into 3 groups based on the sensory inputs they use: 1) dense point clouds from range sensors, 2) vision sensors, and 3) a fusion of range and vision sensors. Studies like \cite{choi2013multi, song2015object, asvadi2015detection, asvadi20163d1} make use of dense point clouds created by 3D LiDARs like the Velodyne HDL-64E. Such sensors, although bulky and expensive, are capable of capturing finer details of the surroundings owing to their high vertical resolution.
Trackers can therefore create suitable mid-level representations like 2.5D grids, voxels etc. that retain unique statistics of the volume they enclose, and group such units together to form coherent objects that can be tracked. It must be noted, however, that these approaches are reliant on having dense point representations of the scene, and would not scale well to LiDAR sensors that have far fewer scan layers. On the other hand, studies such as \cite{pfeiffer2010efficient, broggi2013full, vatavu2015stereovision, ovsep2016multi} make use of stereo vision alone to perform tracking. The pipeline usually involves estimating the disparity image and optionally creating a 3D point cloud, followed by similar mid-level representations like stixels, voxels etc. which are then tracked from frame to frame. These sensors are limited by the quality of disparity estimates and the field of view (FoV) of the stereo pair. Unlike 3D LiDAR based systems, they are unable to track objects in full-surround. There are other single camera approaches to surround vehicle behavior analysis \cite{satzoda2017vision, sivaraman2014dynamic}, but they too are limited in their FoVs and localization capabilities. Finally, there are fusion based approaches like \cite{cho2014multi, asvadi20163d, allodi2016machine}, which make use of LiDARs, stereo pairs, monocular cameras, and Radars in a variety of configurations. These techniques either perform \textit{early} or \textit{late} fusion based on their sensor setup and algorithmic needs. However, none of them seem to offer full-surround solutions for vision sensors, and are ultimately limited to fusion only in the FoV of the vision sensors. In this study, we take a different approach to full-surround MOT and try to overcome some of the limitations in previous approaches.
Specifically, we propose a framework to perform full-surround MOT using calibrated camera arrays, with varying degrees of overlapping FoVs, and with an option to include low resolution range sensors for accurate localization of objects in 3D. We term this the M$^{3}$OT framework, which stands for multi-perspective, multi-modal, multi-object tracking framework. To train and validate the M$^{3}$OT framework, we make use of naturalistic driving data collected from our testbed (illustrated in Figure~\ref{fig:motivation}) that has full-surround coverage from vision and range modalities. Since we use vision as our primary perception modality, we leverage recent approaches from the 2D MOT community, which studies tracking of multiple objects in the 2D image plane. Recent progress in 2D MOT has focused on the \textit{tracking-by-detection} strategy, where object detections from a category detector are linked to form trajectories of the targets. To perform tracking-by-detection \textit{online} (i.e. in a causal fashion), the major challenge is to correctly associate noisy object detections in the current video frame with previously tracked objects. The basis for any data association algorithm is a similarity function between object detections and targets. To handle ambiguities in association, it is useful to combine different cues in computing the similarity, and learn an association based on these cues. Many recent 2D MOT methods such as \cite{bae2014robust, kim2012online, kuo2010multi, li2009learning, song2008vision} use some form of learning (online or offline) to accomplish data association. Similar to these studies, we formulate the online multi-object tracking problem using Markov Decision Processes (MDPs) proposed in \cite{xiang2015learning}, where the lifetime of an object is modeled with an MDP (see Figure~\ref{fig:MDP}), and multiple MDPs are assembled for multi-object tracking.
In this method, learning a similarity function for data association is equivalent to learning a policy for the MDP. The policy learning is approached in a reinforcement learning fashion, which benefits from the advantages of both offline and online learning in data association. The M$^{3}$OT framework is also capable of naturally handling the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP, and also benefits from the strengths of online learning approaches in single object tracking (\cite{babenko2011robust, bao2012real, hare2016struck, kalal2012tracking}). Our main contributions in this work can be summarized as follows: 1) We extend and improve the MDP formulation originally proposed for 2D MOT, and modify it to track objects in 3D (real world). 2) We make the M$^{3}$OT framework capable of tracking objects across multiple vision sensors in calibrated camera arrays by carrying out efficient and accurate fusion of object proposals. 3) The M$^{3}$OT framework is made highly \textit{modular}, capable of working with any number of cameras, with varying degrees of overlapping FoVs, and with the option to include range sensors for improved localization and fusion in 3D. The above contributions and the wider scope and applicability of this work in comparison to traditional 2D MOT approaches are highlighted in Figure~\ref{fig:block_diag}. Finally, we carry out experiments using naturalistic driving data collected on highways using full-surround sensory modalities, and validate the accuracy, robustness and modularity of our framework. \begin{table*}[ht!]
\centering \resizebox{0.9\linewidth}{!}{% \begin{threeparttable} \caption{Relevant research in 3D MOT for intelligent vehicles.} \begin{tabular}{| c | c | c | c | c | c | c | c | c | c | p{4cm} |} \hline \multirow{3}{*}{\textbf{Study}} & \multicolumn{5}{c |}{\textbf{Sensors used}} & \multicolumn{3}{c |}{\textbf{Tracker Type}} & \multicolumn{2}{c |}{\textbf{Experimental Analysis}}\\ \cline{2-11} & \thead{Monocular\\camera} & \thead{Stereo\\pair} & \thead{Full-surround\\camera array} & \thead{LiDAR\\ } & \thead{Radar\\ } & \thead{Single object\\tracker} & \thead{Multi-object\\tracker} & \thead{Online\\(causal)} & \thead{Dataset\\ } & \thead{Evaluation metrics}\\ \hline \hline Choi et al.\cite{choi2013multi} & - & - & - & \ding{51} & - & - & \ding{51} & \ding{51} & Proposed & \makecell{Distance and velocity errors}\\ \hline Broggi et al.\cite{broggi2013full} & - & \ding{51} & - & - & - & - & \ding{51} & \ding{51} & Proposed & \makecell{True positives, false positives,\\false negatives}\\ \hline Song et al.\cite{song2015object} & - & - & - & \ding{51} & - & \ding{51} & \ding{55} & \ding{51} & KITTI & \makecell{Position error, intersection ratio}\\ \hline Cho et al.\cite{cho2014multi} & \ding{51} & - & - & \ding{51} & \ding{51} & - & \ding{51} & \ding{51} & Proposed & \makecell{Correctly tracked, falsely tracked,\\true and false positive rates}\\ \hline Asvadi et al.\cite{asvadi2015detection} & - & - & - & \ding{51} & - & - & \ding{51} & \ding{51} & KITTI & - \\ \hline Vatavu et al.\cite{vatavu2015stereovision} & - & \ding{51} & - & - & - & - & \ding{51} & \ding{51} & Proposed & - \\ \hline Asvadi et al.\cite{asvadi20163d1} & - & - & - & \ding{51} & - & - & \ding{51} & \ding{51} & KITTI & \makecell{Number of missed and false\\obstacles}\\ \hline Asvadi et al.\cite{asvadi20163d} & \ding{51} & - & - & \ding{51} & - & \ding{51} & \ding{55} & \ding{51} & KITTI & \makecell{Average center location errors in\\2D and 3D, orientation errors}\\ \hline O{\v{s}}ep et 
al.\cite{ovsep2016multi} & - & \ding{51} & - & - & - & - & \ding{51} & \ding{51} & KITTI & \makecell{Class accuracy, GOP recall,\\tracking recall}\\ \hline Allodi et al.\cite{allodi2016machine} & - & \ding{51} & - & \ding{51} & - & - & \ding{51} & \ding{51} & KITTI & \makecell{MOTP, IDS, Frag, average\\localization error}\\ \hline Dueholm et al.\cite{dueholm2016trajectories} & - & - & \ding{51} & - & - & - & \ding{51} & \ding{55} & \makecell{VIVA Surround} & \makecell{MOTA, MOTP, IDS, Frag,\\Precision, Recall}\\ \hline \textbf{\makecell{This work\tnote{1}\\(M$^{3}$OT)}} & - & - & \ding{51} & \ding{51} & - & - & \ding{51} & \ding{51} & Proposed & \makecell{MOTA, MOTP, MT, ML, IDS for a\\variety of sensor configurations}\\ \hline \end{tabular} \begin{tablenotes} \item[1] This framework can work with or without LiDAR sensors, and with any subset of camera sensors. \end{tablenotes} \label{tab:related_work} \end{threeparttable} } \end{table*} \section{Related Research}\label{related_work} We highlight some representative works in 2D and 3D MOT below. We also summarize some key aspects of related 3D MOT studies in Table~\ref{tab:related_work}. For a recent survey on detection and tracking for intelligent vehicles, we refer the reader to \cite{sivaraman2013looking}. \textbf{2D MOT:} Recent research in MOT has focused on tracking-by-detection, where the main challenge is data association for linking object detections to targets.
The majority of batch (\textit{offline}) methods (\cite{berclaz2011multiple, butt2013multi, li2009learning, milan2014continuous, niebles2010efficient, pirsiavash2011globally, zhang2008global}) formulate MOT as a global optimization problem in a graph-based representation, while \textit{online} methods solve the data association problem either probabilistically \cite{khan2005mcmc, oh2009markov, okuma2004boosted} or deterministically (e.g., Hungarian algorithm \cite{munkres1957algorithms} in \cite{bae2014robust, kim2012online} or greedy association \cite{breitenstein2011online}). A core component in any data association algorithm is a similarity function between objects. Both batch methods \cite{kuo2010multi, li2009learning} and online methods \cite{bae2014robust, kim2012online, song2008vision} have explored the idea of learning to track, where the goal is to learn a similarity function for data association from training data. In this work, we extend and improve the MDP framework for 2D MOT proposed in \cite{xiang2015learning}, which is an online method that uses reinforcement learning to learn associations. \textbf{3D MOT for autonomous vehicles:} In \cite{pfeiffer2010efficient}, the authors use a stereo rig to first calculate the disparity using Semi-Global Matching (SGM), followed by height based segmentation and free-space calculation to create a mid-level representation using \textit{stixels} that encode the height within a cell. Each stixel is then represented by a 6D state vector, which is tracked using the Extended Kalman Filter (EKF). In \cite{broggi2013full}, the authors use a \textit{voxel} based representation instead, and cluster neighboring voxels based on color to create objects that are then tracked using a greedy association model. The authors in \cite{vatavu2015stereovision} use a grid based representation of the scene, where cells are grouped to create objects, each of which is represented by a set of control points on the object surface.
This creates a high dimensional state-space representation, which is accounted for by a Rao-Blackwellized particle filter. More recently, the authors of \cite{ovsep2016multi} propose to carry out semantic segmentation on the disparity image, which is then used to generate generic object proposals by creating a scale-space representation of the density, followed by multi-scale clustering. The proposed clusters are then tracked using a Quadratic Pseudo-Boolean Optimization (QPBO) framework. The work in \cite{dueholm2016trajectories} uses a camera setup similar to ours, but the authors propose an offline framework for tracking, hence limiting its use to surveillance-related applications. Alternatively, there are approaches that make use of dense point clouds generated by LiDARs as opposed to creating point clouds from a disparity image. In \cite{choi2013multi}, the authors first carry out ground classification based on the variance in the radius of each scan layer, followed by a 2.5D occupancy grid representation of the scene. The grid is then segmented, and regions of interest (RoIs) identified within, each of which is tracked by a standard Kalman Filter (KF). Data association is achieved by simple global nearest neighbor. Similar to this, the authors in \cite{asvadi2015detection} use a 2.5D occupancy grid based representation, but augment this with an occupancy grid \textit{map} which accumulates past grids to create a coherent global map of occupancy by accounting for ego-motion. Using these two representations, a 2.5D \textit{motion} grid is created by comparing the map with the latest occupancy grid, which isolates and identifies dynamic objects in the scene. Although the work in \cite{asvadi20163d1} follows the same general idea, the authors propose a piece-wise ground plane estimation scheme capable of handling non-planar surfaces.
In a departure from grid based methods, the authors in \cite{song2015object} project the 3D point cloud onto a virtual image plane, creating an object appearance model based on 4 image-based cues for each template of the desired target. A particle filtering framework is implemented, where the particle with least reconstruction error with respect to the stored template is chosen to update the tracker. Background filtering and occlusion detection are implemented to improve performance. Finally, we list recent methods that rely on fusion of different sensor modalities to function. In \cite{cho2014multi}, the authors propose an EKF based fusion scheme, where measurements from each modality are fed sequentially. Vision is used to classify the object category, which is then used to choose appropriate motion and observation models for the object. Once again, observations are associated based on a global nearest neighbor policy. This is somewhat similar to the work in \cite{asvadi20163d}, where given an initial 3D detection box, the authors propose to project 3D points within the box to the image plane and calculate the convex hull of projected points. In this case, a KF is used to perform both fusion and tracking, where fusion is carried out by projecting the 2D hull to a sparse 3D cloud, and using both 3D cues to perform the update. In contrast, the authors of \cite{allodi2016machine} propose using the Hungarian algorithm (for bipartite matching) for both data association and fusion of object proposals from different sensors. The scores for association are obtained from Adaboost classifiers trained on high-level features. The objects are then tracked using an Unscented Kalman Filter (UKF). \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.9\linewidth]{block_diag_1.pdf} \end{center} \caption{Illustration of proposed M$^3$OT framework and its scope in comparison to traditional 2D MOT algorithms. 
Data is associated not only within video frames, but also across other videos and sensors. The algorithm produces tracks in each individual sensor coordinate frame, and in the desired global coordinate frame using cross-modal fusion in an online manner.} \label{fig:block_diag} \end{figure*} \section{Fusion of Object Proposals}\label{fusion} In this study, we make use of full-surround camera arrays comprising sensors with varying FoVs. The M$^{3}$OT framework, however, is capable of working with any type and number of cameras, as long as they are calibrated. In addition to this, we also propose a variant of the framework for cases where LiDAR point clouds are available. To effectively utilize all available sensors, we propose an \textit{early fusion} of object proposals obtained from each of them. At the very start of each time step during tracking, we identify and fuse all proposals belonging to the same object. These proposals are then utilized by the M$^{3}$OT framework to carry out tracking. It must be noted that this usage of ``early fusion'' is in contrast to the traditional usage of the term to refer to fusion of raw sensor data to provide a merged representation. \paragraph{Projection \& Back-projection} It is essential to have a way of associating measurements from different sensors to track objects across different camera views, and to carry out efficient fusion across sensor modalities. This is achieved by defining a set of \textit{projection} mappings, one from each sensor's unique coordinate system to the global coordinate system, and a set of \textit{back-projection} mappings that take measurements in the global coordinate system to individual coordinate systems. In our case, the global coordinate system is centered at the mid-point of the rear axle of the ego-vehicle. The axes form a right-handed coordinate system as shown in Figure~\ref{fig:motivation}. The LiDAR sensors output a 3D point cloud in a common coordinate system at every instant.
This coordinate frame may either be centered about a single LiDAR sensor, or elsewhere depending on the configuration. In this case, the projection and back-projection mappings are simple 3D coordinate transformations: \begin{equation} P_{range \rightarrow G}(\mathbf{x}^{range}) = \boldsymbol{R}_{range} \cdot \mathbf{x}^{range} + \mathbf{t}_{range}, \end{equation} and, \begin{equation} P_{range \leftarrow G}(\mathbf{x}^{G}) = \boldsymbol{R}_{range}^T \cdot \mathbf{x}^{G} -\boldsymbol{R}_{range}^T \mathbf{t}_{range}, \end{equation} where $P_{range \rightarrow G}(\cdot)$ and $P_{range \leftarrow G}(\cdot)$ are the projection and back-projection mappings from the LiDAR (range) coordinate system to the global (G) coordinate system and vice-versa. The vectors $\mathbf{x}^{range}$ and $\mathbf{x}^{G}$ are the corresponding coordinates in the LiDAR and global coordinate frames. The $3 \times 3$ orthonormal rotation matrix $\boldsymbol{R}_{range}$ and translation vector $\mathbf{t}_{range}$ are obtained through calibration. Similarly, the back-projection mappings for each camera $k \in \{1, \cdots, K\}$ can be defined as: \begin{align} P_{cam_k \leftarrow G}(\mathbf{x}^{G}) = (u, v)^T,\\ \text{s.t }\begin{bmatrix}u \\v \\1 \end{bmatrix} = \boldsymbol{C}_k \cdot [\boldsymbol{R}_k | \mathbf{t}_k] \cdot [(\mathbf{x}^G)^T | 1]^T, \end{align} where the set of camera calibration matrices $\{\boldsymbol{C}_k\}_{k=1}^K$ are obtained after the intrinsic calibration of cameras, the set of tuples $\{(\boldsymbol{R}_k, \mathbf{t}_k)\}_{k=1}^K$ obtained after extrinsic calibration, and $(u, v)^T$ denotes the pixel coordinates after back-projection. Unlike the back-projection mappings, the projection mappings for camera sensors are not well defined. In fact, the mappings are one-to-many due to the depth ambiguity of single camera images. To find a good estimate of the projection, we use two different approaches.
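The LiDAR-frame mappings above are plain rigid-body transforms. A minimal sketch, with illustrative (not calibrated) extrinsics, shows the projection and back-projection composing to the identity:

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z-axis; stands in for a calibrated R_range."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical extrinsics: LiDAR mounted 1.2 m ahead of and 1.5 m above
# the rear-axle origin, yawed by 5 degrees (illustrative values only)
R_range = rotation_z(np.deg2rad(5.0))
t_range = np.array([1.2, 0.0, 1.5])

def project_to_global(x_range):
    """P_{range -> G}(x) = R_range @ x + t_range."""
    return R_range @ x_range + t_range

def backproject_to_range(x_global):
    """P_{range <- G}(x) = R_range^T @ x - R_range^T @ t_range."""
    return R_range.T @ x_global - R_range.T @ t_range

x = np.array([4.0, -2.0, 0.3])      # a point in the LiDAR frame
assert np.allclose(backproject_to_range(project_to_global(x)), x)
```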
In case of a vision-only system, we use the \textit{inverse perspective mapping} (IPM) approach: \begin{align} P_{cam_k \rightarrow G}(\mathbf{x}^{k}) = (x, y)^T,\\ \text{s.t. }\begin{bmatrix}x \\y \\1 \end{bmatrix} = \boldsymbol{H}_k \cdot [(\mathbf{x}^k)^T | 1]^T, \end{align} where $\{\boldsymbol{H}_k\}_{k=1}^K$ are the set of homographies obtained after IPM calibration. Since we are only concerned with lateral and longitudinal displacements of vehicles in the global coordinate system, we only require the $(x, y)^T$ coordinates, and set the altitude coordinate to a fixed number. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\linewidth]{ipm_projection.pdf} \caption{} \label{fig:ipm_projection} \end{subfigure}% \\ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\linewidth]{lidar_projection.pdf} \caption{} \label{fig:lidar_projection} \end{subfigure}% \caption{Projection of object proposals using: (a) \textbf{IPM:} The bottom center of the bounding boxes are projected into the global coordinate frame (right), (b) \textbf{LiDAR point clouds:} LiDAR points that fall within a detection window are flattened and lines are fit to identify the vehicle center (right).} \end{figure} \paragraph{Sensor Measurements \& Object Proposals} As we adopt a tracking-by-detection approach, each sensor is used to produce object proposals to track and associate. In case of vision sensors, a vehicle detector is run on each individual camera's image to obtain multiple detections $d$, each of which is defined by a bounding box in the corresponding image. Let $(u, v)$ denote the top left corner and $(w, h)$ denote the width and height of a detection $d$ respectively. In case of a vision-only system, the corresponding location of $d$ in the global coordinate system is obtained using the mapping $P_{cam_k \rightarrow G}((u+\frac{w}{2}, v+h)^T)$, where $k$ denotes the camera from which the proposal was generated. 
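The ground-plane projection just described can be sketched as follows; the homography entries are purely hypothetical stand-ins for a calibrated $\boldsymbol{H}_k$:

```python
import numpy as np

# A hypothetical IPM homography H_k mapping pixels (u, v, 1) to ground-
# plane coordinates (x, y, 1); a real H_k comes from IPM calibration.
H_k = np.array([[0.02,  0.0,  -12.8],
                [0.0,  -0.05,  36.0],
                [0.0,   0.001,  1.0]])

def ipm_project(u, v):
    """P_{cam_k -> G}: map a pixel to (x, y) on the ground plane."""
    p = H_k @ np.array([u, v, 1.0])
    return p[:2] / p[2]                      # perspective divide

# Project the bottom centre (u + w/2, v + h) of a detection box, i.e.
# the point of the vehicle assumed to touch the ground
u, v, w, h = 600.0, 300.0, 80.0, 60.0
xy = ipm_project(u + w / 2.0, v + h)
```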
This procedure is illustrated in Figure~\ref{fig:ipm_projection}. In cases where LiDAR sensors are available, an alternative is considered (shown in Figure~\ref{fig:lidar_projection}). First, the back-projected LiDAR points that fall within a detection box $d$ are identified using a look-up table with pixel coordinates as the key, and the corresponding global coordinates as the value. These points are then flattened by ignoring the altitude component of their global coordinates. Next, a line $\mathbf{l}_1$ is fitted to these points using RANSAC with a small inlier ratio (0.3). This line aligns with the dominant side of the detected vehicle. The other side of the vehicle corresponding to line $\mathbf{l}_2$ is then identified by removing all inliers from the previous step and repeating a RANSAC line fit with a slightly higher inlier ratio (0.4). Finally, the intersection of lines $\mathbf{l}_1$ and $\mathbf{l}_2$ along with the vehicle dimensions yield the global coordinates of the vehicle center. The vehicle dimensions are calculated based on the extent of the LiDAR points along a given side of the vehicle, and stored for each track separately for later use. Depending on the type of LiDAR sensors used, object proposals along with their dimensions in the real world can be obtained. However, we decide not to make use of LiDAR proposals, but rather use vision-only proposals with high recall by trading off some of the precision. This was seen to provide sufficient proposals to track all surrounding vehicles, at the expense of more false positives which the tracker is capable of handling. \paragraph{Early Fusion of Proposals} Since we operate with camera arrays with overlapping FoVs, the same vehicle may be detected in two adjacent views. It is important to identify and \textit{fuse} such proposals to track objects across camera views. Once again, we propose two different approaches to carry out this fusion. 
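Returning to the LiDAR-based localization step above, the two-stage RANSAC line fit and corner intersection can be sketched as follows; this is a simplified re-implementation on synthetic points, not the authors' code, and it uses a fixed distance tolerance in place of the inlier ratios quoted in the text:

```python
import numpy as np

def ransac_line(pts, n_iters=200, tol=0.05, seed=0):
    """Fit a 2D line a*x + b*y + c = 0 (with a^2 + b^2 = 1) by RANSAC.
    Returns the line parameters and a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_line, best_mask = None, np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        n = np.array([-d[1], d[0]])          # normal of the candidate line
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        c = -n @ pts[i]
        mask = np.abs(pts @ n + c) < tol     # perpendicular distances
        if mask.sum() > best_mask.sum():
            best_line, best_mask = (n[0], n[1], c), mask
    return best_line, best_mask

def line_intersection(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    return np.linalg.solve([[a1, b1], [a2, b2]], [-c1, -c2])

# Synthetic L-shaped point set mimicking two visible sides of a vehicle
rng = np.random.default_rng(1)
side1 = np.stack([np.linspace(0, 4, 40), np.zeros(40)], axis=1)    # dominant side
side2 = np.stack([np.zeros(15), np.linspace(0, 1.8, 15)], axis=1)  # other side
pts = np.vstack([side1, side2]) + rng.normal(0, 0.01, (55, 2))

l1, inliers = ransac_line(pts)          # line along the dominant side
l2, _ = ransac_line(pts[~inliers])      # refit on the remaining points
corner = line_intersection(l1, l2)      # estimated vehicle corner, near (0, 0)
```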
For vision-only systems, the fusion of proposals is carried out in 4 steps: i) Project proposals from all cameras to the global coordinate system using proposed mappings, ii) Sort all proposals in descending order based on their confidence scores (obtained from the vehicle detector), iii) Starting with the highest scoring proposal, find the subset of proposals whose Euclidean distance in the global coordinate system falls within a predefined threshold. These proposals are considered to belong to the same object and removed from the original set of proposals. In practice, we use a threshold of $1m$ for grouping proposals. iv) The projection of each proposal within this subset is set to the mean of projections of all proposals within the subset. This process is repeated for the remaining proposals until no proposals remain. Alternatively, for a system consisting of LiDAR sensors, we project the 3D point cloud onto each individual camera image. Next, for each pair of proposals, we make a decision as to whether or not they belong to the same object. This is done by considering the back-projected LiDAR points that fall within the bounding box of each proposal (see Figure~\ref{fig:fusion_lidar}). Let $\mathcal{P}_1$ and $\mathcal{P}_2$ denote the index sets of LiDAR points falling within each bounding box. Then, two proposals are said to belong to the same object if: \begin{equation} \text{max}\bigg(\frac{|\mathcal{P}_1 \cap \mathcal{P}_2|}{|\mathcal{P}_1|}, \frac{|\mathcal{P}_1 \cap \mathcal{P}_2|}{|\mathcal{P}_2|}\bigg) \geq 0.8, \end{equation} where $|\mathcal{P}_1|$ and $|\mathcal{P}_2|$ denote the cardinalities of sets $\mathcal{P}_1$ and $\mathcal{P}_2$ respectively. It should be noted that after fusion is completed, the union of LiDAR point sets that are back-projected into fused proposals can be used to obtain better projections.
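Steps i)-iv) of the vision-only fusion can be sketched as a greedy grouping; the coordinates and confidence values below are illustrative:

```python
import numpy as np

def fuse_proposals(projections, scores, thresh=1.0):
    """Greedy grouping of projected proposals, following steps i)-iv):
    proposals within `thresh` metres of the highest-scoring remaining
    proposal are grouped and assigned their mean projection."""
    remaining = list(np.argsort(scores)[::-1])        # ii) sort by confidence
    groups, fused = [], []
    while remaining:
        seed = remaining[0]                           # iii) highest score left
        dists = np.linalg.norm(projections[remaining] - projections[seed],
                               axis=1)
        group = [r for r, d in zip(remaining, dists) if d <= thresh]
        remaining = [r for r in remaining if r not in group]
        groups.append(group)
        fused.append(projections[group].mean(axis=0))  # iv) mean projection
    return groups, np.array(fused)

# Two detections of one vehicle seen from adjacent cameras, plus another car
proj = np.array([[10.0, 2.0], [10.3, 2.2], [25.0, -3.0]])
conf = np.array([0.9, 0.7, 0.8])
groups, fused = fuse_proposals(proj, conf)             # groups: [[0, 1], [2]]
```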
\begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{fusion_lidar.pdf} \caption{Fusion of object proposals using LiDAR point clouds: Points common to both detections are drawn in green, and the rest are drawn red.} \label{fig:fusion_lidar} \end{figure} \section{M$^{3}$OT Framework}\label{framework} \begin{figure}[t] \begin{center} \includegraphics[width=0.75\linewidth]{MDP.pdf} \end{center} \caption{The Markov Decision Process (MDP) framework proposed in \cite{xiang2015learning}. In this work, we retain the structure of the MDP, and modify the actions, rewards and inputs to enable multi-sensory tracking.} \label{fig:MDP} \end{figure} Once we have a set of fused object proposals, we feed them into the MDP as illustrated in Figure~\ref{fig:MDP}. Although the MDP framework introduced in~\cite{xiang2015learning} forms a crucial building block of the proposed M$^3$OT framework, we believe that extending this to account for multiple sensors and modalities is a non-trivial endeavor (see Figure~\ref{fig:block_diag}). This section highlights and explains the modifications we propose to achieve these objectives. \subsection{Markov Decision Process} \label{MDP} As detailed in \cite{xiang2015learning}, we model the lifetime of a target with a Markov Decision Process (MDP). The MDP consists of the tuple $(\mathcal{S}, \mathcal{A}, T(\cdot, \cdot), R(\cdot, \cdot))$, where: \begin{itemize} \item States $s \in \mathcal{S}$ encode the status of the target. \item Actions $a \in \mathcal{A}$ define the actions that can be taken. \item The state transition function $T: \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$ dictates how the target transitions from one state to another, given an action. \item The real-valued reward function $R: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ assigns the immediate reward received after executing action $a$ in state $s$. 
\end{itemize} In this study, we retain the states $\mathcal{S}$, actions $\mathcal{A}$ and the state transition function $T(\cdot, \cdot)$ from the MDP framework for 2D MOT \cite{xiang2015learning}, while changing only the reward function $R(\cdot, \cdot)$. \textbf{States:} The state space is partitioned into four subspaces, i.e., $\mathcal{S} = \mathcal{S}_{Active} \cup \mathcal{S}_{Tracked} \cup \mathcal{S}_{Lost} \cup \mathcal{S}_{Inactive}$, where each subspace contains an infinite number of states which encode information about the target depending on the feature representation, such as appearance, location, size and history of the target. Figure~\ref{fig:MDP} illustrates the transitions between the four subspaces. \textit{Active} is the initial state for any target. Whenever an object is detected by the object detector, it enters an \textit{Active} state. An active target can transition to \textit{Tracked} or \textit{Inactive}. Ideally, a true positive from the object detector should transition to a \textit{Tracked} state, while a false alarm should enter an \textit{Inactive} state. A tracked target can stay tracked, or transition to a \textit{Lost} state if the target is no longer visible, e.g., due to occlusion or disappearance from the sensor range. Likewise, a lost target can stay \textit{Lost}, or go back to a \textit{Tracked} state if it appears again, or transition to an \textit{Inactive} state if it has been lost for a sufficiently long time. Finally, \textit{Inactive} is the terminal state for any target, i.e., an inactive target stays inactive forever. \textbf{Actions and Transition Function:} Seven possible transitions are designed between the state subspaces, which correspond to seven actions in our target MDP. Figure~\ref{fig:MDP} illustrates these transitions and actions. In the MDP, all the actions are deterministic, i.e., given the current state and an action, we specify a new state for the target.
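Since the transition function is deterministic, it can be represented as a small lookup table. The mapping below is our reading of the seven transitions between the state subspaces (only $T(s_{Lost}, a_6) = s_{Tracked}$ is stated explicitly in the text), so the action-to-edge assignment is illustrative:

```python
# Deterministic state-transition table T(s, a) for the target MDP.
# The action labels a1..a7 follow the MDP figure; this particular
# assignment of actions to transitions is our own reading, not a
# verbatim specification from the source.
TRANSITIONS = {
    ("Active",  "a1"): "Tracked",   # true positive detection
    ("Active",  "a2"): "Inactive",  # false alarm
    ("Tracked", "a3"): "Tracked",   # keep tracking
    ("Tracked", "a4"): "Lost",      # target occluded / out of range
    ("Lost",    "a5"): "Lost",      # stay lost
    ("Lost",    "a6"): "Tracked",   # re-associated with a detection
    ("Lost",    "a7"): "Inactive",  # lost too long -> terminate
}

def T(state, action):
    """Deterministic transition: returns the new state for (state, action)."""
    return TRANSITIONS[(state, action)]
```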
For example, executing action $a_6$ on a \textit{Lost} target would transfer the target into a \textit{Tracked} state, i.e., $T(s_{Lost}, a_6) = s_{Tracked}$. \textbf{Reward Function:} As in the original study \cite{xiang2015learning}, we learn the reward function from training data, i.e., an inverse reinforcement learning problem, where we use ground truth trajectories of the targets as supervision. \subsection{Policy} \paragraph{Policy in Active States} In an Active state $s$, the MDP makes the decision between transferring an object proposal into a Tracked or Inactive state based on whether the detection is true or noisy. To do this, we train a set of binary Support Vector Machines (SVMs) offline, one for each camera view, to classify a detection belonging to that view into Tracked or Inactive states using a normalized 5D feature vector $\phi_{Active}(s)$, i.e., 2D image plane coordinates, width, height and score of the detection, where training examples are collected from training video sequences. This is equivalent to learning the reward function: \begin{equation} R_{Active}(s, a) = y(a)((\mathbf{w}_{Active}^k)^T \cdot \phi_{Active}(s) + b_{Active}^k), \end{equation} for an object proposal belonging to camera $k \in \{1, \cdots, K\}$. $(\mathbf{w}_{Active}^k, b_{Active}^k)$ defines the learned weights and bias of the SVM for camera $k$, $y(a) = +1$ if action $a = a_1$, and $y(a) = -1$ if $a = a_2$ (see Figure~\ref{fig:MDP}). Training a separate SVM for each camera view allows weights to be learned based on object dimensions and locations in that particular view, and thus works better than training a single SVM for all views. Since a single object can result in multiple proposals, we initialize a tracker for that object if any of the fused proposals results in a positive reward. Note that a false positive from the object detector can still be misclassified and transferred to a Tracked state, which we then leave to be handled by the MDP in Tracked and Lost states.
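A direct transcription of $R_{Active}$, with the per-camera SVM weights passed in (function name and toy values below are our own):

```python
import numpy as np

def r_active(phi, w_k, b_k, action):
    """Active-state reward for a proposal from camera k.
    phi: normalized 5D feature (x, y, width, height, detector score).
    Action 'a1' moves the target to Tracked, 'a2' to Inactive."""
    y = 1.0 if action == "a1" else -1.0          # y(a)
    return y * (np.dot(w_k, phi) + b_k)          # y(a) * (w^T phi + b)
```

A tracker is then initialized for an object as soon as any of its fused proposals yields a positive reward under action $a_1$.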
\paragraph{Policy in Tracked States} In a \textit{Tracked} state, the MDP needs to decide whether to keep tracking the target or to transfer it to a \textit{Lost} state. As long as the target is visible, we should keep tracking it; otherwise, it should be marked ``lost''. We build an appearance model for the target online and use it to track the target. If the appearance model is able to successfully track the target in the next video frame, the MDP leaves the target in a Tracked state. Otherwise, the target is transferred to a Lost state. \textbf{Template Representation:} The appearance of the target is simply represented by a template that is an image patch of the target in a video frame. Whenever an object detection is transferred to a Tracked state, we initialize the target template with the detection bounding box. If the target is initialized with multiple fused proposals, then each detection is stored as a template. We make note of detections obtained from different camera views, and use these to model the appearance of the target in each such view. This is crucial to track objects across camera views under varying perspective changes. When the target is being tracked, the MDP collects its templates in the tracked frames to represent the history of the target, which will be used in the Lost state for decision making. \textbf{Template Tracking:} Tracking of templates is carried out by performing dense optical flow as described in \cite{xiang2015learning}. The stability of the tracking is measured using the median of the Forward-Backward (FB) errors~\cite{kalal2012tracking} of all sampled points: $e_{medFB} = \mathrm{median}(\{e(\mathbf{u}_i)\}_{i=1}^n)$, where $\mathbf{u}_i$ denotes each sampled point, and $n$ is the total number of points. If $e_{medFB}$ is larger than some threshold, the tracking is considered to be unstable.
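The stability test on the median FB error can be sketched as follows; the threshold value used below is a placeholder, not a value taken from the source:

```python
import numpy as np

def median_fb_error(pts, pts_fb):
    """Median forward-backward error over sampled template points.
    pts: (n, 2) points in frame t; pts_fb: (n, 2) the same points after
    tracking t -> t+1 and back to t with optical flow."""
    errs = np.linalg.norm(pts - pts_fb, axis=1)   # per-point FB error e(u_i)
    return np.median(errs), errs

def is_stable(pts, pts_fb, e0=10.0):
    """Tracking is deemed stable if e_medFB does not exceed threshold e0
    (e0 = 10 px is a hypothetical choice)."""
    e_med, _ = median_fb_error(pts, pts_fb)
    return e_med <= e0
```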
Moreover, after filtering out unstable matches whose FB error is larger than the threshold, a new bounding box of the target is predicted using the remaining matches by measuring the scale change between points before and after. This process is carried out for all camera views in which a target template has been initialized and tracking is in progress. Similar to the original MDP framework, we use the optical flow information in addition to the object proposal history to prevent drifting of the tracker. To do this, we compute the bounding box overlap between the target box in each of the past $L$ frames and the corresponding detections in those frames. We then compute the mean bounding box overlap over the past $L$ tracked frames, $o_{mean}$, as another cue to make the decision. Once again, this process is repeated for each camera view the target is being tracked in. In addition to the above features, we also \textit{gate} the target track. This involves introducing a check to see if the current global position of the tracked target falls within a window (gate) of its last known global position. This forbids the target track from latching onto objects that appear close on the image plane, yet are much farther away in the global coordinate frame. We denote the last known global position and the currently tracked global position of the target as $\mathbf{x}^{G}(t-1)$ and $\mathbf{\hat{x}}^{G}(t)$ respectively.
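The gating check follows directly from these definitions; the per-axis window used in the usage example is a hypothetical value:

```python
import numpy as np

def passes_gate(x_prev, x_hat, t_gate):
    """True if the currently tracked global position x_hat = x^G(t) lies
    within the per-axis window t_gate of the last known global position
    x_prev = x^G(t-1)."""
    diff = np.abs(np.asarray(x_prev) - np.asarray(x_hat))
    return bool(np.all(diff <= np.asarray(t_gate)))
```

For instance, a target whose tracked position jumps several metres laterally between consecutive frames fails a 2 m gate and is a candidate for the Lost state.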
Finally, we define the reward function in a Tracked state $s$ using the feature set $\phi_{Tracked}(s) = (\{e_{medFB}^{k'}\}_{k'=1}^{K'}, \{o_{mean}^{k'}\}_{k'=1}^{K'}, \mathbf{x}^{G}(t-1), \mathbf{\hat{x}}^{G}(t))$ as: \begin{equation} R_{Tracked}(s, a) = \begin{cases} y(a), & \text{if }\exists k' \in \{1, \cdots, K'\}\text{ s.t.}\\ & (e_{medFB}^{k'} < e_0) \land (o_{mean}^{k'} > o_0)\\ & \land (|\mathbf{x}^{G}(t-1) - \mathbf{\hat{x}}^{G}(t)| \leq \mathbf{t}_{gate}),\\ -y(a), & \text{otherwise}, \end{cases} \end{equation} where $e_0$ and $o_0$ are fixed thresholds, $y(a) = +1$ if action $a = a_3$, and $y(a) = -1$ if $a = a_4$ (see Figure~\ref{fig:MDP}). $k'$ above indexes the camera views in which the target is currently being tracked, and $\mathbf{t}_{gate}$ denotes the gating threshold. Thus, the MDP keeps the target in a Tracked state if $e_{medFB}$ is smaller and $o_{mean}$ is larger than their respective thresholds for any one of the $K'$ camera views, in addition to satisfying the gating check. Otherwise, the target is transferred to a Lost state. \textbf{Template Updating:} The appearance model of the target needs to be regularly updated in order to accommodate appearance changes. As in the original work, we adopt a ``lazy'' updating rule and resort to the object detector to prevent tracking drift. This is done so that we do not accumulate tracking errors, but rather rely on data association to handle appearance changes and continue tracking. In addition to this, templates are initialized in views where the target is yet to be tracked by using proposals that are fused with detections corresponding to the tracked location in an adjacent camera view. This helps track objects that move across adjacent camera views, by creating target templates in the new view as soon as they are made available. \IncMargin{1em} \begin{algorithm}[t!]
\smaller \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Set of multi-video sequences $\mathcal{V} = \{(v_i^1, \cdots, v_i^K)\}_{i = 1}^{N}$, ground truth trajectories $\mathcal{T}_i = \{t_{ij}\}_{j = 1}^{N_i}$, object proposals $\mathcal{D}_i = \{d_{im}\}_{m = 1}^{M_i}$ and their corresponding projections for each multi-video sequence $(v_i^1, \cdots, v_i^K)$} \BlankLine \Output{Binary classifiers $\{(\mathbf{w}^k_{Lost}, b^k_{Lost})\}_{k=1}^K$ for data association} \BlankLine \Repeat{all targets are successfully tracked}{ \ForEach{multi-video sequence $(v_i^1, \cdots, v_i^K)$ in $\mathcal{V}$}{ \ForEach{target $t_{ij}$}{ Initialize the MDP in an Active state\; $l \leftarrow$ index of the first frame in which $t_{ij}$ is correctly detected\; Transfer the MDP to a Tracked state and initialize the target template for each camera view in which the target is observed\; \While{$l \leq$ index of last frame of $t_{ij}$}{ Fuse object proposals as described in Section~\ref{fusion}\; Follow the current policy and choose an action $a$\; Compute the action $a_{gt}$ according to the ground truth\; \eIf{current state is Lost and $a \neq a_{gt}$}{ \ForEach{camera view $k$ in which the target has been seen}{ Decide the label $y^k_{m_k}$ of the pair $(t^k_{ij}, d^k_{im_k})$\; $\mathcal{S}^k \leftarrow \mathcal{S}^k\ \bigcup\ \{(\phi(t^k_{ij}, d^k_{im_k}), y^k_{m_k})\}$\; $(\mathbf{w}^k_{Lost}, b^k_{Lost}) \leftarrow$ solution of Eq.~\ref{eq:SVM_lost} on $\mathcal{S}^k$\; } break\; }{ Execute action $a$\; $l \leftarrow l + 1$\; } \If{$l >$ index of last frame of $t_{ij}$}{ Mark target $t_{ij}$ as successfully tracked\; } } } } } \caption{Reinforcement learning of the binary classifier for data association.}\label{alg:reinforce} \end{algorithm}\DecMargin{2em} \paragraph{Policy in Lost States} In a Lost state, the MDP needs to decide whether to keep the target in a Lost state, or transition it to a Tracked state, or mark it as Inactive.
We simply mark a lost target as Inactive and terminate the tracking if the target has been lost for more than $L_{Lost}$ frames. The more challenging task is to make the decision between tracking the target and keeping it as lost. This is treated as a data association problem: in order to transfer a lost target into a Tracked state, the target needs to be associated with an object proposal; otherwise, the target retains its Lost state. \textbf{Data Association:} Let $t$ denote a lost target, and $d$ be an object detection. The goal of data association is to predict the label $y \in \{+1, -1\}$ of the pair $(t, d)$ indicating that the target is linked $(y = +1)$ or not linked $(y = -1)$ to the detection. Assuming that the detection $d$ belongs to camera view $k$, this binary classification is performed using the real-valued linear function $f^k(t, d) = (\mathbf{w}_{Lost}^k)^T \cdot \phi_{Lost}(t, d) + b_{Lost}^k$, where $(\mathbf{w}_{Lost}^k, b_{Lost}^k)$ are the parameters that control the function (for camera view $k$), and $\phi_{Lost}(t, d)$ is the feature vector which captures the similarity between the target and the detection. The decision rule is given by $y = +1$ if $f^k(t, d) \geq 0$, else $y = -1$. Consequently, the reward function for data association in a lost state $s$ given the feature set $\{\phi_{Lost}(t, d_m)\}_{m=1}^M$ is defined as \begin{equation} R_{Lost}(s, a) = y(a)\bigg(\vc{\text{max }}{M}{m=1} ((\mathbf{w}_{Lost}^{k_m})^T \cdot \phi_{Lost}(t, d_m) + b_{Lost}^{k_m})\bigg), \end{equation} where $y(a) = +1$ if action $a = a_6$, $y(a) = -1$ if $a = a_5$ (see Figure~\ref{fig:MDP}), and $m$ indexes the $M$ potential detections for association. Potential detections for association with a target are simply obtained by applying a gating function around the last known location of the target in the global coordinate system.
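In code, $R_{Lost}$ is a max over per-detection classifier scores, where each candidate detection is scored with the weights of the camera view it originates from. This is a sketch with our own names and toy weights:

```python
import numpy as np

def r_lost(target_feats, cam_ids, weights, action):
    """Lost-state reward. target_feats: list of phi_Lost(t, d_m) vectors for
    the M candidate detections inside the gate; cam_ids[m] is the camera view
    detection m came from; weights[k] = (w_k, b_k) for camera k's classifier."""
    scores = [np.dot(weights[k][0], f) + weights[k][1]
              for f, k in zip(target_feats, cam_ids)]
    y = 1.0 if action == "a6" else -1.0   # a6: re-track, a5: stay Lost
    return y * max(scores)                # y(a) * max_m f^{k_m}(t, d_m)
```

The per-detection scores computed this way are also what later feed the assignment step between lost targets and detections.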
Note that based on which camera view each detection $d_m$ originates from, the appropriate weights $(\mathbf{w}_{Lost}^{k_m}, b_{Lost}^{k_m})$ associated with that view are used. As a result, the task of policy learning in the Lost state reduces to learning the set of parameters $\{(\mathbf{w}_{Lost}^k, b_{Lost}^k)\}_{k=1}^K$ for the decision functions $\{f^k(t, d)\}_{k=1}^K$. \begin{table}[t!] \caption{Features used for data association \cite{xiang2015learning}. We introduce two new features (highlighted in bold) based on the global coordinate positions of targets and detections.} \centering \resizebox{0.9\linewidth}{!}{% \begin{tabular}{ c | c p{5cm} } \hline \textbf{Type} & \textbf{Notation} & \multicolumn{1}{c}{\textbf{Feature Description}} \\ \hline \multirow{3}{*}{FB error} & \multirow{3}{*}{$\phi_1, \cdots, \phi_5$} & {Mean of the median forward-backward errors from the entire, left half, right half, upper half and lower half of the templates obtained from optical flow}\\ \hline \multirow{9}{*}{NCC} & \multirow{4}{*}{$\phi_6$} & {Mean of the median Normalized Correlation Coefficients (NCC) between image patches around the matched points in optical flow} \\ \\ & \multirow{4}{*}{$\phi_7$} & {Mean of the median Normalized Correlation Coefficients (NCC) between image patches around the matched points obtained from optical flow} \\ \hline \multirow{6}{*}{Height ratio} & \multirow{3}{*}{$\phi_8$} & {Mean of ratios of the bounding box height of the detection to that of the predicted bounding boxes obtained from optical flow} \\ \cline{2-3} & \multirow{2}{*}{$\phi_9$} & {Ratio of the bounding box height of the target to that of the detection} \\ \hline \multirow{3}{*}{Overlap} & \multirow{3}{*}{$\phi_{10}$} & {Mean of the bounding box overlaps between the detection and the predicted bounding boxes from optical flow}\\ \hline Score & $\phi_{11}$ & Normalized detection score\\ \hline \multirow{13}{*}{Distance} & \multirow{4}{*}{$\phi_{12}$} & {Euclidean distance 
between the centers of the target and the detection after motion prediction of the target with a linear velocity model} \\ \cline{2-3} & \multirow{4}{*}{$\boldsymbol{\phi_{13}}$} & \textbf{Lateral offset between last known global coordinate position of the target and that of the detection} \\ \cline{2-3} & \multirow{4}{*}{$\boldsymbol{\phi_{14}}$} & \textbf{Longitudinal offset between last known global coordinate position of the target and that of the detection} \\ \hline \end{tabular} } \label{tab:lost} \end{table} \textbf{Reinforcement Learning:} We train the binary classifiers described above using the reinforcement learning paradigm. Let $\mathcal{V} = \{(v_i^1, \cdots, v_i^K)\}_{i = 1}^{N}$ denote a set of multi-video sequences for training, where $N$ is the number of sequences and $K$ is the total number of camera views. Suppose there are $N_i$ ground truth targets $\mathcal{T}_i = \{t_{ij}\}_{j=1}^{N_i}$ in the $i^{th}$ multi-video sequence $(v_i^1, \cdots, v_i^K)$. Our goal is to train the MDP to successfully track all these targets across all camera views they appear in. We start training with initial weights $(\mathbf{w}^k_0, b^k_0)$ and an empty training set $\mathcal{S}^k_0 = \emptyset$ for the binary classifier corresponding to each camera view $k$. Note that when the weights of the binary classifiers are specified, we have a complete policy for the MDP to follow. So the training algorithm loops over all the multi-video sequences and all the targets, and follows the current policy of the MDP to track them. The binary classifier or the policy is updated only when the MDP makes a mistake in data association, i.e., when the MDP takes a different action than the one indicated by the ground truth trajectory. Suppose the MDP is tracking the $j^{th}$ target $t_{ij}$ in the video $v^k_i$, and on the $l^{th}$ frame of the video, the MDP is in a lost state.
Consider the two types of mistakes that can happen: i) The MDP associates the target $t^k_{ij}(l)$ to an object detection $d^k_m$ which disagrees with the ground truth, i.e., the target is incorrectly associated to a detection. Then $\phi(t_{ij}^k(l) , d^k_m)$ is added to the training set $\mathcal{S}^k$ of the binary classifier for camera $k$ as a negative example. ii) The MDP decides to not associate the target to any detection, but the target is visible and correctly detected by a detection $d^k_m$ based on the ground truth, i.e., the MDP missed the correct association. Then $\phi(t_{ij}^k(l) , d_m^k)$ is added to the training set as a positive example. After the training set has been augmented, we update the binary classifier by re-training it on the new training set. Specifically, given the current training set $\mathcal{S}^k = \{(\phi(t^k_m, d^k_m), y^k_m)\}_{m=1}^M$, we solve the following soft-margin optimization problem to obtain a max-margin classifier for data association in camera view $k$: \begin{multline} \label{eq:SVM_lost} \vc{\text{min }}{}{\vspace{0.3cm}\mathbf{w}^k_{Lost}, b^k_{Lost}, \mathbf{\xi}} \ \ \frac{1}{2} ||\mathbf{w}^k_{Lost}||^2 + C \sum_{m=1}^M \xi_m \\ \text{s.t. }y^k_m\big((\mathbf{w}^k_{Lost})^T \cdot \phi(t^k_m, d^k_m) + b^k_{Lost}\big) \geq 1 - \xi_m, \xi_m \geq 0, \forall m, \end{multline} where $\xi_m, m = 1, \cdots, M$ are the slack variables, and $C$ is a regularization parameter. Once the classifier has been updated, we obtain a new policy which is used in the next iteration of the training process. Note that based on which view the data association is carried out in, the weights of the classifier in that view are updated in each iteration. We keep iterating and updating the policy until all the targets are successfully tracked. Algorithm~\ref{alg:reinforce} summarizes the policy learning algorithm. 
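Eq.~\ref{eq:SVM_lost} can be solved with any off-the-shelf SVM solver. As a self-contained illustration, the sketch below minimizes the same objective $\frac{1}{2}\|\mathbf{w}\|^2 + C\sum_m \xi_m$ via subgradient descent on the equivalent hinge-loss form; the learning rate and epoch count are arbitrary choices of ours, not values from the source:

```python
import numpy as np

def train_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Soft-margin linear SVM trained by per-sample subgradient descent on
    0.5*||w||^2 + C * sum_m max(0, 1 - y_m (w^T x_m + b));
    a simple stand-in for the QP solved at each policy update."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (w @ X[i] + b)
            if margin < 1:                         # sample violates the margin
                w -= lr * (w - C * y[i] * X[i])    # regularizer + hinge term
                b += lr * C * y[i]
            else:
                w -= lr * w                        # regularizer only
    return w, b
```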
\textbf{Feature Representation:} We retain the same feature representation described in \cite{xiang2015learning}, but add two features based on the lateral and longitudinal displacements of the last known target location and the object proposal location in the global coordinate system. This leverages 3D information that is otherwise unavailable in 2D MOT. Table~\ref{tab:lost} summarizes our feature representation. \IncMargin{1em} \begin{algorithm}[t!] \caption{Multi-object tracking with MDPs.}\label{alg:MOT} \smaller \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{A multi-video sequence $(v^1, \cdots, v^K)$, corresponding object proposals $\mathcal{D} = \{d_m\}_{m = 1}^{M}$ and their projections, learned binary classifier weights $\{(\mathbf{w}_{Active}^k, b_{Active}^k)\}_{k = 1}^K$ and $\{(\mathbf{w}^k_{Lost}, b^k_{Lost})\}_{k=1}^K$} \Output{Trajectories of targets $\mathcal{T} = \{t_j\}_{j = 1}^{N}$ in the sequence} \BlankLine \ForEach{frame $l$ in $(v^1, \cdots, v^K)$}{ Fuse object proposals as described in Section~\ref{fusion}\; \tcc{process targets in tracked states} \ForEach{tracked target $t_j$ in $\mathcal{T}$}{ Follow the policy, move the MDP of $t_j$ to the next state\; } \tcc{process targets in lost states} \ForEach{lost target $t_j$ in $\mathcal{T}$}{ \ForEach{proposal $d_m$ not covered by any tracked target}{ Compute $f^{k_m}(t_j, d_m) = (\mathbf{w}^{k_m}_{Lost})^T \cdot \phi(t_j, d_m) + b^{k_m}_{Lost}$\; } } Data association with the Hungarian algorithm for the lost targets\; Initialize target templates for uninitialized camera views using matched (fused) proposals\; \ForEach{lost target $t_j$ in $\mathcal{T}$}{ Follow the assignment, move the MDP of $t_j$ to the next state\; } \tcc{initialize new targets} \ForEach{proposal $d_m$ not covered by any tracked target in $\mathcal{T}$}{ Initialize an MDP for a new target $t$ with proposal $d_m$\; \eIf{action $a_1$ is taken following the policy}{ Transfer $t$ to the \textit{Tracked} state and initialize the
target template for each camera view in which target is observed\; $\mathcal{T} \leftarrow \mathcal{T}\ \bigcup\ \{t\}$\; }{ Transfer $t$ to the Inactive state\; } } } \end{algorithm}\DecMargin{2em} \begin{table*}[ht] \centering \caption{Quantitative results showing ablative analysis of our proposed tracker.} \label{tab:results} \begin{tabular}{@{}ccccccccccc@{}} \toprule \multirow{2}{*}{\textbf{Criteria for Comparison}} & \multirow{2}{*}{\textbf{Tracker Variant}} & \multicolumn{2}{c}{\textbf{Sensor Configuration}} & \multicolumn{5}{c}{\textbf{MOT Metrics} \cite{bernardin2008evaluating, milan2013challenges}}\\ \cmidrule(lr){3-4}\cmidrule(lr){5-9} & & \# of Cameras & Range Sensors & MOTA ($\uparrow$) & MOTP ($\downarrow$) & MT ($\uparrow$) & ML ($\downarrow$) & IDS ($\downarrow$)\\ \midrule\midrule \multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Number of\\Cameras Used\\(Section \ref{results_cameras})\end{tabular}}} & - & 2 & \ding{51} & 73.38 & 0.03 & 71.36\% & 16.13\% & 16\\ & - & 3 & \ding{51} & 77.26 & 0.03 & 77.34\% & 14.49\% & 38\\ & - & 4 & \ding{51} & 72.81 & 0.05 & 72.48\% & 20.76\% & 49\\ & - & 4$^{\dagger}$ & \ding{51} & 74.18 & 0.05 & 74.10\% & 18.18\% & 45\\ & - & 6 & \ding{51} & 79.06 & 0.04 & 79.66\% & 11.93\% & 51\\ & - & 8 & \ding{51} & 75.10 & 0.04 & 70.37\% & 14.07\% & 59\\ \midrule \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Projection Scheme\\(Section \ref{results_projection})\end{tabular}}} & Point cloud based projection & 8 & \ding{51} & \textbf{75.10} & \textbf{0.04} & \textbf{70.37\%} & \textbf{14.07\%} & \textbf{59}\\ & IPM projection & 8 & \ding{51} (for fusion) & 47.45 & 0.39 & 53.70\% & 19.26\% & 152\\ \midrule \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Fusion Scheme\\(Section \ref{results_fusion})\end{tabular}}} & Point cloud based fusion & 8 & \ding{51} & \textbf{75.10} & \textbf{0.04} & \textbf{70.37\%} & 14.07\% & \textbf{59}\\ & Distance based fusion & 8 & \ding{51} (for projection) & 72.20 & \textbf{0.04} & 
68.23\% & \textbf{12.23\%} & 65\\ \midrule \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Sensor Modality\\(Sections \ref{results_projection}, \ref{results_fusion})\end{tabular}}} & Cameras+LiDAR & 8 & \ding{51} & \textbf{75.10} & \textbf{0.04} & \textbf{70.37\%} & \textbf{14.07\%} & \textbf{59}\\ & Cameras & 8 & \ding{55} & 40.98 & 0.40 & 50.00\% & 27.40\% & 171\\ \midrule \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Vehicle Detector\\(Section \ref{results_detector})\end{tabular}}} & RefineNet\cite{rajaram2016refinenet} & 8 & \ding{51} & \textbf{75.10} & \textbf{0.04} & \textbf{70.37\%} & \textbf{14.07\%} & \textbf{59}\\ & RetinaNet\cite{lin2017focal} & 8 & \ding{51} & 73.89 & 0.05 & 68.37\% & 17.07\% & 72\\ & SubCat\cite{ohn2014fast} & 8 & \ding{51} & 69.93 & \textbf{0.04} & 66.67\% & 22.22\% & 81\\ \midrule \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Global Position\\based Features\\(Section \ref{results_3Dfeatures})\end{tabular}}} & with $\{\boldsymbol{\phi_{13}}, \boldsymbol{\phi_{14}}\}$ & 8 & \ding{51} & \textbf{75.10} & \textbf{0.04} & \textbf{70.37\%} & \textbf{14.07\%} & \textbf{59}\\ & without $\{\boldsymbol{\phi_{13}}, \boldsymbol{\phi_{14}}\}$ & 8 & \ding{51} & 71.32 & 0.05 & 64.81\% & 17.78\% & 88\\ \bottomrule \end{tabular} \end{table*} \subsection{Multi-Object Tracking with MDPs} \label{MOT} After learning the policy/reward of the MDP, we apply it to the multi-object tracking problem. We dedicate one MDP for each target, and the MDP follows the learned policy to track the object. Given a new input video frame, targets in tracked states are processed first to determine whether they should stay as tracked or transfer to lost states.
Then we compute the pairwise similarity between lost targets and object detections which are not covered by the tracked targets, where non-maximum suppression based on bounding box overlap is employed to suppress covered detections, and the similarity score is computed by the binary classifier for data association. After that, the similarity scores are used in the Hungarian algorithm \cite{munkres1957algorithms} to obtain the assignment between detections and lost targets. According to the assignment, lost targets which are linked to object detections are transferred to tracked states; otherwise, they stay as lost. Finally, we initialize an MDP for each object detection which is not covered by any tracked target. Algorithm~\ref{alg:MOT} describes our 3D MOT algorithm using MDPs in detail. Note that tracked targets have higher priority than lost targets in tracking, and detections covered by tracked targets are suppressed to reduce ambiguities in data association. \section{Experimental Analysis} \textbf{Testbed:} Since we propose full-surround MOT using vision sensors, we use a testbed comprising 8 outward-looking RGB cameras (seen in Figure~\ref{fig:motivation}). This setup ensures full surround coverage of the scene around the vehicle, while retaining sufficient overlap between adjacent camera views. Frames captured from these cameras along with annotated surround vehicles are shown in Figure~\ref{fig:motivation}. In addition to full vision coverage, the testbed has full-surround Radar and LiDAR FoVs. Although the final goal of this study is full-surround MOT, we additionally consider cases where only a subset of the vision sensors is used, to illustrate the modularity of the approach. More details on the sensors, their synchronization and calibration, and the testbed can be found in \cite{rangesh2017multimodal}.
\textbf{Dataset:} To train and test our 3D MOT system, we collect a set of four sequences, each 3-4 minutes long, comprising multi-camera videos and LiDAR point clouds captured using our testbed described above. The sequences are chosen to be much longer than traditional MOT sequences so that long range maneuvers of surround vehicles can be tracked. This is crucial for autonomous driving. We also annotate all vehicles in the 8 camera videos for each sequence with their bounding boxes, as well as track IDs. It should be noted that each unique vehicle in the scene is assigned the same ID in all camera views. With these sequences set up, we use one sequence for training our tracker, and reserve the rest for testing. All our results are reported on the entire test set. \textbf{Evaluation Metrics:} We use multiple metrics to evaluate the multiple object tracking performance as suggested by the MOT Benchmark \cite{bernardin2008evaluating}. Specifically, we use the 3D MOT metrics described in \cite{milan2013challenges}. These include Multiple Object Tracking Accuracy (MOTA), Multiple Object Tracking Precision (MOTP), Mostly Tracked targets (MT, percentage of ground truth objects whose trajectories are covered by the tracking output for at least 80\% of their lifetime), Mostly Lost targets (ML, percentage of ground truth objects whose trajectories are covered by the tracking output for less than 20\% of their lifetime), and the total number of ID Switches (IDS). In addition to listing the metrics in Table~\ref{tab:results}, we also draw arrows next to each of them indicating if a high ($\uparrow$) or low ($\downarrow$) value is desirable. Finally, we provide top-down visualizations of the tracking results in a global coordinate system centered on the ego-vehicle for qualitative evaluation.
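The MT and ML metrics reduce to thresholding each ground-truth track's coverage fraction at the 80\%/20\% cutoffs from the definitions above:

```python
def mt_ml(coverage):
    """coverage: per-ground-truth-track fraction of its lifetime covered by
    the tracker output. MT counts tracks covered >= 80%; ML counts tracks
    covered < 20%. Both are returned as fractions of all tracks."""
    n = len(coverage)
    mt = sum(c >= 0.8 for c in coverage) / n
    ml = sum(c < 0.2 for c in coverage) / n
    return mt, ml
```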
\begin{figure*}[ht] \centering \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_GT.png} \caption{Ground truth} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_8cams_range_tracks.pdf} \caption{8 cameras} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_6cams_range_tracks.pdf} \caption{6 cameras} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_4cams_range_tracks.pdf} \caption{4 cameras} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_4cams_1_range_tracks.pdf} \caption{4$^{\dagger}$ cameras} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_3cams_range_tracks.pdf} \caption{3 cameras} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[b]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{9_2cams_range_tracks.pdf} \caption{2 cameras} \end{subfigure}% \hspace{1mm} \caption{Tracking results with different number of cameras. The camera configuration used is depicted above each result.} \label{fig:results_cameras} \end{figure*} \subsection{Experimenting with Number of Cameras}\label{results_cameras} As our approach to tracking is designed to be extremely modular, we test our tracker with different camera configurations. We experiment with 2, 3, 4, 6 and 8 cameras respectively. Top-down visualizations of the generated tracks for a test sequence are depicted in Figure~\ref{fig:results_cameras}. The ground truth tracks are provided for visual comparison. As can be seen, the tracker provides consistent results in its FoV irrespective of the camera configuration used, even if the cameras have no overlap between them. 
The quantitative results on the test set for each camera configuration are listed in Table~\ref{tab:results}. It must be noted that the tracker for each configuration is scored only based on the ground truth tracks visible in that camera configuration. The tracker is seen to score very well on each metric, irrespective of the number of cameras used. This illustrates the robustness of the M$^{3}$OT framework. More importantly, it is seen that our tracker performs exceptionally well in the MT and ML metrics, especially in camera configurations with overlapping FoVs. Even though our test sequences are about 3 minutes long, the tracker mostly tracks more than 70\% of the targets, while mostly losing only a few. This demonstrates that our M$^{3}$OT framework is capable of long-term target tracking. \subsection{Effect of Projection Scheme}\label{results_projection} Figure~\ref{fig:results_projection} depicts the tracking results for a test sequence using the two proposed projection schemes. It is evident that LiDAR based projection results in much better localization in 3D, which leads to more stable tracks and fewer fragments. The IPM based projection scheme is very sensitive to small changes in the input domain, which leads to considerable errors during gating and data association. This phenomenon is verified by the high MOTP value obtained for IPM based projection, as listed in Table~\ref{tab:results}.
\begin{figure}[!h] \captionsetup[subfigure]{justification=centering} \centering \begin{subfigure}[t]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{13_GT.png} \caption{Ground truth} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[t]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{13_LiDAR_proj_tracks.png} \caption{LiDAR based projection} \end{subfigure}% \hspace{1mm} ~ \begin{subfigure}[t]{0.12\textwidth} \centering \includegraphics[width=\linewidth]{13_IPM_tracks.png} \caption{IPM based projection} \end{subfigure} \caption{Tracking results on a test sequence with different projection schemes.} \label{fig:results_projection} \end{figure} \subsection{Effect of Fusion Scheme}\label{results_fusion} Once again, we see that the LiDAR point cloud based fusion scheme is more reliable than the distance based approach, although this difference is much less noticeable when proposals are projected using LiDAR point clouds. The LiDAR based fusion scheme results in objects being tracked longer (across camera views) and more accurately. The distance based fusion approach, on the other hand, fails to associate certain proposals, which results in templates not being stored for new camera views, thereby cutting short the track as soon as the target exits the current view. This superiority is reflected in the quantitative results shown in Table~\ref{tab:results}. The drawbacks of the distance based fusion scheme are exacerbated when using IPM to project proposals, as reflected by the large drop in MOTA for a purely vision based system. This drop in performance is to be expected in the absence of LiDAR sensors. However, it must be noted that half the targets are still tracked for most of their lifetime, while only a quarter of the targets are mostly lost.
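To make the distance based fusion scheme discussed above concrete, the following is an illustrative sketch only: the paper does not give pseudo-code, and the 2 m gating threshold, the helper name, and the greedy nearest-cluster strategy are our own assumptions.

```python
import numpy as np

def fuse_proposals_by_distance(proposals, max_dist=2.0):
    """Greedily merge 3D proposal centers coming from different cameras.

    proposals: list of (camera_id, np.array([x, y, z])) tuples.
    max_dist:  gating distance in meters (illustrative value).
    Returns a list of fused clusters, each a list of proposal indices.
    """
    clusters = []  # each cluster: {'center': xyz, 'members': [idx, ...]}
    for idx, (cam, center) in enumerate(proposals):
        best = None
        for c in clusters:
            # Only fuse proposals that come from different camera views
            if any(proposals[m][0] == cam for m in c['members']):
                continue
            d = np.linalg.norm(c['center'] - center)
            if d < max_dist and (best is None or d < best[0]):
                best = (d, c)
        if best is None:
            clusters.append({'center': center.copy(), 'members': [idx]})
        else:
            c = best[1]
            c['members'].append(idx)
            # Update the cluster center as the mean of all member centers
            pts = np.stack([proposals[m][1] for m in c['members']])
            c['center'] = pts.mean(axis=0)
    return [c['members'] for c in clusters]
```

A scheme like this associates purely on Euclidean distance of the projected centers, which is why errors in 2D-to-3D projection (e.g., with IPM) propagate directly into missed associations.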
\begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{roc.png} \end{center} \caption{ROC curves for different vehicle detectors on the 4 test sequences.} \label{fig:results_detector} \end{figure} \subsection{Effect of using Different Vehicle Detectors}\label{results_detector} Ideally, a tracking-by-detection approach should be detector agnostic. To observe how the tracking results change for different vehicle detectors, we ran the proposed tracker on vehicle detections obtained from three commonly used object detectors~\cite{rajaram2016refinenet, lin2017focal, ohn2014fast}. All three detectors were trained on the KITTI dataset~\cite{geiger2013vision} and have not seen examples from the proposed multi-camera dataset. The ROC curves for the detectors on the proposed dataset are shown in Figure~\ref{fig:results_detector}. The corresponding tracking results for each detector are listed in Table~\ref{tab:results}. Despite the sub-optimal performance of all three detectors and the significant differences in their ROC curves, the tracking results are relatively unaffected. This indicates that the tracker is robust to errors made by the detector, and consistently manages to correct for them. \subsection{Effect of Global Position based Features}\label{results_3Dfeatures} Table~\ref{tab:results} indicates a clear benefit in incorporating features $\{\phi_{13}, \phi_{14}\}$ for data association in Lost states. These features express how near/far a proposal is from the last known location of a target. This helps the tracker disregard proposals that are unreasonably far away from the latest target location. Introduction of these features leads to an improvement in all metrics and therefore justifies their inclusion. \section{Concluding Remarks}\label{conclusions} In this work, we have described a full-surround camera and LiDAR based approach to multi-object tracking for autonomous vehicles.
To do so, we extend a 2D MOT approach based on the tracking-by-detection framework, and make it capable of tracking objects in the real world. The proposed M$^{3}$OT framework is also made highly modular so that it is capable of working with any camera configuration with varying FoVs, and also with or without LiDAR sensors. An efficient and fast early fusion scheme is adopted to handle object proposals from different sensors within a calibrated camera array. We conduct extensive testing on naturalistic full-surround vision and LiDAR data collected on highways, and illustrate the effects of different camera setups, fusion schemes and 2D-to-3D projection schemes, both qualitatively and quantitatively. Results obtained on the dataset support the modular nature of our framework, as well as its ability to track objects for a long duration. In addition to this, we believe that the M$^{3}$OT framework can be used to test the utility of any camera setup, and make suitable modifications thereof to ensure optimum coverage from vision and range sensors. \section{Acknowledgments} We gratefully acknowledge the continued support of our industrial and other sponsors. We would also like to thank our colleagues Kevan Yuen, Nachiket Deo, Ishan Gupta and Borhan Vasli at the Laboratory for Intelligent and Safe Automobiles (LISA), UC San Diego for their useful inputs and help in collecting and annotating the dataset. Finally, we greatly appreciate the comments and suggestions from the reviewers and the associate editor which have improved the quality and readability of this work. \bibliographystyle{IEEEtran}
\section{Abstract} Fewer than 20 transiting Kepler planets have periods longer than one year. Our early search of the Kepler light curves revealed one such system, Kepler-1654 b (originally KIC~8410697b), which shows exactly two transit events and whose second transit occurred only 5 days before the failure of the second of two reaction wheels brought the primary Kepler mission to an end. A number of authors have also examined light curves from the Kepler mission searching for long period planets and identified this candidate. Starting in Sept. 2014 we began an observational program of imaging, reconnaissance spectroscopy and precision radial velocity measurements which confirm with a high degree of confidence that Kepler-1654 b is a {\it bona fide} transiting planet orbiting a mature G2V star (T$_{eff}= 5580$ K, [Fe/H]=-0.08) with a semi-major axis of 2.03 AU, a period of 1047.84 days and a radius of 0.82$\pm$0.02 R$_{Jup}$. Radial velocity (RV) measurements using Keck's HIRES spectrometer obtained over 2.5 years set a limit to the planet's mass of $<0.5\ (3\sigma)$ M$_{Jup}$. The bulk density of the planet is similar to that of Saturn or possibly lower. { We assess the suitability of temperate gas giants like Kepler-1654b for transit spectroscopy with the James Webb Space Telescope since their relatively cold equilibrium temperatures (T$_{pl}\sim 200$ K) make them interesting from the standpoint of exoplanet atmospheric physics. Unfortunately, these low temperatures also make the atmospheric scale heights small and thus transmission spectroscopy challenging.
Finally, the long time between transits can make scheduling JWST observations difficult---as is the case with Kepler-1654b} \section{Introduction} The Kepler mission \citep{Borucki2010} has revolutionized our understanding of exoplanets, finding over 2,300 confirmed planets and almost 4500 candidates\footnote{As of December 2017 for Kepler, with an additional 170 confirmed planets for K2, http://exoplanetarchive.ipac.caltech.edu/}\citep{Batalha2013}. These data have improved our knowledge of the constituents of the inner regions of planetary systems with an inventory that includes planets ranging from less than an Earth radius (Kepler 37b) up to 1.5 Jupiter radii (Kepler 12b), and periods ranging from less than a day (Kepler 78b) up to 1100 days, including Kepler 167 \citep{Kipping2016} and Kepler 1647 \citep{Kostov2016}. A number of non-transiting Kepler planets with longer periods were identified by their radial velocity (RV) signature, e.g. Kepler 407c with a period of order 3000 days \citep{Marcy2014}. The completeness of the Kepler catalog is poor for long period planets. These objects are hard to find {\it a priori} since the transit probability decreases with increasing semi-major axis and because fewer transits are observable in a given observing period. A smaller number of events reduces the total signal-to-noise ratio (SNR) achievable by averaging multiple transits. Most importantly, the Kepler pipeline required 3 or more potential transits before promoting a star to become a ``Kepler Object of Interest'', or KOI, worthy of further investigation \citep{Jenkins2010}. To avoid the Kepler pipeline's prohibition against planets with 1 or 2 transits we analyzed Kepler light curves {\it not identified} with confirmed planets, Kepler candidates, or KOIs. As described below, this search was rewarded with the detection of a Jupiter sized planet in a 2.87 yr (1047.836 day) period orbiting a mid G star, KIC~8410697, which we now refer to as Kepler-1654.
A more complete search for long period systems was carried out by the Planet Hunters group \citep{Wang2015}, who identified a number of systems with 1 and 2 transits. In the case of Kepler-1654 they found only the first of its two transits. \citet{Foreman2016} identified seven new transiting systems showing 1 or at most 2 transits, and 8 long period planets identified with known Kepler systems having at least one shorter period planet. This paper describes follow-up observations of Kepler-1654 using the W.M. Keck Observatory that have allowed us to reject a variety of alternative (``false-positive'') interpretations, fully characterize the host star, and set an upper limit on the planet's mass of 0.48 M$_{Jup}\, (3 \sigma)$. $\S$\ref{lightcurves} describes the search through the Kepler light curves, $\S$\ref{follow} the follow-up observations of the star, $\S$\ref{TransitProps} the characterization of the planet, and $\S$\ref{atmosphereJWST} investigates the prospects of studying the planet's atmosphere with JWST transit spectroscopy. \section{Searching non-KOI Light curves\label{lightcurves}} The data used for this investigation were drawn from Quarters 1-17 and encompassed the entire duration of the Kepler prime mission. A total of 11,232 stars were selected on the basis of their properties in the Kepler Stellar Database \citep{Brown2011}: Kepler magnitude Kp $<14$ mag, effective temperatures between 5500 K and 6000 K, and log g $>$ 3.75. These stellar values are of course only rough estimates \citep{Huber2014} and were used only for an initial selection of likely F5-G5 dwarf stars. Data within each Quarter, $I(t)$, were normalized to near-unity using a trimmed mean signal for the entire Quarter and then searched for individual flux dips using a zero-sum box car filter of length $3L$, where $L$ was allowed to range in duration from 4 to 24 hours.
{ A local trimmed average and standard deviation were evaluated within each segment with the filter output, $S(t)$, at a given time, $t$, having a value, } $$ S(t)=\langle I(t)\rangle_{-L/2}^{L/2} -0.5 \times \left(\langle I(t)\rangle_{-3L/2}^{-L/2} +\langle I(t)\rangle_{+L/2}^{+3L/2}\right) \eqno{(1)}$$ Negative-going dips with signal-to-noise ratio (SNR) $> 20$ were output for subsequent analysis. The noise per sample, $\sigma$, used in this calculation was derived on a Quarter by Quarter basis using a robust estimate of the standard deviation of all points within the Quarter\footnote{We used the ``resistant\_mean'' algorithm in the GSFC IDL library, http://idlastro.gsfc.nasa.gov/contents.html. Routines in this library were used for a number of other calculations in this work.}, $\sigma_Q$, rejecting values deviating by more than $\pm3\sigma$ from the initial mean and standard deviation. The SNR of a potential transit event was evaluated by { dividing the depth of the event by the noise per sample, $\sigma$, and multiplying by $\sqrt{N_L}$, where $N_L$ is the number of samples in a segment of length $L$. } A list of 24 systems was examined more closely. For most of the single transit cases, the transit duration combined with the approximate properties of the star yielded predicted orbital periods \citep{Seager2003} much greater than the duration of the Kepler mission. These systems would be impossible to confirm. In a few cases the predicted orbital periods were short compared to the mission duration, implying that the Kepler pipeline should have found and considered the object if real. One object we identified is Kepler-1654b, orbiting a mid-G dwarf star with a Kepler magnitude of 13.42 mag, a transit depth of 0.51\%, and a period of 1047.8356 days (2.87 yr; Table~\ref{props}). \citet{Wang2015} identified this object as having only a single transit on Day 542+2454833 (BJD). By going to the very end of Q17 we were able to identify the second transit on Day 1590+2454833 (BJD).
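The dip search described above can be sketched numerically. This is a simplified illustration of Eq.~(1) and the SNR estimate: it uses plain (untrimmed) means and ignores the edge handling and robust statistics of the actual pipeline.

```python
import numpy as np

def box_car_response(flux, L):
    """Zero-sum box-car filter of total length 3L (simplified Eq. 1).

    For each sample t, subtract the mean of the two flanking windows
    from the mean of the central window; a negative output flags a dip.
    """
    n = len(flux)
    S = np.zeros(n)
    for t in range(3 * L // 2, n - 3 * L // 2):
        center = flux[t - L // 2: t + L // 2].mean()
        left = flux[t - 3 * L // 2: t - L // 2].mean()
        right = flux[t + L // 2: t + 3 * L // 2].mean()
        S[t] = center - 0.5 * (left + right)
    return S

def dip_snr(depth, sigma, n_samples):
    """SNR of a candidate dip: depth over per-sample noise times sqrt(N_L)."""
    return depth / sigma * np.sqrt(n_samples)
```

For a 0.5\% dip lasting 20 samples in data with 0.1\% per-sample noise, the filter response peaks at the dip center and the SNR comfortably exceeds the 20-sigma threshold used in the search.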
\citet{Foreman2016} also found two transits for this system. \begin{deluxetable}{lcc} \tablecaption{Observed Properties of Kepler-1654 \label{props} } \tabletypesize{\scriptsize} \tablehead{Property&Value&Comment} \startdata Kepler\# &1654\\ KIC \# &8410697&\\ 2MASS designation&J18484459+4426041\\ $\alpha$&18h48m44.6s&J2000\\ $\delta$&44d26m04.1s&J2000\\ Kepler Mag &13.42 &mag\\ J &12.28$\pm$0.021 &mag\\ H &11.93$\pm$0.019 &mag\\ K &11.92 $\pm$0.015 &mag\\ WISE W1 &11.88$\pm$0.023 &mag\\ WISE W2 & 11.92 $\pm$ 0.022 &mag\\ T$_{eff}$ &5580$\pm$ 70 K & Keck HIRES\\ log g & 4.19 $\pm$ 0.06 &Keck HIRES\\ $\lbrack$ Fe/H $\rbrack$ & -0.08 $\pm$ 0.06 &Keck HIRES\\ Vsini & $<$ 2.0 km s$^{-1}$&Keck HIRES\\ Stellar Age&$>$ 5 Gyr&Keck HIRES\\\hline \enddata \end{deluxetable} Figure~\ref{TransitFit} shows light curves from Quarters 6 and 17 which were normalized and detrended using either a linear (Q6) or 2$^{nd}$ order (Q17) baseline to remove small trends. We also examined the entire light curve looking for other transit signatures using the Lomb-Scargle tool available at the Exoplanet Archive\footnote{http://exoplanetarchive.ipac.caltech.edu/}. No significant periodicities indicative of shorter period planets could be identified in the periodogram. A search through the Kepler light curve using the TERRA software (Petigura 2013) revealed no other planets in this system. The detection limit is approximately $80~\mathrm{ppm} \times (P/1\, \mathrm{day})^{0.6}$: transits deeper than 80 ppm with 1 day orbital periods ($\sim 1 R_\oplus$) are ruled out, as are 100-day planets with depths $>$ 1300 ppm ($\sim 0.35 R_\mathrm{Jup}$). Nor did we find any evidence of a transit at half of the nominal 1047.8 day period, thereby ruling out the presence of an eclipsing binary in an edge-on, circular orbit \citep{Santerne2013}.
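The quoted detection-limit scaling can be checked directly; the snippet below assumes a Sun-like host radius for the depth-to-radius conversion (the fitted value for Kepler-1654 is slightly larger, 1.18 R$_\odot$).

```python
import math

def limiting_depth_ppm(period_days):
    """TERRA search sensitivity quoted in the text:
    80 ppm at P = 1 day, scaling as P^0.6."""
    return 80.0 * period_days ** 0.6

# Approximate ratio of the solar radius to the Jovian radius
R_SUN_IN_RJUP = 9.73

def min_detectable_radius_rjup(period_days, r_star_rsun=1.0):
    """Smallest detectable planet radius, since depth = (Rp/R*)^2."""
    depth = limiting_depth_ppm(period_days) * 1e-6
    return math.sqrt(depth) * r_star_rsun * R_SUN_IN_RJUP
```

At $P=100$ days this gives a limiting depth of about 1270 ppm, i.e. roughly 0.35 R$_\mathrm{Jup}$ for a solar-radius star, consistent with the numbers quoted above.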
\begin{figure*} \includegraphics[scale=0.55]{figure1New.pdf}\caption{Kepler data from Quarters 6 and 17 have been normalized, detrended with a linear (Q6, red) or second order (Q17, blue) baseline, and phased around the period of the transit. The solid line shows a fit to these data using a model based on the EXOFAST routines \citep{Eastman2013}. The inset in the upper right shows residuals with respect to the fit. \label{TransitFit}} \end{figure*} Superimposed on the light curves in Figure ~\ref{TransitFit} is a model transit curve fitted to the data as described in $\S$\ref{exofast}. But before describing the result of the light curve analysis, we first discuss the observations used to reject false positive interpretations and to characterize more fully the star and the transiting planet. \begin{deluxetable*}{lcc} \tablecaption{Median values and 68\% confidence interval for EXOFAST$^*$} \tabletypesize{\scriptsize} \tablehead{\colhead{~~~Parameter} & \colhead{Units} & \colhead{Value}} \startdata \sidehead{ Stellar Parameters:} ~~~$M_{*}$\dotfill &Mass (\ensuremath{\,M_\Sun})\dotfill & $1.011_{-0.052}^{+0.056}$\\ ~~~$R_{*}$\dotfill &Radius (\ensuremath{\,R_\Sun})\dotfill & $1.179_{-0.023}^{+0.026}$\\ ~~~$L_{*}$\dotfill &Luminosity (\ensuremath{\,L_\Sun})\dotfill & $1.23_{-0.11}^{+0.12}$\\ ~~~$\rho_*$\dotfill &Density (cgs)\dotfill & $0.876_{-0.033}^{+0.015}$\\ ~~~$\log(g_*)$\dotfill &Surface gravity (cgs)\dotfill & $4.3001_{-0.012}^{+0.0099}$\\ ~~~$\ensuremath{T_{\rm eff}}$\dotfill &Effective temperature (K)\dotfill & $5597_{-93}^{+95}$\\ ~~~$\ensuremath{\left[{\rm Fe}/{\rm H}\right]}$\dotfill &Metallicity\dotfill & $-0.088_{-0.095}^{+0.097}$\\ \sidehead{Planetary Parameters:} ~~~$P$\dotfill &Period (days)\dotfill & $1047.8356_{-0.0019}^{+0.0018}$\\ ~~~$a$\dotfill &Semi-major axis (AU)\dotfill & $2.026_{-0.035}^{+0.037}$\\ ~~~$R_{P}$\dotfill &Radius (\ensuremath{\,R_{\rm J}})\dotfill & $0.819_{-0.017}^{+0.019}$\\ ~~~$T_{eq}$\dotfill &Equilibrium Temperature 
(K)\dotfill & $206.0_{-3.5}^{+3.7}$\\ ~~~$\langle F \rangle$\dotfill &Incident flux (10$^9$ erg s$^{-1}$ cm$^{-2}$)\dotfill & $0.000408_{-0.000027}^{+0.000030}$\\ \sidehead{Primary Transit Parameters:} ~~~$T_C$\dotfill &Time of transit (\ensuremath{\rm {BJD_{TDB}}})\dotfill & $2455375.1341_{-0.0015}^{+0.0014}$\\ ~~~$R_{P}/R_{*}$\dotfill &Radius of planet in stellar radii\dotfill & $0.07138_{-0.00032}^{+0.00033}$\\ ~~~$a/R_{*}$\dotfill &Semi-major axis in stellar radii\dotfill & $370.3_{-4.7}^{+2.2}$\\ ~~~$u_1$\dotfill &linear limb-darkening coeff\dotfill & $0.401_{-0.025}^{+0.024}$\\ ~~~$u_2$\dotfill &quadratic limb-darkening coeff\dotfill & $0.205\pm0.034$\\ ~~~$i$\dotfill &Inclination (degrees)\dotfill & $89.982_{-0.017}^{+0.012}$\\ ~~~$b$\dotfill &Impact Parameter\dotfill & $0.114_{-0.079}^{+0.11}$\\ ~~~$\delta$\dotfill &Transit depth\dotfill & $0.005096_{-0.000045}^{+0.000047}$\\ ~~~$T_{FWHM}$\dotfill &FWHM duration (days)\dotfill & $0.8933_{-0.0053}^{+0.0038}$\\ ~~~$\tau$\dotfill &Ingress/egress duration (days)\dotfill & $0.06463_{-0.00072}^{+0.0023}$\\ ~~~$T_{14}$\dotfill &Total duration (days)\dotfill & $0.9580_{-0.0039}^{+0.0035}$\\ ~~~$P_{T}$\dotfill &A priori non-grazing transit prob\dotfill & $0.002508_{-0.000015}^{+0.000032}$\\ ~~~$P_{T,G}$\dotfill &A priori transit prob\dotfill & $0.002893_{-0.000017}^{+0.000038}$\\ ~~~$F_0$\dotfill &Baseline flux\dotfill & $1.0000036_{-0.0000087}^{+0.0000089}$\\ \sidehead{Secondary Eclipse Parameters:} ~~~$T_{S}$\dotfill &Time of eclipse (\ensuremath{\rm {BJD_{TDB}}})\dotfill & $2455899.05191\pm0.00091$\\ \sidehead{From EXOFAST run with non-zero eccentricity} ~~~$e$\dotfill &Eccentricity\dotfill & $0.26_{-0.11}^{+0.21}$\\ ~~~$\omega_*$\dotfill &Argument of periastron (degrees)\dotfill & $81_{-71}^{+73}$\\ \enddata \tablecomments{$^*$Parameters derived with eccentricity forced to zero except as noted.} \label{KIC8410697Fit} \end{deluxetable*} \section{Follow-up Observations of Kepler-1654 and 
Kepler-1654b\label{follow}} \subsection{Keck AO Imaging} We obtained near-infrared adaptive optics images of Kepler-1654 at Keck Observatory on the night of 2015-08-21 UT (Figure~\ref{Kimage}). Observations were obtained with the 1024 $\times$ 1024 NIRC2 array and the natural guide star system; the target star was bright enough to be used as the guide star. The data were acquired using the narrow-band $Br$-$\gamma$ filter using the narrow camera field of view with a pixel scale of 9.942 mas/pixel. The $Br$-$\gamma$ filter has a narrower bandwidth (2.13--2.18 $\micron$), but a similar central wavelength (2.15 $\micron$) compared to the Ks filter (1.95--2.34 $\micron$; 2.15 $\micron$) and allows for longer integration times before saturation. A 3-point dither pattern was utilized to avoid the noisier lower left quadrant of the NIRC2 array. The 3-point dither pattern was observed three times with 2 coadds and a 30 second integration time per coadd for a total on-source exposure time of $3 \times 3 \times 2 \times 30~\mathrm{s} = 540~\mathrm{s}$. The target star was measured with a resolution of 0.059\arcsec\ (FWHM). No other stars were detected within the 10\arcsec\ field of view of the camera. In the $Br$-$\gamma$ filter, the data are sensitive to stars that have a K-band contrast of $\Delta$K = 4.3 mag at a separation of 0.1\arcsec\ and $\Delta$K = 7.49 mag at 0.5\arcsec\ from the central star. We estimate the sensitivities by injecting fake sources with a signal-to-noise ratio of 5 into the final combined images at distances of N $\times$ FWHM from the central source, where N is an integer. The 5$\sigma$ sensitivities, as a function of radius from the star, are also shown in Figure~\ref{Kimage}. There is a star 7\arcsec\ northwest of Kepler-1654 that was outside the field of view of the NIRC2 observations. However, this star is clearly resolved in 2MASS and is a separate star in the Kepler Input Catalog (KIC 8410692).
The KIC photometry of KIC 8410692 (KepMag=17.64 mag) indicates that the star has an effective temperature and a surface gravity of $T_{eff}=6111$K and $\log{g} = 4.35$, making the star a main sequence F dwarf at a distance of about 4 kpc, and, thus, not a bound companion to Kepler-1654. The Kepler photometric aperture is oriented such that the background star is not included in the aperture in quarters 6 and 17 when the transits were observed, and the photocentric position remains centered on Kepler-1654 during the transit, indicating that the transit occurs around Kepler-1654 and not the background star. Further, at $50\times$ fainter than Kepler-1654, the photometric blending of the background star (if the entire stellar profile were inside the photometric aperture) would only dilute the observed transit, and, hence, the derived planetary radius, by $<1\%$ \citep{Ciardi2015}. \begin{figure*} \includegraphics[scale=0.5]{figure2.pdf}\caption{An image of Kepler-1654 obtained with the Keck II telescope in the narrow band $Br$-$\gamma$ filter shows no evidence for a companion within 4\arcsec\, of the central star. The derived $5\sigma$ detection limits for the infrared imaging are also shown: differential magnitude as a function of angular separation from the primary star. \label{Kimage}} \end{figure*} \subsection{Keck HIRES Spectroscopy} We obtained spectra of Kepler-1654 using the HIRES instrument \citep{Vogt94} at the W.\ M.\ Keck Observatory. Observations and data reduction followed the usual methods of the California Planet Search \citep[CPS;][]{Howard2010}. A spectrum obtained with a 15 minute exposure on 2014/9/14 without the iodine cell was used for spectral typing (Figure~\ref{spectra}).
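As an aside to the blending argument above, the dilution from the background star KIC 8410692 can be verified from the Kepler magnitudes alone; this is a back-of-the-envelope sketch, not the \citet{Ciardi2015} formalism.

```python
import math

def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference."""
    return 10 ** (-0.4 * delta_mag)

def transit_dilution(delta_mag):
    """Fractional reduction in the observed transit depth if a background
    star of the given magnitude difference fully blends into the aperture."""
    r = flux_ratio(delta_mag)
    return r / (1.0 + r)

# KepMag 17.64 (KIC 8410692) vs. 13.42 (Kepler-1654)
delta_kp = 17.64 - 13.42
```

The background star is $\sim$50$\times$ fainter, so even full blending would dilute the depth by only $\sim$2\%; since the derived radius scales as the square root of the depth, the radius bias is at the $\sim$1\% level.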
The spectral synthesis modeling program ``SpecMatch'' \citep{Petigura2015} has been calibrated with asteroseismology stars and yielded values of T$_\mathrm{eff}$, log g, and [Fe/H] with formal uncertainties of 70 K, 0.06 dex, and 0.06 dex, respectively (Table~\ref{props}). These parameters show the star is a slowly rotating, G5 main sequence star, perhaps beginning to evolve off the main sequence. The Ca H\&K lines show no emission reversal, implying a stellar age greater than $\sim$5 Gyr. An analysis looking for secondary spectra in the HIRES spectrum of Kepler-1654 found no companions brighter than 1\% of the primary \citep{Kolbl2015}. These stellar values are similar to those cited by \citet{Foreman2016}: our spectroscopically derived values of (T$_{eff}$,R$_*$)=({ 5580$\pm$70 K, 1.18$\pm$0.03 R$_\odot$) } vs. (5918$\pm$160 K, $1.0^{+0.35}_{-0.16}$ R$_\odot$) for Foreman-Mackey's values. We adopt our stellar values in this analysis (Tables~\ref{props} and \ref{KIC8410697Fit}). We collected 18 RV measurements between 2014/09/07 and 2017/3/30. An iodine cell was used for each observation as a wavelength calibrator and point spread function (PSF) reference. Each spectrum spanned wavelengths from 3600--8000 $\AA$ with a spectral resolution of R=60,000 and typical SNR per pixel of 100--200. The ``C2'' decker ($0\farcs87$ $\times$ 14\arcsec\ slit) provided spectral resolution $R \sim$ 55,000 and allowed for the sky background to be measured and subtracted. An exposure meter was used to automatically terminate exposures after reaching a target signal-to-noise ratio (SNR) per pixel at 550 nm. The standard CPS Doppler pipeline was used to measure RVs \citep{Marcy1992, Howard2009}. RV measurements are listed in Table~\ref{RVdata}. These values are consistent with the transit interpretation, showing variations of $<$10 m s$^{-1}$, ruling out definitively the false-alarm possibility of an eclipsing binary, which would show RV variations of a few km s$^{-1}$ on this timescale.
\begin{figure*} \includegraphics[scale=0.75]{figure3.pdf}\caption{Spectra from the HIRES instrument on the Keck telescope. The top left panel shows a portion of the spectrum near the Ca H\&K lines and the lack of an emission reversal implies a stellar age greater than $\sim$5 Gyr. The middle left and bottom right panels show lines near the Mg b triplet and H-$\alpha$ which look normal for a mid-G star with narrow lines. \label{spectra}} \end{figure*} \begin{deluxetable}{lcc} \tablecaption{Keck HIRES Data for Kepler-1654} \tabletypesize{\scriptsize} \tablehead{\colhead{JD Date} & \colhead{Velocity (m s$^{-1}$)} & \colhead{$\sigma$Vel (m s$^{-1}$)}} \startdata 2456907.899441 & 4.61 & 3.42 \\ 2457061.166912 & 13.98 & 5.52 \\ 2457062.168017 & -1.34 & 5.19 \\ 2457151.051283 & -7.03 & 3.13 \\ 2457180.021422 & -0.33 & 3.78 \\ 2457201.026295 & -16.67 & 3.90 \\ 2457203.095994 & 11.25 & 4.68 \\ 2457211.936684 & 9.66 & 3.31 \\ 2457229.053345 & -2.74 & 3.63 \\ 2457326.714999 & -3.32 & 3.21 \\ 2457353.692574 & 8.66 & 3.21 \\ 2457354.729928 & 7.67 & 4.20 \\ 2457478.135425 & -2.43 & 3.72 \\ 2457521.001943 & -8.54 & 3.06 \\ 2457601.046180 & -7.80 & 5.64 \\ 2457620.898209 & 7.04 & 3.33 \\ 2457672.824144 & 2.84 & 3.76 \\ 2457830.140368 & -19.56 & 3.38 \\ \enddata \label{RVdata} \end{deluxetable} \section{Analysis of the Transit and RV Observations\label{TransitProps}} \subsection{Properties of the Transiting Planet Kepler-1654b\label{exofast}} First it is important to confirm that this system truly represents a giant transiting planet. We used the VESPA tool \citep{Morton2012,Morton2015} to estimate { the probability that this signal represents an astrophysical false positive}.
As inputs, we used the light curve shown in Fig.~1, the stellar parameters listed in Table~1 along with {\it gri} photometry from APASS, the NIRC2 contrast curve described in Sec.~3.2, the Keck/HIRES limit on secondary spectra of $\Delta$mag$<$5, and an upper limit on any secondary eclipse of $2\times10^{-4}$. The most likely false positive configuration is that of a blended eclipsing binary, but this scenario is roughly 20,000 times less likely than the planetary scenario. The resulting false positive probability is $6.2\times10^{-5}$, more than sufficient to validate Kepler-1654 as a transiting planet. { \citet{Foreman2016} cited a false alarm rate due to eclipsing binaries of 0.05 based on statistical estimates of the contamination by background objects. Our much higher confidence level is due to the follow-up observations, which gave direct and sensitive limits on stellar companions, as well as to the improved stellar parameters. {\it It is on this basis that we suggest Kepler-1654b (n\'ee KIC~8410697b) should be regarded as a fully confirmed Kepler object.}} To determine the properties of the transiting companion we used the EXOFAST transit analysis routine \citep{Eastman2013} with stellar properties derived from the Keck data as priors plus the transit light curves as input\footnote{We used the implementation of EXOFAST available at the NASA Exoplanet Science Institute: https://exoplanetarchive.ipac.caltech.edu/cgi-bin/ExoFAST/nph-exofast.}. We ran EXOFAST in its full MCMC mode with the eccentricity set to zero, with the resulting parameters presented in Table~\ref{KIC8410697Fit} and the fit shown in Figure~\ref{TransitFit}. With 715 data points in the two observed transits, the $\chi^2$ of the fit was 692.5 and the rms of the residuals was 0.00024, as shown in the figure. The various fitted parameters are astrophysically reasonable.
For example, the derived limb-darkening coefficients of 0.40$\pm$0.02 and 0.20$\pm$0.03 are consistent with values appropriate to the stellar properties \citep{Claret2011}. The EXOFAST fit shows the planet to be a Jupiter sized object, 0.82 R$_\mathrm{Jup}$, in a 2.03 AU orbit. At this location the equilibrium temperature of the planet is 206 K, assuming an albedo of zero. Finally, we conducted a separate fit to the transit light curve using the BATMAN software package \citep{Kreidberg2015}. All light curve parameters from this analysis agree with those in Table~\ref{KIC8410697Fit} to within 1$\sigma$. Using our posterior distributions, we computed the posterior of the stellar density under the assumption of a circular orbit \citep{Seager2003}. With the stellar density derived from our spectroscopic analysis, we then used the density posterior to investigate the photoeccentric effect \citep{Dawson2012}. The photoeccentric effect allows a direct and independent constraint on a transiting planet's orbital eccentricity through the observable impact of any nonzero orbital eccentricity on the transit light curve. Our analysis shows a preference for nonzero orbital eccentricity: we find $e=0.3^{+0.3}_{-0.1}$, { consistent with the weakly non-zero estimate from EXOFAST when run with eccentricity as a free parameter, $0.26_{-0.11}^{+0.21}$ (Table~\ref{KIC8410697Fit}). } The BATMAN analysis sets a lower limit on the eccentricity of $e>0.06$ at 99.7\% confidence. Thus, like most other giant, long-period exoplanets known from radial velocity surveys, Kepler-1654b may also have an orbital eccentricity greater than that of Jupiter and Saturn. Finally, we note that our derived planet values are consistent with those derived by \citet{Foreman2016}, e.g. $R_p=0.82\pm0.06$ R$_\mathrm{Jup}$ vs. their 0.70$\pm$0.1 R$_\mathrm{Jup}$.
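The quoted equilibrium temperature follows directly from the fitted stellar parameters; the sketch below uses the standard zero-albedo, full-redistribution relation with the EXOFAST posteriors from Table~\ref{KIC8410697Fit}.

```python
import math

AU_M = 1.495978707e11   # astronomical unit [m]
R_SUN_M = 6.957e8       # solar radius [m]

def equilibrium_temperature(t_eff, r_star_rsun, a_au, albedo=0.0):
    """Equilibrium temperature for full heat redistribution:
    T_eq = T_eff * (1 - A)^(1/4) * sqrt(R* / 2a)."""
    return t_eff * (1.0 - albedo) ** 0.25 * math.sqrt(
        r_star_rsun * R_SUN_M / (2.0 * a_au * AU_M))

# T_eff = 5597 K, R* = 1.179 R_Sun, a = 2.026 AU (Table 2)
t_eq = equilibrium_temperature(5597, 1.179, 2.026)
```

This reproduces the tabulated $T_{eq} \simeq 206$ K; a nonzero albedo would lower it further, reinforcing the point that Kepler-1654b is a genuinely temperate gas giant.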
\subsection{Precision RV: Constraining Kepler-1654b} Although our RV measurements have helped to confirm the planetary nature of Kepler-1654b, our goal of determining the mass of the transiting planet has not yet been achieved. We analyzed the 18 HIRES RV measurements (Table~\ref{RVdata}), which span 2.5 years, using the open source Python package \texttt{RadVel} \citep{Fulton2018}. We adopt an RV model consisting of a single Keplerian orbit, with orbital period and phase fixed at the known values and assuming an eccentricity of zero. The model includes a constant RV offset, $\gamma$, and a ``jitter'' term $\sigma$ representing astrophysical and instrumental noise. The MCMC analysis (Tables~\ref{tab:comp} and \ref{tab:params}) yields an estimate of the semi-amplitude $K_b=2.7^{+3.2}_{-3.3}$ m s$^{-1}$, which corresponds to 43$\pm$52 M$_\oplus$ (0.13$\pm$0.16 M$_{Jup}$), or a 3-$\sigma$ upper limit of $<$156 M$_\oplus$ ($<$0.49 M$_{Jup}$). Figure~\ref{fig:multiplot2} shows the RV data plotted along with the best fit model, while Figures~\ref{freepost} and \ref{allpost} show the posterior distributions of the model parameters. A 0-planet model is favored on the basis of the Bayesian Information Criterion (Table~\ref{tab:comp}), consistent with a non-detection. What level of signal might we expect on the basis of a planet of radius 0.82 R$_{Jup}$? The radius-mass data shown in Figure~3 of \citet{Howard2013a} suggest that with a radius of 9.2 R$_\oplus$, Kepler-1654b should have a mass in the range of 50--100 M$_\oplus$. \citet{Wolfgang2016} give a number of radius-mass relationships for planets with $R<4R_\oplus$ (smaller than Kepler-1654b), and their Method-1 yields a mass estimate of 58 M$_\oplus$, which falls within the \citet{Howard2013a} range. These masses correspond to RV semi-amplitudes of 3--6 m s$^{-1}$, which our RV data only begin to constrain.
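As a consistency check on the quoted mass, the standard semi-amplitude relation (Eq.~2 below) can be inverted; this sketch assumes $\sin(i)=1$, as is appropriate for a transiting planet, and is not part of the \texttt{RadVel} fit itself.

```python
import math

MEARTH_PER_MJUP = 317.8  # approximate Earth masses per Jupiter mass

def mpl_from_k(k_ms, p_yr, m_star_msun, e=0.0):
    """Invert the RV semi-amplitude relation (cf. Lovis & Fischer 2010):
    planet mass in M_Jup from K [m/s], period [yr], stellar mass [M_Sun],
    assuming sin(i) = 1."""
    return (k_ms * math.sqrt(1.0 - e**2) / 28.4
            * m_star_msun ** (2.0 / 3.0) * p_yr ** (1.0 / 3.0))

p_yr = 1047.8356 / 365.25          # orbital period in years
m_jup = mpl_from_k(2.7, p_yr, 1.011)  # best-fit K_b = 2.7 m/s
m_earth = m_jup * MEARTH_PER_MJUP
```

The best-fit $K_b=2.7$ m s$^{-1}$ indeed maps to $\sim$0.14 M$_{Jup}$ ($\sim$43 M$_\oplus$), matching the posterior median quoted above.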
\begin{figure*}[!h] \centering \includegraphics[width=6.5in]{figure4.pdf} \caption{ Best-fit 1-planet Keplerian orbital model for Kepler-1654b. The maximum likelihood model is plotted, while the orbital parameters listed in Table \ref{tab:params} are the median values of the posterior distributions. The thin blue line is the best fit 1-planet model. We add in quadrature the RV jitter term(s) listed in Table \ref{tab:params} with the measurement uncertainties for all RVs. {\bf b)} Residuals to the best fit 1-planet model. {\bf c)} RVs phase-folded to the ephemeris of planet b. The small point colors and symbols are the same as in panel {\bf a}. The phase-folded model for planet b is shown as the blue line. \label{fig:multiplot2} } \end{figure*} \begin{figure*}[!tbp] \centering \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figure5.pdf} \caption{Posterior distributions for all free parameters.\label{freepost} } \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{figure6.pdf} \caption{Posterior distributions for all derived parameters. \label{allpost}} \end{minipage} \end{figure*} The corresponding upper limit to the bulk density is $<$1.2 g cm$^{-3}$. As shown in Figure~\ref{fig:massraddens2}, the limit to Kepler-1654b's density sits close to Saturn's in the Mass-Radius-Density parameter space. Our RV observations rule out the most massive planets, but are consistent with the distribution of planetary densities in this radius range. Continuing RV observations will eventually yield a mass for the transiting system. We can use our RV data to explore the upper limit on the mass of any interior planet.
With a 1-$\sigma$ RMS residual of 9.1 m s$^{-1}$ (Table~\ref{tab:comp}) and 18 observations, we can set a 3-$\sigma$ upper limit of $K=3\times9.1/\sqrt{18}=6.4$ m s$^{-1}$ on the RV semi-amplitude of any interior planet (transiting or not), where $K$ is given by: $$ K=\frac{28.4\, {\rm m\, s^{-1}}} {\sqrt{1-e^2}}\, M_{pl}\, \sin(i) \, M_*^{-2/3}\, P^{-1/3} \eqno{(2)}$$ \noindent with the planet mass in Jupiter units, the stellar mass in solar units, and the period in years \citep{Lovis2010}. Assuming $\sin(i)=1$ for a system with at least one transiting planet, a stellar mass of 1 M$_\odot$, and $e=0$, the HIRES observations set a mass limit for any additional planet of $ M_{pl}<0.23 P_{yr}^{1/3}$ M$_{Jup}$. \begin{deluxetable}{lrr} \tablecaption{Model Comparison} \tablehead{\colhead{Statistic} & \colhead{0 planets} & \colhead{1 planet}} \startdata $N_{\rm data}$ (number of measurements) & 18 & 18\\ $N_{\rm free}$ (number of free parameters) & 2 & 3\\ RMS (RMS of residuals in m s$^{-1}$) & 9.12 & 8.88\\ $\chi^{2}$ (assuming no jitter) & 69.91 & 67.22\\ $\chi^{2}_{\nu}$ (assuming no jitter) & 4.37 & 4.48\\ $\ln{\mathcal{L}}$ (natural log of the likelihood) & -65.29 & -64.87\\ BIC (Bayesian information criterion) & 135.48 & 135.62\\ \enddata \label{tab:comp} \end{deluxetable} \begin{deluxetable}{lrr} \tablecaption{MCMC Posteriors} \tablehead{\colhead{Parameter} & \colhead{Value} & \colhead{Units}} \startdata \sidehead{\bf{Modified MCMC Step Parameters}} $\sqrt{e}\cos{\omega}_{b}$ & $\equiv$ 0.0 & \\ $\sqrt{e}\sin{\omega}_{b}$ & $\equiv$ 0.0 & \\ \hline \sidehead{\bf{Orbital Parameters}} $P_{b}$ & $\equiv$ 1047.8363 & days\\ $T\rm{conj}_{b}$ & $\equiv$ 2455375.133 & JD\\ $e_{b}$ & $\equiv$ 0.0 & \\ $\omega_{b}$ & $\equiv$ 0.0 & degrees\\ $K_{b}$ & 2.7 $^{+3.2}_{-3.3}$ & m s$^{-1}$\\ \hline \sidehead{\bf{Other Parameters}} $\gamma$ (RV offset)& -1.1 $\pm 2.6$ & m s$^{-1}$\\ $\sigma$ (jitter) & 9.1 $^{+2.3}_{-1.7}$ & $\rm m\ 
s^{-1}$\\ \enddata \label{tab:params} \end{deluxetable} \begin{figure*}[!h] \centering \includegraphics[width=6.5in]{figure7.pdf} \caption{The distribution of bulk density in g cm$^{-3}$ for planets with radii in the range of 0.5-2 R$_{Jup}$ based on data for over 250 transiting planets with well-determined mass and radius measurements (cyan points). The color scale shows bulk densities from 0.1 to $>$1 g cm$^{-3}$ and shows the fall-off in bulk density for more massive planets \citep{Howard2013}. The positions of Saturn, Jupiter and the upper limit to Kepler-1654b (red star) are indicated. The point size is proportional to the planet's density. \label{fig:massraddens2}} \end{figure*} Of primary importance will be to follow the Kepler-1654 system with additional RV monitoring to determine the planet's mass. New imaging and RV observations are planned to investigate the new long-period systems found by \citet{Foreman2016}. \section{Characterizing the Atmosphere of Temperate Gas Giants\label{atmosphereJWST}} Kepler-1654b is representative of the few temperate, transiting gas giants available for atmospheric characterization. We investigated whether this system might be promising for spectroscopy with the Hubble (HST) and James Webb Space Telescopes \citep[JWST;][]{Beichman2014}. Kepler-1654b and others like it, as described in \citet{Wang2015} and \citet{Kipping2016} (Table~\ref{LongP}), will be the coolest gas planets ($\sim200$ K) for which we will be able to probe atmospheric composition and physical characteristics. Comparisons to planets in our own Solar System will be particularly valuable. Kepler-1654b is cold for a transiting planet. The strength of absorption features in transmission spectra is proportional to a planet's atmospheric scale height, and that scale height is proportional to atmospheric temperature. Therefore, the low temperature of the planet produces a small-amplitude transmission spectrum.
This, plus the relative faintness of Kepler-1654 itself, limits the signal-to-noise of its transmission spectrum. On the plus side, the long duration of these events enhances the sensitivity for measurements of trace atomic and molecular species in the 1-5 $\mu$m band. Sample spectra in the visible and near-IR for Kepler-1654b are shown in Figure~\ref{JWST}. \begin{deluxetable*}{ccc} \tablecaption{Predicted Epochs of Future Transits for Kepler-1654b (UT)\label{future}} \tabletypesize{\scriptsize} \tablehead{\colhead{Orbit} & \colhead{Transit Midpoint (BJD)} & \colhead{Transit Midpoint (UT)} } \startdata 0&2,455,375.1341$\pm$0.0014&2010-Jun-27 15:13:06$\pm$120 (sec)\\ 1&2,456,422.9697$\pm$0.0024&2013-May-10 11:16:22$\pm$200 (sec)\\ 2&2,457,470.8053$\pm$0.0040&2016-Mar-23 07:19:38$\pm$350 (sec)\\ 3&2,458,518.6409$\pm$0.0059&2019-Feb-4 03:22:54$\pm$510 (sec)\\ 4&2,459,566.4765$\pm$0.0077&2021-Dec-17 23:26:10$\pm$670 (sec)\\ {\bf 5}&{\bf 2,460,614.3121$\pm$0.0096}&{\bf 2024-Oct-30 19:29:25$\pm$830 (sec)}\\ 6&2,461,662.1477$\pm$0.0115&2027-Sep-13 15:32:41$\pm$990 (sec)\\ 7&2,462,709.9833$\pm$0.0134&2030-Jul-27 11:35:57$\pm$1,160 (sec)\\ 8&2,463,757.8189$\pm$0.0153&2033-Jun-9 07:39:13$\pm$1,320 (sec)\\ \enddata \tablecomments{These predicted transit midpoints assume no offsets due to interactions with other bodies in the system (Transit Timing Variations, TTVs). The bold entry for 2024/10/30 is nominally the first one observable by JWST and occurs at the edge of the JWST observability window.} \end{deluxetable*} We have simulated $JWST$ NIRSpec prism spectra for a single transit of this system and show the results in Figure~\ref{JWST}. These spectra were computed using the method described in \citet{Greene2016} and use Nextgen \citep{HAB99} stellar models with the T$_{eff}$ and log $g$ of Kepler-1654 (Table~1) and our atmospheric transmission models of Kepler-1654b.
We model the atmosphere by solving for radiative and chemical equilibrium, and also include condensation of water when supersaturation is reached. Three atmospheric models with g=10 m s$^{-2}$ with and without clouds and g=25 m s$^{-2}$ without clouds are shown in the top panel of Figure~\ref{JWST}a. We computed signals in photo-electrons using the apparent stellar magnitude of Kepler-1654 in the relevant bands, 18 hours of integration time in transit, an additional 18 hours on the star, the 25-m$^2$ collecting area of $JWST$, and NIRSpec prism resolving power and system transmission values kindly provided by the NIRSpec team (S. Birkmann, private communication). The resultant 1 $\sigma$ noise values are on the order of 15 ppm when binned to $R = 10$, lower than the best values achieved with $HST$ WFC3 G141 observations \citep[e.g.,][]{KBD14a}. It is uncertain whether $JWST$ NIRSpec or other $JWST$ instruments will achieve such low noise levels, so Figure~\ref{JWST} represents the best performance that $JWST$ is likely to achieve on a single transit observation of this system. \begin{figure*} \centering \includegraphics[width=6.5in]{figure8a.pdf}\\ \includegraphics[width=6.5in]{figure8b.pdf} \caption{top) Simulated JWST NIRSpec prism spectrum of Kepler-1654b with uncertainties computed as described in \citet{Greene2016}. Model spectra have been binned to $R=10$ and are shown as solid colored curves. 1 $\sigma$ uncertainties were computed for a single 18.4 hr transit at $R=10$ and are shown as error bars. Three atmospheric models with g=10 m s$^{-2}$ with and without clouds and g=25 m s$^{-2}$ without clouds appear in the top panel.
The bottom panel shows $R = 10$ models and uncertainties for g=25 m s$^{-2}$ atmospheres with and without a heated stratosphere (HS) and two levels of metallicity, Solar and 10 $\times$ Solar.\label{JWST}} \end{figure*} A second scenario for the planet's atmosphere includes enhanced transmission spectral features from a heated stratosphere (Figure~\ref{JWST}b). Our models predict that the transmission spectra are sensitive to the scale height above a cloud deck at 0.1 - 1.0 bar. If the temperature above the cloud deck is substantially higher than the equilibrium temperature ($\sim$200 K) of the planet, the strength of the absorption features will be proportionally larger (Figure~\ref{JWST}). A heated stratosphere exists in all the giant planets of the Solar System, and one has recently been detected in a hot exoplanet \citep{Evans2017}, although the degree of heating with respect to the equilibrium temperature differs from planet to planet. Figure~\ref{JWST}b shows models with and without a heated stratosphere (HS) and two levels of metallicity, Solar and 10 $\times$ Solar. The effect of stratospheric heating is much stronger than that of enhanced metallicity. Figure~\ref{JWST} shows that $JWST$ could detect the strong CH$_4$ features at 2.3 and 3.4 $\mu$m at low-to-moderate confidence in several models. We expect that spectral retrieval algorithms \citep[e.g.,][]{LWZ13} will provide a higher-confidence detection of CH$_4$, since such methods combine information from all features in the observed spectrum. We do not expect $HST$ observations to yield detections of CH$_4$ or other features in the model spectra. The smaller aperture of $HST$ will produce lower SNR in the $1.1 - 1.7\mu$m passband of its WFC3 G141 instrument mode than for $JWST$ NIRSpec.
The transmission models show no spectral absorption features and only modest Rayleigh slopes at wavelengths shorter than $\lambda = 600$ nm ($JWST$/NIRSpec's lower cutoff), so shorter-wavelength HST observations will also not be able to constrain the planet's atmospheric properties. The $JWST$ NIRSpec prism spectra could certainly detect the spectral features in the heated stratosphere models. Finally, we put Kepler-1654b into the context of other long-period transiting systems suitable for observation by JWST. Table~\ref{LongP} gives data on 18 confirmed planets with radius $\geq 2$R$_\oplus$, orbital periods greater than 250 days, and an equilibrium temperature\footnote{We adopt an illustrative equilibrium temperature given by $T_{pl}=265 L^{0.25} d^{-0.5}$ K for a planet located at $d$ (AU) from a star of luminosity $L$ ($L_\odot$).} T$_{pl}<$300 K. We developed a figure of merit which takes into account the total number of stellar photons, denoted $S$, observed in a spectral element, $\Delta\nu$, in a time $\tau$; the photon shot-noise, $\sigma=\sqrt{S}$; and the transit depth, $\alpha$. The ``Transit SNR'' is defined as $\alpha S/ \sigma=\alpha\sqrt{S}$ and is evaluated for stellar flux densities, $F_\nu$, at K$_s$ (2.2 $\mu$m) or WISE W2 (4.6 $\mu$m) for a telescope with a collecting area $A$=25 m$^2$, with an instrument of resolution $R$=100 and efficiency $\eta$=0.25, and in an integration time, $\tau$, equal to the duration of a transit: $S=F_\nu A\eta\Delta\nu \tau /(h\nu)$. This figure of merit glosses over many details \citep{Greene2016}, but serves to rank these planets in terms of their suitability for transit spectroscopy. For planets with a temperature below 200 K, only Kepler-167e, which is a larger planet orbiting a smaller star \citep{Kipping2016}, has a ``Transit SNR'' larger than Kepler-1654b's. Other systems rank a factor of two or more lower, making Kepler-1654b a valuable target for future study.
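The figure of merit can be sketched numerically. The snippet below is illustrative only: the Ks-band zero-point flux density ($\approx$667 Jy, the 2MASS value) is an assumption not stated in the text, so the result agrees with the tabulated values only to within a few percent.

```python
# Hypothetical sketch of the "Transit SNR" figure of merit defined above:
# S = F_nu * A * eta * dnu * tau / (h * nu),  SNR = alpha * sqrt(S).
# The Ks zero-point flux (~667 Jy) is an assumed value, not from the paper.

import math

H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s
JY = 1.0e-26         # 1 Jansky in W m^-2 Hz^-1
F0_KS = 667.0 * JY   # assumed 2MASS Ks-band zero-point flux density

def transit_snr(ks_mag, depth_ppm, duration_days,
                wavelength_um=2.2, area_m2=25.0, eta=0.25, resolution=100):
    nu = C / (wavelength_um * 1.0e-6)       # band-center frequency, Hz
    dnu = nu / resolution                   # width of one spectral element
    tau = duration_days * 86400.0           # one transit duration, s
    f_nu = F0_KS * 10.0 ** (-ks_mag / 2.5)  # stellar flux density, W m^-2 Hz^-1
    photons = f_nu * area_m2 * eta * dnu * tau / (H * nu)  # photon count S
    return (depth_ppm * 1.0e-6) * math.sqrt(photons)       # alpha * sqrt(S)

# Kepler-1654b: Ks = 11.92, depth = 5095 ppm, transit duration = 0.89 d
print(f"Transit SNR(Ks) ~ {transit_snr(11.92, 5095, 0.89):.0f}")
```

With these assumptions the Kepler-1654b entry evaluates to $\approx$146, close to the tabulated Ks-band value of 141; the small difference is absorbed by the assumed zero point.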
Of course, the atmospheric scale height, which depends on the planet's temperature and surface gravity, also affects the detectability of spectral signatures. But since only a few of these planets have RV-determined masses, we do not account for the effects of scale height here. The last column of Table~\ref{LongP} highlights the challenge of actually observing these long-period planets. The long time between transits and JWST's limited pointing windows can make scheduling difficult. JWST's sun avoidance restrictions determine when the $10^\circ\times10^\circ$ Kepler field can be observed, nominally from early/mid-April to late-October/mid-November. Thus, for example, transits of Kepler-1654b and Kepler-167e will be observable only starting with the 2024 events, based on extrapolations from the information in the JWST APT tool. \begin{deluxetable*}{ccccccccccc} \tablecaption{ Properties of Long Period Transiting Planets \label{LongP}} \tabletypesize{\scriptsize} \tablehead{\colhead{Planet}&\colhead{Period}&\colhead{R$_{pl}$}&\colhead{Depth}&\colhead{Duration}&\colhead{Ks}&\colhead{WISE2}&\colhead{SNR$^*$}&\colhead{SNR$^*$}&\colhead{T$_{pl}$}&\colhead{First }\\ \colhead{Name}&\colhead{(days)} & \colhead{(R$_{Jup}$)}&\colhead{(ppm)}&\colhead{(days)}&\colhead{(mag)}&\colhead{(mag)}&\colhead{(Ks)}&\colhead{ (W2)}&\colhead{(K)}&\colhead{JWST}} \startdata Kepler-167e$^1$ & 1,070 & 0.91 & 16,224 & 0.67 & 11.83 & 11.84 & 407 & 213 & 140 & 2024-10-25\\ PH2b/Kepler 86b$^9$ & 280 & 0.90 & 8,589 & 0.44 & 11.12 & 11.14 & 242 & 126 & 284 & 2020-10-28\\ Kepler-553c$^6$ & 330 & 1.00 & 14,549 & 0.51 & 13.06 & 12.88 & 180 & 103 & 234 & 2019-06-08\\ {\bf Kepler-1654b}$^2$ & 1,050 & 0.82 & 5,095 & 0.89 & 11.92 & 11.93 & 141 & 74 & 177 & 2024-10-30$^{11}$\\ Kepler-421b$^4$ & 700 & 0.37 & 2,510 & 0.66 & 11.54 & 11.49 & 71 & 38 & 177 & 2025-10-10\\ Kepler-1647b$^3$ & 1,110 & 1.06 & 3,687 & 0.41 & 12.00 & 11.90 & 67 & 37 & 255 & 2021-08-02\\ Kepler-1625b$^6$ & 290 & 0.54 & 3,489 & 0.79 & 13.92 & 
13.92$^{12}$ & 36 & 19 & 275 & 2019-05-26\\ KIC 9663113b$^5$ & 570 & 0.41 & 1,669 & 0.83 & 12.50 & 12.46 & 34 & 18 & 244 & 2020-10-23\\ Kepler-1536b$^6$ & 360 & 0.28 & 1,840 & 0.54 & 12.55 & 12.54 & 30 & 16 & 176 & 2019-05-12\\ KIC 10525077b$^5$ & 850 & 0.49 & 2,489 & 0.83 & 13.75 & 13.80 & 29 & 15 & 211 & 2019-04-11\\ Kepler-1630b$^6$ & 510 & 0.20 & 1,009 & 0.35 & 11.80 & 11.71 & 18 & 10 & 165 & 2019-07-15\\ Kepler-22b$^{10}$ & 290 & 0.21 & 493 & 0.31 & 10.15 & 10.15 & 18 & 10 & 272 & 2019-09-08\\ Kepler-1634b$^6$ & 370 & 0.29 & 1,080 & 0.47 & 12.72 & 12.68 & 15 & 8 & 238 & 2019-08-02\\ Kepler-150f$^7$ & 640 & 0.33 & 1,259 & 0.56 & 13.37 & 13.37 & 14 & 7 & 207 & 2024-05-09\\ Kepler-1635b$^6$ & 470 & 0.33 & 1,540 & 0.56 & 13.90 & 13.90 & 14 & 7 & 212 & 2020-06-12\\ Kepler-1600b$^6$ & 390 & 0.28 & 1,219 & 0.41 & 13.90 & 13.88 & 9 & 5 & 218 & 2019-10-06\\ Kepler-1632b$^6$ & 450 & 0.22 & 360 & 0.53 & 11.66 & 11.64 & 9 & 5 & 281 & 2020-05-15\\ Kepler-1636b$^6$ & 430 & 0.29 & 840 & 0.74 & 14.23 & 14.23$^{12}$ & 7 & 4 & 255 & 2023-05-23\\ \enddata \tablecomments{Notes: $^*$See text for a description of the ``Transit SNR'' figure of merit in an R=100 spectral element. $^1$\citet{Kipping2016}; $^2$This work; $^3$Circumbinary planet with multiple transits \citep{Kostov2016}; $^4$\citet{Kipping2014}; $^5$\citet{Wang2015}; $^6$\citet{Morton2016}; $^7$\citet{Schmitt2017}; $^8$\citet{Jenkins2015}; $^9$\citet{Wang2013}; $^{10}$\citet{Borucki2012}. $^{11}$This transit is just at the edge of the JWST observability window based on current knowledge. $^{12}$Estimated from 2MASS.} \end{deluxetable*} \section{Conclusion} We have searched Q1-Q17 Kepler light curves of F and G stars not previously associated with confirmed or candidate planets, or even with Kepler ``Objects of Interest'', and we were able to identify Kepler-1654b (originally KIC~8410697b), which shows two transits with a 1047-day period---one of the longest periods yet found in the Kepler survey.
Subsequent AO and RV observations were able to rule out false positives and to characterize the planet and its host star. A fit to the combined transit curve plus RV data shows that orbiting this mature G5 star is a 0.82 \ensuremath{\,R_{\rm J}}\ planet with a mass of $<$0.5 M$_{Jup}$. Transit spectroscopy with JWST of Kepler-1654b and similar objects will enable a careful study of planets whose physical states, e.g., a low equilibrium temperature of $\sim$200 K, most closely resemble those of the outer planets in our own solar system.
\section{Introduction} Consider the following question: a learner receives an $iid$ training set $S$ drawn from a distribution parametrized by $\theta^*$. There is a teacher who knows $\theta^*$. Can the teacher select a subset from $S$ so that the learner estimates $\theta^*$ better from the subset than from $S$? This question is distinct from training set reduction (see e.g.~\cite{garcia2012prototype,zeng2005smo,Wilson2000}) in that the teacher can use the knowledge of $\theta^*$ to carefully design the subset. It is, in fact, a coding problem: Can the teacher approximately encode $\theta^*$ using items in $S$ for a known decoder, which is the learner? As such, the question is not a machine learning task but rather a machine teaching one~\cite{Zhu2018Overview,Goldman1995Complexity,Zhu2015Machine}. This question is relevant for several nascent applications. One application is in understanding blackbox models such as deep nets. Often, observation of a blackbox model is limited to its predicted label $y=\theta^*(x)$ given input $x$. One way to interpret a blackbox model is to locally train an interpretable model with data points $S$ labeled by the blackbox model around the region of interest~\cite{ribeiro2016should}. We, however, ask for more: to reduce the size of the training set $S$ for the local learner \emph{while} making the learner approximate the blackbox better. The reduced training set itself also serves as a set of representative examples of local model behavior. Another application is in education. Imagine a teacher who has a teaching goal $\theta^*$. This is a reasonable assumption in practice: e.g., a geology teacher has knowledge of the actual decision boundaries between rock categories. However, the teacher is constrained to teach with a given textbook (or a set of courseware) $S$.
To the extent that the student is quantified mathematically, the teacher wants to select pages in the textbook with the guarantee that the student learns better from those pages than from gulping the whole book. But is this possible? The following example says yes. Consider learning a threshold classifier on the interval $[-1,1]$, with true threshold at $\theta^*=0$. Let $S$ have $n$ items drawn uniformly from the interval and labeled according to $\theta^*$. Let the learner be a hard-margin SVM, which places the estimated threshold in the middle of the innermost pair in $S$ with different labels: $\hat\theta_S = (x_- + x_+)/2$, where $x_-$ is the largest negative training item and $x_+$ the smallest positive training item in $S$. It is well known that $|\hat\theta_S-\theta^*|$ converges at a rate of $1/n$, the intuition being that the average space between adjacent items is $O(1/n)$. \begin{figure}[h] \centerline{\includegraphics[width=.5\textwidth]{symmetric.pdf}} \caption{The original training set $S$ with $n=6$ items (circles and stars; green=negative, purple=positive), and the most-symmetric training set (stars) the teacher selects.} \label{fig:symmetric} \end{figure} The teacher knows everything but cannot tell $\theta^*$ directly to the learner. Instead, it can select the \emph{most-symmetric pair} in $S$ about $\theta^*$ and give them to the learner as a two-item training set. We will prove later that the risk from the most-symmetric pair is $O(1/n^2)$; that is, learning from the selected subset surpasses learning from $S$. Thus we observe something interesting: the teacher can turn a larger training set $S$ into a smaller and better subset for the midpoint classifier. We call this phenomenon \textbf{super-teaching}. \section{Formal Definition of Super Teaching} \label{sec:def} Let $\setfont{Z}$ be the data space: for unsupervised learning $\setfont{Z}=\setfont{X}$, while for supervised learning $\setfont{Z}=\setfont{X} \times \setfont{Y}$.
Let $p_\setfont{Z}$ be the underlying distribution over $\setfont{Z}$. We take a function view of the learner: a learner $A$ is a function $A: \cup_{n=0}^\infty \setfont{Z}^n \mapsto \Theta$, where $\Theta$ is the learner's hypothesis space. The notation $\cup_{n=0}^\infty \setfont{Z}^n$ defines the ``set of (potentially non-$iid$) training sets'', namely multisets of any size whose elements are in $\setfont{Z}$. Given any training set $T \in \cup_{n=0}^\infty \setfont{Z}^n$, we assume $A$ returns a unique hypothesis $A(T) \triangleq \hat\theta_T \in \Theta$. The learner's risk $R(\theta)$ for $\theta\in\Theta$ is defined as: \begin{equation} R(\theta)=\E{p_\setfont{Z}}{\ell(\theta(x), y)}, \mbox{ or } R(\theta)= \|\theta-\theta^*\|_2. \label{eq:R} \end{equation} The former is for prediction tasks where $\ell()$ is a loss function and $\theta(x)$ denotes the prediction on $x$ made by model $\theta$; the latter is for parameter estimation where we assume a realizable model $p_\setfont{Z} = p_{\theta^*}$ for some $\theta^* \in \Theta$. We now introduce a clairvoyant teacher $B$ who has full knowledge of $p_\setfont{Z}, A, R$. The teacher is also given an $iid$ training set $S=\{z_1, \ldots, z_n\} \sim p_\setfont{Z}$. If the teacher teaches $S$ to $A$, the learner will incur a risk $R(A(S)) \triangleq R(\hat\theta_S)$. The teacher's goal is to judiciously select a subset $B(S) \subset S$ to act as a ``super teaching set'' for the learner so that $R(\hat\theta_{B(S)}) < R(\hat\theta_S)$. Of course, to do so the teacher must utilize her knowledge of the learning task, thus the subset is actually a function $B(S, p_\setfont{Z}, A, R)$. In particular, the teacher knows $p_\setfont{Z}$ already, and this sets our problem apart from machine learning. For readability we suppress these extra parameters in the rest of the paper. We formally define super teaching as follows. 
\begin{definition}[Super Teaching] \label{def:superteaching} $B$ is a super teacher for learner $A$ if~$\forall\delta>0, \exists N$ such that $\forall n\ge N$ \begin{equation} \P{S}{R(\hat\theta_{B(S)}) \le c_n R(\hat\theta_S)} > 1-\delta, \end{equation} where $S \stackrel{iid}{\sim} p_\setfont{Z}^n, B(S)\subset S$, and $c_n \le 1$ is a sequence we call the super teaching ratio. \end{definition} Obviously, $c_n=1$ can be trivially achieved by letting $B(S)=S$, so we are interested in small $c_n$. There are two fundamental questions: (1) Do super teachers provably exist? (2) How can one compute a super teaching set $B(S)$ in practice? We answer the first question positively by exhibiting super teaching on two learners: the maximum likelihood estimator for the mean of a Gaussian in section~\ref{sec:Gaussian}, and the 1D large-margin classifier in section~\ref{sec:midpoint}. Guarantees on super teaching for general learners remain future work. Nonetheless, empirically we can find a super teaching set for many general learners: we formulate the second question as mixed-integer nonlinear programming in section~\ref{sec:MINLP}, and empirical experiments in section~\ref{sec:exp} demonstrate that one can find a good $B(S)$ effectively. \section{Analysis on Super Teaching for the MLE of Gaussian mean} \label{sec:Gaussian} In this section, we present our first theoretical result on super teaching, when the learner $A_{MLE}$ is the maximum likelihood estimator (MLE) for the mean of a Gaussian. Let $\setfont{Z}=\setfont{X}=\setfont{R}$, $\Theta=\setfont{R}$, $p_\setfont{Z}(x)=\mathcal{N}(\theta^*, 1)$. Given a sample $S$ of size $n$ drawn from $p_\setfont{Z}$, the learner computes the MLE for the mean: $\hat \theta_S =A_{MLE}(S)= \frac{1}{n}\sum_{i=1}^n x_i$. We define the risk as $R(\hat \theta_S)=|\hat \theta_S-\theta^*|$.
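A small simulation (purely illustrative, not part of our proofs) shows the effect for this learner: handing $A_{MLE}$ the single sample point closest to $\theta^*$ already beats the full-sample mean, consistent with the $O(1/\sqrt{n})$ versus $O(1/n)$ rates discussed below.

```python
# Illustrative simulation (not from the paper): compare the full-sample MLE
# risk |mean(S) - theta*| against the risk when the teacher hands the learner
# only the single sample point closest to theta* (a size-1 teaching set).

import random

random.seed(0)
theta_star = 0.0
n = 1000       # training set size
trials = 200   # Monte Carlo repetitions

risk_full, risk_b1 = 0.0, 0.0
for _ in range(trials):
    S = [random.gauss(theta_star, 1.0) for _ in range(n)]
    risk_full += abs(sum(S) / n - theta_star)            # MLE on all of S
    risk_b1 += min(abs(x - theta_star) for x in S)       # best single item

print(f"mean risk, full sample S     : {risk_full / trials:.4f}")  # ~ n^(-1/2)
print(f"mean risk, best single item  : {risk_b1 / trials:.4f}")    # ~ n^(-1)
```

The averaged risk of the teacher's one-item set is an order of magnitude smaller than that of the full sample at $n=1000$, matching the intuition that the closest item lies $O(1/n)$ from $\theta^*$.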
The teacher we consider is the optimal $k$-subset teacher $B_k$, which uses the best subset of size $k$ to teach: \begin{equation} B_k(S) \in \ensuremath{\mbox{argmin}}_{T \subset S, |T|=k} R(\hat\theta_T). \end{equation} To build intuition, it is well-known that the risk of $A_{MLE}$ under $S$ is $O(1/\sqrt{n})$ because the variance under $n$ items shrinks like $1/n$. Now consider $k=1$. Since the teacher $B_1$ knows $\theta^*$, under our setting the best teaching strategy is for her to select the item in $S$ closest to $\theta^*$, which forms the singleton teaching set $B_1(S)$. One can show that with large probability this closest item is $O(1/n)$ away from $\theta^*$ (the central part of a Gaussian density is essentially uniform). Therefore, we already see a super teaching ratio of $c_n = n^{-\frac{1}{2}}$. More generally, our main result below shows that $B_k$ achieves a super teaching ratio $c_n = O(n^{-k+\frac{1}{2}})$: \begin{restatable}{theorem}{Amuthm} \label{thm:Amuthm} Let $B_k$ be the optimal $k$-subset teacher. $\forall \epsilon\in(0,\frac{2k-1}{4}), \forall\delta\in(0, 1)$, $\exists N(k, \epsilon, \delta)$ such that $\forall n\ge N(k, \epsilon, \delta)$, $\P{}{R(\hat \theta_{B_k(S)})\le c_nR(\hat \theta_S)}>1-\delta$, where $c_n=\frac{k^{k-\epsilon}}{\sqrt{k}}n^{-k+\frac{1}{2}+2\epsilon}$. \end{restatable} Toward proving the theorem, \footnote{Remark: we introduced an auxiliary variable $\epsilon$ which controls the implicit tradeoff between $c_n$, how much super teaching helps, and $N$, how soon super teaching takes effect. When $\epsilon\rightarrow 0$ the teaching ratio $c_n$ approaches $O(n^{-k+\frac{1}{2}})$, but as we will see $N(k, \epsilon, \delta)\rightarrow\infty$. 
Similarly, $k$ also affects the tradeoff: the teaching ratio is smaller as we enlarge $k$, but $N(k, \epsilon, \delta)$ increases.} we first recall the standard rate $R(\hat\theta_S) \approx n^{-\frac{1}{2}}$ if $A_{MLE}$ learns from the whole training set $S$: \begin{restatable}{proposition}{thmAsyGaussianPool} \label{thm:AsyGaussianPool} Let $S$ be an $n$-item $iid$ sample drawn from $ \mathcal{N}(\theta^*, 1)$. $\forall \epsilon>0$, $\forall\delta\in(0,1)$, $\exists N_1(\epsilon, \delta)$ such that $\forall n\ge N_1$, \begin{equation} \P{}{n^{-\frac{1}{2}-\epsilon}<R(\hat \theta_S)<n^{-\frac{1}{2}+\epsilon}}>1-\delta. \end{equation} \end{restatable} \begin{proof} $R(\hat \theta_S)=|\hat \theta_S-\theta^*|$ and $\hat \theta_S-\theta^*\sim \mathcal{N}(0, n^{-1})=\sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}$. Let $\alpha=n^{-\frac{1}{2}-\epsilon}$ and $\beta=n^{-\frac{1}{2}+\epsilon}$. We have \begin{equation} \begin{aligned} \label{lower} &\P{}{R(\hat\theta_S)\le\alpha}=2\int_{0}^\alpha \sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}dx\\ &<2\int_{0}^\alpha \sqrt{\frac{n}{2\pi}}dx=2\alpha\sqrt{\frac{n}{2\pi}}=\sqrt{\frac{2}{\pi}}n^{-\epsilon}, \end{aligned} \end{equation} \begin{equation} \label{upper} \begin{aligned} &\P{}{R(\hat\theta_S)\ge\beta}=2\int_{\beta}^\infty \sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}dx\\ &<2\int_{\beta}^\infty \frac{x}{\beta}\sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}dx=\int_{\beta^2}^\infty \frac{1}{\beta}\sqrt{\frac{n}{2\pi}}e^{-\frac{ny}{2}}dy\\ &=\frac{1}{\beta}\sqrt{\frac{2}{n\pi}}e^{-\frac{n\beta^2}{2}}<\frac{1}{\beta}\sqrt{\frac{2}{n\pi}}=\sqrt{\frac{2}{\pi}}n^{-\epsilon}. \end{aligned} \end{equation} Thus $\P{}{\alpha< R(\hat\theta_S)<\beta} =1-\P{}{R(\hat\theta_S)\le \alpha}-\P{}{R(\hat\theta_S)\ge \beta} >1-2\sqrt{\frac{2}{\pi}}n^{-\epsilon}$. Let $N_1(\epsilon, \delta)=(\frac{1}{\delta}\sqrt{\frac{8}{\pi}})^{\frac{1}{\epsilon}}$, then $\forall n\ge N_1$, $\P{}{\alpha< R(\hat\theta_S)<\beta}>1-\delta$. 
\end{proof} We now work out the risk of $A_{MLE}$ if it learns from the optimal $k$-subset teacher $B_k$. \thmref{thm:Gaussian} says that this risk is very small and sharply concentrated around $R(\hat\theta_{B_k(S)}) \approx n^{-k}$. To prove~\thmref{thm:Gaussian}, we first give the following lemma. \begin{lemma} Denote $C^n_k=\begin{pmatrix} n\\k\end{pmatrix}$. Let the index set $I=\{1,2,...n\}$ where $n\ge 4k$. Consider all subsets of size $k$, then there are at most $4^kC^{2k}_kC^n_{2k-1}$ ordered pairs of subsets that are overlapping but not identical. \label{lem:intersect} \end{lemma} \begin{proof} Let $I_1$ and $I_2$ be two subsets of size $k$ and they overlap on $t$ indexes. Then the total number of distinct indexes that appear in $I_1\cup I_2$ is $2k-t$. There are $C^n_{2k-t}$ ways of choosing such $2k-t$ indexes. Next we determine which $t$ indexes are overlapping ones. We have $C^{2k-t}_t$ ways of choosing such $t$ indexes. Finally we have $C^{2k-2t}_{k-t}$ ways of selecting half of the non-overlapping indexes and attribute them to $I_1$. Thus in total we have $O_t=C^n_{2k-t}C^{2k-t}_tC^{2k-2t}_{k-t}$ ordered pairs of subsets that overlap on $t$ indexes. By our assumption $n\ge4k$ we have $C^n_{2k-t}\le C^n_{2k-1}$. Also note that $C^{2k-t}_t<C^{2k}_t$ and $C^{2k-2t}_{k-t}<C^{2k}_k$, thus $O_t<C^n_{2k-1}C^{2k}_tC^{2k}_k$. Therefore the total number of ordered pairs of subsets that are overlapping but not identical is \begin{equation} \begin{aligned} &O=\sum_{t=1}^{k-1}O_t<\sum_{t=1}^{k-1}C^n_{2k-1}C^{2k}_tC^{2k}_k\\ &<\sum_{t=0}^{2k}C^n_{2k-1}C^{2k}_tC^{2k}_k=4^kC^{2k}_kC^n_{2k-1}. \end{aligned} \end{equation} \end{proof} Now we prove the risk of the optimal $k$-subset teacher. \begin{restatable}{theorem}{thmAsyGaussian} \label{thm:Gaussian} Let $B_k$ be the optimal $k$-subset teacher. Let $S$ be an $n$-item $iid$ sample drawn from $\mathcal{N}(\theta^*, 1)$. 
$\forall \epsilon\in(0, k), \forall \delta\in(0,1)$, $\exists N_2(k, \epsilon, \delta)$ such that $ \forall n\ge N_2$, \begin{equation} \P{}{\frac{1}{\sqrt{k}}(\frac{k}{n})^{k+\epsilon}<R(\hat \theta_{B_k(S)})<\frac{1}{\sqrt{k}}(\frac{k}{n})^{k-\epsilon}}>1-\delta. \end{equation} \end{restatable} \begin{proof} Let $I\subseteq\{1,2,...,n\}$ and $|I|=k$, define $\gamma_I=\frac{1}{\sqrt{k}}\sum_{i\in I}(x_i-\theta^*)$. Let $S_I$ denote the subset indexed by $I$. Note that $\hat\theta_{S_I}=\frac{1}{k}\sum_{i\in I}x_i$ and $R(\hat\theta_{S_I})=|\hat\theta_{S_I}-\theta^*|=|\frac{1}{k}\sum_{i\in I}x_i-\theta^*|=\frac{1}{\sqrt{k}}|\gamma_I|$. Also note that $R(\hat \theta_{B_k(S)})=\inf_{I}R(\hat\theta_{S_I})=\frac{1}{\sqrt{k}}\inf_{I}|\gamma_I|$. Thus to prove \thmref{thm:Gaussian} it suffices to prove \begin{equation} \P{}{(\frac{k}{n})^{k+\epsilon}<\inf_{I}|\gamma_I|<(\frac{k}{n})^{k-\epsilon}}\rightarrow1. \end{equation} Let $\alpha=(\frac{k}{n})^{k+\epsilon}$ and $\beta=(\frac{k}{n})^{k-\epsilon}$. We first prove the lower bound. Note that $\gamma_I$ has the same distribution for all $I$. Thus by the union bound, \begin{equation} \begin{aligned} \P{}{\inf_{I}|\gamma_I|\le \alpha}=\P{}{\exists I: |\gamma_I|\le\alpha}\le C^n_k\P{}{|\gamma_{I_1}|\le\alpha}, \end{aligned} \end{equation} where $I_1=\{1,2,...,k\}$. Since $\gamma_{I_1}\sim \mathcal{N}(0, 1)$, we have \begin{equation} \P{}{|\gamma_{I_1}|\le\alpha}=\int_{-\alpha}^{\alpha}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx<\sqrt{\frac{2}{\pi}}\alpha. \end{equation} Note that $C^n_k\le (\frac{en}{k})^k$. Thus, \begin{equation} \P{}{\inf_{I}|\gamma_I|\le \alpha}<(\frac{en}{k})^k\sqrt{\frac{2}{\pi}}\alpha=\sqrt{\frac{2}{\pi}}e^k(\frac{k}{n})^\epsilon\rightarrow 0. \end{equation} Thus $\exists N_2^{'}(k,\epsilon, \delta)$ such that $\forall n\ge N_2^{'}$, \begin{equation}\label{lower:gaussian} \P{}{\inf_{I}|\gamma_I|\le \alpha}<\frac{\delta}{2}. 
\end{equation} To show the upper bound, we define $t_I=\ind{|\gamma_I|<\beta}$, where $\ind{}$ is the indicator function. Let $T=\sum_{I}t_I$. Then it suffices to show $\lim_{n\rightarrow \infty}\P{}{T=0}=0$. Note that \begin{equation} \label{evbound} \begin{aligned} &\P{}{T=0}=\P{}{T-\E{}{T}=-\E{}{T}}\\ &\le\P{}{(T-\E{}{T})^2\ge(\E{}{T})^2}\le\frac{\V{T}}{(\E{}{T})^2}, \end{aligned} \end{equation} where the last inequality follows from the Markov inequality. Now we lower bound $\E{}{T}$. \begin{equation} \begin{aligned} &\E{}{T}=\E{}{\sum_{I}t_I}=\sum_{I}\E{}{t_I}=C^{n}_k\E{}{t_{I_1}}\\ &=C^{n}_k\P{}{|\gamma_{I_1}|<\beta}=C^{n}_k\int_{-\beta}^{\beta}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx.\\ \end{aligned} \end{equation} Note that $\epsilon<k$, thus $\beta<1$. For $x\in(-\beta,\beta)$, $\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}>\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}}=\frac{1}{\sqrt{2\pi e}}$. Also note that $C^{n}_k>(\frac{n}{k})^k$, thus \begin{equation} \label{ebound} \E{}{T}>(\frac{n}{k})^k\frac{1}{\sqrt{2\pi e}}2\beta=\sqrt{\frac{2}{\pi e}}(\frac{n}{k})^\epsilon. \end{equation} Now we upper bound $\V{T}$. \begin{equation} \V{T}=\sum_{I, I^{'}}\Cov{t_I}{t_{I^{'}}}=\sum_{I, I^{'}, |I\cap I^{'}|\ge1}\Cov{t_I}{t_{I^{'}}}. \end{equation} Note that for Bernoulli random variable $t_I$, $\V{t_I}\le \E{}{t_I}$. Thus if $I=I^{'}$, then \begin{equation} \begin{aligned} &\Cov{t_I}{t_{I^{'}}}=\V{t_I}\le \E{}{t_I}=\P{}{|\gamma_I|<\beta}\\ &=\int_{-\beta}^{\beta}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx<\frac{1}{\sqrt{2\pi}}2\beta=\sqrt{\frac{2}{\pi}}(\frac{k}{n})^{k-\epsilon}. \end{aligned} \end{equation} Otherwise $1 \le |I\cap I^{'}| \le k-1$, that is, $I$ and $I^{'}$ overlap but not identical, then \begin{equation} \begin{aligned} \Cov{t_I}{t_{I^{'}}}&=\E{}{t_It_{I^{'}}}-\E{}{t_I}\E{}{t_{I^{'}}}\le \E{}{t_It_{I^{'}}}\\ &=\P{}{|\gamma_I|<\beta, |\gamma_{I^{'}}|<\beta}. 
\end{aligned} \end{equation} Note that $\gamma_I$ and $\gamma_{I^{'}}$ are jointly Gaussian with covariance \begin{equation} \begin{aligned} \Cov{\gamma_I}{\gamma_{I^{'}}}&=\frac{1}{k}\sum_{i\in I, i^{'}\in I^{'}}\Cov{x_i-\theta^*}{x_{i^{'}}-\theta^*}\\ &=\frac{1}{k}\sum_{i\in I,i^{'}\in I^{'},i=i^{'}}1=\frac{|I\cap I^{'}|}{k}\triangleq\rho, \end{aligned} \end{equation} where $\frac{1}{k}\le\rho\le\frac{k-1}{k}$. The joint PDF of two standard normal distributions $x, y$ with covariance $\rho$ is \begin{equation} f(x,y)=\frac{1}{2\pi\sqrt{1-\rho^2}}e^{-\frac{x^2-2\rho xy+y^2}{2(1-\rho^2)}}. \end{equation} Note that $f(x, y)\le\frac{1}{2\pi\sqrt{1-\rho^2}}$, thus \begin{equation} \begin{aligned} &\P{}{|\gamma_I|<\beta, |\gamma_{I^{'}}|<\beta}\le\iint\displaylimits_{|x|<\beta, |y|<\beta}\frac{1}{2\pi\sqrt{1-\rho^2}}dxdy\\ &=\frac{1}{2\pi\sqrt{1-\rho^2}}(2\beta)^2=\frac{2}{\pi\sqrt{1-\rho^2}}\beta^2. \end{aligned} \end{equation} Since $\frac{2}{\pi\sqrt{1-\rho^2}}\le\frac{2}{\pi\sqrt{1-(\frac{k-1}{k})^2}}\le\frac{2}{\pi\sqrt{\frac{k}{k^2}}}=\frac{2\sqrt{k}}{\pi}$, thus \begin{equation} \P{}{|\gamma_I|<\beta, |\gamma_{I^{'}}|<\beta}\le\frac{2\sqrt{k}\beta^2}{\pi}=\frac{2\sqrt{k}}{\pi}(\frac{k}{n})^{2k-2\epsilon}. \end{equation} According to \lemref{lem:intersect}, there are at most $4^kC^{2k}_kC^n_{2k-1}$ pairs of $I$ and $I^{'}$ such that $1\le|I\cap I^{'}|\le k-1$. 
Thus, \begin{equation}\label{vbound} \begin{aligned} &\V{T}=\sum_{I}\V{t_I}+\sum_{I\neq I^{'}, |I\cap I^{'}|\ge1}\Cov{t_I}{t_{I^{'}}}\\ &\le C^n_k\sqrt{\frac{2}{\pi}}(\frac{k}{n})^{k-\epsilon}+4^kC^{2k}_kC^n_{2k-1}\frac{2\sqrt{k}}{\pi}(\frac{k}{n})^{2k-2\epsilon}\\ &\le \sqrt{\frac{2}{\pi}}(\frac{en}{k})^k(\frac{k}{n})^{k-\epsilon}+4^kC^{2k}_k(\frac{en}{2k-1})^{2k-1}\frac{2\sqrt{k}}{\pi}(\frac{k}{n})^{2k-2\epsilon}\\ &=\sqrt{\frac{2}{\pi}}e^k(\frac{n}{k})^\epsilon+\frac{4\sqrt{k}}{\pi}C^{2k}_k(\frac{2ek}{2k-1})^{2k-1}(\frac{n}{k})^{2\epsilon-1}.\\ \end{aligned} \end{equation} Plugging~\eqref{vbound} and~\eqref{ebound} into~\eqref{evbound}, we have \begin{equation} \begin{aligned} &\P{}{T=0}\le a_1(k)(\frac{n}{k})^{-\epsilon}+a_2(k)(\frac{n}{k})^{-1}\rightarrow 0,\\ \end{aligned} \end{equation} where $a_1(k)=\sqrt{\frac{\pi}{2}}e^{k+1}$ and $a_2(k)=2\sqrt{k}eC^{2k}_k(\frac{2ek}{2k-1})^{2k-1}$. Thus $\exists N_2^{''}(k, \epsilon,\delta)$ such that $\forall n\ge N_2^{''}$, \begin{equation}\label{upper:gaussian} \P{}{\inf_{I}|\gamma_I|\ge \beta}<\frac{\delta}{2}. \end{equation} Let $N_2(k, \epsilon, \delta)=\max\{N_2^{'}(k, \epsilon,\delta), N_2^{''}(k, \epsilon,\delta)\}$; combining~\eqref{lower:gaussian} and~\eqref{upper:gaussian} concludes the proof. \end{proof} Now we can conclude super-teaching by comparing~\thmref{thm:Gaussian} and~\propref{thm:AsyGaussianPool}: \begin{proof}[\textbf{Proof of~\thmref{thm:Amuthm}}] Let $\alpha=\frac{1}{\sqrt{k}}(\frac{k}{n})^{k-\epsilon}$ and $\beta=n^{-\frac{1}{2}-\epsilon}$. By \propref{thm:AsyGaussianPool}, $\forall \epsilon\in(0, \frac{2k-1}{4}),\forall\delta\in(0,1)$, $\exists N_1(\epsilon,\frac{\delta}{2})$ such that $\forall n\ge N_1$, $\P{}{R(\hat \theta_S)>\beta}> 1-\frac{\delta}{2}$. By \thmref{thm:Gaussian}, $\exists N_2(k, \epsilon,\frac{\delta}{2})$ such that $\forall n\ge N_2$, $\P{}{R(\hat \theta_{B_k(S)})<\alpha}>1-\frac{\delta}{2}$. Let $c_n=\frac{k^{k-\epsilon}}{\sqrt{k}}n^{-k+\frac{1}{2}+2\epsilon}$.
Since $\epsilon<\frac{2k-1}{4}$, $c_n$ is a decreasing sequence in $n$ with $\lim_{n\rightarrow\infty}c_n=0$. Let $N_3(k, \epsilon)$ be the first integer such that $c_{N_3}\le1$. Let $N(k, \epsilon, \delta)=\max\{N_1(\epsilon,\frac{\delta}{2}), N_2(k, \epsilon,\frac{\delta}{2}), N_3(k, \epsilon)\}$. By a union bound $\forall n\ge N(k, \epsilon, \delta)$, $\P{}{R(\hat\theta_{B_k(S)})<\alpha, R(\hat\theta_S)\ge \beta}>1-\delta$. Since $\frac{\alpha}{\beta}=c_n$, we have $\P{}{R(\hat\theta_{B_k(S)})\le c_n R(\hat\theta_S)}>1-\delta$, where $c_n\le c_{N_3}\le1$. \end{proof} \section{Analysis on Super Teaching for 1D Large Margin Classifier} \label{sec:midpoint} We present our second theoretical result, this time on teaching a 1D large margin classifier. Let $\setfont{X}=[-1,1]$, $\setfont{Y}=\{-1,1\}$, $\Theta=[-1,1]$, $\theta^*=0$, $p_{\setfont{Z}}(x, y)=p_{\setfont{Z}}(x)p_{\setfont{Z}}(y\mid x)$ where $p_{\setfont{Z}}(x)=U(\setfont{X})$ and $p_{\setfont{Z}}(y=1\mid x)=\ind{x\ge\theta^*}$. Let $x_- \triangleq \max_{i:y_i=-1} x_i$ and $x_+ \triangleq \min_{i:y_i=+1} x_i$ be the inner-most pair of opposite labels in $S$ if they exist. We formally define the large margin classifier $A_{lm}(S)$ as \begin{equation} \hat\theta_S = A_{lm}(S)=\left\{ \begin{array}{ll} (x_- + x_+)/2 & \mbox{ if $x_-$, $x_+$ exist} \\ -1 & \mbox{ if $S$ all positive} \\ 1 & \mbox{ if $S$ all negative.} \end{array} \right. \end{equation} The risk is defined as $R(\hat\theta_S)=|\hat\theta_S-\theta^*|=|\hat\theta_S|$. The teacher we consider is the most symmetric teacher, who selects the most symmetric pair about $\theta^*$ in $S$ and gives it to the learner. We define the most-symmetric teacher $B_{ms}$: \begin{equation} B_{ms}(S) = \left\{ \begin{array}{ll} \{(s_-,-1), (s_+,1)\} & \mbox{ if } s_-,s_+ \mbox{ exist}, \\ \{(x_1,y_1)\} & \mbox{ otherwise.} \end{array} \right. 
\label{eq:Bms} \end{equation} where $(s_-,s_+) \in \ensuremath{\mbox{argmin}}_{(x,-1),(x',1) \in S} |\frac{x+x'}{2}-\theta^*|$. Our main result shows that learning from the whole set $S$ achieves the well-known $O(1/n)$ risk, but surprisingly $B_{ms}$ achieves $O(1/n^2)$ risk, yielding a super teaching ratio of approximately $c_n=O(n^{-1})$. \begin{restatable}{theorem}{Ampthm} \label{thm:Ampthm} Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$. Then $\forall \delta\in(0,1)$, $\exists N(\delta)$ such that $\forall n\ge N$, $\P{}{R(\hat\theta_{B_{ms}(S)})\le c_nR(\hat\theta_S)}>1-\delta$, where $c_n=\frac{32}{n\delta}\ln\frac{6}{\delta}$. \end{restatable} Before proving~\thmref{thm:Ampthm}, we first show that $B_{ms}$ is an optimal teacher for the large margin classifier. \begin{restatable}{proposition}{propBms} $B_{ms}$ is an optimal teacher for the large margin classifier $\hat\theta_S$. \end{restatable} \begin{proof} We show $R(\hat\theta_{B_{ms}(S)})\le R(\hat\theta_{B(S)})$ for any $B$ and any $S$. If $|B_{ms}(S)|=1$, then $S$ is either all positive or all negative. In both cases $R(\hat\theta_{B(S)})=1$ for any $B$ by definition. Thus $R(\hat\theta_{B_{ms}(S)})\le R(\hat\theta_{B(S)})$. Otherwise $|B_{ms}(S)|=2$; then if $B(S)$ is all positive or all negative, we have $R(\hat\theta_{B(S)})=1$ and thus $R(\hat\theta_{B_{ms}(S)})\le R(\hat\theta_{B(S)})$. Otherwise let $x^B_-, x^B_+$ be the innermost pair of $B(S)$. Since $x^B_-, x^B_+\in S$, by definition of $B_{ms}$, $R(\hat\theta_{B_{ms}(S)})=|\frac{s_-+s_+}{2}-\theta^*|\le |\frac{x^B_-+x^B_+}{2}-\theta^*|=R(\hat\theta_{B(S)})$. \end{proof} Now we show that learning on the whole $S$ incurs $O(n^{-1})$ risk. First, we give the following lemma for the exact tail probability of $R(\hat\theta_S)$.
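As a numerical sanity check of the $O(1/n)$ versus $O(1/n^2)$ gap, the learner and teacher above are easy to simulate. The sketch below mirrors the definitions of $A_{lm}$ and $B_{ms}$; the function names and the trial parameters are our own choices, not part of the formal analysis.

```python
import random

def A_lm(S):
    # Large margin classifier: midpoint of the innermost opposite-label pair.
    neg = [x for x, y in S if y == -1]
    pos = [x for x, y in S if y == +1]
    if not pos:
        return 1.0   # S all negative
    if not neg:
        return -1.0  # S all positive
    return (max(neg) + min(pos)) / 2.0

def B_ms(S):
    # Most-symmetric teacher: opposite-label pair whose midpoint is closest to 0.
    neg = [x for x, y in S if y == -1]
    pos = [x for x, y in S if y == +1]
    if not neg or not pos:
        return [S[0]]
    xm, xp = min(((a, b) for a in neg for b in pos),
                 key=lambda p: abs((p[0] + p[1]) / 2.0))
    return [(xm, -1), (xp, +1)]

random.seed(0)
n = 400
S = [(x, 1 if x >= 0 else -1) for x in (random.uniform(-1, 1) for _ in range(n))]
risk_full = abs(A_lm(S))          # O(1/n) in theory
risk_taught = abs(A_lm(B_ms(S)))  # O(1/n^2) in theory
print(risk_taught <= risk_full)   # True: B_ms is an optimal teacher
```

Averaging such trials while varying $n$ exhibits the two rates empirically.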
\begin{restatable}{lemma}{lemAmp} \label{lem:Amp} For the large margin classifier $\hat\theta_S$, we have \begin{equation}\label{eq:Amptail} \P{}{R(\hat\theta_S)>\epsilon}=\left\{ \begin{aligned} &(1-\epsilon)^n+(\epsilon)^n &&\text{ $0<\epsilon\le\frac{1}{2}$}\\ &(\frac{1}{2})^{n-1} &&\text{ $\frac{1}{2}<\epsilon<1$}\\ &0 &&\mbox{ $\epsilon=1$}. \end{aligned} \right. \end{equation} \end{restatable} The proof for~\lemref{lem:Amp} is in the appendix. Now we show that $R(\hat\theta_S)$ is $O(n^{-1})$. \begin{restatable}{theorem}{thmAmp} \label{thm:Amp} Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$. Then $\forall \delta\in(0,1)$ and $\forall n\ge2$, \begin{equation} \P{}{R(\hat\theta_S)>\frac{\delta}{n}}>1-\delta. \end{equation} \end{restatable} \begin{proof} According to~\lemref{lem:Amp}, for $\epsilon\le\frac{1}{2}$, we have \begin{equation}\label{lowerlem2} \P{}{R(\hat\theta_S)>\epsilon}>(1-\epsilon)^n>1-n\epsilon. \end{equation} Note that $n\ge2$, thus $\frac{\delta}{n}\le\frac{1}{2}$. Let $\epsilon=\frac{\delta}{n}$ in~\eqref{lowerlem2} we have \begin{equation} \begin{aligned} &\P{}{R(\hat\theta_S)>\frac{\delta}{n}}>1-n\frac{\delta}{n}=1-\delta. \end{aligned} \end{equation} \end{proof} Now we work out the risk of the most symmetric teacher $B_{ms}$. To bound the risk of $B_{ms}$ we need the following key lemma, which shows that the sample complexity with the teacher is $O(\epsilon^{-1/2})$. \begin{restatable}{lemma}{lemAmshighP} \label{lem:Ams_highP} Let $n=4m$, where $m$ is an integer. Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$. $\forall\epsilon>0, \forall\delta\in(0,1)$, $\exists \setfont{M}(\epsilon, \delta)=\max\{\frac{3e}{\ln4-1}\ln\frac{3}{\delta}, (\frac{1}{\epsilon}\ln\frac{3}{\delta})^{\frac{1}{2}}\}$ such that $\forall m\ge\setfont{M} (\epsilon, \delta)$, $\P{}{R(\hat\theta_{B_{ms}(S)})\le\epsilon}>1-\delta$. \end{restatable} \begin{proof} We give a proof sketch and the details are in the appendix. 
Let $S_1=\{x\mid (x, 1)\in S\}$ and $S_2=\{x\mid (x, -1)\in S\}$ denote the positive and negative items, respectively. Then we have $|S_1|+|S_2|=4m$. Define the event $E_1:\{|S_1|\ge m\land |S_2|\ge m\}$. Given that $m\ge \frac{3e}{\ln4-1}\ln\frac{3}{\delta}$, one can show $P(E_1)>1-\frac{\delta}{3}$. Since $|S_1|+|S_2|=4m$, either $|S_1|\ge2m$ or $|S_2|\ge2m$. Without loss of generality we assume $|S_1|\ge2m$. We then divide the interval $[0, 1]$ equally into $N=\lfloor m^2(\ln\frac{3}{\delta})^{-1} \rfloor$ segments. The length of each segment is $\frac{1}{N}=O(\frac{1}{m^2})$ as Figure~\ref{segments_copy} shows. \begin{figure}[H] \centering \includegraphics[width=3in,height=0.5in]{segments.pdf} \caption{Division of $[0, 1]$ into $N$ equal segments} \label{segments_copy} \end{figure} Let $N_o$ be the number of segments that are occupied by the points in $S_1$. Note that $N_o$ is a random variable. Let $E_2$ be the event that $N_o\ge m$. Then one can show $P(E_2)>1-\frac{\delta}{3}$. By the union bound, we have $P(E_1, E_2)>1-\frac{2\delta}{3}$. Let $E_3$ be the following event: there exists a point $x_2$ in $S_2$ such that $-x_2$, the flipped point, lies in the same segment as some point $x_1$ in $S_1$. One can show that $P(E_3\mid E_1, E_2)>1-\frac{\delta}{3}$. Thus $P(E_3)\ge P(E_1, E_2, E_3)=P(E_3\mid E_1, E_2)P(E_1, E_2)\ge (1-\frac{\delta}{3})(1-\frac{2\delta}{3})> 1-\delta$. If $E_3$ happens, then $|x_1+x_2|=|x_1-(-x_2)|\le\frac{1}{N}$. Note that $m\ge(\frac{1}{\epsilon}\ln\frac{3}{\delta})^{\frac{1}{2}}$ and $N=\lfloor m^2(\ln\frac{3}{\delta})^{-1} \rfloor\ge \frac{m^2}{2}(\ln\frac{3}{\delta})^{-1} $, thus $\frac{1}{N}\le \frac{2}{m^2}\ln\frac{3}{\delta}\le2\epsilon$. Therefore $R(\hat \theta_{B_{ms}(S)})=|\frac{s_-+s_+}{2}|\le|\frac{x_1+x_2}{2}|\le\epsilon$. \end{proof} Rewriting $\epsilon$ in~\lemref{lem:Ams_highP} as a function of $n$, we have the following theorem.
\begin{restatable}{theorem}{thmAmsRes} \label{thm:Ams} Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$, then $\exists N_1(\delta)=\frac{12e}{\ln4-1}\ln\frac{3}{\delta}$ such that $\forall n\ge N_1$, \begin{equation} \P{}{R(\hat\theta_{B_{ms}(S)})\le\frac{16}{n^2}\ln\frac{3}{\delta}}>1-\delta. \end{equation} \end{restatable} \begin{proof} Note that if $n\ge N_1(\delta)=\frac{12e}{\ln4-1}\ln\frac{3}{\delta}$, then $m=\frac{n}{4}\ge\frac{3e}{\ln4-1}\ln\frac{3}{\delta}$, thus the minimum $\epsilon$ that satisfies $m\ge\setfont{M} (\epsilon, \delta)$ is $\frac{1}{m^2}\ln\frac{3}{\delta}=\frac{16}{n^2}\ln\frac{3}{\delta}$. \end{proof} Now we can conclude super teaching: \begin{proof}[\textbf{Proof of~\thmref{thm:Ampthm}}] According to~\thmref{thm:Ams}, $\exists N_1(\frac{\delta}{2})$ such that $\forall n\ge N_1$, $\P{}{R(\hat\theta_{B_{ms}(S)})\le\frac{16}{n^2}\ln\frac{6}{\delta}}>1-\frac{\delta}{2}$. Note that $N_1\ge2$, thus according to~\thmref{thm:Amp}, $\forall n\ge N_1$, $\P{}{R(\hat\theta_S)>\frac{\delta}{2n}}>1-\frac{\delta}{2}$. Let $c_n=\frac{32}{n\delta}\ln\frac{6}{\delta}$ and $N_2(\delta)=\frac{32}{\delta}\ln\frac{6}{\delta}$ so that $c_{N_2}=1$. Let $N(\delta)=\max\{N_1(\frac{\delta}{2}), N_2(\delta)\}$. By the union bound, $\forall n\ge N$, with probability at least $1-\delta$, we have both $R(\hat\theta_S)>\frac{\delta}{2n}$ and $R(\hat\theta_{B_{ms}(S)})\le\frac{16}{n^2}\ln\frac{6}{\delta}$, which gives $\P{}{R(\hat\theta_{B_{ms}(S)})\le c_nR(\hat\theta_S)}>1-\delta$, where $c_n\le c_{N_2}=1$. \end{proof} \section{An MINLP Algorithm for Super Teaching} \label{sec:MINLP} Although the problem of proving super teaching ratios for a specific learner is interesting, we now focus on an algorithm to find a super teaching set for general learners \emph{given} a training set $S$. That is, we find a subset $B(S) \subset S$ so that $R(\hat\theta_{B(S)}) < R(\hat\theta_S)$. We start by formulating super teaching as a subset selection problem.
To this end, we introduce binary indicator variables $b_1, \ldots, b_n$, where $b_i=1$ means $z_i \in S$ is included in the subset. We consider learners $A$ that can be defined via convex empirical risk minimization: \begin{equation}\label{learner} A(S) \triangleq \ensuremath{\mbox{argmin}}_{\theta \in \Theta} \sum_{i=1}^n \tilde\ell(\theta, z_i) + \frac{\lambda}{2} \|\theta\|^2. \end{equation} For simplicity we assume there is a unique global minimizer, which is returned by $\ensuremath{\mbox{argmin}}$. Note that we use $\tilde\ell$ in~\eqref{learner} to denote the (surrogate) convex loss used by $A$ in performing empirical risk minimization. For example, $\tilde\ell$ may be the negative log likelihood for logistic regression. $\tilde\ell$ is potentially different from $\ell$ (e.g. the 0-1 loss) used by the teacher to define the teaching risk $R$ in~\eqref{eq:R}. We formulate super teaching as the following bilevel combinatorial optimization problem: \begin{eqnarray} &&\min_{b\in\{0,1\}^n,\hat\theta\in\Theta} R(\hat\theta)\\ \mbox{s.t. } && \hat\theta = \ensuremath{\mbox{argmin}}_{\theta \in \Theta} \sum_{i=1}^n b_i \tilde\ell(\theta, z_i) + \frac{\lambda}{2} \|\theta\|^2\label{eq:ML}. \end{eqnarray} Under mild conditions, we may replace the lower-level optimization problem (i.e. the machine learning problem~\eqref{eq:ML}) by its first-order optimality (KKT) conditions: \begin{eqnarray} \min_{b\in\{0,1\}^n,\hat\theta\in\Theta} &&R(\hat\theta) \label{eq:MINLP}\\ \mbox{s.t. } && \sum_{i=1}^n b_i \nabla_\theta \tilde\ell(\hat\theta, z_i) + {\lambda}\hat\theta = 0. \nonumber \end{eqnarray} This reduces the bilevel problem to a single-level one, but the constraint is nonlinear in general, leading to a mixed-integer nonlinear program (MINLP), for which effective solvers exist. We use the MINLP solver in NEOS~\cite{czyzyk1998neos}.
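On toy instances, the subset-selection formulation can be sanity-checked by exhaustively enumerating $b \in \{0,1\}^n$ instead of invoking a MINLP solver. The sketch below does this for one-dimensional ridge regression, where the KKT constraint solves in closed form; function names and parameters are our own illustrations, not the paper's experimental setup.

```python
import itertools
import numpy as np

def ridge_fit(X, y, lam=0.1):
    # Solve the KKT condition lam*theta + 2*sum_i x_i(x_i'theta - y_i) = 0,
    # i.e. (2 X'X + lam I) theta = 2 X'y.
    d = X.shape[1]
    return np.linalg.solve(2 * X.T @ X + lam * np.eye(d), 2 * X.T @ y)

rng = np.random.default_rng(0)
theta_star = np.array([1.0])
n = 8
X = rng.normal(size=(n, 1))
y = X @ theta_star + 0.1 * rng.normal(size=n)

risk_full = np.linalg.norm(ridge_fit(X, y) - theta_star)
# Enumerate all nonempty subsets in place of a MINLP solver.
risk_best = min(
    np.linalg.norm(ridge_fit(X[list(I)], y[list(I)]) - theta_star)
    for r in range(1, n + 1)
    for I in itertools.combinations(range(n), r)
)
print(risk_best <= risk_full)  # True: the full set is one of the candidates
```

Exhaustive search scales as $2^n$, which is exactly why the MINLP formulation is needed beyond toy sizes.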
\section{Simulations} \label{sec:exp} We now apply the framework in Section~\ref{sec:MINLP} to logistic regression and ridge regression, and show that the solver indeed selects a super-teaching subset that is far better than the original training set $S$. \subsection{Teaching Logistic Regression $A_{lr}$} Let $\setfont{X}=\setfont{R}^d$, $\Theta=\setfont{R}^d$, $\theta^* = (\frac{1}{\sqrt{d}},..., \frac{1}{\sqrt{d}})$, $p_{\setfont{Z}}(x)=\mathcal{N}(0, I)$. Let $p_{\setfont{Z}}(y=1\mid x)=\ind{x^\top\theta^*>0}$, so that the label is deterministic given $x$. Logistic regression estimates $\hat\theta_S=A_{lr}(S)$ with~\eqref{learner}, where $\lambda=0.1$ and $\tilde\ell(z_i)=\log (1+\exp(-y_ix_i^\top\theta))$. In contrast, the teacher's risk is defined to be the expected 0-1 loss: $R(\hat\theta)=\E{p_{\setfont{Z}}}{\ind{\hat\theta(x)\neq y}}$, where $\hat\theta(x)$ is the label of $x$ predicted by $\hat\theta$. Since $p_{\setfont{Z}}$ is symmetric about the origin, the risk can be rewritten in terms of the angle between $\hat\theta$ and $\theta^*$: $R(\hat\theta)=\arccos(\frac{\hat\theta^\top\theta^*}{||\hat\theta||\cdot||\theta^*||})/\pi$. Instantiating~\eqref{eq:MINLP} we have \begin{eqnarray} \min_{b\in\{0,1\}^n,\hat\theta\in\setfont{R}^d} &&\arccos(\frac{\hat\theta^\top\theta^*}{||\hat\theta||\cdot||\theta^*||})/\pi \label{eq:Logistic}\\ \mbox{s.t. } && \lambda\hat\theta-\sum_{i=1}^n \frac{b_iy_ix_i}{1+\exp(y_ix_i^\top\hat\theta)} = 0. \nonumber \end{eqnarray} We run experiments to study the effectiveness and scalability of the NEOS MINLP solver on~\eqref{eq:Logistic}, specifically with respect to the training set size $n=|S|$ and dimension $d$. In the first set of experiments we fix $d=2$ and vary $n=16, 64, 256$ and $1024$. For each $n$ we run 10 trials. In each trial we draw an $n$-item $iid$ sample $S \sim p_{\setfont{Z}}$ and call the solver on~\eqref{eq:Logistic}. The solver's solution to $b_1 \ldots b_n$ indicates the super teaching set $B(S)$.
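The angle-based risk and the regularized logistic learner are straightforward to reproduce. In the sketch below, the plain gradient-descent optimizer, its step size, and the iteration count are our own choices for illustration; the paper itself solves the learner only implicitly through the KKT constraint.

```python
import numpy as np

def angle_risk(theta, theta_star):
    # R(theta) = arccos(<theta, theta*> / (||theta|| ||theta*||)) / pi
    c = theta @ theta_star / (np.linalg.norm(theta) * np.linalg.norm(theta_star))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

def logistic_fit(X, y, lam=0.1, steps=3000, lr=0.01):
    # Minimize sum_i log(1 + exp(-y_i x_i'theta)) + lam/2 ||theta||^2
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        m = np.clip(y * (X @ theta), -30, 30)        # margins, clipped for stability
        grad = -(y / (1.0 + np.exp(m))) @ X + lam * theta
        theta -= lr * grad
    return theta

rng = np.random.default_rng(1)
d, n = 2, 64
theta_star = np.ones(d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = np.where(X @ theta_star > 0, 1.0, -1.0)
theta_hat = logistic_fit(X, y)
risk = angle_risk(theta_hat, theta_star)
print(risk)  # small: the direction of theta* is nearly recovered
```

With such a fit in hand, the empirical super teaching ratio $\hat c_n$ is just the ratio of `angle_risk` evaluated on the subset-trained and full-sample models.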
We then compute an empirical version of the super teaching ratio: $$\hat c_n=R(\hat\theta_{B(S)})/R(\hat\theta_S).$$ \tabcolsep=0.09cm \begin{table}[ht] \small \centering \begin{tabular}{ |c|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c| }{Logistic Regression} & \multicolumn{3}{ |c| }{Ridge Regression} \\ \hline $n=|S|$ &$\hat c_n $ & $|B(S)|/n$ & time (s) & $\hat c_n $ & $|B(S)|/n$ &time (s)\\ \hline 16 & 8.5e-4&0.50&3.4e-1 &7.8e-3&0.50&6.3e-1\\ 64 &1.3e-3&0.69&3.5e+0 &7.5e-3&0.70&5.8e+0\\ 256 &6.3e-3&0.67&6.0e+1 &5.6e-3&0.84&1.4e+2 \\ 1024 &1.3e-2&0.86&1.4e+3 &4.1e-3&0.92& 3.3e+3\\ \hline \end{tabular} \caption{Super teaching as $n$ changes.}\label{Ep:empc} \end{table} \begin{table}[ht] \small \centering \begin{tabular}{ |c|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c| }{Logistic Regression} & \multicolumn{3}{ |c| }{Ridge Regression} \\ \hline $d$ &$\hat c_n$ & $|B(S)|/n$ & time (s) & $\hat c_n $ & $|B(S)|/n$ &time (s)\\ \hline 2 & 3.1e-3 &0.67 &5.4e-1 & 3.3e-3 &0.55 &6.6e+0\\ 4 & 2.4e-3 &0.44&8.5e+1 & 7.2e-3 &0.53 &5.8e+1 \\ 8 & 1.8e-1&0.39 &4.1e+0 & 1.5e-1 &0.47 &6.0e+0 \\ 16 & 5.6e-1&0.42 &5.1e+0 & 4.3e-1 &0.59 &9.3e+0 \\ 32 & 8.2e-1&0.58 &1.0e+1 & 6.4e-1 &0.86 &3.0e+0 \\ \hline \end{tabular} \caption{Super teaching as $d$ changes.}\label{Ep:empd} \end{table} In the left half of Table~\ref{Ep:empc} we report the median of the following quantities over 10 trials: $\hat c_n$, the fraction of the training items selected for super teaching $|B(S)|/n$, and the NEOS server running time. The main result is that $\hat c_n\ll1$ for all $n$, which means the solver indeed selects a super-teaching set $B(S)$ that is far better than the original $iid$ training set $S$. Therefore, MINLP is a valid algorithm for finding a super teaching set. Second, we note that the solver tends to select a large subset since the median $|B(S)|/n \ge 1/2$. 
This is interesting as it is known that when $S$ is dense, one can select extremely sparse super teaching sets, as small as a few items, to teach effectively~\cite{JMLR:v17:15-630}. Understanding the different regimes remains future work. Finally, the running time grows fast with $n$. For example, when $n=1024$ it takes around half an hour to solve~\eqref{eq:Logistic}. Future work needs to address this bottleneck in applying MINLP to large problems. In the second set of experiments we fix $n=32$ and vary $d = 2, 4, 8, 16, 32$. The left half of Table~\ref{Ep:empd} shows the results. The empirical teaching ratio $\hat c_n$ is still below 1 in all cases, showing super teaching. But as the dimension of the problem increases, $\hat c_n$ deteriorates toward 1. Nonetheless, even when $d=n$ we still see a median super teaching ratio of 0.82; the corresponding super teaching set $B(S)$ contains only 58\% as many training items as there are dimensions. It is interesting that the MINLP algorithm intentionally created a ``high dimensional'' learning problem (in the sense that the dimension $d$ exceeds the number of selected training items $|B(S)|$) to achieve better teaching, knowing that the learner $A_{lr}$ is regularized. The running time does not change dramatically. \subsection{Teaching Ridge Regression $A_{rr}$} Let $\setfont{X}=\setfont{R}^d$, $\Theta=\setfont{R}^d$, $\theta^* = (\frac{1}{\sqrt{d}},..., \frac{1}{\sqrt{d}})$, $p_{\setfont{Z}}(x)=\mathcal{N}(0, I)$, $p_{\setfont{Z}}(y\mid x)=\mathcal{N}(y; x^\top\theta^*, 0.1)$. Let the teaching risk be the parameter difference: $R(\hat\theta)=\|\hat\theta-\theta^*\|$. Given a sample $S$ with $n$ $iid$ items drawn from $p_{\setfont{Z}}$, ridge regression estimates $\hat\theta_S=A_{rr}(S)$ with $\lambda=0.1$ and $\tilde\ell(z_i)=(x_i^\top\theta-y_i)^2$. The corresponding MINLP is: \begin{eqnarray} \min_{b\in\{0,1\}^n,\hat\theta\in\setfont{R}^d} &&||\hat\theta-\theta^*|| \label{eq:Ridge}\\ \mbox{s.t.
} && \lambda\hat\theta+2\sum_{i=1}^n b_i(x_i^\top\hat\theta- y_i) x_i = 0. \nonumber \end{eqnarray} We run the same set of experiments. Tables~\ref{Ep:empc} and~\ref{Ep:empd} show the results, which are qualitatively similar to teaching logistic regression. Again, we see the empirical super teaching ratio $\hat c_n \ll 1$, indicating the presence of super teaching. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\columnwidth} \centering \includegraphics[width=.9\columnwidth]{logistic} \caption{logistic regression} \label{fig:logistic} \end{subfigure} ~ \begin{subfigure}[t]{0.45\columnwidth} \centering \includegraphics[width=.9\columnwidth]{ridge} \caption{ridge regression} \label{fig:ridge} \end{subfigure}% \caption{Typical trials from the MINLP algorithm} \label{fig:example} \end{figure} Finally, Figure~\ref{fig:example} visualizes one typical trial each for teaching logistic regression and ridge regression. $S$ consists of both dark and light points, with the dark ones representing $B(S)$ optimized by MINLP. The dashed line shows $\hat\theta_S$, while the solid line shows $\hat\theta_{B(S)}$. The ground truth ($x_1+x_2=0$ in logistic regression, $y=x$ in ridge regression) essentially overlaps with the solid lines. Specifically, the super-taught models $\hat\theta_{B(S)}$ have negligible risks of 2.5e-4 and 3.3e-3, whereas the models $\hat\theta_S$ trained from the whole $iid$ sample $S$ incur much larger risks of 0.03 and 0.16, respectively. \section{Related Work} \label{sec:relatedwork} There have been several research threads in different communities aimed at reducing a data set while maintaining its utility. The first thread is training set reduction~\cite{garcia2012prototype,zeng2005smo,Wilson2000}, which during training time prunes items in $S$ in an attempt to improve the learned model.
The second thread is coresets~\cite{har2011geometric,2017arXiv170306476B}, a summary of $S$ such that models learned on the summary are provably competitive with models learned on the full data set $S$. But as they do not know the target model $p_{\setfont{Z}}$ or $\theta^*$, these methods cannot truly achieve super teaching. The third thread is curriculum learning~\cite{icml2009_006} which showed that smart initialization is useful for nonconvex optimization. In contrast, our teacher can directly encode the true model and therefore obtain faster rates. The final thread is sample compression~\cite{floyd1995sample}, where a compression function chooses a subset $T\subset S$ and a reconstruction function to form a hypothesis. Our present work has some similarity with compression, which allows increased accuracy since compression bounds can be used as regularization~\cite{kontorovich2017nearest}. The theoretical study of machine teaching has focused on the teaching dimension, i.e. the minimum training set size needed to exactly teach a target concept $\theta^*$~\cite{Goldman1995Complexity,Shinohara1991Teachability,Zhu2017NoLearner,JMLR:v18:16-460,Liu2016Teaching,Zhu2015Machine,JMLR:v15:doliwa14a,zhu2013machine,Zilles2011Models,Balbach2009Recent,982362,conf/colt/AngluinK97,Goldman1996Teaching,DBLP:journals/jcss/Mathias97,Balbach2006Teaching,Balbach:2008:MTU:1365093.1365255,Kobayashi2009Complexity,journals/ml/AngluinK03,conf/colt/RivestY95,Hegedus1995Generalized,journals/ml/Ben-DavidE98}. Most of the prior work assumed a synthetic teaching setting where $S$ is the whole item space, which is often unrealistic. Liu \textit{et al.} considered approximate teaching in the finite $S$ setting~\cite{Liu2017Iterative}, though their analysis focused on a specific SGD learner. 
Our super teaching setting applies to arbitrary learners, and we allow approximate teaching -- namely we do not require the teacher to teach exactly the target model, which is infeasible in our pool-based teaching setting with a finite $S$. Machine teaching applications include education~\cite{Clement2016edm,Patil2014Optimal,singla2014near,NIPS2013_4887,Cakmak2011Mixed,Rafferty:2011:FTP:2026506.2026545}, computer security~\cite{Alfeld2017Explicit,Alfeld2016Data,Mei2015Machine}, and interactive machine learning~\cite{Suh2016Label,AAAI124954,Khan2011How}. By establishing the existence of super-teaching, the present paper can guide the process of finding a more effective training set for these applications. \section{Discussions and Conclusion} \label{discuss:superteach} We presented super-teaching: when the teacher already knows the target model, she can often choose from a given training set a smaller subset that trains a learner better. We proved this for two learners, and provided an empirical algorithm based on mixed integer nonlinear programming to find a super teaching set. However, much needs to be done on the theory of super teaching. We give two counterexamples to illustrate that not all learners are super-teachable. \begin{example}[MLE of interval] Let $\setfont{X}=[0, \theta^*]$, where $\theta^*\in \setfont{R}^+$. $p_{\setfont{Z}}(x)=U(\setfont{X})$. Given a $n$-item training set $S$, the MLE for $\theta^*$ is $\hat\theta_S=A_{int}(S)=\max_{i=1:n}x_i$. The risk is defined as $R(\hat\theta_S)=|\hat\theta_S-\theta^*|$. We show $A_{int}$ is not super-teachable. $\hat\theta_{B(S)}=\max_{x_i\in B(S)}x_i\le \max_{x_i\in S}x_i=\hat\theta_S$. Since $\hat\theta_S \le \theta^*$, $R(\hat\theta_{B(S)})=|\hat\theta_{B(S)}-\theta^*|\ge |\hat\theta_S-\theta^*|=R(\hat\theta_S)$. 
\end{example} We can generalize this to a classification setting, and show that neither the least nor the greatest consistent hypothesis learner is super-teachable: \begin{example}[Consistent learners]\label{ex:consistent} Let $\setfont{X}=[x_{\min}, x_{\max}] \subset \setfont{Z}$ be an interval over the integer grid. The hypothesis space is $\Theta=\{[a,b]\subseteq\setfont{X}: \mbox{$y=1$ in $[a,b]$ and $-1$ outside}\}$. $\theta^*=[a^*, b^*] \in \Theta$. $p_{\setfont{Z}}$ is uniform on $\setfont{X}$, with noiseless labels $y$ given according to $\theta^*$. The risk $R(\hat\theta_S)$ is the size of the symmetric difference between the two intervals $\hat\theta_S$ and $\theta^*$, normalized by $x_{\max} - x_{\min}$. Given a sample $S$, the least consistent learner $A_{lc}$ learns the tightest interval over positive items in $S$: $\hat\theta^{lc}_S=A_{lc}(S) \triangleq \left[\min_{\substack{i=1:n\\y_i=1}} x_i, \max_{\substack{i=1:n\\y_i=1}} x_i \right].$ $\hat\theta^{lc}_S=\emptyset$ if $S$ does not contain positive items. The greatest consistent learner $A_{gc}$ extends the hypothesis interval in both directions as much as possible before hitting negative points in $S$. If $S$ has no positive items, we define $\hat\theta^{gc}_S=\emptyset$, too. \begin{restatable}{proposition}{Consistent} Neither $A_{lc}$ nor $A_{gc}$ is super-teachable. \end{restatable} \begin{proof} We first show $A_{lc}$ is not super-teachable. Note that $A_{lc}$ learns the tightest interval consistent with $S$, thus we always have $\hat\theta^{lc}_S\subseteq \theta^*$. Now we show that $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S$ is always true so that $R(\hat\theta^{lc}_S)\le R(\hat\theta^{lc}_{B(S)})$ follows. If $\theta^*=\emptyset$, then trivially $\hat\theta^{lc}_{B(S)}= \hat\theta^{lc}_S=\emptyset$. Now assume $\theta^*\neq\emptyset$. If $\exists (x, 1)\in B(S)$, let $[a_1, b_1]=\hat\theta^{lc}_{B(S)}$.
Note that $\hat\theta^{lc}_S\neq \emptyset$ because $B(S)\subseteq S$ and thus $S$ has at least one positive point. Let $\hat\theta^{lc}_S=[a_2,b_2]$. Now $a_1=\min\{x\mid (x, 1)\in B(S)\}\ge \min\{x\mid (x, 1)\in S\}=a_2$, and $b_1=\max\{x\mid (x, 1)\in B(S)\}\le \max\{x\mid (x, 1)\in S\}=b_2$. Thus we have $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S$. If $\nexists (x, 1)\in B(S)$, $\hat\theta^{lc}_{B(S)}=\emptyset$ and $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S$ is always true. Thus $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S\subseteq \theta^*$ for any $B$ and any $S$. The proof for $A_{gc}$ is similar by showing $\theta^*\subseteq \hat\theta^{gc}_S\subseteq \hat\theta^{gc}_{B(S)}$. \end{proof} \end{example} This leads to an open question: which families of learners are super-teachable? We offer a conjecture here: we speculate that MLEs (and the derived MAP estimates or regularized empirical risk minimizers) which satisfy the asymptotic normality conditions~\cite{white1982maximum} are super-teachable. This conjecture is motivated by its similarity to the proof in section~\ref{sec:Gaussian}. Also note that the two counterexamples are classic examples of MLEs that do \emph{not} satisfy the asymptotic normality conditions. Another open question concerns the optimal super-teaching subset size $k$ for a given training set of size $n$. For example, our result on teaching the MLE of the Gaussian mean indicates that the rate improves as $k$ grows. However, our analysis only applies to a fixed $k$. Further research is needed to identify the optimal $k$. \textbf{Acknowledgments}: R.N. acknowledges support by NSF IIS-1447449 and CCF-1740707. P.R. is supported in part by grants NSF DMS-1712596, NSF DMS-TRIPODS-1740751, DARPA W911NF-16-1-0551, ONR N00014-17-1-2147 and a grant from the MIT NEC Corporation. X.Z. is supported in part by NSF CCF-1704117, IIS-1623605, CMMI-1561512, DGE-1545481, and CCF-1423237.
\section{Introduction } Classifications of abstract regular polytopes have been a subject of interest for several decades. One path has been to fix (families of) groups of automorphisms and determine the abstract regular polytopes having these groups as full automorphism groups. Some striking results have been obtained, for instance for the symmetric and alternating groups. Fernandes and Leemans classified abstract regular polytopes of rank $n-1$ and $n-2$ for $S_n$~\cite{fl,sympolcorr} and more recently, they extended this classification to rank $n-3$ and $n-4$ with Mixer~\cite{flm}. Cameron, Fernandes, Leemans and Mixer showed that the highest rank of an abstract regular polytope with full automorphism group an alternating group $A_n$ is $\lfloor (n-1)/2 \rfloor$ when $n\geq 12$~\cite{CFLM2017}, and thanks to two previous papers of Fernandes, Leemans and Mixer~\cite{flm1,flm2}, this bound is known to be sharp. More recently, Gomi, Loyola and De Las Pe\~{n}as determined the non-degenerate string C-groups of order $1024$ in~\cite{sc1024}. There exists a well-known one-to-one correspondence between abstract regular polytopes and string C-groups. We therefore work with string C-groups, as they are more convenient and easier to define than abstract regular polytopes. In this paper, we study $2$-groups acting on regular polytopes. The starting point of our research was the following problem proposed by Schulte and Weiss in~\cite{Problem}. \begin{prob} Characterize the groups of orders $2^n$ or $2^np$, with $n$ a positive integer and $p$ an odd prime, which are automorphism groups of regular or chiral polytopes? \end{prob} Conder~\cite{SmallestPolytopes} showed that if $\mathcal{P}$ is a regular $3$-polytope with Schl\"afli type $\{k_1,k_2\}$, then $|\hbox{\rm Aut}(\mathcal{P})| \geq 2k_1k_2$. If $\mathcal{P}$ has Schl\"afli type $\{2^s,2^t\}$ and $|\hbox{\rm Aut}(\mathcal{P})|=2^n$, then $n-1\geq s+t$.
In this paper, we first show the following theorem. \begin{theorem}\label{existmaintheorem} For any positive integers $n, s, t$ such that $n \geq 10$, $s, t \geq 2$ and $n-1 \geq s+t$, there exists a string C-group of order $2^n$ with Schl\"afli type $\{2^s, 2^t\}$. \end{theorem} Cunningham and Pellicer~\cite{GD2016} classified the regular $3$-polytopes $\mathcal{P}$ for the case when $|\hbox{\rm Aut}(\mathcal{P})|=2k_1k_2$. Note that if $|\hbox{\rm Aut}(\mathcal{P})|=2^n$ and $k_1=4$, then $k_2 \leq 2^{n-3}$. As a special case, Cunningham and Pellicer~\cite{GD2016} obtained the classification of regular $3$-polytopes with automorphism groups of order $2^n$ and Schl\"afli type $\{4, 2^{n-3}\}$, and this was also given in Loyola~\cite{Loyola} by using the classification of $2$-groups with a cyclic subgroup of order $2^{n-3}$~\cite{Classfictionpgroup}. We prove the result again independently using new techniques that are described in this paper, and these techniques extend to further classifications. In particular, we further classify the regular $3$-polytopes with automorphism groups of order $2^n$ and Schl\"afli types $\{4, 2^{n-4}\}$ and $\{4, 2^{n-5}\}$.
To state the next result, we need to define some groups:
\begin{small}
\begin{itemize}\setlength{\parskip}{0pt}
\item [$G_1$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-3}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2,\rho_2] \rg$,
\item [$G_2$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-3}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2,\rho_2](\rho_1\rho_2)^{2^{n-4}} \rg$,
\item [$G_3$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-4}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]\rg$,
\item [$G_4$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-4}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_1\rho_2)^{2^{n-5}} \rg$,
\item [$G_5$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2, [\rho_0, (\rho_1\rho_2)^4]\rg$,
\item [$G_6$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2 (\rho_1\rho_2)^{2^{n-6}}, [\rho_0, (\rho_1\rho_2)^4]\rg,$
\item [$G_7$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2, [\rho_0, (\rho_1\rho_2)^4](\rho_1\rho_2)^{2^{n-6}}\rg,$
\item [$G_8$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2(\rho_1\rho_2)^{2^{n-6}}, [\rho_0, (\rho_1\rho_2)^4](\rho_1\rho_2)^{2^{n-6}}\rg.$
\end{itemize}
\end{small}
\begin{theorem}\label{maintheorem} For $n \geq 10$, let $\Gamma:=(G,\{\rho_0,\rho_1,\rho_2\})$ be a string C-group of order $2^n$. Then
\begin{enumerate}
\item[(1)] $\Gamma$ has type $\{4,2^{n-3}\}$ if and only if $G \cong G_1$ or $G_2$;
\item[(2)] $\Gamma$ has type $\{4,2^{n-4}\}$ if and only if $G \cong G_3$ or $G_4$;
\item[(3)] $\Gamma$ has type $\{4,2^{n-5}\}$ if and only if $G \cong G_5, G_6, G_7$ or $G_8$.
\end{enumerate}
\end{theorem}
Let $n<10$. By \cite{atles1} or \cite{atles}, there is a unique string C-group of order $2^n$ with type $\{4,4\}$, and Theorem~\ref{maintheorem} remains true for the types $\{4,2^{n-s}\}$ with $n-s\geq 3$ and $s=3, 4$ or $5$, except for the cases when $n=8$ or $9$ with $s=5$.
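As a computational aside, the presentations of $G_1$ and $G_2$ can be checked by coset enumeration for small $n$. The sketch below uses SymPy's finitely presented groups (an assumption on tooling: any system with Todd--Coxeter enumeration, such as {\sc Magma}, would serve equally well) for $n=6$, a case covered by the remark above, where both groups should have order $2^6=64$:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

def comm(x, y):
    # commutator [x, y] = x^-1 y^-1 x y
    return x**-1 * y**-1 * x * y

n = 6  # small case covered by the remark following the theorem

# G_1 with rho_0, rho_1, rho_2 renamed r0, r1, r2
F, r0, r1, r2 = free_group("r0 r1 r2")
G1 = FpGroup(F, [r0**2, r1**2, r2**2, (r0*r1)**4, (r1*r2)**(2**(n-3)),
                 (r0*r2)**2, comm((r0*r1)**2, r2)])

# G_2 differs only in its last relator
E, s0, s1, s2 = free_group("s0 s1 s2")
G2 = FpGroup(E, [s0**2, s1**2, s2**2, (s0*s1)**4, (s1*s2)**(2**(n-3)),
                 (s0*s2)**2, comm((s0*s1)**2, s2) * (s1*s2)**(2**(n-4))])

print(G1.order(), G2.order())  # expected to be 2**6 = 64 by the remark above
```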
For $n=8$ with $s=5$, there are four string C-groups with type $\{4,8\}$: two are $G_5$ and $G_6$, and the other two are
$\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{3}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_1\rho_2)^4 \rg$ and
$\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{3}}, (\rho_0\rho_2)^2, [((\rho_1\rho_2)^2)^{\rho_0}, \rho_1\rho_2](\rho_1\rho_2)^4\rg$.
For $n=9$ with $s=5$, there are six string C-groups with type $\{4,16\}$: four are $G_i$ with $5\leq i\leq 8$, and the other two are
$\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{4}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_1\rho_2)^4 \rg$ and
$\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{4}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_2\rho_1)^4, [\rho_1, \rho_0, \rho_2, \rho_1, \rho_0, \rho_1, \rho_0] \rg$.
\section{Background results}\label{backgroud}
\subsection{String C-groups}
Abstract regular polytopes and string C-groups are the same mathematical objects.
The link between these objects may be found for instance in~\cite[Chapter 2]{ARP}. We take here the viewpoint of string C-groups because it is the easiest and most efficient way to define abstract regular polytopes. Let $G$ be a group and let $S=\{\rho_0,\cdots,\rho_{d-1}\}$ be a generating set of involutions of $G$. For $I\subseteq\{0,\cdots,d-1\}$, let $G_I$ denote the subgroup generated by $\{\rho_i:i\in I\}$. Suppose that
\begin{itemize}
\item[*] for any $i,j\in \{0, \ldots, d-1\}$ with $|i-j|>1$, $\rho_i$ and $\rho_j$ commute (the \emph{string property});
\item[*] for any $I,J\subseteq\{0,\cdots,d-1\}$, $G_I\cap G_J=G_{I\cap J}$ (the \emph{intersection property}).
\end{itemize}
Then the pair $(G,S)$ is called a {\em string C-group of rank} $d$, and the {\em order} of $(G,S)$ is simply the order of $G$. If $(G,S)$ only satisfies the string property, it is called a {\em string group generated by involutions}, or \emph{sggi} for short. By the intersection property, $S$ is a minimal generating set of $G$. It is known that string C-groups are precisely the automorphism groups of regular polytopes~\cite[Section 2E]{ARP}. The following proposition is straightforward; for details, see \cite{MC}.
\begin{prop}\label{intersection}
The intersection property for a string C-group $(G,S)$ of rank $3$ is equivalent to the condition that $S$ is a minimal generating set of $G$ and $\lg \rho_0, \rho_1\rg \cap \lg \rho_1, \rho_2\rg = \lg \rho_1\rg$.
\end{prop}
The \emph{$i$-faces} of the regular $d$-polytope associated with $(G,S)$ are the right cosets of the distinguished subgroup $G_i = \lg \rho_j \ |\ j \neq i\rg$ for each $i=0,1,\cdots,d-1$, and two faces are incident just when they intersect as cosets. The {\em (Schl\"afli) type} of $(G,S)$ is the ordered set $\{p_1,\cdots, p_{d-1}\}$, where $p_i$ is the order of $\rho_{i-1}\rho_i$. In this paper we always assume that each $p_i$ is at least $3$, for otherwise the generated group is a direct product of two smaller groups. If that happens, the string C-group (and the corresponding abstract regular polytope) is called {\em degenerate}. The following proposition is related to degenerate string C-groups of rank $3$.
\begin{prop}\label{degenerate}
For $t \geq 1$, let
\begin{itemize}\setlength{\parskip}{-3pt}
\item [$L_1$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rg$,
\item [$L_2$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2}, (\rho_1\rho_2)^{2^t}, (\rho_0\rho_2)^2 \rg$,
\item [$L_3$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^t}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2 \rg$.
\end{itemize}
Then $|L_1|=16$ and $|L_2|=|L_3|=2^{t+2}$. In particular, the listed exponents are the true orders of the corresponding elements.
\end{prop}
The proof of Proposition~\ref{degenerate} is straightforward from the fact that $L_2=\langle\rho_0\rangle\times\langle\rho_1,\rho_2\rangle\cong {\mathbb Z}_2\times D_{2^{t+1}}$ and $L_3=\langle\rho_0,\rho_1\rangle\times\langle\rho_2\rangle\cong D_{2^{t+1}}\times{\mathbb Z}_2$, where $D_{2^{t+1}}$ denotes the dihedral group of order $2^{t+1}$. The following proposition is called the {\em quotient criterion} for a string C-group.
\begin{prop}{\rm \cite[Section 2E]{ARP}}\label{stringC}
Let $(\G,\{\rho_0, \rho_1, \rho_2\})$ be an sggi, and let $\Lambda = (\lg \sigma_{0}, \sigma_{1}, \sigma_{2}\rg, \{\sigma_0,\sigma_1,\sigma_2\})$ be a string C-group. If the mapping $\rho_j \mapsto \sigma_j$ for $j=0, 1, 2$ induces a homomorphism $\pi : \G \rightarrow \Lambda$ which is one-to-one on the subgroup $\lg \rho_0, \rho_1\rg$ or on $\lg \rho_1, \rho_2\rg$, then $(\G, \{\rho_0, \rho_1, \rho_2\})$ is also a string C-group.
\end{prop}
The following proposition gives some string C-groups of type $\{4, 4\}$; it is proved in \cite[Section 8.3]{HW} for $b\geq 2$, and it also holds for $b=1$ by a {\sc Magma}~\cite{BCP97} computation.
\begin{prop}\label{type44}
For $b \geq 1$, let
\begin{itemize}\setlength{\parskip}{-3pt}
\item [$M_1$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{4}, (\rho_0\rho_2)^2, (\rho_2\rho_1\rho_0)^{2b}\rg$,
\item [$M_2$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{4}, (\rho_0\rho_2)^2, (\rho_1\rho_2\rho_1\rho_0)^b\rg$.
\end{itemize}
Then $|M_1|=16b^2$ and $|M_2|=8b^2$. In particular, the listed exponents are the true orders of the corresponding elements.
\end{prop}
\subsection{Permutation representation graphs and CPR graphs}
In~\cite{Pel2008}, Daniel Pellicer introduced CPR graphs to give permutation representations of string C-groups (CPR stands for C-group Permutation Representation). These graphs are also sometimes called permutation representation graphs. Let $G$ be a group and $S:= \{\rho_0,\ldots, \rho_{d-1}\}$ be a generating set of involutions of $G$. Let $\phi$ be an embedding of $G$ into the symmetric group $S_n$ for some $n$. The {\em permutation representation graph} ${\cal G}$ of $G$ determined by $\phi$ is the multigraph with $n$ vertices and with edge labels in the set $\{0,\ldots,d-1\}$, such that two vertices $v,w$ are joined by an edge of label $j$ if and only if $(v)((\rho_j)\phi)=w$. If $(G,S)$ is a string C-group, then the permutation representation graph defined above is called a {\em CPR graph}, following Pellicer.
\subsection{Group theory}
Let $G$ be a group. For $x,y\in G$, we write $[x,y]$ for the {\em commutator} $x^{-1}y^{-1}xy$ of $x$ and $y$, and $[H, K]$ for the subgroup generated by all commutators $[x, y]$ with $x \in H$ and $y \in K$, where $H$ and $K$ are subgroups of $G$. The following proposition states basic properties of commutators; its proof is straightforward.
\begin{prop}\label{commutator}
Let $G$ be a group.
Then, for any $x, y, z \in G$, $[xy, z]=[x, z]^y[y, z]$ and $[x, yz]=[x, z][x, y]^z$.
\end{prop}
The {\em commutator (or derived) subgroup} $G'$ of a group $G$ is the subgroup generated by all commutators $[x, y]$ with $x, y \in G$. Using Proposition~\ref{commutator}, it is easy to prove that if $G$ is generated by a subset $M$, then $G'$ is generated by all conjugates in $G$ of the elements $[x_i, x_j]$ with $x_i, x_j \in M$; see~\cite[Hilfssatz III.1.11]{GroupBooks} for example.
\begin{prop}\label{Derived}
Let $G$ be a group, $M \subseteq G$ and $G = \lg M \rg$. Then $G' = \lg [x_i, x_j]^g\ |\ x_i, x_j \in M, g \in G\rg$.
\end{prop}
The {\em Frattini subgroup} $\Phi(G)$ of a finite group $G$ is defined to be the intersection of all maximal subgroups of $G$. Let $G$ be a finite $p$-group for a prime $p$, and set $\mho_1(G) = \lg g^p\ |\ g \in G\rg$. The following theorem is the well-known Burnside Basis Theorem.
\begin{theorem}{\rm \cite[Theorem 1.12]{GroupBookss}}\label{burnside}
Let $G$ be a $p$-group and $|G: \Phi(G)| = p^d$.
\begin{itemize}
\item [(1)] $G/\Phi(G) \cong {\mathbb Z}_p^d$. Moreover, if $N \lhd G$ and $G/N$ is elementary abelian, then $\Phi(G) \leq N$.
\item [(2)] Every minimal generating set of $G$ contains exactly $d$ elements.
\item [(3)] $\Phi(G) = G' \mho_1(G)$. In particular, if $p=2$, then $\Phi(G) = \mho_1(G)$.
\end{itemize}
\end{theorem}
By Theorem~\ref{burnside}(2), we have the following important observation.
\begin{rem}
A string $2$-group has C-group representations in only one rank.
\end{rem}
The common cardinality of the minimal generating sets of a $2$-group $G$ is called the {\em rank} of $G$ and is denoted by $d(G)$. This behaviour is quite different from that of almost simple groups, where in most cases, if a group has string C-group representations of maximal rank $d$, then it has string C-group representations of every rank from $3$ to $d$.
The only known exception is the alternating group $A_{11}$~\cite{flm1}. For a subgroup $H$ of a group $G$, the {\em core} $\hbox{\rm Core}_G(H)$ of $H$ in $G$ is the largest normal subgroup of $G$ contained in $H$. The following result is called {\em Lucchini's theorem}.
\begin{prop}{\rm \label{core}\cite[Theorem 2.20]{GroupBook}}
Let $A$ be a cyclic proper subgroup of a finite group $G$, and let $K = \hbox{\rm Core}_{G}(A)$. Then $|A : K| < |G : A|$; in particular, if $|A| \geq |G : A|$, then $K > 1$.
\end{prop}
\section{Proof of Theorem~\ref{existmaintheorem}}\label{Theorem1.2}
Let $n \geq 10$, $s, t \geq 2$ and $n-s-t \geq 1$. Set
$$R(\rho_0, \rho_1, \rho_2)= \{\rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{s}}, (\rho_1\rho_2)^{2^{t}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^4, \rho_2], [\rho_0,(\rho_1\rho_2)^4]\}$$
and define
$$H=\left\{ \begin{array}{ll} \lg \rho_0, \rho_1, \rho_2 \ |\ R(\rho_0, \rho_1, \rho_2), [(\rho_0\rho_1)^2, \rho_2]^{2^{\frac{n-s-t-1}{2}}}\rg, & n-s-t\mbox{ odd, }\\ \lg \rho_0, \rho_1, \rho_2 \ |\ R(\rho_0, \rho_1, \rho_2), [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]^{2^{\frac{n-s-t-2}{2}}}\rg, & n-s-t\mbox{ even. } \end{array} \right.$$
To prove Theorem~\ref{existmaintheorem}, we only need to show that $H$ is a string C-group of order $2^{n}$ with Schl\"afli type $\{2^s, 2^{t}\}$. For convenience, write $o(h)$ for the order of $h$ in $H$.
Note that $\rho_0$ commutes with $(\rho_1\rho_2)^4$ because $[\rho_0,(\rho_1\rho_2)^4]=1$. Since $\lg \rho_1,\rho_2\rg$ is a dihedral group, we have $(\rho_1\rho_2)^{\rho_1}=(\rho_1\rho_2)^{\rho_2}=(\rho_1\rho_2)^{-1}$. It follows that $\lg (\rho_1\rho_2)^4\rg \unlhd H$. Similarly, $\lg (\rho_0\rho_1)^4\rg \unlhd H$ as $[(\rho_0\rho_1)^4,\rho_2]=1$.
Let $L_2=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2}, (\rho_1\rho_2)^{2^t}, (\rho_0\rho_2)^2 \rg$. Clearly, $\rho_0$ commutes with both $\rho_1$ and $\rho_2$ in $L_2$, and hence $[\rho_0,(\rho_1\rho_2)^4]=1$. It is easy to see that the generators $\rho_0, \rho_1, \rho_2$ of $L_2$ satisfy all relations of $H$, so $L_2$ is a homomorphic image of $H$. By Proposition~\ref{degenerate}, $\rho_1\rho_2$ has order $2^t$ in $L_2$; since $(\rho_1\rho_2)^{2^t}=1$ also holds in $H$, the element $\rho_1\rho_2$ has order exactly $2^t$ in $H$.
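The order of $L_2$ underlying this step (namely $L_2 \cong {\mathbb Z}_2 \times D$ with $D$ dihedral of order $2^{t+1}$, so $|L_2|=2^{t+2}$) can be confirmed for a small parameter value by coset enumeration. A minimal sketch using SymPy's finitely presented groups, for the illustrative value $t=3$; this is not part of the proof:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# L2 = <r0, r1, r2 | r0^2, r1^2, r2^2, (r0 r1)^2, (r1 r2)^(2^t), (r0 r2)^2>
# for the illustrative value t = 3 (any small t >= 2 works the same way).
t = 3
F, r0, r1, r2 = free_group("r0, r1, r2")
L2 = FpGroup(F, [r0**2, r1**2, r2**2,
                 (r0 * r1)**2, (r1 * r2)**(2**t), (r0 * r2)**2])

# r0 is central, so L2 = <r0> x <r1, r2> with <r1, r2> dihedral of order
# 2^(t+1); hence |L2| = 2^(t+2), and r1*r2 has order 2^t in L2.
assert L2.order() == 2**(t + 2)
```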
It follows that $|H|=o((\rho_1\rho_2)^4) \cdot |H/\lg (\rho_1\rho_2)^4\rg|=2^{t-2} \cdot |H/\lg (\rho_1\rho_2)^4\rg|$.

Let $L_3=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^s}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2 \rg$. The element $\rho_2$ commutes with both $\rho_0$ and $\rho_1$ in $L_3$, and hence $[(\rho_0\rho_1)^4, \rho_2]=1$.
Since $\rho_0\rho_2=\rho_2\rho_0$, Proposition~\ref{commutator} implies $[(\rho_0\rho_1)^2, \rho_2]=[\rho_1\rho_0\rho_1, \rho_2]=[\rho_0, \rho_1\rho_2\rho_1]^{\rho_1}=[\rho_0, (\rho_1\rho_2)^2]^{\rho_2\rho_1}$. Hence $[(\rho_0\rho_1)^2, \rho_2]=1$ in $L_3$, as $(\rho_1\rho_2)^2=1$ there. Therefore the generators $\rho_0, \rho_1, \rho_2$ of $L_3$ satisfy all relations of $H$.
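The chain of equalities above uses only the two expansion rules of Proposition~\ref{commutator}, together with $\rho_0\rho_2=\rho_2\rho_0$. Since these rules are identities valid in every group, they can be sanity-checked on explicit permutations; a minimal dependency-free sketch, with the conventions $[x,y]=x^{-1}y^{-1}xy$ and $x^y=y^{-1}xy$ (the sample permutations are arbitrary):

```python
def mul(p, q):          # (p * q)(i) = q(p(i)): apply p first, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):             # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def conj(x, g):         # x^g = g^-1 x g
    return mul(mul(inv(g), x), g)

def comm(x, y):         # [x, y] = x^-1 y^-1 x y
    return mul(mul(mul(inv(x), inv(y)), x), y)

# Arbitrary sample permutations of {0, ..., 4}.
x = (1, 2, 0, 4, 3)
y = (0, 2, 3, 4, 1)
z = (4, 3, 1, 0, 2)

# [xy, z] = [x, z]^y [y, z]
assert comm(mul(x, y), z) == mul(conj(comm(x, z), y), comm(y, z))
# [x, yz] = [x, z] [x, y]^z
assert comm(x, mul(y, z)) == mul(comm(x, z), conj(comm(x, y), z))
```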
By Proposition~\ref{degenerate}, $\rho_0\rho_1$ has order $2^s$ in $L_3$, and hence also order $2^s$ in $H$. It follows that $|H|=2^{s-2} \cdot |H/\lg (\rho_0\rho_1)^4\rg|$. To finish the proof of Theorem~\ref{existmaintheorem}, it remains to prove that $|H|=2^n$.

\medskip
\noindent {\bf Case 1:} $s=2$.

We distinguish two cases, according to whether $n-t$ is odd or even.

Assume first that $n-t$ is odd. Then $H=\lg \rho_0, \rho_1, \rho_2 \ |\ R(\rho_0, \rho_1, \rho_2), [(\rho_0\rho_1)^2, \rho_2]^{2^{\frac{n-t-3}{2}}}\rg$.
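As with $L_2$, the order claim for $L_3$ (namely $L_3 \cong D \times {\mathbb Z}_2$ with $D$ dihedral of order $2^{s+1}$, so $|L_3|=2^{s+2}$) admits a quick computational check by coset enumeration. A sketch with SymPy, for the illustrative value $s=3$; again, this is not part of the proof:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# L3 = <r0, r1, r2 | r0^2, r1^2, r2^2, (r0 r1)^(2^s), (r1 r2)^2, (r0 r2)^2>
# for the illustrative value s = 3.
s = 3
F, r0, r1, r2 = free_group("r0, r1, r2")
L3 = FpGroup(F, [r0**2, r1**2, r2**2,
                 (r0 * r1)**(2**s), (r1 * r2)**2, (r0 * r2)**2])

# r2 is central, so L3 = <r0, r1> x <r2> with <r0, r1> dihedral of order
# 2^(s+1); hence |L3| = 2^(s+2), and r0*r1 has order 2^s in L3.
assert L3.order() == 2**(s + 2)
```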
Since $\rho_0\rho_2=\rho_2\rho_0$, we have $[(\rho_0\rho_1)^2, \rho_2]=(\rho_1\rho_0\rho_1\rho_2)^2$. It follows that $[(\rho_0\rho_1)^2, \rho_2]^{2^{\frac{n-t-3}{2}}}=(\rho_1\rho_0\rho_1\rho_2)^{2^{\frac{n-t-1}{2}}}$. Note that $(\rho_0\rho_1)^4=1$ and $\lg (\rho_1\rho_2)^4 \rg \unlhd H$.
Thus $H/\lg (\rho_1\rho_2)^4\rg\cong H_1$, where
$$H_1=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^2}, (\rho_1\rho_2)^{2^2}, (\rho_0\rho_2)^2, (\rho_1\rho_0\rho_1\rho_2)^{2^{\frac{n-t-1}{2}}}\rg.$$
By Proposition~\ref{type44}, $|H_1|=8\cdot (2^{\frac{n-t-1}{2}})^2=2^{n-t+2}$, and hence $|H|=2^{t-2} \cdot |H/\lg (\rho_1\rho_2)^4\rg|=2^{t-2}|H_1|=2^n$.

Assume now that $n-t$ is even.
Then $H=\lg \rho_0, \rho_1, \rho_2 \ |\ R(\rho_0, \rho_1, \rho_2), [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]^{2^{\frac{n-t-4}{2}}}\rg$. A similar argument to the one above gives $H/\lg (\rho_1\rho_2)^4\rg\cong H_2$, where
$$H_2=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^2}, (\rho_1\rho_2)^{2^2}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]^{2^{\frac{n-t-4}{2}}}\rg.$$
Noting that $(\rho_0\rho_1)^2=(\rho_0\rho_1)^{-2}$ and $(\rho_1\rho_2)^2=(\rho_1\rho_2)^{-2}$ in $H_2$, we have
$$[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]^{2^{\frac{n-t-4}{2}}}=(((\rho_0\rho_1)^2(\rho_1\rho_2)^2)^2)^{2^{\frac{n-t-4}{2}}}=(((\rho_0\rho_1\rho_2)^2)^2)^{2^{\frac{n-t-4}{2}}}=(\rho_0\rho_1\rho_2)^{2\cdot 2^{\frac{n-t-2}{2}}}$$
because $\rho_0\rho_2=\rho_2\rho_0$, and so
$$H_2=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^2}, (\rho_1\rho_2)^{2^2}, (\rho_0\rho_2)^2, (\rho_0\rho_1\rho_2)^{2\cdot 2^{\frac{n-t-2}{2}}}\rg.$$
By Proposition~\ref{type44}, $|H_2|=16 \cdot (2^{\frac{n-t-2}{2}})^2=2^{n-t+2}$, and $|H|=2^{t-2} \cdot |H/\lg (\rho_1\rho_2)^4\rg|=2^{t-2} \cdot |H_2|=2^n$.

\medskip
\noindent {\bf Case 2:} $s>2$.

Assume first that $n-t-s$ is odd.
Then $H=\lg \rho_0, \rho_1, \rho_2 \ |\ R(\rho_0, \rho_1, \rho_2), [(\rho_0\rho_1)^2, \rho_2]^{2^{\frac{n-t-s-1}{2}}}\rg$. It follows that $H/\lg (\rho_0\rho_1)^4\rg\cong H_3$, where
$$H_3=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^2}, (\rho_1\rho_2)^{2^t}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^4], [(\rho_0\rho_1)^2, \rho_2]^{2^{\frac{(n-s+2)-t-3}{2}}}\rg.$$
By Case~1, $|H_3|=2^{n-s+2}$, and therefore $|H|=2^{s-2} \cdot |H/\lg (\rho_0\rho_1)^4\rg|=2^{s-2} \cdot |H_3|=2^n$.

Assume now that $n-t-s$ is even. Then $H/\lg (\rho_0\rho_1)^4\rg\cong H_4$, where
$$H_4=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^2}, (\rho_1\rho_2)^{2^t}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^4], [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]^{2^{\frac{(n-s+2)-t-4}{2}}}\rg.$$
Then $|H_4|=2^{n-s+2}$ by Case~1, and therefore $|H|=2^{s-2} \cdot |H_4|=2^n$.
\hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm}

\begin{cor} \label{cor3.1} The pairs $(G_1,\{\rho_0,\rho_1,\rho_2\})$, $(G_3,\{\rho_0,\rho_1,\rho_2\})$ and $(G_5,\{\rho_0,\rho_1,\rho_2\})$, defined in Theorem~\ref{maintheorem}, are string $C$-groups of order $2^n$ with Schl\"afli types $\{4, 2^{n-3}\}$, $\{4, 2^{n-4}\}$ and $\{4, 2^{n-5}\}$, respectively. \end{cor}

\f {\bf Proof.}\hskip10pt By taking $(s, t)=(2, n-3),(2,n-4),(2,n-5)$ in the proof of Theorem~\ref{existmaintheorem}, we know that $(H_i,\{\rho_0,\rho_1,\rho_2\})$ for $i=1,3,5$ are string $C$-groups of order $2^n$ with Schl\"afli types $\{4, 2^{n-3}\}$, $\{4, 2^{n-4}\}$ and $\{4, 2^{n-5}\}$, respectively, where
\begin{small}
\begin{itemize}\setlength{\parskip}{0pt}
\item [$H_1$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-3}}, (\rho_0\rho_2)^2, [\rho_0,(\rho_1\rho_2)^4], [(\rho_0\rho_1)^2,\rho_2] \rg$,
\item [$H_3$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-4}}, (\rho_0\rho_2)^2, [\rho_0,(\rho_1\rho_2)^4], [(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]\rg$,
\item [$H_5$]$=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^4], [\rho_0, (\rho_1\rho_2)^2]^2\rg$.
\end{itemize}
\end{small}
Since $\rho_0\rho_2=\rho_2\rho_0$ and $((\rho_0\rho_1)^2)^{\rho_0}=((\rho_0\rho_1)^2)^{\rho_1}=(\rho_0\rho_1)^2$, by Proposition~\ref{commutator} we have the following identities in all $H_i$ and $G_i$ for $i=1,3,5$:
\begin{small}
\begin{itemize}\setlength{\parskip}{0pt}
\item []$[(\rho_0\rho_1)^2, \rho_2]=[\rho_1\rho_0\rho_1, \rho_2]=[\rho_0, \rho_1\rho_2\rho_1]^{\rho_1}=[\rho_0, (\rho_1\rho_2)^2]^{\rho_2\rho_1}$,
\item []$[\rho_0, (\rho_1\rho_2)^4]=[\rho_0, (\rho_1\rho_2)^2][\rho_0, (\rho_1\rho_2)^2]^{(\rho_1\rho_2)^2}=[(\rho_0\rho_1)^2, \rho_2]^{\rho_1\rho_2}[(\rho_0\rho_1)^2, \rho_2]^{(\rho_1\rho_2)^3}$,
\item []$[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2]=[(\rho_0\rho_1)^2,\rho_2][(\rho_0\rho_1)^2,\rho_1\rho_2\rho_1]^{\rho_2}=[(\rho_0\rho_1)^2,\rho_2][(\rho_0\rho_1)^2,\rho_2]^{\rho_1\rho_2}.$
\end{itemize}
\end{small}
Clearly, $H_5=G_5$.
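The identities above are instances of the standard commutator expansions $[a,xy]=[a,y][a,x]^y$ and $[xy,a]=[x,a]^y[y,a]$, which hold in every group. As an illustration only (not part of the proof), a minimal Python sketch can sanity-check these expansions with random permutations:

```python
import random

# Permutations of {0,...,n-1} as tuples; mul(p, q) applies p first, then q,
# so products read left to right like the group words in the text.
def mul(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, image in enumerate(p):
        r[image] = i
    return tuple(r)

def comm(a, b):   # [a, b] = a^{-1} b^{-1} a b
    return mul(mul(mul(inv(a), inv(b)), a), b)

def conj(a, g):   # a^g = g^{-1} a g
    return mul(mul(inv(g), a), g)

def rand_perm(n):
    p = list(range(n))
    random.shuffle(p)
    return tuple(p)

# [a, xy] = [a, y] [a, x]^y   and   [xy, a] = [x, a]^y [y, a]
for _ in range(100):
    a, x, y = rand_perm(8), rand_perm(8), rand_perm(8)
    assert comm(a, mul(x, y)) == mul(comm(a, y), conj(comm(a, x), y))
    assert comm(mul(x, y), a) == mul(conj(comm(x, a), y), comm(y, a))
print("commutator expansions verified")
```

Each displayed identity in the list is obtained by applying these two expansions repeatedly and conjugating.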
In $G_1$, $[\rho_0, (\rho_1\rho_2)^4]=[(\rho_0\rho_1)^2, \rho_2]^{\rho_1\rho_2}[(\rho_0\rho_1)^2, \rho_2]^{(\rho_1\rho_2)^3}=1$ because $[(\rho_0\rho_1)^2,\rho_2]=1$. Thus, $H_1=G_1$. To prove $H_3=G_3$, we only need to show that $[\rho_0, (\rho_1\rho_2)^4]=1$ in $G_3$.
Noting that $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]=1$ in $G_3$, we have
\begin{center}
$\begin{array}{l}
[(\rho_0\rho_1)^2, \rho_2]^{\rho_1}[(\rho_0\rho_1)^2, \rho_2]=[\rho_0, (\rho_1\rho_2)^2]^{\rho_2}[(\rho_0\rho_1)^2, \rho_2]
=[\rho_0, (\rho_2\rho_1)^2][(\rho_0\rho_1)^2, \rho_2]\\
=\rho_0(\rho_1\rho_2)^2\rho_0(\rho_2\rho_1)^2\{(\rho_0\rho_1)^2\}\rho_2(\rho_0\rho_1)^2\rho_2
=\rho_0\{(\rho_0\rho_1)^2\}(\rho_1\rho_2)^2\rho_0(\rho_2\rho_1)^2\rho_2(\rho_0\rho_1)^2\rho_2 \\
=\rho_1\rho_0\rho_1(\rho_1\rho_2)^2\rho_0\{\rho_2(\rho_1\rho_2)^2\}(\rho_0\rho_1)^2\rho_2
=\rho_1\rho_0\rho_1(\rho_1\rho_2)^2\rho_0\rho_2(\rho_0\rho_1)^2(\rho_1\rho_2)^2\rho_2 \\
=\rho_1\rho_0\rho_1\rho_1\rho_2\rho_1\rho_2\rho_0\rho_2\rho_0\rho_1\rho_0\rho_1\rho_1\rho_2\rho_1\rho_2\rho_2=1,
\end{array}$
\end{center}
that is, $[(\rho_0\rho_1)^2, \rho_2]^{\rho_1}=[(\rho_0\rho_1)^2, \rho_2]^{-1}$. On the other hand, since $1=[(\rho_0\rho_1)^2,\rho_2\rho_2]=[(\rho_0\rho_1)^2,\rho_2][(\rho_0\rho_1)^2,\rho_2]^{\rho_2}$ and $1=[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2]=[(\rho_0\rho_1)^2,\rho_2][(\rho_0\rho_1)^2,\rho_2]^{\rho_1\rho_2}$, we have $[(\rho_0\rho_1)^2, \rho_2]^{\rho_1}=([(\rho_0\rho_1)^2, \rho_2]^{\rho_2})^{-1}=[(\rho_0\rho_1)^2, \rho_2]$. It follows that $[(\rho_0\rho_1)^2, \rho_2]^{2}=1$ and $[(\rho_0\rho_1)^2,\rho_2]^{\rho_1\rho_2}=[(\rho_0\rho_1)^2,\rho_2]^{-1}$, and so $[\rho_0, (\rho_1\rho_2)^4]=[(\rho_0\rho_1)^2, \rho_2]^{\rho_1\rho_2}[(\rho_0\rho_1)^2, \rho_2]^{(\rho_1\rho_2)^3}=[(\rho_0\rho_1)^2,\rho_2]^{-2}=1$, as required.
\hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm}

\section{Proof of Theorem~\ref{maintheorem}}\label{Theorem1.3}

To prove Theorem~\ref{maintheorem}, we need the following lemmas.

\begin{lem}\label{quotient}
Let $(G,\{\rho_0,\rho_1,\rho_2\})$ be a string C-group of type $\{2^s, 2^t\}$ with $2 \leq s \leq t$. Let $|G|=2^n$ and $2t \geq n-1$.
Then $N = \lg (\rho_1\rho_2)^{2^{t-1}} \rg \unlhd G$ and $(\overline G, \{\overline{\rho_{0}}, \overline{\rho_{1}}, \overline{\rho_{2}}\})$ is a string $C$-group of type $\{2^s, 2^{t-1}\}$ and order $2^{n-1}$, where $\overline G=G/N$ and $\overline{x}=xN$ for any $x\in G$.
\end{lem}

\f {\bf Proof.}\hskip10pt Let $H = \lg \rho_0\rho_1, \rho_1\rho_2\rg$ be the rotation subgroup of $G$. Then $|G:H| \leq 2$. Since $\{\rho_0,\rho_1,\rho_2\}$ is a minimal generating set of $G$, Theorem~\ref{burnside}~(2) implies $d(G)=3$, and since $H$ is generated by two elements, we have $|H| = 2^{n-1}$. Let $M = \lg \rho_1\rho_2\rg$. Then $|M| = 2^t$ and $|M|^2\geq |H|$ as $2t \geq n-1$. Then Proposition~\ref{core} implies that $\hbox{\rm Core}_{H}(M)>1$. Since $M$ is cyclic and $|N| = 2$, $N$ is characteristic in $\hbox{\rm Core}_{H}(M)$, and so $\hbox{\rm Core}_{H}(M)\unlhd H$ implies $N \unlhd H$.
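The passage from $N \unlhd H$ to $N \unlhd G$ rests on the fact that $(\rho_1\rho_2)^{2^{t-1}}$ is the central half-turn of the dihedral group $\lg \rho_1, \rho_2\rg$. A minimal Python sketch (illustration only, using one hypothetical concrete realization of a dihedral $2$-group) checks this standard fact:

```python
# Dihedral group <r1, r2> of order 2m acting on {0,...,m-1}:
# the reflections r1: i -> -i and r2: i -> 1-i compose to the
# rotation r1*r2: i -> i+1 of order m.  For m = 2^t, the half-turn
# (r1*r2)^(m/2) is central, so the subgroup it generates is normal.
def mul(p, q):                       # apply p first, then q
    return tuple(q[p[i]] for i in range(len(p)))

m = 2 ** 5                           # m = 2^t with t = 5; any t >= 2 works
r1 = tuple((-i) % m for i in range(m))
r2 = tuple((1 - i) % m for i in range(m))

rot = mul(r1, r2)                    # the rotation i -> i + 1
half = tuple((i + m // 2) % m for i in range(m))

# half is indeed (r1*r2)^(m/2):
p = tuple(range(m))
for _ in range(m // 2):
    p = mul(p, rot)
assert p == half

# ...and it commutes with both generators, hence with all of <r1, r2>:
assert mul(half, r1) == mul(r1, half)
assert mul(half, r2) == mul(r2, half)
print("the half-turn is central in the dihedral group")
```

In the notation of the lemma, $N$ is generated by this half-turn, which is why $N^{\rho_1}=N$ below.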
Noting that $N$ lies in the center of the dihedral group $\lg \rho_1, \rho_2\rg$, we have $N^{\rho_1} = N$, and hence $N \unlhd G$ because $G = \lg H, \rho_1\rg$. Clearly, $|\overline G|=2^{n-1}$. Since $t\geq 2$, we have $N\leq \mho_1(G) = \lg g^2 \ |\ g \in G\rg$, and by Theorem~\ref{burnside}, $N\leq \Phi(G)$ and $G/\Phi(G)\cong{\mathbb Z}_2^3$. Thus, $G/\Phi(G)$ has rank $3$, and since $G/\Phi(G)\cong (G/N)/(\Phi(G)/N)$, $G/N$ has rank $3$, implying that $\{\overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2}\}$ is a minimal generating set of $\overline G$. It follows that $\overline{\rho_0}$, $\overline{\rho_1}$, $\overline{\rho_2}$ and $\overline{\rho_0\rho_2}$ are involutions. To prove that $\overline{G}$ has the intersection property, by Proposition~\ref{intersection} we only need to show $\lg \overline{\rho_0}, \overline{\rho_{1}} \rg \cap \lg \overline{\rho_{1}}, \overline{\rho_{2}}\rg = \lg \overline{\rho_{1}}\rg$.

Suppose $\lg \overline{\rho_0}, \overline{\rho_{1}} \rg \cap \lg \overline{\rho_{1}}, \overline{\rho_{2}}\rg \not= \lg \overline{\rho_{1}}\rg$. Then there exist $x_1\in \lg \rho_0,\rho_1\rg$ and $x_2\in \lg \rho_1,\rho_2\rg$ such that $\overline{x_1} = \overline{x_2}\not\in \lg \overline{\rho_1}\rg$, which implies $x_1\not\in \lg\rho_1\rg$. Since $\lg \rho_0, \rho_1 \rg \cap \lg \rho_1, \rho_2\rg=\lg \rho_1\rg$, we have $x_1 \neq x_2$ and $x_1=x_2(\rho_1\rho_2)^{2^{t-1}}$ as $\overline{x_1} = \overline{x_2}$. This is impossible, because then $x_1=x_2(\rho_1\rho_2)^{2^{t-1}} \in \lg \rho_0, \rho_1\rg \cap \lg \rho_1, \rho_2\rg = \lg \rho_1\rg$. Thus, $\lg \overline{\rho_0}, \overline{\rho_{1}} \rg \cap \lg \overline{\rho_{1}}, \overline{\rho_{2}}\rg = \lg \overline{\rho_{1}}\rg$, as required.

To finish the proof, it remains to show that $\overline{\rho_0\rho_1}$ and $\overline{\rho_1\rho_2}$ have order $2^s$ and $2^{t-1}$, respectively. Since $(G,\{\rho_0,\rho_1,\rho_2\})$ has type $\{2^s,2^t\}$, $\rho_0\rho_1$ and $\rho_1\rho_2$ have order $2^s$ and $2^t$, respectively. Since $N\leq \lg \rho_1\rho_2\rg$ and $|N|=2$, $\overline{\rho_1\rho_2}$ has order $2^{t-1}$ and $\overline{\rho_0\rho_1}$ has order $2^s$ or $2^{s-1}$. Suppose $\overline{\rho_0\rho_1}$ has order $2^{s-1}$. Then $(\rho_0\rho_1)^{2^{s-1}}=(\rho_1\rho_2)^{2^{t-1}}\in \lg \rho_0, \rho_1\rg \cap \lg \rho_1, \rho_2\rg = \lg \rho_1 \rg$, and hence $(\rho_1\rho_2)^{2^{t-1}}=\rho_1$ because $\rho_1\rho_2$ has order $2^t$. It follows that $(\rho_1\rho_2)^{2^{t-1}-1}=\rho_2$ and $(\rho_1\rho_2)^{2^t-2}=1$, a contradiction. Thus, $\overline{\rho_0\rho_1}$ has order $2^s$. This completes the proof.
\hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm}

\begin{lem}\label{deviredgroup}
Let $G=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_2)^2\rg$. Then $G'=\lg [\rho_0, \rho_1], [\rho_1, \rho_2], [\rho_0, \rho_1]^{\rho_2}\rg$.
\end{lem}

\f {\bf Proof.}\hskip10pt Since $\rho_0\rho_2=\rho_2\rho_0$, Proposition~\ref{Derived} implies $G' = \lg [\rho_0, \rho_1]^{g}, [\rho_2, \rho_1]^{h} \ |\ g, h \in G\rg$. Since $\lg \rho_0,\rho_1\rg$ and $\lg \rho_1,\rho_2\rg$ are dihedral groups, we have $[\rho_0, \rho_1]^{\rho_0}=((\rho_0\rho_1)^2)^{\rho_0}=(\rho_0\rho_1)^{-2}=[\rho_0, \rho_1]^{-1}$, $[\rho_0, \rho_1]^{\rho_1}=[\rho_0, \rho_1]^{-1}$, $[\rho_1, \rho_2]^{\rho_1}=[\rho_1, \rho_2]^{-1}$ and $[\rho_1, \rho_2]^{\rho_2}=[\rho_1, \rho_2]^{-1}$. Set $L=\lg [\rho_0, \rho_1], [\rho_1, \rho_2], [\rho_0, \rho_1]^{\rho_2}\rg$.

Since $([\rho_0, \rho_1]^{\rho_2})^{\rho_0}=([\rho_0, \rho_1]^{\rho_2})^{-1}$, $([\rho_0, \rho_1]^{\rho_2})^{\rho_2}=[\rho_0, \rho_1]$ and $([\rho_0, \rho_1]^{\rho_2})^{\rho_1}=\rho_1\rho_2\rho_0\rho_1\rho_0\rho_1\rho_2\rho_1=[\rho_1, \rho_2][\rho_1, \rho_0]^{\rho_2}[\rho_2, \rho_1]$, we have $[\rho_0, \rho_1]^g \in L$ for any $g\in G$. Since $[\rho_1, \rho_2]^{\rho_0} = \rho_0\rho_1\rho_2\rho_1\rho_2\rho_0=\rho_0\rho_1\rho_0\rho_1\rho_1\rho_2\rho_1\rho_2\rho_2\rho_1\rho_0\rho_1\rho_0\rho_2 =[\rho_0, \rho_1][\rho_1, \rho_2][\rho_1, \rho_0]^{\rho_2}$, we have $[\rho_1, \rho_2]^h \in L$ for any $h\in G$. It follows that $G' \leq L$, and hence $G'=L$.
\hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm}

\vskip 0.3cm
\noindent{\bf Proof of Theorem~\ref{maintheorem}(1):} For the sufficiency, we need to show that both $G_1$ and $G_2$ are string $C$-groups of order $2^n$, where $n\geq 10$ and
\vskip 0.1cm
\begin{small}
$\begin{array}{rl}
G_1=& \lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-3}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2,\rho_2] \rg, \\
G_2=& \lg \rho_0,
\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_1, \rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_2 \ |\ \rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_0^2, \rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_1^2, \rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_2^2, (\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_0\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_1)^{2^{2}}, (\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_1\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_2)^{2^{n-3}}, (\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_0\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_2)^2, [(\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_0\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_1)^2,\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_2](\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_1\rho} \def\s{\sigma} \def\t{\tau} \def\om{\omega} \def\k{\kappa_2)^{2^{n-4}} \rg. \end{array}$ \end{small} \vskip 0.1cm By Corollary~\ref{cor3.1}, $G_1$ is a string $C$-group of order $2^n$ and we are only left with $G_2$. However, to explain the method clearly, we prove the above fact again for $G_1$ using a permutation representation graph that is simple and easy to understand. Let $G=G_1$ or $G_2$. For convenience, we write $o(g)$ for the order of $g$ in $G$. We first prove the following claim. \medskip \noindent {\bf Claim:} $|G|\leq 2^n$. Note that $G/G'$ is abelian and is generated by three involutions. Thus $|G/G'|\leq 2^3$. To prove the claim, it suffices to show $|G'|\leq 2^{n-3}$. 
For $G=G_1$, we have $[(\rho_0\rho_1)^2, \rho_2]=1$ and $\rho_0\rho_2=\rho_2\rho_0$, which implies $[\rho_0, \rho_1]^{\rho_2}=[\rho_0, \rho_1][\rho_1, \rho_0][\rho_0, \rho_1]^{\rho_2}=[\rho_0, \rho_1][(\rho_0\rho_1)^2, \rho_2] = [\rho_0,\rho_1]$.
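The middle step here is an instance of a rewriting used throughout this proof; spelled out, for any elements $g,z$ of a group,
\[
g^{z}=z^{-1}gz=g\,\bigl(g^{-1}z^{-1}gz\bigr)=g\,[g,z],
\]
and since $\rho_0,\rho_1$ are involutions, $[\rho_0,\rho_1]=\rho_0\rho_1\rho_0\rho_1=(\rho_0\rho_1)^2$, whence $[\rho_0,\rho_1]^{\rho_2}=[\rho_0,\rho_1]\,[(\rho_0\rho_1)^2,\rho_2]$.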
Since $[\rho_0,\rho_1]^{\rho_0}=[\rho_0,\rho_1]^{\rho_1}=[\rho_0,\rho_1]^{-1}$, we have $\langle [\rho_0,\rho_1]\rangle\unlhd G$, and by Lemma~\ref{deviredgroup}, we have $G'=\langle [\rho_0, \rho_1], [\rho_1, \rho_2], [\rho_0, \rho_1]^{\rho_2}\rangle=\langle [\rho_0, \rho_1], [\rho_1, \rho_2]\rangle=\langle [\rho_0,\rho_1]\rangle\langle [\rho_1,\rho_2]\rangle$.
This implies that $|G'|\leq |\langle [\rho_0,\rho_1]\rangle||\langle [\rho_1,\rho_2]\rangle|=o((\rho_0\rho_1)^2)o((\rho_1\rho_2)^2)\leq 2\cdot 2^{n-4}=2^{n-3}$, as required.

For $G=G_2$, we have $[(\rho_0\rho_1)^2,\rho_2](\rho_1\rho_2)^{2^{n-4}}=1$ and $\rho_0\rho_2=\rho_2\rho_0$, which implies $[\rho_0, \rho_1]^{\rho_2}=[\rho_0, \rho_1][(\rho_0\rho_1)^2, \rho_2] = [\rho_0, \rho_1](\rho_1\rho_2)^{-2^{n-4}} \in \langle [\rho_0, \rho_1], [\rho_1, \rho_2]\rangle$ and $[\rho_1, \rho_2]^{\rho_0}=\rho_1[(\rho_0\rho_1)^2, \rho_2]\rho_2\rho_1\rho_2=\rho_1(\rho_1\rho_2)^{-2^{n-4}}\rho_1(\rho_1\rho_2)^2=(\rho_1\rho_2)^{2^{n-4}}(\rho_1\rho_2)^2 \in \langle [\rho_1, \rho_2]\rangle$.
It follows that $\langle [\rho_1, \rho_2]\rangle\unlhd G$ because $[\rho_1,\rho_2]^{\rho_1}=[\rho_1,\rho_2]^{\rho_2}=[\rho_1,\rho_2]^{-1}$, and by Lemma~\ref{deviredgroup}, $G'=\langle [\rho_0, \rho_1], [\rho_1, \rho_2], [\rho_0, \rho_1]^{\rho_2}\rangle=\langle [\rho_0, \rho_1],[\rho_1, \rho_2]\rangle=\langle [\rho_0, \rho_1]\rangle\langle [\rho_1, \rho_2]\rangle$.
In particular, $|G'|\leq |\langle [\rho_0,\rho_1]\rangle||\langle [\rho_1,\rho_2]\rangle|=o((\rho_0\rho_1)^2)o((\rho_1\rho_2)^2)\leq 2\cdot 2^{n-4}=2^{n-3}$, as required.

Now we are ready to finish the sufficiency proof by considering two cases. Instead of the quotient method, we use a method based on permutation representation graphs. We give the details for $G_1$, as they are simpler than those for $G_2$ and may help the reader understand the case $G=G_2$.

\medskip
\noindent {\bf Case 1:} $G=G_1$.

The key point is to construct a permutation group $A$ of order at least $2^n$ on a set $\Omega$ that is an epimorphic image of $G$; that is, $A$ has three generators, say $a, b, c$, satisfying the same relations as do $\rho_0,\rho_1,\rho_2$. The permutation representation graph has vertex set $\Omega$ with $a$-, $b$- and $c$-edges. Recall that an $x$-edge ($x=a,b$ or $c$) connects two points of $\Omega$ if and only if $x$ interchanges them.
Such a graph is easy to obtain when $n$ is small, by taking $\Omega$ to be the set of right cosets of the subgroup $\langle\rho_0,\rho_2\rangle$ in $G$, where $\rho_0$, $\rho_1$ and $\rho_2$ produce the $a$-, $b$- and $c$-edges, respectively. We give in Figure~\ref{1} a permutation representation graph for $G_1$ and explain below how it is constructed.

\begin{figure}[H]
\centering
\includegraphics[width=9cm]{cpr1}\\
\caption{A permutation representation graph corresponding to $G_1$}\label{1}
\end{figure}

Set $t=2^{n-3}$ and write $i_{jt}^{k}=jt+4i+k$, where $0 \leq i \leq \frac{t}{4}-1$, $0 \leq j \leq 1$ and $1\leq k \leq 4$. Then $a, b, c$ are the following permutations on the set $\{1,2,\cdots, 2^{n-2}\}$:
\vskip 0.2cm
\begin{small}
$\begin{array}{l}
a=\prod_{i=0}^{\frac{t}{4}-1}(i_{0}^{2},i_{t}^{2})(i_{0}^{3},i_{t}^{3}), ~~~\quad b=\prod_{i=0}^{\frac{t}{4}-1}(i_{0}^{1},i_{0}^{2})(i_{t}^{1},i_{t}^{2})(i_{0}^{3},i_{0}^{4})(i_{t}^{3},i_{t}^{4}),\\
c=(0_{0}^{1})(0_{t}^{1})((\frac{t}{4}-1)_{0}^{4})((\frac{t}{4}-1)_{t}^{4})\cdot\prod_{i=0}^{\frac{t}{4}-1}(i_{0}^{2},i_{0}^{3})(i_{t}^{2},i_{t}^{3})\cdot\prod_{i=0}^{\frac{t}{4}-2}(i_{0}^{4},(i+1)_{0}^{1})(i_{t}^{4},(i+1)_{t}^{1}).\\
\end{array}$
\end{small}
\vskip 0.2cm
Here, $(i+1)_{jt}^k=jt+4(i+1)+k$ for $0\leq i\leq \frac{t}{4}-2$. Note that $1$-cycles are also given in the product of disjoint cycles of $c$; this is helpful when computing conjugates of some elements by $c$. It is easy to see that $a$ is fixed under conjugation by $c$, that is, $a^c=a$. It follows that $(ac)^2=1$.
We further have
\vskip 0.2cm
\begin{small}
$\begin{array}{lcl}
ab&=&\prod_{i=0}^{\frac{t}{4}-1} (i_{0}^{1},i_{0}^{2},i_{t}^{1},i_{t}^{2})(i_{0}^{3},i_{t}^{4},i_{t}^{3},i_{0}^{4}),\\
bc&=&\prod_{i=0}^{1}(1+ti,3+ti,\cdots, t-1+ti, t+ti, t-2+ti, \cdots, 2+ti), \\
(ab)^2&=&\prod_{i=0}^{\frac{t}{4}-1} (i_{0}^{1},i_{t}^{1})(i_{0}^{2},i_{t}^{2})(i_{0}^{3},i_{t}^{3})(i_{0}^{4},i_{t}^{4}).\\
\end{array}$
\end{small}
\vskip 0.2cm
Let $A=\langle a,b,c\rangle$. Clearly, $a^2=b^2=c^2=1$, $(ab)^4=1$ and $(bc)^{2^{n-3}}=1$. Furthermore, $(ab)^2$ is fixed under conjugation by $c$, that is, $((ab)^2)^c=(ab)^2$, and hence $[(ab)^2,c]=1$. Clearly, $A$ is transitive on $\{1,2,\cdots, 2^{n-2}\}$, and the stabilizer $A_1$ has order at least $4$ because $a,c\in A_1$. This implies that $A$ is a permutation group of order at least $2^n$ whose generators $a,b,c$ satisfy the same relations as do $\rho_0,\rho_1,\rho_2$ in $G$. Then there is an epimorphism $\phi: G\to A$ such that $\rho_0^\phi=a$, $\rho_1^\phi=b$ and $\rho_2^\phi=c$. Since $|A|\geq 2^n$ and $|G|\leq 2^n$, $\phi$ is an isomorphism, implying $|G|=2^n$.
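For readers who wish to double-check the construction, the permutations $a,b,c$ and the order count can be verified computationally. The following Python sketch is our own illustration (not part of the proof); it assumes the smallest case $n=10$ covered by the theorem, builds $a,b,c$ from the cycle decompositions above, checks the defining relations of $G_1$, and enumerates $\langle a,b,c\rangle$ by breadth-first search:

```python
# Sanity check for Case 1 (illustration only, assuming n = 10): build a, b, c
# and verify the relations of G_1 and that |<a, b, c>| = 2^n.
n = 10
t = 2 ** (n - 3)          # t = 2^{n-3}
size = 2 ** (n - 2)       # |Omega| = 2^{n-2}; points 1..size (0-indexed below)

def perm_from_transpositions(pairs):
    """Permutation as a tuple p with p[x] = image of x, from disjoint 1-indexed swaps."""
    p = list(range(size))
    for x, y in pairs:
        p[x - 1], p[y - 1] = p[y - 1], p[x - 1]
    return tuple(p)

def mul(p, q):
    """Left-to-right composition: (p*q)(x) = q(p(x)), the paper's convention."""
    return tuple(q[p[x]] for x in range(size))

def power(p, k):
    r = tuple(range(size))
    for _ in range(k):
        r = mul(r, p)
    return r

a_pairs, b_pairs, c_pairs = [], [], []
for i in range(t // 4):
    B = 4 * i  # block offset: i_0^k = B + k, i_t^k = t + B + k
    a_pairs += [(B + 2, t + B + 2), (B + 3, t + B + 3)]
    b_pairs += [(B + 1, B + 2), (t + B + 1, t + B + 2),
                (B + 3, B + 4), (t + B + 3, t + B + 4)]
    c_pairs += [(B + 2, B + 3), (t + B + 2, t + B + 3)]
    if i < t // 4 - 1:      # the links (i_0^4, (i+1)_0^1) and (i_t^4, (i+1)_t^1)
        c_pairs += [(B + 4, B + 5), (t + B + 4, t + B + 5)]

a, b, c = map(perm_from_transpositions, (a_pairs, b_pairs, c_pairs))
e = tuple(range(size))

assert mul(a, a) == e and mul(b, b) == e and mul(c, c) == e
assert power(mul(a, b), 4) == e and power(mul(b, c), t) == e
assert mul(a, c) == mul(c, a)             # (ac)^2 = 1, i.e. a^c = a
ab2 = power(mul(a, b), 2)
assert mul(ab2, c) == mul(c, ab2)         # [(ab)^2, c] = 1

# Enumerate A = <a, b, c> by breadth-first search over group elements.
seen, frontier = {e}, [e]
while frontier:
    nxt = []
    for g in frontier:
        for s in (a, b, c):
            h = mul(g, s)
            if h not in seen:
                seen.add(h)
                nxt.append(h)
    frontier = nxt
print(len(seen) == 2 ** n)   # expected: True
```

The brute-force enumeration is feasible here only because the group has just $2^n$ elements; it plays the role of the transitivity-plus-stabilizer argument in the text.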
The generators $\rho_0, \rho_1, \rho_2$ of $L_1:=\langle \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rangle$ satisfy all relations of $G$. This implies that the map $\rho_0\mapsto\rho_0$, $\rho_1\mapsto\rho_1$, $\rho_2\mapsto\rho_2$ induces a homomorphism from $G$ to $L_1$.
By Proposition~\ref{degenerate}, $o(\rho_0\rho_1)=4$ in $L_1$, and hence $o(\rho_0\rho_1)=4$ in $G$; by Proposition~\ref{stringC}, $(G,\{\rho_0,\rho_1,\rho_2\})$ is a string C-group.

\medskip
\noindent {\bf Case 2:} $G=G_2$.

As in Case 1, we give in Figure~\ref{2} a permutation representation graph for $G_2$. Note that, as in Case~1, $bc$ consists of two paths of length $2t$ with alternating labels $b$ and $c$, where $t=2^{n-4}$; here all of the real complexity lies in the definition of $a$.

\begin{figure}[H]
\centering
\includegraphics[width=14cm]{cpr2}\\
\caption{A permutation representation graph corresponding to $G_2$}\label{2}
\end{figure}

Write $ci=\frac{t}{8}-i-1$ for $0\leq i\leq \frac{t}{8}-1$, and $i_{jt}^k=jt+8i+k$ and ${ci}_{jt}^k=jt+8ci+k$ for $0 \leq j \leq 3$ and $1 \leq k \leq 8$. Note that $0\leq i\leq \frac{t}{8}-1$ if and only if $0\leq ci\leq \frac{t}{8}-1$.
Then $a, b, c$ are the following permutations on the set $\{1,2,\cdots, 2^{n-2}\}$:
\vskip 0.2cm
\begin{small}
$\begin{array}{rl}
a=&\prod_{i=0}^{\frac{t}{8}-1} (i_t^2,i_{2t}^2)(i_0^2,ci_{2t}^7)(i_{3t}^2,ci_t^7)(i_0^7,i_{3t}^7)(i_t^3,i_{2t}^3)(i_0^3,ci_{2t}^6) (i_{3t}^3,ci_t^6)(i_0^6,i_{3t}^6)\\
&(i_0^4,ci_t^5)(i_0^5,ci_t^4)(i_{2t}^4,ci_{3t}^5)(i_{2t}^5,ci_{3t}^4), \\
b=&\prod_{j=0}^3\prod_{i=0}^{\frac{t}{8}-1}(i_{jt}^1,i_{jt}^2)(i_{jt}^3,i_{jt}^4) (i_{jt}^5,i_{jt}^6)(i_{jt}^7,i_{jt}^8), \\
c=& \prod_{i=0}^{1} (0_{0+2ti}^{1}) ((\frac{t}{8}-1)_{t+2ti}^{8})((\frac{t}{8}-1)_{0+2ti}^{8},0_{t+2ti}^{1}) \prod_{j=0}^3(\prod_{i=0}^{\frac{t}{8}-1}(i_{jt}^2,i_{jt}^3)(i_{jt}^4,i_{jt}^5) (i_{jt}^6,i_{jt}^7) \cdot\\
& \prod_{i=0}^{\frac{t}{8}-2}(i_{jt}^8,(i+1)_{jt}^1)) .\\
\end{array}$
\end{small}\\
Here, $(i+1)_{jt}^1=jt+8(i+1)+1$ for $0\leq i\leq \frac{t}{8}-2$. It is easy to see that $a$ is fixed under conjugation by $c$, that is, $a^c=a$. It follows that $(ac)^2=1$. Let $\alpha=a,b$ or $c$. Then $\alpha$ is an involution. Recall that $ci=\frac{t}{8}-i-1$. Since $0\leq i\leq \frac{t}{8}-1$ if and only if $0\leq ci\leq \frac{t}{8}-1$, it is easy to see that if $\alpha$ interchanges $i_{j_1t}^{k_1}$ and $i_{j_2t}^{k_2}$, then $\alpha$ also interchanges $ci_{j_1t}^{k_1}$ and $ci_{j_2t}^{k_2}$, and that if $\alpha$ interchanges $i_{j_1t}^{k_1}$ and $ci_{j_2t}^{k_2}$, then $\alpha$ also interchanges $ci_{j_1t}^{k_1}$ and $i_{j_2t}^{k_2}$. These facts are very helpful for the computations below.
\vskip 0.2cm \begin{small} $\begin{array}{lcl} ab&=&\prod_{i=0}^{\frac{t}{8}-1} (i_t^1,i_t^2,i_{2t}^1,i_{2t}^2) (i_0^1,i_0^2,{ci}_{2t}^8,{ci}_{2t}^7) (i_{3t}^1,i_{3t}^2,{ci}_t^8,{ci}_t^7) (i_0^7,i_{3t}^8,i_{3t}^7,i_0^8) \\ & &(i_0^3,{ci}_{2t}^5,i_{3t}^3,{ci}_t^5) (i_0^5,{ci}_t^3,{ci}_{2t}^4,i_{3t}^6) (i_0^6,i_{3t}^5,{ci}_{2t}^3,{ci}_t^4) (i_0^4,{ci}_t^6,i_{3t}^4,{ci}_{2t}^6),\\ bc&=&\prod_{i=0}^{1}(1+2ti,3+2ti,\cdots, 2t-1+2ti, 2t+2ti, 2t-2+2ti, \cdots, 2+2ti), \\ (ab)^2&=&\prod_{i=0}^{\frac{t}{8}-1} (i_t^1,i_{2t}^1)(i_0^1,ci_{2t}^8)(i_t^8,ci_{3t}^1)(i_0^8,i_{3t}^8) (i_t^2,i_{2t}^2)(i_0^2,ci_{2t}^7)(i_t^7,ci_{3t}^2)(i_0^7,i_{3t}^7)\\ & &(i_0^3,i_{3t}^3)(i_t^3,ci_{3t}^6)(i_0^6,ci_{2t}^3)(i_t^6,i_{2t}^6) (i_0^4,i_{3t}^4)(i_t^4,ci_{3t}^5)(i_0^5,ci_{2t}^4)(i_t^5,i_{2t}^5),\\ (bc)^{2^{n-4}}&=&\prod_{i=0}^{t-1}(1+i,2t-i)(2t+1+i,4t-i). \end{array}$ \end{small} \vskip 0.2cm Let $A=\langle a,b,c\rangle$. Clearly, $(ab)^4=1$ and $(bc)^{2^{n-3}}=1$. Since $ci=\frac{t}{8}-i-1$, $0\leq i\leq \frac{t}{8}-2$ if and only if $1\leq ci\leq \frac{t}{8}-1$, and since $c$ interchanges $i_{jt}^8$ and $(i+1)_{jt}^1$, it also interchanges $ci_{jt}^8$ and $c(i-1)_{jt}^1$, where $c(i-1)=\frac{t}{8}-(i-1)-1$. Thus, \vskip 0.2cm \begin{small} $\begin{array}{lcl} c(ab)^2c&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},i_{3t}^{1})(i_{0}^{8},ci_{2t}^{1}) (i_{t}^{1},ci_{3t}^{8})(i_{t}^{8},i_{2t}^{8}) (i_{0}^{2},i_{3t}^{2})(i_{0}^{7},ci_{2t}^{2}) (i_{t}^{2},ci_{3t}^{7})(i_{t}^{7},i_{2t}^{7})\\ & &(i_{t}^{3},i_{2t}^{3})(i_{0}^{3},ci_{2t}^{6}) (i_{t}^{6},ci_{3t}^{3})(i_{0}^{6},i_{3t}^{6}) (i_{t}^{4},i_{2t}^{4})(i_{0}^{4},ci_{2t}^{5}) (i_{t}^{5},ci_{3t}^{4})(i_{0}^{5},i_{3t}^{5}).\\ \end{array}$ \end{small} \vskip 0.2cm It is clear that $(bc)^{2^{n-4}}$ interchanges $i_0^k$ and $ci_{t}^{9-k}$ as $i_0^k+ci_{t}^{9-k}=2t+1$ (note that $1 \leq i_0^k\leq t$ and $t+1 \leq ci_{t}^{9-k}\leq 2t$), and similarly $(bc)^{2^{n-4}}$ interchanges $i_{2t}^{k}$ and $ci_{3t}^{9-k}$. 
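As in Case 1, the construction can be double-checked computationally. The Python sketch below is our own illustration (not part of the proof) for the hypothetical smallest case $n=10$, so $t=2^{n-4}=64$; it builds $a,b,c$ from the cycle decompositions above and verifies the relations claimed for $G_2$, including $[(ab)^2,c]=(bc)^{2^{n-4}}$, together with $|\langle a,b,c\rangle|=2^n$:

```python
# Sanity check for Case 2 (illustration only, assuming n = 10): build a, b, c
# for t = 2^{n-4} and verify the relations of G_2 and that |<a, b, c>| = 2^n.
n = 10
t = 2 ** (n - 4)
size = 2 ** (n - 2)          # four segments of length t; points 1..size

def pt(i, j, k):
    """The point i_{jt}^k = j*t + 8*i + k (1-indexed)."""
    return j * t + 8 * i + k

def perm(pairs):
    p = list(range(size))
    for x, y in pairs:
        p[x - 1], p[y - 1] = p[y - 1], p[x - 1]
    return tuple(p)

def mul(p, q):               # apply p first, then q (left-to-right convention)
    return tuple(q[p[x]] for x in range(size))

def power(p, k):
    r = tuple(range(size))
    for _ in range(k):
        r = mul(r, p)
    return r

m = t // 8
a_pairs, b_pairs, c_pairs = [], [], []
for i in range(m):
    ci = m - i - 1           # the reflected index ci = t/8 - i - 1
    a_pairs += [(pt(i,1,2), pt(i,2,2)), (pt(i,0,2), pt(ci,2,7)),
                (pt(i,3,2), pt(ci,1,7)), (pt(i,0,7), pt(i,3,7)),
                (pt(i,1,3), pt(i,2,3)), (pt(i,0,3), pt(ci,2,6)),
                (pt(i,3,3), pt(ci,1,6)), (pt(i,0,6), pt(i,3,6)),
                (pt(i,0,4), pt(ci,1,5)), (pt(i,0,5), pt(ci,1,4)),
                (pt(i,2,4), pt(ci,3,5)), (pt(i,2,5), pt(ci,3,4))]
    for j in range(4):
        b_pairs += [(pt(i,j,1), pt(i,j,2)), (pt(i,j,3), pt(i,j,4)),
                    (pt(i,j,5), pt(i,j,6)), (pt(i,j,7), pt(i,j,8))]
        c_pairs += [(pt(i,j,2), pt(i,j,3)), (pt(i,j,4), pt(i,j,5)),
                    (pt(i,j,6), pt(i,j,7))]
        if i < m - 1:        # the links (i_{jt}^8, (i+1)_{jt}^1)
            c_pairs += [(pt(i,j,8), pt(i + 1,j,1))]
# the two cross-segment links of c: ((t/8-1)_0^8, 0_t^1), ((t/8-1)_{2t}^8, 0_{3t}^1)
c_pairs += [(pt(m - 1,0,8), pt(0,1,1)), (pt(m - 1,2,8), pt(0,3,1))]

a, b, c = perm(a_pairs), perm(b_pairs), perm(c_pairs)
e = tuple(range(size))

assert mul(a, a) == e and mul(b, b) == e and mul(c, c) == e
assert power(mul(a, b), 4) == e and power(mul(b, c), 2 * t) == e
assert mul(a, c) == mul(c, a)                       # (ac)^2 = 1
ab2 = power(mul(a, b), 2)
comm = mul(mul(mul(ab2, c), ab2), c)                # [(ab)^2, c] for involutions
assert comm == power(mul(b, c), t)                  # equals (bc)^{2^{n-4}}

seen, frontier = {e}, [e]
while frontier:
    nxt = []
    for g in frontier:
        for s in (a, b, c):
            h = mul(g, s)
            if h not in seen:
                seen.add(h)
                nxt.append(h)
    frontier = nxt
print(len(seen) == 2 ** n)   # expected: True
```

The same breadth-first enumeration as in Case 1 confirms the order; the only structural difference is the reflected index $ci$ entering the definition of $a$ and the two cross-segment links of $c$.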
Then it is easy to check that $[(ab)^2,c]=(ab)^2c(ab)^2c=(bc)^{2^{n-4}}$. It follows that the generators $a,b,c$ of $A$ satisfy the same relations as do $\rho_0,\rho_1,\rho_2$ in $G$, so $A$ is an epimorphic image of $G$. Clearly, $A$ is transitive and the stabilizer $A_1$ has order at least $4$ because $a,c\in A_1$. It follows that $|A|\geq 2^n$; since $|G|\leq 2^n$ by the Claim, we get $A\cong G$ and $|G|=2^n$. On the other hand, the generators $\rho_0$, $\rho_1$, $\rho_2$ of $L_1:=\langle \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rangle$ (defined in Proposition~\ref{degenerate}) satisfy all relations of $G$. This implies that $o(\rho_0\rho_1)=4$ in $G$, and by Proposition~\ref{stringC}, $(G,\{\rho_0,\rho_1,\rho_2\})$ is a string C-group.

Now we prove the necessity.
Let $(G,\{\rho_0,\rho_1,\rho_2\})$ be a string C-group of rank three of type $\{4, 2^{n-3}\}$ with $|G|=2^n$. Then each of $\rho_0,\rho_1$ and $\rho_2$ has order $2$, and we further have $o(\rho_0\rho_1)=4$, $o(\rho_0\rho_2)=2$ and $o(\rho_1\rho_2)=2^{n-3}$. To finish the proof, we aim to show that $G\cong G_1$ or $G_2$. Since both $G_1$ and $G_2$ are $C$-groups of order $2^n$ of type $\{4, 2^{n-3}\}$, it suffices to show that, in $G$, either $[(\rho_0\rho_1)^2, \rho_2] =1$ or $[(\rho_0\rho_1)^2, \rho_2](\rho_1\rho_2)^{2^{n-4}} =1$, which will be done by induction on $n$. This is easily checked to be true for $n = 10$ by using the computational algebra package {\sc Magma}~\cite{BCP97}. Assume $n\geq 11$.
Take $N = \langle (\rho_1\rho_2)^{2^{n-4}} \rangle$. By Lemma~\ref{quotient}, we have $N \unlhd G$ and $(\overline{G}=G/N, \{\overline{\rho_0}, \overline{\rho_{1}}, \overline{\rho_{2}}\})$ (with $\overline{\rho_i} = N\rho_i$) is a string C-group of rank three of type $\{4, 2^{n-4}\}$. Since $|\overline{G}| = 2^{n-1}$, by the induction hypothesis we may assume that $\overline{G}=\overline{G}_1$ or $\overline{G}_2$, where
\begin{small}
\begin{itemize}
\item [$\overline{G}_1$]$=\langle \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\,\overline{\rho_1})^{2^{2}}, (\overline{\rho_1}\,\overline{\rho_2})^{2^{n-4}}, (\overline{\rho_0}\,\overline{\rho_2})^2, [(\overline{\rho_0}\,\overline{\rho_1})^2, \overline{\rho_2}]\rangle$,
\item [$\overline{G}_2$]$=\langle \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\,\overline{\rho_1})^{2^{2}}, (\overline{\rho_1}\,\overline{\rho_2})^{2^{n-4}}, (\overline{\rho_0}\,\overline{\rho_2})^2, [(\overline{\rho_0}\,\overline{\rho_1})^2, \overline{\rho_2}](\overline{\rho_1}\,\overline{\rho_2})^{2^{n-5}}\rangle$.
\end{itemize}
\end{small}
Suppose $\overline{G}=\overline{G}_2$.
Since $N=\langle (\rho_1\rho_2)^{2^{n-4}}\rangle\cong{\mathbb Z}_2$, we have $[(\rho_0\rho_1)^2, \rho_2](\rho_1\rho_2)^{2^{n-5}}=1$ or $(\rho_1\rho_2)^{2^{n-4}}$, implying $[(\rho_0\rho_1)^2, \rho_2]=(\rho_1\rho_2)^{\delta\cdot 2^{n-5}}$, where $\delta=1$ or $-1$.
Since $((\rho_0\rho_1)^2)^{\rho_0}=(\rho_0\rho_1)^{-2}=(\rho_0\rho_1)^2$ and $\rho_0\rho_2=\rho_2\rho_0$, we have $[(\rho_0\rho_1)^2, \rho_2]^{\rho_0}=[(\rho_0\rho_1)^2, \rho_2]$, and hence $[\rho_0, (\rho_1\rho_2)^{\delta\cdot 2^{n-5}}]=1$.
By Proposition~\ref{commutator}, $1=[(\rho_0\rho_1)^4,\rho_2]=[(\rho_0\rho_1)^2,\rho_2]^{(\rho_0\rho_1)^2}[(\rho_0\rho_1)^2,\rho_2]=((\rho_1\rho_2)^{\delta\cdot 2^{n-5}})^{(\rho_0\rho_1)^2}(\rho_1\rho_2)^{\delta\cdot 2^{n-5}}=(\rho_1\rho_2)^{\delta\cdot 2^{n-4}}$, which is impossible because $o(\rho_1\rho_2)=2^{n-3}$. Thus, $\overline{G}=\overline{G}_1$.
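The commutator expansions used here, and again later in this section, rest on the standard identities, with the convention $[x,y]=x^{-1}y^{-1}xy$ and $x^{y}=y^{-1}xy$ (we take these to be the content of Proposition~\ref{commutator}):
\[
[xy,z]=[x,z]^{y}\,[y,z],\qquad [x,yz]=[x,z]\,[x,y]^{z}.
\]
Taking $x=y=(\rho_0\rho_1)^2$ and $z=\rho_2$ in the first identity yields the expansion of $[(\rho_0\rho_1)^4,\rho_2]$ used above.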
Since $N=\langle (\rho_1\rho_2)^{2^{n-4}}\rangle\cong{\mathbb Z}_2$, we have $[(\rho_0\rho_1)^2, \rho_2] = 1$ or $(\rho_1\rho_2)^{2^{n-4}}$. For the latter, $[(\rho_0\rho_1)^2, \rho_2](\rho_1\rho_2)^{2^{n-4}}=(\rho_1\rho_2)^{2^{n-3}}=1$. It follows that $G\cong G_1$ or $G_2$.
\hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm}
\noindent{\bf Proof of Theorem~\ref{maintheorem}(2):} For the sufficiency, we need to show that both $G_3$ and $G_4$ are string C-groups of order $2^n$, where $n \geq 10$ and
\vskip 0.1cm
\begin{small}
$\begin{array}{rl}
G_3=& \langle \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-4}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2,(\rho_1\rho_2)^2] \rangle, \\
G_4=& \langle \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-4}}, (\rho_0\rho_2)^2, [(\rho_0\rho_1)^2,(\rho_1\rho_2)^2](\rho_1\rho_2)^{2^{n-5}} \rangle.
\end{array}$
\end{small}\\
By Corollary~\ref{cor3.1}, we only need to show that $G_4$ is a string C-group. Let $G=G_4$. Then $[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2]=(\rho_1\rho_2)^{2^{n-5}}$.
Noting that $[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2(\rho_1\rho_2)^2]=[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2][(\rho_0\rho_1)^2,(\rho_1\rho_2)^2]^{(\rho_1\rho_2)^2} =(\rho_1\rho_2)^{2^{n-4}}=1$, we have $[(\rho_0\rho_1)^2,(\rho_1\rho_2)^{2^{n-5}}]=1$, which implies $[\rho_1,((\rho_1\rho_2)^{2^{n-5}})^{\rho_0}]=1$ as $[\rho_1,(\rho_1\rho_2)^{2^{n-5}}]=1$. Thus, $\rho_1$ centralizes $K=\langle (\rho_1\rho_2)^{2^{n-5}}, ((\rho_1\rho_2)^{2^{n-5}})^{\rho_0}\rangle$. Clearly, $K^{\rho_0}=K^{\rho_2}=K$, and so $K \trianglelefteq G$. The three generators $\rho_0K$, $\rho_1K$, $\rho_2K$ of $G/K$ satisfy the same relations as $\rho_0$, $\rho_1$, $\rho_2$ in $G_3$. In fact, $(\rho_1K\rho_2K)^{2^{n-5}}=K$, and hence $|G/K|\leq 2^{n-1}$ (here we need the fact that $|G_3|=2^9$ for $n=9$, which can be checked using {\sc Magma}).
Furthermore, $[\rho_1\rho_2,((\rho_1\rho_2)^{2^{n-5}})^{\rho_0}]=1$ as $[\rho_2,((\rho_1\rho_2)^{2^{n-5}})^{\rho_0}]=1$. It follows that $|K|\leq 4$ and $|G|\leq 2^{n+1}$. Suppose $|G|=2^{n+1}$. Then $|G/K|=2^{n-1}$ and $|K|=4$. It follows that $o(\rho_1\rho_2K)=2^{n-5}$ in $G/K$ and hence $o(\rho_1\rho_2)=2^{n-4}$ in $G$. Since $2(n-4) \geq n$ as $n\geq 10$, Proposition~\ref{core} gives $\hbox{\rm Core}_{G}(\langle \rho_1\rho_2\rangle)>1$, so $\langle (\rho_1\rho_2)^{2^{n-5}}\rangle\unlhd G$. It follows that $|K|=2$, a contradiction. Thus $|G|\leq 2^n$. We give in Figure~\ref{4} a permutation representation graph of $G$. In this case, $bc$ consists of two paths of length $2t$ and a cycle of length $4t$ with alternating labels $b$ and $c$, where $t=2^{n-5}$.
\begin{figure}[H]
\centering
\includegraphics[width=14cm]{cpr4}\\
\caption{A permutation representation graph corresponding to $G_4$}\label{4}
\end{figure}
Write $i_{jt}^k=jt+8i+k$ and $ci_{jt}^k=jt+8(\frac{t}{8}-i-1)+k$ for $0 \leq i \leq \frac{t}{8}-1$, $0\leq j \leq 7$ and $1\leq k \leq 8$. The permutations $a, b, c$ on the set $\{1,2,\cdots, 2^{n-2}\}$ are as follows:
\begin{small}
$\begin{array}{rl}
a=&\prod_{i=0}^{\frac{t}{8}-1} (i_0^2,i_{2t}^2)(i_t^2,i_{3t}^2)(i_{4t}^2,i_{6t}^2)(i_{5t}^2,i_{7t}^2) (i_0^3,i_{2t}^3)(i_t^3,i_{3t}^3)(i_{4t}^3,i_{6t}^3)(i_{5t}^3,i_{7t}^3)\\
&(i_0^4,ci_{7t}^5)(i_0^5,ci_{7t}^4)(i_{2t}^4,ci_{3t}^5)(i_{2t}^5,ci_{3t}^4) (i_{6t}^4,ci_{t}^{5})(i_{6t}^5,ci_{t}^{4})(i_{4t}^4,ci_{5t}^5)(i_{4t}^5,ci_{5t}^4)\\
&(i_0^6,i_{4t}^6)(i_t^6,i_{5t}^6)(i_{2t}^6,i_{6t}^6)(i_{3t}^6,i_{7t}^6) (i_0^7,i_{4t}^7)(i_t^7,i_{5t}^7)(i_{2t}^7,i_{6t}^7)(i_{3t}^7,i_{7t}^7),\\
b=&\prod_{j=0}^7\prod_{i=0}^{\frac{t}{8}-1}(i_{jt}^1,i_{jt}^2)(i_{jt}^3,i_{jt}^4) (i_{jt}^5,i_{jt}^6)(i_{jt}^7,i_{jt}^8), \\
\end{array}$
\end{small} \\
\begin{small}
$\begin{array}{rl}
c=&(0_{0}^{1}) (0_{6t}^{1})((\frac{t}{8}-1)_{t}^{8})((\frac{t}{8}-1)_{7t}^{8}) (0_{2t}^{1},0_{4t}^{1})((\frac{t}{8}-1)_{3t}^{8},(\frac{t}{8}-1)_{5t}^{8})\cdot\prod_{i=0}^{3}((\frac{t}{8}-1)_{2ti}^{8},0_{t+2ti}^{1})\\
&\cdot\prod_{j=0}^7(\prod_{i=0}^{\frac{t}{8}-1}(i_{jt}^2,i_{jt}^3)(i_{jt}^4,i_{jt}^5)(i_{jt}^6,i_{jt}^7)\cdot \prod_{i=0}^{\frac{t}{8}-2}(i_{jt}^8,(i+1)_{jt}^1)).
\end{array}$
\end{small} \\
Here, $(i+1)_{jt}^k=jt+8(i+1)+k$ for $0 \leq i \leq \frac{t}{8}-2$. It is easy to see that $c$ fixes $a$ under conjugation, that is, $a^c=a$. It follows that $(ac)^2=1$.
Furthermore, \begin{small} $\begin{array}{lcl} ab&=&\prod_{i=0}^{\frac{t}{8}-1} (i_0^1,i_0^2,i_{2t}^1,i_{2t}^2) (i_t^1,i_t^2,i_{3t}^1,i_{3t}^2) (i_{4t}^1,i_{4t}^2,i_{6t}^1,i_{6t}^2) (i_{5t}^1,i_{5t}^2,i_{7t}^1,i_{7t}^2)\\ & &(i_0^3,i_{2t}^4,ci_{3t}^6,ci_{7t}^5) (i_t^3,i_{3t}^4,ci_{2t}^6,ci_{6t}^5) (i_{2t}^3,i_0^4,ci_{7t}^6,ci_{3t}^5) (i_{3t}^3,i_t^4,ci_{6t}^6,ci_{2t}^5) \\ & &(ci_{4t}^3,ci_{6t}^4,i_t^6,i_{5t}^5) (ci_{5t}^3,ci_{7t}^4,i_0^6,i_{4t}^5) (ci_{6t}^3,ci_{4t}^4,i_{5t}^6,i_t^5) (ci_{7t}^3,ci_{5t}^4,i_{4t}^6,i_0^5)\\ & &(i_0^7,i_{4t}^8,i_{4t}^7,i_0^8) (i_t^7,i_{5t}^8,i_{5t}^7,i_{t}^8) (i_{2t}^7,i_{6t}^8,i_{6t}^7,i_{2t}^8) (i_{3t}^7,i_{7t}^8,i_{7t}^7,i_{3t}^8),\\ bc&= &\prod_{i=0}^{1}(1+6ti,3+6ti,\cdots, 2t-1+6ti, 2t+6ti, 2t-2+6ti, \cdots, 2+6ti) \\ & &(2t+1+2ti,2t+3+2ti,\cdots,4t-1+2ti,6t-2ti,6t-2-2ti,\cdots,4t+2-2ti), \\ (ab)^2&=&\prod_{i=0}^{\frac{t}{8}-1} (i_0^1,i_{2t}^1)(i_{t}^1,i_{3t}^1) (i_{4t}^1,i_{6t}^1)(i_{5t}^1,i_{7t}^1) (i_0^2,i_{2t}^2)(i_t^2,i_{3t}^2) (i_{4t}^2,i_{6t}^2)(i_{5t}^2,i_{7t}^2)\\ & &(i_0^3,ci_{3t}^6)(i_t^3,ci_{2t}^6) (i_{2t}^3,ci_{7t}^6)(i_{3t}^3,ci_{6t}^6) (i_0^6,ci_{5t}^3)(i_t^6,ci_{4t}^3) (i_{4t}^6,ci_{7t}^3)(i_{5t}^6,ci_{6t}^3)\\ & &(i_0^4,ci_{3t}^5)(i_t^4,ci_{2t}^5) (i_{2t}^4,ci_{7t}^5)(i_{3t}^4,ci_{6t}^5) (i_0^5,ci_{5t}^4)(i_t^5,ci_{4t}^4) (i_{4t}^5,ci_{7t}^4)(i_{5t}^5,ci_{6t}^4)\\ & &(i_0^7,i_{4t}^7)(i_t^7,i_{5t}^7) (i_{2t}^7,i_{6t}^7)(i_{3t}^7,i_{7t}^7) (i_0^8,i_{4t}^8)(i_t^8,i_{5t}^8) (i_{2t}^8,i_{6t}^8)(i_{3t}^8,i_{7t}^8),\\ (bc)^{2^{n-5}}&=&\prod_{i=0}^{t-1}(1+i,2t-i)(6t+1+i,8t-i)\cdot\prod_{i=0}^{2t-1}(2t+1+i,6t-i), \end{array}$ \end{small} For $0\leq i\leq \frac{t}{8}-2$, $c$ interchanges $i_{jt}^8$ and $(i+1)_{jt}^1$, and also $ci_{jt}^8$ and $c(i-1)_{jt}^1$. 
Thus, \vskip 0.2cm \begin{small} $\begin{array}{lcl} c(ab)^2c&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},i_{4t}^{1})(i_{t}^{1},i_{5t}^{1}) (i_{6t}^{1},i_{2t}^{1})(i_{7t}^{1},i_{3t}^{1}) (i_{0}^{2},ci_{3t}^{7})(i_{0}^{7},ci_{5t}^{2}) (i_{t}^{2},ci_{2t}^{7})(i_{t}^{7},ci_{4t}^{2})\\ & &(i_{6t}^{2},ci_{5t}^{7})(i_{6t}^{7},ci_{3t}^{2}) (i_{7t}^{2},ci_{4t}^{7})(i_{7t}^{7},ci_{2t}^{2}) (i_{0}^{3},i_{2t}^{3})(i_{t}^{3},i_{3t}^{3}) (i_{6t}^{3},i_{4t}^{3})(i_{7t}^{3},i_{5t}^{3})\\ & &(i_{0}^{4},ci_{5t}^{5})(i_{0}^{5},ci_{3t}^{4}) (i_{t}^{4},ci_{4t}^{5})(i_{t}^{5},ci_{2t}^{4}) (i_{6t}^{4},ci_{3t}^{5})(i_{6t}^{5},ci_{5t}^{4}) (i_{7t}^{4},ci_{2t}^{5})(i_{7t}^{5},ci_{4t}^{4})\\ & &(i_{0}^{6},i_{4t}^{6})(i_{t}^{6},i_{5t}^{6}) (i_{6t}^{6},i_{2t}^{6})(i_{7t}^{6},i_{3t}^{6}) (i_{0}^{8},i_{2t}^{8})(i_{t}^{8},i_{3t}^{8}) (i_{6t}^{8},i_{4t}^{8})(i_{7t}^{8},i_{5t}^{8}),\\ c(ab)^2cb&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},i_{4t}^{2},ci_{t}^{8},ci_{3t}^{7}) (ci_{t}^{1},ci_{5t}^{2},i_{0}^{8},i_{2t}^{7}) (i_{2t}^{1},i_{6t}^{2},ci_{5t}^{8},ci_{7t}^{7}) (ci_{3t}^{1},ci_{7t}^{2},i_{4t}^{8},i_{6t}^{7})\\ & &(i_{4t}^{1},i_{0}^{2},ci_{3t}^{8},ci_{t}^{7}) (ci_{5t}^{1},ci_{t}^{2},i_{2t}^{8},i_{0}^{7}) (i_{6t}^{1},i_{2t}^{2},ci_{7t}^{8},ci_{5t}^{7}) (ci_{7t}^{1},ci_{3t}^{2},i_{6t}^{8},i_{4t}^{7})\\ & &(i_{0}^{3},i_{2t}^{4},ci_{t}^{6},ci_{5t}^{5}) (ci_{t}^{3},ci_{3t}^{4},i_{0}^{6},i_{4t}^{5}) (i_{2t}^{3},i_{0}^{4},ci_{5t}^{6},ci_{t}^{5}) (ci_{3t}^{3},ci_{t}^{4},i_{4t}^{6},i_{0}^{5}) \\ & &(i_{4t}^{3},i_{6t}^{4},ci_{3t}^{6},ci_{7t}^{5}) (ci_{5t}^{3},ci_{7t}^{4},i_{2t}^{6},i_{6t}^{5}) (i_{6t}^{3},i_{4t}^{4},ci_{7t}^{6},ci_{3t}^{5}) (ci_{7t}^{3},ci_{5t}^{4},i_{6t}^{6},i_{2t}^{5}).\\ \end{array}$ \end{small} \vskip 0.1cm Let $A=\langle a,b,c\rangle$. 
It is clear that $(bc)^{2^{n-5}}$ interchanges $i_0^j$ and $ci_t^{9-j}$ for each $1\leq j\leq 8$ because $i_0^j+ci_t^{9-j}=2t+1$ (note that $1\leq i_0^j\leq t$ and $t+1\leq ci_t^{9-j}\leq 2t$), and similarly $(bc)^{2^{n-5}}$ also interchanges $i_{2t}^j$ and $ci_{5t}^{9-j}$, $i_{3t}^j$ and $ci_{4t}^{9-j}$, and $i_{6t}^j$ and $ci_{7t}^{9-j}$. Thus, $[c(ab)^2c,b]=(c(ab)^2cb)^2=(bc)^{2^{n-5}}$, and hence $[(ab)^2,cbc]=(bc)^{2^{n-5}}$ as $[(bc)^{2^{n-5}},c]=1$. Since $[(ab)^2,b]=1$, Proposition~\ref{commutator} implies that $[(ab)^2,(bc)^2]=[(ab)^2,cbc][(ab)^2,b]^{cbc}=(bc)^{2^{n-5}}$. It follows that the generators $a,b,c$ of $A$ satisfy the same relations as do $\rho_0,\rho_1,\rho_2$ in $G$, so $A$ is an epimorphic image of $G$; since $|A|=2^n$ and $|G|\leq 2^n$, we get $G\cong A$, of order $2^n$. Again let $L_1=\langle \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rangle$. The generators $\rho_0, \rho_1, \rho_2$ of $L_1$ satisfy all the defining relations of $G$, so $L_1$ is an epimorphic image of $G$.
This means that $o(\rho_0\rho_1)=4$ in $G$, and by Proposition~\ref{stringC}, $(G,\{\rho_0,\rho_1,\rho_2\})$ is a string C-group. To prove the necessity, let $(G,\{\rho_0,\rho_1,\rho_2\})$ be a string C-group of rank three of type $\{4, 2^{n-4}\}$ with $|G|=2^n$. Then $o(\rho_0)=o(\rho_1)=o(\rho_2)=o(\rho_0\rho_2)=2$, $o(\rho_0\rho_1)=4$ and $o(\rho_1\rho_2)=2^{n-4}$. To finish the proof, we aim to show that $G\cong G_3$ or $G_4$.
Since both $G_3$ and $G_4$ are C-groups of order $2^n$ and of type $\{4, 2^{n-4}\}$, it suffices to show that, in $G$, $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2] =1$ or $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_1\rho_2)^{2^{n-5}} =1$, which we prove by induction on $n$. This is true for $n = 10$ by {\sc Magma}. Assume $n\geq 11$. Take $N = \langle (\rho_1\rho_2)^{2^{n-5}} \rangle$. By Lemma~\ref{quotient}, we have $N \unlhd G$, and $(\overline{G}, \{\overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2}\})$, where $\overline{G}=G/N$ and $\overline{\rho_i} = N\rho_i$, is a string C-group of rank three of type $\{4, 2^{n-5}\}$.
Since $|\overline{G}| = 2^{n-1}$, by the induction hypothesis we may assume that $\overline{G}=\overline{G}_3$ or $\overline{G}_4$, where
\begin{small}
\begin{itemize}
\item [$\overline{G}_3$] $= \langle \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\overline{\rho_1})^{2^{2}}, (\overline{\rho_1}\overline{\rho_2})^{2^{n-5}}, (\overline{\rho_0}\overline{\rho_2})^2, [(\overline{\rho_0}\overline{\rho_1})^2, (\overline{\rho_1}\overline{\rho_2})^2]\rangle$,
\item [$\overline{G}_4$] $= \langle \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\overline{\rho_1})^{2^{2}}, (\overline{\rho_1}\overline{\rho_2})^{2^{n-5}}, (\overline{\rho_0}\overline{\rho_2})^2, [(\overline{\rho_0}\overline{\rho_1})^2, (\overline{\rho_1}\overline{\rho_2})^2](\overline{\rho_1}\overline{\rho_2})^{2^{n-6}}\rangle$.
\end{itemize}
\end{small}
Suppose $\overline{G}=\overline{G}_4$.
Since $N=\langle (\rho_1\rho_2)^{2^{n-5}}\rangle\cong \mathbb{Z}_2$, we have $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_1\rho_2)^{2^{n-6}}=1$ or $(\rho_1\rho_2)^{2^{n-5}}$, which implies $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]=(\rho_1\rho_2)^{\delta \cdot 2^{n-6}}$, where $\delta=1$ or $-1$.
By Proposition~\ref{commutator}, $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^4]=[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2][(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]^{(\rho_1\rho_2)^2}=(\rho_1\rho_2)^{\delta \cdot 2^{n-5}}$, and $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^8]=(\rho_1\rho_2)^{\delta \cdot 2^{n-4}}=1$, implying $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^{2^{n-6}}]=1$.
Thus, $1=[(\rho_0\rho_1)^4,(\rho_1\rho_2)^2]=[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2]^{(\rho_0\rho_1)^2}[(\rho_0\rho_1)^2,(\rho_1\rho_2)^2]=((\rho_1\rho_2)^{\delta \cdot 2^{n-6}})^{(\rho_0\rho_1)^2}(\rho_1\rho_2)^{\delta \cdot 2^{n-6}}=(\rho_1\rho_2)^{\delta \cdot 2^{n-5}}$, which is impossible because $o(\rho_1\rho_2)=2^{n-4}$. Thus, $\overline{G}=\overline{G}_3$.
In this case, $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2]=1$ or $(\rho_1\rho_2)^{2^{n-5}}$. For the latter, $[(\rho_0\rho_1)^2, (\rho_1\rho_2)^2](\rho_1\rho_2)^{2^{n-5}}=(\rho_1\rho_2)^{2^{n-4}}=1$. It follows that $G\cong G_3$ or $G_4$.
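Two facts in this proof are delegated to machine computation: the order of the quotient $L_1$ (which pins down $o(\rho_0\rho_1)=4$), and $|G_3|=2^9$ at the parameter $n=9$, checked in the proof with {\sc Magma}. As a cross-check, both coset enumerations can be reproduced with SymPy's finitely presented groups; this is a sketch under that substitution (the generator names `r0, r1, r2` are ours), not the paper's Magma code.

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# Free group on three generators r0, r1, r2
# (standing for rho_0, rho_1, rho_2; the names are ours).
F, r0, r1, r2 = free_group("r0, r1, r2")

def comm(x, y):
    """Commutator [x, y] = x^-1 y^-1 x y."""
    return x**-1 * y**-1 * x * y

# L1 = <r0, r1, r2 | ri^2, (r0 r1)^4, (r1 r2)^2, (r0 r2)^2> is the
# string Coxeter group [4, 2] = D4 x Z2, of order 16; any group mapping
# onto L1 therefore has o(rho_0 rho_1) = 4.
L1 = FpGroup(F, [r0**2, r1**2, r2**2,
                 (r0*r1)**4, (r1*r2)**2, (r0*r2)**2])
print(L1.order())

# G3 at parameter n = 9, i.e. (r1 r2)^(2^(9-4)) = (r1 r2)^32; the proof
# checks |G3| = 2^9 = 512 with Magma, reproduced here by coset enumeration.
G3 = FpGroup(F, [r0**2, r1**2, r2**2,
                 (r0*r1)**4, (r1*r2)**32, (r0*r2)**2,
                 comm((r0*r1)**2, (r1*r2)**2)])
print(G3.order())
```

The same enumerations run in any computational algebra system (Magma, GAP); SymPy is used here only because it is freely available.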
\hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm} \noindent{\bf Proof of Theorem~\ref{maintheorem}(3):} Let $n\geq 10$ and let $G=G_5, G_6, G_7$ or $G_8$, where\\ \begin{small} $\begin{array}{rl} G_5=& \lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2, [\rho_0, (\rho_1\rho_2)^4] \rg, \\ G_6=& \lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2 (\rho_1\rho_2)^{2^{n-6}}, [\rho_0, (\rho_1\rho_2)^4] \rg, \\ G_7=& \lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, [\rho_0, (\rho_1\rho_2)^2]^2, [\rho_0, (\rho_1\rho_2)^4](\rho_1\rho_2)^{2^{n-6}} \rg, \\ G_8=& \lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{2^{2}}, (\rho_1\rho_2)^{2^{n-5}}, (\rho_0\rho_2)^2, \\ & ~~~~~~~~~~~\quad [\rho_0, (\rho_1\rho_2)^2]^2(\rho_1\rho_2)^{2^{n-6}}, [\rho_0, (\rho_1\rho_2)^4](\rho_1\rho_2)^{2^{n-6}} \rg. \end{array}$ \end{small} \noindent By Corollary~\ref{cor3.1}, we only need to show that $G_6, G_7$ and $G_8$ are string C-groups. 
In all cases, $[\rho_0, (\rho_1\rho_2)^4]=1$ or $(\rho_1\rho_2)^{2^{n-6}}$. It follows from Proposition~\ref{commutator} that $[\rho_0, (\rho_1\rho_2)^8]=[\rho_0, (\rho_1\rho_2)^4]([\rho_0, (\rho_1\rho_2)^4])^{(\rho_1\rho_2)^4}=1$. Noting that $((\rho_1\rho_2)^8)^{\rho_1}=(\rho_1\rho_2)^{-8}=((\rho_1\rho_2)^8)^{\rho_2}$, we have $K=\lg (\rho_1\rho_2)^8\rg \unlhd G$. 
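The manipulation above amounts to the standard commutator identity $[x,y^2]=[x,y]\,[x,y]^y$, applied with $y=(\rho_1\rho_2)^4$. As an independent sanity check, the identity can be verified exhaustively on small permutations; this is an illustrative sketch only, and the helper names (`mul`, `inv`, `comm`, `conj`) are ours, not the paper's.

```python
# Check the identity [x, y^2] = [x, y] * [x, y]^y on all permutations of
# 4 points.  Conventions: [x, y] = x^-1 y^-1 x y and g^h = h^-1 g h,
# with words read left to right ("apply the first factor first").
from itertools import permutations as all_perms

def mul(p, q):                      # apply p, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def comm(x, y):                     # [x, y] = x^-1 y^-1 x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

def conj(g, h):                     # g^h = h^-1 g h
    return mul(mul(inv(h), g), h)

# exhaustive check over all pairs of permutations of 4 points
ok = all(comm(x, mul(y, y)) == mul(comm(x, y), conj(comm(x, y), y))
         for x in all_perms(range(4)) for y in all_perms(range(4)))
```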
Clearly, $|K|\leq 2^{n-8}$, and the three generators $\rho_0K$, $\rho_1K$, $\rho_2K$ in $G/K$ satisfy the same relations as $\rho_0$, $\rho_1$, $\rho_2$ in $G_5$ when $n=8$. By {\sc Magma}, $|G_5|=2^8$ when $n=8$, and hence $|G|=|G:K|\cdot|K| \leq 2^n$. \medskip \noindent {\bf Case 1:} $G=G_6$. We construct a permutation representation graph of $G$; in this graph, $bc$ consists of four paths of length $2t$ and two circles of length $4t$, with labels alternating between $b$ and $c$, where $t=2^{n-6}$. We omit the drawing of the graph here because it is quite big. Write $i_{jt}^k=jt+8i+k$ and ${ci}_{jt}^k=jt+8(\frac{t}{8}-i-1)+k$, where $0 \leq i \leq \frac{t}{8}-1$, $1\leq k \leq 8$ and $0\leq j \leq 15$. 
The permutations $a, b, c$ on the set $\{1,2,\cdots, 2^{n-2}\}$ are as follows: \vskip 0.2cm \begin{small} $\begin{array}{rl} a=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{2},i_{2t}^{2})(i_{t}^{2},i_{3t}^{2})(i_{4t}^{2},i_{7t}^{2})(i_{8t}^{2},i_{11t}^{2})(i_{12t}^{2},i_{14t}^{2}) (i_{13t}^{2},i_{15t}^{2})(i_{5t}^{2},ci_{7t}^{7})(i_{6t}^{2},ci_{2t}^{7})\\ & (i_{9t}^{2},ci_{13t}^{7})(i_{10t}^{2},ci_{8t}^{7}) (i_{0}^{7},i_{4t}^{7})(i_{t}^{7},i_{5t}^{7})(i_{3t}^{7},i_{6t}^{7})(i_{9t}^{7},i_{12t}^{7})(i_{10t}^{7},i_{14t}^{7}) (i_{11t}^{7},i_{15t}^{7})\\ & (i_{0}^{3},i_{2t}^{3})(i_{t}^{3},i_{3t}^{3})(i_{4t}^{3},i_{7t}^{3})(i_{8t}^{3},i_{11t}^{3}) (i_{12t}^{3},i_{14t}^{3})(i_{13t}^{3},i_{15t}^{3})(i_{5t}^{3},ci_{7t}^{6})(i_{6t}^{3},ci_{2t}^{6})\\ &(i_{9t}^{3},ci_{13t}^{6})(i_{10t}^{3},ci_{8t}^{6}) (i_{0}^{6},i_{4t}^{6})(i_{t}^{6},i_{5t}^{6})(i_{3t}^{6},i_{6t}^{6})(i_{9t}^{6},i_{12t}^{6})(i_{10t}^{6},i_{14t}^{6}) (i_{11t}^{6},i_{15t}^{6})\\ &(i_{6t}^{4},i_{8t}^{4})(i_{7t}^{4},i_{9t}^{4})(i_{0}^{4},ci_{15t}^{5})(i_{t}^{4},ci_{14t}^{5})(i_{2t}^{4},ci_{11t}^{5})(i_{3t}^{4},ci_{10t}^{5})(i_{4t}^{4},ci_{13t}^{5}) (i_{5t}^{4},ci_{12t}^{5})\\ &(i_{6t}^{5},i_{8t}^{5})(i_{7t}^{5},i_{9t}^{5})(i_{0}^{5},ci_{15t}^{4})(i_{t}^{5},ci_{14t}^{4}) (i_{2t}^{5},ci_{11t}^{4})(i_{3t}^{5},ci_{10t}^{4})(i_{4t}^{5},ci_{13t}^{4})(i_{5t}^{5},ci_{12t}^{4})\\ &(i_{8t}^{1},ci_{9t}^{8})(i_{9t}^{1},ci_{8t}^{8})(i_{10t}^{1},ci_{13t}^{8})(i_{11t}^{1},ci_{12t}^{8})(i_{12t}^{1},ci_{11t}^{8})(i_{13t}^{1},ci_{10t}^{8}) (i_{14t}^{1},ci_{15t}^{8})(i_{15t}^{1},ci_{14t}^{8}),\\ \end{array}$ \end{small} \begin{small} $\begin{array}{rl} b=&\prod_{j=0}^{15}\prod_{i=0}^{\frac{t}{8}-1}(i_{jt}^1,i_{jt}^2)(i_{jt}^3,i_{jt}^4) (i_{jt}^5,i_{jt}^6)(i_{jt}^7,i_{jt}^8), \\ c=&\prod_{i=0}^{1} (0_{8it}^{1})(0_{6t+8it}^{1})((\frac{t}{8}-1)_{t+8it}^{8})((\frac{t}{8}-1)_{7t+8it}^{8})(0_{2t+8it}^{1},0_{4t+8it}^{1})((\frac{t}{8}-1)_{3t+8it}^{8},(\frac{t}{8}-1)_{5t+8it}^{8})\\ &\prod_{i=0}^{7}((\frac{t}{8}-1)_{2it}^{8},0_{(2i+1)t}^{1}) 
\cdot\prod_{j=0}^{15}(\prod_{i=0}^{\frac{t}{8}-1}(i_{jt}^2,i_{jt}^3)(i_{jt}^4,i_{jt}^5)(i_{jt}^6,i_{jt}^7) \cdot \prod_{i=0}^{\frac{t}{8}-2}(i_{jt}^8,(i+1)_{jt}^1)). \end{array}$ \end{small}\\ Here, $(i+1)_{jt}^k=jt+8(i+1)+k$ for $0 \leq i \leq \frac{t}{8}-2$. For $0\leq i\leq \frac{t}{8}-2$, $c$ interchanges $i_{jt}^8$ and $(i+1)_{jt}^1$, and also $ci_{jt}^8$ and $c(i-1)_{jt}^1$. It is easy to see that $a$ is fixed under conjugation by $c$, that is, $a^c=a$. It follows that $(ac)^2=1$. Furthermore, \vskip 0.2cm \begin{small} $\begin{array}{lcl} ab&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},i_{0}^{2},i_{2t}^{1},i_{2t}^{2})(i_{t}^{1},i_{t}^{2},i_{3t}^{1},i_{3t}^{2}) (i_{4t}^{1},i_{4t}^{2},i_{7t}^{1},i_{7t}^{2})(i_{5t}^{1},i_{5t}^{2},ci_{7t}^{8},ci_{7t}^{7})\\ & &(i_{6t}^{1},i_{6t}^{2},ci_{2t}^{8},ci_{2t}^{7})(i_{8t}^{1},ci_{9t}^{7},ci_{12t}^{8},i_{11t}^{2}) (i_{9t}^{1},ci_{8t}^{7},i_{10t}^{1},ci_{13t}^{7})(i_{11t}^{1},ci_{12t}^{7},ci_{9t}^{8},i_{8t}^{2})\\ & &(i_{12t}^{1},ci_{11t}^{7},ci_{15t}^{8},i_{14t}^{2})(i_{13t}^{1},ci_{10t}^{7},ci_{14t}^{8},i_{15t}^{2}) (i_{14t}^{1},ci_{15t}^{7},ci_{11t}^{8},i_{12t}^{2})(i_{15t}^{1},ci_{14t}^{7},ci_{10t}^{8},i_{13t}^{2})\\ & &(i_{0}^{8},i_{0}^{7},i_{4t}^{8},i_{4t}^{7})(i_{t}^{8},i_{t}^{7},i_{5t}^{8},i_{5t}^{7}) (i_{3t}^{8},i_{3t}^{7},i_{6t}^{8},i_{6t}^{7})(i_{8t}^{8},ci_{9t}^{2},i_{13t}^{8},ci_{10t}^{2}) \\ & &(i_{0}^{3},i_{2t}^{4},ci_{11t}^{6},ci_{15t}^{5})(i_{t}^{3},i_{3t}^{4},ci_{10t}^{6},ci_{14t}^{5}) (i_{2t}^{3},i_{0}^{4},ci_{15t}^{6},ci_{11t}^{5})(i_{3t}^{3},i_{t}^{4},ci_{14t}^{6},ci_{10t}^{5})\\ & &(i_{4t}^{3},i_{7t}^{4},i_{9t}^{3},ci_{13t}^{5})(i_{5t}^{3},ci_{7t}^{5},ci_{9t}^{6},ci_{12t}^{5}) (i_{6t}^{3},ci_{2t}^{5},i_{11t}^{3},i_{8t}^{4})(i_{7t}^{3},i_{4t}^{4},ci_{13t}^{6},i_{9t}^{4})\\ & &(i_{8t}^{3},i_{11t}^{4},ci_{2t}^{6},i_{6t}^{4})(i_{10t}^{3},ci_{8t}^{5},ci_{6t}^{6},ci_{3t}^{5}) (i_{12t}^{3},i_{14t}^{4},ci_{t}^{6},ci_{5t}^{5})(i_{13t}^{3},i_{15t}^{4},ci_{0}^{6},ci_{4t}^{5})\\ &
&(i_{14t}^{3},i_{12t}^{4},ci_{5t}^{6},ci_{t}^{5})(i_{15t}^{3},i_{13t}^{4},ci_{4t}^{6},ci_{0}^{5}) (i_{5t}^{4},ci_{12t}^{6},ci_{9t}^{5},ci_{7t}^{6})(i_{10t}^{4},ci_{3t}^{6},ci_{6t}^{5},ci_{8t}^{6}),\\ \end{array}$ \end{small} \vskip 0.2cm \begin{small} $\begin{array}{lcl} bc&=&\prod_{i=0}^{1}(1 +8ti,3 +8ti, \cdots, 2t-1 +8ti,2t +8ti,2t-2 +8ti, \cdots, 2 +8ti)\\ & &(2t+1 +8ti,2t+3 +8ti, \cdots, 4t-1 +8ti, 6t +8ti,6t-2 +8ti, \cdots, 4t+2 +8ti)\\ & &(4t+1 +8ti,4t+3 +8ti,\cdots, 6t-1 +8ti,4t +8ti,4t-2 +8ti, \cdots, 2t+2 +8ti)\\ & &(6t+1 +8ti,6t+3 +8ti, \cdots, 8t-1 +8ti, 8t +8ti,8t-2 +8ti, \cdots, 6t+2 +8ti), \end{array}$ \end{small} \vskip 0.2cm \begin{small} $\begin{array}{rl} (bc)^{2^{n-6}}=&\prod_{i=0}^{t-1}(1+i,2t-i)(6t+1+i,8t-i)(8t+1+i,10t-i)(14t+1+i,16t-i) \\ &\cdot\prod_{i=0}^{2t-1}(2t+1+i,6t-i)(10t+1+i,14t-i). \\ \end{array}$ \end{small} The above computations imply $(ab)^4=1$ and $(bc)^{2^{n-5}}=1$. Furthermore, \vskip 0.2cm \begin{small} $\begin{array}{lcl} (ab)^2&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},i_{2t}^{1})(i_{t}^{1},i_{3t}^{1})(i_{4t}^{1},i_{7t}^{1})(i_{9t}^{1},i_{10t}^{1}) (i_{5t}^{1},ci_{7t}^{8})(i_{6t}^{1},ci_{2t}^{8})(i_{8t}^{1},ci_{12t}^{8})(i_{11t}^{1},ci_{9t}^{8})\\ &&(i_{12t}^{1},ci_{15t}^{8})(i_{13t}^{1},ci_{14t}^{8})(i_{14t}^{1},ci_{11t}^{8})(i_{15t}^{1},ci_{10t}^{8}) (i_{0}^{8},i_{4t}^{8})(i_{t}^{8},i_{5t}^{8})(i_{3t}^{8},i_{6t}^{8})(i_{8t}^{8},i_{13t}^{8})\\ &&(i_{0}^{2},i_{2t}^{2})(i_{t}^{2},i_{3t}^{2})(i_{4t}^{2},i_{7t}^{2})(i_{9t}^{2},i_{10t}^{2}) (i_{5t}^{2},ci_{7t}^{7})(i_{6t}^{2},ci_{2t}^{7})(i_{8t}^{2},ci_{12t}^{7})(i_{11t}^{2},ci_{9t}^{7})\\ &&(i_{12t}^{2},ci_{15t}^{7})(i_{13t}^{2},ci_{14t}^{7})(i_{14t}^{2},ci_{11t}^{7})(i_{15t}^{2},ci_{10t}^{7}) (i_{0}^{7},i_{4t}^{7})(i_{t}^{7},i_{5t}^{7})(i_{3t}^{7},i_{6t}^{7})(i_{8t}^{7},i_{13t}^{7})\\ &&(i_{4t}^{3},i_{9t}^{3})(i_{6t}^{3},i_{11t}^{3})(i_{0}^{3},ci_{11t}^{6})(i_{t}^{3},ci_{10t}^{6}) 
(i_{2t}^{3},ci_{15t}^{6})(i_{3t}^{3},ci_{14t}^{6})(i_{5t}^{3},ci_{9t}^{6})(i_{7t}^{3},ci_{13t}^{6})\\ &&(i_{8t}^{3},ci_{2t}^{6})(i_{10t}^{3},ci_{6t}^{6})(i_{12t}^{3},ci_{t}^{6})(i_{13t}^{3},i_{0}^{6}) (i_{14t}^{3},ci_{5t}^{6})(i_{15t}^{3},ci_{4t}^{6})(i_{3t}^{6},i_{8t}^{6})(i_{7t}^{6},i_{12t}^{6})\\ & &(i_{4t}^{4},i_{9t}^{4})(i_{6t}^{4},i_{11t}^{4})(i_{0}^{4},ci_{11t}^{5})(i_{t}^{4},ci_{10t}^{5}) (i_{2t}^{4},ci_{15t}^{5})(i_{3t}^{4},ci_{14t}^{5})(i_{5t}^{4},ci_{9t}^{5})(i_{7t}^{4},ci_{13t}^{5})\\ &&(i_{8t}^{4},ci_{2t}^{5})(i_{10t}^{4},ci_{6t}^{5})(i_{12t}^{4},ci_{t}^{5})(i_{13t}^{4},i_{0}^{5}) (i_{14t}^{4},ci_{5t}^{5})(i_{15t}^{4},ci_{4t}^{5})(i_{3t}^{5},i_{8t}^{5})(i_{7t}^{5},i_{12t}^{5}),\\ c(ab)^2c&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},i_{4t}^{1})(i_{t}^{1},i_{5t}^{1})(i_{3t}^{1},i_{6t}^{1})(i_{8t}^{1},i_{13t}^{1}) (i_{2t}^{1},ci_{6t}^{8})(i_{7t}^{1},ci_{5t}^{8})(i_{9t}^{1},ci_{11t}^{8})(i_{10t}^{1},ci_{15t}^{8})\\ & &(i_{11t}^{1},ci_{14t}^{8})(i_{12t}^{1},i_{8t}^{8})(i_{14t}^{1},ci_{13t}^{8})(i_{15t}^{1},ci_{12t}^{8}) (i_{0}^{8},i_{2t}^{8})(i_{t}^{8},i_{3t}^{8})(i_{4t}^{8},i_{7t}^{8})(i_{9t}^{8},i_{10t}^{8})\\ &&(i_{4t}^{2},i_{9t}^{2})(i_{6t}^{2},i_{11t}^{2})(i_{0}^{2},ci_{11t}^{7})(i_{t}^{2},ci_{10t}^{7}) (i_{2t}^{2},ci_{15t}^{7})(i_{3t}^{2},ci_{14t}^{7})(i_{5t}^{2},ci_{9t}^{7})(i_{7t}^{2},ci_{13t}^{7})\\ &&(i_{8t}^{2},ci_{2t}^{7})(i_{10t}^{2},ci_{6t}^{7})(i_{12t}^{2},ci_{t}^{7})(i_{13t}^{2},ci_{0}^{7}) (i_{14t}^{2},ci_{5t}^{7})(i_{15t}^{2},ci_{4t}^{7})(i_{7t}^{7},i_{12t}^{7})(i_{8t}^{7},i_{3t}^{7})\\ &&(i_{0}^{3},i_{2t}^{3})(i_{t}^{3},i_{3t}^{3})(i_{4t}^{3},i_{7t}^{3})(i_{9t}^{3},i_{10t}^{3}) (i_{5t}^{3},ci_{7t}^{6})(i_{6t}^{3},ci_{2t}^{6})(i_{8t}^{3},ci_{12t}^{6})(i_{11t}^{3},ci_{9t}^{6})\\ & &(i_{12t}^{3},ci_{15t}^{6})(i_{13t}^{3},ci_{14t}^{6})(i_{14t}^{3},ci_{11t}^{6})(i_{15t}^{3},ci_{10t}^{6}) (i_{0}^{6},i_{4t}^{6})(i_{t}^{6},i_{5t}^{6})(i_{3t}^{6},i_{6t}^{6})(i_{8t}^{6},i_{13t}^{6})\\ 
&&(i_{3t}^{4},i_{8t}^{4})(i_{7t}^{4},i_{12t}^{4})(i_{0}^{4},ci_{13t}^{5})(i_{t}^{4},ci_{12t}^{5}) (i_{2t}^{4},ci_{8t}^{5})(i_{4t}^{4},ci_{15t}^{5})(i_{5t}^{4},ci_{14t}^{5})(i_{6t}^{4},ci_{10t}^{5})\\ & &(i_{9t}^{4},ci_{5t}^{5})(i_{10t}^{4},ci_{t}^{5})(i_{11t}^{4},i_{0}^{5})(i_{13t}^{4},ci_{7t}^{5}) (i_{14t}^{4},ci_{3t}^{5})(i_{15t}^{4},ci_{2t}^{5})(i_{4t}^{5},i_{9t}^{5})(i_{6t}^{5},i_{11t}^{5}),\\ \end{array}$ \end{small} \begin{small} $\begin{array}{lcl} [(ab)^2,c]&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},ci_{6t}^{8},ci_{t}^{8},i_{7t}^{1}) (i_{t}^{1},i_{6t}^{1},ci_{0}^{8},ci_{7t}^{8}) (i_{2t}^{1},i_{4t}^{1},ci_{5t}^{8},ci_{3t}^{8}) (i_{3t}^{1},i_{5t}^{1},ci_{4t}^{8},ci_{2t}^{8})\\ & &(i_{8t}^{1},i_{15t}^{1},ci_{9t}^{8},ci_{14t}^{8}) (i_{9t}^{1},ci_{15t}^{8},ci_{8t}^{8},i_{14t}^{1}) (i_{10t}^{1},ci_{11t}^{8},ci_{13t}^{8},i_{12t}^{1}) (i_{11t}^{1},ci_{10t}^{8},ci_{12t}^{8},i_{13t}^{1})\\ & &(i_{0}^{2},ci_{15t}^{7},ci_{t}^{7},i_{14t}^{2}) (i_{t}^{2},ci_{14t}^{7},ci_{0}^{7},i_{15t}^{2}) (i_{2t}^{2},ci_{11t}^{7},ci_{5t}^{7},i_{12t}^{2}) (i_{3t}^{2},ci_{10t}^{7},ci_{4t}^{7},i_{13t}^{2})\\ & &(i_{4t}^{2},ci_{13t}^{7},ci_{3t}^{7},i_{10t}^{2}) (i_{5t}^{2},ci_{12t}^{7},ci_{2t}^{7},i_{11t}^{2}) (i_{6t}^{2},i_{8t}^{2},ci_{7t}^{7},ci_{9t}^{7}) (i_{7t}^{2},i_{9t}^{2},ci_{6t}^{7},ci_{8t}^{7})\\ & &(i_{0}^{3},i_{14t}^{3},ci_{t}^{6},ci_{15t}^{6}) (i_{t}^{3},i_{15t}^{3},ci_{0}^{6},ci_{14t}^{6}) (i_{2t}^{3},i_{12t}^{3},ci_{5t}^{6},ci_{11t}^{6}) (i_{3t}^{3},i_{13t}^{3},ci_{4t}^{6},ci_{10t}^{6}) \\ & &(i_{4t}^{3},i_{10t}^{3},ci_{3t}^{6},ci_{13t}^{6}) (i_{5t}^{3},i_{11t}^{3},ci_{2t}^{6},ci_{12t}^{6}) (i_{6t}^{3},ci_{9t}^{6},ci_{7t}^{6},i_{8t}^{3}) (i_{7t}^{3},ci_{8t}^{6},ci_{6t}^{6},i_{9t}^{3}) \\ & &(i_{0}^{4},ci_{6t}^{5},ci_{t}^{5},i_{7t}^{4}) (i_{t}^{4},i_{6t}^{4},ci_{0}^{5},ci_{7t}^{5}) (i_{2t}^{4},i_{4t}^{4},ci_{5t}^{5},ci_{3t}^{5}) (i_{3t}^{4},i_{5t}^{4},ci_{4t}^{5},ci_{2t}^{5})\\ & &(i_{8t}^{4},i_{15t}^{4},ci_{9t}^{5},ci_{14t}^{5}) 
(i_{9t}^{4},ci_{15t}^{5},ci_{8t}^{5},i_{14t}^{4}) (i_{10t}^{4},ci_{11t}^{5},ci_{13t}^{5},i_{12t}^{4}) (i_{11t}^{4},ci_{10t}^{5},ci_{12t}^{5},i_{13t}^{4}),\\ [(ab)^2,c]^b&=&\prod_{i=0}^{\frac{t}{8}-1} (i_{0}^{1},ci_{15t}^{8},ci_{t}^{8},i_{14t}^{1}) (i_{t}^{1},ci_{14t}^{8},ci_{0}^{8},i_{15t}^{1}) (i_{2t}^{1},ci_{11t}^{8},ci_{5t}^{8},i_{12t}^{1}) (i_{3t}^{1},ci_{10t}^{8},ci_{4t}^{8},i_{13t}^{1}) \\ & &(i_{4t}^{1},ci_{13t}^{8},ci_{3t}^{8},i_{10t}^{1}) (i_{5t}^{1},ci_{12t}^{8},ci_{2t}^{8},i_{11t}^{1}) (i_{6t}^{1},i_{8t}^{1},ci_{7t}^{8},ci_{9t}^{8}) (i_{7t}^{1},i_{9t}^{1},ci_{6t}^{8},ci_{8t}^{8}) \\ & &(i_{0}^{2},ci_{6t}^{7},ci_{t}^{7},i_{7t}^{2}) (i_{t}^{2},i_{6t}^{2},ci_{0}^{7},ci_{7t}^{7}) (i_{2t}^{2},i_{4t}^{2},ci_{5t}^{7},ci_{3t}^{7}) (i_{3t}^{2},i_{5t}^{2},ci_{4t}^{7},ci_{2t}^{7})\\ & &(i_{8t}^{2},i_{15t}^{2},ci_{9t}^{7},ci_{14t}^{7}) (i_{9t}^{2},ci_{15t}^{7},ci_{8t}^{7},i_{14t}^{2}) (i_{10t}^{2},ci_{11t}^{7},ci_{13t}^{7},i_{12t}^{2}) (i_{11t}^{2},ci_{10t}^{7},ci_{12t}^{7},i_{13t}^{2})\\ & &(i_{0}^{3},ci_{6t}^{6},ci_{t}^{6},i_{7t}^{3}) (i_{t}^{3},i_{6t}^{3},ci_{0}^{6},ci_{7t}^{6}) (i_{2t}^{3},i_{4t}^{3},ci_{5t}^{6},ci_{3t}^{6}) (i_{3t}^{3},i_{5t}^{3},ci_{4t}^{6},ci_{2t}^{6})\\ & &(i_{8t}^{3},i_{15t}^{3},ci_{9t}^{6},ci_{14t}^{6}) (i_{9t}^{3},ci_{15t}^{6},ci_{8t}^{6},i_{14t}^{3}) (i_{10t}^{3},ci_{11t}^{6},ci_{13t}^{6},i_{12t}^{3}) (i_{11t}^{3},ci_{10t}^{6},ci_{12t}^{6},i_{13t}^{3})\\ & &(i_{0}^{4},ci_{14t}^{4},ci_{t}^{5},ci_{15t}^{5}) (i_{t}^{4},i_{15t}^{4},ci_{0}^{5},ci_{14t}^{5}) (i_{2t}^{4},i_{12t}^{4},ci_{5t}^{5},ci_{11t}^{5}) (i_{3t}^{4},i_{13t}^{4},ci_{4t}^{5},ci_{10t}^{5})\\ & &(i_{4t}^{4},i_{10t}^{4},ci_{3t}^{5},ci_{13t}^{5}) (i_{5t}^{4},i_{11t}^{4},ci_{2t}^{5},ci_{12t}^{5}) (i_{6t}^{4},ci_{9t}^{5},ci_{7t}^{5},i_{8t}^{4}) (i_{7t}^{4},ci_{8t}^{5},ci_{6t}^{5},i_{9t}^{4}),\\ \end{array}$ \end{small} \vskip 0.2cm Let $A=\lg a, b, c\rg$. Now, one may see that $[(ab)^2,c]^{bc}=[(ab)^2,c]^b$. 
By Proposition~\ref{commutator}, $[a,(bc)^2]=[(ab)^2,c]^{bc}$ and $[a,(cb)^2]=[(ab)^2,c]^{b}$. It follows that $[a,(bc)^2]= [a,(cb)^2]$ and hence $[a,(bc)^4]=1$. For $1 \leq j \leq 8$, it is clear that $(bc)^{2^{n-6}}$ interchanges $i_{0}^{j}$ and $ci_{t}^{9-j}$ as $i_0^j+ci_{t}^{9-j}=2t+1$ (note that $1\leq i_0^j\leq t$ and $t+1\leq ci_t^{9-j}\leq 2t$), and similarly $(bc)^{2^{n-6}}$ also interchanges $i_{2t}^{j}$ and $ci_{5t}^{9-j}$, $i_{4t}^{j}$ and $ci_{3t}^{9-j}$, $i_{6t}^{j}$ and $ci_{7t}^{9-j}$, $i_{8t}^{j}$ and $ci_{9t}^{9-j}$, $i_{10t}^{j}$ and $ci_{13t}^{9-j}$, $i_{12t}^{j}$ and $ci_{11t}^{9-j}$, and $i_{14t}^{j}$ and $ci_{15t}^{9-j}$. This implies $(bc)^{2^{n-6}}=([(ab)^2,c]^b)^2=([(ab)^2,c]^{bc})^2=[a,(bc)^2]^2$. It follows that the generators $a,b,c$ of $A$ satisfy the same relations as do $\rho_0,\rho_1,\rho_2$ in $G$, and hence $A$ is isomorphic to $G$ with order $2^n$. Again let $L_1=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rg$. 
The generators $\rho_0, \rho_1, \rho_2$ in $L_1$ satisfy all relations in $G$. This means that $o(\rho_0\rho_1)=4$ in $G$, and by Proposition~\ref{stringC}, $(G,\{\rho_0,\rho_1,\rho_2\})$ is a string C-group. \medskip \noindent {\bf Case 2:} $G=G_7$. We construct a permutation representation graph of $G$; in this graph, $bc$ consists of one path of length $2t$, with labels alternating between $c$ and $b$, where $t=2^{n-6}$. Again, the graph is too big to be drawn in this paper. Write $i_{jt}^k=jt+16i+k$, where $0 \leq i \leq \frac{t}{16}-1$, $1\leq k \leq 16$ and $0 \leq j \leq 1$. The permutations $a, b, c$ on the set $\{1,2,\cdots, 2^{n-5}\}$ are as follows: \vskip 0.2cm \begin{small} $\begin{array}{rl} a=&\prod_{i=0}^{\frac{t}{16}-1}(i_0^5,i_{t}^5)(i_0^6,i_{t}^6)(i_0^7,i_{t}^7)(i_0^8,i_{t}^8) (i_0^9,i_{t}^9)(i_0^{10},i_{t}^{10})(i_0^{11},i_{t}^{11})(i_0^{12},i_{t}^{12})\\ b=&(0_{0}^{1})(0_{t}^{1})((\frac{t}{16}-1)_{0}^{16},(\frac{t}{16}-1)_{t}^{16})\prod_{j=0}^{1}(\prod_{i=0}^{\frac{t}{16}-1} (i_{jt}^{2},i_{jt}^{3})(i_{jt}^{4},i_{jt}^{5})(i_{jt}^{6},i_{jt}^{7}) (i_{jt}^{8},i_{jt}^{9})(i_{jt}^{10},i_{jt}^{11})\\ &(i_{jt}^{12},i_{jt}^{13})(i_{jt}^{14},i_{jt}^{15})\cdot \prod_{i=0}^{\frac{t}{16}-2}(i_{jt}^{16},(i+1)_{jt}^{1})), \\ c=&\prod_{j=0}^{1}\prod_{i=0}^{\frac{t}{16}-1}(i_{jt}^{1},i_{jt}^{2})(i_{jt}^{3},i_{jt}^{4}) (i_{jt}^{5},i_{jt}^{6})(i_{jt}^{7},i_{jt}^{8}) (i_{jt}^{9},i_{jt}^{10})(i_{jt}^{11},i_{jt}^{12}) (i_{jt}^{13},i_{jt}^{14})(i_{jt}^{15},i_{jt}^{16}). 
\end{array}$ \end{small}\\ Here, $(i+1)_{jt}^k=jt+16(i+1)+k$ for $0 \leq i \leq \frac{t}{16}-2$. It is easy to see that $a$ is fixed under conjugation by $c$, that is, $a^c=a$. It follows that $(ac)^2=1$. Furthermore, \vskip 0.2cm \begin{small} $\begin{array}{lcl} ab&=&((\frac{t}{16}-1)_{0}^{16},(\frac{t}{16}-1)_{t}^{16}) \cdot \prod_{i=0}^{\frac{t}{16}-1}(i_0^2,i_0^3)(i_t^2,i_t^3)(i_0^4,i_0^5,i_{t}^4,i_{t}^5)(i_0^6,i_{t}^7)(i_0^7,i_{t}^6)(i_0^8,i_{t}^9)(i_0^9,i_{t}^8)\\ &&(i_0^{10},i_{t}^{11})(i_0^{11},i_{t}^{10})(i_0^{12},i_{t}^{13},i_{t}^{12},i_0^{13}) (i_0^{14},i_{0}^{15})(i_t^{15},i_{t}^{14})\cdot\prod_{j=0}^{1}\prod_{i=0}^{\frac{t}{16}-2}(i_{jt}^{16},(i+1)_{jt}^{1}),\\ cb&=&(1,3,5, \cdots,t-1,2t,2t-2,\cdots,t+2,t+1,t+3,\cdots,2t-1,t,t-2,\cdots,4,2) \\ \end{array}$ \end{small} \vskip 0.2cm The above computations imply $(ab)^4=1$ and $(bc)^{2^{n-5}}=1$. Moreover, we have \vskip 0.2cm \begin{small} $\begin{array}{lcl} (ab)^2&=&\prod_{i=0}^{\frac{t}{16}-1}(i_{0}^{4},i_{t}^{4})(i_{0}^{5},i_{t}^{5}) (i_{0}^{12},i_{t}^{12})(i_{0}^{13},i_{t}^{13}),\\ c(ab)^2c&=&\prod_{i=0}^{\frac{t}{16}-1}(i_{0}^{3},i_{t}^{3})(i_{0}^{6},i_{t}^{6}) (i_{0}^{11},i_{t}^{11})(i_{0}^{14},i_{t}^{14}),\\ [(ab)^2,c]&=&\prod_{i=0}^{\frac{t}{16}-1}(i_{0}^{3},i_{t}^{3})(i_{0}^{4},i_{t}^{4}) (i_{0}^{5},i_{t}^{5})(i_{0}^{6},i_{t}^{6}) (i_{0}^{11},i_{t}^{11})(i_{0}^{12},i_{t}^{12}) (i_{0}^{13},i_{t}^{13})(i_{0}^{14},i_{t}^{14}),\\ [(ab)^2,c]^b&=&\prod_{i=0}^{\frac{t}{16}-1}(i_0^2,i_{t}^2)(i_0^4,i_{t}^4)(i_0^5,i_{t}^5)(i_0^7,i_{t}^7) (i_0^{10},i_{t}^{10})(i_0^{12},i_{t}^{12})(i_0^{13},i_{t}^{13})(i_0^{15},i_{t}^{15}),\\ [(ab)^2,c]^{bc}&=&\prod_{i=0}^{\frac{t}{16}-1}(i_0^1,i_{t}^1)(i_0^3,i_{t}^3)(i_0^6,i_{t}^6)(i_0^8,i_{t}^8) (i_0^{9},i_{t}^{9})(i_0^{11},i_{t}^{11})(i_0^{14},i_{t}^{14})(i_0^{16},i_{t}^{16}).\\ \end{array}$ \end{small} \vskip 0.2cm Let $A=\lg a, b, c\rg$. Since $[a,c]=1$, by Proposition~\ref{commutator} we have $[a,(bc)^2]=[(ab)^2,c]^{bc}$, and hence $[a,(bc)^2]^2=1$. 
The element $(bc)^{2^{n-6}}$ interchanges $i_{0}^{k}$ and $i_{t}^{k}$ as $i_{t}^{k}-i_{0}^{k}=t$ (note that $1 \leq i_{0}^{k}\leq t$ and $t+1 \leq i_{t}^{k} \leq 2t$), which implies $[(ab)^2,c]^{b}[(ab)^2,c]^{bc}=(bc)^{2^{n-6}}$. Clearly, $[(ab)^2,c]^c=[(ab)^2,c]$. Again by Proposition~\ref{commutator}, $[a,(bc)^4]=[a,(bc)^2][a,(bc)^2]^{(bc)^2}=[(ab)^2,c]^{bc}[(ab)^2,c]^{(bc)^3}= ([(ab)^2,c]^{cb}[(ab)^2,c]^{bc})^{(bc)^2}=([(ab)^2,c]^b[(ab)^2,c]^{bc})^{(bc)^2} =(bc)^{2^{n-6}}$. It follows that the generators $a,b,c$ of $A$ satisfy the same relations as do $\rho_0,\rho_1,\rho_2$ in $G$, and hence $A$ is a quotient group of $G$. In particular, $o(bc)=2^{n-5}$ in $A$, and hence $o(\rho_1\rho_2)=2^{n-5}$ in $G$. It follows that $|G|=|\lg(\rho_1\rho_2)^8\rg| \cdot |G/\lg(\rho_1\rho_2)^8\rg|=2^{n-8}\cdot 2^8=2^n$. 
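The relations used in this case can also be checked mechanically for a concrete small parameter. The sketch below transcribes the Case 2 permutations $a$, $b$, $c$ for $t=32$ (so $2t=64$ points and $t/16=2$ blocks of 16 per half); it is our illustration, not part of the proof, and all helper names (`idx`, `perm_from_pairs`, `mul`, `power`) are ours.

```python
# Case 2 permutations a, b, c for the concrete parameter t = 32,
# following the displayed formulas (points are 1..2t).
t = 32
m = t // 16                          # number of 16-point blocks per half

def idx(i, j, k):
    return j * t + 16 * i + k        # the point labelled i_{jt}^k

def perm_from_pairs(pairs):
    p = list(range(2 * t + 1))       # p[0] is unused; points are 1..2t
    for x, y in pairs:
        p[x], p[y] = p[y], p[x]
    return p

# a swaps i_0^k and i_t^k for k = 5, ..., 12
a = perm_from_pairs([(idx(i, 0, k), idx(i, 1, k))
                     for i in range(m) for k in range(5, 13)])

# b fixes 0_0^1 and 0_t^1, swaps the two block-ends (t, 2t), and pairs
# (2,3)(4,5)...(14,15) inside each block plus (16, next block's 1)
pairs = [(idx(m - 1, 0, 16), idx(m - 1, 1, 16))]
for j in range(2):
    pairs += [(idx(i, j, k), idx(i, j, k + 1))
              for i in range(m) for k in range(2, 15, 2)]
    pairs += [(idx(i, j, 16), idx(i + 1, j, 1)) for i in range(m - 1)]
b = perm_from_pairs(pairs)

# c pairs (1,2)(3,4)...(15,16) inside every block
c = perm_from_pairs([(idx(i, j, k), idx(i, j, k + 1))
                     for j in range(2) for i in range(m)
                     for k in range(1, 16, 2)])

def mul(p, q):                       # apply p, then q
    return [q[p[x]] for x in range(len(p))]

def power(p, e):
    r = list(range(len(p)))
    for _ in range(e):
        r = mul(r, p)
    return r

identity = list(range(2 * t + 1))
ab, ac, bc = mul(a, b), mul(a, c), mul(b, c)
```

For this parameter, assertions such as `power(ac, 2) == identity`, `power(ab, 4) == identity` and `power(bc, 2 * t) == identity` reproduce the relations claimed above, and `power(bc, t)` is the involution interchanging $i_0^k$ and $i_t^k$.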
Again let $L_1=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rg$. The generators $\rho_0, \rho_1, \rho_2$ in $L_1$ satisfy all relations in $G$. This means that $o(\rho_0\rho_1)=4$ in $G$, and by Proposition~\ref{stringC}, $(G,\{\rho_0,\rho_1,\rho_2\})$ is a string C-group. \medskip \noindent {\bf Case 3:} $G=G_8$. We construct a permutation representation graph of $G$. In this graph, $bc$ consists of two paths of length $2t$ and a circle of length $4t$, with labels alternating between $c$ and $b$, where $t=2^{n-6}$. Again, the graph is too big to be drawn in this paper. Write $i_{jt}^k=jt+16i+k$ and ${ci}_{jt}^k=jt+16(\frac{t}{16}-i-1)+k$, where $0 \leq i \leq \frac{t}{16}-1$, $1\leq k \leq 16$ and $0\leq j \leq 7$. 
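As a small aid to reading these labels, the following sketch (ours, with hypothetical helper names `idx` and `cidx`) checks that $ci_{jt}^k$ is simply $i_{jt}^k$ with the block index reversed, so that the pairs $(ci_{jt}^{16}, c(i-1)_{jt}^{1})$ used later are ordinary block boundaries $(s_{jt}^{16}, (s+1)_{jt}^{1})$ with $s=\frac{t}{16}-1-i$.

```python
# Point labellings of Case 3: i_{jt}^k walks a 16-point block forwards,
# ci_{jt}^k walks the same blocks backwards (t a multiple of 16).

def idx(i, j, k, t):
    return j * t + 16 * i + k                    # i_{jt}^k

def cidx(i, j, k, t):
    return j * t + 16 * (t // 16 - i - 1) + k    # ci_{jt}^k

t = 64
T = t // 16
# ci_{jt}^k is i'_{jt}^k with the block index reversed: i' = T-1-i
assert all(cidx(i, j, k, t) == idx(T - 1 - i, j, k, t)
           for i in range(T) for j in range(8) for k in range(1, 17))
# (ci_{jt}^{16}, c(i-1)_{jt}^{1}) is the block boundary (s^{16}, (s+1)^{1})
assert all((cidx(i, j, 16, t), cidx(i - 1, j, 1, t))
           == (idx(T - 1 - i, j, 16, t), idx(T - i, j, 1, t))
           for i in range(1, T) for j in range(8))
```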
The permutations $a, b, c$ on the set $\{1,2,\cdots, 2^{n-3}\}$ are as follows: \vskip 0.2cm \begin{small} $\begin{array}{rl} a=&\prod_{i=0}^{\frac{t}{16}-1} (i_0^1,i_{4t}^1)(i_{t}^{1},i_{5t}^{1})(i_{2t}^{1},i_{6t}^{1})(i_{3t}^{1},i_{7t}^{1}) (i_0^2,i_{4t}^2)(i_{t}^{2},i_{5t}^{2})(i_{2t}^{2},i_{6t}^{2})(i_{3t}^{2},i_{7t}^{2})\\ &(i_{t}^{3},i_{2t}^{3})(i_{5t}^{3},i_{6t}^{3})(i_0^3,ci_{4t}^{14})(i_{3t}^{3},ci_{t}^{14}) (i_{4t}^{3},ci_{6t}^{14})(i_{7t}^{3},ci_{3t}^{14})(i_0^{14},i_{5t}^{14})(i_{2t}^{14},i_{7t}^{14})\\ &(i_{t}^{4},i_{2t}^{4})(i_{5t}^{4},i_{6t}^{4})(i_0^4,ci_{4t}^{13})(i_{3t}^{4},ci_{t}^{13}) (i_{4t}^{4},ci_{6t}^{13})(i_{7t}^{4},ci_{3t}^{13})(i_0^{13},i_{5t}^{13})(i_{2t}^{13},i_{7t}^{13})\\ &(i_{t}^{5},i_{4t}^{5})(i_{3t}^{5},i_{6t}^{5})(i_0^5,ci_{2t}^{12})(i_{2t}^{5},ci_{6t}^{12}) (i_{5t}^{5},ci_{t}^{12})(i_{7t}^{5},ci_{5t}^{12})(i_0^{12},i_{3t}^{12})(i_{4t}^{12},i_{7t}^{12})\\ &(i_{t}^{6},i_{4t}^{6})(i_{3t}^{6},i_{6t}^{6})(i_0^6,ci_{2t}^{11})(i_{2t}^{6},ci_{6t}^{11}) (i_{5t}^{6},ci_{t}^{11})(i_{7t}^{6},ci_{5t}^{11})(i_0^{11},i_{3t}^{11})(i_{4t}^{11},i_{7t}^{11})\\ &(i_0^7,ci_{5t}^{10})(i_{t}^{7},ci_{4t}^{10})(i_{2t}^{7},ci_{t}^{10})(i_{3t}^7,ci_0^{10}) (i_{4t}^{7},ci_{7t}^{10})(i_{5t}^{7},ci_{6t}^{10})(i_{6t}^{7},ci_{3t}^{10})(i_{7t}^{7},ci_{2t}^{10})\\ &(i_0^8,ci_{5t}^9)(i_{t}^{8},ci_{4t}^{9})(i_{2t}^{8},ci_{t}^{9})(i_{3t}^8,ci_0^9) (i_{4t}^{8},ci_{7t}^{9})(i_{5t}^{8},ci_{6t}^{9})(i_{6t}^{8},ci_{3t}^{9})(i_{7t}^{8},ci_{2t}^{9})\\ &(i_0^{15},i_{2t}^{15})(i_{t}^{15},i_{3t}^{15})(i_{4t}^{15},i_{6t}^{15})(i_{5t}^{15},i_{7t}^{15}) (i_0^{16},i_{2t}^{16})(i_{t}^{16},i_{3t}^{16})(i_{4t}^{16},i_{6t}^{16})(i_{5t}^{16},i_{7t}^{16}),\\ b=&(0_{0}^{1})((\frac{t}{16}-1)_{t}^{16})(0_{6t}^{1})((\frac{t}{16}-1)_{7t}^{16}) (0_{2t}^1,0_{4t}^{1})((\frac{t}{16}-1)_{3t}^{16},(\frac{t}{16}-1)_{5t}^{16}) \cdot\prod_{i=0}^{3}((\frac{t}{16}-1)_{2ti}^{16},0_{(2i+1)t}^{1})\\ &\cdot 
\prod_{j=0}^{7}(\prod_{i=0}^{\frac{t}{16}-1}(i_{jt}^2,i_{jt}^3)(i_{jt}^4,i_{jt}^5)(i_{jt}^6,i_{jt}^7) (i_{jt}^8,i_{jt}^9)(i_{jt}^{10},i_{jt}^{11})(i_{jt}^{12},i_{jt}^{13})(i_{jt}^{14},i_{jt}^{15})\cdot\prod_{i=0}^{\frac{t}{16}-2}(i_{jt}^{16},(i+1)_{jt}^1)),\\ c=&\prod_{j=0}^{7}\prod_{i=0}^{\frac{t}{16}-1}(i_{jt}^{1},i_{jt}^{2})(i_{jt}^{3},i_{jt}^{4})(i_{jt}^{5},i_{jt}^{6})(i_{jt}^{7},i_{jt}^{8}) (i_{jt}^{9},i_{jt}^{10})(i_{jt}^{11},i_{jt}^{12})(i_{jt}^{13},i_{jt}^{14})(i_{jt}^{15},i_{jt}^{16}).\\ \end{array}$ \end{small}\\ Here, $(i+1)_{jt}^k=jt+16(i+1)+k$ for $0 \leq i \leq \frac{t}{16}-2$. For $0\leq i\leq \frac{t}{16}-2$, $b$ interchanges $i_{jt}^{16}$ and $(i+1)_{jt}^1$, and also $ci_{jt}^{16}$ and $c(i-1)_{jt}^1$. It is easy to see that $a$ is fixed under conjugation by $c$, that is, $a^c=a$. It follows that $(ac)^2=1$. Furthermore, \vskip 0.2cm \begin{small} $\begin{array}{lcl} ab&= &(0_{0}^{1},0_{2t}^{1},0_{6t}^{1},0_{4t}^{1})(0_{t}^{1},(\frac{t}{16}-1)_{4t}^{16},0_{7t}^{1},(\frac{t}{16}-1)_{2t}^{16})\\ & &(0_{3t}^{1},(\frac{t}{16}-1)_{6t}^{16},0_{5t}^{1},(\frac{t}{16}-1)_{0}^{16}) ((\frac{t}{16}-1)_{t}^{16},(\frac{t}{16}-1)_{5t}^{16},(\frac{t}{16}-1)_{7t}^{16},(\frac{t}{16}-1)_{3t}^{16}) \\ & &\prod_{i=0}^{\frac{t}{16}-2} ((i+1)_{0}^{1},i_{4t}^{16},(i+1)_{6t}^{1},i_{2t}^{16}) ((i+1)_{t}^{1},i_{5t}^{16},(i+1)_{7t}^{1},i_{3t}^{16})\\ & &((i+1)_{2t}^{1},i_{6t}^{16},(i+1)_{4t}^{1},i_{0}^{16}) ((i+1)_{3t}^{1},i_{7t}^{16},(i+1)_{5t}^{1},i_{t}^{16})\cdot\\ \end{array}$ \end{small} \begin{small} $\begin{array}{lcl} & &\prod_{i=0}^{\frac{t}{16}-1}(i_{t}^{2},i_{5t}^{3},i_{6t}^{2},i_{2t}^{3}) (i_{2t}^{2},i_{6t}^{3},i_{5t}^{2},i_{t}^{3}) (i_{0}^{2},i_{4t}^{3},ci_{6t}^{15},ci_{4t}^{14}) (i_{3t}^{2},i_{7t}^{3},ci_{3t}^{15},ci_{t}^{14}) \\ & &(i_{4t}^{2},i_{0}^{3},ci_{4t}^{15},ci_{6t}^{14}) (i_{7t}^{2},i_{3t}^{3},ci_{t}^{15},ci_{3t}^{14}) (i_{0}^{14},i_{5t}^{15},i_{7t}^{14},i_{2t}^{15}) (i_{2t}^{14},i_{7t}^{15},i_{5t}^{14},i_{0}^{15})\\ &
&(i_{0}^{4},ci_{4t}^{12},ci_{7t}^{13},ci_{2t}^{12}) (i_{t}^{4},i_{2t}^{5},ci_{6t}^{13},i_{4t}^{5}) (i_{2t}^{4},i_{t}^{5},i_{4t}^{4},ci_{6t}^{12}) (i_{3t}^{4},ci_{t}^{12},i_{5t}^{4},i_{6t}^{5}) \\ & &(i_{6t}^{4},i_{5t}^{5},ci_{t}^{13},i_{3t}^{5}) (i_{7t}^{4},ci_{3t}^{12},ci_{0}^{13},ci_{5t}^{12}) (i_{0}^{5},ci_{2t}^{13},ci_{7t}^{12},ci_{4t}^{13}) (i_{7t}^{5},ci_{5t}^{13},ci_{0}^{12},ci_{3t}^{13})\\ & &(i_{0}^{6},ci_{2t}^{10},i_{7t}^{6},ci_{5t}^{10}) (i_{t}^{6},i_{4t}^{7},ci_{7t}^{11},ci_{4t}^{10}) (i_{2t}^{6},ci_{6t}^{10},i_{5t}^{6},ci_{t}^{10}) (i_{3t}^{6},i_{6t}^{7},ci_{3t}^{11},ci_{0}^{10})\\ & &(i_{4t}^{6},i_{t}^{7},ci_{4t}^{11},ci_{7t}^{10}) (i_{6t}^{6},i_{3t}^{7},ci_{0}^{11},ci_{3t}^{10}) (i_{0}^{7},ci_{5t}^{11},i_{7t}^{7},ci_{2t}^{11}) (i_{2t}^{7},ci_{t}^{11},i_{5t}^{7},ci_{6t}^{11})\\ & &(i_{0}^{8},ci_{5t}^{8},i_{6t}^{8},ci_{3t}^{8}) (i_{t}^{8},ci_{4t}^{8},i_{7t}^{8},ci_{2t}^{8}) (i_{0}^{9},ci_{3t}^{9},i_{6t}^{9},ci_{5t}^{9}) (i_{t}^{9},ci_{2t}^{9},i_{7t}^{9},ci_{4t}^{9}),\\ cb&= &\prod_{i=0}^{1}(1+6ti,3+6ti,\cdots, 2t-1+6ti, 2t+6ti, 2t-2+6ti, \cdots, 2+6ti) \\ & &(2t+1+2ti,2t+3+2ti,\cdots,4t-1+2ti,6t-2ti,6t-2-2ti,\cdots,4t+2-2ti), \\ (cb)^{2^{n-6}}&=&\prod_{i=0}^{t-1}(1+i,2t-i)(6t+1+i,8t-i)\cdot\prod_{i=0}^{2t-1}(2t+1+i,6t-i). \end{array}$ \end{small} The above computations imply $(ab)^4=1$, $(bc)^{2^{n-5}}=1$, and $(bc)^{2^{n-6}}=(cb)^{2^{n-6}}$. 
Furthermore, \vskip 0.2cm \begin{small} $\begin{array}{lcl} (ab)^2&=&\prod_{i=0}^{\frac{t}{16}-1} (i_{0}^{1},i_{6t}^{1})(i_{t}^{1},i_{7t}^{1}) (i_{2t}^{1},i_{4t}^{1})(i_{3t}^{1},i_{5t}^{1}) (i_{0}^{16},i_{6t}^{16})(i_{t}^{16},i_{7t}^{16}) (i_{2t}^{16},i_{4t}^{16})(i_{3t}^{16},i_{5t}^{16})\\ & &(i_{t}^{2},i_{6t}^{2})(i_{2t}^{2},i_{5t}^{2}) (i_{0}^{2},ci_{6t}^{15})(i_{3t}^{2},ci_{3t}^{15}) (i_{4t}^{2},ci_{4t}^{15})(i_{7t}^{2},ci_{t}^{15}) (i_{0}^{15},i_{7t}^{15})(i_{2t}^{15},i_{5t}^{15})\\ &&(i_{t}^{3},i_{6t}^{3})(i_{2t}^{3},i_{5t}^{3}) (i_{0}^{3},ci_{6t}^{14})(i_{3t}^{3},ci_{3t}^{14}) (i_{4t}^{3},ci_{4t}^{14})(i_{7t}^{3},ci_{t}^{14}) (i_{0}^{14},i_{7t}^{14})(i_{2t}^{14},i_{5t}^{14})\\ & &(i_{2t}^{4},i_{4t}^{4})(i_{3t}^{4},i_{5t}^{4}) (i_{0}^{4},ci_{7t}^{13})(i_{t}^{4},ci_{6t}^{13}) (i_{6t}^{4},ci_{t}^{13})(i_{7t}^{4},ci_{0}^{13}) (i_{2t}^{13},i_{4t}^{13})(i_{3t}^{13},i_{5t}^{13})\\ & &(i_{2t}^{5},i_{4t}^{5})(i_{3t}^{5},i_{5t}^{5}) (i_{0}^{5},ci_{7t}^{12})(i_{t}^{5},ci_{6t}^{12}) (i_{6t}^{5},ci_{t}^{12})(i_{7t}^{5},ci_{0}^{12}) (i_{2t}^{12},i_{4t}^{12})(i_{3t}^{12},i_{5t}^{12})\\ & &(i_{0}^{6},i_{7t}^{6})(i_{2t}^{6},i_{5t}^{6}) (i_{t}^{6},ci_{7t}^{11})(i_{3t}^{6},ci_{3t}^{11}) (i_{4t}^{6},ci_{4t}^{11})(i_{6t}^{6},ci_{0}^{11}) (i_{t}^{11},i_{6t}^{11})(i_{2t}^{11},i_{5t}^{11})\\ & &(i_{0}^{7},i_{7t}^{7})(i_{2t}^{7},i_{5t}^{7}) (i_{t}^{7},ci_{7t}^{10})(i_{3t}^{7},ci_{3t}^{10}) (i_{4t}^{7},ci_{4t}^{10})(i_{6t}^{7},ci_{0}^{10}) (i_{t}^{10},i_{6t}^{10})(i_{2t}^{10},i_{5t}^{10})\\ & &(i_{0}^{8},i_{6t}^{8})(i_{t}^{8},i_{7t}^{8}) (i_{2t}^{8},i_{4t}^{8})(i_{3t}^{8},i_{5t}^{8}) (i_{0}^{9},i_{6t}^{9})(i_{t}^{9},i_{7t}^{9}) (i_{2t}^{9},i_{4t}^{9})(i_{3t}^{9},i_{5t}^{9})\\ \end{array}$ $\begin{array}{lcl} c(ab)^2c&=&\prod_{i=0}^{\frac{t}{16}-1} (i_{t}^{1},i_{6t}^{1})(i_{2t}^{1},i_{5t}^{1}) (i_{0}^{1},ci_{6t}^{16})(i_{3t}^{1},ci_{3t}^{16}) (i_{4t}^{1},ci_{4t}^{16})(i_{7t}^{1},ci_{t}^{16}) (i_{0}^{16},i_{7t}^{16})(i_{2t}^{16},i_{5t}^{16})\\ & 
&(i_{0}^{2},i_{6t}^{2})(i_{t}^{2},i_{7t}^{2}) (i_{2t}^{2},i_{4t}^{2})(i_{3t}^{2},i_{5t}^{2}) (i_{0}^{15},i_{6t}^{15})(i_{t}^{15},i_{7t}^{15}) (i_{2t}^{15},i_{4t}^{15})(i_{3t}^{15},i_{5t}^{15})\\ &&(i_{2t}^{3},i_{4t}^{3})(i_{3t}^{3},i_{5t}^{3}) (i_{0}^{3},ci_{7t}^{14})(i_{t}^{3},ci_{6t}^{14}) (i_{6t}^{3},ci_{t}^{14})(i_{7t}^{3},ci_{0}^{14}) (i_{2t}^{14},i_{4t}^{14})(i_{3t}^{14},i_{5t}^{14})\\ & &(i_{t}^{4},i_{6t}^{4})(i_{2t}^{4},i_{5t}^{4}) (i_{0}^{4},ci_{6t}^{13})(i_{3t}^{4},ci_{3t}^{13}) (i_{4t}^{4},ci_{4t}^{13})(i_{7t}^{4},ci_{t}^{13}) (i_{0}^{13},i_{7t}^{13})(i_{2t}^{13},i_{5t}^{13})\\ & &(i_{0}^{5},i_{7t}^{5})(i_{2t}^{5},i_{5t}^{5}) (i_{t}^{5},ci_{7t}^{12})(i_{3t}^{5},ci_{3t}^{12}) (i_{4t}^{5},ci_{4t}^{12})(i_{6t}^{5},ci_{0}^{12}) (i_{t}^{12},i_{6t}^{12})(i_{2t}^{12},i_{5t}^{12})\\ & &(i_{2t}^{6},i_{4t}^{6})(i_{3t}^{6},i_{5t}^{6}) (i_{0}^{6},ci_{7t}^{11})(i_{t}^{6},ci_{6t}^{11}) (i_{6t}^{6},ci_{t}^{11})(i_{7t}^{6},ci_{0}^{11}) (i_{2t}^{11},i_{4t}^{11})(i_{3t}^{11},i_{5t}^{11})\\ & &(i_{0}^{7},i_{6t}^{7})(i_{t}^{7},i_{7t}^{7}) (i_{2t}^{7},i_{4t}^{7})(i_{3t}^{7},i_{5t}^{7}) (i_{0}^{10},i_{6t}^{10})(i_{t}^{10},i_{7t}^{10}) (i_{2t}^{10},i_{4t}^{10})(i_{3t}^{10},i_{5t}^{10})\\ & &(i_{0}^{8},i_{7t}^{8})(i_{2t}^{8},i_{5t}^{8}) (i_{t}^{8},ci_{7t}^{9})(i_{3t}^{8},ci_{3t}^{9}) (i_{4t}^{8},ci_{4t}^{9})(i_{6t}^{8},ci_{0}^{9}) (i_{t}^{9},i_{6t}^{9})(i_{2t}^{9},i_{5t}^{9})\\ \end{array}$ $\begin{array}{lcl} [(ab)^2,c]&=&\prod_{i=0}^{\frac{t}{16}-1} (i_{0}^{1},i_{t}^{1},ci_{t}^{16},ci_{0}^{16}) (i_{2t}^{1},ci_{4t}^{16},ci_{5t}^{16},i_{3t}^{1}) (i_{4t}^{1},i_{5t}^{1},ci_{3t}^{16},ci_{2t}^{16}) (i_{6t}^{1},ci_{6t}^{16},ci_{7t}^{16},i_{7t}^{1})\\ & &(i_{0}^{2},ci_{0}^{15},ci_{t}^{15},i_{t}^{2}) (i_{2t}^{2},i_{3t}^{2},ci_{5t}^{15},ci_{4t}^{15}) (i_{4t}^{2},ci_{2t}^{15},ci_{3t}^{15},i_{5t}^{2}) (i_{6t}^{2},i_{7t}^{2},ci_{7t}^{15},ci_{6t}^{15})\\ & &(i_{0}^{3},i_{t}^{3},ci_{t}^{14},ci_{0}^{14}) (i_{2t}^{3},i_{3t}^{3},ci_{5t}^{14},ci_{4t}^{14})
(i_{4t}^{3},ci_{2t}^{14},ci_{3t}^{14},i_{5t}^{3}) (i_{6t}^{3},ci_{6t}^{14},ci_{7t}^{14},i_{7t}^{3})\\ & &(i_{0}^{4},ci_{0}^{13},ci_{t}^{13},i_{t}^{4}) (i_{2t}^{4},ci_{4t}^{13},ci_{5t}^{13},i_{3t}^{4}) (i_{4t}^{4},i_{5t}^{4},ci_{3t}^{13},ci_{2t}^{13}) (i_{6t}^{4},i_{7t}^{4},ci_{7t}^{13},ci_{6t}^{13})\\ & &(i_{0}^{5},i_{t}^{5},ci_{t}^{12},ci_{0}^{12}) (i_{2t}^{5},ci_{4t}^{12},ci_{5t}^{12},i_{3t}^{5}) (i_{4t}^{5},i_{5t}^{5},ci_{3t}^{12},ci_{2t}^{12}) (i_{6t}^{5},ci_{6t}^{12},ci_{7t}^{12},i_{7t}^{5})\\ & &(i_{0}^{6},ci_{0}^{11},ci_{t}^{11},i_{t}^{6}) (i_{2t}^{6},i_{3t}^{6},ci_{5t}^{11},ci_{4t}^{11}) (i_{4t}^{6},ci_{2t}^{11},ci_{3t}^{11},i_{5t}^{6}) (i_{6t}^{6},i_{7t}^{6},ci_{7t}^{11},ci_{6t}^{11})\\ & &(i_{0}^{7},i_{t}^{7},ci_{t}^{10},ci_{0}^{10}) (i_{2t}^{7},i_{3t}^{7},ci_{5t}^{10},ci_{4t}^{10}) (i_{4t}^{7},ci_{2t}^{10},ci_{3t}^{10},i_{5t}^{7}) (i_{6t}^{7},ci_{6t}^{10},ci_{7t}^{10},i_{7t}^{7})\\ & &(i_{0}^{8},ci_{0}^{9},ci_{t}^{9},i_{t}^{8}) (i_{2t}^{8},ci_{4t}^{9},ci_{5t}^{9},i_{3t}^{8}) (i_{4t}^{8},i_{5t}^{8},ci_{3t}^{9},ci_{2t}^{9}) (i_{6t}^{8},i_{7t}^{8},ci_{7t}^{9},ci_{6t}^{9}),\\ \end{array}$ $\begin{array}{lcl} [(ab)^2,c]^b&=&\prod_{i=0}^{\frac{t}{16}-1} (i_{0}^{1},ci_{0}^{16},ci_{t}^{16},i_{t}^{1}) (i_{2t}^{1},ci_{4t}^{16},ci_{5t}^{16},i_{3t}^{1}) (i_{4t}^{1},i_{5t}^{1},ci_{3t}^{16},ci_{2t}^{16}) (i_{6t}^{1},i_{7t}^{1},ci_{7t}^{16},ci_{6t}^{16})\\ & &(i_{0}^{2},i_{t}^{2},ci_{t}^{15},ci_{0}^{15}) (i_{2t}^{2},i_{3t}^{2},ci_{5t}^{15},ci_{4t}^{15}) (i_{4t}^{2},ci_{2t}^{15},ci_{3t}^{15},i_{5t}^{2}) (i_{6t}^{2},ci_{6t}^{15},ci_{7t}^{15},i_{7t}^{2})\\ & &(i_{0}^{3},ci_{0}^{14},ci_{t}^{14},i_{t}^{3}) (i_{2t}^{3},i_{3t}^{3},ci_{5t}^{14},ci_{4t}^{14}) (i_{4t}^{3},ci_{2t}^{14},ci_{3t}^{14},i_{5t}^{3}) (i_{6t}^{3},i_{7t}^{3},ci_{7t}^{14},ci_{6t}^{14})\\ & &(i_{0}^{4},i_{t}^{4},ci_{t}^{13},ci_{0}^{13}) (i_{2t}^{4},ci_{4t}^{13},ci_{5t}^{13},i_{3t}^{4}) (i_{4t}^{4},i_{5t}^{4},ci_{3t}^{13},ci_{2t}^{13}) (i_{6t}^{4},ci_{6t}^{13},ci_{7t}^{13},i_{7t}^{4})\\ & 
&(i_{0}^{5},ci_{0}^{12},ci_{t}^{12},i_{t}^{5}) (i_{2t}^{5},ci_{4t}^{12},ci_{5t}^{12},i_{3t}^{5}) (i_{4t}^{5},i_{5t}^{5},ci_{3t}^{12},ci_{2t}^{12}) (i_{6t}^{5},i_{7t}^{5},ci_{7t}^{12},ci_{6t}^{12})\\ & &(i_{0}^{6},i_{t}^{6},ci_{t}^{11},ci_{0}^{11}) (i_{2t}^{6},i_{3t}^{6},ci_{5t}^{11},ci_{4t}^{11}) (i_{4t}^{6},ci_{2t}^{11},ci_{3t}^{11},i_{5t}^{6}) (i_{6t}^{6},ci_{6t}^{11},ci_{7t}^{11},i_{7t}^{6})\\ & &(i_{0}^{7},ci_{0}^{10},ci_{t}^{10},i_{t}^{7}) (i_{2t}^{7},i_{3t}^{7},ci_{5t}^{10},ci_{4t}^{10}) (i_{4t}^{7},ci_{2t}^{10},ci_{3t}^{10},i_{5t}^{7}) (i_{6t}^{7},i_{7t}^{7},ci_{7t}^{10},ci_{6t}^{10})\\ & &(i_{0}^{8},i_{t}^{8},ci_{t}^{9},ci_{0}^{9}) (i_{2t}^{8},ci_{4t}^{9},ci_{5t}^{9},i_{3t}^{8}) (i_{4t}^{8},i_{5t}^{8},ci_{3t}^{9},ci_{2t}^{9}) (i_{6t}^{8},ci_{6t}^{9},ci_{7t}^{9},i_{7t}^{8}),\\ \end{array}$ \end{small} \vskip 0.2cm Let $A=\lg a, b, c\rg$. Now it is easy to see that $[(ab)^2,c]^c=[(ab)^2,c]^{-1}$ and $[(ab)^2,c]^{bc}=([(ab)^2,c]^b)^{-1}$. Every $4$-cycle in the product of distinct $4$-cycles of $[(ab)^2,c]$ is either a $4$-cycle or the inverse of a $4$-cycle in $[(ab)^2,c]^b$, and $[(ab)^2,c][(ab)^2,c]^b$ is an involution, which fixes $2^{n-4}$ points including the point $1$. Then $[(ab)^2,c][(ab)^2,c]^b=[(ab)^2,c]^b[(ab)^2,c]$. It is clear that $(cb)^{2^{n-6}}$ interchanges $i_0^j$ and $ci_t^{16-j}$ for each $1\leq j\leq 16$ because $i_0^j+ci_t^{16-j}=2t+1$ (note that $1\leq i_0^j\leq t$ and $t+1\leq ci_t^{16-j}\leq 2t$), and similarly $(cb)^{2^{n-6}}$ also interchanges $i_{2t}^j$ and $ci_{5t}^{16-j}$, $i_{3t}^j$ and $ci_{4t}^{16-j}$, and $i_{6t}^j$ and $ci_{7t}^{16-j}$. It follows that $(cb)^{2^{n-6}}=[(ab)^2,c]^2=([(ab)^2,c]^b)^2$. Since $[a,c]=1$, by Proposition~\ref{commutator} we have $[a,(bc)^2]^2=([(ab)^2,c]^{bc})^2=([(ab)^2,c]^{b})^{-2}= (cb)^{-2^{n-6}}=(bc)^{2^{n-6}}$ and $[a,(bc)^4]=[a,(bc)^2][a,(bc)^2]^{(bc)^2}= ([(ab)^2,c][(ab)^2,c]^{bcbc})^{bc}=([(ab)^2,c]([(ab)^2,c]^{-b})^{bc})^{bc} =([(ab)^2,c]^2)^{bc}=((cb)^{2^{n-6}})^{bc}=(bc)^{2^{n-6}}$.
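For reference, these manipulations repeatedly use the standard commutator identities (with the usual conventions $x^y=y^{-1}xy$ and $[x,y]=x^{-1}y^{-1}xy$, which we assume here):
$$[xy,z]=[x,z]^y[y,z], \qquad [x,yz]=[x,z][x,y]^z.$$
For instance, the second identity with $y=z=(bc)^2$ gives $[a,(bc)^4]=[a,(bc)^2][a,(bc)^2]^{(bc)^2}$, as used above.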
This implies that the generators $a,b,c$ of $A$ satisfy the same relations as do $\rho_0,\rho_1,\rho_2$ in $G$, and hence $A$ is a quotient group of $G$. In particular, $o(cb)=2^{n-5}$ in $A$, and hence $o(\rho_1\rho_2)=2^{n-5}$ in $G$. It follows that $|G|=o((\rho_1\rho_2)^{8}) \cdot |G/\lg(\rho_1\rho_2)^8\rg|=2^{n-8}\cdot 256=2^n$. Again let $L_1=\lg \rho_0, \rho_1, \rho_2 \ |\ \rho_0^2, \rho_1^2, \rho_2^2, (\rho_0\rho_1)^{4}, (\rho_1\rho_2)^{2}, (\rho_0\rho_2)^2\rg$. The generators $\rho_0, \rho_1, \rho_2$ in $L_1$ satisfy all relations in $G$.
This means that $o(\rho_0\rho_1)=4$ in $G$, and by Proposition~\ref{stringC}, $(G,\{\rho_0,\rho_1,\rho_2\})$ is a string C-group. Now we prove the necessity. Let $(G,\{\rho_0,\rho_1,\rho_2\})$ be a string C-group of rank three with type $\{4, 2^{n-5}\}$ and $|G|=2^n$. Then each of $\rho_0,\rho_1$ and $\rho_2$ has order $2$, and we further have $o(\rho_0\rho_1)=4$, $o(\rho_0\rho_2)=2$ and $o(\rho_1\rho_2)=2^{n-5}$. To finish the proof, we only need to prove $G\cong G_5, G_6, G_7$ or $G_8$.
Since $G_5, G_6, G_7$ and $G_8$ are $C$-groups of order $2^n$ of type $\{4, 2^{n-5}\}$, it suffices to show that, in $G$, $[\rho_0, (\rho_2\rho_1)^2]^2 =1$ or $[\rho_0, (\rho_2\rho_1)^2]^2(\rho_1\rho_2)^{2^{n-6}} =1$, and $[\rho_0, (\rho_2\rho_1)^4] =1$ or $[\rho_0, (\rho_2\rho_1)^4](\rho_1\rho_2)^{2^{n-6}} =1$, which will be done by induction on $n$. This is true for $n = 10$ by {\sc Magma}. Assume $n\geq 11$. Take $N = \lg (\rho_1\rho_2)^{2^{n-6}} \rg$.
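The base case is verified with {\sc Magma}. Purely as an illustration of this kind of coset-enumeration check (this is not the authors' computation, and the small Coxeter-type presentations below are standard examples rather than $G_5,\dots,G_8$), the same style of verification can be sketched with sympy:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# Dihedral-type string group <r0, r1 | r0^2, r1^2, (r0 r1)^4>,
# i.e. the symmetry group of the square, of order 8.
F2, r0, r1 = free_group("r0 r1")
D4 = FpGroup(F2, [r0**2, r1**2, (r0*r1)**4])

# Rank-three string (Coxeter) group of type {4, 2}:
# <s0, s1, s2 | s0^2, s1^2, s2^2, (s0 s1)^4, (s1 s2)^2, (s0 s2)^2>,
# isomorphic to D4 x Z2, of order 16.
F3, s0, s1, s2 = free_group("s0 s1 s2")
G42 = FpGroup(F3, [s0**2, s1**2, s2**2,
                   (s0*s1)**4, (s1*s2)**2, (s0*s2)**2])

print(D4.order())   # 8
print(G42.order())  # 16
```

For the actual groups of order $2^{10}$ one would enumerate cosets in the corresponding presentations in the same way, exactly as in the {\sc Magma} check.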
By Lemma~\ref{quotient}, we have $N \unlhd G$ and $(\overline G=G/N, \{\overline{\rho_0}, \overline{\rho_{1}}, \overline{\rho_{2}}\})$ (with $\overline{\rho_i} = N\rho_i$) is a string C-group of rank three of type $\{4, 2^{n-6}\}$. Since $|\overline G| = 2^{n-1}$, by induction hypothesis we may assume $\overline G=\overline G_5, \overline G_6, \overline G_7$ or $\overline G_8$, where
\begin{small}
\begin{itemize}
\item [$\overline G_{5}$] $= \lg \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\overline{\rho_1})^{4}, (\overline{\rho_1}\overline{\rho_2})^{2^{n-6}}, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^2]^2, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4]\rg$,
\item [$\overline G_{6}$] $= \lg \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\overline{\rho_1})^{4}, (\overline{\rho_1}\overline{\rho_2})^{2^{n-6}}, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^2]^2 (\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4]\rg$,
\item [$\overline G_{7}$] $= \lg \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\overline{\rho_1})^{4}, (\overline{\rho_1}\overline{\rho_2})^{2^{n-6}}, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^2]^2, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4](\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}\rg$,
\item [$\overline G_{8}$] $= \lg \overline{\rho_0}, \overline{\rho_1}, \overline{\rho_2} \ |\ \overline{\rho_0}^2, \overline{\rho_1}^2, \overline{\rho_2}^2, (\overline{\rho_0}\overline{\rho_1})^{4}, (\overline{\rho_1}\overline{\rho_2})^{2^{n-6}}, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^2]^2(\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}, [\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4](\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}\rg$.
\end{itemize} \end{small} Then $[\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4]=1$, or $[\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4](\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}=1$, and since $N=\langle (\rho_1\rho_2)^{2^{n-6}}\rangle\cong{\mathbb Z}_2$, we have $[\rho_0, (\rho_1\rho_2)^4]=(\rho_1\rho_2)^{\delta\cdot2^{n-7}}$, where $\delta=0,\pm1,2$.
It follows $[\rho_0, (\rho_1\rho_2)^8]=[\rho_0, (\rho_1\rho_2)^4][\rho_0, (\rho_1\rho_2)^4]^{(\rho_1\rho_2)^4}=(\rho_1\rho_2)^{\delta\cdot2^{n-6}}$, and similarly $[\rho_0, (\rho_1\rho_2)^{16}]=(\rho_1\rho_2)^{\delta\cdot2^{n-5}}=1$. Since $n\geq 11$, we have $[\rho_0, (\rho_1\rho_2)^{2^{n-7}}]=1$, that is, $((\rho_1\rho_2)^{2^{n-7}})^{\rho_0}=(\rho_1\rho_2)^{2^{n-7}}$.
Suppose $\overline G=\overline G_7$ or $\overline G_8$. Then $[\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^4](\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}=1$, that is, $[\rho_0, (\rho_1\rho_2)^4]=(\rho_1\rho_2)^{\gamma\cdot2^{n-7}}$ for $\gamma=\pm1$. It follows that $1=[\rho_0^2, (\rho_1\rho_2)^4]=[\rho_0, (\rho_1\rho_2)^4]^{\rho_0}[\rho_0, (\rho_1\rho_2)^4]=((\rho_1\rho_2)^{\gamma\cdot2^{n-7}})^{\rho_0}(\rho_1\rho_2)^{\gamma\cdot2^{n-7}}= (\rho_1\rho_2)^{\gamma\cdot2^{n-6}}$, which contradicts $o(\rho_1\rho_2)=2^{n-5}$. Suppose $\overline G=\overline G_6$. Then $[\overline{\rho_0}, (\overline{\rho_1}\overline{\rho_2})^2]^2(\overline{\rho_1}\overline{\rho_2})^{2^{n-7}}=1$, that is, $[\rho_0, (\rho_1\rho_2)^2]^2=(\rho_1\rho_2)^{\gamma 2^{n-7}}$ for $\gamma=\pm1$. Recall that $((\rho_1\rho_2)^{\gamma 2^{n-7}})^{\rho_0}=(\rho_1\rho_2)^{\gamma 2^{n-7}}$.
On the other hand, $1=[\rho_0^2, (\rho_1\rho_2)^2]=[\rho_0, (\rho_1\rho_2)^2][\rho_0, (\rho_1\rho_2)^2]^{\rho_0}$, and hence $([\rho_0, (\rho_1\rho_2)^2]^2)^{\rho_0}=([\rho_0, (\rho_1\rho_2)^2]^{\rho_0})^2=([\rho_0, (\rho_1\rho_2)^2]^2)^{-1}$, that is, $((\rho_1\rho_2)^{\gamma 2^{n-7}})^{\rho_0}=(\rho_1\rho_2)^{-\gamma 2^{n-7}}$.
It follows $(\rho_1\rho_2)^{\gamma 2^{n-7}}=(\rho_1\rho_2)^{-\gamma 2^{n-7}}$ and $(\rho_1\rho_2)^{\gamma 2^{n-6}}=1$, a contradiction. Thus, $\overline G=\overline G_5$. Since $N=\langle (\rho_1\rho_2)^{2^{n-6}}\rangle\cong{\mathbb Z}_2$, we have $[\rho_0, (\rho_1\rho_2)^2]^2=1$ or $(\rho_1\rho_2)^{2^{n-6}}$ and $[\rho_0, (\rho_1\rho_2)^4]=1$ or $(\rho_1\rho_2)^{2^{n-6}}$. It follows that $G\cong G_5, G_6, G_7$ or $G_8$. \hfill\mbox{\raisebox{0.7ex}{\fbox{}}} \vspace{4truemm} \medskip \noindent {\bf Acknowledgements:} This work was supported by the National Natural Science Foundation of China (11571035, 11731002) and the 111 Project of China (B16002).
\section{Introduction} \label{sec:intro} At LHC energies, the effect of electroweak (EW) corrections on the cross section can be significant ($\sim 10$\%). These are dominated by EW Sudakov double logarithms, \begin{align} \sigma = \sigma_0\, \sum_{n}\, \sum_{m \leq 2n} c_{nm}\, \alpha_w^n \ln^m \frac{Q}{M} \,,\end{align} where $\sigma_0$ is the Born cross section, $\alpha_w$ is a weak coupling constant ($\alpha_2$ or $\alpha_1$), $Q$ denotes the hard scale (typically taken to be the partonic center of mass energy $\sqrt{\hat s}$) and $M$ is an electroweak scale such as $M_W$, $M_Z$, $m_H$, $m_t$, which we consider to be of the same parametric size.\footnote{Of course there are also mixed QCD-EW corrections, and we will consider their interplay in the analysis.} The energy dependence of EW corrections makes it important to include them when searching for new physics in tails of distributions. It also highlights that these effects are indispensable for cross section predictions at an FCC, where EW logarithms are truly large \cite{Mangano:2016jyj}, and make order-one corrections to the cross section. EW Sudakov logarithms also play an important role in calculations of WIMP dark matter, see e.g.~refs.~\cite{Ciafaloni:2010ti,Baumgart:2014vma, Bauer:2014ula, Ovanesyan:2014fwa, Baumgart:2017nsr}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs/factorization.pdf} \caption{EW corrections to Drell-Yan production: the parton from each proton (blob) emits {\color{ForestGreen} initial-state radiation} before participating in the hard scattering ($Z$ exchange). The outgoing leptons produce {\color{blue} final-state radiation}. These collinear effects are described by the DGLAP evolution of the corresponding PDFs and FFs.
Surprisingly, {\color{orange}soft radiation} between different collinear directions matters because the incoming and outgoing particles are not $SU(2)$ singlets, and this also modifies the DGLAP evolution.} \label{fig:factorization} \end{figure} Most studies of EW logarithms focus on virtual effects. The underlying assumption is that one is fully exclusive, i.e.~all real EW radiation is resolved by detectors. This is not unreasonable because the $W$ and $Z$ bosons are massive, and can be tagged experimentally via their decay products. Electroweak Sudakov logarithms were first studied in refs.~\cite{Ciafaloni:1998xg,Kuhn:1999de,Fadin:1999bq,Beenakker:2000kb,melles1}, a recipe for the next-to-leading order (NLO) corrections was presented in refs.~\cite{Denner:2000jv,Denner:2001gw,Denner:2006jr}, and the two-loop logarithms for a four-fermion process were obtained in refs.~\cite{Kuhn:2001hz,Jantzen:2005az}. Refs.~\cite{Chiu:2007yn,Chiu:2007dg} developed a resummation framework using Soft-Collinear Effective Theory (SCET)~\cite{Bauer:2000ew, Bauer:2000yr, Bauer:2001ct, Bauer:2001yt}, obtaining results at next-to-leading-logarithmic (NLL) plus NLO accuracy. The effect of real radiation can be important, and was studied at NLO in e.g.~refs.~\cite{Baur:2006sn,Bell:2010gi,Stirling:2012ak,Manohar:2014vxa}. In this paper we start from the opposite extreme, considering inclusive processes. One example is Drell-Yan, $pp \to \ell \bar \ell X$, where $X$ is unconstrained, illustrated in \fig{factorization}. When the lepton pair has a small transverse momentum or has a large invariant mass (threshold limit), the cross section contains large double logarithms. Here we will not focus on these regions of phase space, so the QCD corrections do not involve double logarithms. Nevertheless, because the proton is not an electroweak singlet, EW double logarithms remain present~\cite{Ciafaloni:2000df,Ciafaloni:2000rp}, which is one of the salient features of our analysis.
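To get a rough feel for the size of these EW Sudakov corrections, one can evaluate the one-loop double logarithm $\alpha_w/(4\pi)\,\ln^2(Q^2/M^2)$ for LHC-tail and FCC-like hard scales. This is only an order-of-magnitude sketch: the coupling value, the scale choices, and the omitted $\mathcal{O}(1)$ group-theory coefficients are illustrative assumptions, not results from this paper.

```python
import math

alpha_2 = 0.034          # SU(2) coupling near the weak scale (rough value)
M_W = 80.4               # GeV, representative electroweak mass scale M

def sudakov_double_log(Q, alpha=alpha_2, M=M_W):
    """One-loop double-log estimate alpha/(4 pi) * ln^2(Q^2/M^2)."""
    return alpha / (4 * math.pi) * math.log(Q**2 / M**2) ** 2

# LHC-tail scale (1 TeV) gives a ~10% effect; an FCC-like scale (10 TeV)
# pushes the double log toward an order-one correction.
for Q in (1000.0, 10000.0):
    print(f"Q = {Q/1000:.0f} TeV: alpha/(4 pi) ln^2(Q^2/M^2) = "
          f"{sudakov_double_log(Q):.3f}")
```

The rapid growth with $Q$ is the point: the same coupling that gives a percent-level effect at fixed order becomes non-perturbative in the logarithm at FCC energies, which is why resummation is needed.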
In this paper, we develop a framework to perform the resummation of EW logarithms using a factorization theorem that is valid to all orders in perturbation theory. Important ingredients for resummation are the collinear splitting functions, which were determined at leading order in refs.~\cite{Ciafaloni:2001mu,Ciafaloni:2005fm}. These have been implemented in a parton shower~\cite{Christiansen:2014kba,Krauss:2014yaa,Chen:2016wkt} and used to resum initial-state radiation by including them in the evolution of parton distribution functions (PDFs)~\cite{Bauer:2017isx,Bauer:2017bnh}. Our calculation gives the same result for the Sudakov double logarithms as ref.~\cite{Bauer:2017isx}, but we also consider final-state radiation and extend the resummation to NLL. Interestingly, we will see that the splitting functions alone are not enough to account for all the EW logarithms, and soft anomalous dimensions must also be included. We also show the importance of polarization effects for gauge bosons, which are a consequence of the chiral nature of $SU(2)$ and the helicity dependence of splitting functions, and were missed in earlier studies. We achieve resummation using an effective field theory analysis, in the spirit of refs.~\cite{Chiu:2007yn,Chiu:2007dg}. First, the hard scattering is integrated out at the scale $Q$, matching onto an effective field theory in the symmetric phase of $SU(2) \times U(1)$. We then factorize the cross section and use the renormalization group evolution to evolve to the low scale $M$, thereby resumming EW logarithms. Only at the low scale $M$ do we switch to the broken phase. Anomalous dimensions are related to ultraviolet divergences and do not depend on symmetry breaking, which is an infrared effect. The collinear initial- and final-state radiation will be resummed using the DGLAP evolution~\cite{Gribov:1972ri,Altarelli:1977zs,Dokshitzer:1977sg} of the corresponding PDFs and fragmentation functions (FFs).
Surprisingly, for the nonsinglet PDFs there is also a sensitivity to soft radiation. This introduces rapidity divergences, and we use the rapidity renormalization group~\cite{Chiu:2011qc,Chiu:2012ir} to resum the corresponding single logarithms of $Q/M$. We calculate all ingredients necessary for resummation at NLL and provide an explicit recipe for implementing them for $2 \to 2$ processes in the appendix. We end the paper by discussing a range of generalizations: \begin{itemize*} \item Resummation beyond NLL. \item Other processes. \item Kinematic hierarchies which arise when not all of the Mandelstam invariants are of order $Q$. \item Jets identified (inclusively) using a jet algorithm. \item Less inclusive processes where radiation within the range of the detectors is observed, but radiation near the beam axis is not. \end{itemize*} The outline of our paper is as follows. Our factorization analysis, which splits the cross section into collinear and soft parts, is described in \sec{framework}. The renormalization group equations for the collinear sector are given in \sec{RGE_C}, and for the soft sector in \sec{RGE_S}. The matching onto the broken phase of the gauge theory is presented in \sec{low_matching}. The evolution from the hard scale $Q$ to the electroweak scale $M$ accomplishes the resummation of electroweak logarithms of $Q/M$, as discussed in \sec{resummation}. In \sec{comparison} we show how our results compare to electroweak resummation for PDFs in the literature. We discuss the generalizations listed above in \sec{extensions}, and conclude in \sec{conclusions}. For readers mostly interested in the final results, we provide a recipe to include electroweak resummation at NLL accuracy in appendix \ref{app:recipe}. In appendix~\ref{app:xsec}, we give examples of the possible PDF combinations which enter the production of a heavy particle in quark-antiquark annihilation.
\section{Factorization} \label{sec:framework} \begin{figure} \begin{center} \includegraphics[width=0.35\textwidth]{fdfigs/nu_dis.pdf} \end{center} \vspace{-2ex} \caption{\label{fig:nu_dis} Tree-level diagram that contributes to deep-inelastic neutrino scattering.} \end{figure} In this section we present our framework for resumming electroweak logarithms in inclusive cross sections, considering as an example deep-inelastic neutrino scattering $\nu p \to \ell X$, for which a tree-level diagram is shown in \fig{nu_dis}. We start, in \sec{high_matching}, by integrating out the short-distance scattering at the hard scale $Q$. Here we can work in the symmetric phase of the gauge theory, since $Q \gg M \sim M_W, M_Z, m_h, m_t$. The scattering amplitude can be factored into a coefficient and hard scattering operator. We discuss the factorization of the hard-scattering operator into collinear and soft operators in \sec{factorization}, which allows one to sum collinear and soft logarithms using RGEs. The gauge and spin indices are disentangled in \sec{gauge_spin}, allowing one to write the scattering amplitude for any process in terms of a standard basis of collinear and soft operators. In previous work on electroweak resummation using SCET in refs.~\cite{Chiu:2007dg,Chiu:2007yn,Chiu:2008vv,Chiu:2009ft,Chiu:2009mg,Chiu:2009yx,Chiu:2011qc,Chiu:2012ir,Fuhrer:2010eu,Fuhrer:2010vi}, the final state was assumed to consist of particles and jets with masses smaller than the electroweak scale such that only virtual electroweak corrections needed to be taken into account. This allows for the entire analysis from $Q$ down to $M$ to be carried out at the amplitude level. In this paper, we are interested in inclusive cross sections where we sum over final states with masses larger than the electroweak scale (e.g.~in semi-inclusive cross sections), so we square the amplitude and factorize the cross section above the electroweak scale. 
\subsection{Matching at the hard scale} \label{sec:high_matching} The sample process we study is given by lepton-quark scattering, and the hard scattering operators that contribute are \begin{align} \label{eq:hard_ops} O^{(3)}_{\ell q} &= (\bar \ell_1 \gamma^\mu t^a \ell_2)\, (\bar q_3 \gamma_\mu t^a q_4) \,, & O_{\ell q} &= (\bar \ell_1 \gamma^\mu \ell_2)\, (\bar q_3 \gamma_\mu q_4) \,, \nonumber \\ O_{\ell u} &= (\bar \ell_1 \gamma^\mu \ell_2)\, (\bar u_3 \gamma_\mu u_4) \,, & O_{\ell d} &= (\bar \ell_1 \gamma^\mu \ell_2) \, (\bar d_3 \gamma_\mu d_4) \,, \nonumber \\ O_{eq} &= (\bar e_1 \gamma^\mu e_2)\, (\bar q_3 \gamma_\mu q_4) \,, \nonumber \\ O_{eu} &= (\bar e_1 \gamma^\mu e_2)\, (\bar u_3 \gamma_\mu u_4) \,, & O_{ed} &= (\bar e_1 \gamma^\mu e_2) \, (\bar d_3 \gamma_\mu d_4) \,,\end{align} at leading power in $M/Q \ll 1$. The electroweak doublet fields $\ell = (e_L, \nu_L)$ and $q = (u_L, d_L)$ are left-handed, the electroweak singlet fields $e = e_R$, $u = u_R$ and $d=d_R$ are right-handed, and $t^a$ are the $SU(2)$ generators. For quark-quark scattering, one can also have operators such as $ (\bar q_1 \gamma^\mu T^A q_2)\, (\bar q_3 \gamma_\mu T^A q_4)$ which involve the color generators $T^A$, or $ (\bar q_1 \gamma^\mu t^a T^A q_2)\, (\bar q_3 \gamma_\mu t^a T^A q_4)$ which involve both weak and color generators. The subscripts on the fields indicate their momentum, e.g.~$\bar e_1$ has momentum $p_1$. This is important because the hard-matching coefficients $\mathcal{H}_i$ depend on $p_i$, \begin{align} \label{eq:hard_L} {\mathcal L}_{\rm hard} = \sum_i \bigg(\prod_k \int\! \frac{\mathrm{d}^4 p_k}{(2\pi)^4} \bigg)\, (2\pi)^4 \delta^4\Big(\sum_m p_m\Big)\, \mathcal{H}_i(\{p_k\})\, O_i(\{p_k\}) \,.\end{align} We will use the convention that all momenta are incoming. Thus an outgoing particle has momentum $p$ with $p^0<0$. 
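The all-incoming convention can be illustrated with a small numerical sketch (the kinematic values below are made up). Flipping the sign of the outgoing momenta makes momentum conservation read $\sum_m p_m = 0$, as in the delta function of \eq{hard_L}, and all Mandelstam invariants are built uniformly from sums of labels:

```python
import math

def mdot(a, b):
    """Minkowski dot product with (+,-,-,-) signature."""
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

def msq(p):
    return mdot(p, p)

# Physical massless 2 -> 2 kinematics in the center of mass frame
E, th = 50.0, 0.7                                    # beam energy, angle
p1 = (E, 0.0, 0.0, E)                                # incoming
p2 = (E, 0.0, 0.0, -E)                               # incoming
k3 = (E, E*math.sin(th), 0.0, E*math.cos(th))        # outgoing (physical)
k4 = (E, -E*math.sin(th), 0.0, -E*math.cos(th))      # outgoing (physical)

# All-incoming convention: outgoing momenta get an overall minus sign,
# so their energy component is negative.
p3 = tuple(-c for c in k3)
p4 = tuple(-c for c in k4)

# Momentum conservation sum_m p_m = 0
assert all(abs(sum(p[i] for p in (p1, p2, p3, p4))) < 1e-9 for i in range(4))

# Mandelstam invariants, uniformly (p_i + p_j)^2 with no relative signs
s = msq(tuple(a + b for a, b in zip(p1, p2)))
t = msq(tuple(a + b for a, b in zip(p1, p3)))
u = msq(tuple(a + b for a, b in zip(p1, p4)))
assert abs(s + t + u) < 1e-9     # s + t + u = 0 for massless external legs
assert abs(s - 4*E**2) < 1e-9
```

This is the sense in which the convention "avoids certain minus signs": crossing an outgoing particle to an incoming one changes no formulas, only the sign of $p^0$.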
This convention avoids certain minus signs between incoming and outgoing particles in subsequent results, and allows us to treat both with a unified notation. The field $\bar e_1$ contributes to processes with either an outgoing right-handed electron or an incoming left-handed positron, and the two are distinguished by the sign of $p_1^0$. At tree level, $O^{(3)}_{\ell q}$ is generated by $SU(2)$ gauge boson exchange. The other operators in \eq{hard_ops}, which we denote by $O_{AB}$ with $A = \ell, e$ and $B=q, u, d$, are due to the exchange of a $U(1)$ gauge boson. This leads to the matching coefficients \begin{align} \label{eq:H_tree} \mathcal{H}^{(3)}_{\ell q} = \frac{\mathrm{i} g_2^2}{2 p_1 \!\cdot\! p_2} \,, \qquad \mathcal{H}_{AB} = \frac{\mathrm{i} g_1^2\, \mathsf{y}_A\, \mathsf{y}_B}{2 p_1 \!\cdot\! p_2} \,,\end{align} where $g_2$ and $g_1$ are the $SU(2)$ and $U(1)$ couplings, and $\mathsf{y}_A$ and $\mathsf{y}_B$ are the $U(1)$ hypercharges of the fields $A$ and $B$. Since $p_1 \cdot p_2 \sim Q^2 \gg M^2$, gauge boson masses in the propagator are power suppressed, and have been dropped. For example, for neutrino-proton scattering via $\nu q \to e^- X$, the hard scattering at tree level is given by \begin{align}\label{eq:DY} \sum_i \mathcal{H}_i O_i &= \frac{\mathrm{i} g_2^2}{2 p_1 \!\cdot\! p_2}\, O^{(3)}_{\ell q} + \frac{\mathrm{i} g_1^2 }{2 p_1 \!\cdot\! p_2} \left[\mathsf{y}_\ell\, \mathsf{y}_q\, O_{\ell q} + \mathsf{y}_\ell\, \mathsf{y}_u\, O_{\ell u} + \mathsf{y}_\ell\, \mathsf{y}_d\, O_{\ell d} \right] \,. \end{align} Because we will carry out the collinear and soft evolution to the scale $Q$ of the hard scattering, we do not have to calculate the matching coefficients $\mathcal{H}_i$ at one loop to achieve resummation at NLL accuracy. After integrating out the hard gauge boson to obtain \eq{hard_L}, only collinear and soft fluctuations of the fields remain.
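For readers who want to evaluate the $U(1)$ coefficients in \eq{H_tree} explicitly, the sketch below tabulates the hypercharge products $\mathsf{y}_A \mathsf{y}_B$ using the standard-model assignments in the convention $Q = T^3 + \mathsf{y}$; these values are standard, not derived in the text above.

```python
from fractions import Fraction as F

# Standard-model hypercharges in the convention Q = T^3 + y
y = {"l": F(-1, 2), "e": F(-1), "q": F(1, 6), "u": F(2, 3), "d": F(-1, 3)}

# Hypercharge products y_A * y_B multiplying i g1^2 / (2 p1.p2)
# in the U(1) matching coefficients H_AB
products = {(A, B): y[A] * y[B]
            for A in ("l", "e") for B in ("q", "u", "d")}

for (A, B), val in sorted(products.items()):
    print(f"y_{A} y_{B} = {val}")
```

Using exact `Fraction` arithmetic avoids spurious floating-point artifacts in the rational hypercharge products.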
These can be described using SCET, where the Lagrangian ${\mathcal L}_{\rm SCET}$ encodes the dynamics of the collinear and soft fields. For our analysis, we do not need all the technical details of SCET, so we present the discussion in terms of soft and collinear corrections, which should be accessible to a wider audience.\footnote{That is, we use pseudo-SCET analogous to pseudocode in computer science.} We will make frequent use of the following light-like vectors for incoming particles, \begin{align} n_i = (1, \mathbf{n}_i) \,, \qquad \bar n_i = (1, - \mathbf{n}_i) \,,\end{align} where the unit vector $\mathbf{n}_i$ points along the direction of $\mathbf{p}_i$. For outgoing particles with energy $E_j$ and momentum $\mathbf{p}_j$, our convention is that $p_j=(-E_j,-\mathbf{p}_j)$, and \begin{align}\label{eq:3.5} n_j = (-1, -\mathbf{n}_j) \,, \qquad \bar n_j = (-1, \mathbf{n}_j) \,,\end{align} where $\mathbf{n}_j$ is a unit vector in the direction $\mathbf{p}_j$. The matching in \eq{hard_L} removes fluctuations of virtuality $\gtrsim Q$, and the full gauge invariance of the theory is replaced by collinear gauge invariance for each collinear direction, as well as soft gauge invariance~\cite{Bauer:2001yt}. Each field in \eq{hard_ops} corresponds to a distinct collinear direction, so it must be (made) collinearly gauge invariant by itself. This is accomplished by including collinear Wilson lines in the definitions of fields~\cite{Bauer:2001ct}. By including soft Wilson lines, the interactions between collinear and soft fields can be removed from the Lagrangian~\cite{Bauer:2001yt}, and included in the hard scattering operator. 
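The conventions for $n_i$ and $\bar n_i$ can be checked mechanically: the sketch below verifies $n^2 = \bar n^2 = 0$ and $n \cdot \bar n = 2$ for both the incoming and outgoing sign choices (a self-contained check, not part of the formal setup).

```python
def mdot(a, b):
    """Minkowski dot product with (+,-,-,-) signature."""
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

def lightcone_vectors(nvec, incoming=True):
    """n and nbar for a unit direction nvec; outgoing particles
    get the overall minus sign of the all-incoming convention."""
    s = 1.0 if incoming else -1.0
    n    = (s,) + tuple(s*c for c in nvec)
    nbar = (s,) + tuple(-s*c for c in nvec)
    return n, nbar

n_in, nbar_in = lightcone_vectors((0.0, 0.0, 1.0), incoming=True)
n_out, nbar_out = lightcone_vectors((0.0, 0.0, 1.0), incoming=False)

for n, nbar in ((n_in, nbar_in), (n_out, nbar_out)):
    assert abs(mdot(n, n)) < 1e-12          # n is light-like
    assert abs(mdot(nbar, nbar)) < 1e-12    # nbar is light-like
    assert abs(mdot(n, nbar) - 2.0) < 1e-12 # n . nbar = 2 in both cases
```

The overall sign flip for outgoing particles leaves all bilinear relations between $n$ and $\bar n$ unchanged, which is what makes the unified notation work.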
For example, the incoming field $q_4$ in \eq{hard_ops} is short-hand for a collinear fermion field $\psi_{4}$ (typically denoted by $\xi_{n_4}$ in SCET) combined with a collinear Wilson line $\mathcal{W}$ in the ${\bar{n}}_4$ direction and a soft Wilson line ${\mathcal S}$ in the $n_4$ direction (using the covariant derivative convention $D_\mu=\partial_\mu + i g A_\mu$), \begin{align} \label{eq:Wilson} q_4 &= {\mathcal S}_{4} \int \mathrm{d}^4x\, e^{\mathrm{i} p_4 \cdot x}\, \mathcal{W}_{4}^\dagger(x)\, \psi_{4}(x) \,, \nonumber \\ \mathcal{W}_{4}(x) &= {\rm P} \exp\bigg\{-\mathrm{i} \int_{-\infty}^0\!\mathrm{d} s\, {\bar{n}}_4 \!\cdot\! \big[g_3 A_{n_4}(x+s \bar n_4) + g_2 W_{n_4}(x+s \bar n_4) + g_1 \mathsf{y}_q B_{n_4}(x+s \bar n_4) \big] \bigg\}\,, \nonumber \\ {\mathcal S}_{4} &= {\rm P} \exp\bigg\{-\mathrm{i} \int_{-\infty}^0\!\mathrm{d} s\, n_4 \!\cdot\! \big[g_3 A_s(s\, n_4) + g_2 W_s(s\, n_4) + g_1 \mathsf{y}_q B_s(s\, n_4) \big]\bigg\}\,. \end{align} $A_{n_4}$, $W_{n_4}$ and $B_{n_4}$ denote $SU(3)$, $SU(2)$ and $U(1)$ gauge fields whose momenta are collinear to the $n_4$ direction, and $A_s, W_s, B_s$ denote soft gauge fields.\footnote{The $SU(2)$ gauge field $W$ should not be confused with $\mathcal{W}$, a collinear Wilson line.} The Wilson lines $\mathcal{W}$ and ${\mathcal S}$ depend on the gauge representation of the particle. The soft Wilson line integral is over the worldline of the particle. For incoming particles, the soft Wilson line integral is from $t=-\infty$ to $t=0$. With our sign convention, \eq{Wilson} also holds for outgoing particles, and the minus sign in \eq{3.5} converts the integral from $t=-\infty$ to $t=0$ into one from $t=0$ to $t=\infty$. The direction of the soft Wilson line affects the sign of $\mathrm{i} 0$ terms in the eikonal propagators, and the sign of scattering phases.
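Since a Wilson line is a path-ordered product of unitary slices, it satisfies ${\mathcal S}^\dagger {\mathcal S} = 1$, a property used repeatedly below to collapse soft matrix elements. The following is a numerical sketch (a discretized path-ordered exponential with a random $SU(2)$ field configuration, not the actual operator definition) illustrating this:

```python
import numpy as np

rng = np.random.default_rng(0)

# SU(2) generators t^a = sigma^a / 2
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
t = sigma / 2

def exp_minus_i(A, ds):
    """exp(-i ds A) for Hermitian A, via eigendecomposition (unitary)."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * ds * w)) @ V.conj().T

def wilson_line(n_steps=100, ds=0.1):
    """Discretized P exp(-i int ds A), with a random Hermitian field
    A(s) = a^a(s) t^a on each slice; later slices multiply on the left."""
    S = np.eye(2, dtype=complex)
    for _ in range(n_steps):
        A = np.einsum('a,aij->ij', rng.normal(size=3), t)
        S = exp_minus_i(A, ds) @ S
    return S

S_line = wilson_line()
# Each slice is unitary, so the full line obeys S^dagger S = 1
assert np.allclose(S_line.conj().T @ S_line, np.eye(2), atol=1e-9)
```

The path ordering (left-multiplication by later slices) is what distinguishes this from an ordinary exponential of the integrated field; unitarity holds either way.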
The interaction of the collinear fields $\mathcal{W}_{4}$ and $\psi_{4}$ in \eq{Wilson}, which is given by the full QCD interaction for particles in the $n_4$ direction~\cite{Goerke:2017ioi}, leads to the production of a jet of particles in the $n_4$ direction, with invariant mass much smaller than $Q$. The soft Wilson line ${\mathcal S}_{4}$ sums the emission of soft radiation from the collinear fields, so the collinear fields no longer interact with soft fields in this picture. To avoid additional notation, in the remainder of the paper $q_4$ will denote the collinear part of the right-hand side of \eq{Wilson}, $\mathcal{W}_4^\dagger \psi_4 \to q_4$, so that $q_4$ in \eq{hard_ops} is now denoted by $q_4 \to S_4 q_4$. \subsection{Factorization into collinear and soft} \label{sec:factorization} Eq.~\eqref{eq:Wilson} factors the operator describing the hard interaction in \eq{hard_ops} into different collinear sectors and a soft sector that no longer interact. The cross section is given by taking the matrix element of the hard scattering in \eq{hard_L} with initial and final-state particles, squaring, and including the phase-space integration, flux factor and measurement. This is largely an exercise in bookkeeping, where most of the complications arise from the phase space, and leads to the usual factorization theorems for hard scattering processes in QCD. Schematically, the cross section for $\nu p \to e^- X$ is given by \begin{align}\label{eq:3.8} \sigma \sim \sum_X \braket{\nu p | {\mathcal L}_{\text{hard}} | e^- X}\braket{e^- X | {\mathcal L}_{\text{hard}} | \nu p}\,. 
\end{align} The fields in ${\mathcal L}_{\text{hard}}$ are the product of soft and collinear terms (see \eqs{hard_L}{Wilson}), and the matrix element in \eq{3.8} is factored into the product of soft and collinear matrix elements, by writing $\ket{e^-X}$ as the product $\ket{X_s} \otimes \ket{X_{c,1}} \otimes \ket{X_{c,2}} \ldots$ of soft particles and collinear particles in different collinear sectors in the final state. For $\nu p \to e^- X$, there are four sectors given by the directions of $\nu$, $p$, $e^-$ and the outgoing jet produced by the struck quark. Using only the $O_{\ell q}^\dagger O_{\ell q}$ contribution to \eq{3.8} as an example, the relevant matrix element is \begin{align} \label{eq:factor} &(\gamma^\mu_{\beta_1\gamma_1} \gamma_{\mu, \beta_2 \gamma_2} \gamma^\nu_{\beta_3\gamma_3} \gamma_{\nu,\beta_4\gamma_4}) \Bigl[ \sum_{X_1} \langle 0 | \ell_{1',\delta_3} |X_1 \rangle \langle X_1| \bar \ell_{1,\alpha_1} | 0 \rangle \Bigr] \Bigl[\langle \nu| \bar \ell_{2',\alpha_3} \ell_{2,\delta_1} | \nu \rangle\Bigr] \nonumber \\ & \Bigl[ \sum_{X_3} \langle 0 | q_{3',\delta_4} |X_3 \rangle \langle X_3| \bar q_{3,\alpha_2} | 0 \rangle\Bigr] \Bigl[ \langle p | \bar q_{4',\alpha_4} q_{4,\delta_2} | p \rangle\Bigr] \nonumber \\ & \Bigl[ \sum_{X_s} \langle 0 | {\mathcal S}_{1,\gamma_3,\delta_3} {\mathcal S}_{2,\alpha_3 \beta_3}^\dagger {\mathcal S}_{3,\gamma_4,\delta_4} {\mathcal S}_{4,\alpha_4 \beta_4}^\dagger |X_s \rangle \langle X_s| {\mathcal S}_{1,\alpha_1,\beta_1}^\dagger {\mathcal S}_{2,\gamma_1 \delta_1} {\mathcal S}_{3,\alpha_2 \beta_2}^\dagger {\mathcal S}_{4,\gamma_2 \delta_2} | 0 \rangle \Bigr] \,.\end{align} Here the indices $\alpha_1, \dots, \delta_4$ include both spin and gauge indices. Since the hard interaction \eq{hard_L} has a sum over the momenta of the colliding partons weighted with a hard coefficient $H(\{p_k\})$, the labels on the fields in $O^\dagger$ have been distinguished from those in $O$ by a prime. 
Eventually these will be equal because of momentum conservation in the matrix elements. Eq.~\eqref{eq:factor} has factored the total cross section into collinear sectors corresponding to the incoming proton and neutrino, outgoing lepton (in $X_1$) and jet (in $X_3$), and the soft sector. This factorization is what enables the resummation of logarithms of $Q/M$, by separating the ingredients at different invariant mass and rapidity scales, as discussed in \sec{resummation}. In the next section we show how to disentangle the gauge/spin indices for all combinations of hard-scattering operators. Since we only probe the hard scattering kinematics, the collinear matrix elements will correspond to parton distribution functions for incoming directions and fragmentation functions for outgoing directions, as is the case in QCD factorization for inclusive cross sections. (We avoid kinematic limits, such as small transverse momentum, which would require a transverse momentum dependent parton distribution or fragmentation function.) What is perhaps surprising is the appearance of a soft function, since it would seem that soft radiation is not directly probed by the measurement, i.e.~we are not in a kinematic limit that makes the measurement sensitive to soft radiation. For the QCD corrections, color conservation forces the hadronic matrix elements of quark operators to be diagonal in color, leading to color contractions of indices on the soft Wilson lines. Because the observables we consider do not directly probe the soft radiation, we can perform the sum over $\ket{X_s}$ and then find that the soft matrix element is the identity by using ${\mathcal S}_i^\dagger {\mathcal S}_i=1$.
The fundamental difference in the electroweak case is that electroweak symmetry is broken, so the matrix elements do not have to be diagonal in electroweak indices, \begin{align} \label{eq:soft} \langle 0 | {\mathcal S}_{1,\gamma_3,\delta_3} {\mathcal S}_{2,\alpha_3 \beta_3}^\dagger {\mathcal S}_{3,\gamma_4,\delta_4} {\mathcal S}_{4,\alpha_4 \beta_4}^\dagger {\mathcal S}_{1,\alpha_1,\beta_1}^\dagger {\mathcal S}_{2,\gamma_1 \delta_1} {\mathcal S}_{3,\alpha_2 \beta_2}^\dagger {\mathcal S}_{4,\gamma_2 \delta_2} | 0 \rangle \,.\end{align} The proton matrix element of $q_4$ in \eq{factor} gives the sum of left-handed $u$ and $d$ quark PDFs in the proton. Similarly, the neutrino matrix element corresponds to a lepton PDF in the neutrino. Because the neutrino does not have QCD or QED interactions, this PDF is a delta function at tree-level at the electroweak scale. The matrix element of $q_3$ reduces to a quark jet function, after summing on $X_3$. The matrix element involving $\ell_1$ would reduce to a lepton (electroweak) jet function if one sums over all $X_1$. However, in DIS the energy and direction of the outgoing electron are measured, so one sums over $\ket{X_1}=\ket{e(p_e), X}$ where $p_e$ is measured. This corresponds to a fragmentation function, as it only probes the energy $p_e^0$ (the electron is collinear to the field $\ell_1$). On the other hand, the soft function is sensitive to the direction of $p_e$ but not its energy. Thus we have factorized the cross section into collinear and soft pieces which can be studied independently. There is a subtlety in \eq{factor}. The soft Wilson lines have been written as ${\mathcal S}$ or ${\mathcal S}^\dagger$ depending on whether they arose from the field $\psi$ or $\overline \psi$. This keeps track of the gauge indices in the Wilson lines. The Wilson lines in $O_H$ give the scattering amplitude, whereas those in $O_H^\dagger$ give the complex conjugate of the amplitude. 
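For concreteness, the tree-level triviality of the neutrino's lepton PDF can be written as a boundary condition at the electroweak scale (the notation $f_{\ell/\nu}$ is ours and merely illustrative), \begin{align} f_{\ell/\nu}(x, \mu \sim M) = \delta(1-x) + \mathcal{O}(\alpha_1, \alpha_2) \,,\end{align} with support at $x<1$ generated only by electroweak radiation under the evolution to higher scales.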
Thus the Wilson lines from $O_H$ are time-ordered, whereas those from $O_H^\dagger$ are anti-time-ordered. In \eq{factor}, $S_1^\dagger$, $S_2$, $S_3^\dagger$ and $S_4$ are time-ordered, whereas $S_1$, $S_2^\dagger$, $S_3$ and $S_4^\dagger$ are anti-time-ordered. We will not carefully keep track of this in our notation, because our calculation of the anomalous dimension in \sec{RGE_S} shows that this is irrelevant. \subsection{Disentangling gauge and spin indices} \label{sec:gauge_spin} The next step is to disentangle the spin and gauge indices on the fermion fields in the product of two operators $O^\dagger O$, which enter the factorization formula \eq{factor}. This can be achieved by using the relations \begin{align} \label{eq:disentangle} \bar \ell_{i\alpha} \ell^j_\beta &= \frac{1}{2 N_w}\, \delta^j{}_i (P_L n\!\!\!\slash)_{\beta \alpha}\, \bar \ell \tfrac{\bar{n}\!\!\!\slash}{2} \ell + (t^a)^j{}_i (P_L n\!\!\!\slash)_{\beta\alpha}\, \bar \ell \tfrac{\bar{n}\!\!\!\slash}{2} t^a \ell \,, \nonumber \\ \bar e_{\alpha} e_{\beta} &= \tfrac{1}{2} (P_R n\!\!\!\slash)_{\beta \alpha}\, \bar e_{} \tfrac{\bar{n}\!\!\!\slash}{2} e \,. \end{align} Here $\alpha,\beta$ are spinor indices, $i,j$ are $SU(2)$ gauge indices, and $N_w=2$. The lepton fields are treated as massless and assumed to correspond to the same collinear direction $n$. There are similar relations for the quarks. Eventually, we will take the proton matrix element of the quark operators. Since color is an unbroken gauge symmetry, and the proton is a color singlet state, matrix elements of color non-singlet operators in the proton vanish. We therefore drop these from the outset. 
We start with the most complicated case, namely $O^{(3)\dagger}_{\ell q} O^{(3)}_{\ell q}$: \begin{align} \label{eq:O1_fact} O^{(3)\dagger}_{\ell q} O^{(3)}_{\ell q}&= (\bar \ell_2 {\mathcal S}_2^\dagger \gamma^\nu t^b {\mathcal S}_1 \ell_1)\,(\bar q_4 {\mathcal S}_4^\dagger \gamma_\nu t^b {\mathcal S}_3 q_3)\, (\bar \ell_1 {\mathcal S}_1^\dagger \gamma^\mu t^a {\mathcal S}_2 \ell_2)\, (\bar q_3 {\mathcal S}_3^\dagger \gamma_\mu t^a {\mathcal S}_4 q_4) \,. \end{align} We can use \eq{disentangle} to combine $\bar \ell_1$ and $\ell_1$ into a bilinear, $\bar \ell_2$ and $\ell_2$ into a bilinear, etc., and drop color non-singlet operators to obtain \begin{align} \label{eq:2.13} O^{(3)\dagger}_{\ell q} O^{(3)}_{\ell q} &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\! n_4) \bigg[\frac{1}{N_c N_w^4}\, \mathcal{C}_{\ell_1}\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}\, \mathcal{C}_{q_4}\, \mathrm{tr}(t^a t^b)\, \mathrm{tr}(t^a t^b) \\ & \quad + \frac{4}{N_c N_w^2} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}\, \mathcal{C}_{q_4}\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger t^b)\, \mathrm{tr}(t^a t^b) \nonumber \\ & \quad + \frac{4}{N_c N_w^2} \, \mathcal{C}_{\ell_1}\, \mathcal{C}_{\ell_2}\, \mathcal{C}^c_{q_3}\, \mathcal{C}^d_{q_4}\, \mathrm{tr}({\mathcal S}_3 t^c {\mathcal S}_3^\dagger t^a {\mathcal S}_4 t^d {\mathcal S}_4^\dagger t^b)\, \mathrm{tr}(t^a t^b) \nonumber \\ & \quad + \frac{4}{N_c N_w^2} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}^d\, \mathcal{C}_{q_4}\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a t^b)\, \mathrm{tr}({\mathcal S}_3 t^d {\mathcal S}_3^\dagger t^a t^b) + (3\text{ more}) \nonumber \\ & \quad + \frac{8}{N_c N_w} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger t^b)\, \mathrm{tr}({\mathcal S}_3 t^e 
{\mathcal S}_3^\dagger t^a t^b) + (3\text{ more}) \nonumber \\ & \quad + \frac{16}{N_c} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}^f\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger t^b)\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a {\mathcal S}_4 t^f {\mathcal S}_4^\dagger t^b) \bigg] \,.\nonumber\end{align} Here we introduce the abbreviation \begin{align} \label{eq:coll_abb} \mathcal{C}_{\ell_1} = \bar \ell_1 \tfrac{\bar{n}\!\!\!\slash_1}{2} \ell_1 \,, \qquad \mathcal{C}_{\ell_1}^c = \bar \ell_1 \tfrac{\bar{n}\!\!\!\slash_1}{2} t^c \ell_1 \,, \qquad \dots \,,\end{align} where the superscript distinguishes the gauge group representations of the collinear operator: $\mathcal{C}_{\ell_1}$ is a weak singlet, and $\mathcal{C}_{\ell_1}^c$ is a weak triplet. We simplify the soft operators using the $SU(N_w)$ completeness relation \begin{align} (t^a)^\alpha{}_\beta\,(t^a)^\gamma{}_\delta = \frac12 \delta^\alpha{}_\delta\,\delta^\gamma{}_\beta - \frac{1}{2N_w} \delta^\alpha{}_\beta\, \delta^\gamma{}_\delta\,. 
\end{align} The relevant identities are \begin{align} \label{eq:soft_id} \mathrm{tr}(t^a t^b)\, \mathrm{tr}(t^a t^b) &= \tfrac{1}{4}(N_w^2-1) \,, \\ \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger t^b)\, \mathrm{tr}(t^a t^b) &= - \tfrac{1}{4N_w}\, {\mathcal S}_{12}^{cd} \,,\nonumber \\ \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a t^b)\, \mathrm{tr}({\mathcal S}_3 t^d {\mathcal S}_3^\dagger t^a t^b) &= -\tfrac{1}{2N_w}\, {\mathcal S}_{13}^{cd} \,, \nonumber \\ \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger t^b)\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a t^b) &= -\tfrac{1}{4N_w} ({\mathcal S}_{123}^{cde} + {\mathcal S}_{132}^{ced}) \,,\nonumber \\ \quad \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger t^b)\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a {\mathcal S}_4 t^f {\mathcal S}_4^\dagger t^b) &= \tfrac{1}{4N_w^2} {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} \!+\! \tfrac14 {\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de} \!-\!\tfrac{1}{4N_w} ({\mathcal S}_{1234}^{cdef} \!+\! {\mathcal S}_{1432}^{cfed} ) \nonumber \\ &\!\!\!\!\stackrel{N_w=2}{=} -\tfrac{1}{16} {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} +\tfrac{1}{8} {\mathcal S}_{13}^{ce} {\mathcal S}_{24}^{df} + \tfrac{1}{8} {\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de} \,,\nonumber\end{align} where the last one only holds for $SU(2)$. 
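The first identity in \eq{soft_id}, and the $SU(N_w)$ completeness relation it relies on, are easy to verify numerically for $N_w=2$ with $t^a = \sigma^a/2$. This is an illustrative NumPy check, not part of the derivation:

```python
import numpy as np

# SU(2) generators t^a = sigma^a / 2
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
t = sigma / 2
Nw = 2
d = np.eye(2)

# Completeness: (t^a)^a_b (t^a)^g_d = 1/2 d^a_d d^g_b - 1/(2 Nw) d^a_b d^g_d
lhs = np.einsum('aij,akl->ijkl', t, t)
rhs = (0.5 * np.einsum('il,kj->ijkl', d, d)
       - 1/(2*Nw) * np.einsum('ij,kl->ijkl', d, d))
assert np.allclose(lhs, rhs)

# First identity of eq. (soft_id): tr(t^a t^b) tr(t^a t^b) = (Nw^2 - 1)/4
tr2 = np.einsum('aij,bji->ab', t, t)     # tr(t^a t^b) = delta^{ab}/2
assert np.allclose(np.sum(tr2 * tr2), (Nw**2 - 1) / 4)
```

The remaining identities in \eq{soft_id} follow from repeated use of the same completeness relation, with the Wilson-line combinations $\mathcal{S}_i t^c \mathcal{S}_i^\dagger$ treated as generic traceless matrices.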
Here we used ${\mathcal S}_i^\dagger {\mathcal S}_i = 1$ and introduced the shorthand notation \begin{align} \label{eq:soft_ops} {\mathcal S}_{12}^{cd} &= \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger {\mathcal S}_2 t^d {\mathcal S}_2^\dagger) \,, \nonumber \\ {\mathcal S}_{123}^{cde} &= \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger\, {\mathcal S}_2 t^d {\mathcal S}_2^\dagger\, {\mathcal S}_3 t^e {\mathcal S}_3^\dagger ) \,, \nonumber \\ {\mathcal S}_{1234}^{cdef} &= \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger\, {\mathcal S}_2 t^d {\mathcal S}_2^\dagger\, {\mathcal S}_3 t^e {\mathcal S}_3^\dagger\, {\mathcal S}_4 t^f {\mathcal S}_4^\dagger) \,.\end{align} For $N_w=2$, the last relation in \eq{soft_id} was simplified using \begin{align} {\mathcal S}_{1234}^{cdef} \stackrel{N_w=2}{=} \tfrac12\big( {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} - {\mathcal S}_{13}^{ce} {\mathcal S}_{24}^{df} + {\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de} \big) \,.\nonumber\end{align} Using the above relations gives \begin{align} O^{(3)\dagger}_{\ell q} O^{(3)}_{\ell q} &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\! 
n_4) \bigg[\frac{N_w^2-1}{4N_c N_w^4}\, \mathcal{C}_{\ell_1}\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}\, \mathcal{C}_{q_4}\, \\ & \quad - \frac{1}{N_c N_w^3} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}\, \mathcal{C}_{q_4}\, {\mathcal S}_{12}^{cd} - \frac{1}{N_c N_w^3} \, \mathcal{C}_{\ell_1}\, \mathcal{C}_{\ell_2}\, \mathcal{C}^c_{q_3}\, \mathcal{C}^d_{q_4}\, {\mathcal S}_{34}^{cd} \nonumber \\ & \quad - \frac{2}{N_c N_w^3} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}^d\, \mathcal{C}_{q_4}\, {\mathcal S}_{13}^{cd} + (3\text{ more}) \nonumber \\ & \quad - \frac{2}{N_c N_w^2} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}\, ({\mathcal S}_{123}^{cde} + {\mathcal S}_{132}^{ced}) + (3\text{ more}) \nonumber \\ & \quad + \frac{1}{N_c}\, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}^f\, ( - {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} +2 {\mathcal S}_{13}^{ce} {\mathcal S}_{24}^{df} + 2 {\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de}) \bigg] \,.\nonumber\end{align} We reiterate that a color-adjoint collinear operator of the form $\mathcal{C}_{q_4}^A = \bar q_4 \tfrac{\bar{n}\!\!\!\slash_4}{2} T^A q_4$, where $T^A$ is a color generator, would never have been considered in QCD. Although it could in principle be kept in intermediate steps of the calculation, it would be dropped at the end because its proton matrix element vanishes, since the proton is a color-singlet state. However, the proton is not an electroweak singlet and gives a nonzero matrix element for the $SU(2)$ adjoint operator $\mathcal{C}_{q_4}^a = \bar q_4 \tfrac{\bar{n}\!\!\!\slash_4}{2} t^a q_4$, where $t^a$ is an $SU(2)$ generator, see \sec{low_matching}. Related to this, we note that only the $SU(2)$ Wilson lines survive in the soft operators in \eq{soft_ops}, since the colored Wilson lines are paired with colored operators which have vanishing proton matrix elements. 
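The $SU(2)$ identity for ${\mathcal S}_{1234}^{cdef}$ quoted above, which reduces the four-line soft operator to products of two-line operators, can also be checked numerically by replacing the Wilson lines with random $SU(2)$ matrices. Again an illustrative check, not a derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
t = sigma / 2

def random_su2():
    """exp(i H) for a random traceless Hermitian H: a random SU(2) matrix."""
    H = np.einsum('a,aij->ij', rng.normal(size=3), t)
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

S = [random_su2() for _ in range(4)]
M = [[Si @ ta @ Si.conj().T for ta in t] for Si in S]  # S_i t^a S_i^dagger

def S2(i, j, a, b):
    """S_{ij}^{ab} = tr(S_i t^a S_i^dag  S_j t^b S_j^dag)"""
    return np.trace(M[i][a] @ M[j][b])

def S4(a, b, c, d):
    """S_{1234}^{abcd} = tr(S_1 t^a S_1^dag ... S_4 t^d S_4^dag)"""
    return np.trace(M[0][a] @ M[1][b] @ M[2][c] @ M[3][d])

# SU(2): S_1234^{cdef} = (S_12^{cd} S_34^{ef} - S_13^{ce} S_24^{df}
#                          + S_14^{cf} S_23^{de}) / 2
for a, b, c, d in [(0, 1, 2, 0), (2, 2, 1, 0), (1, 0, 2, 1)]:
    rhs = 0.5 * (S2(0, 1, a, b) * S2(2, 3, c, d)
                 - S2(0, 2, a, c) * S2(1, 3, b, d)
                 + S2(0, 3, a, d) * S2(1, 2, b, c))
    assert abs(S4(a, b, c, d) - rhs) < 1e-10
```

The identity works because each $\mathcal{S}_i t^a \mathcal{S}_i^\dagger$ is a traceless $2\times 2$ matrix, and for such matrices $\mathrm{tr}(ABCD) = \tfrac12[\mathrm{tr}(AB)\mathrm{tr}(CD) - \mathrm{tr}(AC)\mathrm{tr}(BD) + \mathrm{tr}(AD)\mathrm{tr}(BC)]$; it is special to $N_w = 2$.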
The new features in the remaining discussion therefore center on $SU(2)$. For $SU(3)$ and $U(1)$ we have the standard PDF and fragmentation function evolution for the collinear operators and we have no soft operators. Next we consider the interference contribution $O_{\ell q}^\dagger O^{(3)}_{\ell q}$ (and its conjugate $O_{\ell q}^{(3)\dagger} O_{\ell q}$), which can be obtained from \eq{O1_fact} by dropping the $t^b$'s \begin{align} O_{\ell q}^\dagger O^{(3)}_{\ell q} &= (\bar \ell_2 {\mathcal S}_2^\dagger \gamma^\nu {\mathcal S}_1 \ell_1)\, (\bar q_4 {\mathcal S}_4^\dagger \gamma_\nu {\mathcal S}_3 q_3)\, (\bar \ell_1 {\mathcal S}_1^\dagger \gamma^\mu t^a {\mathcal S}_2 \ell_2)\, (\bar q_3 {\mathcal S}_3^\dagger \gamma_\mu t^a {\mathcal S}_4 q_4) \nonumber \\ &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\! n_4) \bigg[ \frac{4}{N_c N_w^2} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}^d\, \mathcal{C}_{q_4}\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a )\, \mathrm{tr}({\mathcal S}_3 t^d {\mathcal S}_3^\dagger t^a ) + (3\text{ more}) \nonumber \\ & \quad + \frac{8}{N_c N_w} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger )\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a ) + (3\text{ more}) \nonumber \\ & \quad + \frac{16}{N_c} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}^f\, \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger )\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a {\mathcal S}_4 t^f {\mathcal S}_4^\dagger ) \bigg] \,.\end{align} This can be simplified using \begin{align} \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a)\, \mathrm{tr}({\mathcal S}_3 t^d {\mathcal S}_3^\dagger t^a) &= \tfrac12 {\mathcal S}_{13}^{cd} \,, \nonumber \\ \mathrm{tr}({\mathcal S}_1 
t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger )\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a ) &= \tfrac12 {\mathcal S}_{132}^{ced} \,, \nonumber \\ \mathrm{tr}({\mathcal S}_1 t^c {\mathcal S}_1^\dagger t^a {\mathcal S}_2 t^d {\mathcal S}_2^\dagger )\, \mathrm{tr}({\mathcal S}_3 t^e {\mathcal S}_3^\dagger t^a {\mathcal S}_4 t^f {\mathcal S}_4^\dagger ) &= \tfrac{1}{2} {\mathcal S}_{1432}^{cfed} - \tfrac{1}{2N_w} {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} \nonumber \\ &\!\!\!\!\stackrel{N_w=2}{=} -\tfrac{1}{4} {\mathcal S}_{13}^{ce} {\mathcal S}_{24}^{df} + \tfrac{1}{4} {\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de} \,,\end{align} to get \begin{align} O_{\ell q}^\dagger O^{(3)}_{\ell q} &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\! n_4) \bigg[ \frac{2}{N_c N_w^2} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}^d\, \mathcal{C}_{q_4}\, {\mathcal S}_{13}^{cd}+ (3\text{ more}) \nonumber \\ & \quad + \frac{4}{N_c N_w} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}\, {\mathcal S}_{132}^{ced} + (3\text{ more}) \nonumber \\ & \quad + \frac{4}{N_c}\, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}^f\, ( - {\mathcal S}_{13}^{ce} {\mathcal S}_{24}^{df}+ {\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de} ) \bigg] \,.\end{align} The expression for $O_{\ell q}^\dagger O_{\ell q}$ can directly be obtained from \eq{O1_fact}, dropping $t^a$ \emph{and} $t^b$, \begin{align} \label{eq:O2_fact} O_{\ell q}^\dagger O_{\ell q} &= ( \bar \ell_2 {\mathcal S}_2^\dagger \gamma^\nu {\mathcal S}_1 \ell_1)\, (\bar q_4 {\mathcal S}_4^\dagger \gamma_\nu {\mathcal S}_3 q_3)\, (\bar \ell_1 {\mathcal S}_1^\dagger \gamma^\mu {\mathcal S}_2 \ell_2)\, (\bar q_3 {\mathcal S}_3^\dagger \gamma_\mu {\mathcal S}_4 q_4) \nonumber \\ &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\!
n_4) \bigg[ \frac{1}{N_cN_w^2}\, \mathcal{C}_{\ell_1}\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{q_3}\, \mathcal{C}_{q_4} + \frac{4}{N_cN_w} \, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}\, \mathcal{C}_{q_4}\, {\mathcal S}_{12}^{cd} + (1\text{ more}) \nonumber \\ & \quad + \frac{16}{N_c}\, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{q_3}^e\, \mathcal{C}_{q_4}^f\, {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} \bigg] \,.\end{align} For $O_{\ell u}^\dagger O_{\ell u}$ there is a further simplification compared to \eq{O2_fact} because the $SU(2)$ doublet $q$ is replaced by the singlet $u$, \begin{align} \label{eq:O3_fact} O_{\ell u}^\dagger O_{\ell u} &= (\bar \ell_2 {\mathcal S}_2^\dagger \gamma^\nu {\mathcal S}_1 \ell_1)\, (\bar u_4 {\mathcal S}_4^\dagger \gamma_\nu {\mathcal S}_3 u_3)\, (\bar \ell_1 {\mathcal S}_1^\dagger \gamma^\mu {\mathcal S}_2 \ell_2)\, (\bar u_3 {\mathcal S}_3^\dagger \gamma_\mu {\mathcal S}_4 u_4) \nonumber \\ &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\! n_4) \bigg[ \frac{1}{N_cN_w}\, \mathcal{C}_{\ell_1}\, \mathcal{C}_{\ell_2}\, \mathcal{C}_{u_3}\, \mathcal{C}_{u_4} +\frac{4}{N_c}\, \mathcal{C}_{\ell_1}^c\, \mathcal{C}_{\ell_2}^d\, \mathcal{C}_{u_3}\, \mathcal{C}_{u_4}\, {\mathcal S}_{12}^{cd} \bigg] \,.\end{align} The expression for $O_{\ell d}^\dagger O_{\ell d}$ can directly be obtained from \eq{O3_fact} by replacing $u \to d$, and the expression for $O_{eq}^\dagger O_{eq}$ follows from interchanging $q \leftrightarrow \ell$ and $e \leftrightarrow u$. Finally, for $O_{eu}^\dagger O_{eu}$ \begin{align} O_{eu}^\dagger O_{eu} &= (\bar e_2 {\mathcal S}_2^\dagger \gamma^\nu {\mathcal S}_1 e_1)\, (\bar u_4 {\mathcal S}_4^\dagger \gamma_\nu {\mathcal S}_3 u_3)\, (\bar e_1 {\mathcal S}_1^\dagger \gamma^\mu {\mathcal S}_2 e_2)\, (\bar u_3 {\mathcal S}_3^\dagger \gamma_\mu {\mathcal S}_4 u_4) \nonumber \\ &= (n_1 \!\cdot\! n_3)(n_2 \!\cdot\! 
n_4) \frac{1}{N_c}\, \mathcal{C}_{e_1}\, \mathcal{C}_{e_2}\, \mathcal{C}_{u_3}\, \mathcal{C}_{u_4} ,\end{align} and similarly for $O_{ed}^\dagger O_{ed}$. The above identities can be used to write the factorized cross section \eq{factor} as a product of collinear and soft factors, which we now study separately. The collinear factors are the usual PDFs. The soft factors do not arise in QCD factorization theorems, but are present in electroweak cross sections. In appendix~\ref{app:xsec}, we give examples of the possible PDF combinations which enter the production of a heavy particle in quark-antiquark annihilation, and show that they are all given in terms of the singlet and triplet PDFs. \section{Collinear evolution} \label{sec:RGE_C} In this section we determine the renormalization group (RG) evolution of the collinear operators entering the factorized cross section. The splitting functions for $z < 1$ agree with those computed in ref.~\cite{Bauer:2017isx}. We begin with the collinear operators corresponding to the incoming particles, discussing the fragmentation case in \sec{RGE_FF}. The collinear operators that enter the cross section can be written in terms of the usual PDF operators. In QCD processes, PDF operators are singlets under the $SU(3)$ gauge group, and there are no rapidity divergences, as these cancel between real and virtual graphs. In the electroweak case, the factorization formula has terms involving the product of collinear and soft operators which are not separately gauge singlets, as we have seen in the previous section. There are rapidity divergences in the collinear and soft sectors. One can see this must be true in the collinear sector by noting that real and virtual graphs have different group theory factors for gauge non-singlet PDFs. 
Before discussing the electroweak case, we first review QCD, for which the standard definition of the unpolarized quark PDF operator is~\cite{Collins:1981uw} \begin{align}\label{eq:cs} O_Q(r^-) &= \frac{1 }{ 4 \pi} \int\! \mathrm{d} \xi\, e^{-i \xi r^-} [ \bar Q(\bar n \xi )\, \mathcal{W} (\bar n \xi )]\ \slashed{\bar n}\ [\mathcal{W}^\dagger (0) \, Q(0 ) ] \,. \end{align} Here the Wilson line $\mathcal{W}$ is defined in \eq{Wilson} and the null vectors are \begin{align} n^\mu=(1,\mathbf{\hat n})\,, \qquad \bar n^\mu=(1,-\mathbf{\hat n}) \,.\end{align} Note that the operator product in \eq{cs} is an ordinary product, not a time-ordered product. One can insert a complete set of states between $\mathcal{W}$ and $\mathcal{W}^\dagger$, allowing us to evaluate matrix elements of the PDF operator using cut Feynman rules. The PDF operators in \eq{cs} are written using standard QCD notation. In terms of the collinear fields introduced in \eq{Wilson}, the quark PDF operator is obtained by the replacement $[W^\dagger Q] \to Q$, since the collinear Wilson line was included in $Q$. The quark PDF is given by the matrix element of this operator in a target state $T$ of momentum $p$, \begin{align}\label{eq:pdf} f_{Q/T}(r^-/p^-,\mu) &\equiv \braket{T,p| O_Q(r^-) |T,p} \,,\end{align} where $p^- = {\bar{n}} \!\cdot\! p$ and the operators are renormalized in the \ensuremath{\overline{\text{MS}}}\ scheme. The polarized quark PDF $f_{\Delta Q}$ is given by replacing $\slashed{\bar n}$ by $\slashed{\bar n}\gamma_5$. In terms of $f_{Q_+}$ and $f_{Q_-}$, the distributions of quarks with helicity $h=+1/2$ and $h=-1/2$, the unpolarized and polarized PDFs are $f_Q = f_{Q_+}+f_{Q_-}$ and $f_{\Delta Q} = f_{Q_+}-f_{Q_-}$. We need to generalize \eqs{cs}{pdf} to include PDFs which are not gauge singlets. The gauge indices in a fermion bilinear $\bar Q \, Q$ can be combined into a gauge singlet or adjoint.
We thus define two different fermion operators \begin{align} \label{eq:3.33} O^{(1)}_Q(r^-) &= \frac{1 }{ 4 \pi} \int \mathrm{d} \xi\, e^{-i \xi r^-}\, [ \bar Q(\bar n \xi ) \, \mathcal{W} (\bar n \xi )]_i \ \slashed{\bar n}\ \, \delta^i_j\, [\mathcal{W}^\dagger (0) \, Q(0 ) ]^j \,, \nonumber \\ O^{(\text{adj},a)}_Q(r^-) &= \frac{1 }{ 4 \pi} \int \mathrm{d} \xi\, e^{-i \xi r^-}\, [ \bar Q(\bar n \xi )\, \mathcal{W} (\bar n \xi )]_i\ \slashed{\bar n}\ [t^a]^i{}_j\, [\mathcal{W}^\dagger (0) \, Q(0 ) ]^j \,, \end{align} where $i,j,k,l$ are gauge indices in the fundamental representation. Note that it is the gauge indices at the $x=\infty$ end of the Wilson line which are combined into a singlet or adjoint. The anti-fermion and anti-scalar PDFs are given by $CP$ conjugation, where $P$ is implemented by reflection in a plane containing the direction of the proton $n$. This amounts to exchanging $\bar Q \leftrightarrow Q$ and letting $t^a \to -(t^a)^T$ in the adjoint PDFs, \begin{align} \label{eq:3.33f} O^{(1)}_{\bar Q}(r^-) &= \frac{1 }{ 4 \pi} \int \mathrm{d} \xi\, e^{-i \xi r^-}\, \mathrm{tr} \left\{ [\mathcal{W}^\dagger (\bar n \xi ) \, Q(\bar n \xi ) ]^j \, \delta^i_j\, [ \bar Q( 0 ) \, \mathcal{W} ( 0 )]_i \, \slashed{\bar n}\right\} \,, \nonumber \\ O^{(\text{adj},a)}_{\bar Q}(r^-) &= \frac{1 }{ 4 \pi} \int \mathrm{d} \xi\, e^{-i \xi r^-}\, \mathrm{tr} \left\{ [\mathcal{W}^\dagger (\bar n \xi) \, Q( \bar n \xi ) ]^j [-(t^a)^T]_j{}^i\, [ \bar Q( 0 )\, \mathcal{W} ( 0 )]_i\ \slashed{\bar n}\right\} \,, \end{align} which can also be written \begin{align} \label{eq:3.33g} O^{(1)}_{\bar Q}(r^-) &=- \frac{1 }{ 4 \pi} \int \mathrm{d} \xi\, e^{-i \xi r^-}\, [ \bar Q(0 ) \, \mathcal{W} (0 )]_i \ \slashed{\bar n}\ \, \delta^i_j\, [\mathcal{W}^\dagger (\bar n \xi) \, Q(\bar n \xi ) ]^j \,, \nonumber \\ O^{(\text{adj},a)}_{\bar Q}(r^-) &= \frac{1 }{ 4 \pi} \int \mathrm{d} \xi\, e^{-i \xi r^-}\, [ \bar Q(0 )\, \mathcal{W} (0)]_i\ \slashed{\bar n}\ [t^a]^i{}_j\, 
[\mathcal{W}^\dagger (\bar n \xi) \, Q(\bar n \xi ) ]^j \,, \end{align} on anticommuting the fermion fields. The $Q$ and $\bar Q$ PDFs have been defined in the conventional way, such that for example the deep-inelastic structure functions are proportional to $f_{Q}(x,\mu)+f_{\bar Q}(x,\mu)$. If the singlet PDF is treated as the PDF for quark number, then the total baryon number is given by the difference of the $Q$ and $\bar Q$ PDFs with the sign convention of \eq{3.33g}. The triplet PDFs have an $SU(2)$ charge, and this requires the additional minus sign, so that the total $SU(2)$ charge is given by the sum of $Q$ and $\bar Q$ triplet PDFs. The matrix elements in a target $T$ of momentum $p$ define the singlet and adjoint PDFs \begin{align}\label{eq:3.7} f^{(1)}_{Q/T}(r^-/p^-,\mu) &\equiv \braket{T,p| O^{(1)}_Q(r^-) |T,p}, & f^{(1)}_{\bar Q/T}(r^-/p^-,\mu) &\equiv \braket{T,p| O^{(1)}_{\bar Q}(r^-) |T,p}, \nonumber \\ f^{(\text{adj},a)}_{Q/T}(r^-/p^-,\mu) &\equiv \braket{T,p| O^{(\text{adj},a)}_Q(r^-) |T,p} , & f^{(\text{adj},a)}_{\bar Q/T}(r^-/p^-,\mu) &\equiv \braket{T,p| O^{(\text{adj},a)}_{\bar Q}(r^-) |T,p}\,. \end{align} The singlet PDF for QCD is the same as the usual PDF. The adjoint PDF vanishes for QCD, but not in the electroweak case. The generalization of eqs.~\eqref{eq:3.33}, \eqref{eq:3.33f} and \eqref{eq:3.7} to the electroweak case is straightforward. Since the weak interactions are chiral, it is necessary to use polarized PDFs. One defines fermion and antifermion PDFs using \eqs{3.33}{3.33f} with the Wilson line in the appropriate $SU(3) \times SU(2) \times U(1)$ representation, and the field $Q$ replaced by the chiral fields $q,l,u,d,e$. For left-handed fields $q$ and $l$, this gives the distribution of particles with helicity $h=-1/2$, and for right-handed fields, the distribution of particles with helicity $h=+1/2$. 
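The $-(t^a)^T$ appearing in the adjoint antiquark operators is, for $SU(2)$, unitarily equivalent to $t^a$ itself, since the doublet is pseudo-real; checking this, and that $-(t^a)^T$ obeys the same algebra, is a useful consistency test of the $CP$-conjugation convention. A small numerical sketch:

```python
import numpy as np

# t^a = sigma^a / 2
sigma = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
t = [s / 2 for s in sigma]

def eps(a, b, c):  # Levi-Civita symbol, the SU(2) structure constants f_abc
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

U = 1j * sigma[1]  # U = i sigma^2 implements the equivalence 2bar ~ 2
for ta in t:
    # pseudo-reality of the doublet: -(t^a)^T = U t^a U^dagger
    assert np.allclose(-ta.T, U @ ta @ U.conj().T)

# the conjugate-representation generators -(t^a)^T satisfy the same algebra
tbar = [-ta.T for ta in t]
for a in range(3):
    for b in range(3):
        comm = tbar[a] @ tbar[b] - tbar[b] @ tbar[a]
        expected = sum(1j * eps(a, b, c) * tbar[c] for c in range(3))
        assert np.allclose(comm, expected)
```

For $SU(3)$ no such equivalence exists ($\mathbf{\bar 3} \not\cong \mathbf{3}$), so quark and antiquark adjoint PDFs are genuinely distinct objects there.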
In terms of the usual polarized and unpolarized quark distributions, the PDF in \eq{cs} with $Q \to u$ corresponds to $(f_u+f_{\Delta u})/2=f_{u_+}$, and with $Q \to q$ corresponds to $(f_u-f_{\Delta u})/2+(f_d-f_{\Delta d})/2=f_{u_-}+f_{d_-}$, etc.\ where the $\pm$ subscript denotes the helicity (not chirality) of the quark. For antiquarks, the relation between helicity and chirality is reversed, so the antiquark PDFs for $Q\to u$ and $Q \to q$ give $(f_{\bar u}-f_{\Delta \bar u})/2=f_{\bar u_-}$ and $(f_{\bar u}+f_{\Delta \bar u})/2+(f_{\bar d} + f_{\Delta \bar d})/2=f_{\bar u_+}+f_{\bar d_+}$, respectively. In the unbroken theory, the PDFs are defined using chiral SM fields, and we will use the notation $f_u$, $f_q$, $f_{\bar u}$, $f_{\bar q}$ to denote these four PDFs, dropping the helicity label. In the electroweak sector, we will also denote singlet and adjoint PDFs using superscripts $(I=0)$ and $(I=1)$. We also need the unpolarized and polarized gluon PDF operators defined by~\cite{Collins:1981uw,Manohar:1990kr,Manohar:1990jx} \begin{align}\label{eq:pol} O_G(r^-) &= -\frac{1 }{ 2 \pi r^-} \int\! \mathrm{d} \xi\, e^{-i \xi r^-} {\bar{n}}_\mu [G^{\mu \lambda }(\bar n \xi)\, \mathcal{W} (\bar n \xi )]\ {\bar{n}}_\nu [\mathcal{W}^\dagger (0) \,G^{\nu}{}_\lambda(0)]\,, \nonumber \\ O_{\Delta G}(r^-) &= \frac{i}{ 2 \pi r^-} \int\! \mathrm{d} \xi\, e^{-i \xi r^-} {\bar{n}}_\mu [G^{\mu \lambda }(\bar n \xi)\, \mathcal{W} (\bar n \xi )]\ {\bar{n}}_\nu [\mathcal{W}^\dagger (0) \, {\widetilde G}^{\nu}{}_\lambda(0)]\,, \end{align} ($\widetilde G_{\alpha \beta} = \frac12 \epsilon_{\alpha \beta \gamma \delta} G^{\gamma \delta}$ with $\epsilon_{0123}=+1$) whose matrix elements give the PDFs $f_{G_+} + f_{G_-}$ and $f_{G_+} - f_{G_-}$. 
It is more convenient to use $f_{G_+}$ and $f_{G_-}$, the distribution of helicity $h=1$ and $h=-1$ gauge bosons, given by the sum and difference of the equations in \eq{pol}.\footnote{ $ {\bar{n}}_\mu G^{\mu \lambda }(\bar n \xi)\, {\bar{n}}_\nu {\widetilde G}^{\nu}{}_\lambda(0) = - {\bar{n}}_\mu {\widetilde G}^{\mu \lambda }(\bar n \xi)\, {\bar{n}}_\nu {G}^{\nu}{}_\lambda(0) $ and $ {\bar{n}}_\mu G^{\mu \lambda }(\bar n \xi)\, {\bar{n}}_\nu G^{\nu}{}_\lambda(0) = {\bar{n}}_\mu {\widetilde G}^{\mu \lambda }(\bar n \xi)\, {\bar{n}}_\nu {\widetilde G}^{\nu}{}_\lambda(0) $ so the other two possibilities for replacing $G^{\mu \lambda} \to \widetilde G^{\mu \lambda}$ do not lead to new PDFs.} There are also double helicity-flip gauge PDFs which are leading twist~\cite{Manohar:1988pp,Jaffe:1989xy}. These correspond to a transition where a helicity $h=\pm 1$ gauge boson is emitted and a helicity $h=\mp 1$ gauge boson is absorbed. Since gauge boson helicity changes by two, there has to be a corresponding change in helicity of the target hadron; as a result these operators only contribute to scattering off targets with spin $\ge 1$, and we neglect them here. They occur in the factorization theorem through box graphs. The transverse $W$ PDF is given by replacing the gluon field-strength tensor by the $SU(2)$ field-strength, and using a Wilson line in the adjoint of $SU(2)$. The transverse $B$ PDF is given by using $B_{\mu \nu}$, and no Wilson line is required since a $U(1)$ field-strength is a gauge singlet. The gauge operator involves two adjoint fields, and \begin{align}\label{eq:reps} \text{adj} \otimes \text{adj} &= [1+ \text{adj} + \bar a a + \bar s s]_S + [\text{adj} + \bar a s + \bar s a]_A\,, \end{align} where the first four representations are in the symmetric product of the two adjoints, and the last three are in the antisymmetric product. 
The representation $\bar a a$ is a traceless tensor $t^{ab}_{cd}$ antisymmetric in its lower and in its upper indices, $\bar a s$ is a traceless tensor $t^{ab}_{cd}$ antisymmetric in its lower and symmetric in its upper indices, etc. For the special case of $SU(3)$, $\bar ss=\mathbf{27}$, $\bar a s=\mathbf{10}$, $\bar s a = \mathbf{\overline{10}}$, and the $\bar a a$ does not exist. For the special case of $SU(2)$, $\bar ss$ is the isospin $I=2$ representation, $\bar a s$, $\bar s a$, $\bar a a$ do not exist, and $\text{adj}_S$ does not exist since the $d$-symbol vanishes. Further details on the group theory can be found in app.~A of ref.~\cite{Dashen:1994qi}. The various gauge operators are given by ($G_{\mu \lambda}$ denotes a generic field-strength tensor) \begin{align}\label{eq:3.36} O_G^{(R,c)} (r^-) &= -\frac{1 }{ 2 \pi r^-} \int \! \mathrm{d} \xi\, e^{-i \xi r^-}\, {\bar{n}}_\mu [G^{\mu \lambda }(\bar n \xi)\, \mathcal{W} (\bar n \xi )]_a \mathscr{C}_{ab}^{(R,c)}\, {\bar{n}}_\nu [\mathcal{W}^\dagger (0) \,G^{\nu}{}_\lambda(0)]_b \,, \end{align} where $a,b,c$ are gauge indices in the adjoint representation (upper vs.~lower indices do not matter, since the adjoint representation is real). $\mathscr{C}_{ab}^{(R,c)}$ is a Clebsch-Gordan coefficient for combining the two adjoints into state $c$ of representation $R$ given in \eq{reps}. The Clebsch-Gordan coefficients for the singlet, and the symmetric and antisymmetric adjoints are \begin{align} \mathscr{C}_{ab}^{(1)} &= \delta_{ab}\,, & \mathscr{C}_{ab}^{(\text{adj}_S,c)} &= d_{abc}\,, & \mathscr{C}_{ab}^{(\text{adj}_A,c)} &= -\mathrm{i} f_{abc} \,. \end{align} The Clebsch-Gordan coefficients for $\bar aa$, $\bar s s$, $\bar a s$ and $\bar s a$ are given in ref.~\cite{Dashen:1994qi}. We also have the corresponding polarized PDFs given by replacing $G^{\nu}{}_\lambda(0)$ by $- \mathrm{i}\, {\widetilde G}^{\nu}{}_\lambda(0)$ in \eq{3.36}. For the $SU(2) \times U(1)$ case, there are some additional PDFs. 
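The group-theory statements above are easy to verify numerically, e.g.\ that the $d$-symbol $d_{abc}=2\,\mathrm{tr}(\{t^a,t^b\}t^c)$ vanishes for $SU(2)$ (so $\text{adj}_S$ is absent) but not for $SU(3)$. A sketch, with generators normalized as $\mathrm{tr}(t^a t^b)=\tfrac12\delta_{ab}$:

```python
import numpy as np
from itertools import product

def su_n_generators(n):
    # generalized Gell-Mann matrices, normalized as tr(t^a t^b) = delta_ab / 2
    gens = []
    for j in range(n):
        for k in range(j + 1, n):
            m = np.zeros((n, n), dtype=complex); m[j, k] = m[k, j] = 1
            gens.append(m / 2)                       # symmetric off-diagonal
            m = np.zeros((n, n), dtype=complex); m[j, k] = -1j; m[k, j] = 1j
            gens.append(m / 2)                       # antisymmetric off-diagonal
    for l in range(1, n):
        diag = [1] * l + [-l] + [0] * (n - l - 1)    # diagonal (Cartan) generators
        gens.append(np.diag(diag).astype(complex) / np.sqrt(2 * l * (l + 1)))
    return gens

def d_symbol(ts):
    # d_abc = 2 tr({t^a, t^b} t^c)
    n = len(ts)
    d = np.zeros((n, n, n), dtype=complex)
    for a, b, c in product(range(n), repeat=3):
        d[a, b, c] = 2 * np.trace((ts[a] @ ts[b] + ts[b] @ ts[a]) @ ts[c])
    return d

assert np.allclose(d_symbol(su_n_generators(2)), 0)        # SU(2): no adj_S PDF
assert np.max(np.abs(d_symbol(su_n_generators(3)))) > 0.5  # SU(3): d_abc nonzero
```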
There are two more isotriplet gauge PDFs, \begin{align}\label{eq:3.36a} O_{W\!B}^{(I=1,c)} (r^-) &= -\frac{1 }{ 2 \pi r^-} \int \! \mathrm{d} \xi\, e^{-i \xi r^-}\, {\bar{n}}_\mu [W^{\mu \lambda }(\bar n \xi)\, \mathcal{W} (\bar n \xi )]_c \, {\bar{n}}_\nu [ B^{\nu}{}_\lambda(0)] \,,\nonumber \\ O_{BW}^{(I=1,c)} (r^-) &= -\frac{1 }{ 2 \pi r^-} \int \! \mathrm{d} \xi\, e^{-i \xi r^-}\, {\bar{n}}_\mu [B^{\mu \lambda }(\bar n \xi)] \, {\bar{n}}_\nu [\mathcal{W}^\dagger (0) \,W^{\nu}{}_\lambda(0)]_c \,. \end{align} A Wilson line is not needed for the $U(1)$ field-strength tensor, since it is a gauge singlet. Taking the Hermitian conjugate gives $[ O_{W\!B}^{(I=1,c)} (r^-) ]^\dagger = O_{BW}^{(I=1,c)} (r^-)$. We also have the polarized versions of \eq{3.36}, $O_{\Delta W B}$ and $O_{\Delta B W}$. For the massive electroweak gauge bosons, \eq{3.36} and its polarized version only give the PDFs for $h=\pm 1$. We also need the PDFs for $h=0$ longitudinally polarized gauge bosons. As we now discuss, these PDFs cannot be written as light-cone Fourier transforms of operators involving the field-strength tensor. For a massive $W$ moving in the $+z$ direction, i.e.~$\hat n = (0,0,1)$, its momentum and polarization vectors are \begin{align} p^\mu &=(E,0,0,p), & \epsilon_+ &= -\frac{1}{\sqrt 2}(0,1,i,0), & \epsilon_- &= \frac{1}{\sqrt 2}(0,1,-i,0), & \epsilon_0 &= \frac{1}{M_W}(p,0,0,E). \end{align} The field strength tensor annihilates a $W$ with amplitude \begin{align} \mathcal{A}^{\mu \nu} = -i(p^\mu \epsilon^\nu - p^\nu \epsilon^\mu). \end{align} For helicity $h=\pm1$ with polarization vectors $\epsilon_{\pm}$, $\mathcal{A}^{\mu \nu}$ has components of order $E$. For $h=0$, \begin{align} \epsilon_0^\mu &=\frac{1}{M_W} p^\mu - \frac{M_W}{E+p} \bar n^\mu, \end{align} and \begin{align} \mathcal{A}^{\mu \nu} = i \frac{M_W}{E+p} (p^\mu \bar n^\nu - p^\nu \bar n^\mu).
\end{align} Since the $p^\mu/M_W$ term does not contribute to $\mathcal{A}^{\mu \nu}$, $\mathcal{A}^{\mu \nu}$ has components of order $M_W$, which are suppressed by $M_W/E$ relative to transverse polarization. The PDF operators are bilinear in the field strength, so the longitudinal PDFs are suppressed by $M_W^2/E^2$ and are not leading twist. The longitudinal gauge boson PDFs are given in terms of scalar PDFs, using the Goldstone boson equivalence theorem~\cite{chanowitz,bohmbook}. We use the PDF operators \begin{align}\label{eq:cs2} O_H^{(I=0)}(r^-) &= \frac{r^- }{ 2 \pi} \int\! \mathrm{d} \xi\, e^{-i \xi r^-} [ H^\dagger (\bar n \xi )\, \mathcal{W} (\bar n \xi )]\ [\mathcal{W}^\dagger (0) \, H(0 ) ] \,, \nonumber \\ O_H^{(I=1,a)}(r^-) &= \frac{r^- }{ 2 \pi} \int\! \mathrm{d} \xi\, e^{-i \xi r^-} [ H^\dagger (\bar n \xi )\, \mathcal{W} (\bar n \xi )]\ t^a\ [\mathcal{W}^\dagger (0) \, H(0 ) ] \,, \end{align} for the Higgs field, which is given by \begin{align} H &= \begin{pmatrix} H^+ \\ H^0 \end{pmatrix} = \frac{1}{\sqrt 2}\! \begin{pmatrix} \varphi^2 + \mathrm{i} \varphi^1 \\ v + h - \mathrm{i} \varphi^3 \end{pmatrix}\,, \end{align} in the unbroken and broken phase, respectively. Here $h$ is the physical Higgs particle, and the unphysical scalars $\varphi^3,\varphi^\pm=(\varphi^1 \mp \mathrm{i} \varphi^2)/\sqrt{2}$ in the Higgs multiplet can be related to longitudinal electroweak gauge bosons $Z_L, W^\pm_L$ using the Goldstone boson equivalence theorem~\cite{chanowitz,bohmbook}. The $CP$-conjugate $\bar H$ PDFs are given by $H \leftrightarrow \bar H$ and $t^a \to (-t^a)^T$. There are two additional Higgs PDFs $O_{\widetilde HH}^{(I=0)} $, $O_{\widetilde H H}^{(I=1,c)} $ given by replacing $H^\dagger$ in \eq{cs2} by $\widetilde H^\dagger$, where \begin{align}\label{eq:wide} \widetilde H_j &= \epsilon_{jk} H^{\dagger\, k} = \begin{pmatrix} \bar H^0 \\ -H^- \end{pmatrix} . \end{align} The $CP$-conjugate PDFs are given by swapping $H \leftrightarrow \widetilde H$, and $t^a \to (-t^a)^T$.
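The kinematics behind the suppression of the longitudinal field-strength amplitude can be checked numerically. In the sketch below (with illustrative values of $E$ and $M_W$), the contraction ${\bar{n}}_\mu \mathcal{A}^{\mu\nu}$ entering the PDF operator is of order $E$ for transverse polarization but only of order $M_W$ for $\epsilon_0$:

```python
import numpy as np

M, E = 80.4, 5000.0                     # illustrative W mass and energy (GeV)
p = np.sqrt(E**2 - M**2)
g = np.diag([1.0, -1.0, -1.0, -1.0])    # mostly-minus metric

pmu   = np.array([E, 0, 0, p])
nbar  = np.array([1.0, 0, 0, -1.0])
eps_p = -np.array([0, 1, 1j, 0]) / np.sqrt(2)
eps_0 = np.array([p, 0, 0, E]) / M

# eps_0^mu = p^mu / M - M nbar^mu / (E + p), as in the text
assert np.allclose(eps_0, pmu / M - M / (E + p) * nbar)

def nbar_dot_A(eps):
    # nbar_mu A^{mu nu}, with A^{mu nu} = -i (p^mu eps^nu - p^nu eps^mu)
    A = -1j * (np.outer(pmu, eps) - np.outer(eps, pmu))
    return (g @ nbar) @ A               # lower the index on nbar, then contract

# transverse: components of order E; longitudinal: only of order M
assert np.max(np.abs(nbar_dot_A(eps_p))) > E
assert np.allclose(nbar_dot_A(eps_0), 1j * M * nbar)
```

The $p^\mu/M$ piece of $\epsilon_0$ drops out of the antisymmetric combination, which is why only the $M\,\bar n^\mu/(E+p)$ remainder survives.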
The operator $O_{\widetilde HH}^{(I=0)}$ breaks electromagnetism and does not contribute to factorization formulae. The operators $O_{\widetilde H H}^{(I=1,c)}$ and $O_{H \widetilde H}^{(I=1,c)}$ do contribute, and are included in our analysis. They have $Y=\pm 1$, and can occur in the factorization theorem in pairs. The $\Delta Q=0$ components have non-zero proton matrix elements. Taking Hermitian conjugates gives $[ O_{\widetilde HH}^{(I=1,c)}(r^-)]^\dagger = -O_{H \widetilde H}^{(I=1,c)} (r^-)$. \subsection{Anomalous dimensions} We will first briefly review rapidity divergences and the rapidity regulator of refs.~\cite{Chiu:2011qc,Chiu:2012ir}, which we use to treat them. Rapidity divergences arise, for example, in transverse-momentum-dependent factorization theorems, where the emission of a single soft gluon involves an integral over its rapidity with a rapidity-independent (i.e.~constant) integrand. To bring this divergence under control, the $\eta$ regulator of refs.~\cite{Chiu:2011qc,Chiu:2012ir} explicitly breaks boost invariance. The resulting $1/\eta$ poles cancel between the collinear and soft operators. These poles lead to renormalization group equations involving the rapidity renormalization scale $\nu$. Just as the $\mu$-evolution can be used to resum invariant mass logarithms, the $\nu$-evolution can be used to resum rapidity logarithms, which arise because the collinear and soft operators have different natural rapidity scales. This will be discussed in more detail in \sec{resummation}; see in particular \fig{nu}. In our case we do not measure the transverse momentum of the gauge boson, but instead have to account for the gauge boson mass. The RG equations of the collinear operators take the form \begin{align} \label{eq:C_RGE} \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, O_i(r^-,\mu,\nu) &= \sum_{j} \int_0^1\!
\frac{\mathrm{d} z}{z}\, \gamma_{\mu,ij}(z,r^-,\mu,\nu)\, O_j\Big(\frac{r^-}{z},\mu,\nu\Big) \,,\nonumber \\ \frac{\mathrm{d}}{\mathrm{d} \ln \nu}\, O_i(r^-,\mu,\nu) &= \gamma_{\nu,i}(\mu,\nu)\, O_i(r^-,\mu,\nu) \,.\end{align} The lower limit on $z$ turns into $r^-/p^-$ when the matrix element of the operator is taken in a state with momentum $p$. The anomalous dimension $\gamma_\mu$ depends not just on $z$ but also on $r^-$ because of rapidity divergences. The collinear operators mix under $\mu$ evolution, but are multiplicatively renormalized under $\nu$ evolution. The convolution in \eq{C_RGE} will be abbreviated by $\otimes$. Since we limit ourselves to one-loop results, we find it convenient to use the notation \begin{align} \gamma &\equiv \frac{\alpha}{\pi} \, \hat \gamma \,, \end{align} in intermediate expressions. We now compute the anomalous dimensions $\gamma_\mu$ and $\gamma_\nu$. The indices $i$ and $j$ in \eq{C_RGE} run over all the fermion and gauge boson PDFs. We will use the helicity basis $Q_+$, $Q_-$, $G_+$, $G_-$ rather than the more conventional basis $Q$, $\Delta Q$, $G$, $\Delta G$ used in QCD.
In QCD, there is no $\nu$-evolution and parity invariance implies that the $\mu$-evolution kernels satisfy \begin{align} \gamma_{\mu,Q_+ Q_+} & = \gamma_{\mu,Q_- Q_-}\,, & \gamma_{\mu,Q_+ Q_-} & = \gamma_{\mu,Q_- Q_+}\,, & \gamma_{\mu,G_+ G_+} & = \gamma_{\mu,G_- G_-}\,, & \gamma_{\mu,G_+ G_-} & = \gamma_{\mu,G_- G_+}\,, \nonumber \\ \gamma_{\mu,Q_+ G_+} & = \gamma_{\mu,Q_- G_-}\,, & \gamma_{\mu,Q_+ G_-} & = \gamma_{\mu,Q_- G_+}\,, & \gamma_{\mu,G_+ Q_+} & = \gamma_{\mu,G_- Q_-}\,, & \gamma_{\mu,G_+ Q_-} & = \gamma_{\mu,G_- Q_+}\,, \end{align} which allows one to write evolution equations which mix $\{ f_Q,\, f_G\}$, and separately mix $\{f_{\Delta Q}, f_{\Delta G}\}$ using the kernels \begin{align} \gamma_{\mu,QQ} &= \gamma_{\mu,Q_+Q_+} + \gamma_{\mu,Q_+Q_-}\,, & \gamma_{\mu,\Delta Q \Delta Q} &= \gamma_{\mu,Q_+Q_+} - \gamma_{\mu,Q_+Q_-} \nonumber \\ \gamma_{\mu,QG} &= \gamma_{\mu,Q_+G_+} + \gamma_{\mu,Q_+G_-} & \gamma_{\mu,\Delta Q\Delta G} &= \gamma_{\mu,Q_+G_+} - \gamma_{\mu,Q_+G_-} \nonumber \\ \gamma_{\mu,GQ} &= \gamma_{\mu,G_+Q_+} + \gamma_{\mu,G_+Q_-} & \gamma_{\mu,\Delta G\Delta Q} &= \gamma_{\mu,G_+Q_+} - \gamma_{\mu,G_+Q_-} \nonumber \\ \gamma_{\mu,G G} &= \gamma_{\mu,G_+G_+} + \gamma_{\mu,G_+G_-} & \gamma_{\mu,\Delta G\Delta G} &= \gamma_{\mu,G_+G_+} - \gamma_{\mu,G_+G_-}. \end{align} This simplification is not possible in the electroweak sector, since parity is not a good symmetry. We therefore write our results using the helicity basis $ \gamma_{Q_+Q_+}$, $ \gamma_{Q_+ Q_-}$, etc. 
\subsection{$\gamma_{QQ}$ and $\gamma_{HH}$} \begingroup \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \begin{table*} \centering \begin{eqnarray*} \begin{array}{c|c|c} \hline\hline \text{Graph} &\hat \gamma_\mu &\hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd3}} \end{minipage} & \frac{2}{(1-z)}_{\!+} -2- 2\,\delta(1-z) \ln \frac{\nu}{\bar n \cdot r} & - \ln \frac{\mu^2}{M^2} \\ \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd5}} \end{minipage} & 1-z & 0 \\ \hline \text{Total}_1 & \frac{2}{(1-z)}_{\!+} -z-1- 2\,\delta(1-z) \ln \frac{\nu}{\bar n \cdot r} & - \ln \frac{\mu^2}{M^2} \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd4}} \end{minipage} &2 \bigl( \ln \frac{\nu}{\bar n \cdot r} +1\bigr)\delta(1-z) & \ln \frac{\mu^2}{M^2} \\ \begin{minipage}{1.8cm} \mybox{\includegraphics[width=1.8cm]{fdfigs/fd1}} \end{minipage} & -\frac12\delta(1-z) & 0 \\ \hline \text{Total}_2 & \bigl(2\, \ln \frac{\nu}{\bar n \cdot r} +\frac 32 \bigr) \delta(1-z) & \ln \frac{\mu^2}{M^2} \\ \hline \hline \end{array} \end{eqnarray*} \caption{One-loop diagrams for the renormalization of fermion collinear operators. The columns show the graph and contribution to the $\mu$ and $\nu$ anomalous dimension. The combinatoric factor of $2$ for the first and third graphs, and $-1$ for the wavefunction graph has been included. Subsets of the graphs have been summed to give $\text{Total}_1$ and $\text{Total}_2$. For the singlet fermion PDF, $\text{Total}_1$ and $\text{Total}_2$ have group theory factor $c_F$. For the adjoint PDF, $\text{Total}_1$ has group theory factor $c_F-c_A/2$ and $\text{Total}_2$ has group theory factor $c_F$.} \label{tab:coll_fermion} \end{table*} \endgroup The one-loop gauge diagrams and resulting contributions to the $QQ$ anomalous dimensions are shown in table~\ref{tab:coll_fermion}. 
Their calculation was performed in section V of ref.~\cite{Manohar:2012jr}, using dimensional regularization for the UV divergences and the rapidity renormalization group~\cite{Chiu:2011qc,Chiu:2012ir} to treat the rapidity divergences. The graphs are divided into two sets, with the sum of each set given in the table. The value of individual diagrams depends on the gauge choice, but $\text{Total}_1$ and $\text{Total}_2$ remain the same. The color factor $c_{QQ}(R)$ for $\text{Total}_1$ depends on the PDF representation, whereas the color factor for $\text{Total}_2$ is $c_F$. The one-loop graphs in Table~\ref{tab:coll_fermion} conserve fermion helicity, implying that the helicity mixing terms $\gamma_{Q_+Q_-}$ and $\gamma_{Q_-Q_+}$ vanish at this order. The expressions for the diagrams hold for both $\gamma_{Q_+Q_+}$ for $Q=u,d,e$ and $\gamma_{Q_-Q_-}$ for $Q=q,l$, and lead to the following anomalous dimensions \begin{align} \label{eq:ga_C} \hat \gamma_{\mu,QQ}^{(R)} &= c_{QQ}(R) \Big( \frac{2}{(1-z)}_{\!+} -z-1 \Big) + \frac 32 c_F \delta(1-z) + 2\left[ c_F- c_{QQ}(R)\right] \ln \frac{\nu}{\bar n \cdot r}\, \delta(1-z) \,,\nonumber \\ &= c_{QQ}(R) \widetilde P_{QQ}(z) + \left[c_F- c_{QQ}(R)\right]\Big( 2 \ln \frac{\nu}{\bar n \cdot r} + \frac 32 \Big) \delta(1-z) \,,\nonumber \\ \hat \gamma_{\nu,Q}^{(R)} &= \left[c_F - c_{QQ}(R)\right] \ln \frac{\mu^2}{M^2} \,,\end{align} with \begin{align} \label{eq:ga_C_ap} \widetilde P_{QQ}(z) &= \frac{2}{(1-z)}_{\!+} -z-1 + \frac 32\, \delta(1-z) \,, \end{align} the usual Altarelli-Parisi evolution kernel. The group theory factor $c_{QQ}(R)$ is \begin{align}\label{eq:3.41} c_{QQ}(1) &= c_F & c_{QQ}(\text{adj}) &= c_F - \frac12 c_A\,. \end{align} As stated above, the expressions for the anomalous dimensions in \eq{ga_C} hold for both $Q_+Q_+$ and $Q_-Q_-$. However, there are differences in the anomalous dimensions due to group theory factors, since the gauge quantum numbers of SM fields depend on chirality (and hence helicity). 
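As a cross-check, the kernel $\widetilde P_{QQ}$ in \eq{ga_C_ap} integrates to zero (fermion-number conservation), as does the corresponding scalar kernel given below, and its second moment is $\int_0^1 \mathrm{d}z\, z\, \widetilde P_{QQ}(z) = -4/3$. A short numerical sketch showing how the plus distribution is handled (the midpoint quadrature with $n$ points is an arbitrary choice):

```python
def plus_moment(f, rest, delta_coef, n=100_000):
    # integral over [0,1] of [ 2/(1-z)_+ + rest(z) + delta_coef * delta(1-z) ] f(z);
    # the plus distribution subtracts f(1): int dz 2 (f(z) - f(1)) / (1 - z)
    total = delta_coef * f(1.0)
    h = 1.0 / n
    for i in range(n):
        z = (i + 0.5) * h                    # midpoint rule, never hits z = 1
        total += h * (2.0 * (f(z) - f(1.0)) / (1.0 - z) + rest(z) * f(z))
    return total

# fermion kernel: 2/(1-z)_+ - z - 1 + (3/2) delta(1-z)
pqq = lambda f: plus_moment(f, lambda z: -z - 1.0, 1.5)
# scalar kernel:  2/(1-z)_+ - 2 + 2 delta(1-z)
phh = lambda f: plus_moment(f, lambda z: -2.0, 2.0)

assert abs(pqq(lambda z: 1.0)) < 1e-6            # quark number conserved
assert abs(phh(lambda z: 1.0)) < 1e-6            # scalar number conserved
assert abs(pqq(lambda z: z) + 4.0 / 3.0) < 1e-4  # second moment = -4/3
```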
The above results also hold for the $\bar Q$ PDF. The $\ln \nu/(\bar n \cdot r)$ in \eq{ga_C} arises from the rapidity divergence. With our conventions, $\bar n \cdot r=2 E$, where $E$ is the energy of the outgoing parton. The $QQ$ anomalous dimension for the singlet PDF reproduces the standard result~\cite{Altarelli:1977zs,Dokshitzer:1977sg}. The rapidity divergences cancel in this case to yield $\gamma_\nu=0$. In ref.~\cite{Manohar:2012jr}, the gauge boson mass $M$ that appears in these expressions played the role of an infrared regulator and dropped out of the final result. Here the gauge boson mass $M$ is physical and does not drop out. As we will discuss in \sec{mixing}, the only gauge boson mass that enters for $SU(2) \times U(1)$ is $M=M_W$, with the exception of $O_{\widetilde HH}$, where $M=M_Z$ also contributes. \begingroup \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \begin{table*} \centering \begin{eqnarray*} \begin{array}{c|c|c} \hline\hline \text{Graph} &\hat \gamma_\mu &\hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd3s}} \end{minipage} & \frac{2}{(1-z)_+}-z - 2 - 2 \delta(1-z) \ln \frac{\nu}{{\bar{n}} \cdot r} & - \ln \frac{\mu^2}{M^2} \\ \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd5s}} \end{minipage} & z & 0 \\ \hline \text{Total}_1 & \frac{2}{(1-z)_+}- 2 - 2 \delta(1-z) \ln \frac{\nu}{{\bar{n}} \cdot r} & - \ln \frac{\mu^2}{M^2} \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd4s}} \end{minipage} & \bigl(2 \ln \frac{\nu}{{\bar{n}} \cdot r} +1\bigr)\delta(1-z) & \ln \frac{\mu^2}{M^2} \\ \begin{minipage}{1.8cm} \mybox{\includegraphics[width=1.8cm]{fdfigs/fd1s}} \end{minipage} & \delta(1-z) & 0 \\ \hline \text{Total}_2 & \bigl( 2 \ln \frac{\nu}{{\bar{n}} \cdot r} + 2 \bigr) \delta(1-z) & \ln \frac{\mu^2}{M^2} \\ \hline \hline \end{array} \end{eqnarray*} \caption{One-loop diagrams for the renormalization of scalar collinear operators.
Subsets of the graphs have been summed to give $\text{Total}_1$ and $\text{Total}_2$. For the singlet scalar PDF, $\text{Total}_1$ and $\text{Total}_2$ have group theory factor $c_F$. For the adjoint PDF, $\text{Total}_1$ has group theory factor $c_F-c_A/2$ and $\text{Total}_2$ has group theory factor $c_F$.} \label{tab:6} \end{table*} \endgroup An almost identical analysis holds for the mixing of scalar (i.e.\ Higgs) and gauge PDFs. The graphs are shown in table~\ref{tab:6}, and give \begin{align} \label{eq:3.17} \hat \gamma_{\mu,HH}^{(R)} &= c_{HH}(R) \widetilde P_{HH}(z) + \left[c_F- c_{HH}(R)\right]\Big(2 \ln \frac{\nu}{\bar n \cdot r} + 2\Big) \delta(1-z) \,,\nonumber \\ \hat \gamma_{\nu,H}^{(R)} &= \left[c_F - c_{HH}(R)\right] \ln \frac{\mu^2}{M^2} \,,\end{align} with \begin{align} \widetilde P_{HH}(z) &= \frac{2}{(1-z)}_{\!+} -2 + 2\, \delta(1-z) \,. \end{align} The group theory factor $c_{HH}(R)$ for scalars is the same as $c_{QQ}(R)$ for fermions. The scalar results also hold for the $\bar H$ PDF. \begingroup \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \begin{table*} \centering \begin{eqnarray*} \begin{array}{c|c|c} \hline\hline \text{Graph} &\hat \gamma_\mu &\hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd5h}} \end{minipage} & \frac12(1-z) & 0 \\ \hline \begin{minipage}{1.8cm} \mybox{\includegraphics[width=1.8cm]{fdfigs/fd1h}} \end{minipage} & -\frac14\delta(1-z) & 0 \\ \hline \end{array} \end{eqnarray*} \caption{One-loop Yukawa diagrams for the renormalization of fermion collinear operators. The Yukawa factors multiplying the graphs are given in \eqs{QQ_yuk}{HH_yuk}.} \label{tab:2} \end{table*} \endgroup The Yukawa diagrams which contribute to the fermion anomalous dimensions are shown in table~\ref{tab:2}. 
We will use the convention \begin{align} \mathcal{L}_Y &= - H^{\dagger j} \overline d_r\, [Y_d]_{rs}\, q_{j,s} - \widetilde H^{\dagger j} \overline u_r\, [Y_u]_{rs}\, q_{j,s} - H^{\dagger j} \overline e_r\, [Y_e]_{rs} \, \ell_{j,s} + \hbox{h.c.} \label{eq:sm} \end{align} for the Yukawa couplings, where $j$ is an $SU(2)$ index, $r,s$ are flavor indices, and $\widetilde H$ is given in \eq{wide}. The Lagrangian in \eq{sm} is written in the weak eigenstate basis. The Yukawa interaction \eq{sm} is gauge invariant only if the weak gauge group is $SU(2)$, so the Yukawa contributions given below are only valid in this case. In the Standard Model, one can pick a flavor basis in which \begin{align}\label{eq:3.21} \frac{v}{\sqrt 2} Y_e &= M_e, & \frac{v}{\sqrt 2} Y_u &= M_u, & \frac{v}{\sqrt 2} Y_d &= M_d V^\dagger, \end{align} where $M_e=\text{diag}(m_e,m_\mu,m_\tau)$, $M_u=\text{diag}(m_u,m_c,m_t)$, $M_d=\text{diag}(m_d,m_s,m_b)$, and $V$ is the CKM mixing matrix. Since the only heavy fermion is the top quark, one can, to a very good approximation, let \begin{align}\label{eq:approx} Y_e &\to 0, & Y_d &\to 0, & Y_u &\to \text{diag}(0,0,\sqrt{2}\, m_t/v)\,, \end{align} in the anomalous dimensions. We will, however, retain the full Yukawa dependence for the moment. The scalar exchange contribution to the evolution kernel is not diagonal in flavor.
Letting $O_{Q,r,s}$ denote the $Q$ PDF with fields $\bar Q_r$ and $Q_s$ in \eq{3.33}, where $r,s$ are flavor (generation) indices, the Yukawa contributions to the evolution equations are: \begin{align} \label{eq:QQ_yuk} \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{d,r,s} &= \frac{1}{4\pi^2} \biggl[ \frac12 (1-z) [Y_d^\dagger]_{vr} [Y_d]_{sw} \otimes O^{(I=0)}_{q,v,w} -\frac14 \delta(1-z) [Y_d Y_d^\dagger]_{vr}\otimes O^{(I=0)}_{d,v,s} \nonumber \\ & - \frac14 \delta(1-z) [Y_d Y_d^\dagger]_{sv} \otimes O^{(I=0)}_{d,r,v} \biggr] \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{u,r,s} &= \frac{1}{4\pi^2} \biggl[ \frac12 (1-z) [Y_u^\dagger]_{vr} [Y_u]_{sw} \otimes O^{(I=0)}_{q,v,w} -\frac14 \delta(1-z) [Y_u Y_u^\dagger]_{vr} \otimes O^{(I=0)}_{u,v,s} \nonumber \\ & - \frac14 \delta(1-z) [Y_u Y_u^\dagger]_{sv} \otimes O^{(I=0)}_{u,r,v} \biggr] \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{e,r,s} &= \frac{1}{4\pi^2} \biggl[ \frac12 (1-z) [Y_e^\dagger]_{vr} [Y_e]_{sw} \otimes O^{(I=0)}_{\ell,v,w} -\frac14 \delta(1-z) [Y_e Y_e^\dagger]_{vr} \otimes O^{(I=0)}_{e,v,s} \nonumber \\ & - \frac14 \delta(1-z) [Y_e Y_e^\dagger]_{sv} \otimes O^{(I=0)}_{e,r,v} \biggr] \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{q,r,s} &= \frac{1}{4\pi^2} \biggl[ (1-z) [Y_d]_{vr} [Y_d^\dagger]_{sw} \otimes O^{(I=0)}_{d,v,w} + (1-z) [Y_u]_{vr} [Y_u^\dagger]_{sw} \otimes O^{(I=0)}_{u,v,w} \nonumber \\ & -\frac18 \delta(1-z) [Y_d^\dagger Y_d + Y_u^\dagger Y_u]_{vr} \otimes O^{(I=0)}_{q,v,s} - \frac18 \delta(1-z) [Y_d^\dagger Y_d +Y_u^\dagger Y_u]_{sv} \otimes O^{(I=0)}_{q,r,v} \biggr] \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{\ell,r,s} &= \frac{1}{4\pi^2} \biggl[ (1-z) [Y_e]_{vr} [Y_e^\dagger]_{sw} \otimes O^{(I=0)}_{e,v,w} -\frac18 \delta(1-z) [Y_e^\dagger Y_e ]_{vr} \otimes O^{(I=0)}_{\ell,v,s} \nonumber \\ & - \frac18 \delta(1-z) [Y_e^\dagger Y_e]_{sv} \otimes O^{(I=0)}_{\ell,r,v} \biggr] \,, \nonumber
\\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=1)}_{q,r,s} &= \frac{1}{4\pi^2} \biggl[ -\frac18 \delta(1-z) [Y_d^\dagger Y_d + Y_u^\dagger Y_u]_{vr} \otimes O^{(I=1)}_{q,v,s} - \frac18 \delta(1-z) [Y_d^\dagger Y_d +Y_u^\dagger Y_u]_{sv} \otimes O^{(I=1)}_{q,r,v} \biggr] \,,\nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=1)}_{\ell,r,s} &= \frac{1}{4\pi^2} \biggl[ -\frac18 \delta(1-z) [Y_e^\dagger Y_e ]_{vr} \otimes O^{(I=1)}_{\ell,v,s} - \frac18 \delta(1-z) [Y_e^\dagger Y_e]_{sv} \otimes O^{(I=1)}_{\ell,r,v} \biggr] \,. \end{align} The antifermion evolution equations are given by $CP$ conjugation, i.e.\ by replacing $d,r,s \leftrightarrow \bar d,s,r$, etc.\ on both sides of the equation. The factorization theorem leads to collinear operators with $r=s$, which can mix with $r \not= s$ operators under evolution. In the final results, we will use \eq{approx}, which greatly simplifies matters: most terms vanish, and the Yukawa evolution is flavor diagonal and only contributes to the third generation. Yukawa couplings give an additional contribution to the Higgs wavefunction renormalization, and hence an additional term in the $HH$ anomalous dimension \begin{align} \label{eq:HH_yuk} \gamma_{\mu,HH} &= -\frac1{8\pi^2} \mathrm{tr} \big[ N_c Y_u^\dagger Y_u + N_c Y_d^\dagger Y_d + Y_e^\dagger Y_e \big] \delta(1-z)\,, \end{align} which must be added to \eq{3.17}.
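For orientation, note how \eq{approx} simplifies this term: keeping only the top Yukawa coupling gives $\mathrm{tr}\,[Y_u^\dagger Y_u]=2m_t^2/v^2$, while the $Y_d$ and $Y_e$ traces vanish, so \eq{HH_yuk} reduces to \begin{align} \gamma_{\mu,HH} &\to -\frac{N_c\, m_t^2}{4\pi^2 v^2}\, \delta(1-z) \,. \end{align}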
\subsection{$\gamma_{GG}$} \begingroup \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \begin{table*} \centering \begin{eqnarray*} \begin{array}{c|c|c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c|}{P_{G_+ G_+}} & \multicolumn{2}{c}{P_{G_+ G_-}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd13}} \end{minipage} & \frac{2}{(1-z)_+}-1- 2 \ln \frac{\nu}{\bar n \cdot r }\,\delta(1-z) & -\ln \frac{\mu^2}{M^2} & 0 & 0 \\ \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd15}} \end{minipage} & \frac{1}{z}+1-z^2 & 0 & \frac{(1-z)^3}{z} & 0 \\ \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd17}} \end{minipage} & -1-z & 0 & 0 & 0 \\ \hline \text{Total}_1 & \frac{2}{(1-z)_+} +\frac{1}{z}-1-z-z^2 - 2 \ln \frac{\nu}{\bar n \cdot r }\,\delta(1-z) & -\ln \frac{\mu^2}{M^2} & \frac{(1-z)^3}{z} & 0 \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd14}} \end{minipage} & c_A\big( 2 \ln \frac{\nu}{\bar n \cdot r}+\frac52\big) \delta(1-z) & c_A \ln \frac{\mu^2}{M^2} & 0 & 0 \\ \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd22}} \end{minipage} & -\frac32 c_A \delta(1-z) & 0 & 0 & 0 \\ \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd11}} \end{minipage} & \big(\frac{b_0}{2} - c_A\big)\delta(1-z) & 0 & 0 & 0 \\ \hline \text{Total}_2 & \big(\frac{b_0}{2}+2 c_A \ln \frac{\nu}{\bar n \cdot r} \big)\delta(1-z) & c_A \ln \frac{\mu^2}{M^2} & 0 & 0 \\ \hline\hline \end{array} \end{eqnarray*} \caption{One-loop diagrams for the renormalization of collinear gauge boson operators. The columns show the graph and contribution to the $\mu$ and $\nu$ anomalous dimension for $P_{G_+G_+}$, $P_{G_+G_-}$. At one-loop, $P_{G_+G_+}=P_{G_-G_-}$ and $P_{G_+G_-}=P_{G_-G_+}$. Combinatoric factors have been included. 
Subsets of the graphs have been summed to give $\text{Total}_1$ and $\text{Total}_2$. $\text{Total}_2$ has group theory factor $1$ in all cases, since its group theory factors are already included in the table. $\text{Total}_1$ has group theory factors given in \eq{3.43}.} \label{tab:coll_gauge} \end{table*} \endgroup The one-loop diagrams contributing to the evolution of gauge-boson collinear operators are listed in table \ref{tab:coll_gauge} for $P_{G_+G_+}$ and $P_{G_+G_-}$. At one-loop order $P_{G_+G_+}=P_{G_-G_-}$ and $P_{G_+G_-}=P_{G_-G_+}$, but these relations do not hold at higher order. As for fermions, the individual graphs depend on the gauge, but $\text{Total}_1$ and $\text{Total}_2$ do not. The anomalous dimensions are \begin{align} \label{eq:ga_C_W} \hat \gamma_{\mu,G_+G_+}^{(R)} &=c_{GG}(R) \left[\frac{2}{(1-z)_+} + \frac{1}{z}-1-z-z^2\right] + \left[\frac{b_0}{2}+2 \left(c_A-c_{GG}(R) \right) \ln \frac{\nu}{\bar n \cdot r} \right]\delta(1-z) \nonumber \\ &=c_{GG}(R) \widetilde P_{G_+G_+}(z) + \left[\frac{b_0}{2}+2 \left(c_A-c_{GG}(R) \right) \ln \frac{\nu}{\bar n \cdot r} \right]\delta(1-z) \,, \nonumber \\ \hat \gamma_{\mu,G_+G_-}^{(R)} &= c_{GG}(R) \frac{(1-z)^3}{z} =c_{GG}(R) \widetilde P_{G_+G_-}(z)\,, \nonumber \\ \hat \gamma_{\nu,G_+G_+}^{(R)} &= \left[ c_A-c_{GG}(R) \right] \ln \frac{\mu^2}{M^2}\delta(1-z) \,, \nonumber \\ \hat \gamma_{\nu,G_+G_-}^{(R)} &= 0 \,,\end{align} where \begin{align} \widetilde P_{G_+G_+}(z) &=\frac{2}{(1-z)_+} + \frac{1}{z}-1-z-z^2 \,,\nonumber \\ \widetilde P_{G_+G_-}(z) &= \frac{(1-z)^3}{z} \,, \end{align} are the helicity components of the usual Altarelli-Parisi kernels excluding the $\delta(1-z)$ term, and $b_0$ is the first term in the $\beta$-function, \begin{align} \mu \frac{\mathrm{d} g}{\mathrm{d} \mu} &=- \frac{g^3}{16\pi^2}\, b_0 + \ord{g^5}\,. 
\end{align} The group theory factors are \begin{align}\label{eq:3.43} c_{GG}(1) &= c_A\,, & c_{GG}(\text{adj}_S) &= \frac12 c_A\,, & c_{GG}(\text{adj}_A) &= \frac12 c_A\,, \nonumber \\ c_{GG}(\bar a s) &= 0\,, & c_{GG}(\bar sa ) &= 0\,, & c_{GG}(\bar a a) &= 1\,, & c_{GG}(\bar s s) &= -1\,. \end{align} For the singlet PDF, $c_{GG}(1)=c_A$, so \eq{ga_C_W} reduces to the standard result~\cite{Altarelli:1977zs}, and $\gamma_\nu$ vanishes. For the present analysis, we need the results for the gauge group $SU(2)$, so the only representations in \eq{3.43} which occur are $1$, $\text{adj}_A$ and $\bar ss$, which are the $SU(2)$ singlet, triplet, and quintet representations with weak isospin $I=0,1,2$. For the $W\!B$ and $BW$ PDFs, there are only virtual corrections, which are diagonal in helicity, \begin{align} \gamma_{\mu,W\!B} &= \gamma_{\mu,BW} = \frac{\alpha_2}{\pi} \left[\frac{b_{0,2}}{4}+2 \ln \frac{\nu}{\bar n \cdot r} \right]\delta(1-z) + \frac{\alpha_1}{\pi} \left[\frac{b_{0,1}}{4} \right]\delta(1-z)\, , \nonumber \\ \gamma_{\nu,W\!B} &= \gamma_{\nu,BW} = \frac{\alpha_2}{\pi} \ln \frac{\mu^2}{M^2} \,, \end{align} where $b_{0,2}$ is $b_0$ for the $SU(2)$ gauge group, and similarly for $b_{0,3}$ and $b_{0,1}$. The $W\!B$ PDFs do not mix with the triplet $W$ PDF at one-loop, since the $W$ and $B$ bosons do not interact at tree level. \subsection{$\gamma_{QG}$ and $\gamma_{HG}$} \begingroup \begin{table*} \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \centering \begin{align*} \begin{array}{c|c|c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c|}{P_{Q_+ G_+}} & \multicolumn{2}{c}{P_{Q_+ G_-}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd20}} \end{minipage} & z^2 & 0 & (1-z)^2 & 0 \\ \hline\hline \end{array} \end{align*} \caption{\label{tab:QG} One-loop diagrams contributing to $QG$ mixing.
The columns show the graph and contribution to the $\mu$ and $\nu$ anomalous dimension. The group theory factors are given in \eqs{3.48}{barQG}.} \end{table*} \endgroup Fermion and gauge boson PDFs can mix, and the $QG$ element of the mixing matrix is due to the graph shown in table~\ref{tab:QG}. The graph has no rapidity divergence, so (for all four choices of signs) \begin{align}\label{eq:3.47} \hat \gamma_{\mu,Q_{\pm}G_{\pm}}^{(R)} &= c_{QG}(R)\, \widetilde P_{Q_{\pm} G_{\pm}}(z) , & \widetilde P_{Q_+G_+}(z) &= z^2 , & \widetilde P_{Q_+G_-}(z) &= (1-z)^2 , \end{align} where $c_{QG}(R)$ is the group theory factor, and $\widetilde P_{Q_{\pm} G_{\pm}}(z)$ are the usual Altarelli-Parisi kernels. At one-loop, $\widetilde P_{Q_+G_+}(z)=\widetilde P_{Q_-G_-}(z)$ and $\widetilde P_{Q_+G_-}(z)=\widetilde P_{Q_-G_+}(z)$, but the anomalous dimensions do not satisfy these equalities because the group theory factors for $Q_+$ and $Q_-$ differ. Fermion and gauge boson PDFs which mix must have the same gauge representation, so the only mixing is in the singlet and adjoint sectors, for which \begin{align}\label{eq:3.48} c_{QG}(1) &= t_F \,, \qquad c_{QG}(\text{adj}_S) = \frac12 t_F \,, \qquad c_{QG}(\text{adj}_A) = \frac12 t_F \,,\end{align} where $t_F=1/2$ is the index of the fundamental representation. For the antifermion PDF, \begin{align}\label{eq:barQG} c_{\bar QG}(1) &= t_F \,, \qquad c_{\bar QG}(\text{adj}_S) = -\frac12 t_F \,, \qquad c_{\bar QG}(\text{adj}_A) = \frac12 t_F \,.\end{align} The triplet quark PDF can mix with the $W\!B$ and $BW$ PDFs, with anomalous dimensions \begin{align}\label{eq:3.30} \gamma_{\mu, Q\pm\,W\!B\pm} &= \gamma_{\mu, Q\pm\,BW\pm} = \frac{g_1 g_2}{4 \pi^2}\, \mathsf{y}_Q\,\widetilde P_{Q\pm G\pm}(z), \nonumber \\ \gamma_{\mu, \bar Q\pm\,W\!B\pm} &= \gamma_{\mu, \bar Q\pm\,BW\pm} = -\frac{g_1 g_2}{4 \pi^2}\, \mathsf{y}_Q\, \widetilde P_{Q\pm G\pm}(z) .
\end{align} \begingroup \begin{table*} \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \centering \begin{minipage}{0.4\textwidth} \begin{align*} \begin{array}{c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c}{P_{HG_+}={P_{HG_-}}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd20s}} \end{minipage} & z(1-z) & 0 \\ \hline\hline \end{array} \end{align*} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{align*} \begin{array}{c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c}{P_{G_+H}={P_{G_-H}}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd21s}} \end{minipage} & \frac{(1-z)}{z} & 0 \\ \hline\hline \end{array} \end{align*} \end{minipage} \caption{\label{tab:7} The one-loop diagrams contributing to $HG$ and $GH$ mixing in the anomalous dimension of collinear operators. The columns show the graph and contribution to the $\mu$ and $\nu$ anomalous dimension. The group theory factors are given in eqs.~\eqref{eq:3.48}, \eqref{eq:barQG} and \eqref{eq:3.52}.} \end{table*} \endgroup The above analysis also applies to scalars, using the results in table~\ref{tab:7}, \begin{align} \hat \gamma_{\mu,HG_\pm }^{(R)} &= c_{HG}(R)\, \widetilde P_{HG _\pm}(z) , & \widetilde P_{HG_\pm}(z) &= z(1-z) , \end{align} where the group theory factors $c_{HG}(R)$ are the same as for fermions in eqs.~\eqref{eq:3.48}--\eqref{eq:3.30}. 
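As a simple cross-check of tables~\ref{tab:QG} and \ref{tab:7}, the helicity entries sum to the familiar unpolarized kernels, \begin{align} \widetilde P_{Q_+G_+}(z) + \widetilde P_{Q_+G_-}(z) &= z^2 + (1-z)^2 \,, \nonumber \\ \widetilde P_{HG_+}(z) + \widetilde P_{HG_-}(z) &= 2 z(1-z) \,, \end{align} the first of which is the standard unpolarized $qg$ Altarelli-Parisi kernel.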
\subsection{$\gamma_{GQ}$ and $\gamma_{GH}$} \begingroup \begin{table*} \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \centering \begin{align*} \begin{array}{c|c|c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c|}{P_{G_+ Q_+}} & \multicolumn{2}{c}{P_{G_+ Q_-}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd21}} \end{minipage} & \frac{1}{z} & 0 & \frac{(1-z)^2}{z} & 0 \\ \hline\hline \end{array} \end{align*} \caption{\label{tab:GQ} One-loop diagrams contributing to $GQ$ mixing. The columns show the graph and contribution to the $\mu$ and $\nu$ anomalous dimension. The group theory factors are given in \eq{3.52}.} \end{table*} \endgroup The $GQ$ element of the mixing matrix is due to the graph shown in table~\ref{tab:GQ}, and gives rise to the anomalous dimension \begin{align}\label{eq:3.51} \hat \gamma_{\mu,G_\pm Q_\pm }^{(R)} &= c_{GQ}(R)\, \widetilde P_{G_\pm Q_\pm}(z) \,, & \widetilde P_{G_+Q_+}(z) &= \frac{1}{z}\,, & \widetilde P_{G_+ Q_-}(z) &= \frac{(1-z)^2}{z}\,, \end{align} where $c_{GQ}(R)$ is the group theory factor, and $\widetilde P_{G_{\pm} Q_{\pm}}(z)$ are the usual Altarelli-Parisi kernels. At one-loop, $\widetilde P_{G_+Q_+}(z)=\widetilde P_{G_-Q_-}(z)$ and $\widetilde P_{G_+Q_-}(z)=\widetilde P_{G_-Q_+}(z)$, but the anomalous dimensions do not satisfy these equalities because the group theory factors for $Q_+$ and $Q_-$ are not equal. As a result, gauge boson PDFs (including the gluon) develop a polarization asymmetry $f_{g_+}-f_{g_-}$ through parity-violating $\mu$ evolution. The only mixing is in the singlet and adjoint sectors, for which \begin{align}\label{eq:3.52} c_{GQ}(1) &= c_F , & c_{GQ}(\text{adj}_S) &= \frac{N^2-4}{2N} , & c_{GQ}(\text{adj}_A) &= \frac12 c_A ,\nonumber \\ c_{G \bar Q}(1) &=c_F, & c_{G \bar Q}(\text{adj}_S) &= -\frac{N^2-4}{2N}, & c_{G \bar Q}(\text{adj}_A) &= \frac12 c_A. 
\end{align} The $W\!B$ and $BW$ PDFs can mix with the triplet quark PDF, with anomalous dimensions \begin{align}\label{eq:3.54} \gamma_{\mu, W\!B\pm\, Q\pm} &= \gamma_{\mu, BW\pm\, Q\pm} = \frac{g_1 g_2}{4 \pi^2}\, \mathsf{y}_Q\, \widetilde P_{G\pm Q\pm}(z) , \nonumber \\ \gamma_{\mu, W\!B\pm\, \bar Q\pm} &= \gamma_{\mu, BW\pm\, \bar Q\pm} = -\frac{g_1 g_2}{4 \pi^2}\, \mathsf{y}_Q \,\widetilde P_{G\pm Q\pm}(z) . \end{align} Similar results hold for the $GH$ entries using table~\ref{tab:7}, \begin{align}\label{eq:3.51a} \hat \gamma_{\mu,G_\pm H}^{(R)} &= c_{GH}(R)\, \widetilde P_{GH}(z) \,, & \widetilde P_{G_\pm H}(z) &= \frac{(1-z)}{z}\,, \end{align} where the group theory factors $c_{GH}(R)$ are the same as for fermions in \eqs{3.52}{3.54}. \subsection{$\gamma_{HQ}$ and $\gamma_{QH}$} \begingroup \begin{table*} \renewcommand{\arraystretch}{1.5} \setlength{\arraycolsep}{3pt} \centering \begin{minipage}{0.4\textwidth} \begin{align*} \begin{array}{c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c}{P_{HQ_+}={P_{HQ_-}}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd21h}} \end{minipage} &\quad \frac{z}{2} \quad & 0 \\ \hline\hline \end{array} \end{align*} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{align*} \begin{array}{c|c|c} \hline\hline \text{Graph} & \multicolumn{2}{c}{P_{Q_+H}={P_{Q_-H}}} \\ \hline & \hat \gamma_\mu & \hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[width=2cm]{fdfigs/fd20h}} \end{minipage} &\quad \frac12 \quad & 0 \\ \hline\hline \end{array} \end{align*} \end{minipage} \caption{\label{tab:8} The one-loop diagrams contributing to $HQ$ and $QH$ mixing in the anomalous dimension of collinear operators. The columns show the graph and contribution to the $\mu$ and $\nu$ anomalous dimension. 
The Yukawa factors are given in \eqs{HQ}{QH}.} \end{table*} \endgroup The mixing of fermion and scalar operators via Yukawa couplings is shown in table~\ref{tab:8}. The $HQ$ anomalous dimensions are \begin{align} \label{eq:HQ} \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{H} &= \frac{1}{4\pi^2} \frac{z}{2} \biggl[ [Y_d^\dagger Y_d]_{rs} \otimes O^{(I=0)}_{q,r,s} + [Y_u^\dagger Y_u]_{rs} \otimes O^{(I=0)}_{\bar q,s,r} + 2 [Y_d Y_d^\dagger]_{rs} \otimes O^{(I=0)}_{\bar d,s,r} \nonumber \\ & + 2 [Y_u Y_u^\dagger]_{rs} \otimes O^{(I=0)}_{u,r,s} + [Y_e^\dagger Y_e]_{rs} \otimes O^{(I=0)}_{\ell,r,s} + 2 [Y_e Y_e^\dagger]_{rs} \otimes O^{(I=0)}_{\bar e,s,r} \biggr] \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=1)}_{H} &= \frac{1}{4\pi^2} \frac{z}{2} \biggl[ [Y_d^\dagger Y_d]_{rs} \otimes O^{(I=1)}_{q,r,s} + [Y_u^\dagger Y_u]_{rs} \otimes O^{(I=1)}_{\bar q,s,r} + [Y_e^\dagger Y_e]_{rs} \otimes O^{(I=1)}_{\ell,r,s}\biggr]\,, \end{align} and the $\bar H$ anomalous dimensions are given by charge conjugation, i.e.\ replacing $d,r,s \leftrightarrow \bar d,s,r$, etc.~on the right-hand side.
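To illustrate the simplification from \eq{approx}: only $[Y_u^\dagger Y_u]_{33}=[Y_u Y_u^\dagger]_{33}=2m_t^2/v^2$ survives, so the $I=0$ equation collapses to a third-generation contribution, \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{H} &\to \frac{m_t^2}{4\pi^2 v^2}\, z \otimes \Bigl[ O^{(I=0)}_{\bar q,3,3} + 2\, O^{(I=0)}_{u,3,3} \Bigr] \,, \end{align} and analogously for the $I=1$ equation, where only the $O^{(I=1)}_{\bar q,3,3}$ term survives.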
The $QH$ anomalous dimensions are \begin{align} \label{eq:QH} \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{d,r,s} &= \frac{N_c}{8\pi^2} [Y_d Y_d^\dagger]_{sr} \otimes O^{(I=0)}_{\bar H} \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{u,r,s} &= \frac{N_c}{8\pi^2} [Y_u Y_u^\dagger]_{sr} \otimes O^{(I=0)}_{H} \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{q,r,s} &= \frac{N_c}{8\pi^2} \Bigl( [ Y_d^\dagger Y_d]_{sr} \otimes O^{(I=0)}_{ H} +[ Y_u^\dagger Y_u]_{sr} \otimes O^{(I=0)}_{\bar H} \Bigr)\,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=1)}_{q,r,s} &= \frac{N_c}{8\pi^2} \Bigl( [ Y_d^\dagger Y_d]_{sr} \otimes O^{(I=1)}_{ H} +[ Y_u^\dagger Y_u]_{sr} \otimes O^{(I=1)}_{\bar H} \Bigr)\,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{e,r,s} &= \frac{1}{8\pi^2} [Y_e Y_e^\dagger]_{sr} \otimes O^{(I=0)}_{\bar H} \,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=0)}_{\ell,r,s} &= \frac{1}{8 \pi^2} [ Y_e^\dagger Y_e]_{sr} \otimes O^{(I=0)}_{ H}\,, \nonumber \\ \mu \frac{\mathrm{d}}{\mathrm{d} \mu}\ O^{(I=1)}_{\ell,r,s} &= \frac{1}{8\pi^2} [ Y_e^\dagger Y_e]_{sr} \otimes O^{(I=1)}_{ H}\,, \end{align} and the $\bar QH$ anomalous dimensions are given by charge conjugation, i.e.\ by $d,r,s \to \bar d,s,r$ on the l.h.s.\ and $H \leftrightarrow \bar H$ on the r.h.s.\ These results greatly simplify when using \eq{approx}. \subsection{$SU(2) \times U(1)$ mixing} \label{sec:mixing} $SU(2) \times U(1)$ mixing affects the $\nu$-anomalous dimension, since $\hat \gamma_\nu$ contains an explicit gauge boson mass $M$, and $M_W \neq M_Z$ in the Standard Model. As we will now discuss, $M$ is equal to $M_W$ and not $M_Z$ (with one exception). In \sec{low_matching} we will see that the adjoint index in the $SU(2)$ triplet quark and gauge boson operator must be 3, which is a consequence of electric charge conservation. 
Direct inspection of the calculation in the previous subsections shows that for the exchange of a $W^3 = \cos \theta_W\, Z + \sin \theta_W\, A$ boson, the group theory factors for Total$_1$ and Total$_2$ are the same. Thus its contribution to $\gamma_\nu$ drops out, leaving only contributions involving $M_W$. The one exception is $O_{\widetilde H H}^{(I=1)}$, where inspection of its corrections in the broken phase reveals that both $M_W$ and $M_Z$ enter, see \eq{tildeHH}. This is also obvious from the presence of $\alpha_1$ in $\gamma_\nu$. \subsection{Fragmentation functions} \label{sec:RGE_FF} Next we consider the collinear operators for outgoing particles. We start with the case where a particle in the final state is identified, e.g.~the electron in DIS, and its momentum fraction is measured. In this case, the matrix elements of collinear operators lead to fragmentation functions, which were defined in ref.~\cite{Collins:1981uw}. As is well known, the one-loop anomalous dimensions of fragmentation functions can be obtained from those for the PDF, and this holds for the gauge non-singlet case as well. The diagonal anomalous dimensions $QQ$ and $GG$ are the same for the PDF and fragmentation function, except that the helicity labels are interchanged because the roles of the field and the external state are swapped, \begin{align} \hat \gamma_{\mu, Q_\pm Q_\pm}^{(\text{frag})} (x,r^-,\mu,\nu) &= \hat \gamma_{\mu,Q_\pm Q_\pm}^{(\text{PDF})} (x,r^-,\mu,\nu) \,, & \hat \gamma_{\nu, Q_\pm}^{(\text{frag})} (\mu,\nu) &= \hat \gamma_{\nu,Q_\pm}^{(\text{PDF})} (\mu,\nu) \,, \nonumber \\ \hat \gamma_{\mu,G_+ G_\pm }^{(\text{frag})} (x,r^-,\mu,\nu) &= \hat \gamma_{\mu,G_\pm G_+}^{(\text{PDF})} (x,r^-,\mu,\nu) \,, & \hat \gamma_{\nu, G_\pm}^{(\text{frag})} (\mu,\nu) &= \hat \gamma_{\nu,G_\pm}^{(\text{PDF})} (\mu,\nu) \,,\end{align} etc.
For the off-diagonal entries the flavor labels are also swapped, \begin{align} \hat \gamma_{\mu, Q_+G_\pm}^{(\text{frag})} (x,\mu) &= \hat \gamma_{\mu, G_\pm Q_+}^{(\text{PDF})} (x,\mu) \,, & \hat \gamma_{\mu, G_+Q_\pm}^{(\text{frag})} (x,\mu) &= \hat \gamma_{\mu, Q_\pm G_+}^{(\text{PDF})} (x,\mu) \,, \nonumber \\ \hat \gamma_{\mu, HG_\pm}^{(\text{frag})} (x,\mu) &= \hat \gamma_{\mu, G_\pm H}^{(\text{PDF})} (x,\mu) \,, & \hat \gamma_{\mu, G_\pm H}^{(\text{frag})} (x,\mu) &= \hat \gamma_{\mu, HG_\pm}^{(\text{PDF})} (x,\mu) \,,\end{align} etc. We also consider the case where no final-state particle is detected, which we obtain by summing over all possible final states. If $D_{Q_\pm \to P}(x,\mu,\nu)$ is the fragmentation function for $Q_\pm$ to produce a particle $P$ with momentum fraction $x$, then not observing the final state leads to the sum\footnote{The factor of $x$ accounts for identical particles, as discussed in sec.~2.5 of ref.~\cite{Jain:2011xz}.} \begin{align} \sum_P \int_0^1 \mathrm{d} x\, x\, D_{Q_\pm \to P}(x,\mu,\nu)\,, \end{align} where the sum on $P$ runs over all final-state particles, and the integral is over their momentum fraction. The momentum sum rule for fragmentation functions then implies that \begin{align} \sum_P \int_0^1 \mathrm{d} x\, x\, D_{Q_\pm \to P}(x,\mu,\nu)&=1, & \sum_P \int_0^1 \mathrm{d} x\, x\, D_{G_\pm \to P}(x,\mu,\nu)&=1, \end{align} for the gauge singlet fragmentation functions, and \begin{align} \sum_P \int_0^1 \mathrm{d} x\, x\, D^{(R,\alpha)}_{Q_\pm \to P}(x,\mu,\nu)&=0, & \sum_P \int_0^1 \mathrm{d} x\, x\, D^{(R,\alpha)}_{G_\pm \to P}(x,\mu,\nu)&=0, \end{align} for the gauge non-singlet case.
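The one-loop kernels computed above are consistent with these sum rules. As a sketch of the check for a scalar, using the common singlet group theory factor $c_{HH}(1)=c_{GH}(1)=c_F$, \begin{align} \int_0^1 \mathrm{d} z\, z \Bigl[ \widetilde P_{HH}(z) + \widetilde P_{G_+H}(z) + \widetilde P_{G_-H}(z) \Bigr] &= -1 + \frac12 + \frac12 = 0 \,, \end{align} where the $\delta(1-z)$ term in $\widetilde P_{HH}$ is essential for the cancellation, so evolution conserves the total momentum carried by the scalar and the radiated gauge bosons.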
\section{Soft evolution} \label{sec:RGE_S} We now move on to the soft operators, for which the RG equations are given by \begin{align} \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, {\mathcal S}(\mu,\nu) &= \frac{\alpha(\mu)}{\pi}\, \hat \gamma_{\mu,{\mathcal S}}(\mu,\nu)\, {\mathcal S}(\mu,\nu) \,,\nonumber \\ \frac{\mathrm{d}}{\mathrm{d} \ln \nu}\, {\mathcal S}(\mu,\nu) &= \frac{\alpha(\mu)}{\pi}\,\hat \gamma_{\nu,{\mathcal S}}(\mu,\nu)\, {\mathcal S}(\mu,\nu) \,.\end{align} Since non-Abelian gauge bosons carry gauge charge, soft operators can mix, turning $\hat \gamma_\mu$ into a matrix. This first happens for soft functions with at least four gauge indices. There is a single type of graph that contributes at one-loop, shown in table~\ref{tab:soft_diagram}. Graphs where the gauge boson couples to a single line vanish since $n_i^2=0$. For the graph ${\mathcal S}_i {\mathcal S}_j$ in table~\ref{tab:soft_diagram}, ${\mathcal S}_i$ and ${\mathcal S}_j$ are Wilson lines along the null vectors $n_i$ and $n_j$. We compute the graph in an Abelian theory, and put in the group theory factors later. The graph is \begin{align} \label{eq:I_S} I_{{\mathcal S}} &= -\mathrm{i} g^2 w^2 \Big(\frac{e^{\gamma_E} \mu^2}{4\pi}\Big)^\epsilon \nu^\eta \int\! \frac{\mathrm{d}^d k}{(2\pi)^d} \frac{n_i \!\cdot\! n_j\,|2k^0|^{-\eta}}{(-n_i \!\cdot\! k+\mathrm{i} 0)(k^2-M^2+\mathrm{i} 0)(n_j \!\cdot\! k +\mathrm{i} 0)} \nonumber \\ &= -\mathrm{i} g^2 w^2 \Big(\frac{e^{\gamma_E} \mu^2}{4\pi}\Big)^\epsilon \nu^\eta \int\! \frac{\mathrm{d} k^+\,\mathrm{d} k^-\,\mathrm{d}^{d-2} \mathbf{k}_\perp}{(2\pi)^d} \frac{2\big|(k^+ - k^-)\sqrt{2/ |n_i \!\cdot\! n_j|}\big|^{-\eta}}{(-k^++ n_i^0\mathrm{i} 0)[k^- k^+ - \mathbf{k}_\perp^{\,2}-M^2+\mathrm{i} 0](-k^- + n_j^0 \mathrm{i} 0)} \nonumber \\ &= \frac{\alpha w^2}{\pi}\, \bigg[ \frac{1}{\eta} \Big(\frac{1}{\epsilon} + \ln \frac{\mu^2}{M^2}\Big) - \frac{1}{2\epsilon^2} + \frac{1}{2\epsilon} \ln \frac{(-n_i \!\cdot\!
n_j-\mathrm{i} 0) \nu^2}{2\mu^2} + \ord{\eta^0,\epsilon^0}\bigg] \,,\end{align} when both ${\mathcal S}_i$ and ${\mathcal S}_j$ are time-ordered. In contrast to refs.~\cite{Chiu:2011qc,Chiu:2012ir}, we use the gauge boson energy $k^0$ to regulate rapidity divergences, which is more suitable for multiple collinear directions~\cite{Bertolini:2017efs} and does not affect the collinear calculation. In the second line we employed the light-cone coordinates \begin{align} k^\mu = k^+ \frac{n_j^0 n_j^\mu}{\sqrt{2|n_i \!\cdot\! n_j|}} - k^- \frac{n_i^0 n_i^\mu}{\sqrt{2|n_i \!\cdot\! n_j|}} + k_\perp^\mu \,, \qquad \mathrm{d}^d k = \frac12\,\mathrm{d} k^+\, \mathrm{d} k^-\, \mathrm{d}^{d-2} \mathbf{k}_\perp \,,\end{align} where $n_{i,j}^0 = \pm 1$ is used to keep track of incoming vs.~outgoing directions. Strictly speaking, $k_\perp^\mu$ also enters in the rapidity regulator when $n_i$ and $n_j$ are not back-to-back, but this only contributes at $\ord{\eta}$. Due to this choice of integration variables, eq.~(\ref{eq:I_S}) is the same as the expression when $n_i \cdot n_j = 2$ (given in eq.~(95) of ref.~\cite{Manohar:2012jr}), apart from an additional factor of $|n_i \cdot n_j/2|^{\eta/2}$.\footnote{Depending on the sign of $\mathrm{i} 0$ in the eikonal propagators there are $\mathrm{i} \pi$ contributions~\cite{Rothstein:2016bsq}, which we account for in the branch-cut prescription of the logarithm. Contributions in the conjugate amplitude have the opposite prescription, i.e.~$\ln (-n_i \!\cdot\! n_j+\mathrm{i} 0)$, and for exchange between a Wilson line in the amplitude and conjugate amplitude we find $\ln |n_i \!\cdot\! n_j|$. Fortunately, these $\mathrm{i} \pi$ contributions do not enter in the final expression.} The $\mu$ and $\nu$ anomalous dimensions can be read off from the results in ref.~\cite{Manohar:2012jr}, and are shown in table~\ref{tab:soft_diagram}.
\begin{table} \centering \begin{eqnarray*} \begin{array}{c|c|c} \hline\hline \text{Graph} & \hat \gamma_\mu &\hat \gamma_\nu \\ \hline \begin{minipage}{2cm} \mybox{\includegraphics[height=2cm]{figs/soft.pdf}} \end{minipage} & \ln \frac{(-n_i \cdot n_j-\mathrm{i} 0) \nu^2}{2\mu^2} & \ln \frac{\mu^2}{M^2} \\ \hline\hline \end{array} \end{eqnarray*} \caption{The one-loop diagram for the soft operator ${\mathcal S}_i {\mathcal S}_j$. The double lines denote the Wilson lines ${\mathcal S}_i$ and ${\mathcal S}_j$. The columns show the graph and its contribution to the $\mu$ and $\nu$ anomalous dimension, apart from the group theory factors given in the text. \label{tab:soft_diagram}} \end{table} We now compute the anomalous dimensions for the soft operators in \eq{soft_ops} including the group theory factors. Write \begin{align}\label{eq:3.62} \ln \frac{(-n_i \cdot n_j-\mathrm{i} 0) \nu^2}{2\mu^2} &= \ln \frac{(-n_i \cdot n_j -\mathrm{i} 0)} {2} + \ln \frac{ \nu^2}{\mu^2}\,. \end{align} First consider the $(n_i \cdot n_j)$-independent pieces in the soft anomalous dimension. For the soft operator \begin{align} {\mathcal S}_{12\ldots n}^{ab}=\mathrm{tr}\big[({\mathcal S}_1 t^{a_1} {\mathcal S}_1^\dagger)( {\mathcal S}_2 t^{a_2} {\mathcal S}_2^\dagger) \ldots ({\mathcal S}_n t^{a_n} {\mathcal S}_n^\dagger)\big], \end{align} one-loop graphs where gauge fields are exchanged between the same Wilson line or between a Wilson line ${\mathcal S}_i$ and its conjugate ${\mathcal S}_i^\dagger$ vanish, since $n_i^2=0$. 
Gauge boson exchange between ${\mathcal S}_1$ or ${\mathcal S}_1^\dagger$ and the other Wilson lines gives a group theory factor \begin{align} \label{eq:casimir} &\mathrm{tr}\ [t^x,t^{a_1}] [t^x,t^{a_2}] t^{a_3} \ldots t^{a_n} + \mathrm{tr}\ [t^x,t^{a_1}] t^{a_2} [t^x,t^{a_3}] \ldots t^{a_n} +\ldots + \mathrm{tr}\ [t^x,t^{a_1}] t^{a_2} t^{a_3} \ldots [t^x,t^{a_n}] \nonumber \\ &=-\mathrm{tr}\ [t^x, [t^x,t^{a_1}]] t^{a_2} t^{a_3} \ldots t^{a_n} =- c_A \mathrm{tr}\ t^{a_1} t^{a_2} t^{a_3} \ldots t^{a_n} \,.\end{align} Similarly, we add the exchanges between ${\mathcal S}_2,{\mathcal S}_2^\dagger$ and all the other Wilson lines, etc. The sum of all these contributions counts all exchanges twice, so the overall group theory factor is $-n_I c_A/2 $, where $n_I$ is the number of indices, e.g.\ 2 for ${\mathcal S}_{12}^{cd}$ and 4 for ${\mathcal S}_{12}^{cd}{\mathcal S}_{34}^{ef}$. The $\nu$ anomalous dimension has no $(n_i \cdot n_j)$-dependent terms, so \begin{align}\label{eq:4.7} \hat \gamma_{\nu,{\mathcal S}} = - \frac12 n_I\, c_A \ln \frac{\mu^2}{M^2} \,.\end{align} The $\nu$-anomalous dimensions of the soft and collinear operators cancel. The $\mu$-anomalous dimensions do not cancel between the soft and collinear operators, as there is also a contribution from the hard matching coefficient $\mathcal{H}$ in \eq{hard_L}. For the $\mu$-anomalous dimension, the second term in \eq{3.62} can be treated in the same manner as $\gamma_\nu$, but the first term has to be computed explicitly, accounting for the imaginary part of $\ln (-n_i \cdot n_j)$, which depends on whether the soft Wilson lines are time-ordered or anti-time-ordered.
We find \begin{align}\label{eq:3.65} \hat \gamma_{\mu,{\mathcal S}_{12}} &= c_A \Big[\ln \frac{\mu^2}{\nu^2} - L_{12}\Big] \,, \nonumber \\ \hat \gamma_{\mu,{\mathcal S}_{123}} &= c_A \Big[\frac32 \ln \frac{\mu^2}{\nu^2} - \frac12 (L_{12}+L_{13}+L_{23}) \Big] \,,\end{align} where we use the abbreviation \begin{align} L_{ij} \equiv \ln \left| \frac{n_i \!\cdot\! n_j}{2} \right| \,.\end{align} Eq.~\eqref{eq:3.65} holds irrespective of whether particles $1,2,3$ are incoming or outgoing. There are no $i\pi$ terms from the imaginary part of the logarithm. In an $SU(2)$ gauge theory, there are three soft functions with four gauge indices, $\{{\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef}$, ${\mathcal S}_{13}^{ce} {\mathcal S}_{24}^{df}$, ${\mathcal S}_{14}^{cf} {\mathcal S}_{23}^{de}\}$, which mix under renormalization. Their $\mu$-anomalous dimension is given in this basis by ($SU(2)$ only) \begin{align} \label{eq:mix} \hat \gamma_{\mu,{\mathcal S}_{1234}} &= c_A \left[2 \ln \frac{\mu^2}{\nu^2} - \begin{pmatrix} L_{12} + L_{34} & 0 & 0\\ 0 & L_{13} + L_{24} & 0 \\ 0 & 0 & L_{14} + L_{23} \\ \end{pmatrix} \right] + \begin{pmatrix} 0 & -w & w \\ -v & 0 & v \\ -u & u & 0 \\ \end{pmatrix} \,,\end{align} using the conformal cross ratios \begin{align} u = \ln \frac{(n_1 \!\cdot\! n_2)\, (n_3 \!\cdot\! n_4) }{(n_1 \!\cdot\! n_3) \, (n_2 \!\cdot\! n_4)} \,, \qquad v = \ln \frac{(n_1 \!\cdot\! n_2)\, (n_3 \!\cdot\! n_4) }{(n_1 \!\cdot\! n_4)\,( n_2 \!\cdot\! n_3)} \,, \qquad w = \ln \frac{(n_1 \!\cdot\! n_3)\, (n_2 \!\cdot\! n_4) }{(n_1 \!\cdot\! n_4)\, (n_2 \!\cdot\! n_3)} = v - u \,.\end{align} Again, there are no $i\pi$ terms from the imaginary part of the logarithm. We conclude this section by discussing the $U(1)$ soft operator that \emph{only} appears for $O_{\widetilde HH}$.
Because $O_{\widetilde HH}$ violates hypercharge, it must be accompanied by $O_{H\widetilde H}$ to ensure that the hard scattering conserves hypercharge.\footnote{Of course hypercharge is spontaneously broken, but this is power suppressed in $M/Q$.} Assuming only a single pair of these collinear operators, the corresponding soft operator takes the form $\bar {\mathscr{S}}^\dagger_1 \mathscr{S}_1 \mathscr{S}_2^\dagger \bar {\mathscr{S}}_2$, where $\mathscr{S}$ and $\bar {\mathscr{S}}$ are $U(1)$ Wilson lines with hypercharge $\pm \mathsf{y}_H$. Its anomalous dimension is given by that for $S_{12}$ in \eqs{4.7}{3.65} with the replacement $c_A \to 4\mathsf{y}_H^2$, and with $M=M_Z$. Note that this $U(1)$ contribution should be expected because $M_Z$ shows up in the rapidity anomalous dimensions of these collinear operators, and the $Z$ boson has a $U(1)$ component. We conclude by noting that in this case the rapidity evolution of the $SU(2)$ soft operator is split equally in two terms involving $M_W$ and $M_Z$, respectively. \section{Matching onto the broken phase} \label{sec:low_matching} We now switch to the broken phase of the theory, which involves matching at the scale $\mu \sim M$. Protons, electrons and neutrinos are now well-defined, and we can use them as external states. Tree-level matching suffices at NLL accuracy, and the matrix elements of collinear operators in proton states are the usual PDFs. 
For the transversely polarized gauge bosons with helicity $h=\pm 1$, using the Condon-Shortley phase convention, \begin{align} \label{eq:PDF} \langle T | O_{W_h}^{(I=0)} | T \rangle&= f_{{W_h}^+/T} + f_{{W_h}^-/T} + \cos^2 \theta_W f_{{Z_h}/T} + \sin^2 \theta_W f_{{\gamma_h}/T} \nonumber \\ & \quad + \sin \theta_W \cos \theta_W \left( f_{{Z_h}{\gamma_h}/T} + f_{{\gamma_h}{ Z_h}/T}\right) \nonumber \\ &= \sin^2 \theta_W f_{{\gamma_h}/T} \,, \nonumber \\ \langle T | O_{W_h}^{(I=1,I_3=0)} | T \rangle &= f_{{W_h}^+/T} - f_{{W_h}^-/T} \nonumber \\ & = 0 \,, \nonumber \\ \langle T | O_{W_h B_h}^{(I=1,I_3=0)} | T \rangle &= \cos \theta_W \sin \theta_W \left( f_{{\gamma_h} /T} - f_{{Z_h}/T} \right) +\cos^2 \theta_W f_{{Z_h}{\gamma_h}/T} - \sin^2 \theta_W f_{{\gamma_h} {Z_h}/T} \nonumber \\ &= \cos \theta_W \sin \theta_W f_{{\gamma_h} /T} \,, \nonumber \\ \langle T | O_{B_h W_h}^{(I=1,I_3=0)} | T \rangle &= \cos \theta_W \sin \theta_W \left( f_{\gamma_h /T} - f_{Z_h/T} \right) +\cos^2 \theta_W f_{\gamma_h Z_h/T} - \sin^2 \theta_W f_{Z_h \gamma_h /T} \nonumber \\ &= \cos \theta_W \sin \theta_W f_{\gamma_h /T} \,, \nonumber \\ \langle T | O_{W_h}^{(I=2,I_3=0)} | T \rangle &= -\frac{1}{\sqrt 6} \big( f_{W_h^+/T} + f_{W_h^-/T} \big) + \frac2{\sqrt 6}\bigl[ \cos^2 \theta_W f_{Z_h/T} + \sin^2 \theta_W f_{\gamma_h/T} \nonumber \\ & \quad + \sin \theta_W \cos \theta_W \left( f_{Z_h\gamma_h/T} + f_{\gamma_h Z_h/T} \right) \bigr] \nonumber \\ & = \frac2{\sqrt 6} \sin^2 \theta_W f_{\gamma_h/T} \end{align} where we have omitted the arguments $z,\mu,\nu$ of the PDFs. We first rewrote each matrix element in terms of PDFs in the broken phase, and then dropped all terms except for the photon PDF, since they vanish below the electroweak scale. $f_{\gamma_h Z_h/p}$ and $f_{Z_h\gamma_h/p}$ are interference PDFs, and given by matrix elements of the form in \eq{3.36} where one field strength is that of the photon and the other is that of the $Z$ boson. 
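The pattern in \eq{PDF}, a $\theta_W$ rotation of broken-phase PDFs in which everything except the photon PDF is dropped below the electroweak scale, can be encoded in a few lines. The following sketch is purely illustrative (the value of $\sin^2\theta_W$ and the photon PDF value are placeholder inputs, not taken from the text):

```python
# Sketch of the tree-level matching above: symmetric-phase matrix
# elements as linear combinations of broken-phase transverse PDFs.
import math

sw2 = 0.23  # assumed illustrative value of sin^2(theta_W)
cw2 = 1 - sw2
sw, cw = math.sqrt(sw2), math.sqrt(cw2)

def match_W(f):
    """f: broken-phase PDFs {'W+','W-','Z','gamma','Zgamma','gammaZ'}."""
    I0 = f['W+'] + f['W-'] + cw2 * f['Z'] + sw2 * f['gamma'] \
        + sw * cw * (f['Zgamma'] + f['gammaZ'])
    I1 = f['W+'] - f['W-']
    I2 = -(f['W+'] + f['W-']) / math.sqrt(6) + (2 / math.sqrt(6)) * (
        cw2 * f['Z'] + sw2 * f['gamma'] + sw * cw * (f['Zgamma'] + f['gammaZ']))
    return I0, I1, I2

# below the electroweak scale only the photon PDF survives
f_gamma = 0.01  # arbitrary illustrative value
f = {'W+': 0, 'W-': 0, 'Z': 0, 'gamma': f_gamma, 'Zgamma': 0, 'gammaZ': 0}
I0, I1, I2 = match_W(f)
assert abs(I0 - sw2 * f_gamma) < 1e-12
assert I1 == 0
assert abs(I2 - 2 / math.sqrt(6) * sw2 * f_gamma) < 1e-12
```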
Note that $W$ on the left-hand side refers to the full $SU(2)$ gauge field $W^1,W^2,W^3$, whereas on the right-hand side it denotes the $W^\pm$ bosons. Collinear operators in the adjoint representation with gauge indices $1,2$ change electric charge by one unit, and therefore have a vanishing matrix element between states of the same charge. The equations in \eq{PDF} are the initial conditions for DGLAP evolution, and the PDFs are evolved to the high scale $Q$ using the collinear and soft anomalous dimensions. The photon PDF has been computed recently in terms of the hadronic structure functions~\cite{Manohar:2016nzj,Manohar:2017eqh}. The other PDFs in \eq{PDF} can be computed similarly~\cite{Fornal:2018znf}, extending \eq{PDF} beyond tree level. For right-handed fermions, \begin{align}\label{eq:5.2} \langle T | O_{u,r,s}^{(I=0)} | T \rangle &= \delta_{rs}\, f_{u_{+,r}/T} \,, \qquad \langle T | O_{\bar u,r,s}^{(I=0)} | T \rangle = \delta_{rs}\,f_{\bar u_{-,r}/T} \,,\end{align} since $u$ annihilates positive helicity $u$ quarks and creates negative helicity $\bar u$ quarks. Here $r,s$ are generation indices, so $f_{u_{+,r}}$ for $r=1,2,3$ corresponds to the $u_+$, $c_+$ and $t_+$ PDFs (the top quark PDF vanishes below the electroweak scale). Similar expressions hold for $d$ and $e$. The proton matrix element in \eq{5.2} is diagonal in flavor. Operators such as $O_{d,1,2}$ change flavor, and have a vanishing matrix element in the proton to zeroth order in the weak interactions. Since the matrix elements in \eq{5.2} are taken at a scale of order $M_Z$, weak interaction corrections are not logarithmically enhanced and can be dropped at the accuracy at which we are working. Of course, electroweak interactions cannot be neglected in evolving \eq{5.2} to high energies; this is after all the point of the paper. Operators such as $O_{d,2,2}$, which measure the strange-quark content of the proton, can have a non-zero matrix element~\cite{Kaplan:1988ku}.
Left-handed fields have to be converted from the weak-eigenstate basis to the mass-eigenstate basis, \begin{align}\label{eq:5.3} \langle T | O^{(I=0)}_{q,r,s} | T \rangle &= \delta_{rs} f_{u_{-,r} /T} + \sum_{w} V_{rw}^* V_{sw} f_{d_{-,w} /T} \,, \nonumber \\ \langle T | O^{(I=0)}_{\bar q,r,s} | T \rangle &= \delta_{rs} f_{\bar u_{+,r} /T} + \sum_{w} V_{rw}V_{sw}^* f_{\bar d_{+,w}/T} \,, \nonumber \\ \langle T | O^{(I=1,I_3=0)}_{q,r,s} | T \rangle &= \frac12 \delta_{rs} f_{u_{-,r} /T} - \frac12 \sum_{w} V_{rw}^* V_{sw} f_{d_{-,w} /T} \,, \nonumber \\ \langle T | O^{(I=1,I_3=0)}_{\bar q,r,s} | T \rangle &= -\frac12 \delta_{rs} f_{\bar u_{+,r} /T} +\frac12 \sum_{w} V_{rw}V_{sw}^* f_{\bar d_{+,w}/T} \,, \end{align} and similarly for $\ell$. The Higgs PDFs match onto ($W_L \equiv W_{h=0}$, etc.): \begin{align}\label{eq:5.4b} \langle T | O^{(I=0)}_H | T \rangle &= f_{W_L^+/T} + \frac12 (f_{h/T} + f_{Z_L/T} + f_{Z_L h/T} + f_{h Z_L/T}) = 0 \,, \nonumber \\ \langle T | O^{(I=1,I_3=0)}_H | T \rangle &= \frac12 f_{W_L^+/T} - \frac14 (f_{h/T} + f_{Z_L/T} + f_{Z_L h/T} + f_{h Z_L/T}) = 0 \,, \nonumber \\ \langle T | O^{(I=0)}_{\bar H} | T \rangle &= f_{W_L^-/T} + \frac12 (f_{h/T} + f_{Z_L/T} - f_{Z_L h/T} - f_{hZ_L/T}) = 0 \,, \nonumber \\ \langle T | O^{(I=1,I_3=0)}_{\bar H} | T \rangle &= -\frac12 f_{W_L^-/T} + \frac14 (f_{h/T} + f_{Z_L/T} - f_{Z_Lh/T} - f_{hZ_L/T}) = 0 \,,\end{align} at tree level, using the equivalence theorem to relate the scalar operators $\varphi$ to longitudinal gauge bosons~\cite{chanowitz,bohmbook}. Again, we first rewrote the operators in terms of PDFs in the broken basis, which in this case all vanish below the electroweak scale.
Similarly, the $\widetilde H H$ PDFs give \begin{align}\label{eq:5.4a} \langle T | O^{(I=0)}_{\widetilde H H} | T \rangle &= 0 \,, \nonumber \\ \langle T | O^{(I=1,I_3=1)}_{\widetilde H H} | T \rangle &= -\frac{1}{\sqrt 2} f_{\bar H^0 H^0/T}=-\frac1{2\sqrt 2} (f_{h/T} - f_{Z_L/T} - f_{Z_L h/T} + f_{hZ_L/T}) = 0\,, \nonumber \\ \langle T | O^{(I=1,I_3=0)}_{\widetilde H H} | T \rangle &= 0\,, \nonumber \\ \langle T | O^{(I=1,I_3=-1)}_{\widetilde H H} | T \rangle &= 0 \,.\end{align} Three of the matrix elements vanish because they are operators with non-zero electric charge, and thus have no diagonal matrix elements in a proton state. Operators in non-singlet representations are the most interesting, since the corresponding matrix elements would have vanished in QCD (i.e.~the PDFs for ``red'', ``green'' and ``blue'' quarks are all equal). However, the proton is not an electroweak singlet, so $f_{u/p} \neq f_{d/p}$ and $f_{W^+/p} \neq f_{W^-/p}$. Assuming the proton beam is unpolarized, we can further simplify \eqs{5.2}{5.3} using \begin{align}\label{eq:5.6} f_{u_-/p} = f_{u_+/p} = \tfrac12 f_{u/p} \,, \qquad f_{d_-/p} = f_{d_+/p} = \tfrac12 f_{d/p} \,,\end{align} etc. For an incoming neutrino we can simply take \begin{align} \label{eq:f_nu} f_{\nu/\nu}(x,\mu) = \delta(1-x) \,, \qquad f_{e_-/\nu}(x,\mu) = f_{e_+/\nu}(x,\mu) = 0 \,,\end{align} at $\mu \lesssim M$, since the neutrino is neutral under both electromagnetism and QCD, and is thus unaffected by renormalization group evolution below the electroweak scale. For an incoming electron the initial condition corresponding to \eq{f_nu} only holds at $\mu = m_e$, because the electron still interacts electromagnetically. Quark distributions, and thus gauge boson distributions, will become polarized at large $\mu$, even if we assume they are unpolarized at low $\mu$ as in \eq{5.6}, since the electroweak evolution is chiral. Similarly, the collinear operators for outgoing directions correspond to fragmentation functions.
The tree-level matching relations at the scale $\mu \sim M_W$ are \begin{align} D^{(I=0)}_{\ell \to e}(x,\mu,\nu) &= \tfrac12 [D_{\nu_- \to e}(x,\mu) + D_{e_- \to e}(x,\mu)] \,,\nonumber \\ D_{\ell \to e}^{(I=1,I_3=0)}(x,\mu,\nu) &= \tfrac12 [ \tfrac12 (D_{\nu_- \to e}(x,\mu) - D_{e_- \to e}(x,\mu))] \,,\nonumber \\ D^{(I=0)}_{e \to e}(x,\mu,\nu) &= D_{e_+ \to e}(x,\mu) \,,\nonumber \\ D^{(I=0)}_{W_h \to e}(x,\mu,\nu) &= \tfrac13 \big[D_{W_h^+\to e}(x,\mu) + D_{W_h^-\to e}(x,\mu) + \cos^2 \theta_W D_{Z\to e}(x,\mu) + \sin^2 \theta_W D_{\gamma\to e}(x,\mu) \nonumber \\ & \qquad + \sin \theta_W \cos \theta_W \big(D_{Z\gamma\to e}(x,\mu) + D_{\gamma Z\to e}(x,\mu)\big)\big] \nonumber \\ & = \tfrac13 \sin^2 \theta_W D_{\gamma\to e}(x,\mu) \,,\nonumber \\ D_{W_h\to e}^{(I=1,I_3=0)}(x,\mu,\nu) &= \tfrac13 [D_{W_h^+ \to e}(x,\mu) - D_{W_h^- \to e}(x,\mu)] \nonumber \\ & = 0 \,,\end{align} etc., where $h$ is a helicity label. The extra factors of 1/2 for $\ell$ and 1/3 for $W$ compared to \eq{PDF} arise because the fragmentation functions in the symmetric phase are averaged over gauge configurations of the field from which the particle fragments. 
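At tree level all Wilson lines equal 1, so the soft matrix elements quoted next reduce to plain traces of $SU(2)$ generators. Assuming that identification, the relevant trace identities, $\mathrm{tr}\, t^c t^d = \tfrac12 \delta^{cd}$ and $\mathrm{tr}\, t^c t^d t^e = \tfrac{\mathrm{i}}{4} \epsilon^{cde}$, can be verified numerically:

```python
# Check tr(t^c t^d) = delta^{cd}/2 and tr(t^c t^d t^e) = (i/4) eps^{cde}
# for SU(2), with t^a = sigma^a / 2.

t = [
    [[0, 0.5], [0.5, 0]],       # sigma^1 / 2
    [[0, -0.5j], [0.5j, 0]],    # sigma^2 / 2
    [[0.5, 0], [0, -0.5]],      # sigma^3 / 2
]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def eps(c, d, e):
    # Levi-Civita symbol for indices 0, 1, 2
    return ((c - d) * (d - e) * (e - c)) / 2

for c in range(3):
    for d in range(3):
        assert abs(tr(mat_mul(t[c], t[d])) - (0.5 if c == d else 0)) < 1e-12
        for e in range(3):
            lhs = tr(mat_mul(mat_mul(t[c], t[d]), t[e]))
            assert abs(lhs - 0.25j * eps(c, d, e)) < 1e-12
```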
Lastly, the matrix elements of soft operators at tree level are \begin{align} \langle 0 | {\mathcal S}_{12}^{cd} |0 \rangle &= \tfrac12 \delta^{cd} \,, \nonumber \\ \langle 0 | {\mathcal S}_{123}^{cde} |0 \rangle &= \tfrac{\mathrm{i}}{4} f^{cde} \!\stackrel{N=2}{=} \tfrac{\mathrm{i}}{4} \epsilon^{cde} \,, \nonumber \\ \langle 0 | {\mathcal S}_{12}^{cd} {\mathcal S}_{34}^{ef} |0 \rangle &= \tfrac{1}{4} \delta^{cd} \delta^{ef} \,.\end{align} From the collinear functions we know that the only non-vanishing contribution at tree level requires all gauge indices to be 3, \begin{align} \label{eq:S_tree} \langle 0 | {\mathcal S}_{12}^{33} |0 \rangle = \tfrac12 \,, \qquad \langle 0 | {\mathcal S}_{123}^{333} |0 \rangle = 0 \,, \qquad \langle 0 | {\mathcal S}_{12}^{33} {\mathcal S}_{34}^{33} |0 \rangle = \tfrac{1}{4} \,.\end{align} The one-loop soft matching has been computed in ref.~\cite{Chiu:2009mg}. The $\mu$ dependence of the one-loop matching converts the double-logarithmic Sudakov evolution above the electroweak scale into the usual single-logarithmic DGLAP evolution below the electroweak scale. \section{Resummation} \label{sec:resummation} \begin{figure} \begin{center} \includegraphics[]{figs/fig2.pdf} \end{center} \vspace{-3ex} \caption{\label{fig:nu} Path in $(\nu,\mu)$ space for integrating the anomalous dimensions of collinear and soft operators.} \end{figure} The factorization in \eq{factor} enables the resummation of logarithms of $Q/M$, by separating the cross section into factors that each involve a single invariant mass and rapidity scale.
Specifically, the resummation is accomplished by evaluating the hard matching coefficients $\mathcal{H}$, collinear operators $\mathcal{C}$ and soft operators ${\mathcal S}$ at their natural scales\footnote{We did not specify a rapidity scale for the hard coefficient, because it does not contain rapidity divergences, $\gamma_{\nu,{\mathcal S}}+\gamma_{\nu, \mathcal{C}}=0$.} \begin{align} \mu_\mathcal{H} \sim Q \,, \qquad \mu_\mathcal{C} \sim \mu_{\mathcal S} \sim M \,, \qquad \nu_\mathcal{C} \sim Q \,, \qquad \nu_{\mathcal S} \sim M \,,\end{align} where they do not contain large logarithms. RG evolving them to a common scale $(\mu,\nu)$ will exponentiate the logarithms. The $\mu$-anomalous dimension contains $\ln \nu$ terms, and the $\nu$-anomalous dimension contains $\ln \mu$ terms, which are related to each other and proportional to the cusp anomalous dimension. We will evolve the collinear and soft operators to the hard scale. This avoids having to calculate (the evolution of) the process-dependent hard matching coefficients at one loop. The $\nu$-anomalous dimension contains $\ln \mu^2/M_W^2$ terms, so to avoid large logarithms the simplest strategy is to first do the $\nu$ evolution of the soft operator from $\nu = M_W$ to $\nu=Q$ at $\mu=M_W$, and then perform the $\mu$ evolution of the soft and collinear operators from $\mu=M_W$ to $\mu=Q$, as shown in \fig{nu} (see also the discussion above eq.~(4.30) in ref.~\cite{Chiu:2012ir}). Using \eq{4.7}, the $\nu$ evolution of the soft operator gives \begin{align} U_\nu = \exp\bigg[\int_{\nu_{\mathcal S}}^{\nu_\mathcal{C}}\! \frac{\mathrm{d} \nu}{\nu}\, \gamma_{\nu, {\mathcal S}}\bigg] = \exp \bigg[- n_I\, \frac{\alpha_2(\mu)}{\pi} \ln \frac{Q}{M_W} \ln \frac{\mu^2}{M_W^2}\bigg] \,,\end{align} where $n_I$ is the number of gauge indices in the soft factor. When $\mu = M_W$ exactly, $U_\nu = 1$ and can be ignored, but otherwise it must be kept to achieve NLL accuracy. 
In particular, it should be kept when estimating the perturbative uncertainty from scale variations. Note that this analysis does not apply to the quintet contribution $O_{W_\pm}^{(I=2)}$ or the special case of $O_{\widetilde H H}^{(I=1)}$, which will be discussed in \sec{PDF_SM}. Moving on to the $\mu$-evolution, we first consider the terms that give rise to double logarithms. These are described by the following multiplicative anomalous dimensions. For the soft anomalous dimension, the relevant terms in eqs.~\eqref{eq:3.65}--\eqref{eq:mix} are given by (using $c_A=2$) \begin{align}\label{eq:6.3} \gamma_{\mu,{\mathcal S}}^{\rm DL} = n_I\, \frac{\alpha_2}{\pi}\, \ln \frac{\mu^2}{\nu^2} \,.\end{align} For the collinear anomalous dimensions, the double logarithms arise from the $\ln \nu/(\bar n \cdot r)$ term, which vanishes for the off-diagonal elements. For the diagonal elements, it vanishes for the singlet, while for the triplet PDFs (and FFs) it is given by \begin{align}\label{eq:6.4} \gamma_{\mu,qq}^{(I=1),{\rm DL}} &= \gamma_{\mu,WW}^{(I=1),{\rm DL}} = \gamma_{\mu,W\! B}^{(I=1),{\rm DL}} = \cdots = \frac{2\alpha_2}{\pi}\, \ln \frac{\nu}{\bar n \cdot r}\ \delta(1-z) \,.\end{align} Here $\bar n \cdot r = 2E$, with $E$ the energy of the parton. The triplet PDFs have a single gauge index which is contracted with a soft operator, and the soft operator anomalous dimension in \eq{6.3} is proportional to $n_I$, the number of gauge indices.
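To get a rough sense of scale for the $\nu$-evolution kernel $U_\nu$ above, one can evaluate it with illustrative inputs; the fixed value $\alpha_2 \approx 0.034$ below is an assumption of this sketch, not a number quoted in the text:

```python
# Illustrative size of the nu-evolution kernel
# U_nu = exp[- n_I (alpha_2/pi) ln(Q/M_W) ln(mu^2/M_W^2)].
import math

alpha2 = 0.034   # assumed alpha_2(M_W), for illustration only
MW = 80.4        # GeV
Q = 1000.0       # GeV
n_I = 2          # e.g. the soft function S_12^{cd}

def U_nu(mu):
    return math.exp(-n_I * alpha2 / math.pi
                    * math.log(Q / MW) * math.log(mu**2 / MW**2))

print(U_nu(MW))      # trivial at mu = M_W
print(U_nu(2 * MW))  # non-trivial once mu deviates from M_W
```

As stated in the text, $U_\nu = 1$ exactly at $\mu = M_W$, but it deviates from 1, and must therefore be kept, as soon as $\mu$ is varied away from $M_W$.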
Combining the collinear anomalous dimension in \eq{6.4} for a triplet operator with \eq{6.3} for $n_I=1$ gives \begin{align}\label{eq:6.5} \gamma_{\mu}^{(I=1),\rm DL} = \frac{2\alpha_2}{\pi}\, \ln \frac{\mu}{\bar n \cdot r}\ \delta(1-z) \,.\end{align} The $\nu$ dependence has disappeared in the combined soft plus collinear anomalous dimension, and the $\ln \mu/(\bar n \cdot r)$ anomalous dimension is precisely of the form computed in refs.~\cite{Chiu:2009mg,Chiu:2009ft}, which gives rise to Sudakov double logarithms. Integrating \eq{6.5} yields the evolution kernel \begin{align}\label{eq:sudakov} U_\mu^{\rm DL} &= \exp\bigg[\int_{M_W}^{Q}\, \frac{\mathrm{d} \mu}{\mu}\, \frac{2\alpha_2 }{\pi}\, \ln \frac{\mu}{\bar n \cdot r}\bigg] \approx \exp\bigg[ - \frac{\alpha_2}{\pi}\, \left(\ln^2 \frac{\bar n \cdot r}{M_W} - \ln^2 \frac{\bar n \cdot r}{Q}\right)\bigg] \nonumber \\ & \approx \exp\bigg[ - \frac{\alpha_2}{\pi}\, \ln^2 \frac{Q}{M_W} \bigg] \,,\end{align} to leading-logarithmic accuracy, since $\bar n \cdot r \sim Q$, the partonic center-of-mass energy of the collision. We recognize \eq{sudakov} as an electroweak Sudakov factor. It suppresses the contribution from collinear operators in the triplet representation. In addition to the double logarithms, there are single logarithms from the evolution of collinear and soft operators. The coefficients of the splitting functions for the collinear operators are modified for non-singlet representations. We give their explicit form for the Standard Model in \sec{PDF_SM}. If there are no identified particles in the final state, then the only collinear functions are for the two incoming beam directions $n_1$ and $n_2$. In this case, the number of adjoint electroweak indices in the soft or collinear sector from the factorization theorem can be $n_I=0,1,2$, depending on whether the collinear operators for the two beams are singlets or adjoints.
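For a feel of the size of the Sudakov suppression in \eq{sudakov}, here is an illustrative evaluation; the fixed $\alpha_2 \approx 0.034$ is again an assumption of this sketch (the true NLL result uses the running coupling):

```python
# Size of the electroweak Sudakov factor exp[-(alpha_2/pi) ln^2(Q/M_W)]
# for illustrative energies, with an assumed fixed alpha_2.
import math

alpha2 = 0.034  # assumed value, for illustration only
MW = 80.4       # GeV

def sudakov(Q):
    return math.exp(-alpha2 / math.pi * math.log(Q / MW) ** 2)

for Q in (1e3, 1e4):  # 1 TeV and 10 TeV
    print(f"Q = {Q:g} GeV: U = {sudakov(Q):.3f}")
```

The suppression grows with energy, illustrating how triplet contributions are progressively Sudakov-suppressed relative to singlets.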
The $n_I=1$ case does not occur, since the corresponding soft function vanishes. The beams are back-to-back, so $n_1 \cdot n_2=2$ and the electroweak logarithms have no angular dependence. When there is a single identified particle in the final state, there is a third direction $n_3$, and the number of adjoint indices can be $n_I=0,1,2,3$. There can be angular dependence due to electroweak logarithms from $\ln n_1 \cdot n_3$ and $\ln n_2 \cdot n_3 $ terms in the soft anomalous dimension. These arise from terms with $n_I=2$, since \eq{S_tree} prohibits contributions with $n_I=3$. The $n_I=0$ terms have no angular dependence, and there are no $n_I=1$ terms, as before. With two identified particles in the final-state, as is the case for Drell-Yan, there are now four directions, and contributions with $n_I=4$ are also allowed. Mixing effects in the soft sector, described in \eq{mix}, enter for the first time. \subsection{PDF evolution in the Standard Model} \label{sec:PDF_SM} In this section, we use the collinear evolution results of \sec{RGE_C} to give the PDF evolution equations in the Standard Model, keeping only the Yukawa of the top quark (see \eq{approx}). The splitting functions for $z < 1$ agree with those computed in ref.~\cite{Bauer:2017isx}. The $I=0$ sector gives the usual evolution of gauge-invariant PDFs, \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{q,r,s}^{(I=0)}(z) &= \frac{\alpha_3}{\pi} \biggl[ \frac43 \widetilde P_{Q_-Q_-} \otimes f_{q,r,s}^{(I=0)} + \delta_{rs} \widetilde P_{Q_-G_+} \otimes f_{g_+}^{(I=0)} + \delta_{rs} \widetilde P_{Q_-G_-} \otimes f_{g_-}^{(I=0)} \biggr] \nonumber \\ & \quad + \frac{\alpha_2}{\pi} \biggl[ \frac34 \widetilde P_{Q_-Q_-} \otimes f_{q,r,s}^{(I=0)} + \frac{N_c}2 \delta_{rs} \widetilde P_{Q_-G_+} \otimes f_{W_+}^{(I=0)}+ \frac{N_c}2 \delta_{rs} \widetilde P_{Q_-G_-} \otimes f_{W_-}^{(I=0)} \biggr] \nonumber \\ & \quad + \frac{\alpha_1}{\pi} \left[ \mathsf{y}_q^2 \widetilde P_{Q_-Q_-} \!\otimes\! 
f_{q,r,s}^{(I=0)} \!+\! 2 N_c \mathsf{y}_q^2 \delta_{rs} \widetilde P_{Q_-G_+} \otimes f_{B_+}^{(I=0)} \!+\! 2 N_c \mathsf{y}_q^2 \delta_{rs} \widetilde P_{Q_-G_-} \!\otimes\! f_{B_-}^{(I=0)} \right] \nonumber \\[5pt] & \quad + \frac{Y_t^2}{4\pi^2} \biggl[ \delta_{r3}\delta_{s3} (1-z) \otimes f^{(I=0)}_{u,3,3} -\frac18 \delta_{r3} f^{(I=0)}_{q,3,s}(z) -\frac18 \delta_{s3} f^{(I=0)}_{q,r,3}(z) \nonumber \\[5pt] & \quad + \frac{N_c}{2} \delta_{r3}\delta_{s3}\, 1 \otimes f_{\bar H}^{(I=0)} \biggr] \,,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{u,r,s}^{(I=0)} &= \frac{\alpha_3}{\pi} \left[ \frac43 \widetilde P_{Q_+Q_+} \otimes f_{u,r,s}^{(I=0)} + \frac12 \delta_{rs} \widetilde P_{Q_+G_+} \otimes f_{g_+}^{(I=0)}+ \frac12 \delta_{rs} \widetilde P_{Q_+G_-} \otimes f_{g_-}^{(I=0)} \right] \nonumber \\ & \quad + \frac{\alpha_1}{\pi} \Bigl[ \mathsf{y}_u^2 \widetilde P_{Q_+Q_+} \otimes f_{u,r,s}^{(I=0)} + N_c \mathsf{y}_u^2 \delta_{rs} \widetilde P_{Q_+G_+} \otimes f_{B_+}^{(I=0)} + N_c \mathsf{y}_u^2 \delta_{rs} \widetilde P_{Q_+G_-} \otimes f_{B_-}^{(I=0)} \Bigr] \nonumber \\ & \quad + \frac{Y_t^2}{4\pi^2} \biggl[ \frac12 \delta_{r3}\delta_{s3} (1-z) \otimes f^{(I=0)}_{q,3,3} -\frac14 \delta_{r3} f^{(I=0)}_{u,3,s}(z) -\frac14 \delta_{s3} f^{(I=0)}_{u,r,3}(z) \nonumber \\ & \quad + \frac{N_c}{2} \delta_{r3}\delta_{s3}\, 1 \otimes f_{H}^{(I=0)} \biggr] \,,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{d,r,s}^{(I=0)} &= \frac{\alpha_3}{\pi} \left[ \frac43 \widetilde P_{Q_+Q_+} \otimes f_{d,r,s}^{(I=0)} + \frac12 \delta_{rs} \widetilde P_{Q_+G_+} \otimes f_{g_+}^{(I=0)} + \frac12 \delta_{rs} \widetilde P_{Q_+G_-} \otimes f_{g_-}^{(I=0)} \right] \\ & \quad+ \frac{\alpha_1}{\pi}\left[ \mathsf{y}_d^2 \widetilde P_{Q_+Q_+} \otimes f_{d,r,s}^{(I=0)} + N_c \mathsf{y}_d^2 \delta_{rs} \widetilde P_{Q_+G_+} \otimes f_{B_+}^{(I=0)} + N_c \mathsf{y}_d^2 \delta_{rs} \widetilde P_{Q_+G_-} \otimes f_{B_-}^{(I=0)} \right] 
,\nonumber\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{\ell,r,s}^{(I=0)} &= \frac{\alpha_2}{\pi} \left[ \frac34 \widetilde P_{Q_-Q_-} \otimes f_{\ell,r,s}^{(I=0)}+ \frac12 \delta_{rs} \widetilde P_{Q_-G_+} \otimes f_{W_+}^{(I=0)}+ \frac12 \delta_{rs} \widetilde P_{Q_-G_-} \otimes f_{W_-}^{(I=0)} \right] \\ & \quad + \frac{\alpha_1}{\pi} \left[ \mathsf{y}_\ell^2 \widetilde P_{Q_-Q_-} \otimes f_{\ell,r,s}^{(I=0)} + \mathsf{y}_\ell^2 \delta_{rs} \widetilde P_{Q_-G_+} \otimes f_{B_+}^{(I=0)} + \mathsf{y}_\ell^2 \delta_{rs} \widetilde P_{Q_-G_-} \otimes f_{B_-}^{(I=0)} \right] ,\nonumber\end{align} \vspace{-3ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{e,r,s}^{(I=0)} &= \frac{\alpha_1}{\pi}\left[ \mathsf{y}_e^2 \widetilde P_{Q_+Q_+} \otimes f_{e,r,s}^{(I=0)} + \mathsf{y}_e^2 \delta_{rs} \widetilde P_{Q_+G_+} \otimes f_{B_+}^{(I=0)} + \mathsf{y}_e^2 \delta_{rs} \widetilde P_{Q_+G_-} \otimes f_{B_-}^{(I=0)} \right] ,\end{align} \vspace{-4ex} \begin{align}\label{eq:6.12} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{g_\pm}^{(I=0)} &= \frac{\alpha_3}{\pi} \bigg[ 3 \widetilde P_{G_\pm G_+} \otimes f_{g_+}^{(I=0)} +3 \widetilde P_{G_\pm G_-} \otimes f_{g_-}^{(I=0)} + \frac12 b_{0,3} f_{g_\pm}^{(I=0)}(z) \nonumber \\ &\quad + \frac43 \widetilde P_{G_\pm Q_+} \otimes \sum_{\substack{ i=\bar q, u, d, \\ r=1,\ldots,n_g}} f_{i,r,r}^{(I=0)} + \frac43 \widetilde P_{G_\pm Q_-} \otimes \sum_{\substack{ i=q, \bar u, \bar d \\ r=1,\ldots,n_g}} f_{i,r,r}^{(I=0)} \bigg] ,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{W_\pm}^{(I=0)} &= \frac{\alpha_2}{\pi} \bigg[ 2 \widetilde P_{G_\pm G_+} \otimes f_{W_+} ^{(I=0)} + 2 \widetilde P_{G_\pm G_-} \otimes f_{W_-} ^{(I=0)} + \frac12 b_{0,2} f_{W_\pm}^{(I=0)}(z) \\ &\quad + \frac34 \widetilde P_{G_\pm Q_+ } \otimes \!\sum_{\substack{i=\bar q, \bar \ell \\ r=1,\ldots,n_g}}\!\! 
f_{i,r,r}^{(I=0)} + \frac34 \widetilde P_{G_\pm Q_-} \otimes \!\sum_{\substack{i=q, \ell, \\ r=1,\ldots,n_g}}\!\! f_{i,r,r}^{(I=0)} + \frac34 \widetilde P_{G_\pm H} \otimes \! \sum_{i=H,\bar H} f_i^{(I=0)} \biggr] ,\nonumber\end{align} \vspace{-1ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{B_\pm}^{(I=0)} &= \frac{\alpha_1}{\pi} \bigg[ \frac12 b_{0,1} f_{B_\pm}^{(I=0)}(z) + \widetilde P_{G_\pm Q_+} \otimes \sum_{\substack{i=\bar q, u, d, \bar \ell, e \\ r=1,\ldots,n_g}} \mathsf{y}_i^2 f_{i,r,r}^{(I=0)} \nonumber \\ & \quad + \widetilde P_{G_\pm Q_-} \otimes \sum_{\substack{i=q, \bar u, \bar d, \ell , \bar e \\ r=1,\ldots,n_g}} \mathsf{y}_i^2 f_{i,r,r}^{(I=0)} + \mathsf{y}_H^2 \widetilde P_{G_\pm H} \otimes \sum_{i=H,\bar H} f_i^{(I=0)} \bigg] \,,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_H^{(I=0)} &= \frac{\alpha_2}{\pi} \left[ \frac34 \widetilde P_{HH} \otimes f_H^{(I=0)}+ \frac12 \widetilde P_{HG_+} \otimes f_{W_+}^{(I=0)} + \frac12 \widetilde P_{HG_-} \otimes f_{W_-}^{(I=0)} \right] \nonumber \\ & \quad + \frac{\alpha_1}{\pi} \Bigl[ \mathsf{y}_H^2 \widetilde P_{HH} \otimes f_H^{(I=0)} + \mathsf{y}_H^2 \widetilde P_{HG_+} \otimes f_{B_+}^{(I=0)} + \mathsf{y}_H^2 \widetilde P_{HG_-} \otimes f_{B_-}^{(I=0)} \Bigr] \nonumber \\ & \quad + \frac{Y_t^2}{8\pi^2} \left[ z \otimes \left( f_{\bar q,3,3}^{(I=0)} + 2 f_{\bar u,3,3}^{(I=0)} \right) - N_c f_H^{(I=0)}(z) \right] \,,\end{align} In addition, we also have the antiparticle equations given by $CP$ conjugation, $q_-,r,s \leftrightarrow \bar q_+,s,r$, $g_+ \leftrightarrow g_-$, $H \leftrightarrow \bar H$, etc. Some terms have been simplified using $\delta(1-z) \otimes f = f(z)$. In the $I=1$ sector, \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{q,r,s}^{(I=1)} &= \frac{\alpha_3}{\pi} \frac43 \widetilde P_{Q_-Q_-} \otimes f_{q,r,s}^{(I=1)} \nonumber \\ &\quad \!+\! \frac{\alpha_2}{\pi} \left[\!- \frac14 \widetilde P_{Q_-Q_-} \!\otimes\! f_{q,r,s}^{(I=1)} \!+\! 
\Gamma_1 f_{q,r,s}^{(I=1)}(z) \!+\! \frac14 N_c \delta_{rs} \widetilde P_{Q_-G_+} \otimes f_{W_+}^{(I=1)}\!+\! \frac14 N_c \delta_{rs} \widetilde P_{Q_-G_-} \!\otimes\! f_{W_-}^{(I=1)} \right] \nonumber \\[5pt] & \quad + \frac{\alpha_1}{\pi} \mathsf{y}_q^2 \widetilde P_{Q_-Q_-} \otimes f_{q,r,s}^{(I=1)} + \frac{g_1 g_2}{4\pi^2} \mathsf{y}_q N_c \delta_{rs} \widetilde P_{Q_-G_+} \otimes \Bigl( f_{W_+\!B_+}^{(I=1)} + f_{B_+W_+}^{(I=1)} \Bigr) \nonumber \\ &\quad + \frac{g_1 g_2}{4\pi^2} \mathsf{y}_q N_c \delta_{rs} \widetilde P_{Q_-G_-} \otimes \Bigl( f_{W_-\!B_-}^{(I=1)} + f_{B_-W_-}^{(I=1)} \Bigr) \nonumber \\ &\quad + \frac{Y_t^2}{4\pi^2} \left[ -\frac18 \delta_{r3} f^{(I=1)}_{q,3,s}(z) -\frac18 \delta_{s3} f^{(I=1)}_{q,r,3}(z) + \frac{N_c}{2} \delta_{r3}\delta_{s3}\, 1 \otimes f_{\bar H}^{(I=1)} \right] \,,\end{align} \vspace{-3ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{\ell,r,s}^{(I=1)} &= \frac{\alpha_2}{\pi} \left[\!- \frac14 \widetilde P_{Q_-Q_-} \!\otimes\! f_{\ell,r,s}^{(I=1)} \!+\! \Gamma_1 f_{\ell,r,s}^{(I=1)}(z) \!+\! \frac14 \delta_{rs} \widetilde P_{Q_-G_+} \!\otimes\! f_{W_+}^{(I=1)} \!+\! \frac14 \delta_{rs} \widetilde P_{Q_-G_-} \!\otimes\! f_{W_-}^{(I=1)} \right] \nonumber \\[5pt] & \quad + \frac{\alpha_1}{\pi} \mathsf{y}_\ell^2 \widetilde P_{Q_-Q_-} \otimes f_{\ell,r,s}^{(I=1)} + \frac{g_1 g_2}{4\pi^2} \mathsf{y}_\ell \delta_{rs} \widetilde P_{Q_-G_+} \otimes \Bigl( f_{W_+\!B_+}^{(I=1)} + f_{B_+W_+}^{(I=1)}\Bigr) \nonumber \\ & \quad + \frac{g_1 g_2}{4\pi^2} \mathsf{y}_\ell \delta_{rs} \widetilde P_{Q_-G_-} \otimes \Bigl( f_{W_-\!B_-}^{(I=1)} + f_{B_-W_-}^{(I=1)}\Bigr) \,,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{W_\pm}^{(I=1)} &= \frac{\alpha_2}{\pi}\bigg[ \widetilde P_{G_\pm G_+} \otimes f_{W_+}^{(I=1)} + \widetilde P_{G_\pm G_-} \otimes f_{W_-}^{(I=1)} + \Gamma_2 f_{W_\pm}^{(I=1)} (z) \!+\! P_{G_\pm Q_+ } \otimes \!\!\! 
\sum_{\substack{i=\bar q, \bar \ell \\ r=1,\ldots,n_g}} f_{i,r,r}^{(I=1)} \nonumber \\ &\quad \!+\! P_{G_\pm Q_-} \otimes \!\!\! \sum_{\substack{i=q, \ell, \\ r=1,\ldots,n_g}} f_{i,r,r}^{(I=1)} \!+\! P_{G_\pm H}(z) \otimes \!\! \sum_{i=H,\bar H} f_i^{(I=1)} \bigg] ,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{W_\pm B_\pm}^{(I=1)} &= \left[\frac{\alpha_2}{\pi} \Gamma_3 +\frac{\alpha_1}{\pi} \frac14 b_{0,1} \right] f_{W_\pm B_\pm}^{(I=1)} (z) + \frac{g_1 g_2}{4\pi^2} \widetilde P_{G_\pm Q_-} \otimes \sum_{i=q, \ell, r=1,\ldots,n_g} \mathsf{y}_i f_{i,r,r}^{(I=1)} \nonumber \\ & \quad - \frac{g_1 g_2}{4\pi^2} \widetilde P_{G_\pm Q_+} \otimes \sum_{i=\bar q, \bar \ell, r=1,\ldots,n_g} f_{i,r,r}^{(I=1)} \,,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_H^{(I=1)} &= \frac{\alpha_2}{\pi} \left[ - \frac14 \widetilde P_{HH} \otimes f_H^{(I=1)} + \Gamma_4 f_H^{(I=1)}(z) + \frac14 \widetilde P_{HG_+} \otimes f_{W_+}^{(I=1)} + \frac14 \widetilde P_{HG_-} \otimes f_{W_-}^{(I=1)} \right] \nonumber \\ & \quad + \frac{\alpha_1}{\pi} \left[ \mathsf{y}_H^2 \widetilde P_{HH} \otimes f_H^{(I=1)} \right] + \frac{Y_t^2}{8\pi^2} \left[ z \otimes f_{\bar q,3,3}^{(I=1)} - N_c f_H^{(I=1)} (z) \right] \,,\end{align} \vspace{-2ex} \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{\widetilde H H}^{(I=1)} &= \frac{\alpha_2}{\pi} \left[ - \frac14 \widetilde P_{HH} \otimes f_{\widetilde H H}^{(I=1)} + \Gamma_4 f_{\widetilde H H}^{(I=1)}(z) \right] \nonumber \\[5pt] &\quad + \frac{\alpha_1}{\pi} \left[ -\mathsf{y}_H^2 \widetilde P_{HH} \otimes f_{\widetilde H H}^{(I=1)} + 2 \mathsf{y}_H^2 \Gamma_4 f_{\widetilde H H}^{(I=1)} (z) \right] -\frac{Y_t^2}{8\pi^2} N_c f_{\widetilde H H}^{(I=1)} (z) \,.\end{align} The constants $\Gamma_i$ are \begin{align}\label{eq:Ga_def} \Gamma_1 &= \frac32 + 2 \ln \frac{\nu}{\bar n \cdot r} \,, & \Gamma_2 &= \frac{b_{0,2}}{2}+2 \ln \frac{\nu}{\bar n \cdot r} \,, \nonumber \\ \Gamma_3 &= 
\frac{b_{0,2}}{4}+2 \ln \frac{\nu}{\bar n \cdot r}\,, & \Gamma_4 &= 2 + 2 \ln \frac{\nu}{\bar n \cdot r} \,. \end{align} The antiparticle equations are given by $CP$ conjugation, $q,r,s \leftrightarrow \bar q,s,r$, $g_+ \leftrightarrow g_-$, $H \leftrightarrow \bar H$, $\mathsf{y}_{q,\ell} \to - \mathsf{y}_{q,\ell}$, etc. With the sign convention between $q$ and $\bar q$ PDFs discussed below \eq{3.33}, $q + \bar q$ is $CP=+$ for the $I=0$ PDF, and $CP=-$ for the $I=1$ PDF. The $\ln \nu/(\bar n \cdot r)$ term, when combined with the soft anomalous dimension, turns into a $\ln \mu/(\bar n \cdot r)$ term that yields electroweak Sudakov double logarithms, as discussed in the first part of \sec{resummation}. The evolution in the $I=2$ sector is given by \begin{align} \mu \frac{\mathrm{d}}{\mathrm{d} \mu} f_{W_\pm}^{(I=2)} &= \frac{\alpha_2}{\pi} \left[ -\widetilde P_{G_\pm G_+} \otimes f_{W_+}^{(I=2)} -\widetilde P_{G_\pm G_-} \otimes f_{W_-}^{(I=2)} +\biggl(\frac{b_{0,2}}{2}+6 \ln \frac{\nu}{\bar n \cdot r}\biggr) f_{W_\pm} ^{(I=2)} (z) \right] , \end{align} and only involves the transverse $W$ PDFs. The chiral nature of the electroweak interactions implies that all parton distributions will become polarized. The polarized gluon distribution $f_{\Delta g}=f_{g_+}-f_{g_-}$ will evolve to a non-zero value using \eq{6.12}, even if it vanishes at small values of $\mu$.
Finally, the $\nu$ anomalous dimensions are diagonal, and take the simple form \begin{align} \nu \frac{\mathrm{d}}{\mathrm{d} \nu} f_i^{(I=0)} &= 0 ,\nonumber \\ \nu \frac{\mathrm{d}}{\mathrm{d} \nu} f_i^{(I=1,I_3=0)} &= \frac{\alpha_2}{\pi} \ln \frac{\mu^2}{M_W^2} f_i^{(I=1,I_3=0)}, \nonumber \\ \nu \frac{\mathrm{d}}{\mathrm{d} \nu} f_i^{(I=2,I_3=0)} &= \frac{3\alpha_2}{\pi} \ln \frac{\mu^2}{M_W^2} f_i^{(I=2,I_3=0)} \,, \end{align} with the exception of $\widetilde H H$, for which \begin{align} \label{eq:tildeHH} \nu \frac{\mathrm{d}}{\mathrm{d} \nu} f_{\widetilde H H}^{(I=1,I_3=1)} &= \bigg[\frac{\alpha_2}{2\pi} \ln \frac{\mu^2}{M_W^2} + \frac{(\alpha_2+ 4\mathsf{y}_H^2\alpha_1)}{2\pi} \ln \frac{\mu^2}{M_Z^2} \bigg]f_{\widetilde H H}^{(I=1,I_3=1)} \nonumber \\ &= \bigg[\frac{\alpha_2}{2\pi} \ln \frac{\mu^2}{M_W^2} + \frac{\alpha_{\text{em}}}{2\pi \sin^2 \theta_W \cos^2 \theta_W} \ln \frac{\mu^2}{M_Z^2} \bigg]f_{\widetilde H H}^{(I=1,I_3=1)} \,. \end{align} \section{Comparison to literature} \label{sec:comparison} We compare our results to those obtained for the (electroweak) PDF evolution in refs.~\cite{Ciafaloni:2001mu,Ciafaloni:2005fm,Bauer:2017isx}, which is based on splitting functions in the broken phase. 
Their approach yields, for example, for the $SU(2)$ running with $\mu \gg M$, \begin{align} \label{eq:BFW} \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, f_q^{(I=1,I_3=0)}(x,\mu) &= \frac{\alpha_2}{\pi}\, \int_0^{1-M/\mu}\, \mathrm{d} z\, \bigg[-\frac14 \widetilde P_{QQ}(z)\, f_q^{(I=1,I_3=0)}\Big(\frac{x}{z},\mu\Big) \nonumber \\ & \quad + \frac18 N_c \widetilde P_{QG}(z)\, f_W^{(I=1,I_3=0)}\Big(\frac{x}{z},\mu\Big) + \dots \bigg] \,, \nonumber \\ \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, f_W^{(I=1,I_3=0)}(x,\mu) &= \frac{\alpha_2}{\pi}\, \int_0^{1-M/\mu}\, \mathrm{d} z\, \bigg[\widetilde P_{GG}(z)\, f_W^{(I=1,I_3=0)}\Big(\frac{x}{z},\mu\Big) \nonumber \\ & \quad + \widetilde P_{GQ}(z)\, \sum_{i=q,\bar q, \ell, \bar \ell} f_i^{(I=1,I_3=0)}\Big(\frac{x}{z},\mu\Big) + \dots \bigg] \,.\end{align} Here QCD is accounted for through the number of colors $N_c$; the triplet PDFs are \begin{align}\label{eq:3.81} f_q^{(I=1,I_3=0)}(x,\mu) &= \tfrac12[f_{u_-}(x,\mu) - f_{d_-}(x,\mu)] \,, \nonumber \\ f_W^{(I=1,I_3=0)}(x,\mu) &= \sum_{h=\pm} \bigl[ f_{W_h^+}(x,\mu) - f_{W_h^-}(x,\mu) \bigr] \,,\end{align} i.e.~we assume a single generation; and the (conventional) QCD splitting functions are given by \begin{align} \widetilde P_{QQ} &= \widetilde P_{Q_-Q_-} \,, & \widetilde P_{QG} &= \widetilde P_{Q_-G_-} + \widetilde P_{Q_-G_+} \,, \nonumber \\ \widetilde P_{GG} &= \widetilde P_{G_-G_-} + \widetilde P_{G_-G_+} \,, & \widetilde P_{GQ} &= \widetilde P_{G_-Q_-} + \widetilde P_{G_+Q_-} \,.\end{align} Before comparing this to our results, we want to stress that the polarized $f_{\Delta W}^{(I=1,I_3=0)}$ also mixes into $f_q^{(I=1,I_3=0)}$, and was not accounted for in these earlier calculations. Since $f_{\Delta W}^{(I=1,I_3=0)}$ does not vanish, this effect cannot be ignored~\cite{Fornal:2018znf}. The splitting functions in \eq{BFW} agree with our results in \secs{RGE_C}{PDF_SM} for $z<1$.
In writing \eq{BFW}, the soft singularity is cut off by hand through the upper bound on the $z$ integral, so the $\delta$-function terms in the splitting functions do not contribute. To compensate for this, they add ``virtual'' contributions obtained from sum rules \begin{align} \label{eq:virtual} \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, f_q^{(I=1,I_3=0)}(x,\mu) &= \frac{\alpha_2}{\pi} f_q^{(I=1,I_3=0)}(x,\mu) \, \int_0^{1-M/\mu}\, \mathrm{d} z\, z\,\Big[-\frac34 \widetilde P_{QQ}(z) - \frac34 \widetilde P_{GQ}(z) \Big]+ \dots \nonumber \\ &= \frac{\alpha_2}{\pi}\, \Big(\frac32 \ln \frac{M}{\mu} + \frac98 \Big) f_q^{(I=1,I_3=0)}(x,\mu) + \dots \,, \nonumber \\ \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, f_W^{(I=1,I_3=0)}(x,\mu) &= \frac{\alpha_2}{\pi} f_W^{(I=1,I_3=0)}(x,\mu)\, \int_0^{1-M/\mu}\, \mathrm{d} z\, z\,\Big[-2 \widetilde P_{GG}(z) - \frac12 (N_c+1) \widetilde P_{Q G}(z) \Big] + \dots \nonumber \\ &= \frac{\alpha_2}{\pi}\, \Big(4 \ln \frac{M}{\mu} + \frac12 b_{0,2} \Big) f_W^{(I=1,I_3=0)}(x,\mu) + \dots \,,\end{align} where $N_c+1$ is the number of quark plus lepton electroweak doublets. To compare with our expressions, we need to remove the cutoff in \eq{BFW}, which only matters for the diagonal terms $\widetilde P_{QQ}$ and $\widetilde P_{GG}$. Specifically, \begin{align} \label{eq:cutoff} \int_{1-M/\mu}^1\, \mathrm{d} z\, \widetilde P_{QQ}(z)\, f_q^{(I=1,I_3=0)}\Big(\frac{x}{z},\mu\Big) &= \Big(2 \ln \frac{M}{\mu} + \frac32 \Big) \, f_q^{(I=1,I_3=0)}(x,\mu), \nonumber \\ \int_{1-M/\mu}^1\, \mathrm{d} z\, \widetilde P_{GG}(z)\, f_W^{(I=1,I_3=0)}\Big(\frac{x}{z},\mu\Big) &= 2 \ln \frac{M}{\mu} \, f_W^{(I=1,I_3=0)}(x,\mu) \,,\end{align} up to power corrections in $M/Q$.
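For reference, the first relation in \eq{cutoff} can be obtained directly, assuming the conventional form of the splitting function, $\widetilde P_{QQ}(z) = (1+z^2)/(1-z)_+ + \tfrac32\,\delta(1-z)$, and expanding $f_q^{(I=1,I_3=0)}(x/z,\mu) \approx f_q^{(I=1,I_3=0)}(x,\mu)$ in the endpoint region,
\begin{align}
\int_{1-M/\mu}^1 \mathrm{d} z\, \widetilde P_{QQ}(z) = \int_{1-M/\mu}^1 \mathrm{d} z\, \frac{1+z^2}{(1-z)_+} + \frac32 = 2 \ln \frac{M}{\mu} + \frac32 + \mathcal{O}\Big(\frac{M}{\mu}\Big) \,.
\end{align}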
Combining \eqs{virtual}{cutoff} gives \begin{align} \label{eq:7.5} \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, f_q^{(I=1,I_3=0)}(x,\mu) &= \frac{\alpha_2}{\pi}\, \bigg[\Big(\frac32 \ln \frac{M}{\mu} + \frac98 \Big) + \frac14 \Big(2 \ln \frac{M}{\mu} + \frac32 \Big)\bigg] f_q^{(I=1,I_3=0)}(x,\mu) + \dots \,, \nonumber \\ &= \frac{\alpha_2}{\pi}\, \Big(2 \ln \frac{M}{\mu} + \frac32 \Big) f_q^{(I=1,I_3=0)}(x,\mu) + \dots \,, \nonumber \\ \frac{\mathrm{d}}{\mathrm{d} \ln \mu}\, f_W^{(I=1,I_3=0)}(x,\mu) &= \frac{\alpha_2}{\pi}\, \bigg[\Big(4 \ln \frac{M}{\mu} + \frac12 b_{0,2} \Big) - 2 \ln \frac{M}{\mu} \bigg] f_W^{(I=1,I_3=0)}(x,\mu) + \dots \nonumber \\ &= \frac{\alpha_2}{\pi}\, \Big(2 \ln \frac{M}{\mu} + \frac12 b_{0,2} \Big) f_W^{(I=1,I_3=0)}(x,\mu) + \dots \,,\end{align} which, when evolved from $\mu=M$ to $\mu=Q$, leads to the same Sudakov factor as \eq{sudakov}. Interestingly, the constant terms in the anomalous dimensions in \eq{7.5} agree with those of $\Gamma_1$ and $\Gamma_2$ in \eq{Ga_def}. The approach in refs.~\cite{Ciafaloni:2001mu,Ciafaloni:2005fm,Bauer:2017isx} was developed to obtain the LL corrections, and it does not capture the full NLL corrections. First of all, the anomalous dimensions are not the same as ours. Although they integrate to the same Sudakov factor at LL accuracy, \eq{sudakov} indicates that there are differences beyond LL, including from the $\mu$-dependence of the coupling $\alpha_2$ in the anomalous dimension. Secondly, the cutoff $z \leq 1-M/Q$ in \eq{BFW}, though physically motivated, is somewhat arbitrary. In particular, changing $M$ between e.g.~$M_W$ and $M_Z$ leads to changes at NLL accuracy. Nevertheless, it is interesting that so much of our calculation is reproduced by just considering the splitting functions in the broken phase of the theory and imposing the momentum sum rules. As discussed earlier, polarization effects must be taken into account because $SU(2)$ is chiral.
Finally, once particles are identified in the final state, there is a nontrivial soft function which cannot be reproduced by splitting functions (a purely collinear approximation) at NLL. \section{Generalizations and extensions} \label{sec:extensions} In this section we provide a roadmap for a range of extensions that will be presented in more detail in a forthcoming publication. Specifically, we will touch on extending our approach to higher orders, other processes, kinematic hierarchies and jets. Perhaps most interestingly, we consider a hybrid between inclusive and exclusive processes, which is fully exclusive in the central region of the collision (where detectors are located), but fully inclusive in the forward (beam) regions. These generalizations can of course be combined, depending on the specific process and measurement under consideration. \subsection{Higher orders} \begin{table}[t] \centering \begin{tabular}{l | c c c} \hline \hline & Matching & Non-cusp & Cusp \\ \hline LL & tree & - & $1$-loop \\ NLL & tree & $1$-loop & $2$-loop \\ NLL$'$ & $1$-loop & $1$-loop & $2$-loop \\ NNLL & $1$-loop & $2$-loop & $3$-loop \\ \hline\hline \end{tabular} \caption{Perturbative ingredients needed at different orders in resummed perturbation theory. The columns correspond to the loop order of the matching (both at the high scale $Q$ and low scale $M$), non-cusp and cusp anomalous dimensions.} \label{tab:orders} \end{table} In this paper we limited ourselves to NLL accuracy, which allowed us to perform one-loop calculations of the anomalous dimensions and tree-level calculations of the matching at the hard scale $Q$ in \sec{high_matching} and at the low scale $M$ in \sec{low_matching}. The ingredients needed at different orders in perturbation theory are summarized in table \ref{tab:orders}, where the cusp anomalous dimension refers to the coefficient of the $\ln \nu$ and $\ln \mu$ terms in the anomalous dimensions.
The cusp anomalous dimension~\cite{Korchemsky:1987wg} is universal and gives rise to double logarithms (see \sec{resummation}) which is why it is needed at one higher order than the rest of the anomalous dimension. To extend our approach to NLL$'$ requires carrying out the matching at one-loop order. The high-scale matching depends on the process, but is relatively easy because it can be carried out in the symmetric phase of $SU(2)$. The virtual corrections to the high-scale matching for $2 \to 2$ processes are known~\cite{Fuhrer:2010eu}. The low-scale matching involves calculations in the broken phase (see e.g.~\cite{Chiu:2009mg,Chiu:2009ft}), but these can be carried out separately for each of the ingredients of the factorized cross section. Furthermore, these same ingredients appear for other processes and can thus be recycled. Pushing on to NNLL requires in addition the three-loop cusp anomalous dimension~\cite{Korchemsky:1987wg,Moch:2004pa}, and the remainder of the anomalous dimension at two-loop order. Even for $z<1$ these are not simply a multiple of the known two-loop splitting functions~\cite{Furmanski:1980cm,Curci:1980uw,Floratos:1981hs,Ellis:1996nn}, because the group theory factors differ between the real-virtual and real-real contributions. However, the double virtual diagrams do not need to be calculated, because they are independent of the representation, and the sum of all diagrams for the gauge singlet case reproduces the known two-loop splitting functions. \subsection{Other processes} The examples we focussed on in this paper are $2 \to 2$ processes with one quark and one lepton current. For each new type of process that is considered, the hard matching in \sec{high_matching} has to be repeated. The anomalous dimensions of the collinear and soft functions do not depend on the process, so the results of \secs{RGE_C}{RGE_S} are universal, and can be used again. 
New soft operators will appear for $2 \to N$ processes with $N>2$, or when collinear operators for $SU(2)$ gauge bosons in the quintet representation contribute. This only requires determining the relevant group theory factors, since the basic diagram in table \ref{tab:soft_diagram} is the same. \subsection{Kinematic hierarchies} In our analysis we have assumed that there is a single hard scale $\hat s = Q^2$ describing the short-distance process. However, it is possible that the Mandelstam invariants $\hat s_{ij} = (p_i + p_j)^2$ are hierarchical. For example, in a $2 \to 3$ process, two of the energetic final-state particles could be relatively close to each other, such that \begin{align} q^2 \sim \hat s_{45} \ll |\hat s_{ij\neq 45}| \sim Q^2 \,,\end{align} or one of the particles could be much less energetic, \begin{align} q^2 \sim |\hat s_{i5}| \ll |\hat s_{i<j \neq 5}| \sim Q^2 \,.\end{align} This can be described using SCET$_+$~\cite{Bauer:2011uc,Procura:2014cba,Larkoski:2015zka}, by first matching onto SCET for a $2 \to 2$ process at the high scale $Q$, and then resolving the two nearby particles or resolving the soft particle at the lower scale $q$. If $q \lesssim M$, all this is irrelevant from the point of view of electroweak corrections, and these will simply be the same as for the corresponding $2 \to 2$ process. If $Q \gg q \gg M$, one first uses the evolution for the $2 \to 2$ process from the scale $\mu = Q$ to $\mu = q$, then matches onto the $2 \to 3$ process, and uses the evolution for the $2 \to 3$ process from $\mu = q$ down to $\mu = M$. Thus, instead of only $\ln Q/M$ we now also get $\ln q/M$, or equivalently $\ln Q/q$. Let's make this a bit more concrete for the case of two energetic final-state particles that are close to each other (see refs.~\cite{Bauer:2011uc,Pietrulewicz:2016nwo} for additional details).
The matching at the scale $\mu \sim q$ maps a single collinear function onto the two collinear functions for the nearby particles and a collinear-soft function describing the radiation between them. The tree-level matching coefficient is simply the appropriate collinear splitting function. The main difference with respect to refs.~\cite{Bauer:2011uc,Pietrulewicz:2016nwo} is the gauge representation of the collinear operators. For example, \begin{align}\label{eq:cmatch} \mathcal{C}_q &\to \frac14\, \mathcal{C}_q \mathcal{C}_W - \frac12\,\mathcal{C}_q^a \mathcal{C}_W^b {\mathcal S}_{q W}^{ab} \,.\end{align} The soft operator ${\mathcal S}_{q W}^{ab}$ in \eq{cmatch} is the collinear-soft function, where the subscript indicates that the Wilson lines are along $q$ and $W$. Thus, instead of a soft function for the $2\to 3$ process, we have a soft function for the $2 \to 2$ process and a collinear-soft function. This soft function and collinear-soft function have the same invariant mass scale $\mu \sim M$, but different rapidity scales $\nu \sim M$ vs.~$QM/q$, and the resulting $\nu$-evolution sums single logarithms of $Q/q$. The matching relations for other collinear operators take a form similar to \eq{cmatch}. \subsection{Jets} For the processes we considered, only leptons were identified in the final state (the jet in DIS was not identified). However, one can also consider jets defined through an algorithm like anti-k$_T$~\cite{Cacciari:2008gp} and a jet radius parameter $R$. For $R \ll 1$,\footnote{The collinear approximation holds surprisingly well, with smaller than 10\% corrections for values as large as $R = 0.7$~\cite{Jager:2004jh,Mukherjee:2012uz}.} inclusive jet production can be described by a fragmentation function, which accounts for the jets produced by a parton~\cite{Dasgupta:2014yra,Kang:2016ehg,Dai:2016hzf}.
The only difference with standard fragmentation functions is that the evolution stops at the jet scale $QR$, where this fragmentation function is perturbatively calculable. This introduces EW logarithms $\ln (QR/Q) = \ln R$ in addition to $\ln Q/M$. The tree-level matching at the scale $\mu = QR$ yields \begin{align} D_{W_\pm \to {\rm jet}}^{(I=0)}(x,\mu,\nu) &= \delta(1-x) \,, \qquad D_{W_\pm \to {\rm jet}}^{(I=1,I_3=0)}(x,\mu,\nu) = 0 \,, \qquad D_{W_\pm \to {\rm jet}}^{(I=2,I_3=0)}(x,\mu,\nu) = 0 \,, \nonumber \\ D_{q\to {\rm jet}}^{(I=0)}(x,\mu,\nu) &= \delta(1-x) \,, \qquad D_{q\to {\rm jet}}^{(I=1,I_3=0)}(x,\mu,\nu) = 0 \,, \nonumber \\ D_{u\to {\rm jet}}(x,\mu,\nu) &= \delta(1-x) \,,\end{align} etc. Because no particle is identified, we do not get a contribution from the gauge non-singlet fragmentation functions. \subsection{Combining inclusive and exclusive processes} \label{sec:incl_excl} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs/beam_cones.pdf} \caption{We treat the radiation in cones around the beam axis (green) inclusively, because it is unresolved. $W$ and $Z$ bosons emitted into the central region (blue) are detected, and can therefore be treated in an exclusive manner.} \label{fig:beam_cones} \end{figure} The final generalization we consider mixes our resummation of EW logarithms for inclusive processes with the EW resummation for exclusive processes developed in refs.~\cite{Chiu:2007yn,Chiu:2007dg,Chiu:2008vv,Chiu:2009mg,Chiu:2009ft,Chiu:2009yx,Fuhrer:2010eu,Fuhrer:2010vi}. Specifically, we will assume that we are fully exclusive in the region $|\eta| < \eta_b$ covered by detectors, and fully inclusive near the beams, where $|\eta| > \eta_b$, see \fig{beam_cones}. These beam regions correspond to cones with half opening angle $R_b = 2 \arctan(e^{-\eta_b})$ around the beam axis. We will exploit the fact that for the LHC we can safely assume that $R_b$ is small.
Note that fully exclusive only refers to the electroweak corrections, and not to QCD or QED radiation; this is justified because the $W$ and $Z$ bosons are massive. In particular, we will not discuss what happens below the scale $M$. Before describing how to combine the inclusive and exclusive cases, it is useful to highlight the similarities and differences between the two. In the exclusive case only virtual diagrams contribute to the collinear functions. For example, from table~\ref{tab:coll_fermion} we read off that the fermion collinear operator has the following \emph{multiplicative} anomalous dimension ($SU(2)$ part only) \begin{align} \label{eq:C_ex} \hat \gamma_{\mu,Q}^{\rm ex} = c_F \Big( 2\ln \frac{\nu}{\bar n_i \cdot r_i} +\frac32 \Big) \,, \qquad \hat \gamma_{\nu,Q}^{\rm ex} = c_F \ln \frac{\mu^2}{M^2} \,.\end{align} Because we only include virtual diagrams, the anomalous dimension is independent of the representation (i.e.~singlet vs.~triplet), and of course there is no mixing. Including the corresponding $\ln \nu^2/\mu^2$ terms from the soft function anomalous dimension, we obtain the same $\mu$-anomalous dimension as in ref.~\cite{Chiu:2007yn}.\footnote{The anomalous dimensions differ by an overall factor of $-2$. The factor of 2 arises because they consider amplitudes, and we consider squared amplitudes. The minus sign is due to the fact that they run their collinear functions from $Q$ to $M$, whereas we do the opposite, i.e.~they renormalize the coefficient functions whereas we renormalize the operators.} Moving on to the soft function, we note that one cannot make the same simplifications as in \sec{gauge_spin} in the exclusive case. Specifically, the measurement restricting the real radiation sits between the ${\mathcal S}_i$ and ${\mathcal S}_i^\dagger$ in the amplitude and conjugate amplitude, prohibiting us from cancelling them against each other, even for the singlet.
The one-loop diagram responsible for the soft anomalous dimension is the same, but in the exclusive case contributions where the gauge boson crosses the cut are of course not allowed. We now describe how to combine exclusive and inclusive resummation to calculate the EW corrections. For this we build on recent work~\cite{Chien:2015cka,Becher:2015hka,Hornig:2016ahz} that has addressed QCD corrections for a similar setup. The main difference is that here we restrict \emph{all} EW radiation in the central region, whereas they restrict the energy of soft radiation through some central jet veto. Instead, we have to account for the gauge boson mass $M$. Our approach can be summarized as follows: we use the results from the exclusive case to evolve from $\mu=Q$ to $\mu=QR$, match onto an inclusive description of the two beams at that scale, and then evolve from $\mu=QR$ to $\mu=M$. This matching amounts to replacing the exclusive collinear functions for the beams by inclusive collinear functions and collinear-soft functions. Normalizing the collinear-soft function to 1, the tree-level matching coefficient is 1. The switch from exclusive to inclusive collinear functions at NLL thus simply involves changing the anomalous dimension from \eq{C_ex} to \eq{ga_C}, etc. The collinear-soft function consists of two back-to-back Wilson lines, and its one-loop anomalous dimension is therefore again the result of the diagram in table~\ref{tab:soft_diagram}.
Since $n_i \cdot n_j/2 = 1$ (back-to-back Wilson lines), we are left with the following collinear-soft anomalous dimensions \begin{align} \mathcal{C}_{Q}:& \quad \hat \gamma_{\mu,CS} = c_F \ln \frac{\nu^2}{\mu^2} \,, & \hat \gamma_{\nu,CS} &= c_F \ln \frac{\mu^2}{M^2} \,, \nonumber \\ \mathcal{C}_{Q}^a: & \quad \hat \gamma_{\mu,CS} = (c_F - \tfrac12 c_A) \ln \frac{\nu^2}{\mu^2} \,, & \hat \gamma_{\nu,CS} &= (c_F - \tfrac12 c_A) \ln \frac{\mu^2}{M^2} \,.\end{align} The overall normalization of the anomalous dimension was fixed using consistency, since the difference in $\nu$-anomalous dimension between the exclusive and inclusive collinear function must be cancelled by the collinear-soft function. The collinear-soft function has the same invariant mass scale $\mu_{CS} \sim M$ as the soft function, but its rapidity scale is $\nu_{CS} \sim M/R_b$. Thus the $\nu$-evolution will resum logarithms of $(M/R_b) / M = 1/R_b$, i.e.~logarithms of $R_b$, in addition to logarithms of $Q/M$. Finally, we note that there are electroweak nonglobal logarithms (NGLs)~\cite{Dasgupta:2001sh} of the form $\alpha^n \ln^n (QR/M)$ that arise because the radiation from the collinear-soft function is unconstrained in the beam region and fully constrained in the central region. Although these logarithms formally enter at NLL, they are only visible in a two-loop calculation and are expected to be relatively small. \section{Conclusions} \label{sec:conclusions} In this paper we considered an alternative to the usual paradigm in which the emissions of the massive $Z$ and $W$ bosons are treated as resolved, such that only virtual electroweak corrections need to be calculated. In fact, we started from the opposite extreme, considering a rather inclusive setup in which only one or two particles are identified in the final state, and developing the framework to resum electroweak logarithms.
The fact that incoming and outgoing particles are not electroweak singlets played an important role, introducing parton distribution functions and fragmentation functions in the corresponding representations that have a rather different evolution. Specifically, the resummation of nonsinglet distributions involves double logarithms, leading to a Sudakov suppression in the extreme high-energy limit. These contributions are also sensitive to the exchange of soft gauge bosons, and we performed our calculations using a separate ultraviolet, rapidity and infrared regulator to highlight the structure. Furthermore, we demonstrated the importance of polarization effects for gauge bosons, due to the chiral nature of $SU(2)$. For the user mainly interested in our results, we also provided an explicit recipe for incorporating these effects at next-to-leading logarithmic accuracy in the appendix. Of course, the experimental reality is probably somewhere between the fully exclusive and inclusive case. We therefore considered a mixed case where certain regions of phase space are treated exclusively (inclusively) due to the presence (absence) of detectors. This involves a combination of our framework and that devised for exclusive processes, and we can furthermore lean heavily on related developments in QCD cross sections. We also considered the case where a jet (instead of a lepton) is identified in the final state, which is straightforward to incorporate. \begin{acknowledgments} We thank Bartosz Fornal for discussions, and for collaboration in the initial stages of this project. This work is supported by the DOE grant DE-SC0009919, the ERC grant ERC-STG-2015-677323, the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW), and the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence ``Origin and Structure of the Universe''.
\end{acknowledgments}
\section{Notation and Preliminaries} The following definitions follow the notation in~\cite{Powley2009}. A {\it cellular automaton} (CA) is a tuple $\langle S, (\mathbb{L}, +), T, f \rangle$ with a set $S$ of states, a lattice $\mathbb{L}$ with a binary operation $+$, a neighbourhood template $T$, and a local rule $f$. The {\it set of states} $S$ is a finite set with elements $s$ taken from a finite alphabet $\Sigma$ with at least two elements. It is common to take an alphabet composed entirely of integers modulo $s$: $\Sigma = \mathbb{Z}_s = \{0,...,s-1\}$. An element of the lattice $i\in \mathbb{L}$ is called a cell. The lattice $\mathbb{L}$ can have $D$ dimensions and can be either infinite or finite with cyclic boundary conditions. The {\it neighbourhood template} $T=\langle\eta_1,...,\eta_m\rangle$ is a sequence of elements of $\mathbb{L}$. In particular, the neighbourhood of cell $i$ is obtained by adding the cell $i$ to each element of the template $T$: $T_i=\langle i+\eta_1,...,i+\eta_m\rangle$. Each cell $i$ of the CA is in a particular state $c[i] \in S$. A {\it configuration} of the CA is a function $c: \mathbb{L} \rightarrow S$. The {\it set of all possible configurations} of the CA is defined as $S^\mathbb{L}$. The {\it evolution of the CA} occurs in discrete time steps $t=0,1,2,...,n$. The transition from a configuration $c_t$ at time $t$ to the configuration $c_{t+1}$ at time $t+1$ is induced by applying the local rule $f$. The local rule is a function $f: S^{|T|} \rightarrow S$ which maps the states of the cells in the neighbourhood template $T$ at time step $t$ to the cell state of the configuration at time step $t+1$: \begin{equation} c_{t+1}[i]=f\left(c_t[i+\eta_1],...,c_t[i+\eta_m]\right ) \end{equation} The general transition from configuration to configuration is called the {\it global map} and is defined as $F: S^\mathbb{L} \rightarrow S^\mathbb{L}$.
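As a concrete illustration of these definitions (a minimal sketch with our own helper names, not code from the original work), the global map $F$ on a finite cyclic lattice can be implemented directly, with the neighbourhood template given as a list of offsets:

```python
def ca_step(config, template, rule):
    """One application of the global map F on a cyclic 1-D lattice.

    config   -- list of cell states c[i]
    template -- neighbourhood template, a list of offsets eta_j
    rule     -- local rule f, mapping a tuple of |T| states to a state
    """
    n = len(config)
    return [rule(tuple(config[(i + eta) % n] for eta in template))
            for i in range(n)]

# Example local rule on the template <-1, 0, 1>: return the centre cell,
# so one step leaves the configuration unchanged.
identity = lambda neigh: neigh[1]
```

For instance, `ca_step([0, 1, 0, 1], [-1, 0, 1], identity)` reproduces the configuration unchanged, while the rule `lambda neigh: neigh[2]` cyclically shifts it.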
In the following we will consider only 1-dimensional (1-D) CA as introduced by Wolfram~\cite{StephenWolfram1983,nks}. The lattice can be either finite, i.e.~$\mathbb{Z}_N$ with length $N$, or infinite, $\mathbb{Z}$. In the 1-D case it is common to introduce the {\it radius} of the neighbourhood template, which can then be written as $\langle -r,-r+1,...,r-1,r \rangle$ and has length $2 r+1$ cells. With a given radius $r$ the local rule is a function $f: \mathbb{Z}_{|S|}^{2r+1} \rightarrow \mathbb{Z}_{|S|}$, so that there are ${|S|}^{{|S|}^{(2r+1)}}$ possible rules. The so-called Elementary Cellular Automata (ECA) with radius $r=1$ have the neighbourhood template $\langle -1,0,1\rangle$, meaning that their neighbourhoods comprise a central cell, one cell to the left of it and one to the right. The rulespace for ECA contains $2^{2^{3}}=256$ rules. Here we consider non-equivalent rules subject to the operations complementation, reflection, conjugation and joint transformation (combining reflection and conjugation) (see Supplementary Information). For example, the number of reduced rules for ECA is 88 (see Supplementary Information). In order to keep the notation simple, we adopt the following definitions \cite{NavotGoldenfeld2006}. A cellular automaton $A=(a(t),\{S_{A} \},f_{A})$ at time step $t$ is composed of a lattice $a(t)$ of cells that can each assume a value from a finite alphabet $\{S_{A}\}$. A single cell is referenced as $a_{n}(t)$. The update rule $f_{A}$ for each time step is defined as $f_{A}: \{S_{A}\}^{2r+1} \rightarrow \{S_{A}\}$ with $a_{n}(t+1)=f_{A}[a_{n-1}(t),a_{n}(t),a_{n+1}(t)]$. The entire lattice gets updated through the operation $f_{A} a(t)$. \subsection{CA Typical Behaviour and Wolfram's Classes} Wolfram also introduced~\cite{nks} a heuristic for classifying computer programs by inspecting the behaviour of their space-time diagrams. Computer programs behave differently for different inputs.
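To make the ECA case explicit (an illustrative sketch; the function names are ours), the local rule $f$ of an ECA is recovered from its Wolfram rule number by reading off bit $4l + 2c + r$ of the number's binary expansion for the neighbourhood $(l, c, r)$:

```python
def eca_rule(number):
    """Local rule f of an ECA: the neighbourhood (l, c, r) indexes
    bit 4l + 2c + r of the Wolfram rule number (0-255)."""
    return lambda l, c, r: (number >> (4 * l + 2 * c + r)) & 1

def eca_step(config, number):
    """One time step a(t) -> a(t+1) on a finite cyclic lattice."""
    f, n = eca_rule(number), len(config)
    return [f(config[i - 1], config[i], config[(i + 1) % n])
            for i in range(n)]
```

For example, one step of rule 51 complements every cell, rule 204 is the identity, and iterating `eca_step` with rule 110 from a single occupied cell reproduces the familiar class-4 space-time diagram.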
It is possible, and not uncommon, however, to analyze the behaviour of a program asymptotically according to an initial condition metric~\cite{zenilca,zenilchaos}. Wolfram's classes can be characterized as follows: \begin{myitemize} \item Class 1. Symbolic systems which rapidly converge to a uniform state. Examples are rules 0, 32 and 160. \item Class 2. Symbolic systems which rapidly converge to a repetitive or stable state. Examples are rules 4, 108 and 218. \item Class 3. Symbolic systems which appear to remain in a random state. Examples are rules 22, 30, 126 and 189. \item Class 4. Symbolic systems which form areas of repetitive or stable states, but which also form structures that interact with each other in complicated ways. Examples are rules 54 and 110. \end{myitemize} We use the concept of a Wolfram class as a guiding index that popularly assigns some typical behaviour to every ECA, even though we have also shown that such a distinction is not of a fundamental nature~\cite{riedelzenil}. Here, however, it will be useful for studying the typical behaviour of rules that are capable of emulating others when included in minimal emulation sets. In~\cite{cook,nks}, it was shown that at least one ECA rule can perform Turing-universal computation. It is still an open question whether other rules are capable of Turing universality, but some evidence suggests that they may be, and that cellular automata rules and random computer programs are, in general, highly programmable and candidates for computational universality~\cite{riedelzenil}. Here we extend results reported in~\cite{StephenWolfram1986} regarding the Boolean composition of ECA. We introduce minimal generating sets as candidates for being able to generate all ECA rules, and we introduce new Turing-universality results in ECA by composition and an associated non-ECA Turing-universal CA implementing the composition. \section{Methods} \subsection{Rule Composition} Rule composition for a pair of CA, i.e.
\textrm{rule $C$} = $\textrm{rule A} \circ \textrm{rule B}$, is defined as $f^{1}_{C}a(0)=f^{1}_{B} \circ (f^{1}_{A} a(0))$. The lattice output of rule $A$ is the input of rule $B$; the composition of rule $A$ and rule $B$ thus yields rule $C$, and conversely rule $C$ can be composed out of rule $A$ and rule $B$. The whole evolution of the composite $\textrm{rule A} \circ \textrm{rule B}$ is $f_{B} \circ f_{A} a(2t) =f_{C} a(t)$, so that $2t$ time steps of the composite correspond to $t$ time steps of rule $C$. More generally, one can compose rule $A$ out of $n$ rules $A_1,\ldots,A_n$: $f^{1}_{A}a(0)=f^{1}_{A_1} \circ f^{1}_{A_2} \ldots \circ f^{1}_{A_n} a(0)$. The whole evolution is $f_{A} a(t)=f_{A_1} \circ f_{A_2} \ldots \circ f_{A_n} a(n t)$. In order to find the CA in a higher rule space that implement the Boolean composition of CA in lower rule spaces (e.g. ECA) we introduce the concept of causal separability. \begin{mydefinition} A space-time evolution of a function $C:S\rightarrow S^\prime$ (e.g. a CA) is minimally causally separable if, and only if, the rulespace of the rule $R$ of $C$ is the smallest rulespace in which the rule icon network of $R$ (see Fig.~\ref{fig_map_50_37b}) is separable into $|S|$ disconnected networks. \end{mydefinition} Fig.~\ref{fig_map_50_37b} illustrates the basics of (non-)separability under a rule composition of ECA rules, an example demonstrating that the ECA rulespace is not closed under Boolean composition. This will help us find the CA rule in a higher space that implements the emulation of rule 110 under composition of ECA rules (see Fig.~\ref{fig_4color_map_110} in the Supplementary Material). \subsection{Minimal Generating ECA Rule Sets} The questions driving our experiments thus led us to the problem of finding the minimal rule set that generates the full ECA space.
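The composition just defined can be checked numerically (a sketch with our own helper names, not code from the paper). For instance, applying rule 15 and then rule 170 emulates one step of rule 51, the substitution used by sampling algorithm 1 below:

```python
import random

def eca_step(config, number):
    """One step of the ECA with the given Wolfram number, cyclic boundaries;
    the neighbourhood (l, c, r) indexes bit 4l + 2c + r of the number."""
    n = len(config)
    return [(number >> (4 * config[i - 1] + 2 * config[i]
                        + config[(i + 1) % n])) & 1 for i in range(n)]

def compose_step(config, rule_a, rule_b):
    """One step of rule A followed by one step of rule B (A applied first)."""
    return eca_step(eca_step(config, rule_a), rule_b)

# Rule 15 (NOT left) followed by rule 170 (right neighbour) gives
# NOT centre, i.e. one step of rule 51:
random.seed(0)
for _ in range(100):
    c = [random.randint(0, 1) for _ in range(16)]
    assert compose_step(c, 15, 170) == eca_step(c, 51)
```

Note that the composite takes two elementary time steps per emulated step of rule 51, in accordance with $f_{B} \circ f_{A}\, a(2t) = f_{C}\, a(t)$.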
The implementation of the algorithm, and thus checking for rule $N$-primality, is equivalent to the formal equivalence checking process used in electronic design automation to prove that two representations of a circuit design exhibit exactly the same behaviour, a problem reducible to Boolean satisfiability (SAT). Since SAT is NP-complete, only algorithms with exponential worst-case complexity are known to carry out this test. We therefore proceeded by sampling methods. One strategy is to start from a subset of ECA rules, composing them and finding the emulations that fall back into the ECA rulespace. Only a subset of all possible rule pairs of a given rulespace leads to a rule which is itself a member of the same rulespace, which clearly indicates that ECA under composition is not an algebraic structure: it is not closed under composition. If a rule composition remains in the same rulespace, then the mapping from rule tuples to cell states induced by the successive rules of the composition is one-to-one (see Fig.~\ref{fig_map_50_37}). However, this mapping is not always one-to-one, and in that case the resulting rule composition leaves the rulespace of the constituent rules (again Fig.~\ref{fig_map_50_37}). In this paper we consider both cases, investigating in particular the former case, where the composite rule remains in the same rulespace, with a focus on Turing-universality. Another strategy is to start from a subset of ECA rules and find the pairs of rules that can emulate the rules in question, then move on to another rule, retaining the compositions already found in the previous steps. However, all sampling approaches have their own potential problems, because the result may be dependent on the initial subset of rules chosen to start the exploration.
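Because the composite of two radius-1 rules is a radius-2 rule acting on only $2^5 = 32$ neighbourhoods, the closure test for a single pair does not require a full SAT solver and can be carried out by exhaustive enumeration (an illustrative sketch; the helper names are ours):

```python
from itertools import product

def eca_f(number):
    """Local rule of an ECA, read off from the Wolfram rule number."""
    return lambda l, c, r: (number >> (4 * l + 2 * c + r)) & 1

def composite_eca(rule_a, rule_b):
    """Wolfram number of the ECA emulated by applying rule A then rule B,
    or None if the composite is not causally separable into an ECA."""
    fa, fb = eca_f(rule_a), eca_f(rule_b)
    table = {}
    for a, b, c, d, e in product((0, 1), repeat=5):
        # the composite is a radius-2 rule on the neighbourhood (a, b, c, d, e)
        out = fb(fa(a, b, c), fa(b, c, d), fa(c, d, e))
        key = (b, c, d)
        if table.setdefault(key, out) != out:
            return None  # output depends on the cells at distance 2
    return sum(table[(l, c, r)] << (4 * l + 2 * c + r)
               for (l, c, r) in product((0, 1), repeat=3))
```

In this sketch, `composite_eca(15, 170)` returns 51, while `composite_eca(50, 37)` returns `None`, consistent with the (non-)separability examples discussed below.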
So we also employed a procedure akin to bootstrapping, where we re-sampled the ECA rulespace with different seeds (initial ECA rules) to infer the compositions from a new sample of the same rulespace and sought out convergence patterns. We also explore the mapping to a higher-colour rulespace in order to analyze the full richness of the rulespace induced by rule composition. \subsection{Exploratory Algorithms} \label{samplingalg} \subsubsection{Sampling algorithm 1} Sampling algorithm 1 is based on the frequency of rule 51 in all rule pairs (i.e. 2-tuples) emulating another rule. On the one hand, the number of distinct rules composed with rule 51 is the highest (see Fig.~\ref{fig_rule_pair_1} (a)), besides rule 204, which is the identity rule. On the other hand, the number of distinct rules forming a composition pair with rule 51 is the highest as well (see Fig.~\ref{fig_rule_pair_2} (b)). Here rule 204, the identity rule, and rule 0, the annihilator rule, have the same number of distinct rules and are not of importance in this context. The algorithm searches among all rule pairs containing rule 51. If no rule pair containing rule 51 is found, the algorithm searches for other valid rule pairs. It tries to substitute (fold) each rule pair with other rule pairs found, in order to form reduced $n$-tuples. The goal of the algorithm is to minimize the set of rule tuples. Since rule 51 can be composed of rules 15 and 170, the algorithm always substitutes rule 51 with the rule pair (15, 170). \begin{figure}[ht!] \centering \begin{tabular}{c} \label{fig_rule_pair_1}\includegraphics[width=80mm]{Rule_Composed_with_51.png}\\ (a) Distinct count ($y$ axis) of ECA rules which can be emulated\\ by other ECA rules ($x$ axis). \\[6pt] \label{fig_rule_pair_2}\includegraphics[width=80mm]{Rule_pairs_with_contain_51.png}\\ (b) Distinct count ($y$ axis) of ECA rules which form pairs\\with other ECA rules ($x$ axis).
\\[6pt] \end{tabular} \caption{\label{fig_rule_pair} (a) Showing the non-uniform distribution count ($y$ axis) of ECA rules which can be emulated by ECA rules ($x$ axis) pairing with another ECA rule. For example, rule pairs containing rule 51 or rule 204 can emulate the most ECA rules. (b) Showing the distinct count ($y$ axis) of ECA rules which form pairs with ECA rules ($x$ axis). For example, rules 0, 51 and 204 form the most distinct pairs with other ECA rules. } \end{figure} The pseudo-code for the first sampling algorithm is as follows: \begin{myenumerate} \item create empty sets prime.rules and ensemble.tuples as well as the set all.ECA.rules containing all 88 ECA rules. \item do \begin{myenumerate} \item Draw a set of rule tuples, each having composing rules different from the composed rule, with the set as a whole containing all 88 ECA rules. This set is called tuple.pool. \item select all tuples from tuple.pool the composing rules of which are not in the set prime.rules \label{alg1_ref1} \item if selection (from step \ref{alg1_ref1}) $=$ empty then break \item draw one random sample from the tuple set \item add the composing rule to the set prime.rules \item add whole tuple to set ensemble.tuples \end{myenumerate} \item for test.tuples in ensemble.tuples \begin{myenumerate} \item extract all distinct composing rules from the set, put them into prime.rules, and do \begin{myenumerate} \item select at random a rule prime.rule from the set test.tuples not in $\{15, 51, 170\}$. \item set valid.tuples $=$ (get all tuples which have prime.rule as the composite rule) \item remove all tuples from set valid.tuples which have composing rules in set prime.rules \item set test.tuples $=$ (in all tuples of set test.tuples replace the composed rule with the composing rules of set valid.tuples) \item remove prime.rule from set prime.rules \item remove all tuples from test.tuples which have repeating rules.
\item break if the order of set prime.rules $=$ 0 \end{myenumerate} \end{myenumerate} \item prime.rules $=$ extract all distinct composing tuples from set test.tuples \item prime.rules $=$ prime.rules $+$ rules from all.ECA.rules not contained in prime.tuples \item comp.tuples $=$ test.tuples $+$ \\ (for rule in prime.rules \\ \-\hspace{1cm} tuples $=$ tuples $+$ $\{$rule,\_,\_$\}$ \\ end for). \end{myenumerate} \subsubsection{Sampling algorithm 2} This algorithm does not rely on any special insight into the structure of rule composition pairs for ECA rules, relying only on random sampling of the whole set of rule composition pairs. \\ The pseudo-code for the second sampling algorithm is as follows: \begin{myenumerate} \item select all tuples from set all.tuples which do not have repeating rules \item set prime.pool $=$ all 88 ECA rules \item initialize as empty sets all.tuples and new.tuples $=$ $\{$ $\}$ \item initialize as empty set selected.rules $=$ $\{$ $\}$ \begin{myenumerate} \item set rule $=$ (pick a random rule from prime.pool excluding selected.rules) \item set selected.rules $=$ selected.rules $+$ rule \item initialize new.primes, new.rules and rules as empty sets \item rules $=$ $\{$rule$\}$ (start with a set containing one rule) \begin{myenumerate} \item tuples $=$ (draw for each rule in rules a tuple from prime.pool which composes rule) \item check that set tuples only contains composing rules which are not already composites \item if length(tuples)$=$0, break \item set rules $=$ (all composed rules in set tuples) \item set new.rules $=$ new.rules $+$ tuples \item else break at 100 steps \end{myenumerate} \item new.tuple $=$ fold (definition below) set new.rules to form one tuple adhering to causal order \item primes $=$ select composing rules of new.tuple \item set new.tuples $=$ new.tuples $+$ new.tuple \item if the number of distinct rules in set new.tuple $=$ 88 then break \item else break at max steps \end{myenumerate} \item all.tuples $=$ all.tuples $+$ new.tuples
\item stop at the maximal number of trials max \end{myenumerate} The folding function in this algorithm refers to the substitution of composite rules previously found to be emulated by other prime rules in earlier iterations of the same algorithm, i.e. the substitution of rules that can be decomposed into other prime rules. \subsection{Primality and Rule (De)Composition} \begin{figure} \centering \begin{tabular}{c} \label{fig_map_54}\includegraphics[width=120mm]{comp_mapping_54.png}\\ (a) Rule mapping ECA rule 54. \\[6pt] \label{fig_rcomp_54}\includegraphics[width=60mm]{Rule54.png}\\ (b) Time evolution of ECA rule 54. \\[6pt] \label{fig_map_50_37}\includegraphics[width=100mm]{comp_mapping_50_37.png} \\ (c) Non-causal rule mapping of rule 50 $\circ$ 37. \\[6pt]\label{fig_rcomp_50_37b}\includegraphics[width=60mm]{Rule50_37.png} \\ (d) Time evolution of rule 50 $\circ$ 37. \\[6pt] \end{tabular} \caption{\label{fig_map_50_37b} Rule composition of ECA rules. (a) A network representation of the rule icon. (c) The rule icon is not causally separable and therefore the resulting composition is not an ECA rule but belongs to a larger CA rulespace. (b) Emulation of ECA rule 54. (d) Emulation of a non-ECA rule after Boolean composition of ECA rules. ECA is therefore not a closed space under composition.} \end{figure} Many rules can be composed from rule tuples not involving the composite rule itself. As in the case of PCA, all ECA rules can be composed from other ECA rules. For example, for rule pairs there are $88^2=7744$ compositions, of which $7744-736=7008$ are not in the ECA rulespace. One could investigate these rules and determine to which Wolfram class they belong. For example, the rule pair (50, 37) (see Fig.~\ref{fig_map_50_37}) behaves like a Wolfram class 4 CA and seems potentially of high complexity.
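To make this concrete, checking whether a rule pair composes into another ECA rule can be sketched as follows. This is a minimal illustrative sketch (not the paper's code): \texttt{compose\_pair}, the apply-first-then-second convention, and the truth-table encoding are our own choices; the composition of two one-step ECA maps depends on a 5-cell neighbourhood, and it is an ECA rule exactly when the dependence on the two outermost cells vanishes.

```python
def eca(rule):
    """Local update function (p, q, r) -> {0, 1} of an ECA rule number."""
    return lambda p, q, r: (rule >> (4 * p + 2 * q + r)) & 1

def compose_pair(first, second):
    """Apply `first` for one time step, then `second`. Return the ECA rule
    number the pair emulates if the two-step map depends only on the middle
    three cells, else None (the composition then lives in a larger,
    radius-2 rulespace)."""
    A, B = eca(first), eca(second)
    table = {}
    for bits in range(32):                      # all 5-cell neighbourhoods
        a, b, c, d, e = ((bits >> k) & 1 for k in (4, 3, 2, 1, 0))
        out = B(A(a, b, c), A(b, c, d), A(c, d, e))
        if table.setdefault((b, c, d), out) != out:
            return None                         # output depends on a or e
    return sum(table[(p, q, r)] << (4 * p + 2 * q + r)
               for p in (0, 1) for q in (0, 1) for r in (0, 1))

print(compose_pair(170, 15))   # 51: the pair emulates rule 51
print(compose_pair(51, 118))   # 110
print(compose_pair(50, 37))    # None: not causally separable into an ECA rule
```

With this convention the pair (170, 15) emulates rule 51 and (51, 118) emulates rule 110, while (50, 37) yields no ECA rule, matching the counts discussed above.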
\begin{mydefinition} A rule $R\in S$ in rulespace $S$ is $N$-prime if, and only if, it can only be simulated by itself or an equivalent rule under trivial symmetry transformations (see Supplementary Material) in $S$, i.e. no composition exists to simulate $R\in S$ up to the $N$-compositional iteration (see Algorithms in Subsection~\ref{samplingalg}) other than (possibly) a composition of $R$ itself. \end{mydefinition} \begin{mydefinition} A rule $R^\prime \in S$ is $N$-composite if, and only if, it can be decomposed into a composition of other rules in $S$ non-equivalent to $R^\prime$ under trivial symmetry transformations (see Supplementary Material) up to an $N$-compositional iteration (see Algorithms in Subsection~\ref{samplingalg}). \end{mydefinition} \noindent It follows that prime and composite rules are disjoint subsets of the ECA rulespace. \subsection{Order Parameters} We use two order parameters that do not play any fundamental role in the main results, yet offer a guide to the type of accepted general knowledge about generating rules and space-time evolution dynamics in ECA. Wolfram~\cite{nks} introduced a heuristic of behavioural class based on the typical behaviour of a CA for a random, equal density of non-zero (for binary) states/colours. Class 1 is the simplest (converging to a uniform state), followed by class 2 (settling into simple stable or periodic structures); class 3 is random-looking, and class 4 shows persistent interacting structures and is thus considered the most complex. \section{Proofs and Results} We are interested in those rules which map back to ECA rules. We ask what is the minimal ECA rule set which produces all necessary tuples to compose all other ECA rules. One way to find such a set of `prime' rules is to create a graph having the rules as vertices and the edges created from the pairing of each tuple element to the composite rule.
By looking at the vertex in-degrees and out-degrees one can eliminate the vertices which have an in-degree $>0$ and an out-degree $=0$. Taking the set of remaining vertices, i.e. rules, one can further eliminate vertices by exploring symmetries in the remaining graph. \begin{figure}[ht!] \centering \textbf{a}\hspace{5cm}\textbf{b}\\ \includegraphics[width=45mm]{primesdist1.png} \hspace{1cm} \includegraphics[width=45mm]{primesdist2.png}\\ \medskip \textbf{c}\hspace{5cm}\textbf{d}\\ \includegraphics[width=45mm]{compositesdist1.png} \hspace{1cm} \includegraphics[width=45mm]{compositesdist2.png}\\ \medskip \textbf{e}\hspace{5cm}\textbf{f}\\ \includegraphics[width=45mm]{composingprimes1.png}\hspace{1.2cm}\includegraphics[width=45mm]{composingprimes2All.png} \caption{\label{dists}(a,b) Distribution of prime rules by Wolfram class~\cite{nks} according to algorithms 1 (a) and 2 (b). (c,d) Distribution of composite rules by Wolfram class according to algorithms 1 (c) and 2 (d). (e,f) Distributions of primes as used to generate all other rules under Boolean composition for algorithms 1 (e) and 2 (f). All bins are normalized by the number of elements in each class for all 88 non-trivially symmetric ECA rules.} \end{figure} \subsection{Rule Primality and Causal Decomposition} Table~\ref{primetable} shows the minimal ECA generating set with the prime rules and rule compositions for the composite rules. \begin{myobservation} None of the prime rules is of Wolfram class 4, i.e. all class 4 rules can be composed from prime rules of a lower class. In other words, all class 4 rules are composite. \end{myobservation} Fig.~\ref{dists} shows the distribution of Wolfram classes for prime and composite rules building the ECA space. Among the ECA prime rules used to generate all others, most belong to classes 1 and 2, and these are the distributions for which sampling algorithms 1 and 2 produced the most different results.
Yet in both cases rules of classes 1 and 2 are the building blocks in the minimal ECA-generating sets. Table~\ref{primetable} in the Supplementary Information provides all the compositions found and the breakdown of ECA prime and composite rules. It is common to find that rule permutations under composition yield the same ECA rule. For example, rule 110 can be composed out of the prime tuple (170, 15, 118). This is also true for some of the other permutations. Rule 54 can be composed out of the rule set (15, 108, 170) and all its permutations. However, this is not a general rule, and no generating rule set (small or large) was found to be commutative or associative. However, the rule icons of the prime rules most frequently used to generate all other ECA rules have a similar, high non-zero state/colour density ($> 0.4$), with a Hamming distance of two to rule 110, yet they generate simple behaviour. \subsection{Complexity of Prime and Composite Rules} \begin{figure} \centering \textbf{a}\hspace{5.5cm}\textbf{b}\\ \includegraphics[width=37mm]{minimal1.png} \hspace{1cm} \includegraphics[width=37mm]{minimal2.png} \medskip \textbf{c}\\ \includegraphics[width=83mm]{count1.png} \hspace{1cm}\\ \medskip \textbf{d}\\ \includegraphics[width=83mm]{count2All.png}\\ \medskip \textbf{e}\\ \includegraphics[width=80mm]{buildingblocks.png}\\ \caption{\label{minimal}(a,b) Convergence to minimal sets, with the smallest of size $38$ ECA prime rules generating all other (256, or 88 non-equivalent) ECA rules for different seeds and two different algorithms, with algorithm 2 (b) producing the smallest. (c) Top 40 most frequent prime rules able to produce all other ECA rules under Boolean composition according to algorithm 1. (d) A very similar ranking of the most frequent rules, but with algorithm 2, among not only the minimal 38-element set but also all other 16 minimal sets of size at most 40.
(e) ECA space-time evolutions of rules that are the building blocks of all other rules in the ECA space, preceded by their rule icon, running from random initial conditions for illustration purposes.} \end{figure} Fig.~\ref{minimal}(c,d) shows the ECA prime rules with the highest frequency and their associated space-time evolutions, starting from a typical (i.e. random, 0.5 non-zero density) initial condition (Fig.~\ref{minimal}(e)), among the `building blocks' able to generate the full ECA space (88 non-trivially symmetric rules, or 256 counting all). They can be classified into two apparent main groups: (i) identity filters (e.g. rules 140, 136 and 200) able to partially `silence' or filter the communication of information from input to output, and (ii) rules that transfer information diagonally, or `shifters' (such as rules 170 and 14), at different speeds (e.g. slow, like rule 15, versus fast, like rules 14 and 184). The two most frequent ECA rules used to build all others are shifters that transfer information at different speeds with no collisions and no loss of information (rules 15 and 170). The set of primes in the 38-rule minimal set able to produce all other ECA rules (88 non-equivalent and 256 under trivial symmetries) is: 0, 1, 2, 3, 5, 7, 11, 12, 13, 18, 19, 23, 24, 25, 27, 28, 29, 30, 34, 35, 38, 40, 42, 43, 46, 50, 51, 57, 58, 72, 73, 74, 94, 104, 105, 108, 110, 128, 130, 132, 134, 136, 138, 154, 162, 164, 178 and 204. The composite rules in the minimal set are: 4, 6, 8, 9, 10, 14, 15, 22, 26, 32, 33, 36, 37, 41, 44, 45, 54, 56, 60, 62, 76, 77, 78, 90, 106, 122, 126, 140, 142, 146, 150, 152, 156, 160, 168, 170, 172, 184, 200 and 232. The intersection between algorithm 1 and algorithm 2 is significant. Among all the 38- and 40-element prime rule sets produced by the two algorithms, 27 rules are the same: 6, 9, 10, 14, 15, 22, 32, 37, 41, 60, 76, 77, 78, 90, 122, 126, 140, 142, 146, 152, 156, 160, 168, 170, 184, 200 and 232.
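The count of 88 non-equivalent rules used throughout can be checked directly by quotienting the 256 ECA rules by the trivial symmetries (left-right reflection and black-white complementation). The following is a minimal sketch with our own helper names, not the paper's code:

```python
def bit(rule, p, q, r):
    """Output bit of an ECA rule for neighbourhood (p, q, r)."""
    return (rule >> (4 * p + 2 * q + r)) & 1

def from_fn(f):
    """Rebuild a rule number from a local function (p, q, r) -> {0, 1}."""
    return sum(f(p, q, r) << (4 * p + 2 * q + r)
               for p in (0, 1) for q in (0, 1) for r in (0, 1))

# Left-right reflection of the neighbourhood.
mirror = lambda n: from_fn(lambda p, q, r: bit(n, r, q, p))
# Black-white complementation of inputs and output.
negate = lambda n: from_fn(lambda p, q, r: 1 - bit(n, 1 - p, 1 - q, 1 - r))

# Orbit of each rule under {id, mirror, negate, mirror∘negate};
# the minimum of the orbit is the canonical representative.
classes = {min(n, mirror(n), negate(n), mirror(negate(n))) for n in range(256)}
print(len(classes))       # 88 non-trivially symmetric ECA rules
print(mirror(110), negate(110))   # 124 137: rule 110's symmetric partners
```

The result agrees with a Burnside count: $(256 + 64 + 16 + 16)/4 = 88$ orbits under the four-element symmetry group.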
\subsection{Rule Decomposition} Fig.~\ref{decomposition} illustrates the way in which a set of simple ECA rules composed as building blocks in a Boolean circuit can produce rich behaviour, and how the different elements can causally explain the behaviour of the final system in a top-down and bottom-up way through Boolean decomposition, fine- and coarse-graining. \begin{figure}[ht!] \centering \textbf{a}\\ \includegraphics[width=40mm]{fabriclooking1.png}\\ \textbf{b}\hspace{3.2cm}\textbf{c}\hspace{3.2cm}\textbf{d}\\ \includegraphics[width=25mm]{buildingblock1.png} \hspace{.5cm} \includegraphics[width=25mm]{buildingblock2.png} \hspace{.5cm} \includegraphics[width=25mm]{buildingblock3.png}\\ \textbf{e}\hspace{2.3cm}\textbf{f}\hspace{2.3cm}\textbf{g}\hspace{2.3cm}\textbf{h}\\ \includegraphics[width=100mm]{simpleECAs.png} \caption{\label{decomposition}Causal decomposition: A complex CA (not in the ECA rulespace) built by Boolean composition of (e,f,g,h) 4 simple ECA rules: 15 $\circ$ 154 $\circ$ 170 $\circ$ 45. (b,c,d) Pair compositions (15 $\circ$ 45), (15 $\circ$ 154), and (15 $\circ$ 154) of the same rules. Almost all permutations lead to the same behaviour. (f) and (g) build the diagonal `threads', (e) produces the fabric-like core and (h) introduces some random features that make (a) more realistic. All are run from random initial conditions for illustration purposes.} \end{figure} \subsection{ECA Rule 110 Decomposition} An analytic proof that ECA rule 110 can be composed out of the prime ECA rules (170, 15, 118) is as follows: \begin{figure}[ht!]
\centering Emulation of rule 110: \hspace{2cm} Emulation of rule 54: \includegraphics[width=6.0cm,angle=0]{CompRule110.png}\includegraphics[width=6.0cm,angle=0]{CompRule54.png} \caption{\label{rule110emulation}Prime rule composition of ECA rule 110 (original rule on top) and emulation by composition of rules 15, 118 and 170 (middle), and of ECA rule 54 (same arrangement, but composition of rules 15, 108 and 170) after coarse-graining the stereographic version of the emulated rule (bottom).} \end{figure} \begin{mytheorem} $\textrm{rule 110 } = \textrm{rule 51 } \circ \textrm{rule 118} = (\textrm{rule 170 } \circ \textrm{rule 15}) \circ \textrm{rule 118}$ \end{mytheorem} The ECA rulespace clearly does not form a group because it is not closed under composition; no clear identity rule was found, but identity candidates formed of prime rules were found. In particular, rules 15 and 170 are wild cards as prime rules able to perform bit shifts, and are used in almost every composition, as they have the ability to target and shift a rule's bits. \begin{proof} First, we show that $\textrm{rule 51} = \textrm{rule 170} \circ \textrm{rule 15}$: \bigskip \noindent (a): Given are the rules $\textrm{170}$: $(p,q,r) \mapsto r$ and $\textrm{15}$: $(p,q,r) \mapsto \neg p$ \begin{myenumerate} \item Applying $\textrm{170}$: $(p,q,r) \mapsto r$ to the lattice shifts the lattice by one cell: the new $p$ takes the old value of $q$ and the new $q$ takes the old value of $r$. \item Applying now rule $\textrm{15}$: $(p,q,r) \mapsto \neg p$ we get: new value $= \neg p_{\mathrm{new}} = \neg q_{\mathrm{old}}$ (with 1.), which is $\textrm{rule 51}$: $(p,q,r) \mapsto \neg q$. \end{myenumerate} Then, we show that $\textrm{rule 110} = \textrm{rule 51} \circ \textrm{rule 118}$: \bigskip \noindent (b): Given are rule 51: $(p,q,r) \mapsto \neg q$ and rule 118: $(p,q,r) \mapsto (p \vee q \vee r) \veebar (q \wedge r)$.
\begin{myenumerate} \item Applying rule 51: $(p,q,r) \mapsto \neg q$ to the lattice negates the lattice. \item Applying $(p,q,r) \mapsto (p \vee q \vee r) \veebar (q \wedge r)$ (rule 118) we get: $(\neg p \vee \neg q \vee \neg r) \veebar (\neg q \wedge \neg r)$ \item By De Morgan's law: $\neg (p \wedge q \wedge r) \veebar \neg(q \vee r)$ \item Expanding the xor: $(\neg (p \wedge q \wedge r) \wedge (q \vee r)) \vee ((p \wedge q \wedge r) \wedge \neg (q \vee r))$ \item =$((\neg p \vee \neg q \vee \neg r) \wedge (q \vee r)) \vee ((p \wedge q \wedge r) \wedge (\neg q \wedge \neg r))$ \item =$((\neg p \vee \neg q \vee \neg r) \wedge (q \vee r)) \vee ((p \wedge q \wedge r \wedge \neg q \wedge \neg r))$ \item =$((\neg p \vee \neg q \vee \neg r) \wedge (q \vee r)) \vee ((p \wedge r \wedge \neg r))$ \item =$((\neg p \vee \neg q \vee \neg r) \vee (p \wedge r \wedge \neg r) ) \wedge ((q \vee r) \vee (p \wedge r \wedge \neg r))$ \item =$((\neg p \vee \neg q \vee \neg r \vee p) \wedge (\neg p \vee \neg q \vee \neg r \vee r) \wedge (\neg p \vee \neg q \vee \neg r \vee \neg r)) \wedge ((q \vee r) \vee (p \wedge r \wedge \neg r))$ \item =$(1 \wedge 1 \wedge (\neg p \vee \neg q \vee \neg r \vee \neg r)) \wedge ((q \vee r) \vee (p \wedge r \wedge \neg r))$ \item =$(\neg p \vee \neg q \vee \neg r) \wedge ((q \vee r) \vee (p \wedge r \wedge \neg r))$ \label{itm:interim} \item $((q \vee r) \vee (p \wedge r \wedge \neg r))$ can be expanded as: \item $(q \vee r \vee p) \wedge (q \vee r \vee r) \wedge (q \vee r \vee \neg r)$ \item =$(q \vee r \vee p) \wedge (q \vee r) \wedge 1$ \item =$(q \vee r) \wedge ((q \vee r) \vee p)$ \item =$(q \vee r)$ \item Substituting in (\ref{itm:interim}) one gets: $(\neg p \vee \neg q \vee \neg r) \wedge (q \vee r)$ \label{itm:interim1} \item Starting now from rule 110: $(p,q,r) \mapsto (q \wedge \neg p) \vee (q \veebar r)$ by applying the definition of {\it xor}: \item =$(q \wedge \neg p) \vee (q \wedge \neg r) \vee (\neg q \wedge r)$ \item =$(q
\wedge \neg p \vee q) \wedge (q \wedge \neg p \vee \neg r) \vee (\neg q \wedge r)$ \item =$(((q \wedge \neg p \vee q) \wedge (q \wedge \neg p \vee \neg r)) \vee \neg q) \wedge (((q \wedge \neg p \vee q) \wedge (q \wedge \neg p \vee \neg r)) \vee r)$ \label{itm:interim2} \item The first part of (\ref{itm:interim2}), $(((q \wedge \neg p \vee q) \wedge (q \wedge \neg p \vee \neg r)) \vee \neg q)$, can be expanded as: \item =$((q \wedge \neg p) \vee q \vee \neg q) \wedge ((q \wedge \neg p) \vee \neg r \vee \neg q)$ \item =$1 \wedge ((q \wedge \neg p) \vee \neg r \vee \neg q)$ \item =$((q \wedge \neg p) \vee \neg r \vee \neg q)$ \item =$((q \vee \neg r) \wedge (\neg p \vee \neg r)) \vee \neg q$ \item =$(q \vee \neg r \vee \neg q) \wedge (\neg p \vee \neg r \vee \neg q)$ \item =$1 \wedge (\neg p \vee \neg r \vee \neg q)$ \item =$(\neg p \vee \neg r \vee \neg q)$ \label{itm:interim3} \item The second part of (\ref{itm:interim2}), $(((q \wedge \neg p \vee q) \wedge (q \wedge \neg p \vee \neg r)) \vee r)$, can be expanded as: \item =$((q \wedge \neg p \vee q \vee r) \wedge (q \wedge \neg p \vee \neg r \vee r))$ \item =$((q \wedge \neg p \vee q \vee r) \wedge 1)$ \item =$((q \wedge \neg p) \vee q \vee r)$ \item =$(q \vee q \vee r) \wedge (\neg p \vee q \vee r)$ \item =$(q \vee r) \wedge ((q \vee r) \vee \neg p)$ \item =$(q \vee r)$ \label{itm:interim4} \item Substituting (\ref{itm:interim3}) and (\ref{itm:interim4}) in (\ref{itm:interim2}) one gets: $(\neg p \vee \neg q \vee \neg r) \wedge (q \vee r)$. \label{itm:interim5} \item Since (\ref{itm:interim5}) = (\ref{itm:interim1}),
this shows $\textrm{rule 110} = \textrm{rule 51} \circ \textrm{rule 118}$. \end{myenumerate} \end{proof} \begin{figure} \centering \begin{tabular}{cc} \label{fig_map_4color_110a}\includegraphics[height=50mm, width=50mm]{CA_50_37_4color_mapping.png} & \label{fig_map_4color_110b}\includegraphics[height=50mm, width=50mm]{CA_110_4color_gray.png} \\ (a) Rule mapping of 4-colour rule & (c) Space-time of 4-colour rule\\ 170 $\circ$ 15 $\circ$ 118 & 170 $\circ$ 15 $\circ$ 118 in gray scale. \\[6pt] \label{fig_rcomp_110_2colorc}\includegraphics[height=50mm, width=50mm]{CA_110_4color_4color.png} & \label{fig_rcomp_110d}\includegraphics[height=50mm, width=50mm]{CA_110_2color.png} \\ (b) Space-time of 4-colour rule & (d) Space-time of rule \\ 170 $\circ$ 15 $\circ$ 118. & 170 $\circ$ 15 $\circ$ 118.\\[6pt] \end{tabular} \caption{\label{fig_4color_map_110} 4-colour rule equivalent to rule 170 $\circ$ 15 $\circ$ 118. Depicted are examples of space-time evolutions starting from random initial conditions. (a) The rule icon is minimally separable and therefore the resulting composition is in the 4-colour CA rulespace. (b) Emulation of the 4-colour equivalent of rule 170 $\circ$ 15 $\circ$ 118 with colour mapping $\Box \rightarrow \Box \Box$, $\blacksquare \rightarrow \blacksquare \blacksquare$, $\mathcolour{red}{\blacksquare} \rightarrow \Box \blacksquare$ and $\mathcolour{green}{\blacksquare} \rightarrow \blacksquare \Box $. (c) Emulation of the 4-colour equivalent of rule 170 $\circ$ 15 $\circ$ 118 with colour re-mapping $\Box \rightarrow \Box$, $\blacksquare \rightarrow \blacksquare$, $\mathcolour{red}{\blacksquare} \rightarrow \mathcolour{gray}{\blacksquare}$ and $\mathcolour{green}{\blacksquare} \rightarrow \mathcolour{gray}{\blacksquare}$.
The non-ECA 4-colour Turing-universal cellular automaton that simulates the space-time of the composition of ECA rules 170 $\circ$ 15 $\circ$ 118, that is, the Turing-universal ECA rule 110 after coarse-graining.} \end{figure} In a similar fashion, one can prove that the other Wolfram class 4 ECA rules 41, 54 and 106 are also composed of simpler prime rules. \subsection{Multicolour CA Emulating ECA 110} In order to find the CA in a higher rulespace that implements the Boolean composition of ECA rules emulating ECA rule 110, let us consider a block transformation of the form $\Box \Box \rightarrow \Box$, $\blacksquare \blacksquare \rightarrow \blacksquare$, $\Box \blacksquare \rightarrow \mathcolour{red}{\blacksquare}$ and $\blacksquare \Box \rightarrow \mathcolour{green}{\blacksquare}$, which maps all combinations of the 2-colour pairs to the 4 colours of the larger rulespace. In order to see whether the rule icon of a CA generated by ECA rule composition is separable, one executes the following steps: \begin{enumerate} \item Choose the de Bruijn sequence for the alphabet ${S_{A}}=\{{0,1,2,3}\}=\{{\Box,\blacksquare, \mathcolour{red}{\blacksquare},\mathcolour{green}{\blacksquare}}\}$ and sub-sequences of length $n=3$ as the initial condition. \item Create 3-tuples representing the 4-colour rule tuples with range $r=1$. \item Apply the transformation $\Box \rightarrow \Box \Box$, $\blacksquare \rightarrow \blacksquare \blacksquare$, $\mathcolour{red}{\blacksquare} \rightarrow \Box \blacksquare$ and $\mathcolour{green}{\blacksquare} \rightarrow \blacksquare \Box $ to each tuple. \item Let the CA evolve each tuple 1 step for the chosen rule composition. \item Apply the back transformation $\Box \Box \rightarrow \Box$, $\blacksquare \blacksquare \rightarrow \blacksquare$, $\Box \blacksquare \rightarrow \mathcolour{red}{\blacksquare}$ and $\blacksquare \Box \rightarrow \mathcolour{green}{\blacksquare}$ to each resulting output tuple.
\item Identify the middle cell for each input tuple and pair it with the corresponding output tuple. \item Create a graph out of the resulting pairs. \end{enumerate} Fig.~\ref{fig_4color_map_110} illustrates the space-time evolution of this finer-grained CA capable of emulating ECA rule 110 after coarse-graining. The rule icon network for 170 $\circ$ 15 $\circ$ 118 is separable in the 4-colour CA space and is thus a native CA belonging to the 4-colour nearest-neighbour rulespace, the smallest rulespace in which such an automaton can exist. Fig.~\ref{fig_map_4colour_50_37} in the Supplementary Material illustrates another interesting example of causal decomposition. \section{Conclusions} We have introduced a notion of \textit{prime} and \textit{composite} rules that has allowed us to approach ECA from a group-theoretic point of view, asking which sets of rules can, under composition, emulate all other Elementary Cellular Automata, and suggesting minimal generating sets. While it was known that the set is not closed under Boolean composition and that the emulations are not commutative, an exhaustive exploration of the emulating minimal sets had not been undertaken. The exploration allowed us to find some interesting emulations, including some Turing-universal compositions by emulation of ECA rule 110. We found that two different sampling algorithms starting from different seeds converge, providing evidence that the smallest generating sets are close to 38 elements, if not exactly 38, and suggesting that simple rules are the building blocks of more complex rules in minimal generating sets. We have found features that appear to be essential for computation towards universality, making a cellular automaton capable of emulating another (universal) cellular automaton such as rule 110---and other complex rules---namely, the way in which these rules need to be composed with rules capable of transferring information horizontally at different rates.
The new universality result in ECA is a composition of 2 and 3 rules, but the actual Turing-universal cellular automaton is in a higher rulespace; its rule has been given in detail and is capable of emulating ECA rule 110 under coarse-graining. We have introduced novel tools, concepts and methods to explore computation by algebraic/Boolean rule composition, and methods for causal composition and decomposition. Our work suggests that novel model-based approaches to studying the computational behaviour of computer programs can shed light on fundamental computational and causal processes underlying computing systems and provide a set of powerful tools to study general systems from a computational/informational perspective.
\section{Introduction} Sampling problems are ubiquitous in a wide range of scientific and engineering disciplines and have received significant attention in Machine Learning and Statistics. In a typical sampling problem, one wants to generate samples from a given target distribution $\pi(x) \propto e^{-U(x)}$, where one is given access to a function $U: \mathbb{R}^d \rightarrow \mathbb{R}$ and possibly its gradient ${\nabla U}$. In many situations, such as when $d$ is large, sampling problems are computationally difficult, and Markov chain Monte Carlo (MCMC) algorithms are used. MCMC algorithms generate samples by running a Markov chain which converges to the target distribution $\pi$. % Unfortunately, many MCMC algorithms work by taking independent steps of short size $\eta$, meaning that they typically only travel a distance roughly proportional to $\sqrt{i} \times \eta$ in $i$ steps, preventing the algorithm from quickly exploring the target distribution. % One MCMC algorithm that can take large steps is the Hamiltonian Monte Carlo (HMC) algorithm. Each step of the HMC Markov chain involves simulating the trajectory of a particle in the ``potential well'' $U$, with the trajectory determined by Hamilton's equations from classical mechanics \cite{betancourt2017conceptual, neal2011mcmc}. To ensure randomization, the momentum is refreshed after each step by independently sampling from a multivariate Gaussian. % HMC is a natural approach to the sampling problem because Hamilton's equations preserve the target distribution $\pi$. % This convenient property reduces the need for frequent Metropolis corrections which slow down traditional MCMC algorithms, and allows HMC to take large steps. 
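As a concrete illustration of the scheme just described, an HMC step alternates a Gaussian momentum refreshment with a simulated Hamiltonian trajectory. The following is a minimal one-dimensional sketch, not the implementation analyzed in this paper; it uses a leapfrog integrator (introduced later in the text), a standard-Gaussian momentum, and our own names \texttt{eta} (step size) and \texttt{L} (number of integrator steps):

```python
import random

def leapfrog(x, p, grad_U, eta, L):
    """Integrate Hamilton's equations for L leapfrog steps of size eta (1-D)."""
    p = p - 0.5 * eta * grad_U(x)        # initial half-step for momentum
    for i in range(L):
        x = x + eta * p                  # full step for position
        if i < L - 1:
            p = p - eta * grad_U(x)      # full step for momentum
    p = p - 0.5 * eta * grad_U(x)        # final half-step for momentum
    return x, p

def hmc_step(x, grad_U, eta, L):
    """One unadjusted HMC step: refresh momentum from N(0, 1), then integrate."""
    p = random.gauss(0.0, 1.0)
    x_new, _ = leapfrog(x, p, grad_U, eta, L)
    return x_new

# Example: standard Gaussian target, U(x) = x^2 / 2, so grad U(x) = x.
grad_U = lambda x: x
x0, p0 = 1.0, 0.5
H0 = 0.5 * p0 ** 2 + 0.5 * x0 ** 2       # Hamiltonian before the trajectory
x1, p1 = leapfrog(x0, p0, grad_U, eta=0.01, L=100)
H1 = 0.5 * p1 ** 2 + 0.5 * x1 ** 2
print(abs(H1 - H0) < 1e-3)               # True: leapfrog nearly conserves H
```

The near-conservation of the Hamiltonian along the trajectory is what permits the large, informed steps discussed above; the Metropolis-adjusted variant would additionally accept or reject each proposal with probability depending on the residual energy error.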
HMC was first discovered by physicists \cite{duane1987hybrid}, was adopted soon afterwards with much success in Bayesian Statistics and Machine learning \citep{neal1992bayesian, neal1996bayesian}, and is currently the main algorithm used in the popular software package \emph{Stan} \citep{carpenter2016stan}. Despite its popularity and the widespread belief that HMC is faster than its competitor algorithms in a wide range of high-dimensional sampling problems \citep{creutz1988global, HMC_optimal_tuning, neal2011mcmc, betancourt2014optimizing}, its theoretical properties are not as well understood as those of its older competitor MCMC algorithms, such as the random walk Metropolis \cite{mattingly2012diffusion} or Langevin \cite{durmus2016sampling, durmus2016sampling2, dalalyan2017theoretical} algorithms. The lack of theoretical results makes it more difficult to tune the parameters of HMC, and prevents us from having a good understanding of when HMC is faster than its competitor algorithms. % Several recent papers have begun to bridge this gap, showing that HMC is geometrically ergodic for a large class of problems \citep{livingstone2016geometric, durmus2017convergence} and proving quantitative bounds for the convergence rate of an idealized version of HMC on Gaussian target distributions \citep{seiler2014positive}. % Building on probabilistic coupling techniques developed in \citep{seiler2014positive}, \cite{durmus2016sampling} and \citep{dalalyan2017theoretical}, \cite{mangoubi2017rapid} later proved a bound of $O^{*}(d^{\frac{1}{2}})$ gradient evaluations for a first-order implementation of HMC when $U$ is $m$-strongly convex with $M$-Lipschitz gradient (here the $O^{*}$ notation only includes dependence on $d$ and excludes regularity parameters such as $M,m$, accuracy parameters, and polylogarithmic factors of $d$). When the dimension is large, computing $O^{*}(d^{\frac{1}{2}})$ gradient evaluations can be prohibitively slow.
For this reason, in practice it is much more common to use the second-order ``leapfrog'' implementation of HMC, which is conjectured to require $O^{*}(d^{\frac{1}{4}})$ gradient evaluations based on previous simulations \citep{duane1987hybrid} and asymptotic ``optimal scaling'' results \cite{kennedy1991acceptances, pillai2012optimal}. % Very recently, \cite{mangoubi2017rapid} made some progress towards this conjecture by proving that $O^{*}(d^{\frac{1}{4}})$ gradient evaluations suffice in the special case where $U$ is separable into orthogonal $O(1)$-dimensional strongly convex components satisfying Lipschitz gradient, Lipschitz Hessian and fourth-order regularity conditions. \paragraph{Our contributions.} We introduce a new, and much weaker, regularity condition that allows us to show that, in many cases, HMC requires at most $O^{*}(d^\frac{1}{4})$ gradient evaluations. Roughly, our regularity condition allows the Hessian to change quickly in ``bad'' directions associated with the data, while at the same time guaranteeing that the Hessian changes slowly in the directions traveled by the HMC chain with high probability (Assumption \ref{assumption:M3}). % The fact that our regularity condition need not hold for the ``worst-case'' directions allows us to show the desired bounds on the number of gradient evaluations for a much larger class of distributions than would be possible with more conventional regularity conditions such as the Lipschitz Hessian property. Under our regularity condition we show bounds of $O^{*}(d^{\frac{1}{4}})$ gradient evaluations for the leapfrog implementation of HMC when sampling from a large class of strongly log-concave target distributions (Theorem \ref{thm:summary}). Next, we show that our regularity condition is satisfied by posterior distributions used in Bayesian logistic ``ridge'' regression.
Computing these posterior distributions is important in statistics and Machine learning applications \citep{polson2013bayesian, genkin2007large, madigan2005bayesian, zhang2001text, hoffman2014no}, and quantitative convergence bounds give insight into which MCMC algorithm to use for a given application, and how to optimally tune the algorithm's parameters. % Finally, we perform simulations to evaluate the performance of the HMC algorithm analyzed in this paper, and show that its performance is competitive in both accuracy and speed with the Metropolis-adjusted version of HMC despite the lack of a Metropolis filter, when performing Bayesian logistic regression on synthetic data. \paragraph{Related work.} {\em Hamiltonian Monte Carlo.} The earliest theoretical analyses of HMC were the asymptotic ``optimal scaling'' results of \cite{kennedy1991acceptances}, for the special case when the target distribution is a multivariate Gaussian. Specifically, they showed that the Metropolis-adjusted implementation of HMC with a leapfrog integrator requires a numerical stepsize of $O^{*}(d^{-\frac{1}{4}})$ to maintain an $\Omega(1)$ Metropolis acceptance probability in the limit as the dimension $d \rightarrow \infty$. They then showed that for this choice of numerical stepsize the number of numerical steps HMC requires to obtain samples from Gaussian targets with a small autocorrelation is $O^{*}(d^{\frac{1}{4}})$ in the large-$d$ limit. More recently, \cite{pillai2012optimal} have extended their asymptotic analysis of the acceptance probability to more general classes of separable distributions. The earliest non-asymptotic analysis of an HMC Markov chain was provided in \citep{seiler2014positive} for an idealized version of HMC based on continuous Hamiltonian dynamics, in the special case of Gaussian target distributions.
\cite{mangoubi2017rapid} show that idealized HMC can sample from general $m$-strongly logconcave target distributions with $M$-Lipschitz gradient in $\tilde{O}(\kappa^2)$ steps, where $\kappa:= \frac{M}{m}$ (see also \cite{bou2018coupling} for more recent work on idealized HMC). They also show that an unadjusted implementation of HMC with first-order discretization can sample with Wasserstein error $\epsilon>0$ in $\tilde{O}(d^{\frac{1}{2}} \kappa^{6.5} \epsilon^{-1})$ gradient evaluations. In addition, they show that a second-order discretization of HMC can sample from separable target distributions in $\tilde{O}(d^{\frac{1}{4}} \epsilon^{-1} f(m,M,B))$ gradient evaluations, where $f$ is an unknown (non-polynomial) function of $m,M,B$, if the operator norms of the first four Fr{\'e}chet derivatives of the restriction of $U$ to the coordinate directions are bounded by $B$. \cite{lee2017convergence} use the conductance method to show that an idealized version of the Riemannian variant of HMC (RHMC) has mixing time with total variation (TV) error $\epsilon>0$ of roughly $\tilde{O}(\frac{1}{\psi^2 T^2} R \log(\frac{1}{\epsilon}))$, for any $0\leq T \leq d^{-\frac{1}{4}}$, where $R$ is a regularity parameter for $U$ and $\psi$ is an isoperimetric constant for $\pi$. {\em Langevin Algorithms.} % \cite{durmus2016sampling2} show that the unadjusted Langevin algorithm (ULA) can generate a sample from $\pi$ with TV error $\epsilon>0$ in $\tilde{O}(d \kappa^2 \epsilon^{-2})$ gradient evaluations. Using optimization-based techniques from \cite{dalalyan2017further}, \cite{cheng2017convergence, durmus2018analysis} show bounds for ULA in KL divergence. \cite{cheng2017underdamped} show that \textit{underdamped} Langevin requires $\tilde{O}(d^{\frac{1}{2}} \kappa^2 \epsilon^{-1})$ gradient evaluations for Wasserstein error $\epsilon>0$ (see also \cite{cheng2018sharp}). 
\cite{dwivedi2018log} show that the Metropolis-adjusted Langevin algorithm (MALA) requires $\tilde{O}(\max( d \kappa, \, \, d^{\frac{1}{2}} \kappa^{1.5}) \log(\frac{1}{\epsilon}))$ gradient evaluations from a warm start in the TV metric. {\em Hit and run, ball walk, Random walk Metropolis (RWM).} The Hit-and-run, ball walk, and RWM algorithms are all thought to have a step size of roughly $\Theta(1)$ on sufficiently regular target distributions \citep{roberts1997weak}. Therefore, since most of the probability of a standard spherical Gaussian lies in a ball of radius $\sqrt{d}$, one would expect all three of these algorithms to take roughly $(\sqrt{d})^2 = d$ steps to explore a sufficiently regular target distribution. One should then be able to apply results such as \citep{lovasz2003hit} to show that, from a warm start, RWM requires $\tilde{O}(d \kappa \log(\frac{1}{\epsilon}))$ target function evaluations to sample from the target distribution with TV error $\epsilon$. Interestingly, the only result \citep{dwivedi2018log} we are aware of specialized for the strongly log-concave case gives a bound of $d^2 \kappa^2 \log(\frac{1}{\epsilon})$ target function evaluations for RWM. \paragraph{Organization of the rest of the paper.} In Section \ref{sec:Algorithm} we go over Hamilton's equations and the unadjusted HMC algorithm with second-order leapfrog integrator. In Section \ref{sec:regularity} we go over the regularity assumptions we make on the target distribution. % In Section \ref{sec:results} we state gradient evaluation bounds (Theorem \ref{thm:summary}) that we obtain for the HMC algorithm under these regularity assumptions. Section \ref{sec:proof} is a technical overview of the proof of Theorem \ref{thm:summary}. % In Sections \ref{appendix:preliminaries} and \ref{appendix:toy}, we go over in detail preliminary theorems and definitions used to prove our gradient evaluation bounds. 
% In Sections \ref{section:Lyapunov}, \ref{appendix:SecondOrderEuler}, \ref{section:main}, and \ref{appendix:Warm} we establish various lemmas and use them to prove our gradient evaluation bounds. % Section \ref{sec:simulations} gives results of simulations we performed to evaluate the accuracy and autocorrelation time of the unadjusted HMC algorithm studied in this paper (Section \ref{sec:simulations_accuracy}), as well as simulations that investigate to what extent our regularity assumptions hold for target distributions used in practical applications (Section \ref{sec:simulations_Lipschitz}). In Section \ref{appendix:logit} we state and prove the lemmas required to apply our gradient evaluation bounds to target distributions used in Bayesian logistic regression. In Section \ref{sec:Conclusion} we discuss conclusions and open problems. \section{Hamilton's equations and the Hamiltonian Monte Carlo algorithm} \label{sec:Algorithm} In this section we present the necessary background and the Hamiltonian Monte Carlo algorithm; see \cite{neal2011mcmc, betancourt2017conceptual} for a thorough treatment of this topic.
\noindent {\bf Hamiltonian Dynamics.} A Hamiltonian of a simple system in $\mathbb{R}^d$ is $$\mathcal{H}(q,p) = U(q) + \frac{1}{2}\|p\|_2^{2},$$ where $q\in \mathbb{R}^d$ represents the ``position'' of a particle in this system, $p \in \mathbb{R}^d$ the ``momentum,'' $U$ the ``potential energy,'' and $\frac{1}{2}\|p\|_2^{2}$ the ``kinetic energy.'' For fixed $\mathbf{q}, \mathbf{p} \in \mathbb{R}^d$, we denote by $\{q_t(\mathbf{q},\mathbf{p})\}_{t \geq 0}$, $\{ p_t(\mathbf{q},\mathbf{p}) \}_{t \geq 0}$ the solutions to Hamilton's equations: \begin{equation} \frac{\mathrm{d}q_t(\mathbf{q},\mathbf{p})}{\mathrm{d}t} = p_t(\mathbf{q},\mathbf{p}) \qquad \textrm{and} \qquad \frac{\mathrm{d}p_t(\mathbf{q},\mathbf{p})}{\mathrm{d}t} = -{\nabla U}(q_t(\mathbf{q},\mathbf{p})), \end{equation} with initial conditions $q_0(\mathbf{q},\mathbf{p}) = \mathbf{q}$ and $p_0(\mathbf{q},\mathbf{p}) = \mathbf{p}$. When the initial conditions $(\mathbf{q},\mathbf{p})$ are clear from the context, we write $q_t$, $p_t$ in place of $q_t(\mathbf{q},\mathbf{p})$ and $p_t(\mathbf{q},\mathbf{p})$. The term $-{\nabla U}$ in the second of Hamilton's equations is thought of as a ``force" which acts on the particle. \noindent {\bf HMC.} We first consider an idealized version of the HMC Markov chain $X_0, X_1, \ldots$ based on the continuous Hamiltonian dynamics, with update rule $X_{i+1} = q_T(X_i, \mathbf{p}_i)$, where $\mathbf{p}_1, \mathbf{p}_2, \ldots \sim N(0,I_d)$ are iid. Since solutions to Hamilton's equations have invariant distribution $\propto e^{-\mathcal{H}(q,p)} = e^{-U(q)}e^{-\frac{1}{2}\|p\|_2^{2}}$, idealized HMC has stationary distribution $\pi(q) \propto e^{-U(q)}$ equal to the target distribution, without needing a correction such as Metropolis adjustment. This allows HMC to take much larger steps, and hence mix faster, than would otherwise be possible.
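As a concrete illustration of these dynamics, consider the quadratic potential $U(q) = q^2/2$ in one dimension, an assumption made purely for illustration, since Hamilton's equations are then exactly solvable (the flow is a rotation in phase space). The sketch below checks that the Hamiltonian is conserved along the exact flow, and that the idealized chain $X_{i+1} = q_T(X_i, \mathbf{p}_i)$ has the target $N(0,1)$ as its stationary distribution with no Metropolis correction:

```python
import math
import random

def exact_flow(q, p, t):
    # For U(q) = q^2/2 (so grad U(q) = q), Hamilton's equations
    # dq/dt = p, dp/dt = -q have the closed-form solution below:
    # a rotation of (q, p) in phase space.
    return q * math.cos(t) + p * math.sin(t), p * math.cos(t) - q * math.sin(t)

def hamiltonian(q, p):
    return 0.5 * q * q + 0.5 * p * p

# The Hamiltonian H(q_t, p_t) is conserved exactly along the trajectory.
q0, p0 = 1.3, -0.7
H0 = hamiltonian(q0, p0)
energy_drift = max(
    abs(hamiltonian(*exact_flow(q0, p0, 0.01 * k)) - H0) for k in range(1000)
)

# Idealized HMC: X_{i+1} = q_T(X_i, p_i) with fresh momenta p_i ~ N(0, 1).
# Its stationary distribution is exactly pi(q) proportional to e^{-U(q)},
# i.e. N(0, 1), with no Metropolis filter needed.
random.seed(0)
T, X = 1.0, 0.0
samples = []
for _ in range(200000):
    X, _ = exact_flow(X, random.gauss(0.0, 1.0), T)
    samples.append(X)
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
```

For non-quadratic potentials the flow has no closed form, which is what motivates the numerical integrator discussed next.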
It is not possible to implement an HMC Markov chain with continuous trajectories, so one must discretize these trajectories using a numerical integrator, such as the popular second-order leapfrog integrator (Step 5 in the algorithm below). In this case, one obtains the following \textit{unadjusted} HMC (UHMC) Markov chain. The number of gradient evaluations required by UHMC is the main object of study in this paper. \begin{algorithm}[H] \caption{Unadjusted HMC \label{alg:Unadjusted}} \textbf{input:} Initial point $X_0^\dagger \in \mathbb{R}^d$, oracle for gradient ${\nabla U}$, $T>0$, $i_{\max} \in \mathbb{N}$, discretization level $\eta>0$\\ \textbf{output:} Samples $X_0^\dagger, \ldots, X_{i_{\max}}^\dagger$ from the (following) UHMC Markov chain % \begin{algorithmic}[1] % \For{$i=0$ to $i_{\mathrm{max}}-1$} \\Sample $\mathbf{p}_i \sim N(0,I_d)$ \\Set $\mathrm{q}_0 = X_i^\dagger$ and $\mathrm{p}_0 = \mathbf{p}_i$ \For{$j=0$ to $\lfloor \frac{T}{\eta} \rfloor-1$}\\ Set $\mathrm{q}_{j+1} = \mathrm{q}_j+\eta \mathrm{p}_j -\frac{1}{2} \eta^2 {\nabla U}(\mathrm{q}_j), \qquad \mathrm{p}_{j+1} = \mathrm{p}_j - \frac{1}{2}\eta {\nabla U}(\mathrm{q}_j) - \frac{1}{2}\eta {\nabla U}\left(\mathrm{q}_{j+1}\right)$ \EndFor \\Set $X_{i+1}^\dagger =\mathrm{q}_{\lfloor \frac{T}{\eta} \rfloor}$ \EndFor \end{algorithmic} \end{algorithm} \noindent \textbf{Initialization:} In this paper we prove gradient evaluation bounds for UHMC from both a \textit{warm start} and \textit{cold start}, which we define as follows: \begin{definition}(Warm start) Let $X_0, X_1,\ldots $ be a Markov chain, and let $\pi$ be our target distribution. We say that $X$ has an $(\omega,\hat{\delta})$-warm start if there is a random variable $\tilde{Y}_0\sim \pi$ such that $\|X_0-\tilde{Y}_0\|_2 < \omega$ with probability $1-\hat{\delta}$ for some $\omega, \hat{\delta}>0$. 
\end{definition} \begin{definition}(Cold start) We say that $X$ has a cold start if $X_0= x^\star$, with $x^\star:= \mathrm{argmin}_{x\in \mathbb{R}^d} U(x)$. \end{definition} Since UHMC requires $\Theta(\frac{T}{\eta})$ gradient evaluations to compute each Markov chain step $i$, the total number of gradient evaluations required by UHMC is $\Theta(i_{\max} \times \frac{T}{\eta})$. Note that the parameters $i_{\max}$, $T$, $\eta$ are chosen by the user, and the optimal choice of these algorithm parameters may depend on the dimension $d$ and the regularity parameters of $U$ such as $M$ and $m$. \begin{remark} The number of arithmetic operations required to compute the gradient depends on how the function $U$ is provided to us in a given application. In the Bayesian logistic regression application analyzed at the end of Section \ref{sec:results} of this paper, the number of arithmetic operations required to compute the gradient $\nabla U$ is $\Theta(d)$, and is the same number of operations required to evaluate the target function $U$ itself. However, in other applications it can take $2d$ times as many arithmetic operations to compute the gradient $\nabla U$ as it takes to compute the target function $U$. \end{remark} \section{Regularity conditions} \label{sec:regularity} In this section we explain the $\sqrt{d}$ gradient evaluation bound barrier in prior approaches and present our regularity condition that overcomes it. Let $H_x$ denote the Hessian of $U$ at $x\in \mathbb{R}^d$. We start by noting that if one attempts to bound the number of gradient evaluations required by HMC using a conventional Lipschitz bound on the Hessian \begin{equation} \|(H_y-H_x)p\|_2 \leq L_2 \|y-x\|_2 \times \|p\|_2 \qquad \forall x,y, p \in \mathbb{R}^d \end{equation} that is defined with respect to the Euclidean norm, then the bounds that one obtains are no faster than $\sqrt{d}$ gradient evaluations. 
% The reason is that if we use the usual ``Euclidean" Lipschitz Hessian condition to bound the numerical error, we obtain an error bound of roughly $\sqrt{d}$, since (from a warm start) the trajectories of HMC travel with momentum roughly $N(0,I_d)$, implying that the momentum of these trajectories has Euclidean norm $\sqrt{d}$ with high probability (w.h.p.). % To bound the error of a second-order method such as the leapfrog method used by HMC, we must bound the change of the directional derivative of the gradient along the path taken by the trajectories of the Markov chain. % In particular, when the leapfrog integrator (step 5 of Algorithm \ref{alg:Unadjusted}) takes a numerical step from $\mathrm{q}_j$ to roughly $\mathrm{q}_{j+1} \approx \mathrm{q}_j+\eta \mathrm{p}_j$, one component of the error in computing the continuous Hamiltonian trajectory can be bounded by the quantity $\|(\eta^2 H_{\mathrm{q}_j+\eta \mathrm{p}_j}-\eta^2 H_{\mathrm{q}_j})\mathrm{p}_j\|_2$. % This quantity in turn can be bounded using the Lipschitz Hessian constant by $\eta^2 L_2 \|\eta \mathrm{p}_j\|_2\times \|\mathrm{p}_j\|_2$. Since $\mathrm{p}_j$ is roughly $N(0,I_d)$ we have $\|\mathrm{p}_j\|_2 \approx \sqrt{d}$ w.h.p., which gives an error bound of $\eta^3 L_2 d$ for one leapfrog step and roughly $\eta^2 L_2 d$ for the error of computing an entire HMC trajectory if $T= \Theta^*(1)$. % To obtain an error bound of $\epsilon$ we therefore need $\eta = O^*(\nicefrac{1}{\sqrt{d}})$. When computing a trajectory of length $\Theta^*(1)$ with this stepsize $\eta$, we therefore need to compute $O^*(\sqrt{d})$ numerical steps. % To overcome this $\sqrt{d}$ gradient evaluation barrier, we therefore need to control the change in the Hessian with respect to a norm which does not grow as quickly with the dimension as the Euclidean norm for a random $N(0,I_d)$ momentum vector. 
We need a better way to bound the quantity $\|(\eta^2 H_{\mathrm{q}_j+\eta \mathrm{p}_j}-\eta^2 H_{\mathrm{q}_j})\mathrm{p}_j\|_2$. One way to do so would be to replace the Euclidean Lipschitz Hessian condition with an infinity-norm Lipschitz condition of the form \begin{equs} \|(H_y - H_x) v \|_2 \leq L_\infty \times \|y-x\|_\infty \|v\|_\infty \end{equs} for some constant $L_\infty>0$. For this norm, $\|\mathrm{p}_j\|_\infty =O(\log(d))$ with high probability since, roughly, $\mathrm{p}_j \sim N(0,I_d)$, implying that $\|(\eta^2 H_{\mathrm{q}_j+\eta \mathrm{p}_j}-\eta^2 H_{\mathrm{q}_j})\mathrm{p}_j\|_2$ is bounded by roughly $\eta^2 L_\infty \log(d)$ rather than $\eta^2 L_2 d$. Since for many distributions of interest this condition does not hold for a small value of $L_\infty$, we generalize this condition, to obtain a smaller $L_\infty$ constant for a wider class of distributions. Towards this end, we define the vector (semi)-norm $\| \cdot \|_{\infty, \mathsf{u}}$ with respect to the collection of unit vectors $\mathsf{u} := \{u_1, \ldots, u_r\}$ by $\|x \|_{\infty, \mathsf{u}} := \max_{i \in \{1,\ldots, r\}} |u_i^\top x|$. The usual infinity norm is just a special case of this new norm if we set $u_i = e_i$ to be the coordinate vectors. Under this more general norm, the magnitude of a random $N(0,I_d)$ vector still grows only logarithmically with $d$, since each component $u_i^\top x$ is a univariate standard normal. The associated matrix norm $\|A \|_{\infty, \mathsf{u}}$ is defined to be $\sup_{\|x\|_{\infty, \mathsf{u}} \leq 1} \|Ax \|_2$. Using this norm, and motivated by the discussion above, we arrive at our new regularity condition. Roughly speaking, our new regularity condition allows the Hessian to change very quickly in $r>0$ ``bad" directions $u_1, \ldots, u_r$, as long as it does not change quickly on average in a random direction (Figure \ref{fig:logistic_Hessian}). 
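The point of the seminorm $\| \cdot \|_{\infty, \mathsf{u}}$ can be seen numerically: for a random $N(0,I_d)$ momentum vector, the Euclidean norm concentrates near $\sqrt{d}$, while $\| \cdot \|_{\infty, \mathsf{u}}$ stays logarithmically small. In the sketch below the $u_i$ are random unit vectors, a purely illustrative stand-in for the data-dependent ``bad" directions:

```python
import math
import random

def inf_u_norm(x, us):
    # ||x||_{inf,u} := max_i |u_i^T x| over the collection u = {u_1, ..., u_r}.
    return max(abs(sum(a * b for a, b in zip(u, x))) for u in us)

random.seed(0)
d, r = 2000, 100

# r random unit vectors standing in for the "bad" directions u_1, ..., u_r.
us = []
for _ in range(r):
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(a * a for a in v))
    us.append([a / norm for a in v])

# A random momentum vector p ~ N(0, I_d).
p = [random.gauss(0.0, 1.0) for _ in range(d)]

euclidean = math.sqrt(sum(a * a for a in p))  # concentrates near sqrt(d) ~ 44.7
seminorm = inf_u_norm(p, us)  # each u_i^T p is standard normal, so the max
                              # over r directions is only O(sqrt(log r))
```

With $u_i = e_i$ the same computation reproduces the usual infinity norm, which for a standard Gaussian vector grows like $\sqrt{2\log d}$ rather than $\sqrt{d}$.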
\begin{assumption}[\bf Infinity-norm Lipschitz condition] \label{assumption:M3} There exist $L_{\infty}>0$, $r\in \mathbb{N}$, and a collection of unit vectors with $\mathsf{u} = \{u_1, \ldots, u_r\} \subseteq \mathbb{S}^d$, such that for all $x, y \in \mathbb{R}^d$, we have $\|H_y - H_x\|_{\infty, \mathsf{u}} \leq L_{\infty} \sqrt{r} \|y-x\|_{\infty, \mathsf{u}}$. % \end{assumption} We expect this assumption to hold when the target function $U$ is of the form $U(x) = \sum_{i=1}^r f_i(u_i^\top x)$ for functions $f_i:\mathbb{R}\rightarrow \mathbb{R}$ with uniformly bounded third derivatives. In particular, this class includes the target functions used in logistic regression. This condition may also be of independent interest. \begin{figure}[h] \begin{center} \includegraphics[trim={0cm 7cm 0cm 7cm}, clip, scale=0.5]{Hessian_plot} \end{center} \caption{This is a contour plot of the largest eigenvalue $\|H_\theta\|_{\mathrm{op}}$, or ``operator norm", of the Hessian $H_\theta$ of $U(\theta)$ when the ``bad" directions correspond to the data vectors $\mathsf{X}_1 = (0,-1)^\top$, $\mathsf{X}_2 = (-1,0)^\top$, and $\mathsf{X}_3 =(1, 1)$ with $d=2$. At each value of $\theta$, $\|H_\theta\|_{\mathrm{op}}$ changes most quickly in the direction orthogonal to the contour lines. Notice that at most points $\theta$ the fastest change in $\|H_\theta\|_{\mathrm{op}}$ occurs in one of the directions $\mathsf{X}_1$, $\mathsf{X}_2$ and $\mathsf{X}_3$, and that the fastest change in this example occurs in the $\mathsf{X}_3$ direction. 
This phenomenon is amplified when the dimension $d$ is large, so that the change in the Hessian tends to be much larger in a few ``bad" directions than in a typical random direction.}\label{fig:logistic_Hessian} \end{figure} \paragraph{Additional conditions for cold starts.} When proving bounds from a cold start, roughly speaking we would still like to guarantee that the HMC trajectories travel with speed $O^*(1)$ in any of the ``bad" directions, so that $\|p_t \|_{\infty, \mathsf{u}} = O^*(1)$. However, unlike from a warm start, we have no guarantee that the momentum is roughly $N(0,I_d)$. To bound $\|p_t \|_{\infty, \mathsf{u}}$ we therefore need another way to control the growth of the quantity $|u_i^\top p_t|$ in each ``bad" direction $u_i$. % To do so, we would like to guarantee that the bounds on the ``force" acting on our Hamiltonian trajectory in each $u_i$ direction depend only on the components $u_i^\top q_t$ and $u_i^\top p_t$ of the position and momentum in that direction, regardless of the component of the momentum orthogonal to $u_i$. % Towards this end, we assume the following: \begin{assumption}[\bf Gaussian tail bound condition (for cold start only)] \label{assumption:tail} There exists a constant $b>0$, and a collection of unit vectors $\mathsf{u} = \{u_1, \ldots, u_r\} \subseteq \mathbb{S}^d$, such that $\min \{m u^\top (x-x^\star), M u^\top (x-x^\star) \} -b \leq u^\top {\nabla U}(x) \leq \max \{m u^\top (x-x^\star), M u^\top (x-x^\star) \}+b$ for all $x \in \mathbb{R}^d, \, \,u \in \mathsf{u}.$ \end{assumption} Assumption \ref{assumption:tail} guarantees that the component of the gradient in each ``bad" direction $u_i$ is bounded solely in terms of the component of the position in that same ``bad" direction. This allows us to apply arguments based on Gronwall's inequality on the projection of the trajectory in each bad direction $u_i$ in order to bound the magnitude of the position and momentum at time $t$ in the direction $u_i$.
Using Gronwall's inequality, we bound the component of the position and momentum in the direction $u_i$ (Lemmas \ref{lemma:upper_summary} to \ref{lemma:upper2_summary}, Section \ref{section:Lyapunov}), without assuming a warm start. \section{Theoretical results} \label{sec:results} Our main result is a bound on the number of gradient evaluations required by HMC with second-order leapfrog integrator under the infinity-norm Lipschitz condition (Assumption \ref{assumption:M3}), when sampling from $\pi(x) \propto e^{-U(x)}$ if $U$ is $m$-strongly convex with $M$-Lipschitz gradient. We bound the required number of gradient evaluations for both a warm and cold start. Here we focus on the warm start result; see Theorem \ref{thm:cold_formal} in Section \ref{section:main} for bounds from a cold start and Theorem \ref{thm:warm_formal} in Section \ref{appendix:Warm} for a more formal statement of our warm start result. \noindent \begin{theorem}[\bf Bounds for second-order HMC, informal] \label{thm:summary} Let $\pi(x) \propto e^{-U(x)}$ where $U:\mathbb{R}^d \rightarrow \mathbb{R}$ is $m$-strongly convex, $M$-gradient Lipschitz, and satisfies Assumption \ref{assumption:M3}. Then there exist parameters $T, \eta, i_{\mathrm{max}}$, such that from an $(\omega, \delta)$-warm start, Algorithm \ref{alg:Unadjusted} generates an approximate independent sample $X_{i_\mathrm{max}}^\dagger$ from $\pi$ such that $\|X_{i_\mathrm{max}}^\dagger-Y\|_2<\epsilon$ for some $Y\sim \pi$ independent of the initial point $X_0^\dagger$ with probability at least $1-\delta$. Moreover, UHMC requires at most $\tilde{O}(d^{\frac{1}{4}} \epsilon^{-\frac{1}{2}} \sqrt{L_{\infty}} \log^{\frac{1}{2}}(\frac{1}{\delta}))$ gradient evaluations whenever $m,M, \omega = O(1)$, $r=\tilde{O}(d)$ and $L_{\infty} = \Omega(1)$.
\end{theorem} More generally, for arbitrary $m,M$ and $r$, we show (from a warm start with $\omega =O(1)$) that the number of gradient evaluations is $\tilde{O}(\max\left(d^{\frac{1}{4}}\kappa^{2.75}, \, \, r^{\frac{1}{4}}\sqrt{\tilde{L}_{\infty}} \kappa^{2.25}\right)\epsilon^{-\frac{1}{2}} \log^{\frac{1}{2}}(\frac{1}{\delta}))$, where $\tilde{L}_{\infty}:= \frac{L_{\infty}}{\sqrt{M}}$ and $\kappa:= \frac{M}{m}$ is the ``condition number". From a cold start, under the additional Gaussian tail bound condition (Assumption \ref{assumption:tail}), we show that the number of gradient evaluations is $\tilde{O}(\max\left(d^{\frac{1}{4}}\kappa^{3.5}, \, \, r^{\frac{1}{4}}\sqrt{\tilde{L}_{\infty}} \left(\kappa^{4.25} + \tilde{b}\kappa^{3.25}\right)\right) \epsilon^{-\frac{1}{2}})$, where $\tilde{b}:= \frac{b}{\sqrt{M}}$. \footnote{To obtain our bounds from a warm start, we run UHMC with parameters $T= \frac{1}{6\sqrt{M \kappa}}$, $i_{\mathrm{max}}= \Theta(\frac{1}{mT^2})$, and $\eta = \Theta( \min \{ d^{-\frac{1}{4}} \kappa^{-1.25}, r^{-\frac{1}{4}} \tilde{L}_{\infty}^{-\frac{1}{2}}\kappa^{-0.75}\} \frac{\epsilon^{\frac{1}{2}}}{\sqrt{M}} \log^{-\frac{1}{2}}(\frac{1}{\delta}))$. To obtain our bounds from a cold start, we run UHMC with parameters $T= \frac{1}{6\sqrt{M \kappa}}$, $i_{\mathrm{max}}= \Theta(\frac{1}{mT^2})$, and $\eta = \Theta( \min \{ d^{-\frac{1}{4}} \kappa^{-2}, r^{-\frac{1}{4}} \tilde{L}_{\infty}^{-\frac{1}{2}}(\kappa^{2.75}+ \tilde{b} \kappa^{1.75})^{-1}\} \frac{\epsilon^{\frac{1}{2}}}{\sqrt{M}})$.} If $\kappa = O(1)$, $L_{\infty} =O(1)$ and $r=O(d)$, then our bound on the number of gradient evaluations is $O(d^{\frac{1}{4}} \epsilon ^{-\frac{1}{2}})$ from a warm start. To the best of our knowledge, our bounds are an improvement over all previous gradient evaluation bounds for sampling in this regime, which all have dimension dependence $\sqrt{d}$ or greater. 
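As a rough numerical illustration of the $d^{\frac{1}{4}}$ scaling, the sketch below evaluates the warm-start parameter choices from the footnote with every $\Theta(\cdot)$ constant set to $1$ (an arbitrary choice, for illustration only):

```python
import math

def warm_start_gradient_budget(d, eps, delta, M=1.0, m=1.0, L_tilde=1.0, r=None):
    # Warm-start parameter choices from the footnote, with all Theta(.)
    # constants set to 1 -- an illustrative sketch, not a tuned recipe.
    r = d if r is None else r
    kappa = M / m
    T = 1.0 / (6.0 * math.sqrt(M * kappa))
    i_max = 1.0 / (m * T * T)
    eta = (min(d ** -0.25 * kappa ** -1.25,
               r ** -0.25 * L_tilde ** -0.5 * kappa ** -0.75)
           * math.sqrt(eps) / (math.sqrt(M) * math.sqrt(math.log(1.0 / delta))))
    return i_max * (T / eta)  # total gradient evaluations ~ i_max * T / eta

g1 = warm_start_gradient_budget(d=10**4, eps=0.1, delta=0.01)
g16 = warm_start_gradient_budget(d=16 * 10**4, eps=0.1, delta=0.01)
ratio = g16 / g1  # 16x the dimension should only double the budget: 16^(1/4) = 2
```

With $\kappa = L_{\infty} = O(1)$ and $r = d$, both terms of the $\min$ scale as $d^{-\frac{1}{4}}$, so multiplying the dimension by $16$ doubles the gradient evaluation budget.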
% Also note that while \cite{mangoubi2017rapid} obtains $O^*(d^{\frac{1}{4}})$ bounds in the special case of product distributions, unlike in \cite{mangoubi2017rapid} the condition number dependence in our bounds is polynomial. We are especially interested in the regime where $d$ is large since the number of predictor variables in a statistical model is oftentimes very large \cite{johnson2013numerical, genkin2007large}, and in many cases $\kappa$ and $L_{\infty}$ do not grow, or only grow relatively slowly, with the dimension. We state some concrete examples from Bayesian logistic regression of regimes where our gradient evaluation bounds are an improvement on the previous best bounds in the discussion after Theorem \ref{thm:logit}. \paragraph{Applications to logistic regression.} In Bayesian logistic ``ridge" regression, one would like to sample from the target density $\pi(\theta) \propto e^{-U(\theta)}$ with negative log-density \begin{equation} \label{eq:logit} U(\theta) = \nicefrac{1}{2}\theta^\top \Sigma^{-1} \theta - \textstyle{\sum_{i=1}^r} \left[ \mathsf{Y}_i \log(F(\theta^\top \mathsf{X}_i)) + (1-\mathsf{Y}_i)\log(F(-\theta^\top \mathsf{X}_i)) \right], \end{equation} where the data vectors $\mathsf{X}_1,\ldots, \mathsf{X}_r \in \mathbb{R}^d$ are thought of as independent variables, the binary data $\mathsf{Y}_1,\ldots, \mathsf{Y}_r \in \{0,1\}$ are dependent variables, $F(s) :=(e^{-s}+1)^{-1}$ is the logistic function, and $\Sigma$ is positive definite. We define the incoherence of the data as \begin{equs} \mathsf{inc}(\mathsf{X}_1,\ldots, \mathsf{X}_r):= \max_{i\in[r]} \sum_{j=1}^r |\mathsf{X}_i^\top \mathsf{X}_j|. \end{equs} We bound the value of the infinity-norm Lipschitz constant in terms of the incoherence: % \begin{theorem}[\bf Regularity bounds for logistic regression] \label{thm:logit} Let $U$ be the logistic regression target for $r>0$ data vectors $\mathsf{X}_1,\ldots, \mathsf{X}_r$, and let $\mathsf{inc}(\mathsf{X}_1,\ldots,\mathsf{X}_r)\leq C$ for some $C>0$.
Then the infinity-norm Lipschitz assumption is satisfied with $L_\infty = \sqrt{C}$ and ``bad" directions $\mathsf{u} = \left \{\frac{\mathsf{X}_i}{\|\mathsf{X}_i\|_2}\right \}_{i=1}^r$. \end{theorem} The proof of Theorem \ref{thm:logit} is given in Section \ref{appendix:logit}. In particular, when the incoherence is $\tilde{O}(1)$, the constant $L_{\infty}$ does not grow with dimension: This includes the separable case when the $\mathsf{X}_i$ vectors are orthogonal and have unit magnitude. It also includes, for instance, the non-separable case where $r=d$ and the $\mathsf{X}_i$ are unit vectors with the first $\sqrt{d}$ of the $\mathsf{X}_i$ vectors isotropically distributed, and the angle between any two of the remaining vectors is greater than $\frac{\pi}{2} - \frac{1}{d}$. In both these examples the number of gradient evaluations required by UHMC under a standard normal prior is $\tilde{O}(d^{\frac{1}{4}} \epsilon^{-\frac{1}{2}})$, since $C$, $M$, $m^{-1}$, (and therefore $\kappa$ and $\tilde{L}_{\infty}$) are all $\tilde{O}(1)$. % When all $r=d$ vectors are isotropically distributed, we have $\sqrt{\tilde{L}_{\infty}} = \tilde{O}(d^{\frac{1}{8}})$ and require $\tilde{O}(d^{\frac{3}{8}} \epsilon^{-\frac{1}{2}})$ gradient evaluations. % In all these examples we therefore obtain an improvement over the existing $\tilde{O}(\sqrt{d}\epsilon^{-1})$ bounds of \cite{cheng2017underdamped, mangoubi2017rapid}. We also provide bounds for $m$ and $M$, showing that our Lipschitz gradient and strong convexity assumptions are satisfied for \begin{equs} M= \lambda_{\mathrm{max}}\left(\Sigma^{-1} + \sum_{k=1}^r \mathsf{X}_k \mathsf{X}_k^\top \right) \qquad \textrm{ and } \qquad m=\lambda_{\mathrm{min}}( \Sigma^{-1}), \end{equs} respectively (Lemma \ref{lemma:logistic2} in Section \ref{appendix:logit}). 
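The potential \eqref{eq:logit} with $\Sigma = \sigma^2 I$, its gradient (a short calculation gives $\nabla U(\theta) = \Sigma^{-1}\theta - \sum_{i=1}^r (\mathsf{Y}_i - F(\theta^\top \mathsf{X}_i))\mathsf{X}_i$), and the incoherence can be sketched as follows; the tiny data set is hypothetical, used only to check the gradient against finite differences and to verify that orthonormal data have incoherence $1$:

```python
import math

def incoherence(X):
    # inc(X_1, ..., X_r) := max_i sum_j |X_i^T X_j|
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    return max(sum(abs(dot(Xi, Xj)) for Xj in X) for Xi in X)

def U_logit(theta, X, Y, sigma2=1.0):
    # Negative log-posterior of Eq. (eq:logit) with Sigma = sigma2 * I
    # and F(s) = 1 / (1 + e^{-s}).
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    F = lambda s: 1.0 / (1.0 + math.exp(-s))
    val = dot(theta, theta) / (2.0 * sigma2)
    for Xi, Yi in zip(X, Y):
        s = dot(theta, Xi)
        val -= Yi * math.log(F(s)) + (1 - Yi) * math.log(F(-s))
    return val

def grad_U_logit(theta, X, Y, sigma2=1.0):
    # grad U(theta) = Sigma^{-1} theta - sum_i (Y_i - F(theta^T X_i)) X_i
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    F = lambda s: 1.0 / (1.0 + math.exp(-s))
    g = [t / sigma2 for t in theta]
    for Xi, Yi in zip(X, Y):
        c = Yi - F(dot(theta, Xi))
        g = [gj - c * xj for gj, xj in zip(g, Xi)]
    return g

# Sanity checks on a tiny hypothetical example.
X = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
Y = [1, 0, 1]
theta = [0.3, -0.2]
g = grad_U_logit(theta, X, Y)
h = 1e-6
fd = []
for j in range(2):
    tp, tm = theta[:], theta[:]
    tp[j] += h
    tm[j] -= h
    fd.append((U_logit(tp, X, Y) - U_logit(tm, X, Y)) / (2.0 * h))
max_fd_err = max(abs(a - b) for a, b in zip(g, fd))
inc_orthonormal = incoherence([[1.0, 0.0], [0.0, 1.0]])  # orthonormal data: inc = 1
```

Note that each evaluation of $\nabla U$ above costs $\Theta(rd)$ arithmetic operations, consistent with the remark in Section \ref{sec:Algorithm} that the gradient is no more expensive than the target function itself in this application.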
Finally, to obtain bounds in the cold start setting, we show that Assumption \ref{assumption:tail} is satisfied with \begin{equs} b= 2 \mathsf{inc}(\nicefrac{\mathsf{X}_1}{\sqrt{\|\mathsf{X}_1\|_2}},\ldots \nicefrac{\mathsf{X}_r}{\sqrt{\|\mathsf{X}_r\|_2}}) \end{equs} if $\Sigma$ is a multiple of the identity matrix (Lemma \ref{lemma:logistic3} in Section \ref{appendix:logit}). \section{Technical overview}\label{sec:proof} For simplicity of exposition, in this proof overview we consider the special case where the HMC algorithm is given a warm start and where $\kappa=O(1)$; the general case is proved in Sections \ref{section:main} and \ref{appendix:Warm} (see Theorems \ref{thm:cold_formal} and \ref{thm:warm_formal}). Recall that Algorithm \ref{alg:Unadjusted} generates a Markov chain $X^\dagger$ which approximates the steps taken by the idealized HMC chain $X$. % Since the idealized HMC chain $X$ was shown to mix quickly in \cite{mangoubi2017rapid}, it is enough for us to bound the approximation error $\|X^\dagger_i - X_i\|_2 < \epsilon$ for all $i\leq \mathcal{I}$, where roughly speaking $\mathcal{I} = \Theta(\log \frac{1}{\epsilon})$ is a bound on the mixing time of $X$, if each HMC trajectory is run for time $T=\Theta(1)$ . To prove the conjectured $O^\ast(d^{\frac{1}{4}})$ gradient evaluation bounds, it is enough to show that an error bound $\|X^\dagger_i - X_i\|_2 < \epsilon$ holds for a numerical timestep-size $\eta = \Omega^\ast(d^{-\frac{1}{4}})$, since the HMC algorithm computes $\mathcal{I}= O^\ast(1)$ trajectories and for this choice of $\eta$ each trajectory takes $\frac{T}{\eta} = O^\ast(d^{\frac{1}{4}})$ gradient evaluations to compute. Our goal is therefore to show that the error $\|X^\dagger_i - X_i\|_2$ is bounded by $\epsilon$ for all $i\leq \mathcal{I}$ whenever $\eta = O^\ast(d^{-\frac{1}{4}})$. 
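Before outlining the proof, the two quantities being balanced, namely the $O(\eta^3)$ local discretization error of the leapfrog step and the sampling accuracy of the unadjusted chain, can both be checked concretely in the exactly solvable case $U(q)=q^2/2$ (an illustrative assumption, under which the exact flow is $q_t = q\cos t + p\sin t$):

```python
import math
import random

def grad_U(q):
    return q  # U(q) = q^2/2, an exactly solvable illustration

def leapfrog_step(q, p, eta):
    # One leapfrog update (Step 5 of Algorithm 1).
    q_new = q + eta * p - 0.5 * eta ** 2 * grad_U(q)
    p_new = p - 0.5 * eta * grad_U(q) - 0.5 * eta * grad_U(q_new)
    return q_new, p_new

# Local error vs the exact flow q_t = q cos t + p sin t, p_t = p cos t - q sin t.
# A second-order integrator has O(eta^3) one-step error, so halving eta
# should shrink the one-step error by about 2^3 = 8.
def local_error(eta, q=1.0, p=1.0):
    q1, p1 = leapfrog_step(q, p, eta)
    return (abs(q1 - (q * math.cos(eta) + p * math.sin(eta)))
            + abs(p1 - (p * math.cos(eta) - q * math.sin(eta))))

err_ratio = local_error(0.1) / local_error(0.05)  # expect roughly 8

# Unadjusted HMC (Algorithm 1) built on this step still samples accurately
# from N(0,1) for small eta, despite having no Metropolis filter.
random.seed(1)
T, eta, X = 1.0, 0.05, 0.0
samples = []
for _ in range(50000):
    q, p = X, random.gauss(0.0, 1.0)
    for _ in range(int(T / eta)):
        q, p = leapfrog_step(q, p, eta)
    X = q
    samples.append(X)
var = sum(x * x for x in samples) / len(samples)
```

The general proof must control the same two quantities without access to the exact flow, which is where the regularity conditions of Section \ref{sec:regularity} enter.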
The structure of our proof is as follows: We begin by bounding the local error of the leapfrog integrator accumulated at each numerical step (Step 1; see Lemma \ref{lemma:euler_error_summary} in Section \ref{appendix:SecondOrderEuler}). Then, we use the fact that (from a warm start) the momentum of the HMC trajectories is roughly $N(0,I_d)$ to show that the continuous HMC trajectories of the idealized chain $X$ are unlikely to travel quickly in any of the ``bad" directions $u_i$ specified in Assumption \ref{assumption:M3} (Step 2). % Specifically, we show that at every step $i$ with high probability the momentum of the HMC trajectories satisfies \begin{equs} \|p_t\|_{\infty, \mathsf{u}} = O(\log(d)) \end{equs} at every time $t\in[0,T]$ (Lemma \ref{lemma:good_summary_warm} in Section \ref{appendix:Warm}). % We then combine steps 1 and 2 to show that the numerical HMC chain also does not travel too quickly in any of the ``bad" directions, and use this fact together with our bounds in Step 2 to bound the global error of the numerical HMC trajectories (Step 3; roughly corresponds to Lemmas \ref{thm:Approx_integrator}, \ref{lemma:coupling} in Section \ref{section:main}). % Finally, we compute the value of $\eta$ needed to bound the error by $\epsilon$, and use this to bound the number of gradient evaluations (Theorem \ref{thm:main} in Section \ref{section:main}). Note that when proving bounds from a cold start (Theorem \ref{thm:cold_formal} in Section \ref{section:main}), we use the additional Assumption \ref{assumption:tail} instead of the invariant Gibbs distribution to control the behavior of the trajectories. For simplicity of exposition, formal proofs are deferred to Sections \ref{appendix:preliminaries} to \ref{appendix:logit}. \paragraph{Step 1: Error bounds for leapfrog integrator.} In this subsection we show how to use Assumption \ref{assumption:M3} to bound the error of the leapfrog integrator.
We are not aware of any previous non-asymptotic second-order error bounds for the leapfrog integrator: the bounds we are aware of hold only in the limit as the numerical step size $\eta$ goes to zero \cite{betancourt2014optimizing, mangoubi2017rapid, HMC_optimal_tuning, leimkuhler2004simulating, hairer2003geometric}. For this reason, we prove new non-asymptotic polynomial-time bounds for leapfrog here. Key to our analysis is the observation that the position estimate \begin{equs} \mathrm{q}_{j+1}= \mathrm{q}_j+\eta \mathrm{p}_j -\frac{1}{2} \eta^2 {\nabla U}(\mathrm{q}_j) \end{equs} returned by the leapfrog integrator is exactly the second-order Taylor expansion for $q_\eta(\mathrm{q}_j,\mathrm{p}_j)$, and the momentum estimate \begin{equs} \mathrm{p}_{j+1}:= \mathrm{p}_j - \frac{1}{2}\eta {\nabla U}(\mathrm{q}_j) - \frac{1}{2}\eta {\nabla U}(\mathrm{q}_{j+1}) \end{equs} approximates (with third-order error) the second-order Taylor expansion for $p_\eta(\mathrm{q}_j,\mathrm{p}_j)$ in the following way: \begin{equation} \label{eq:leapfrog3} \mathrm{p}_{j+1} = \mathrm{p}_j - \eta {\nabla U}(\mathrm{q}_j) - \frac{1}{2}\eta^2 \frac{{\nabla U}(\mathrm{q}_{j+1}) - {\nabla U}(\mathrm{q}_j)}{\eta} \approx \mathrm{p}_j - \eta {\nabla U}(\mathrm{q}_j) - \frac{1}{2}\eta^2 H_{\mathrm{q}_j} \mathrm{p}_j. \end{equation} The error in the Taylor expansion is due to the fact that the Hessian $H_{\mathrm{q}_j}$ is not constant over the trajectory. Roughly, we can use Assumption \ref{assumption:M3} to bound the error in the Hessian at each time $0 \leq t \leq \eta$: \begin{equation} \label{eq:H_bounds} \|(H_{q_t} - H_{q_0})p_t\|_2 \leq L_{\infty} \|p_t \|_{\infty, \mathsf{u}} \times \|(q_t - q_0)\|_{\infty, \mathsf{u}} \sqrt{r} \approx t L_{\infty}\| p_t \|_{\infty, \mathsf{u}} ^2 \sqrt{r}.
\end{equation} Using Equations \eqref{eq:leapfrog3} and \eqref{eq:H_bounds}, we get an error bound of roughly \begin{equation} \label{eq:leapfrog4} \|\mathrm{p}_{j+1} - p_\eta(\mathrm{q}_j, \mathrm{p}_j)\|_2 \leq \eta^3 L_{\infty} \textstyle{\sup_{t\in[0,\eta]}} \| p_t(\mathrm{q}_j, \mathrm{p}_j)\|_{\infty, \mathsf{u}}^{2} \sqrt{r} \end{equation} (see Lemma \ref{lemma:euler_error_summary}). Finally, we note that bounding the error for the position variable $\|\mathrm{q}_{j+1} - q_\eta(\mathrm{q}_j, \mathrm{p}_j)\|_2$ can be accomplished using standard techniques which do not require Assumption \ref{assumption:M3}. \paragraph{Step 2: Bounding $\|p_t\|_{\infty, \mathsf{u}}$ for the idealized HMC chain.} Since the error bound of the leapfrog integrator depends crucially on $\|p_t\|_{\infty, \mathsf{u}}$, our next task is to show that \begin{equation} \label{eq:InfinityNorm} \|p_t\|_{\infty, \mathsf{u}}\leq O(\mathrm{polylog}(d, \nicefrac{1}{\delta}) + \omega) \end{equation} with high probability for the idealized HMC chain (this is roughly the conclusion of Lemma \ref{lemma:good_summary_warm}). % To do so, we use the fact that (from a warm start) the distribution of the momentum at any point on an HMC trajectory is roughly $N(0,I_d)$. % To show this, we would like to use the fact that the position and momentum of HMC trajectories from an idealized HMC chain started at the \textit{stationary} distribution $\pi$ are jointly distributed according to the Gibbs distribution $\propto e^{-U(q_t)}e^{-\frac{1}{2}\|p_t\|_2^{2}}$ at any given time $t$. \textbf{2a: Bounding $\|p_t\|_{\infty, \mathsf{u}}$ from a stationary start} We first consider a copy $\tilde{Y}_i$ of the idealized chain started at the stationary distribution $\tilde{Y}_0 \sim \pi$, and show that the momentum $p_t$ of its trajectories satisfies \begin{equs} \|p_t\|_{\infty, \mathsf{u}} =O(\log(d))\end{equs} at every time $t$ w.h.p.
Since the $\tilde{Y}$ chain is started at the stationary distribution, it remains stationary at every step, and the position $q_t$ and momentum $p_t$ of its trajectories have Gibbs distribution $\propto e^{-U(q_t)}e^{-\frac{1}{2}\|p_t\|_2^{2}}$ at any fixed time $t$. Using the fact that for every bad direction $u_i$, $|u_i^\top p_t|$ is chi-distributed with 1 degree of freedom, we apply the Hanson-Wright inequality together with a union bound to show that \begin{equs} \|p_t\|_{\infty, \mathsf{u}} =O(\log(\nicefrac{d}{\xi}))\end{equs} at any fixed time $t$ with probability at least $1-\xi$ for any $\xi>0$. % However, our goal is to bound $\|p_t\|_{\infty, \mathsf{u}}$ simultaneously at every time $t$, not just at a fixed time. % Unfortunately, the trajectories are continuous paths, so we cannot directly apply a union bound to obtain a bound at every $t$. % To get around this problem, we consider $\mathcal{J} = \mathrm{poly}(\kappa,d)$ equally spaced timepoints on the interval $[0,T]$, and apply a union bound to show that \begin{equs} \|p_t\|_{\infty, \mathsf{u}} =\mathrm{polylog}(d, \nicefrac{1}{\delta}) \end{equs} at each of these timepoints with probability at least $1-\delta$. % We then use the ``conservation of energy'' property to bound the Euclidean norm of the momentum at every time on the trajectory, implying that the position and momentum do not change by more than $O(1)$ inside each time interval of length $\frac{1}{\mathcal{J}}$. % This in turn implies that \begin{equs} \|p_t\|_{\infty, \mathsf{u}} =\mathrm{polylog}(d, \nicefrac{1}{\delta}) \end{equs} at every time $t\in[0,T]$. \textbf{2b: Bounding $\|p_t\|_{\infty, \mathsf{u}}$ from a warm start} Unfortunately, we cannot apply the results of step 2a directly, since we only assume that $X_0$ has a warm start, not a stationary start. 
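The logarithmic scale of $\|p_t\|_{\infty, \mathsf{u}}$ under a Gaussian momentum resample can be illustrated empirically. A minimal sketch, assuming for illustration that the bad directions $u_i$ are the coordinate axes (so that the quantity of interest is the maximum of $d$ independent standard Gaussians):

```python
import math
import random

random.seed(0)
d = 10_000  # dimension; for illustration the bad directions are the coordinate axes

def inf_norm_of_momentum(d):
    # max_i |u_i^T p| for p ~ N(0, I_d) when u_i = e_i
    return max(abs(random.gauss(0.0, 1.0)) for _ in range(d))

# a Gaussian tail bound plus a union bound over the d directions gives
# max_i |p_i| <= sqrt(2 log(2 d / xi)) with probability >= 1 - xi
xi = 0.01
bound = math.sqrt(2.0 * math.log(2.0 * d / xi))
trials = [inf_norm_of_momentum(d) for _ in range(20)]
```

Even for $d = 10^4$, the maximum stays near $\sqrt{2\log d} \approx 4.3$, far below $\sqrt{d}$, which is the behavior the union-bound argument above exploits.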
That is, we only assume that \begin{equs} \|X_0-\tilde{Y}_0\|_2 < \omega \end{equs} for some $\omega>0$, where $\tilde{Y}_0\sim \pi$ is at the stationary distribution. % To show that the trajectories of our warm-started chain also approximately satisfy this Gibbs distribution property, we couple the two copies $X$ and $\tilde{Y}$ of the idealized HMC chain by defining the $\tilde{Y}$ chain using the update rule $\tilde{Y}_{i+1} = q_T(\tilde{Y}_i, \mathbf{p}_i)$ with the same sequence of initial momenta $\mathbf{p}_1,\mathbf{p}_2,\ldots$ that were used to define the $X$ chain (Figure \ref{fig:coupling_Euler}). % Using the fact that the trajectories share the same initial momentum $\mathbf{p}_i$ at every step, we show that at every continuous time $t\in[0,T]$ the Euclidean distance between the positions and momenta of the trajectories of the two chains remains bounded by $\omega$. % We therefore have \begin{equs} \|p_t(X_i, \mathbf{p}_i) - p_t(\tilde{Y}_i, \mathbf{p}_i)\|_{\infty, \mathsf{u}} \leq \|p_t(X_i, \mathbf{p}_i) - p_t(\tilde{Y}_i, \mathbf{p}_i)\|_2 \leq \omega. \end{equs} \begin{figure}[H] \begin{center} \includegraphics[trim={0cm 4cm 1cm 3cm}, clip, scale=0.3]{Coupling_HMC_Euler} \end{center} \caption{ Coupling two copies $X$ (blue) and $\tilde{Y}$ (green) of idealized HMC by choosing the same momentum $\textbf{p}_i$ at every step. This coupling causes the distance between the chains to contract at each step if the potential $U$ (red level sets) is strongly convex. The Markov chain $X^\dagger$ (dark grey, dashed) is computed using a numerical integrator and approximates the idealized HMC Markov chain $X$.}\label{fig:coupling_Euler} \end{figure} \paragraph{Step 3: Bounding the global error and the number of gradient evaluations.} So far, we have shown that the trajectories of the idealized HMC chain $X$ satisfy a bound on $\|p_t\|_{\infty, \mathsf{u}}$ (Equation \eqref{eq:InfinityNorm}). 
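The effect of the shared-momentum coupling can be sketched on a separable quadratic potential, for which the Hamiltonian flow is available in closed form per coordinate (an illustration under this simplifying assumption, not the paper's general setting). With the same momenta, the difference between the two chains evolves deterministically and contracts at each step:

```python
import math
import random

random.seed(1)
curvatures = [0.5, 1.0, 2.0, 4.0]  # spectrum of a separable quadratic potential
T = 0.25                           # integration time per HMC step

def flow(q, p, t, c):
    # exact Hamiltonian flow of the one-dimensional potential U(q) = c * q**2 / 2
    w = math.sqrt(c)
    return (q * math.cos(w * t) + (p / w) * math.sin(w * t),
            p * math.cos(w * t) - q * w * math.sin(w * t))

def hmc_step(x, momenta):
    # one idealized HMC step: run the exact flow for time T in each coordinate
    return [flow(xi, pi, T, c)[0] for xi, pi, c in zip(x, momenta, curvatures)]

x = [3.0, -1.0, 2.0, 0.5]
y = [0.0, 0.0, 0.0, 0.0]
d0 = math.dist(x, y)
for _ in range(10):
    p_shared = [random.gauss(0.0, 1.0) for _ in curvatures]  # same momenta for both chains
    x = hmc_step(x, p_shared)
    y = hmc_step(y, p_shared)
d1 = math.dist(x, y)
# with shared momenta, each coordinate of x - y shrinks by cos(sqrt(c) * T) per step
```

In this quadratic example the per-step contraction factor in a coordinate with curvature $c$ is $\cos(\sqrt{c}\,T) \leq 1-\Theta(cT^2)$, in line (up to constants) with the $1-\frac{1}{8}(\sqrt{m}T)^2$ rate of Theorem \ref{thm:contraction}.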
If we can extend this bound to the numerical chain, we can apply it to Inequality \eqref{eq:leapfrog4} to show that the error at each step is $O(\eta^3 L_{\infty} \sqrt{r})$ w.h.p. To bound the global error, we use roughly the following inductive argument: inductively assume that the errors $\|\textrm{q}_j - q_{j\eta}(X_i, \mathbf{p}_i)\|_2$ and $\|\textrm{p}_j - p_{j\eta}(X_i, \mathbf{p}_i)\|_2$ at numerical step $j$ are bounded by roughly $j\eta \times \epsilon $. % This implies that % \begin{equation} \|\textrm{p}_j\|_{\infty, \mathsf{u}} \leq \|p_{j\eta}(X_i, \mathbf{p}_i)\|_{\infty, \mathsf{u}} + j\eta \epsilon \stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:InfinityNorm}}{\leq} \mathrm{polylog}(d, \nicefrac{1}{\delta}) + \omega. \end{equation} One can then use the same ``conservation of energy'' arguments as in the previous section to show that \begin{equs} \|p_t(\mathrm{q}_j, \mathrm{p}_j)\|_{\infty, \mathsf{u}} = \mathrm{polylog}(d, \nicefrac{1}{\delta}) \end{equs} over the short time interval $t\in [0,\eta]$. Plugging this bound into Inequality \eqref{eq:leapfrog4} allows us to bound the error accumulated at step $j$ by $O(\eta^3 L_{\infty} \sqrt{r})$, implying that the inductive assumption also holds for step $j+1$ (see Lemmas \ref{thm:Approx_integrator} and \ref{lemma:coupling} in Section \ref{section:main}). After $\frac{T}{\eta}$ numerical steps, the global error of each trajectory is therefore bounded by $T \times \eta^2 L_{\infty} \sqrt{r}$, and the error at step $i$ of the chain is bounded by $i\times T \eta^2 L_{\infty} \sqrt{r}$. % Finally, we conclude that \begin{equs} \|X^{\dagger}_{i} - X_{i}\|_2 < \epsilon \end{equs} for $i \leq \mathcal{I}= O(\log(\frac{1}{\epsilon}))$ whenever $\eta^{-1} = \tilde{\Theta}(r^{\frac{1}{4}} \sqrt{\tilde{L}_{\infty}} \epsilon^{-\frac{1}{2}} T)$. 
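The way the $O(\eta^3)$ one-step errors accumulate into an $O(\eta^2)$ error at the end of a trajectory of $\frac{T}{\eta}$ steps can also be checked on the toy potential $U(q) = q^2/2$ (illustrative only; the helper names are ours): halving $\eta$ should shrink the end-of-trajectory error by roughly $2^2 = 4$.

```python
import math

def grad_U(q):
    # toy potential U(q) = q**2 / 2
    return q

def leapfrog(q, p, eta, n_steps):
    for _ in range(n_steps):
        q_new = q + eta * p - 0.5 * eta ** 2 * grad_U(q)
        p = p - 0.5 * eta * grad_U(q) - 0.5 * eta * grad_U(q_new)
        q = q_new
    return q, p

def global_error(eta, T=1.0, q=1.0, p=0.5):
    q_num, _ = leapfrog(q, p, eta, round(T / eta))
    q_exact = q * math.cos(T) + p * math.sin(T)  # exact flow for U(q) = q**2 / 2
    return abs(q_num - q_exact)

# halving eta should shrink the end-of-trajectory error by about 2**2 = 4
ratio = global_error(0.05) / global_error(0.025)
```

The local error is $O(\eta^3)$ per step, and summing over the $T/\eta$ steps of one trajectory yields the $T\eta^2$-type global bound stated above; the measured ratio is close to $4$, i.e., second order in $\eta$.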
Since the algorithm uses $\frac{T}{\eta}$ numerical steps to compute each iterate $X_i$, $i \leq \mathcal{I}$, for $T=\Theta(1)$ the number of gradient evaluations is roughly $\tilde{O}(r^{\frac{1}{4}} \sqrt{\tilde{L}_{\infty}} \epsilon^{-\frac{1}{2}} )$. % In the regime when $r=O(d)$, our bound on the number of gradient evaluations is therefore roughly $\tilde{O}(d^{\frac{1}{4}} \sqrt{\tilde{L}_{\infty}} \epsilon^{-\frac{1}{2}} )$. \begin{remark} \label{remark:MassMatrix} More generally, we can consider Hamiltonian trajectories with Hamiltonian $\mathcal{H}(q,p) = U(q) + \frac{1}{2}p^\top \Omega p$, where $\Omega$ is called the mass matrix. In practice, one tunes the algorithm by using a mass matrix $\Omega = cI_d$ for some constant $c>0$ and by choosing an appropriate integration time $T$ (as well as other parameters such as the numerical step size). % Since using a mass matrix of the form $\Omega = cI_d$ for some constant $c$ is equivalent to rescaling $U$ by a constant factor and tuning the integration time $T$, we analyze the case $\Omega = I_d$ and then ``tune'' our algorithm by setting $T = \frac{\sqrt{m}}{6}$, $M=1$ and $\kappa = \frac{1}{m}$ to determine the number of gradient evaluations. (A more general mass matrix $\Omega$ is equivalent to applying a pre-conditioner to $U$.) \end{remark} \section{Preliminaries}\label{appendix:preliminaries} \subsection{Contraction of parallel Hamiltonian trajectories} If the potential $U$ is strongly convex and gradient-Lipschitz, we have the following comparison theorem, originally proved in \citep{mangoubi2017rapid}: \begin{theorem}[Theorem 6 of \citep{mangoubi2017rapid}] \label{thm:contraction} \label{ThmContractionConvexMainResult} For any $0 \leq T \leq \frac{1}{2\sqrt{2}} \frac{\sqrt{m}}{M}$ and any $x,y, \mathsf{p} \in \mathbb{R}^d$ we have \begin{equs} \label{IneqContractLemmaMain} \|q_T(y, \mathsf{p}) - q_T(x, \mathsf{p})\|_2 \leq \left[1- \frac{1}{8} (\sqrt{m}T)^2 \right] \times \|x-y\|_2. 
\end{equs} \end{theorem} \begin{lemma}[Lemma 2.2 of \citep{mangoubi2017rapid}] \label{MS2} Let $(\mathbf{q},\mathbf{p}), (\hat{\mathbf{q}},\hat{\mathbf{p}}) \in \mathbb{R}^{2d}$. For $t \geq 0$ we have \begin{equs} \label{thmboundsconc1} \|q_t(\mathbf{q},\mathbf{p}) - q_t(\hat{\mathbf{q}},\hat{\mathbf{p}})\|_2 &\leq a_1e^{t\sqrt{M}} + a_2e^{-t\sqrt{M}}\\ \|p_t(\mathbf{q},\mathbf{p})-p_t(\hat{\mathbf{q}},\hat{\mathbf{p}})\|_2 &\leq a_1\sqrt{M}e^{t\sqrt{M}} - a_2\sqrt{M}e^{-t\sqrt{M}}, \end{equs} where $a_1 = \frac{1}{2}(\|\mathbf{q}-\hat{\mathbf{q}}\|_2 + \frac{\|\mathbf{p}-\hat{\mathbf{p}}\|_2}{\sqrt{M}})$, $a_2 = \frac{1}{2}(\|\mathbf{q}-\hat{\mathbf{q}}\|_2 - \frac{\|\mathbf{p}-\hat{\mathbf{p}}\|_2}{\sqrt{M}})$. \end{lemma} \begin{remark} It is possible to show a contraction rate of $(1-\Theta(\frac{m}{M}))$ for the special case of multivariate Gaussians, even though Theorem \ref{thm:contraction} gives a contraction rate of only $(1-\Theta((\frac{m}{M})^2))$ for more general strongly logconcave distributions. Moreover, the geodesic walk, a random walk related to HMC that runs on manifolds, can be shown using the Rauch comparison theorem from differential geometry to have mixing time $\tilde{O}(\frac{\mathfrak{M}}{\mathfrak{m}})$ on any manifold with positive sectional curvature bounded above and below by $\mathfrak{M}, \mathfrak{m}>0$, respectively \cite{mangoubi2016rapid}. Both of these facts suggest that it may be possible to strengthen the contraction bound in Theorem \ref{thm:contraction}. An improvement in this contraction bound would directly imply stronger gradient evaluation bounds in our paper. 
\end{remark} \subsection{ODE comparison theorem} We make frequent use of the following comparison theorem for systems of ordinary differential equations, a generalization of Gronwall's inequality originally stated in Proposition 1.4 of \citep{comparison_ODE}: \begin{lemma} [ODE comparison theorem, Proposition 1.4 of \citep{comparison_ODE}] \label{LemmaOdeComp} Let $\mathcal{U} \subseteq \mathbb{R}^{n}$ and $I \subseteq \mathbb{R}$ be open, nonempty and connected. Let $f, \, g \, : \, I \times \mathcal{U} \to \mathbb{R}^{n}$ be continuous and locally Lipschitz maps. Then the following are equivalent: \begin{enumerate} \item For each pair $(t_{0},y)$, $(t_{0}, \overline{y})$ with $t_{0} \in I$ and $y, \overline{y} \in \mathcal{U}$, the inequality $y \leq \overline{y}$ implies $z(t) \leq \overline{z}(t)$ for all $t \geq t_{0}$, where \begin{equs} \frac{d}{dt} z &= f(t,z), \qquad z(t_{0}) = y \\ \frac{d}{dt} \overline{z} &= g(t,\overline{z}), \qquad \overline{z}(t_{0}) = \overline{y}. \end{equs} \item For all $i \in \{1,2,\ldots,n\}$ and all $t \geq t_{0}$, the inequality \begin{equs} g(t, (\overline{x}[1], \ldots, \overline{x}[i-1], x[i], \overline{x}[i+1],\ldots,\overline{x}[n]))[i] \geq f(t, (x[1],\ldots,x[i-1],x[i],x[i+1],\ldots,x[n]))[i] \end{equs} holds whenever $\overline{x}[j] \geq x[j]$ for every $j \neq i$. \end{enumerate} \end{lemma} \subsection{Distributions and mixing} We denote the distribution of a random variable $X$ by $\mathcal{L}(X)$ and write $X \sim \nu$ as a shorthand for $\mathcal{L}(X) = \nu$. For two probability measures $\nu_1, \nu_2$ on $\mathbb{R}^d$, define the \textit{Prokhorov distance} \begin{equs} \mathsf{Prok}(\nu_{1},\nu_{2}) = \inf \{ \epsilon > 0 \, : \, \forall \text{ measurable } A \subseteq \mathbb{R}^{d}, \, \nu_{1}(A) \leq \nu_{2}(A_{\epsilon}) + \epsilon \text{ and } \nu_{2}(A) \leq \nu_{1}(A_{\epsilon}) + \epsilon \}, \end{equs} where $A_{\epsilon} = \{ x \in \mathbb{R}^{d} \, : \, \inf_{y \in A} \, \| x-y\|_2 < \epsilon \}$. 
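As a concrete sanity check on how the ODE comparison theorem is used below (cf. Lemma \ref{lemma:upper_summary} and its coefficients $k_1, k_2$), one can numerically integrate a trajectory whose drift is dominated by the linear system $\dot{\mathfrak{q}} = \mathfrak{p}$, $\dot{\mathfrak{p}} = M\mathfrak{q} + b$ and compare it with the closed-form dominating solution. A sketch with illustrative values of $M$ and $b$:

```python
import math

M, b = 4.0, 0.5
q0, p0 = 1.0, 0.3
k1 = 0.5 * q0 + 0.5 * p0 / math.sqrt(M) + 0.5 * b / M
k2 = 0.5 * q0 - 0.5 * p0 / math.sqrt(M) + 0.5 * b / M

def dominating(t):
    # closed-form solution of q' = p, p' = M*q + b with (q(0), p(0)) = (q0, p0)
    s = math.sqrt(M)
    q = -b / M + k1 * math.exp(t * s) + k2 * math.exp(-t * s)
    p = s * (k1 * math.exp(t * s) - k2 * math.exp(-t * s))
    return q, p

def dominated(t_end, dt=1e-4):
    # Euler integration of a trajectory whose drift satisfies p' <= M*q + b
    q, p = q0, p0
    for _ in range(int(round(t_end / dt))):
        q, p = q + dt * p, p + dt * (0.5 * M * q + 0.2 * b)  # strictly weaker drift
    return q, p

qd, pd = dominating(0.5)
qs, ps = dominated(0.5)
# the comparison theorem predicts qs <= qd and ps <= pd
```

The closed-form coefficients $k_1, k_2$ here are exactly those of Lemma \ref{lemma:upper_summary}; the dominated trajectory indeed stays below the dominating one.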
\section{Toy integrator and Markov chain}\label{appendix:toy} Define the following shorthand for the leapfrog integrator: \begin{equs} q^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) &:= \mathbf{q}+\eta \mathbf{p} -\frac{1}{2} \eta^2 {\nabla U}(\mathbf{q}),\\ p^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) &:= \mathbf{p} - \frac{1}{2}\eta {\nabla U}(\mathbf{q}) - \frac{1}{2}\eta {\nabla U}\left(q^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) \right). \end{equs} Define the ``good'' set $\mathsf{G}$ to be $\mathsf{G} := \{(q,p) \in \mathbb{R}^d\times \mathbb{R}^d : |u_i^\top p| < \mathsf{g}_1 \, \forall \, 1\leq i \leq r, \|q\|_2 < \mathsf{g}_2, \|p\|_2 < \mathsf{g}_3\}$ for some (yet to be defined) $\mathsf{g}_1,\mathsf{g}_2, \mathsf{g}_3>0$. % Define the $\epsilon$-thickening of $\mathsf{G}$ to be $\mathsf{G}_\epsilon:= \{(q,p) : \|q-x\|_2 < \epsilon, \|p-y\|_2 < \epsilon \textrm{ for some }(x,y) \in \mathsf{G}\}$. % We define the following local toy integrator: % \begin{definition} \label{defn:toy} % Define the local toy integrator $\diamondsuit$ by the equations \begin{equs} \label{eq:toy} (q^\diamondsuit_{\eta}(\mathbf{q},\mathbf{p}), p^\diamondsuit_{\eta}(\mathbf{q},\mathbf{p})) := \begin{cases}q^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}), p^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) \quad \quad \quad \quad \textrm{ if } (q_t(\mathbf{q},\mathbf{p}), p_t(\mathbf{q},\mathbf{p})) \in \mathsf{G}_\epsilon \, \forall \, t \in [0,\eta]\\ q_{\eta}(\mathbf{q},\mathbf{p}), p_{\eta}(\mathbf{q},\mathbf{p}) \quad \quad \quad \, \, \, \textrm{ otherwise.} \end{cases} \end{equs} \end{definition} Next we define the (global) toy integrator $(q^{\diamonddiamond \eta}_t, p^{\diamonddiamond \eta}_t)$ using the local toy integrator $\diamondsuit$: \begin{algorithm}[H] \caption{Toy integrator \label{alg:integrator_toy}} \textbf{parameters:} Potential $U: \mathbb{R}^d \rightarrow \mathbb{R}$, integration time $T>0$, discretization level $\eta>0$.\\ \textbf{input:} Initial position $\mathbf{q} \in \mathbb{R}^d$, initial 
momentum $\mathbf{p} \in \mathbb{R}^d$.\\ % \textbf{output:} $q^{\diamonddiamond \eta}_{T}(\mathbf{q},\mathbf{p})$. % \begin{algorithmic}[1] \State Set $q^0=\mathbf{q}$ and $p^0 = \mathbf{p}$. % \For{$i=0$ to $\lceil \frac{T}{\eta}-1 \rceil$} \\ Set $q^{i+1} = q^\diamondsuit_{\eta}(q^i,p^i)$ and $p^{i+1} = p^\diamondsuit_{\eta}(q^i,p^i)$. \EndFor \\ Set $q^{\diamonddiamond \eta}_{T}(\mathbf{q},\mathbf{p}) = q^{i+1}$. \end{algorithmic} \end{algorithm} Set $\hat{X}_{0} = X_0$ and define inductively the toy Markov chain \begin{equs} \hat{X}_{i+1} &= q_{T}^{\diamonddiamond \eta}(\hat{X}_{i}, \mathbf{p}_{i}) \qquad \forall i \in \mathbb{Z}^*. \end{equs} \section{Lyapunov functions for idealized HMC from a cold start} \label{section:Lyapunov} The main purpose of the lemmas in this section is to bound the behavior of the continuous Hamiltonian trajectories to show that they are unlikely to travel too quickly in the ``bad'' directions $\mathsf{u}$ specified in Assumption \ref{assumption:M3}. Since we are dealing with the cold start situation in this section, for the rest of this section we assume that Assumption \ref{assumption:tail} holds as well. We begin by bounding the trajectory position and momentum in any direction $u\in \mathsf{u}$ in terms of the initial position and momentum. For every $u \in \mathbb{S}^d$, define $\mathfrak{q}_t^u := |u^\top q_t|$, $\mathfrak{p}_t^u := |u^\top p_t|$. \begin{lemma} \label{lemma:upper_summary} Let $u \in \mathsf{u}$, and define $k_1 := \frac{1}{2}\mathfrak{q}_0^u + \frac{1}{2\sqrt{M}}\mathfrak{p}_0^u + \frac{b}{2M}$ and $k_2 := \frac{1}{2}\mathfrak{q}_0^u - \frac{1}{2\sqrt{M}}\mathfrak{p}_0^u + \frac{b}{2M}$. Then $\mathfrak{q}_t^u \leq k_1 e^{t\sqrt{M}}+ k_2 e^{-t\sqrt{M}} - \frac{b}{M}$ and $\mathfrak{p}_t^u \leq k_1\sqrt{M} e^{t\sqrt{M}} - k_2 \sqrt{M} e^{-t\sqrt{M}}$. 
\end{lemma} \begin{proof} Hamilton's equations are \begin{equs} \label{eq:hamilton1} \frac{\mathrm{d}}{\mathrm{d} t} q_t & = p_t\\ \frac{\mathrm{d}}{\mathrm{d} t} p_t &= -{\nabla U}(q_t). \end{equs} Hence, \begin{equs} \label{eq:hamilton2} u^\top \frac{\mathrm{d}}{\mathrm{d} t} q_t & = u^\top p_t\\ u^\top \frac{\mathrm{d}}{\mathrm{d} t} p_t &= -u^\top \left({\nabla U}(q_t)\right). \end{equs} Equations \eqref{eq:hamilton2} together with Assumption \ref{assumption:tail} imply that \begin{equs} \label{eq:hamilton3} \frac{\mathrm{d}}{\mathrm{d} t} \mathfrak{q}_t^u &\leq \mathfrak{p}_t^u\\ \frac{\mathrm{d}}{\mathrm{d} t} \mathfrak{p}_t^u &\leq M \mathfrak{q}_t^u + b. \end{equs} Let $(\mathfrak{q}^{u \star}, \mathfrak{p}^{u \star})$ be the solutions to the system of differential equations \begin{equs} \label{eq:hamilton4} \frac{\mathrm{d}}{\mathrm{d} t} \mathfrak{q}^{u \star}_t &= \mathfrak{p}^{u \star}_t\\ \frac{\mathrm{d}}{\mathrm{d} t} \mathfrak{p}^{u \star}_t &= M \mathfrak{q}^{u \star}_t + b \end{equs} with initial conditions $(\mathfrak{q}^{u \star}_0, \mathfrak{p}^{u \star}_0) = (\mathfrak{q}_0^u, \mathfrak{p}_0^u)$. Solving, we get \begin{equs} \mathfrak{q}^{u \star}_t &= \frac{-b}{M} + k_1 e^{t\sqrt{M}}+ k_2 e^{-t\sqrt{M}}\\ \mathfrak{p}^{u \star}_t &= k_1\sqrt{M} e^{t\sqrt{M}} - k_2 \sqrt{M} e^{-t\sqrt{M}} \end{equs} where \begin{equs} \frac{-b}{M} + k_1+ k_2 &= \mathfrak{q}_0^u\\ k_1\sqrt{M} - k_2 \sqrt{M} &= \mathfrak{p}_0^u. \end{equs} Solving for $k_1$ and $k_2$, we get \begin{equs} k_1 &= \frac{1}{2}\mathfrak{q}_0^u + \frac{1}{2\sqrt{M}}\mathfrak{p}_0^u + \frac{b}{2M}\\ k_2 &= \frac{1}{2}\mathfrak{q}_0^u - \frac{1}{2\sqrt{M}}\mathfrak{p}_0^u + \frac{b}{2M}. \end{equs} By the ODE comparison theorem (Lemma \ref{LemmaOdeComp}), we have $\mathfrak{q}_t^u \leq \mathfrak{q}^{u \star}_t$ and $\mathfrak{p}_t^u \leq \mathfrak{p}^{u \star}_t$ for all $t \geq 0$, which completes the proof. 
\end{proof} Unfortunately, applying Lemma \ref{lemma:upper_summary} at each step $i$ is not enough to show a $\mathrm{poly}(m,M,b)$ bound on $|u^\top p_t(X_i,\mathbf{p}_i)|$, since Lemma \ref{lemma:upper_summary} allows $|u^{\top}X_i|$ to grow by a constant factor at each step $i$. Rather, applying Lemma \ref{lemma:upper_summary} alone would show a bound that grows exponentially with $i$. % To get around this problem, we first use Lemma \ref{lemma:upper_summary} to prove a lower bound on the position $|u^\top q_t(X_i,\mathbf{p}_i)|$ (Lemma \ref{lemma:lower}), and then use this lower bound to prove an improved upper bound on both the momentum $|u^\top p_t(X_i,\mathbf{p}_i)|$ (Lemma \ref{lemma:upper2_p}) and position $|u^\top q_t(X_i,\mathbf{p}_i)|$ (Lemma \ref{lemma:upper2_summary}). \begin{lemma} \label{lemma:lower} Let $u \in \mathsf{u}$ and suppose that $T \leq \frac{1}{6\sqrt{M}}$. Then for every $0 \leq t \leq T$, we have \begin{equs} \mathfrak{q}_t^u \geq \frac{3}{4}\mathfrak{q}_0^u - \frac{1}{6\sqrt{M}} \left(\frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}} \right). \end{equs} \end{lemma} \begin{proof} Since $T \leq \frac{1}{6\sqrt{M}}$, by Lemma \ref{lemma:upper_summary} we have \begin{equs} \label{eq:upper3} \mathfrak{p}_t^u &\leq 2 k_1\sqrt{M} e^{t\sqrt{M}} \leq 3 k_1\sqrt{M} =\frac{3}{2}\sqrt{M} \mathfrak{q}_0^u + \frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}}. \end{equs} Thus, \begin{equs} \mathfrak{q}_t^u &\geq \mathfrak{q}_0^u - T \sup_{0\leq t \leq T} \mathfrak{p}_t^u\\ &\geq \mathfrak{q}_0^u - \frac{1}{6\sqrt{M}} \sup_{0\leq t \leq T} \mathfrak{p}_t^u\\ &\geq \mathfrak{q}_0^u - \frac{1}{6\sqrt{M}} \left(\frac{3}{2}\sqrt{M} \mathfrak{q}_0^u + \frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}} \right)\\ &= \frac{3}{4}\mathfrak{q}_0^u - \frac{1}{6\sqrt{M}} \left(\frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}} \right). 
\end{equs} \end{proof} We can now use Lemma \ref{lemma:lower} to show an improved upper bound on the momentum of the Hamiltonian trajectory: \begin{lemma} \label{lemma:upper2_p} Let $u \in \mathsf{u}$. Suppose that $T \leq \frac{1}{6\sqrt{M}}$. Then for every $0 \leq t \leq T$, we have \begin{equs}\label{eq:a8} \mathfrak{p}_t^u &\leq \frac{3}{2}\sqrt{M} \mathfrak{q}_0^u + \frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}}. \end{equs} \end{lemma} \begin{proof} The proof follows from Inequality \eqref{eq:upper3} in the proof of Lemma \ref{lemma:lower}. \end{proof} We use Lemma \ref{lemma:lower} again, this time to show an improved upper bound on the \textit{position} of the Hamiltonian trajectory: \begin{lemma} \label{lemma:upper2_summary} Let $u \in \mathsf{u}$. Suppose that $T \leq \frac{1}{6\sqrt{M}}$. Then for every $0 \leq t \leq T$, we have $\mathfrak{q}_t^u \leq \mathfrak{q}_0^u + t\times \mathfrak{p}_0^u + \frac{1}{2}t^2 \times \left(-\frac{3m}{4} \mathfrak{q}_0^u + \frac{m}{4\sqrt{M}}\mathfrak{p}_0^u + b+\frac{m}{4M}b\right).$ \end{lemma} \begin{proof} By Lemma \ref{lemma:lower}, we have \begin{equs} \label{eq:a1} |u^\top q_t| = \mathfrak{q}_t^u \geq \frac{3}{4}\mathfrak{q}_0^u - \frac{1}{6\sqrt{M}} \left(\frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}} \right) \end{equs} for all $0 \leq t \leq T$. Therefore, if $u^\top q_t \geq 0$, by Assumption \ref{assumption:tail} we have \begin{equs} \label{eq:a2} \frac{\mathrm{d}}{\mathrm{d}t}u^\top p_t &= -u^\top {\nabla U}(q_t)\\ &\leq -m (u^\top q_t) + b\\ &\stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:a1}}{\leq} -m \left(\frac{3}{4}\mathfrak{q}_0^u - \frac{1}{6\sqrt{M}} \left(\frac{3}{2}\mathfrak{p}_0^u + \frac{3}{2}\frac{b}{\sqrt{M}} \right)\right) + b\\ &= -\frac{3m}{4} \mathfrak{q}_0^u + \frac{m}{4\sqrt{M}}\mathfrak{p}_0^u + b+\frac{m}{4M}b \end{equs} for all $0 \leq t \leq T$. 
Integrating Equation \eqref{eq:a2}, if $u^\top q_t \geq 0$ we get \begin{equs} u^\top p_t &\leq u^\top p_0 + t \times \left(-\frac{3m}{4} \mathfrak{q}_0^u + \frac{m}{4\sqrt{M}}\mathfrak{p}_0^u + b+\frac{m}{4M}b\right)\\ u^\top q_t &= u^\top q_0 + \int_0^t u^\top p_\tau \mathrm{d}\tau \leq u^\top q_0 + t\times u^\top p_0 + \frac{1}{2}t^2 \times \left(-\frac{3m}{4} \mathfrak{q}_0^u + \frac{m}{4\sqrt{M}}\mathfrak{p}_0^u + b+\frac{m}{4M}b\right) \end{equs} for all $0 \leq t \leq T$. A similar calculation for the case $u^\top q_t < 0$ completes the proof. \end{proof} Fix the constants \begin{equs} \label{eq:alpha_beta} \alpha &= \frac{19}{20}\\ \beta &= 2 e^{\frac{1}{2}(\lambda (T+ \frac{m}{8\sqrt{M}}T^2))^2} \times \exp (\lambda [\frac{\frac{4}{\lambda} + (1+\frac{m}{4M})\frac{b}{2}T^2 + \frac{1}{2}\lambda (T+ \frac{m}{8\sqrt{M}}T^2)^2}{\frac{3m}{8}T^2}] + (1+\frac{m}{4M})\frac{b}{2}T^2),\\ \hat{\beta} &= e^{\frac{10}{\frac{1}{8}mT^2}},\\ \lambda &= \frac{1}{16}. \end{equs} Recall from Section \ref{sec:proof} that the good set is defined as $\mathsf{G} := \{(q,p) \in \mathbb{R}^d\times \mathbb{R}^d : |u_i^\top p| < \mathsf{g}_1 \, \forall \, 1\leq i \leq r, \|q\|_2 < \mathsf{g}_2, \|p\|_2 < \mathsf{g}_3\}$. We now define the constants $\mathsf{g}_1,\mathsf{g}_2, \mathsf{g}_3$: \begin{equs} \mathsf{g}_1 = \lambda^{-1}3\sqrt{M}\log(\frac{4\beta}{\lambda \alpha})+\frac{3}{2}\frac{b}{\sqrt{M}}, \end{equs} \begin{equs} \mathsf{g}_2 = \frac{1}{\sqrt{1+\frac{1}{2M}}}\left[\sqrt{\frac{d}{m}}\log\left(\frac{4\hat{\beta}(\mathcal{I}+1)}{\alpha \delta}\right) + \sqrt{8\log\left(\frac{2(\mathcal{I}+1)}{\delta}\right)+2d}\right], \end{equs} and % \begin{equs} \mathsf{g}_3 = \frac{\sqrt{1+\frac{1}{2M}}}{\sqrt{2M+1}}\mathsf{g}_2. \end{equs} Define $X_{i}^u := |u^\top X_{i}|$ for every $u \in \mathbb{S}^d$ and $i \in \mathbb{N}$. 
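The drift condition established next (Lemma \ref{lemma:Lyapunov1_summary}) is consumed downstream by iterating $\mathbb{E}[V_i] \leq (1-\alpha)\mathbb{E}[V_{i-1}] + \beta$, which keeps $\mathbb{E}[V_i]$ bounded by roughly $\nicefrac{\beta}{\alpha}$ at every step. A sketch of this recursion with illustrative values of $\alpha$ and $\beta$ (not the paper's constants):

```python
alpha, beta = 19.0 / 20.0, 2.5  # illustrative values, not the paper's constants
v = 1.0                          # stands in for E[V(X_0^u)]
history = []
for _ in range(50):
    v = (1.0 - alpha) * v + beta  # worst case of the drift inequality
    history.append(v)
fixed_point = beta / alpha       # geometric convergence to beta / alpha
```

Because the contraction factor $1-\alpha$ is strictly below one, the iterates converge geometrically to $\nicefrac{\beta}{\alpha}$ and never exceed roughly twice that value when started below it, which is the $\frac{2\beta}{\alpha}$ bound used in Lemma \ref{lemma:concentration_q_summary}.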
We apply Lemma \ref{lemma:upper2_summary} together with the fact that $\mathsf{p}_i \sim N(0,I_d)$ to show a contraction bound for $X_{i}^u$ using the Lyapunov function $V(x) := e^{\lambda x}$, where $\lambda = \frac{1}{16}$. \begin{lemma} \label{lemma:Lyapunov1_summary} Let $u \in \mathsf{u}$. Suppose that $T \leq \frac{1}{6\sqrt{M}}$. Then $\mathbb{E}\left[ V(X_{i}^u)| X_{i-1}^u\right] \leq (1-\alpha)V(X_{i-1}^u) + \beta$ for every $i \in \mathbb{N}$. \end{lemma} \begin{proof} Set $q_0 = X_{i-1}$ and $p_0 = \mathsf{p}_{i-1}$, so that $X_{i}= q_T(q_0,p_0)$. Then $\mathfrak{q}_0^u = X_{i-1}^u$ and $\mathfrak{q}_T^u = X_{i}^u$. % Furthermore, since $\mathsf{p}_{i-1} \sim N(0, I_d)$, we have that $\mathfrak{p}_0^u \sim \chi_1$. By Lemma \ref{lemma:upper2_summary}, we have \begin{equs} \label{eq:a3} \exp \left( \lambda \mathfrak{q}_t^u\right) &\leq \exp \left( \lambda \left[ \mathfrak{q}_0^u + T\times \mathfrak{p}_0^u + \frac{1}{2}T^2 \times \left(-\frac{3m}{4} \mathfrak{q}_0^u + \frac{m}{4\sqrt{M}}\mathfrak{p}_0^u + b+\frac{m}{4M}b\right)\right] \right)\\ &= \exp \left( \lambda \left[ \mathfrak{q}_0^u (1-\frac{3m}{8}T^2) + \mathfrak{p}_0^u (T+ \frac{m}{8\sqrt{M}}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right)\\ &=\exp ( \lambda \mathfrak{q}_0^u) \times \exp \left( \lambda \left[ \mathfrak{q}_0^u (-\frac{3m}{8}T^2) + \mathfrak{p}_0^u (T+ \frac{m}{8\sqrt{M}}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right). \end{equs} Define $\mathsf{c}:= \frac{\frac{4}{\lambda} + (1+\frac{m}{4M})\frac{b}{2}T^2 + \frac{1}{2}\lambda (T+ \frac{m}{8\sqrt{M}}T^2)^2}{\frac{3m}{8}T^2}$, and suppose first that $\mathfrak{q}_0^u \geq \mathsf{c}$. Then by Inequality \eqref{eq:a3}, we have \begin{equs} \label{eq:a4} \mathbb{E}&\left[\exp \left( \lambda X_{i}^u\right)| X_{i-1}^u\right]\times \mathbbm{1}\{X_{i-1}^u \geq \mathsf{c}\} = \mathbb{E}\left[\exp \left( \lambda \mathfrak{q}_T^{u}\right)\right] \times \mathbbm{1}\{\mathfrak{q}_0^u \geq \mathsf{c}\}\\ &\stackrel{{\scriptsize \textrm{Eq. 
}}\ref{eq:a3}}{\leq} \mathbb{E}_{\mathfrak{p}_0^u \sim \chi_1}\bigg[\exp ( \lambda \mathfrak{q}_0^u)\\ &\qquad \qquad \times \exp \left( \lambda \left[ \mathfrak{q}_0^u (-\frac{3m}{8}T^2) + \mathfrak{p}_0^u (T+ \frac{m}{8\sqrt{M}}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right)\bigg] \times \mathbbm{1}\{\mathfrak{q}_0^u \geq \mathsf{c}\}\\ &= \mathbb{E}_{\mathfrak{p}_0^u \sim \chi_1}\left[ \exp \left( \lambda (T+ \frac{m}{8\sqrt{M}}T^2) \mathfrak{p}_0^u\right) \right] \exp ( \lambda \mathfrak{q}_0^u) \\ &\qquad \qquad \times \exp \left( \lambda \left[ \mathfrak{q}_0^u (-\frac{3m}{8}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right) \times \mathbbm{1}\{\mathfrak{q}_0^u \geq \mathsf{c}\}\\ &\leq 2\mathbb{E}_{G \sim N(0,1)}\left[ \exp \left( \lambda (T+ \frac{m}{8\sqrt{M}}T^2) G\right) \right]\\ &\qquad \qquad \times \exp ( \lambda \mathfrak{q}_0^u) \times \exp \left( \lambda \left[ \mathfrak{q}_0^u (-\frac{3m}{8}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right) \times \mathbbm{1}\{\mathfrak{q}_0^u \geq \mathsf{c}\}\\ &=2 \exp \left( \frac{1}{2}(\lambda (T+ \frac{m}{8\sqrt{M}}T^2))^2\right) \times \exp ( \lambda \mathfrak{q}_0^u)\\ &\qquad \qquad \times \exp \left( \lambda \left[ \mathfrak{q}_0^u (-\frac{3m}{8}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right) \times \mathbbm{1}\{\mathfrak{q}_0^u \geq \mathsf{c}\}\\ &\leq \frac{1}{20} \exp ( \lambda \mathfrak{q}_0^u)\\ &= (1-\alpha) \exp (\lambda X_{i-1}^u). \end{equs} Now suppose instead that $0\leq \mathfrak{q}_0^u < \mathsf{c}$. 
Then by Lemma \ref{lemma:upper2_summary}, we have \begin{equs} \label{eq:a6} \mathbb{E}\bigg[\exp &\left( \lambda X_{i}^u\right)| X_{i-1}^u\bigg]\times \mathbbm{1}\{X_{i-1}^u < \mathsf{c}\} = \mathbb{E}\left[\exp \left( \lambda \mathfrak{q}_T^{u}\right)\right] \times \mathbbm{1}\{\mathfrak{q}_0^u < \mathsf{c}\}\\ &\stackrel{{\scriptsize \textrm{Lemma }}\ref{lemma:upper2_summary}}{\leq} \mathbb{E}_{\mathfrak{p}_0^u \sim \chi_1}\left[\exp \left( \lambda \left[ \mathfrak{q}_0^u + \mathfrak{p}_0^u (T+ \frac{m}{8\sqrt{M}}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right)\right] \times\mathbbm{1}\{\mathfrak{q}_0^u < \mathsf{c}\} \\ &\leq \mathbb{E}_{\mathfrak{p}_0^u \sim \chi_1}[ \exp \left( \lambda \left[ \mathsf{c} + \mathfrak{p}_0^u (T+ \frac{m}{8\sqrt{M}}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2 \right] \right)]\\ &= \mathbb{E}_{\mathfrak{p}_0^u \sim \chi_1}\left[ \exp \left( \lambda (T+ \frac{m}{8\sqrt{M}}T^2) \mathfrak{p}_0^u\right) \right] \times \exp (\lambda \mathsf{c} + (1+\frac{m}{4M})\frac{b}{2}T^2)\\ &\leq 2\mathbb{E}_{G \sim N(0,1)}\left[ \exp \left( \lambda (T+ \frac{m}{8\sqrt{M}}T^2) G\right) \right] \times \exp (\lambda \mathsf{c} + (1+\frac{m}{4M})\frac{b}{2}T^2)\\ &= 2 \exp \left( \frac{1}{2}(\lambda (T+ \frac{m}{8\sqrt{M}}T^2))^2\right) \times \exp (\lambda \mathsf{c} + (1+\frac{m}{4M})\frac{b}{2}T^2)\\ &\leq \beta. \end{equs} Combining Equations \eqref{eq:a4} and \eqref{eq:a6} completes the proof. \end{proof} We can now apply the Lyapunov function contraction bound of Lemma \ref{lemma:Lyapunov1_summary} to show that the projection of $X_i$ in any ``bad'' direction is bounded w.h.p. \begin{lemma} \label{lemma:concentration_q_summary} Let $u \in \mathsf{u}$. 
Then $\mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} X_{h}^u > \xi] \leq \frac{2 \beta (s+1)}{\alpha} e^{-\lambda \xi}$ for all $\xi>0.$ \end{lemma} \begin{proof} By Lemma \ref{lemma:Lyapunov1_summary}, we have % \begin{equs} \label{eq:Markov_inequality1} \mathbb{E}[V(X_\mathcal{I}^u)] &\leq (1 - \alpha)\mathbb{E}[V(X_{\mathcal{I}-1}^u)] + \beta \\ &\leq \ldots \\ &\leq (1 - \alpha)^{\mathcal{I}} \mathbb{E}[V(X_{0}^u)] + \frac{\beta}{\alpha} \leq \frac{2 \beta}{\alpha} \quad \quad \forall \mathcal{I}\in \mathbb{N}. \end{equs} By Markov's inequality and a union bound over the $s+1$ values of $h$, \begin{equs} \label{eq:Markov_inequality2} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} V(X_{h}^u) > \xi] \leq \frac{2 \beta (s+1)}{\alpha \xi} \end{equs} for any fixed integers $0 \leq s \leq \mathcal{I}$ and any $\xi>0$. Rewriting this, \begin{equs} \label{eq:Markov_inequality5a} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} e^{\lambda X_{h}^u} > \xi] \leq \frac{2 \beta (s+1)}{\alpha \xi} \quad \quad \forall \, \xi>0. \end{equs} Hence, \begin{equs} \label{eq:Markov_inequality5} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} X_{h}^u > \xi] \leq \frac{2 \beta (s+1)}{\alpha} e^{-\lambda \xi} \quad \quad \forall \xi>0. \end{equs} \end{proof} We use Lemma \ref{lemma:concentration_q_summary} together with Lemma \ref{lemma:upper_summary} to bound the probability that the projection of the momentum will be large in a given ``bad'' direction $u \in \mathsf{u}$ at \textit{any} point on the trajectory. % We need the bound on $X_{h}^u$ in Lemma \ref{lemma:concentration_q_summary} to prove Lemma \ref{lemma:concentration_p_summary}, since the momentum at any given point on the trajectory is a function of both the initial position and initial momentum. Define $\overline{\mathsf{p}_i^u} := \sup_{0\leq t \leq T} |u^\top p_t(X_{i-1}, \mathsf{p}_{i-1})|$ for every $u \in \mathbb{S}^d$ and $i \in \mathbb{N}$. \begin{lemma} \label{lemma:concentration_p_summary} Let $u \in \mathsf{u}$. 
Then for all $\gamma >3\sqrt{2} +\frac{3}{2}\frac{b}{\sqrt{M}}$ we have \begin{equs} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \overline{\mathsf{p}_h^u} > \gamma] \leq \frac{2 \beta (s+1)}{\alpha} e^{-\lambda[\frac{\gamma}{3\sqrt{M}} - \frac{1}{2}\frac{b}{M}]} + (s+1) \times e^{-\frac{[\frac{\gamma}{3} - \frac{1}{2}\frac{b}{\sqrt{M}}]^2-1}{8}}. \end{equs} \end{lemma} \begin{proof} Since $\mathsf{p}_h^u \sim N(0,1)$, by the Hanson-Wright concentration inequality (see \citep{hanson1971bound, rudelson2013hanson}), we have \begin{equs} \mathbb{P}[\mathsf{p}_h^u>\gamma] \leq e^{-\frac{\gamma^2 -1}{8}} \quad \quad \textrm{ for } \gamma> \sqrt{2} \end{equs} and hence \begin{equs} \label{eq:d1} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \mathsf{p}_h^u>\gamma] \leq (s+1) \times e^{-\frac{\gamma^2-1}{8}} \quad \quad \textrm{ for } \gamma> \sqrt{2}. \end{equs} By Inequality \eqref{eq:a8} of Lemma \ref{lemma:upper2_p}, we have \begin{equs} \overline{\mathsf{p}_h^u} \leq \frac{3}{2} X^u_{h-1}\sqrt{M} + \frac{3}{2}\mathsf{p}_{h-1}^u + \frac{3}{2}\frac{b}{\sqrt{M}}, \end{equs} implying that \begin{equs} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \overline{\mathsf{p}_h^u} > \gamma] &\leq \mathbb{P}\left[\frac{3}{2} \sqrt{M} \sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} X^u_{h-1} + \frac{3}{2} \sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \mathsf{p}_{h-1}^u + \frac{3}{2}\frac{b}{\sqrt{M}} > \gamma \right]\\ &\leq \mathbb{P}\left[\frac{3}{2} \sqrt{M} \sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} X^u_{h-1} > \frac{\gamma}{2} - \frac{3}{4}\frac{b}{\sqrt{M}}\right] + \mathbb{P}\left[\frac{3}{2} \sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \mathsf{p}_{h-1}^u > \frac{\gamma}{2} - \frac{3}{4}\frac{b}{\sqrt{M}}\right]\\ % &= \mathbb{P}\left[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} X^u_{h-1} > \frac{\gamma}{3\sqrt{M}} - \frac{1}{2}\frac{b}{M}\right] + \mathbb{P}\left[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \mathsf{p}_{h-1}^u > \frac{\gamma}{3} - 
\frac{1}{2}\frac{b}{\sqrt{M}}\right]\\ % &\stackrel{{\scriptsize \textrm{Eq. }} \ref{eq:d1} \textrm{ and Lemma } \ref{lemma:concentration_q_summary}}{\leq} \frac{2 \beta (s+1)}{\alpha} e^{-\lambda [\frac{\gamma}{3\sqrt{M}} - \frac{1}{2}\frac{b}{M}]} + (s+1) \times e^{-\frac{[\frac{\gamma}{3} - \frac{1}{2}\frac{b}{\sqrt{M}}]^2-1}{8}} \end{equs} for $\gamma >3\sqrt{2} +\frac{3}{2}\frac{b}{\sqrt{M}}$. \end{proof} Next, we show Lemma \ref{lemma:energy_bounds}, which bounds the probability that the Euclidean norm of the position or momentum will be large at any point on the trajectory: \begin{lemma} \label{lemma:energy_bounds} Define $\overline{X_{h}} := \sup_{0\leq t\leq T} \|q_t(X_{h-1}, \mathsf{p}_{h-1})\|_2$ and $\overline{\mathsf{p}_h} := \sup_{0\leq t \leq T} \|p_t(X_{h-1}, \mathsf{p}_{h-1})\|_2$ for every $h \in \mathbb{N}$. Then \begin{equs} \mathbb{P}[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \overline{\mathsf{p}_h} > \xi\sqrt{2M+1}] \leq (s+1)\left(\frac{2 \hat{\beta}}{\alpha} e^{-\sqrt{\frac{m}{d}} \xi} + e^{-\frac{\xi^2-d}{8}}\right) \quad \quad \textrm{ for } \xi> \sqrt{2d}. \end{equs} and \begin{equs} \mathbb{P}\left[\sup_{\mathcal{I}-s \leq h \leq \mathcal{I}} \overline{X_{h}} > \xi \sqrt{1+\frac{1}{2M}}\right] \leq (s+1)\left(\frac{2 \hat{\beta}}{\alpha} e^{-\sqrt{\frac{m}{d}} \xi} + e^{-\frac{\xi^2-d}{8}}\right) \quad \quad \textrm{ for } \xi> \sqrt{2d}. \end{equs} \end{lemma} \begin{proof} By Lemma E.5 and Inequality E.14 of \citep{mangoubi2017rapid} (and setting $\hat{C} = \frac{10}{\frac{1}{8}mT^2}\sqrt{\frac{d}{m}}$ and $\lambda = \sqrt{\frac{m}{d}}$ in Lemma E.5), we have \begin{equs} \label{eq:t1} \mathbb{P}[\|X_{h}\|_2 > \xi] \leq \frac{2 \hat{\beta}}{\alpha} e^{-\sqrt{\frac{m}{d}} \xi} \quad \quad \forall \xi>0. 
\end{equs} Since $\|\mathsf{p}_{h-1}\|_2^2 \sim \chi^2_d$, by the Hanson-Wright concentration inequality, we have \begin{equs} \label{eq:t2} \mathbb{P}[\|\mathsf{p}_{h-1}\|_2>\xi] \leq e^{-\frac{\xi^2-d}{8}} \quad \quad \textrm{ for } \xi> \sqrt{2d}. \end{equs} Suppose that $\|X_{h-1}\|_2 \leq \xi$ and $\|\mathsf{p}_{h-1}\|_2 \leq \xi$. Then $U(X_{h-1}) \leq M \xi^2$, and $\mathcal{H}(X_{h-1}, \mathsf{p}_{h-1}) \leq M \xi^2 + \frac{1}{2}\xi^2 = (M+\frac{1}{2})\xi^2$. By the conservation of energy property of Hamiltonian dynamics, this implies $\overline{\mathsf{p}_h} \leq \sqrt{2\mathcal{H}(X_{h-1}, \mathsf{p}_{h-1})} \leq \xi\sqrt{2M+1}$ and $\overline{X_{h}} \leq \sqrt{\frac{1}{M}\mathcal{H}(X_{h-1}, \mathsf{p}_{h-1})} \leq \xi \sqrt{1+\frac{1}{2M}}$. % Therefore, Equations \eqref{eq:t1} and \eqref{eq:t2} imply that \begin{equs} \mathbb{P}[\overline{\mathsf{p}_h} > \xi\sqrt{2M+1}] &\leq \mathbb{P}[\|X_{h-1}\|_2 > \xi] + \mathbb{P}[\|\mathsf{p}_{h-1}\|_2>\xi]\\ &\leq \frac{2 \hat{\beta}}{\alpha} e^{-\sqrt{\frac{m}{d}} \xi} + e^{-\frac{\xi^2-d}{8}} \quad \quad \textrm{ for } \xi> \sqrt{2d} \end{equs} and \begin{equs} \mathbb{P}\left[\overline{X_{h}} > \xi \sqrt{1+\frac{1}{2M}}\right] \leq \frac{2 \hat{\beta}}{\alpha} e^{-\sqrt{\frac{m}{d}} \xi} + e^{-\frac{\xi^2-d}{8}} \quad \quad \textrm{ for } \xi> \sqrt{2d}. \end{equs} A union bound over $\mathcal{I}-s \leq h \leq \mathcal{I}$ completes the proof. \end{proof} We can now prove the main result of this section: \begin{lemma} \label{lemma:good_summary} With probability at least $1-4\delta$ we have $(q_t(X_i, \mathsf{p}_i), p_t(X_i, \mathsf{p}_i)) \in \mathsf{G}$ for every $0\leq t\leq T$ and every $0 \leq i \leq \mathcal{I}$. \end{lemma} \begin{proof} The proof follows directly from Lemmas \ref{lemma:concentration_p_summary} and \ref{lemma:energy_bounds} and our choice of constants $\mathsf{g}_1, \mathsf{g}_2, \mathsf{g}_3$.
\end{proof} \section{Leapfrog integrator error bounds} \label{appendix:SecondOrderEuler} In this section we use Assumption \ref{assumption:M3} to bound the error of one numerical step of the leapfrog integrator (Lemma \ref{lemma:euler_error_summary}): \begin{lemma} \label{lemma:euler_error_summary} Let $(\mathbf{q},\mathbf{p}) \in \mathbb{R}^{2d}$. Define $\overline{\mathfrak{p}^u} := \sup_{0\leq t \leq \eta} |u^\top p_t(\mathbf{q},\mathbf{p})|$ for every $u \in \mathsf{u}$. Also define $\overline{\mathfrak{q}} := \sup_{0\leq t \leq \eta} \|q_t(\mathbf{q},\mathbf{p})\|_2$ and $\overline{\mathfrak{p}} := \sup_{0\leq t \leq \eta} \|p_t(\mathbf{q},\mathbf{p})\|_2$. Then $\|q^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) - q_\eta(\mathbf{q},\mathbf{p})\|_2 \leq \frac{1}{6} \eta^3 \overline{\mathfrak{p}} M$ and\\ $\|p^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) - p_\eta(\mathbf{q},\mathbf{p})\|_2 \leq \frac{1}{3}\eta^3 \left[ (M)^2 \overline{\mathfrak{q}} + L_{\infty} \max_{1\leq i \leq r} (\overline{\mathfrak{p}^{u_i}})^{2} \sqrt{r} \right].$ \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:euler_error_summary}] We have \begin{equs} \label{eq:b1} \|{\nabla U}(q_t) - {\nabla U}(q_0)\|_2 \leq \|q_t - q_0\|_2 M \leq t \overline{\mathfrak{p}} M \quad \quad \forall 0\leq t \leq \eta. \end{equs} Hence, \begin{equs} \|q^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) - q_\eta(\mathbf{q},\mathbf{p})\|_2 &= \left \|\int_0^\eta \int_0^t {\nabla U}(q_\tau(\mathbf{q}, \mathbf{p}))- {\nabla U}(\mathbf{q}) \mathrm{d}\tau \mathrm{d}t \right \|_2\\ &\leq \int_0^\eta \int_0^t \| {\nabla U}(q_\tau(\mathbf{q}, \mathbf{p}))- {\nabla U}(\mathbf{q}) \|_2 \mathrm{d}\tau \mathrm{d}t\\ &\stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:b1}}{\leq} \int_0^\eta \int_0^t \tau \overline{\mathfrak{p}} M \mathrm{d}\tau \mathrm{d}t\\ &= \frac{1}{6} \eta^3 \overline{\mathfrak{p}} M.\\ \end{equs} This completes the first Inequality of Lemma \ref{lemma:euler_error_summary}. 
To prove the second Inequality of Lemma \ref{lemma:euler_error_summary}, we note that \begin{equs} \label{eq:b2} |u^\top(q_t - q_0)| \leq t \overline{\mathfrak{p}}^u \quad \quad \forall 0\leq t \leq \eta. \end{equs} Therefore, by Assumption \ref{assumption:M3}, we have \begin{equs} \label{eq:b3} \|H_{q_t}p_t - H_{q_0}p_0\|_2 &\leq \|H_{q_t}p_t - H_{q_0}p_t\|_2 + \|H_{q_0}p_t - H_{q_0}p_0\|_2\\ &= \|(H_{q_t} - H_{q_0})p_t\|_2 + \|H_{q_0}(p_t - p_0)\|_2\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:M3}}{\leq} L_{\infty} \max_i |u_i^\top p_t | \times \max_i |u_i^\top(q_t - q_0)| \sqrt{r} + M \|p_t - p_0\|_2\\ &\stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:b2}}{\leq} L_{\infty} \max_i \overline{\mathfrak{p}^{u_i}} \times \max_i t \overline{\mathfrak{p}^{u_i}} \sqrt{r} + M \|p_t - p_0\|_2\\ &\leq L_{\infty} t \max_i (\overline{\mathfrak{p}^{u_i}})^{2} \sqrt{r} + M\times t M \overline{\mathfrak{q}}. \end{equs} Moreover, \begin{equs} \label{eq:leapfrog1} \bigg \|{\nabla U}\left(q_0 + \eta p_0 -\frac{1}{2}\eta^2 {\nabla U}(q_0)\right) &- {\nabla U}(q_0 + \eta p_0) \bigg\|_2\\ &\leq M \left\|(q_0 + \eta p_0 -\frac{1}{2}\eta^2 {\nabla U}(q_0)) - (q_0 + \eta p_0) \right \|_2\\ &\leq \frac{1}{2}\eta^2 M \|{\nabla U}(q_0)\|_2 \\ &\leq \frac{1}{2}\eta^2 M \times M \overline{\mathfrak{q}}. 
\end{equs} and \begin{equs} \label{eq:leapfrog2} \|\frac{1}{\eta} [{\nabla U}(q_0 + \eta p_0) - &{\nabla U}(q_0)] - H_{q_0}p_0\|_2 = \|\frac{1}{\eta} \int_0^\eta H_{q_0 + t p_0}p_0 \mathrm{d}t - H_{q_0}p_0\|_2\\ &= \|\frac{1}{\eta} \int_0^\eta (H_{q_0 + t p_0} - H_{q_0})p_0 \mathrm{d}t\|_2\\ &\leq \frac{1}{\eta} \int_0^\eta \|(H_{q_0 + t p_0} - H_{q_0})p_0\|_2 \mathrm{d}t\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:M3}}{\leq} \frac{1}{\eta} \int_0^\eta L_{\infty} \max_i |u_i^\top p_0 | \times \max_i |u_i^\top(q_0 + t p_0 - q_0)| \sqrt{r} \mathrm{d}t\\ &= \frac{\eta}{2} L_{\infty} \max_i |u_i^\top p_0 | \times \max_i |u_i^\top p_0| \sqrt{r}\\ &\leq \eta L_{\infty}\max_i (\overline{\mathfrak{p}^{u_i}})^2 \sqrt{r}. \end{equs} Hence, \begin{equs} \|p^\bigcirc_{\eta}(\mathbf{q},\mathbf{p}) &- p_\eta(\mathbf{q},\mathbf{p})\|_2\\ & = \left \|\int_0^\eta \int_0^t H_{q_\tau(\mathbf{q}, \mathbf{p})} \mathbf{p}_\tau(\mathbf{q}, \mathbf{p}) - \frac{1}{\eta}\left[{\nabla U}\left(\mathbf{q} + \eta \mathbf{p} -\frac{1}{2}\eta^2 {\nabla U}(\mathbf{q})\right) - {\nabla U}(\mathbf{q})\right] \mathrm{d} \tau \mathrm{d}t \right \|_2\\ &\stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:leapfrog1}, \ref{eq:leapfrog2}}{\leq} \int_0^\eta \int_0^t \| H_{q_\tau(\mathbf{q}, \mathbf{p})} \mathbf{p}_\tau(\mathbf{q}, \mathbf{p}) - H_{\mathbf{q}} \mathbf{p} \|_2 \mathrm{d} \tau \mathrm{d}t + \frac{1}{12}\eta^3 M^2 \overline{\mathfrak{q}} + \frac{1}{6}\eta^3 L_{\infty}\max_i (\overline{\mathfrak{p}^{u_i}})^2 \sqrt{r}\\ &\stackrel{{\scriptsize \textrm{Eq.
}}\ref{eq:b3}}{\leq} \int_0^\eta \int_0^t L_{\infty} \tau \max_i (\overline{\mathfrak{p}^{u_i}})^{2} \sqrt{r} + \tau (M)^2 \overline{\mathfrak{q}} \mathrm{d} \tau \mathrm{d}t + \frac{1}{12}\eta^3 M^2 \overline{\mathfrak{q}} + \frac{1}{6}\eta^3 L_{\infty}\max_i (\overline{\mathfrak{p}^{u_i}})^2 \sqrt{r}\\ &= \frac{1}{6}\eta^3 \left[L_{\infty} \max_i (\overline{\mathfrak{p}^{u_i}})^{2} \sqrt{r} + (M)^2 \overline{\mathfrak{q}} \right] + \frac{1}{12}\eta^3 M^2 \overline{\mathfrak{q}} + \frac{1}{6} \eta^3 L_{\infty}\max_i (\overline{\mathfrak{p}^{u_i}})^2 \sqrt{r}\\ &\leq \frac{1}{3} \eta^3 \left[L_{\infty} \max_i (\overline{\mathfrak{p}^{u_i}})^{2} \sqrt{r} + (M)^2 \overline{\mathfrak{q}} \right]. \end{equs} This completes the proof of the second Inequality of Lemma \ref{lemma:euler_error_summary}. \end{proof} \section{Analysis of the coupled chains from a cold start} \label{section:main} In this section we first bound the error of the toy integrator (Lemmas \ref{lemma:local_toy} and \ref{thm:Approx_integrator}), and then bound the error of the actual numerical HMC chain $X^\dagger$ by showing that the chain $\hat{X}$ generated with the toy integrator and the numerical chain $X^\dagger$ are equal with high probability (Lemma \ref{lemma:coupling}). Since this section deals with the cold-start setting, we assume for the rest of the section that Assumption \ref{assumption:tail} holds as well.
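The cubic dependence on the step size in the leapfrog error bound above can be observed numerically. The following sketch (illustrative only, not part of the analysis; the one-dimensional harmonic potential $U(q)=q^2/2$ and all function names are our own choices) compares one leapfrog step against the exact Hamiltonian flow and checks that halving $\eta$ shrinks the one-step error by roughly a factor of $2^3 = 8$, consistent with an $O(\eta^3)$ local error.

```python
import math

def leapfrog_step(q, p, eta, grad_U):
    # One step of the leapfrog (velocity Verlet) integrator for
    # the Hamiltonian H(q, p) = U(q) + p^2 / 2.
    p_half = p - 0.5 * eta * grad_U(q)
    q_new = q + eta * p_half
    p_new = p_half - 0.5 * eta * grad_U(q_new)
    return q_new, p_new

def exact_flow(q, p, t):
    # Exact Hamiltonian flow for U(q) = q^2 / 2 (harmonic oscillator):
    # the flow is a rotation in phase space.
    return q * math.cos(t) + p * math.sin(t), -q * math.sin(t) + p * math.cos(t)

def one_step_error(eta, q0=1.0, p0=1.0):
    # Position and momentum error of a single leapfrog step of size eta.
    q_num, p_num = leapfrog_step(q0, p0, eta, lambda q: q)
    q_ex, p_ex = exact_flow(q0, p0, eta)
    return abs(q_num - q_ex), abs(p_num - p_ex)

eq1, ep1 = one_step_error(0.1)
eq2, ep2 = one_step_error(0.05)
# Both ratios come out close to 2^3 = 8.
print(eq1 / eq2, ep1 / ep2)
```

With these step sizes both ratios land near $8$, matching the cubic local error rate that the global $O(\eta^2)$ bound of the next lemma is built on.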
First, we use our bound on the local leapfrog integrator error (Lemma \ref{lemma:euler_error_summary}) to bound the error of the local toy integrator $\diamondsuit$: \begin{lemma} \label{lemma:local_toy} For any $(\mathbf{q},\mathbf{p}) \in \mathbb{R}^d\times \mathbb{R}^d$ and $\eta>0$, we have \begin{equs} \label{eq:b4} \|q^\diamondsuit_{\eta}(\mathbf{q},\mathbf{p}) - q_\eta(\mathbf{q},\mathbf{p})\|_2 \leq \eta^3 \epsilon_1 \end{equs} and \begin{equs} \label{eq:b5} \|p^\diamondsuit_{\eta}(\mathbf{q},\mathbf{p}) - p_\eta(\mathbf{q},\mathbf{p})\|_2 \leq \eta^3 \epsilon_2. \end{equs} \end{lemma} \begin{proof} The proof follows directly from Lemma \ref{lemma:euler_error_summary}, Definition \ref{defn:toy}, and the definitions of $\mathsf{G}$, $\epsilon_1$ and $\epsilon_2$. \end{proof} Next, we use our bound on the local toy integrator (Lemma \ref{lemma:local_toy}) to bound the error of the global toy integrator: \begin{lemma} \label{thm:Approx_integrator} For $T$, $\eta$ satisfying $\eta \leq T \leq \frac{1}{6} \frac{\sqrt{m}}{M}$ and $\frac{T}{\eta} \in \mathbb{N}$, the integrator $(q_T^{\diamonddiamond \eta}, p_T^{\diamonddiamond \eta})$ gives error in the position of \begin{equs} \label{eq:f20} \|q_T^{\diamonddiamond \eta}(\mathbf{q},\mathbf{p}) - q_T(\mathbf{q},\mathbf{p})\|_2 \leq T\times \eta^2 \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e \end{equs} and an error in momentum of \begin{equs} \label{eq:f20p} \|p_T^{\diamonddiamond \eta}(\mathbf{q},\mathbf{p}) - p_T(\mathbf{q},\mathbf{p})\|_2 \leq T\times \eta^2 \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e \times \sqrt{M} \end{equs} for every $(\mathbf{q},\mathbf{p}) \in \mathbb{R}^d\times \mathbb{R}^d$. \end{lemma} \begin{proof} The proof follows roughly along the lines of Inequality C.4 of Lemma C.2 in \citep{mangoubi2017rapid}: Set notation for $(q^i,p^i)$ as in Algorithm \ref{alg:integrator_toy} for all $i \in \mathbb{Z}^{*}$. 
Define $\hat{q}^{i+1}:= q_\eta(q^i,p^i)$ and $\hat{p}^{i+1}:= p_\eta(q^i,p^i)$ for every $i \in \mathbb{Z}^{*}$. Define $T_i:= T- \eta \times i$ for all $i$. Therefore, since $T_{i+1} \leq T \leq \frac{1}{6} \frac{\sqrt{m}}{M} \leq \frac{1}{\sqrt{M}}$, for all $i\leq\frac{T}{\eta}-1$, we have: \begin{equs} \label{eq:f19} \|q_{T_{i+1}}(q^{i+1}, p^{i+1}) - q_{T_i}(q^i, p^i)\|_2 &= \|q_{T_{i+1}}(q^{i+1}, p^{i+1}) - q_{T_{i+1}}(\hat{q}^{i+1}, \hat{p}^{i+1})\|_2\\ & \leq \frac{1}{2}\left(\|q^{i+1} - \hat{q}^{i+1} \|_2 + \frac{\|p^{i+1} - \hat{p}^{i+1} \|_2}{\sqrt{M}}\right)e^{T_{i+1}\sqrt{M}}\\ &\qquad \qquad + \frac{1}{2}\left(\|q^{i+1} - \hat{q}^{i+1} \|_2 - \frac{\|p^{i+1} - \hat{p}^{i+1} \|_2}{\sqrt{M}}\right)e^{-T_{i+1}\sqrt{M}}\\ & \leq \frac{1}{2}\left(\|q^{i+1} - \hat{q}^{i+1} \|_2 + \frac{\|p^{i+1} - \hat{p}^{i+1} \|_2}{\sqrt{M}}\right)e \\ &\qquad \qquad + \frac{1}{2}\left(\|q^{i+1} - \hat{q}^{i+1} \|_2 - \frac{\|p^{i+1} - \hat{p}^{i+1} \|_2}{\sqrt{M}}\right)e^{0}\\ & \leq \left(\|q^{i+1} - \hat{q}^{i+1} \|_2 + \frac{\|p^{i+1} - \hat{p}^{i+1} \|_2}{\sqrt{M}}\right)e\\ & \stackrel{{\scriptsize \textrm{Lemma }}\ref{lemma:local_toy}}{\leq} \left(\eta^3 \epsilon_1 + \frac{\eta^3 \epsilon_2}{\sqrt{M}}\right)e = \eta^3 \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e, \end{equs} where the first inequality follows from Lemma \ref{MS2}, and the second inequality is true since $0\leq T_{i+1} \leq \frac{1}{\sqrt{M}}$ and since the functions $e^t + e^{-t}$ and $e^t - e^{-t}$ are both nondecreasing in $t$ for $t\geq0$.
Therefore, since $q^{\frac{T}{\eta}} = q_T^{\diamonddiamond \eta}(\mathbf{q},\mathbf{p})$, $T_0=T$, and $(q^0,p^0) =(\mathbf{q},\mathbf{p})$, by the triangle inequality we have \begin{equs} \|q_T^{\diamonddiamond \eta}(\mathbf{q},\mathbf{p}) - q_T(\mathbf{q},\mathbf{p})\|_2 &= \|q^{\frac{T}{\eta}} - q_{T_0}(q^0,p^0)\|_2\\ &\leq \|q^{\frac{T}{\eta}} - q_{T_{(\frac{T}{\eta}-1)}}(q^{\frac{T}{\eta}-1}, p^{\frac{T}{\eta}-1}) \|_2 + \sum_{i=0}^{\frac{T}{\eta}-2} \|q_{T_{i+1}}(q^{i+1}, p^{i+1}) - q_{T_i}(q^i, p^i)\|_2\\ &= \|q^{\frac{T}{\eta}} - \hat{q}^{\frac{T}{\eta}} \|_2 + \sum_{i=0}^{\frac{T}{\eta}-2} \|q_{T_{i+1}}(q^{i+1}, p^{i+1}) - q_{T_i}(q^i, p^i)\|_2\\ &\stackrel{{\scriptsize \textrm{Lemma }}\ref{lemma:local_toy}}{\leq} \eta^3 \epsilon_1 + \sum_{i=0}^{\frac{T}{\eta}-2} \|q_{T_{i+1}}(q^{i+1}, p^{i+1}) - q_{T_i}(q^i, p^i)\|_2\\ &\stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:f19}}{\leq} \eta^3 \epsilon_1 + \sum_{i=0}^{\frac{T}{\eta}-2} \eta^3 \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e \leq \frac{T}{\eta}\times \eta^3 \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e. \end{equs} This completes the proof of Inequality \eqref{eq:f20}. The proof of Inequality \eqref{eq:f20p} is nearly identical except that we gain a factor of $\sqrt{M}$; we omit the details. \end{proof} We now apply Lemmas \ref{lemma:good_summary} and \ref{thm:Approx_integrator} to show that the toy chain $\hat{X}$ coincides with the numerical HMC chain $X^\dagger$ with high probability, and use this fact to bound the error of the numerical chain $X^\dagger$: \begin{lemma} \label{lemma:coupling} Set \begin{equs} \eta \leq \sqrt{\frac{{\frac{\epsilon}{800\mathcal{I}} \min(1,\frac{1}{\sqrt{M}})}}{T \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e}}. \end{equs} Then with probability at least $1-4\delta$ we have $X_i^\dagger=\hat{X}_i$ and $\|X_i^\dagger-X_i\|_2 \leq \frac{1}{2} \epsilon$ for all $i \leq \mathcal{I}$. 
\end{lemma} \begin{proof} For every $j \in \mathbb{Z}^{*}$, define inductively on $i$ the Markov chain $X^{(j)} = X_j^{(j)}, X_{j+1}^{(j)}, X_{j+2}^{(j)}, \ldots$ by the recursion \begin{equs} X_{j}^{(j)} &= \hat{X}_j\\ X_{i+1}^{(j)} &= q_T(X_i^{(j)},\mathsf{p}_i) \qquad \qquad \forall i \geq j. \end{equs} In particular, for every $i$ we have $\hat{X}_i = X_i^{(i)}$ and $X_i = X_i^{(0)}$. Then \begin{equs} \label{eq:e1} \|\hat{X}_i - X_i\|_2 &= \left\| \sum_{j=0}^{i-1} \left(X_i^{(j+1)} - X_i^{(j)}\right)\right\|_2 \leq \sum_{j=0}^{i-1} \|X_i^{(j+1)} - X_i^{(j)}\|_2\\ &\stackrel{{\scriptsize \textrm{Theorem }}\ref{thm:contraction}}{\leq} \sum_{j=0}^{i-1} \|X_{j+1}^{(j+1)} - X_{j+1}^{(j)}\|_2 = \sum_{j=0}^{i-1} \|\hat{X}_{j+1} - q_T(\hat{X}_j,\mathsf{p}_j)\|_2\\ &= \sum_{j=0}^{i-1} \|q_T^{\diamonddiamond \eta}(\hat{X}_j,\mathsf{p}_j) - q_T(\hat{X}_j,\mathsf{p}_j)\|_2 \stackrel{{\scriptsize \textrm{Lemma }}\ref{thm:Approx_integrator}}{\leq} \sum_{j=0}^{i-1} \frac{\epsilon}{800\mathcal{I}}\min(1,\frac{1}{\sqrt{M}})\\ &= i\times \frac{\epsilon}{800\mathcal{I}}\min(1,\frac{1}{\sqrt{M}}) \leq \frac{\epsilon}{800}\min(1,\frac{1}{\sqrt{M}}). \end{equs} Hence, by Lemma \ref{thm:Approx_integrator}, for all $0 \leq i \leq \mathcal{I}$, and all $0\leq t \leq T$ s.t. $\frac{t}{\eta} \in \mathbb{N}$, we have \begin{equs} \label{eq:toytrajectory_q} \|q_t^{\diamonddiamond \eta}(\hat{X}_i,\mathsf{p}_i) - q_t(X_i,\mathsf{p}_i)\|_2 &\leq \|q_t^{\diamonddiamond \eta}(\hat{X}_i,\mathsf{p}_i) - q_t(\hat{X}_i,\mathsf{p}_i)\|_2 + \|q_t(\hat{X}_i,\mathsf{p}_i) - q_t(X_i,\mathsf{p}_i)\|_2\\ &\stackrel{{\scriptsize \textrm{Lemma }}\ref{thm:Approx_integrator}}{\leq} \frac{\epsilon}{800\mathcal{I}}\min(1,\frac{1}{\sqrt{M}}) + \|q_t(\hat{X}_i,\mathsf{p}_i) - q_t(X_i,\mathsf{p}_i)\|_2\\ &\stackrel{{\scriptsize \textrm{Th. }}\ref{thm:contraction}}{\leq} \frac{\epsilon}{800\mathcal{I}}\min(1,\frac{1}{\sqrt{M}}) + \|\hat{X}_i - X_i\|_2\\ &\stackrel{{\scriptsize \textrm{Eq.
}}\ref{eq:e1}}{\leq} \frac{\epsilon}{800\mathcal{I}}\min(1,\frac{1}{\sqrt{M}}) + \frac{1}{800}\epsilon\min(1,\frac{1}{\sqrt{M}}) \leq \frac{\epsilon}{400}\min(1,\frac{1}{\sqrt{M}}) \end{equs} and \begin{equs} \label{eq:toytrajectory_p} \|p_t^{\diamonddiamond \eta}(\hat{X}_i,\mathsf{p}_i) - p_t(X_i,\mathsf{p}_i)\|_2 &\leq \|p_t^{\diamonddiamond \eta}(\hat{X}_i,\mathsf{p}_i) - p_t(\hat{X}_i,\mathsf{p}_i)\|_2 + \|p_t(\hat{X}_i,\mathsf{p}_i) - p_t(X_i,\mathsf{p}_i)\|_2\\ &\stackrel{{\scriptsize \textrm{Lemma }}\ref{thm:Approx_integrator}}{\leq} \frac{\epsilon}{800\mathcal{I}}\min(1,\frac{1}{\sqrt{M}}) + \|p_t(\hat{X}_i,\mathsf{p}_i) - p_t(X_i,\mathsf{p}_i)\|_2\\ &\stackrel{{\scriptsize \textrm{Lemma }}\ref{MS2}}{\leq} \frac{\epsilon}{800\mathcal{I}}\sqrt{M}\min(1,\frac{1}{\sqrt{M}}) + \|\hat{X}_i - X_i\|_2 \sqrt{M}e^{T\sqrt{M}}\\ &\stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:e1}}{\leq} \frac{\epsilon}{800\mathcal{I}}\sqrt{M}\min(1,\frac{1}{\sqrt{M}})\\ &\qquad \qquad + \frac{\epsilon}{140}\sqrt{M}\min(1,\frac{1}{\sqrt{M}}) \leq \frac{\epsilon}{100}\sqrt{M}\min(1,\frac{1}{\sqrt{M}}). \end{equs} Consider any $0 \leq i \leq \mathcal{I}$ and any $0\leq t \leq T$ s.t. $\frac{t}{\eta} \in \mathbb{N}$. Let $(\mathsf{q}, \mathsf{p}) = (q_t^{\diamonddiamond \eta}(\hat{X}_i,\mathsf{p}_i), p_t^{\diamonddiamond \eta}(\hat{X}_i,\mathsf{p}_i))$ be the position and momentum on the trajectory of the toy HMC chain $\hat{X}$ at time $t$, and let $(\tilde{\mathsf{q}}, \tilde{\mathsf{p}}) = (q_t(X_i,\mathsf{p}_i), p_t(X_i,\mathsf{p}_i))$ be the position and momentum at the same time $t$ on the trajectory of the idealized HMC chain $X$. Then by Lemma \ref{MS2} we have for all $0\leq \tau \leq \eta$ that % \begin{equs} \|q_\tau(\mathsf{q}, \mathsf{p}) - q_\tau(\tilde{\mathsf{q}}, \tilde{\mathsf{p}})\|_2 &\leq (\|\mathsf{q}-\tilde{\mathsf{q}}\|_2 + \frac{\|\mathsf{p}-\tilde{\mathsf{p}}\|_2}{\sqrt{M}})e^{\eta \sqrt{M}} \stackrel{{\scriptsize \textrm{Eq. 
}}\ref{eq:toytrajectory_q}, \ref{eq:toytrajectory_p}}{\leq} (\frac{\epsilon}{400} +\frac{\epsilon}{100})e^{\eta \sqrt{M}} \leq \frac{\epsilon}{20} \end{equs} and \begin{equs} \|p_\tau(\mathsf{q}, \mathsf{p}) - p_\tau(\tilde{\mathsf{q}}, \tilde{\mathsf{p}})\|_2 &\leq (\|\mathsf{q}-\tilde{\mathsf{q}}\|_2 + \frac{\|\mathsf{p}-\tilde{\mathsf{p}}\|_2}{\sqrt{M}})e^{\eta \sqrt{M}}\\ & \stackrel{{\scriptsize \textrm{Eq. }}\ref{eq:toytrajectory_q}, \ref{eq:toytrajectory_p}}{\leq} (\frac{\epsilon}{400} +\frac{\epsilon}{100})\sqrt{M}e^{\eta \sqrt{M}}\min(1,\frac{1}{\sqrt{M}}) \leq \frac{\epsilon}{20}. \end{equs} Therefore, $(q_\tau(\mathsf{q}, \mathsf{p}), p_\tau(\mathsf{q}, \mathsf{p}))\in \mathsf{G}_\epsilon$ whenever $(q_\tau(\tilde{\mathsf{q}}, \tilde{\mathsf{p}}), p_\tau(\tilde{\mathsf{q}}, \tilde{\mathsf{p}})) \in\mathsf{G}$. Let $\mathcal{G}$ be the event that $(q_t(X_i, \mathsf{p}_i), p_t(X_i, \mathsf{p}_i)) \in \mathsf{G}$ for every $0\leq t\leq T$ and every $0 \leq i \leq \mathcal{I}$. Then $(q_\tau(\mathsf{q}, \mathsf{p}), p_\tau(\mathsf{q}, \mathsf{p}))\in \mathsf{G}_\epsilon$ whenever $\mathcal{G}$ occurs. Therefore, if the event $\mathcal{G}$ occurs, Definition \ref{defn:toy} implies that the toy integrator is the same as the Euler integrator for every point on the trajectories of $\hat{X}$. Hence, $\hat{X}_i = X^\dagger_i$ for all $i \leq \mathcal{I}$ if $\mathcal{G}$ occurs. Moreover, Inequality \eqref{eq:e1} implies that $\|\hat{X}_i - X_i\|_2 \leq \frac{1}{2}\epsilon$ for all $0 \leq i \leq \mathcal{I}$, and by Lemma \ref{lemma:good_summary}, $\mathbb{P}[\mathcal{G}] \geq 1-4\delta$. % Therefore, $\hat{X}_i = X^\dagger_i$ and $\|X_i^\dagger - X_i\|_2 \leq \frac{1}{2}\epsilon$ with probability at least $1-4\delta$.
\end{proof} Finally, we use Lemma \ref{lemma:coupling} to show that the numerical HMC chain $X^\dagger$ converges to within an error $\epsilon$ of the target distribution in $\mathcal{I} \times \frac{T}{\eta}$ gradient evaluations: \begin{theorem} \label{thm:main} Fix $0<c\leq1$. Set $0 < T \leq \frac{1}{6} \frac{\sqrt{m}}{M}$. % Set $\mathcal{I} \geq \frac{\log(\frac{2R}{\epsilon})}{\Delta c}$, where $\Delta = \frac{1}{8} (\sqrt{m}T)^2$ and $R:= d+ \sqrt{4d \log(\frac{M}{m}) - 8 \log(\delta)}$. Set \begin{equs} \eta \leq \sqrt{\frac{{\frac{\epsilon}{800\mathcal{I}} \min(1,\frac{1}{\sqrt{M}})}}{T \left(\epsilon_1 + \frac{\epsilon_2}{\sqrt{M}}\right)e}} \end{equs} Then for any $c\mathcal{I} \leq i \leq \mathcal{I}$, with probability at least $1-5\delta$ we have \begin{equs} \|X_i^{\dagger} -Y_i\|_2 \leq \epsilon, \end{equs} where $Y_i \sim \pi$. Moreover, the number of gradient evaluations is at most $\mathcal{I} \times \frac{T}{\eta}$. \end{theorem} \begin{proof} Let $Z$ be a $\chi$ random variable with $d$ degrees of freedom. 
Since $U = -\log(\pi)$ is $m$-strongly convex, we have \begin{equs} \mathbb{P}[\|Y_i\|_2 > \gamma] &= \frac{\int_{\|z\|_2> \gamma} \pi(z) \mathrm{d}z}{\int_{\mathbb{R}^d} \pi(z) \mathrm{d}z} = \frac{\int_{\|z\|_2> \gamma} \exp(-U(z)) \mathrm{d}z}{\int_{\mathbb{R}^d} \exp(-U(z)) \mathrm{d}z}\\ &\leq \frac{\int_{\|z\|_2> \gamma} \exp(-\frac{1}{2}m\|z\|_2^2) \mathrm{d}z}{\int_{\mathbb{R}^d} \exp(-\frac{1}{2}M \|z\|_2^2) \mathrm{d}z}\\ &\leq \frac{\int_{\mathbb{R}^d} \exp(-\frac{1}{2}m \|z\|_2^2) \mathrm{d}z}{\int_{\mathbb{R}^d} \exp(-\frac{1}{2}M \|z\|_2^2) \mathrm{d}z} \times \frac{\int_{\|z\|_2> \gamma} \exp(-\frac{1}{2}m\|z\|_2^2) \mathrm{d}z}{\int_{\mathbb{R}^d} \exp(-\frac{1}{2}m \|z\|_2^2) \mathrm{d}z}\\ &=\left(\frac{M}{m}\right)^{\frac{d}{2}} \times \mathbb{P}[Z >\gamma]\\ &\leq \left(\frac{M}{m}\right)^{\frac{d}{2}} \times e^{-\frac{(\gamma-d)^2}{8}}\\ &=e^{\frac{d}{2} \log(\frac{M}{m}) -\frac{(\gamma-d)^2}{8}} \end{equs} for all $\gamma >2d$, where the third inequality holds by the Hanson-Wright inequality. Thus, \begin{equs} \mathbb{P}\left[\|Y_i\|_2 > R\right] < \delta. \end{equs} Hence, for any $c\mathcal{I} \leq i \leq \mathcal{I}$ we have with probability at least $1-5\delta$ that \begin{equs} \|X_i^{\dagger} - Y_i\|_2 &\leq \|X^{\dagger}_i - X_i\|_2 + \|X_i - Y_i\|_2\\ &\stackrel{{\scriptsize \textrm{Lemma }}\ref{lemma:coupling}}{\leq} \frac{\epsilon}{2} + \|X_i - Y_i\|_2\\ &\stackrel{{\scriptsize \textrm{Theorem }}\ref{thm:contraction}}{\leq} \frac{\epsilon}{2} + (1-\Delta)^i \times \|X_0 - Y_0\|_2\\ &\leq \frac{\epsilon}{2} + (1-\Delta)^i \times R\\ &\leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \end{equs} \end{proof} Next we show convergence in the Prokhorov metric: \begin{corollary} \label{cor:prokhorov} Choose $\delta = \frac{\epsilon}{5}$. Fix $c$, $\mathcal{I}$ and $\eta$ as in Theorem \ref{thm:main}.
Then \begin{equs} \label{eq:prok} \mathsf{Prok}(\mathcal{L}(X_i), \pi) \leq 2\epsilon \qquad \qquad \forall \, c\mathcal{I} \leq i \leq \mathcal{I}. \end{equs} \end{corollary} \begin{proof} The proof follows directly from Theorem \ref{thm:main}. \end{proof} Finally, we compute a more explicit bound on the number of gradient evaluations. After $\mathcal{I}$ steps the error is bounded by $\|X^{\dagger}_{\mathcal{I}}- X_{\mathcal{I}}\|_2 < \epsilon$ (Theorem \ref{thm:main}), and the number of gradient evaluations is $\mathcal{I} \times \frac{T}{\eta}$. % To determine the bound on the required number of gradient evaluations, we may set $M=1$, since this is equivalent to tuning the algorithm parameters (see Remark \ref{remark:MassMatrix}). % Setting $M=1$, we have $\epsilon_1 = \tilde{O}( \sqrt{d}\kappa^{2.5} \log(\frac{1}{\delta}))$ and $\epsilon_2 = \tilde{O}([\sqrt{d} \kappa^{2.5} + L_{\infty} \left(\kappa^4 + b^2\kappa^2\right)\sqrt{r}]\log(\frac{1}{\delta}) )$, implying a numerical step size of $\eta = \frac{1}{\sqrt{\mathcal{I}T}} \tilde{\Theta}(\epsilon^{\frac{1}{2}}[ d^{\frac{-1}{4}}\kappa^{-1.25} + \frac{1}{\sqrt{L_{\infty}}} \left(\kappa^{-2} + \kappa^{-1} b^{-1}\right) r^{\frac{-1}{4}}] \log^{-\frac{1}{2}}(\frac{1}{\delta}))$. % Recalling that $\mathcal{I}= \tilde{\Theta}(\kappa^2)$ and $T = \tilde{\Theta}(\kappa^{-\frac{1}{2}})$, the number of gradient evaluations is % $\mathcal{I} \times \frac{T}{\eta} = \tilde{O}\left(\epsilon^{-\frac{1}{2}} \bigg[ d^{\frac{1}{4}}\kappa^{3.5} + \sqrt{L_{\infty}} \left(\kappa^{4.25} + b\kappa^{3.25}\right) r^{\frac{1}{4}} \bigg] \log^{\frac{1}{2}}(\frac{1}{\delta})\right)$. We compute the number of gradient evaluations in more detail: \begin{lemma} \label{cor:RunningTime} Suppose that $M=1$. Let $0 < T \leq \frac{1}{6} \frac{\sqrt{m}}{M}$, and fix $c$, $\mathcal{I}$ and $\eta$ as in Theorem \ref{thm:main}, with $\eta$ set to the maximum value allowed in Theorem \ref{thm:main}. 
% Then for any $c\mathcal{I} \leq i \leq \mathcal{I}$ we have with probability at least $1-5\delta$ that \begin{equs} \|X_i^{\dagger}-Y_i\|_2 \leq \epsilon, \end{equs} where $Y_i \sim \pi$. Moreover, for $T = \Theta(\frac{\sqrt{m}}{M})$ the number of gradient evaluations is \begin{equs} O\left(\epsilon^{-\frac{1}{2}} \bigg[ d^{\frac{1}{4}}\kappa^{3.5} + \sqrt{L_{\infty}} \left(\kappa^{4.25} + b\kappa^{3.25}\right) r^{\frac{1}{4}} \bigg] \log^{\frac{1}{2}}(\frac{1}{\delta})\right). \end{equs} \end{lemma} \begin{proof} \begin{comment} \begin{equs} \alpha &= \frac{19}{20}\\ \beta &= 2 e^{\frac{1}{2}(\lambda (T+ \frac{m}{8\sqrt{M}}T^2))^2} \times \exp (\lambda [\frac{\frac{4}{\lambda} + (1+\frac{m}{4M})\frac{b}{2}T^2 + \frac{1}{2}\lambda (T+ \frac{m}{8\sqrt{M}}T^2)^2}{\frac{3m}{8}T^2}] (1-\frac{3m}{8}T^2) + (1+\frac{m}{4M})\frac{b}{2}T^2),\\ \hat{\beta} &= e^{\frac{10}{\frac{1}{8}mT^2}} \end{equs} \begin{equs} \mathsf{g}_1 &= \lambda^{-1}3\sqrt{M}\log(\frac{4\beta}{\lambda \alpha})+\frac{3}{2}\frac{b}{\sqrt{M}},\\ \mathsf{g}_2 &= \frac{1}{\sqrt{1+\frac{1}{2M}}}\left[\sqrt{\frac{d}{m}}\log\left(\frac{4\hat{\beta}(\mathcal{I}+1)}{\alpha \delta}\right) + \sqrt{8\log\left(\frac{2(\mathcal{I}+1)}{\delta}\right)+2d}\right]\\ \mathsf{g}_3 &= \frac{\sqrt{1+\frac{1}{2M}}}{\sqrt{2M+1}}\mathsf{g}_2\\ \end{equs} \begin{equs} \epsilon_1 = \frac{1}{6} (\mathsf{g}_3+{\epsilon}) M, \qquad \epsilon_2 = \frac{1}{6} \left[ (M)^2 (\mathsf{g}_2+{\epsilon}) + L_{\infty} (\mathsf{g}_1+{\epsilon})^{2}\sqrt{r} \right]. 
\end{equs} \end{comment} For $T = \Theta(\frac{\sqrt{m}}{M})$, we have \begin{equs} \log(\beta) \sim \lambda^2 \frac{m}{M^2} + \lambda\left(\frac{M^2}{m^2} + \frac{b}{m} + \frac{1}{m}\right), \end{equs} \begin{equs} \log(\hat{\beta}) \sim \frac{M^2}{m^2}, \end{equs} \begin{equs} \mathsf{g}_1 &\sim \lambda^{-1}\sqrt{M}\log(\beta)+\frac{b}{\sqrt{M}} -\log(\lambda),\\ \left(\sqrt{1+\frac{1}{M}}\right)\mathsf{g}_2 &\sim \sqrt{\frac{d}{m}}\left(\frac{M^2}{m^2} + \log(\frac{1}{\delta})\right) + \sqrt{\log(\frac{1}{\delta})} + \sqrt{d}. \ \end{equs} Set $\lambda=\frac{1}{16}$. Since $M=1$, we have \begin{equs} \epsilon_1 &\sim \mathsf{g}_3 \sim \mathsf{g}_2 \sim \sqrt{\frac{d}{m}}\left(\frac{1}{m^2}+\log(\frac{1}{\delta})\right) + \sqrt{d}+ \sqrt{\log(\frac{1}{\delta})}\\ &= \tilde{O}\left(\sqrt{d}\left(\frac{1}{m^{2.5}}+\frac{1}{\sqrt{m}} + 1\right)\log(\frac{1}{\delta})\right) \sim \frac{\sqrt{d}}{m^{2.5}} \log(\frac{1}{\delta}) \end{equs} and \begin{equs} \epsilon_2 &\sim \mathsf{g}_2 + L_{\infty} (\mathsf{g}_1)^{2}\sqrt{r}\\ &\sim \frac{\sqrt{d}}{m^{2.5}} \log(\frac{1}{\delta}) + L_{\infty} (\log^2(\beta)+b^2 +1)\sqrt{r}\\ &\sim \frac{\sqrt{d}}{m^{2.5}} \log(\frac{1}{\delta}) + L_{\infty} \left(\left[ m + \frac{1}{m^2} + \frac{b}{m} + \frac{1}{m}\right]^2+b^2 +1\right)\sqrt{r}\\ &\sim \frac{\sqrt{d}}{m^{2.5}} \log(\frac{1}{\delta}) + L_{\infty} \left(m^2 + \frac{1}{m^4} + \frac{b^2}{m^2} + \frac{1}{m^2}+b^2 +1\right)\sqrt{r}\\ &\sim \frac{\sqrt{d}}{m^{2.5}} \log(\frac{1}{\delta}) + L_{\infty} \left(\frac{1}{m^4} + \frac{b^2}{m^2}\right)\sqrt{r}. 
\end{equs} Thus, \begin{equs} \epsilon_1 + \frac{\epsilon_2}{\sqrt{M}} = \tilde{O}\left( \left[\frac{\sqrt{d}}{m^{2.5}} + L_{\infty} \left(\frac{1}{m^4} + \frac{b^2}{m^2}\right)\sqrt{r}\right]\log(\frac{1}{\delta}) \right). \end{equs} Hence by Theorem \ref{thm:main} the number of gradient evaluations is \begin{equs} \epsilon^{-\frac{1}{2}}&\mathcal{I}^{1.5} T^{1.5} \sqrt{ \epsilon_1 + \frac{\epsilon_2}{\sqrt{M}} }\\ &\sim \epsilon^{-\frac{1}{2}} \frac{M^{1.5}}{m^{2.25}}\sqrt{ \epsilon_1 + \frac{\epsilon_2}{\sqrt{M}} }\\ &= \tilde{O}\left( \epsilon^{-\frac{1}{2}} \frac{1}{m^{2.25}}\bigg[ \frac{d^{\frac{1}{4}}}{m^{1.25}} + \sqrt{L_{\infty}} \left(\frac{1}{m^2} + \frac{b}{m}\right) r^{\frac{1}{4}} \bigg] \log^{\frac{1}{2}}(\frac{1}{\delta})\right)\\ % &\sim \epsilon^{-\frac{1}{2}} \bigg[ d^{\frac{1}{4}}\kappa^{3.5} + \sqrt{L_{\infty}} \left(\kappa^{4.25} + b\kappa^{3.25}\right) r^{\frac{1}{4}} \bigg] \log^{\frac{1}{2}}(\frac{1}{\delta}). \end{equs} \end{proof} \begin{theorem}[Bounds from a cold start] \label{thm:cold_formal} % Let $\pi(x) \propto e^{-U(x)}$ where $U:\mathbb{R}^d \rightarrow \mathbb{R}$ is $m$-strongly convex, $M$-gradient Lipschitz, and satisfies Assumptions \ref{assumption:M3} and \ref{assumption:tail}. Suppose that $X_0:= \mathrm{argmin}_{x\in \mathbb{R}^d} U(x)$. % Then there exist parameters $T, \eta>0$ and $i_\mathrm{max} \in \mathbb{N}$ such that with probability $1-\delta$ the UHMC algorithm with second-order leapfrog integrator generates a point $X_{i_\mathrm{max}}$ such that $\|X_{i_\mathrm{max}} - Y\|_2 < \epsilon$ for some random variable $Y\sim \pi$. Moreover, the number of gradient evaluations is $\tilde{O}(\max\left(d^{\frac{1}{4}}\kappa^{3.5}, \, \, r^{\frac{1}{4}}\sqrt{\tilde{L}_{\infty}} \left(\kappa^{4.25} + \tilde{b}\kappa^{3.25}\right)\right) \epsilon^{-\frac{1}{2}} \log^{\frac{1}{2}}(\frac{1}{\delta}))$.
\end{theorem} \begin{proof} The proof is a direct consequence of Lemma \ref{cor:RunningTime}. \end{proof} \section{Bounds from a warm start} \label{appendix:Warm} In this section we consider the warm start case. We begin by observing that, after replacing the definitions of $\mathcal{G}$, $\epsilon_1$ and $\epsilon_2$, we can prove an alternative to Lemma \ref{lemma:good_summary}, namely Lemma \ref{lemma:good_summary_warm}, which holds for a warm start even if Assumption \ref{assumption:tail} does not hold. Let $\mathcal{J}$ be a number satisfying $\mathcal{J} = \frac{M}{\sqrt{m}} \times \max\left(\left[T 80^2 Md(1+\frac{1}{m})\log^2(\frac{\mathcal{I} \mathcal{J}}{\delta})\right]^{\frac{1}{2}}, \frac{\sqrt{m}}{T} \right)$. We replace the definition of the good set with the following definition: \begin{equs} \label{eq:warm_G} \mathsf{G} := \bigg \{(x,y): &\|x\|_2 \leq 81\frac{\sqrt{d}}{\sqrt{m}}\log(\frac{\mathcal{I} \mathcal{J}}{\delta}) + 2 \omega,\\ % & \|y\|_2 \leq 81\sqrt{d}\log(\frac{\mathcal{I} \mathcal{J}}{\delta}) + 2\omega\sqrt{M},\\ % & \max_j |u_j^\top y| \leq 81\log(\frac{\mathcal{I} \mathcal{J}}{\delta}) + 2\omega \sqrt{M}\bigg\}. \end{equs} Next, we replace the definitions of $\epsilon_1$ and $\epsilon_2$ with the following alternative definitions: \begin{equs} \label{eq:warm_epsilon1} \epsilon_1 := \frac{1}{6}\left[81\sqrt{d}\log(\frac{\mathcal{I} \mathcal{J}}{\delta}) + 2\omega\sqrt{M}\right]M \end{equs} and % \begin{equs} \label{eq:warm_epsilon2} \epsilon_2 := \frac{1}{3} \left[ (M)^2 (81\frac{\sqrt{d}}{\sqrt{m}}\log(\frac{\mathcal{I} \mathcal{J}}{\delta}) + 2 \omega) + L_{\infty} (81\log(\frac{\mathcal{I} \mathcal{J}}{\delta}) + 2\omega \sqrt{M})^{2} \sqrt{r} \right]. \end{equs} \begin{lemma} \label{lemma:good_summary_warm} Suppose that $\|X_0-\tilde{Y}_0\|_2 < \omega$ with probability $1-\hat{\delta}$, where the marginal distribution of $\tilde{Y}_0$ is $\pi$.
Then with probability at least $1-4\delta-\hat{\delta}$ we have $(q_t(X_i, \mathsf{p}_i), p_t(X_i, \mathsf{p}_i)) \in \mathsf{G}$ for every $0\leq t\leq T$ and every $0 \leq i \leq \mathcal{I}$. % \end{lemma} \begin{proof} Define the Markov chain $\tilde{Y}$ by the update rule $\tilde{Y}_{i+1} = q_T(\tilde{Y}_i, \mathbf{p}_i)$. Then Theorem \ref{ThmContractionConvexMainResult} implies that $\|X_i-\tilde{Y}_i\|_2 < \omega$ for all $i\geq 0$ and, moreover, that $\|q_t(X_i, \mathsf{p}_i) -q_t(\tilde{Y}_i, \mathsf{p}_i)\|_2 \leq \omega$ for all $i\geq 0$ and all $0\leq t \leq T$. Therefore, by Lemma \ref{MS2} we have \begin{equs} \|p_t(X_i, \mathsf{p}_i) -p_t(\tilde{Y}_i, \mathsf{p}_i)\|_2 \leq \|X_i - \tilde{Y}_i\|_2 \sqrt{M}e^{T\sqrt{M}} \leq 2 \omega \sqrt{M}. \end{equs} We must now show bounds on $Q_{t,i}:=q_t(\tilde{Y}_i, \mathsf{p}_i)$ and $P_{t,i}:=p_t(\tilde{Y}_i, \mathsf{p}_i)$. Towards this end, note that $(Q_{t,i}, P_{t,i})$ has distribution $\Pi(x,v) \propto e^{-\mathcal{H}(x,v)}$ at every $t,i$. First, by the Hanson-Wright inequality and the fact that $U$ is $m$-strongly convex we have that \begin{equs} \mathbb{P}[\|Q_{t,i}\|_2>\frac{\xi}{\sqrt{m}}] \leq e^{-\frac{\xi^2-d}{8}} \quad \quad \textrm{ for } \xi> \sqrt{2d}. \end{equs} Therefore, \begin{equs} \label{eq:EventBound1} \|Q_{t,i}\|_2 \leq 80\frac{\sqrt{d}}{\sqrt{m}}\log(\frac{\mathcal{I\mathcal{J}}}{\delta}) \end{equs} for every $i\leq \mathcal{I}$ and every $t \in \{\frac{1}{\mathcal{J}},\ldots, \frac{T}{\mathcal{J}}\}$ with probability at least $1-\delta$. Also by the Hanson-Wright inequality we have, \begin{equs} \mathbb{P}[\|P_{t,i}\|_2>\xi] \leq e^{-\frac{\xi^2-d}{8}} \quad \quad \textrm{ for } \xi> \sqrt{2d}. \end{equs} Therefore, \begin{equs} \label{eq:EventBound2} \|P_{t,i}\|_2 \leq 80\sqrt{d}\log(\frac{\mathcal{I\mathcal{J}}}{\delta}) \end{equs} for every $i\leq \mathcal{I}$ and every $t \in \{\frac{0}{\mathcal{J}},\frac{1}{\mathcal{J}},\ldots, \frac{T-1}{\mathcal{J}}\}$ with probability at least $1-\delta$.
Again by the Hanson-Wright inequality, for every $u_j \in \mathsf{u}$ we have \begin{equs} \mathbb{P}[|P_{t,i}^\top u_j|>\xi] \leq e^{-\frac{\xi^2-1}{8}} \quad \quad \textrm{ for } \xi> \sqrt{2}. \end{equs} Therefore, \begin{equs} \label{eq:EventBound3} |P_{t,i}^\top u_j| \leq 80\log(\frac{\mathcal{I\mathcal{J} r}}{\delta}) \end{equs} for every $i\leq \mathcal{I}$, every $t \in \{\frac{0}{\mathcal{J}},\ldots, \frac{T-1}{\mathcal{J}}\}$ and every $j \in \{1,\ldots,r\}$ with probability $1-\delta$. Let $E$ be the event that Inequalities \eqref{eq:EventBound1}, \eqref{eq:EventBound2} and \eqref{eq:EventBound3} all hold for every $i\leq \mathcal{I}$, every $t \in \{\frac{0}{\mathcal{J}},\ldots, \frac{T-1}{\mathcal{J}}\}$ and every $j \in \{1,\ldots,r\}$. Then whenever $E$ occurs we have for all $i\leq \mathcal{I}$ and every $t \in \{\frac{0}{\mathcal{J}},\ldots, \frac{T-1}{\mathcal{J}}\}$ that \begin{equs} \mathcal{H}(Q_{t,i}, P_{t,i}) &= \mathcal{H}(Q_{0,i}, P_{0,i}) \leq M\|Q_{0,i}\|_2^2 + \frac{1}{2}\| P_{0,i}\|_2^2 \leq M\left(80\frac{\sqrt{d}}{\sqrt{m}}\log(\frac{\mathcal{I\mathcal{J}}}{\delta})\right)^2 + \left(80\sqrt{d}\log(\frac{\mathcal{I\mathcal{J}}}{\delta})\right)^2\\ & \leq 80^2 Md(1+\frac{1}{m})\log^2(\frac{\mathcal{I} \mathcal{J}}{\delta}). \end{equs} Then whenever $E$ occurs, for every $0\leq t\leq T$ and every $i \leq \mathcal{I}$ we have \begin{equs} \label{eq:velocity_bound} \|P_{t,i}\|_2 \leq \sqrt{2\mathcal{H}(Q_{t,i}, P_{t,i})} \leq 80 \sqrt{2} \sqrt{Md}\sqrt{1+\frac{1}{m}}\log(\frac{\mathcal{I} \mathcal{J}}{\delta}). \end{equs} Then Equation \eqref{eq:velocity_bound} implies that \begin{equs} \|Q_{t+\tau,i} - Q_{t,i}\|_2 \leq \frac{1}{\sqrt{m}} \qquad \forall 1\leq \tau \leq \frac{T}{\mathcal{J}}, \end{equs} and hence that \begin{equs} \|Q_{t,i}\|_2 \leq 80\frac{\sqrt{d}}{\sqrt{m}}\log(\frac{\mathcal{I\mathcal{J}}}{\delta}) + \frac{1}{\sqrt{m}} \end{equs} for all $0\leq t \leq T$ and all $i \leq \mathcal{I}$ whenever $E$ occurs. 
But since $\|{\nabla U}(Q_{t,i})\|_2 \leq M \|Q_{t,i}\|_2$, we must have that \begin{equs} \|P_{t+\tau,i} - P_{t,i}\|_2 \leq \frac{1}{\sqrt{m}} \times \frac{T}{\mathcal{J}} \leq 1 \qquad \forall 1\leq \tau \leq \frac{T}{\mathcal{J}} \end{equs} whenever $E$ occurs, and hence that \begin{equs} \|P_{t,i}\|_2 \stackrel{{\scriptsize \textrm{Eq.}}\ref{eq:EventBound2}}{\leq} 80\sqrt{d}\log(\frac{\mathcal{I\mathcal{J}}}{\delta}) + 1 \qquad \forall 0\leq t \leq T \end{equs} and \begin{equs} |P_{t,i}^\top u_j| \stackrel{{\scriptsize \textrm{Eq.}}\ref{eq:EventBound3}}{\leq} 80\log(\frac{\mathcal{I\mathcal{J} r}}{\delta}) + 1 \qquad \forall 0\leq t \leq T \end{equs} whenever $E$ occurs. This completes the proof of the lemma. \end{proof} We can now prove our main result from a warm start: \begin{theorem} [Bounds from a warm start] \label{thm:warm_formal} Let $\pi(x) \propto e^{-U(x)}$ where $U:\mathbb{R}^d \rightarrow \mathbb{R}$ is $m$-strongly convex, $M$-gradient Lipschitz, and satisfies Assumption \ref{assumption:M3}. Suppose that there is a random variable $\tilde{Y}_0\sim \pi$ such that $\|X_0-\tilde{Y}_0\|_2 < \omega$ with probability $1-\hat{\delta}$ for some $\omega, \hat{\delta}>0$. Then there exist parameters $T, \eta>0$ and $i_\mathrm{max} \in \mathbb{N}$ such that, with probability $1-\delta-\hat{\delta}$, the UHMC algorithm with second-order leapfrog integrator generates a point $X_{i_\mathrm{max}}$ with $\|X_{i_\mathrm{max}} - Y\|_2 < \epsilon$ for some random variable $Y\sim \pi$ which is independent of $\tilde{Y}_0$. Moreover, the number of gradient evaluations is $\tilde{O}(\max\left(d^{\frac{1}{4}}\kappa^{2.75}, \, \, r^{\frac{1}{4}}\sqrt{\tilde{L}_{\infty}} \kappa^{2.25}(1+\sqrt{\omega}), \,\, \kappa^{2.25} \sqrt{\omega}\right) \epsilon^{-\frac{1}{2}} \log^{\frac{1}{2}}(\frac{1}{\delta}))$. 
\end{theorem} \begin{proof} First, we observe that, for our new definition of $\mathcal{G}$ (Equation \eqref{eq:warm_G}), Lemmas \ref{lemma:local_toy} and \ref{thm:Approx_integrator} hold with exactly the same proofs. Then, we note that the proof of Lemma \ref{lemma:coupling} holds as well if we use Lemma \ref{lemma:good_summary_warm} in place of Lemma \ref{lemma:good_summary}, the only difference being that the conclusion of Lemma \ref{lemma:coupling} holds with probability $1-4\delta -\hat{\delta}$ instead of with probability $1-4\delta$. % This in turn implies that the statement and proof of Theorem \ref{thm:main} hold for the new values of $\epsilon_1$ and $\epsilon_2$ as well, but with probability $1-5\delta -\hat{\delta}$ instead of $1-5\delta$. % Finally, a calculation nearly identical to the one in the proof of Corollary \ref{cor:RunningTime} implies a bound of $\tilde{O}(\mathrm{max}\left(\epsilon^{-\frac{1}{2}} d^{\frac{1}{4}} \kappa^{\frac{11}{4}}, \, \, \epsilon^{-\frac{1}{2}} \kappa^{\frac{9}{4}}\sqrt{\omega}, \, \, \kappa^{\frac{9}{4}}\sqrt{\tilde{L}_{\infty}}r^{\frac{1}{4}}(1+\sqrt{\omega})\right))$ gradient evaluations. \end{proof} \section{Application: Bayesian inference with logistic regression} \label{appendix:logit} In this section we prove Theorem \ref{thm:logit}. We also show Lemmas \ref{lemma:logistic3} and \ref{lemma:logistic2}, which can be used to bound the constants $m,M$ and $b$. Recall that we consider potentials of the form \begin{equs} U(x) = \frac{1}{2}x^\top \Sigma^{-1} x- \sum_{i=1}^r \left[ \mathsf{Y}_i \log(F(x^\top \mathsf{X}_i)) + (1-\mathsf{Y}_i)\log(F(-x^\top \mathsf{X}_i)) \right], \end{equs} where $F$ is the logistic function and the ``data'' satisfies $\mathsf{Y}_i \in \{0,1\}$ and $\mathsf{X}_i \in \mathbb{R}^d$ for every $i$. The regularization term $\frac{1}{2}x^\top \Sigma^{-1} x$ corresponds to a Gaussian prior used in Bayesian logistic ridge regression. 
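As a concrete aid, this posterior can be sketched in a few lines of Python. The code below is our own illustration (function names and conventions are not from the paper's code); it evaluates $U$ together with its gradient and Hessian, which follow by differentiating the potential term by term.

```python
import numpy as np

def logistic(s):
    # F(s) = 1 / (1 + e^{-s})
    return 1.0 / (1.0 + np.exp(-s))

def make_potential(X, Y, Sigma_inv):
    """Potential U(x) for Bayesian logistic ridge regression, with its
    gradient and Hessian.  X: (r, d) matrix whose rows are the data
    vectors X_i; Y: (r,) binary labels; Sigma_inv: (d, d) prior precision."""
    def U(x):
        s = X @ x
        # -log F(s) = log(1 + e^{-s}); logaddexp avoids overflow
        return 0.5 * x @ Sigma_inv @ x + np.sum(
            Y * np.logaddexp(0.0, -s) + (1.0 - Y) * np.logaddexp(0.0, s))
    def grad_U(x):
        s = X @ x
        return Sigma_inv @ x - X.T @ (Y - logistic(s))
    def hess_U(x):
        s = X @ x
        w = logistic(s) * (1.0 - logistic(s))   # F'(x^T X_k)
        return Sigma_inv + X.T @ (w[:, None] * X)
    return U, grad_U, hess_U
```

A quick finite-difference check of `grad_U` against `U` is an easy way to confirm the signs in these formulas.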
Define the unit vector $u_i := \frac{\mathsf{X}_i}{\|\mathsf{X}_i\|_2}$ for every $i\in \{1,\ldots, r\}$. We make the following assumptions about the data vectors and $\Sigma^{-1}$: \begin{assumption} \label{assumption:logistic1} There exist $C, \hat{C}>0$ such that $\sum_{j=1}^r |\mathsf{X}_i^\top \mathsf{X}_j| \leq C$ and $\sum_{j=1}^r |\left(\frac{\mathsf{X}_i}{\|\mathsf{X}_i\|_2}\right)^\top \mathsf{X}_j| \leq \hat{C}$ for every $i \in \{1,\ldots, r\}$. \end{assumption} \begin{assumption} \label{assumption:logistic2} There exist $m,M >0$ s.t. $\lambda_{\mathrm{max}}(\Sigma^{-1} + \sum_{k=1}^r \mathsf{X}_k \mathsf{X}_k^\top) \leq M$, and $\lambda_{\mathrm{min}}( \Sigma^{-1}) \geq m$. \end{assumption} We start by going over some basic properties of the logistic function and the potential $U$: The logistic function is defined as \begin{equs} F(s) := \frac{e^s}{1+e^s} = \frac{1}{e^{-s}+1}. \end{equs} The gradient of $U$ is \begin{equs} {\nabla U}(x) &= \Sigma^{-1} x- \sum_{i=1}^r \left[\mathsf{Y}_i \mathsf{X}_i - \frac{\mathsf{X}_i}{1+ e^{-x^\top \mathsf{X}_i}}\right]\\ &= \Sigma^{-1} x- \sum_{i=1}^r \left[ \mathsf{Y}_i \mathsf{X}_i - \mathsf{X}_iF(x^\top \mathsf{X}_i) \right]. \end{equs} The Hessian $H_x$ of $U$ is \begin{equs} H_x = \Sigma^{-1} + \sum_{k=1}^r F'(x^\top \mathsf{X}_k) \mathsf{X}_k \mathsf{X}_k^\top. \end{equs} Therefore, \begin{equs} H_y - H_x = \sum_{k=1}^r [F'(y^\top \mathsf{X}_k) - F'(x^\top \mathsf{X}_k)] \mathsf{X}_k \mathsf{X}_k^\top. \end{equs} Also note that \begin{equs} \label{eq:f3} \left|F''(s)\right| \leq 1. \end{equs} We can now show that Assumption \ref{assumption:M3} holds for the constant $L_{\infty} = \sqrt{C}$: \begin{lemma} \label{lemma:logistic1} Suppose that Assumptions \ref{assumption:logistic1} and \ref{assumption:logistic2} hold, and that $|(y-x)^\top u_i| \leq \mathfrak{a}$ and $|v^\top u_i| \leq \mathfrak{b}$ for all $i$. 
Then \begin{equs} \|(H_y - H_x)v\|_2 &\leq \mathfrak{a}\mathfrak{b} \sqrt{C}\sqrt{r}. \end{equs} \end{lemma} \begin{proof} \begin{equs} \|(H_y - H_x)v\|_2 &= \left \| \sum_{k=1}^r [F'(y^\top \mathsf{X}_k) - F'(x^\top \mathsf{X}_k)] \mathsf{X}_k \mathsf{X}_k^\top v \right\|_2\\ &\stackrel{{\scriptsize \textrm{Eq.}}\ref{eq:f3}}{\leq} \sqrt{ \sum_{i=1}^r \sum_{j=1}^r |(y-x)^\top \mathsf{X}_i| \times |(y-x)^\top \mathsf{X}_j| \times |v^\top \mathsf{X}_i| \times | \mathsf{X}_j^\top v| \times |\mathsf{X}_i^\top\mathsf{X}_j|}\\ &\leq \mathfrak{a} \mathfrak{b} \sqrt{ \sum_{i=1}^r \sum_{j=1}^r |\mathsf{X}_i^\top\mathsf{X}_j|}\\ &\stackrel{{\scriptsize \textrm{Assumptions }}\ref{assumption:logistic1}, \ref{assumption:logistic2}}{\leq} \mathfrak{a} \mathfrak{b} \sqrt{ \sum_{i=1}^r C}\\ &= \mathfrak{a} \mathfrak{b} \sqrt{ r C}. \end{equs} \end{proof} We can now prove Theorem \ref{thm:logit}. \begin{proof}[Proof of Theorem \ref{thm:logit}] By Lemma \ref{lemma:logistic1}, for every $x,y,v \in \mathbb{R}^d$ we have \begin{equs} \|(H_y - H_x)v\|_2 \leq \|y-x\|_{\infty, \mathsf{u}}\times \|v\|_{\infty, \mathsf{u}} \sqrt{C}\sqrt{r}, \end{equs} implying that \begin{equs} \|(H_y - H_x)\frac{v}{\|v\|_{\infty, \mathsf{u}} }\|_2 \leq \|y-x\|_{\infty, \mathsf{u}} \sqrt{C}\sqrt{r}. \end{equs} Hence, \begin{equs} \sup_{\|v\|_{\infty, \mathsf{u}} \leq 1} \|(H_y - H_x) v\|_2 \leq \|y-x\|_{\infty, \mathsf{u}} \sqrt{C}\sqrt{r} \qquad \forall x,y \in \mathbb{R}^d. \end{equs} Therefore the infinity-norm Lipschitz assumption (Assumption \ref{assumption:M3}) is satisfied with constant $L_\infty = \sqrt{C}$ and ``bad'' directions $\mathsf{u} = \left \{\frac{\mathsf{X}_i}{\|\mathsf{X}_i\|_2}\right \}_{i=1}^r$. This completes the proof of Theorem \ref{thm:logit}. 
\end{proof} Next, we show that Assumption \ref{assumption:tail} holds as well, for the constant $b=2 \hat{C}$: \begin{lemma} \label{lemma:logistic3} Suppose that Assumptions \ref{assumption:logistic1} and \ref{assumption:logistic2} hold, and that $\Sigma$ is a multiple of the identity matrix. Then for any $i \in \{1,\ldots, r\}$ and $x \in \mathbb{R}^d$ we have \begin{equs} \min(m u_i^\top x, M u_i^\top x) - 2 \hat{C} \leq u_i^\top {\nabla U}(x) \leq \max(m u_i^\top x, M u_i^\top x) + 2 \hat{C}. \end{equs} \end{lemma} \begin{proof} \begin{equs} u_i^\top {\nabla U}(x) & = u_i^\top \Sigma^{-1} x- \sum_{j=1}^r \left[\mathsf{Y}_j u_i^\top \mathsf{X}_j - u_i^\top \mathsf{X}_jF(x^\top \mathsf{X}_j)\right]\\ &\leq u_i^\top \Sigma^{-1} x + 2\sum_{j=1}^r |u_i^\top \mathsf{X}_j|\\ &= u_i^\top \Sigma^{-1} x + 2 \sum_{j=1}^r \left|\left(\frac{\mathsf{X}_i}{\|\mathsf{X}_i\|_2}\right)^\top \mathsf{X}_j\right|\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:logistic1}}{\leq} u_i^\top \Sigma^{-1} x + 2 \hat{C}\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:logistic2}}{\leq} \max(m u_i^\top x, M u_i^\top x) + 2 \hat{C}. \end{equs} Similarly, \begin{equs} u_i^\top {\nabla U}(x) & = u_i^\top \Sigma^{-1} x- \sum_{j=1}^r \left[\mathsf{Y}_j u_i^\top \mathsf{X}_j - u_i^\top \mathsf{X}_jF(x^\top \mathsf{X}_j)\right]\\ &\geq u_i^\top \Sigma^{-1} x - 2\sum_{j=1}^r |u_i^\top \mathsf{X}_j|\\ &= u_i^\top \Sigma^{-1} x - 2 \sum_{j=1}^r \left|\left(\frac{\mathsf{X}_i}{\|\mathsf{X}_i\|_2}\right)^\top \mathsf{X}_j\right|\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:logistic1}}{\geq} u_i^\top \Sigma^{-1} x - 2 \hat{C}\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:logistic2}}{\geq} \min(m u_i^\top x, M u_i^\top x) - 2 \hat{C}. \end{equs} \end{proof} Finally, we show a bound on the condition number: \begin{lemma} \label{lemma:logistic2} Suppose that Assumption \ref{assumption:logistic2} holds. 
Then \begin{equs} m \leq \lambda_{\mathrm{min}}(H_x) \leq \lambda_{\mathrm{max}}(H_x) \leq M \qquad \qquad \forall x \in \mathbb{R}^d. \end{equs} \end{lemma} \begin{proof} \begin{equs} \label{eq:f5} \lambda_{\mathrm{max}}(H_x) &= \sup_{\|z\|_2\leq1} z^\top H_x z\\ &= \sup_{\|z\|_2 \leq1} \left( z^\top \Sigma^{-1} z + \sum_{k=1}^r F'(x^\top \mathsf{X}_k) |z^\top \mathsf{X}_k|^2 \right)\\ &\leq \sup_{\|z\|_2 \leq1} \left( z^\top \Sigma^{-1} z + \sum_{k=1}^r |z^\top \mathsf{X}_k|^2 \right)\\ &= \sup_{\|z\|_2 \leq1} \left( z^\top \Sigma^{-1} z + \sum_{k=1}^r z^\top \mathsf{X}_k \mathsf{X}_k^\top z \right) \\ &= \lambda_{\mathrm{max}}( \Sigma^{-1} + \sum_{k=1}^r \mathsf{X}_k \mathsf{X}_k^\top)\\ &\stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:logistic2}}{\leq} M, \end{equs} where we used that $0 \leq F'(s) \leq 1$ for all $s$. Moreover, \begin{equs} \lambda_{\mathrm{min}}(H_x) &= \lambda_{\mathrm{min}}\left( \Sigma^{-1} + \sum_{k=1}^r F'(x^\top \mathsf{X}_k) \mathsf{X}_k \mathsf{X}_k^\top\right) \geq \lambda_{\mathrm{min}}( \Sigma^{-1}) \stackrel{{\scriptsize \textrm{Assumption }}\ref{assumption:logistic2}}{\geq} m. \end{equs} \end{proof} \section{Simulations} \label{sec:simulations} % \subsection{Accuracy and autocorrelation time of Unadjusted HMC} \label{sec:simulations_accuracy} % The purpose of our first set of simulations is to show that in the practical situations analyzed in this paper the unadjusted HMC algorithm (UHMC) is competitive with other popular sampling algorithms both in terms of accuracy and in terms of the number of gradient evaluations required. % We compare UHMC to Metropolis-adjusted HMC (MHMC) \cite{duane1987hybrid}, the Metropolis-adjusted Langevin algorithm (MALA) \cite{roberts1996exponential} and the unadjusted Langevin algorithm (ULA) \cite{roberts1996exponential}. 
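For concreteness, a generic implementation of unadjusted HMC with the second-order leapfrog integrator looks as follows. This is a textbook Python sketch rather than the MATLAB code used for our experiments, and all names are our own.

```python
import numpy as np

def leapfrog(q, p, grad_U, eta, n_steps):
    """Second-order leapfrog integration of Hamiltonian dynamics for
    H(q, p) = U(q) + ||p||^2 / 2, with step size eta."""
    p = p - 0.5 * eta * grad_U(q)           # initial half step for momentum
    for step in range(n_steps):
        q = q + eta * p                     # full step for position
        if step < n_steps - 1:
            p = p - eta * grad_U(q)         # full step for momentum
    p = p - 0.5 * eta * grad_U(q)           # final half step for momentum
    return q, p

def uhmc(grad_U, x0, eta, T, n_iters, rng):
    """Unadjusted HMC: refresh the momentum, run the leapfrog for
    trajectory time T, and keep the endpoint (no accept/reject step)."""
    x = np.array(x0, dtype=float)
    n_steps = max(1, int(T / eta))
    samples = np.empty((n_iters, x.size))
    for i in range(n_iters):
        p = rng.standard_normal(x.size)
        x, _ = leapfrog(x, p, grad_U, eta, n_steps)
        samples[i] = x
    return samples
```

For a standard Gaussian target one can take `grad_U = lambda x: x`; for small `eta` the chain equilibrates near the correct moments, with an $O(\eta^2)$ discretization bias since there is no Metropolis correction.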
All simulations were implemented in MATLAB (see the GitHub repository \url{https://github.com/mangoubi/HMC} for our MATLAB code used to implement these algorithms). % We consider the setting of Bayesian logistic regression with standard normal prior, with synthetic ``independent variable'' data vectors generated as $\mathsf{X}_i = \frac{Z_i}{\|Z_i\|_2}$ for $Z_1,\ldots, Z_r \sim N(0,I_d)$ iid, for dimension $d=1000$ and $r=d$. To generate the synthetic ``dependent variable'' binary data, a vector $\beta=(\beta_1,\ldots, \beta_d)$ of regression coefficients was first generated as $\beta = \frac{W}{\|W\|_2}$ where $W \sim N(0,I_d)$. The binary dependent variable synthetic data $\mathsf{Y}_1, \ldots, \mathsf{Y}_r$ were then generated as independent Bernoulli random variables, setting $\mathsf{Y}_i = 1$ with probability $\frac{1}{1+e^{-\beta^\top \mathsf{X}_i}}$ and $\mathsf{Y}_i = 0$ otherwise. Each Markov chain was initialized at a point $X_0$ chosen randomly as $X_0 \sim N(0,I_d)$. % To compare the accuracy, we computed the ``marginal accuracy'' (MA) of the samples generated by each chain over a fixed number (50,000) of numerical steps for different step sizes $\eta$ in the interval $[0.1,0.6]$ (Figure \ref{fig:Auto}, top). % Among all four of the algorithms, we found that UHMC had the highest accuracy at the accuracy-optimizing step size (the accuracy-optimizing step size was $\eta=0.35$ for UHMC). % To compare the runtime, we computed the autocorrelation time of the samples for a test function $f(x)=\|x\|_1$.\footnote{The autocorrelation time can be estimated as $1+2\sum_{s=1}^{s_{\mathrm{max}}} \rho_s$, where $\rho_s$ is the autocorrelation at lag $s$, for some large $s_{\mathrm{max}}$ \cite{goodman2010ensemble, chen2014stochastic}.} % We found that the autocorrelation time of UHMC was the shortest at the autocorrelation time-optimizing step size (the autocorrelation time-optimizing step size was $\eta=0.5$ for UHMC) (Figure \ref{fig:Auto}, bottom). 
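The footnote's estimator of the integrated autocorrelation time, $1+2\sum_{s=1}^{s_{\mathrm{max}}}\rho_s$, can be written in a few lines; the sketch below is our own illustration of it.

```python
import numpy as np

def autocorrelation_time(f_vals, s_max):
    """Estimate the integrated autocorrelation time 1 + 2 * sum_s rho_s,
    where rho_s is the empirical lag-s autocorrelation of f_vals."""
    f = np.asarray(f_vals, dtype=float)
    f = f - f.mean()
    n = f.size
    var = f @ f / n
    tau = 1.0
    for s in range(1, s_max + 1):
        tau += 2.0 * (f[:-s] @ f[s:]) / (n * var)
    return tau
```

For iid samples the estimate is close to 1, while for an AR(1) chain with coefficient $\phi$ it approaches $(1+\phi)/(1-\phi)$, so longer chains with strong positive correlation report proportionally larger autocorrelation times.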
When running UHMC and MHMC, we used a trajectory time $T$ equal to $\frac{\pi}{3}$, rounded down to the nearest multiple of $\eta$. \begin{figure}[H] \begin{center} \includegraphics[trim={0cm -1.2cm 1cm 0cm}, clip, scale=0.4]{MarginalAccuracyVsEta} \includegraphics[trim={0cm 0cm 1cm 0cm}, clip, scale=0.53]{Auto_vs_Eta_synthetic} \end{center} \caption{\footnotesize Marginal accuracy (top) and autocorrelation time (bottom) versus numerical step size $\eta$ for MALA, ULA, MHMC, and UHMC.}\label{fig:Auto} \end{figure} \begin{figure}[H] % \begin{center} \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, scale=0.25]{LipschitzvsL3_SquareRoot2} \end{center} \caption{\footnotesize Logarithmic plot comparing an estimate for the quantity used to bound the numerical error of second-order integrators using the Euclidean Lipschitz constant and the infinity-norm Lipschitz constant, at different values of $d$.}\label{fig:LipschitzL_infity} \end{figure} \begin{remark} The marginal accuracy is used as a heuristic to compare accuracy of samplers (see e.g. \cite{durmus2017convergence}, \cite{faes2011variational} and \cite{chopin2017leave}). The marginal accuracy between the measure $\mu$ of a sample and the target $\pi$ is $MA(\mu, \pi) := 1-\frac{1}{2d} \sum_{i=1}^d \|\mu_i - \pi_i\|_{\mathrm{TV}}$, where $\mu_i$ and $\pi_i$ are the marginal distributions of $\mu$ and $\pi$ for the coordinate $x_i$. Since MALA is known to sample from the correct stationary distribution and is geometrically ergodic for the class of distributions analyzed in this paper, we used the samples generated after running MALA for a very long time ($10^6$ steps) to obtain a more accurate approximation for $\pi$ as a benchmark with which to compare the sampling accuracy of the four different algorithms when run for a much shorter amount of time ($50,000$ numerical steps). 
% \end{remark} \subsection{Comparing Euclidean and infinity-norm Lipschitz conditions} \label{sec:simulations_Lipschitz} % The goal of our second set of simulations was to compare the optimal values of the usual Euclidean Lipschitz Hessian constant $L_2$ to the constant $L_\infty$ from our infinity-norm Lipschitz condition of Assumption \ref{assumption:M3}. We performed this comparison for the logistic regression example of the previous simulation with synthetic data generated in the same way, but for different values of $d$, with $r=d$. The optimal values of $L_2$ and $L_{\infty}$ are $L_2 = \sup_{x,y,v\in \mathbb{R}^d} \frac{1}{\|y-x\|_2 \|v\|_2} \|(H_y -H_x)v\|_2$ and $L_{\infty} = \sup_{x,y,v\in \mathbb{R}^d} \frac{1}{\sqrt{r} \|y-x\|_{\infty, \mathsf{u}} \|v\|_{\infty, \mathsf{u}}} \|(H_y - H_x)v\|_2$, where the supremum is taken over the values of $x,y,v$ for which the function is defined. At each value of $d$ we used MATLAB's ``fminunc'' function to search for the optimal values of $L_2$ and $L_\infty$. Recall that to bound the error of a numerical integrator with momentum $p_t$, one may use one of the two quantities $\sqrt{L_2} \|p_t\|_2$ and $\sqrt{L_\infty} r^{\frac{1}{4}}\|p_t\|_{\infty, \mathsf{u}}$. % We plot the median value of these quantities for random momenta $p_t \sim N(0,I_d)$ (Figure \ref{fig:LipschitzL_infity}). % Our results show that the median of $\sqrt{L_2} \|p_t\|_2$ increases with $d$ at a faster rate than the median of $\sqrt{L_\infty} r^{\frac{1}{4}}\|p_t\|_{\infty, \mathsf{u}}$ over the interval $d\in[1,1000]$. This suggests that gradient evaluation bounds based on our infinity-norm Lipschitz condition can be much tighter for distributions used in practice than bounds based on the usual Euclidean Lipschitz condition. 
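The scaling gap behind this comparison can be seen directly on the momentum factors alone: for a standard Gaussian momentum $p$, $\|p\|_2$ concentrates near $\sqrt{d}$, while $\|p\|_{\infty,\mathsf{u}} = \max_j |p^\top u_j|$ over $r$ unit directions typically grows only like $\sqrt{2\log r}$. The sketch below is our own illustration, with synthetic unit directions generated as in the simulations.

```python
import numpy as np

def inf_u_norm(p, U_dirs):
    """||p||_{infty, u} = max_j |p^T u_j|, where the rows of U_dirs are
    the unit "bad" directions u_j."""
    return np.max(np.abs(U_dirs @ p))

rng = np.random.default_rng(0)
for d in (10, 100, 1000):
    r = d
    Z = rng.standard_normal((r, d))
    U_dirs = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # u_j = X_j / ||X_j||
    p = rng.standard_normal(d)
    # ||p||_2 ~ sqrt(d), while r^{1/4} * ||p||_{infty,u} grows more slowly in d
    print(d, np.linalg.norm(p), r ** 0.25 * inf_u_norm(p, U_dirs))
```

Since each $u_j$ is a unit vector, $|p^\top u_j| \leq \|p\|_2$ by Cauchy-Schwarz, so the infinity-norm quantity can never exceed the Euclidean one before the $r^{1/4}$ factor is applied.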
\section{Conclusions and future directions}\label{sec:Conclusion} In this paper, we show that the conjecture of \cite{creutz1988global}, which says that HMC requires $O^{*}(d^{1/4})$ gradient evaluations, is true when sampling from strongly log-concave targets satisfying weak regularity properties associated with the input data. In doing so, we introduce a new regularity property for the Hessian (Assumption \ref{assumption:M3}) that is much weaker than the Lipschitz Hessian property, and show that for a class of functions arising in statistics and machine learning this property holds for natural conditions on the data. % One future direction is to further weaken Assumption \ref{assumption:M3}, which says that the Hessian does not change too quickly in all but a few fixed bad directions, by instead allowing these directions to vary based on the position $x$. Our simulations show that UHMC is competitive with MHMC on synthetic data that satisfies our regularity assumption. Further, we show that the constant in our regularity assumption grows much more slowly with the dimension than the Lipschitz constant of the Hessian. It would also be interesting to extend our results to non-logconcave targets, and to $k$th-order numerical implementations of generalizations of HMC, such as RHMC. \paragraph{Bounds for MHMC} Another open problem is to show tight gradient evaluation bounds for MHMC. Since the Metropolis-adjusted HMC Markov chain preserves the stationary distribution exactly, it should be possible to show that the number of gradient evaluations is polylogarithmic in $\epsilon^{-1}$, improving on the number of gradient evaluations required by unadjusted HMC which grows like $\epsilon^{-\frac{1}{2}}$. 
Unfortunately, the probabilistic coupling approach used in our current paper is unlikely to work for MHMC, since Metropolis ``accept/reject'' steps tend to break the coupling of the two Markov chains if the chains have different acceptance probabilities, causing one chain to accept its proposal while the other rejects the proposal. An alternative approach might be to use a proof based on the conductance method (see for instance \cite{vempala2005geometric}). Unlike the coupling approach, the conductance method is compatible with Metropolis ``accept/reject'' steps. % In the conductance method one aims to prove mixing time bounds in terms of the ``bottleneck ratio'' of the target density $\pi$, by bounding the transition probability of the Markov chain between any set $S \subseteq \mathbb{R}^d$ and its complement $S^c$ (also called the conductance between the sets) in terms of this bottleneck ratio. % The bottleneck ratio can be defined as $\phi^{*} = \inf_{S \subseteq \mathbb{R}^d \, : \, 0 < \pi(S) < \frac{1}{2}} \phi(S)$ where $\phi(S) = \frac{ \int_{\partial S} \pi(x) \mathrm{d}x}{\pi(S) }$. There are, however, many difficulties in applying the conductance approach when HMC is run with optimal parameters, since the optimal trajectory time parameter $T$ is oftentimes too long for a straightforward application of the conductance approach. For instance, a long HMC trajectory may cross from $S$ to $S^c$ multiple times, making it difficult to relate the behavior of a Markov chain such as HMC that takes long steps to the geometric properties of the distribution $\pi$. % To begin to get around this problem, one might try to make use of the fact that, under the Lipschitz gradient assumption for $U$, the momentum of HMC trajectories causes them to travel for a long time in roughly the same direction, allowing one to bound the conductance of HMC on subsets, such as half-spaces, which have very ``regular'' boundaries. 
% Unfortunately, the conductance method requires one to bound the conductance for all subsets $S$, not just those with very regular boundaries. A challenging problem is then to extend the conductance bounds to subsets that have more irregular boundaries, and to use these conductance bounds to bound the number of gradient evaluations required by MHMC. \newpage \bibliographystyle{plain}
\section{Introduction} Asymptotically-free gauge theories show various phases depending on the matter contents, the (global) structure of the gauge groups, spacetime dimensions, temperature and so on. It is usually difficult to exactly analyze the low-energy dynamics since the theories are strongly coupled. In order to extract some analytic results, supersymmetry is a very useful tool. Non-renormalization theorems and holomorphy strongly constrain the SUSY dynamics and enable us to derive exact results \cite{Seiberg:1994bz,Seiberg:1994pq,Seiberg:1997vw}. For theories with four supercharges, supersymmetry can determine an exact form of the superpotential and we can find a quantum moduli space of vacua. In this paper, we are interested in low-energy dynamics of the supersymmetric $Spin(7)$ gauge theories. In four spacetime dimensions, $\mathcal{N}=1$ $Spin(N)$ gauge theories with vector matters and with various spinor matters were extensively studied (see \cite{Pouliot:1995zc,Cho:1997kr} \cite{Pouliot:1995sk,Pouliot:1996zh,Kawano:2007rz,Maru:1998hp,Strassler:1997fe,Berkooz:1997bb,Csaki:1997cu,Cho:1997sa,Kawano:1996bd, Csaki:1996zb}). For particular matter contents, the theories confine and the low-energy effective description has no gauge interaction. For more general matter contents, we sometimes find the Seiberg dual descriptions, which are ``chiral'' theories and phenomenologically interesting. In three spacetime dimensions, the corresponding $Spin(N)$ gauge theories are not well-studied. In \cite{Aharony:2013kma} (see also \cite{Aharony:2011ci}), the 3d $\mathcal{N}=2$ $Spin(N)$ gauge theory with $N_f$ vector matters was investigated and its Seiberg duality was proposed by dimensionally reducing the 4d Seiberg duality. However, the $Spin(N)$ gauge theories with spinor matters have not been studied at all. In this paper, we study the quantum aspects of the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theories with spinorial and vectorial matters. 
In particular, we will find new s-confinement phases for these theories and derive the exact superpotentials which govern the confined phases. In order to verify the consistency of our analysis, we compute superconformal indices for these theories and for the dual (confined) descriptions. We will observe a complete agreement of the indices. As another check of our findings, we also test the various Higgs branches. Along the Higgs branch we find the s-confinement description of the 3d $\mathcal{N}=2$ $G_2$ or $SU(4)$ gauge theories with various matters. For the $G_2$ Higgs branch, we will reproduce the same superpotential discussed in \cite{Nii:2017npz}. Along the $SU(4)$ Higgs branch, we reproduce the known s-confinement phases and also find new s-confinement phases for the 3d $\mathcal{N}=2$ $SU(4)$ gauge theories with anti-symmetric matters. We also discuss the connection to the 4d $\mathcal{N}=1$ $Spin(7)$ gauge theories by incorporating the KK-monopoles. This paper is organized as follows. In Section 2, we will briefly review the 4d $\mathcal{N}=1$ $Spin(7)$ gauge theories with spinorial matters. In Section 3, the Coulomb branch of the $Spin(7)$ vector multiplet is (semi-)classically studied. In Section 4, a 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with matters in a spinorial representation is investigated. We also compute the superconformal indices. In Section 5, we study the $Spin(7)$ theory with spinor and vector matters with special attention to the s-confinement phases. In Section 6, we summarize our results and comment on possible future directions. \section{Review of 4d $\mathcal{N}=1$ $Spin(7)$ gauge theories} In this section, we will briefly review the dynamics of the 4d $\mathcal{N}=1$ $Spin(7)$ gauge theories with spinorial matters. Table \ref{4dspin7} shows the matter contents and their quantum numbers. 
Due to the chiral anomalies in 4d, the $U(1)$ and $U(1)_R$ global symmetries are anomalous, and we have to combine them into a new anomaly-free $U(1)_R$ symmetry \begin{align} U(1)_R^{new}=U(1)_R^{old} -\left(R_S-1 +\frac{5}{N_S} \right)U(1). \end{align} In this paper, we are interested in the 3d theories, where these $U(1)$ symmetries are not anomalous. Hence we will use spurious charge assignment also in 4d. \begin{table}[H]\caption{Quantum numbers for 4d $\mathcal{N}=1$ $Spin(7)$ theories} \begin{center} \begin{tabular}{|c||c||c|c|c|c|c| } \hline &$Spin(7)$&$SU(N_S)$&$U(1)$&$U(1)_R$ &$U(1)_R^{new}$ \\ \hline $S$ & $\mathbf{2^{3}}=\mathbf{8}$&${\tiny \yng(1)}$&1& $R_S$&$1-\frac{5}{N_S}$ \\ $\lambda$ &$\mathbf{Adj.}$&1&0&$1$ &1\\ \hline $\eta=\Lambda_{N_f,N_S}^b$&1&1&$2N_S$&$2N_S(R_S-1) +10$ &0 \\ \hline $M_{SS}:=SS$&1&$\tiny \yng(2)$&2&$2R_S$ &$2-\frac{10}{N_S}$\\ $B_S:=S^4$&1&${\tiny \yng(1,1,1,1)}$&$4$&$4R_S$ & $4-\frac{20}{N_S}$ \\[10pt] \hline \end{tabular} \end{center}\label{4dspin7} \end{table} \noindent In Table \ref{4dspin7}, $\eta$ is a dynamical scale of the $Spin(7)$ gauge group and $b$ is a coefficient of the one-loop beta function, which is given by \begin{align} \beta &=-\frac{g^3}{16 \pi^2} b,~~~b=15-N_S. \end{align} The quantum dynamics depends on the number of spinor multiplets $N_S$. We simply enumerate the known results and give some comments. For $N_S \le 3$, we only need the gauge invariant $M_{SS}$ in order to describe the Higgs branch. The superpotential governing the low-energy dynamics is \begin{align} W_{N_S \le 3} &=\left(\frac{\eta}{\det M_{SS}} \right)^{\frac{1}{5-N_S}} ~~~~(N_S \le 3). \end{align} This superpotential admits no stable SUSY vacuum. At generic points of the moduli space, the gauge group is maximally broken to $SU(2)$. The gaugino condensation of the remaining $SU(2)$ generates this superpotential. For $N_S=4$, we also need the baryonic operator $B_S$. 
At generic points of the moduli space, the gauge group is now completely broken and thus we can reliably use the instanton calculation. One-instanton configurations generate \begin{align} W_{N_S=4} &=\frac{\eta}{\det M_{SS} -B_S^2}. \end{align} For $N_S=5$, the Higgs branch coordinates $M_{SS}$ and $B_S$ are subject to one constraint. The classical constraint is modified quantum-mechanically and realized by using the Lagrange multiplier $X$ as \begin{align} W_{N_S=5} &= X\left( \det M_{SS} -M_{ij}B^i B^j -\eta \right). \end{align} For $N_S=6$, the quantum moduli space is the same as the classical one. The classical constraints between the Higgs branch coordinates are implemented by the superpotential \begin{align} W_{N_S=6} &=\frac{1}{\eta} \left( \det M -M_{ik} M_{jl} B^{ij}B^{kl} -\mathrm{Pf}\, B \right). \label{4dF6} \end{align} We will not review the 4d $\mathcal{N}=1$ $Spin(7)$ gauge theory with spinors and vectors here; see \cite{Cho:1997kr, Csaki:1996zb}. \section{Coulomb branch and Monopole operators} In this section, we will define the (semi-)classical Coulomb branch coordinates which correspond to the monopoles with a magnetic charge $g_i=\vec{\alpha}^*_i \cdot \vec{H}$, where $\vec{\alpha}_i$ is a simple root and $\vec{\alpha}^*_i $ denotes a dual root $\frac{2 \vec{\alpha}_i}{\vec{\alpha}_i^2}$. $\vec{H}$ denotes the Cartan generators. The Coulomb branch operators parametrize the flat directions of the scalar fields from the vector superfields. The adjoint scalar field in a vector superfield is defined as \begin{align} \phi := \left( \sum_{i=1}^r \phi_i \vec{\alpha}^*_i \right) \cdot \vec{H}, \end{align} where we used a gauge transformation to diagonalize the adjoint scalar into the Cartan part. In this notation, the Weyl chamber is given by \begin{align} \sum_{j=1}^r \,A_{ij} \phi_j \ge 0 ~~~(\text{for each } i) \end{align} where $A_{ij}:=\vec{\alpha}_i \cdot \vec{\alpha}_j^*$ is a Cartan matrix. 
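As a concrete example of this convention, for $Spin(7)$ (the $B_3$ algebra) one may take the simple roots $\vec{\alpha}_1=e_1-e_2$, $\vec{\alpha}_2=e_2-e_3$ and $\vec{\alpha}_3=e_3$ in an orthogonal basis, so that the dual roots are $\vec{\alpha}_1^*=e_1-e_2$, $\vec{\alpha}_2^*=e_2-e_3$ and $\vec{\alpha}_3^*=2e_3$, and the matrix $A_{ij}=\vec{\alpha}_i \cdot \vec{\alpha}_j^*$ becomes
\begin{align}
A=
\begin{pmatrix}
2 & -1 & 0 \\
-1 & 2 & -2 \\
0 & -1 & 2
\end{pmatrix}.
\end{align}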
The Coulomb branch coordinate for each simple root is equivalent to the on-shell action of each monopole, which is given by \begin{align} V_{\alpha_k} := \exp \left[ \sum_{i=1}^r \vec{\alpha}_k^* \cdot \vec{\alpha}_i^* \phi_i \right], \end{align} where we omitted the normalization of the action. Rigorously speaking, the Coulomb branch operator includes the dual photon, which is a dualized scalar from the $U(1)$ photon. Here we omitted it for simplicity since the dual photon dependence is easily restored. Since the Coulomb branch coordinates originally come from the vector superfield, they are neutral under the flavor symmetries. However, the zero-modes around the monopole background spontaneously break the flavor symmetries. As a result, the Coulomb branch operators have non-trivial charges under the non-linearly realized flavor symmetries \cite{Affleck:1982as}, which are mixings between the original flavor symmetries and the topological $U(1)$ symmetry. The magnitude of the mixing is related to the number of the fermion zero-modes. Hence we need to calculate the zero-modes around the monopole background by employing the Callias index theorem \cite{Callias:1977kg,Weinberg:1979zt,deBoer:1997kr}. The Callias index theorem states that the number of fermion zero-modes is obtained from the following formula \begin{align} N=\sum_w \frac{1}{2} \mathrm{sign}(w (\phi)) w(g), \end{align} where the summation is taken over all the weights of the matter fields and $g$ is the magnetic charge of the monopole which we consider. $\phi$ is an adjoint scalar field in the vector multiplet. Let us consider the classical Coulomb branch of a 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory. In our notation, the Weyl chamber which we chose is defined by \begin{align} 2 \phi_1- \phi_2 &\ge 0\\ -\phi_1+2 \phi_2-2 \phi_3 &\ge 0\\ -\phi_2 +2 \phi_3 &\ge 0. 
\end{align} In order to simplify the Weyl chamber, we sometimes change the variables as \begin{align} \phi_1 &=: \sigma_1 \\ \phi_2 &=: \sigma_1 + \sigma_2 \\ \phi_3 &=:\frac{1}{2} (\sigma_1+\sigma_2+\sigma_3). \end{align} In this redefinition, the Weyl chamber is simplified to \begin{align} \sigma_1 \ge \sigma_2 \ge \sigma_3 \ge 0. \end{align} The Coulomb branch operators are defined as \begin{align} Y_1 &\simeq \exp [2 \phi_1 -\phi_2] =\exp [\sigma_1-\sigma_2]\\ Y_2 &\simeq \exp[- \phi_1 + 2 \phi_2 -2 \phi_3] =\exp [\sigma_2 -\sigma_3]\\ Y_3 & \simeq \exp[-2 \phi_2 +4 \phi_3] = \exp [2 \sigma_3] \\ Z &:= Y_1 Y_2^2 Y_3 \simeq \exp[ \phi_2] =\exp[\sigma_1+\sigma_2] \\ Y &:= \sqrt{Y_1 Z} \simeq \exp [\phi_1] = \exp [\sigma_1],~~~Y_{spin}:=Y_1 Z \end{align} where $Z$ corresponds to the lowest co-root and plays an important role when we study the connection between 3d and 4d theories. $Y$ and $Y_{spin}$ were defined in \cite{Aharony:2011ci, Aharony:2013kma}, which are the globally defined Coulomb branch coordinates for the 3d $\mathcal{N}=2$ $O(N)$ or $Spin(N)$ gauge theories with vectorial matters. By using the Callias index theorem, one can compute the fermion zero-modes around each magnetic monopole. Table \ref{zeromode} summarizes the fermion zero-modes for each operator. Notice that we have to divide the Weyl chamber further into two regions depending on the sign of $\phi_1-\phi_3$ for the spinor zero-modes. \begin{table}[H]\caption{Fermion zero-modes} \begin{center} \begin{tabular}{|c||c|c|c| } \hline &adjoint&vector&spinor \\ \hline $Y_1$&2&0&$1+\mathrm{sign}(\phi_1-\phi_3)$\\ $Y_2$&2&0&0 \\ $Y_3$&2&2&$1-\mathrm{sign}(\phi_1-\phi_3)$ \\ \hline $Z:= Y_1 Y_2^2 Y_3$&8&2&2 \\ $Y:= \sqrt{Y_1 Z}$ $(\phi_1 >\phi_3)$&5&1&2 \\ $Y_{spin}:=Y_1 Z$ $(\phi_1 >\phi_3)$&10&2&4 \\ \hline \end{tabular} \end{center}\label{zeromode} \end{table} For the 3d $\mathcal{N}=2$ pure $Spin(7)$ theory without matter, all the Coulomb branch operators $Y_i$ get two gaugino zero-modes.
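As an illustration of the index theorem, consider the spinor zero-modes around $Y_1$, whose magnetic charge is $g=\vec{\alpha}_1^*=e_1-e_2$. Only the four spinor weights with $w_1 \neq w_2$ contribute, and they give \begin{align} N_{spinor} = \mathrm{sign}(\phi_1-\phi_2+\phi_3) +\mathrm{sign}(\phi_1-\phi_3) = 1+\mathrm{sign}(\phi_1-\phi_3), \end{align} where we used $\phi_1-\phi_2+\phi_3 = \frac{1}{2}(\sigma_1-\sigma_2+\sigma_3) \ge 0$ inside the Weyl chamber. This explains why the Weyl chamber must be divided according to the sign of $\phi_1-\phi_3$ in Table \ref{zeromode}.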
Thus we have non-perturbative superpotential terms of the form $\frac{1}{Y_i}$ and there is no stable SUSY vacuum. When we turn on matter in a vectorial representation, $Y_3$ gets additional zero-modes from the vectorial fermions and $W=\frac{1}{Y_3}$ is not allowed. As a result, a one-dimensional Coulomb branch would remain as the (quantum) moduli space. For an $(S)O(7)$ case with vector matters \cite{Aharony:2011ci}, $Y$ is a globally defined one-dimensional Coulomb branch operator. For a $Spin(7)$ theory with vectorial matters, the correct coordinate is $Y_{spin}$ \cite{Aharony:2013kma}. In these theories, $Z$ appears when we put the corresponding 4d theories on a circle. Let us next consider the $Spin(7)$ theory with spinorial matters. For $\phi_1 > \phi_3$, $Y_1$ has zero-modes from the spinor in addition to the gaugino zero-modes. Thus, it is expected that $Y_1$ is not lifted and that there is a one-dimensional Coulomb branch for $\phi_1 > \phi_3$. The same argument applies for $\phi_1 < \phi_3$, where $Y_3$ is unlifted. In this theory, we need one globally defined coordinate and we will use $Z$ for parametrizing it. When both the vectors and the spinors are added into the $Spin(7)$ theory, the Coulomb branch becomes more complicated. For $\phi_1 > \phi_3$, $Y_1$ and $Y_3$ have more than two fermion zero-modes. Hence they are not lifted, while $Y_2$ is still lifted via the monopole superpotential. For $\phi_1 < \phi_3$, only $Y_3$ has more than two zero-modes and $Y_{1,2}$ are lifted. We therefore need to introduce two coordinates for the description of the (semi-)classical Coulomb moduli. We expect that one of them would be the operator $Z$. This is because the zero-mode content of $Z$ does not depend on the sign of $\phi_1-\phi_3$, so that $Z$ would be globally defined on the whole Coulomb branch. The other one would be described by $Y$ or $Y_{spin}$. Notice that this analysis is completely (semi-)classical.
Therefore the quantum effects might modify this picture. In fact, we will see that the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theories with $N_f$ vectors and $N_S$ spinors sometimes exhibit a one-dimensional Coulomb branch. \section{3d $\mathcal{N}=2$ $Spin(7)$ theories with spinorial matters} In the previous section, we studied the (semi-)classical Coulomb branch of the $Spin(7)$ theory. Here we examine the quantum aspects of the $Spin(7)$ Coulomb branch. Let us start with the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with spinorial matters. The Higgs branch is parametrized by a meson $M_{SS}:=SS$ for $N_S \le 3$. The baryonic operator $B_S:=S^4$ is also necessary for $N_S \ge 4$. The matter contents and their quantum numbers are summarized in Table \ref{Tspinor}. The table also includes the dynamical scale $\eta$ of the 4d gauge coupling. Since the $U(1)$ symmetries are anomalous in 4d due to the chiral anomalies, the dynamical scale is charged under the $U(1)$ symmetries. For the Coulomb branch, we predict that $Z$ is the correct monopole operator.
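The charges of $Z$ in Table \ref{Tspinor} follow from the zero-mode counting of the previous section: each of the $2N_S$ spinor zero-modes carries $U(1)_S$ charge one, and the charges of the monopole operator are minus the sums over the fermion zero-modes, \begin{align} Q_S(Z)=-2N_S, \qquad R(Z)=-8 \cdot 1 -2N_S (R_S-1), \end{align} where the first term of $R(Z)$ comes from the eight gaugino zero-modes with $R=1$ and the second from the spinor zero-modes with fermionic $R$-charge $R_S-1$.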
\begin{table}[H]\caption{Quantum numbers for 3d $\mathcal{N}=2$ $Spin(7)$ with $N_S$ spinors} \begin{center} \scalebox{0.88}{ \begin{tabular}{|c||c||c|c|c| } \hline &$Spin(7)$&$SU(N_S)$&$U(1)_S$&$U(1)_R$ \\ \hline $S$ & $ \mathbf{2^N}=\mathbf{8}$&${\tiny \yng(1)}$&1& $R_S$ \\ $\lambda$ &$\mathbf{Adj.}$&1&0&$1$ \\ \hline $\eta=\Lambda_{N_f,N_S}^b$&1&1&$2N_S$&$2N_S(R_S-1) +10$ \\ \hline $M_{SS}:=SS$&1&$\tiny \yng(2)$&2&$2R_S$ \\ $B_S:=S^4$&1&${\tiny \yng(1,1,1,1)}$&$4$&$4R_S$ \\[10pt]\hline $Y_1$&1&1&$-N_S(1+\mathrm{sign}(\phi_1-\phi_3))$& $-2-N_S(R_S-1)(1+\mathrm{sign}(\phi_1-\phi_3))$\\ $Y_2$&1&1&0&$-2$\\ $Y_3$&1&1&$-N_S(1-\mathrm{sign}(\phi_1-\phi_3))$& $-2-N_S(R_S-1)(1-\mathrm{sign}(\phi_1-\phi_3)) $\\ \hline $Z:=Y_1Y_2^2Y_3$&1&1&$-2N_S$& $-8 -2N_S (R_S-1) $ \\ \hline \end{tabular}} \end{center}\label{Tspinor} \end{table} For any $N_S$, the superpotential $W= \eta Z$ is available; it is dynamically generated by the KK-monopole and necessary when connecting the 3d theory to the 4d theory. From Table \ref{Tspinor}, we find that the following superpotentials are consistent with all the symmetries. \begin{align} W_{N_S \le 3} &= \left( \frac{1}{Z \det M_{SS}} \right)^{\frac{1}{4-N_S}} \\ W_{N_S = 4} &= X \left[ Z (\det M_{SS} -B_S^2) -1 \right] \\ W_{N_S=5} &= Z \left( \det \, M_{SS} - B_{S}^i B_{S}^j M_{SS,ij} \right) \end{align} Consequently, there is no stable SUSY vacuum for $N_S \le 3$. The Higgs and Coulomb branches are quantum-mechanically merged for $N_S=4$: large values of the Higgs branch coordinates are connected to small values of the Coulomb branch coordinate. Importantly, the origin of the moduli space is not a vacuum. For $N_S =5$, the theory is s-confining and the origin belongs to the vacuum moduli space. For $N_S \ge 6$ we have no simple superpotential. In what follows, we will verify our superpotentials above in various ways. It is easy to check the parity anomaly matching for $N_S=5$. The UV and IR descriptions produce the same anomalies.
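As a quick consistency check, each of these superpotentials carries $R$-charge two and vanishing $U(1)_S$ charge. For instance, for $N_S \le 3$, \begin{align} R \left[ \left( \frac{1}{Z \det M_{SS}} \right)^{\frac{1}{4-N_S}} \right] = -\frac{1}{4-N_S} \left[ -8-2N_S(R_S-1)+2N_S R_S \right] = \frac{8-2N_S}{4-N_S}=2, \end{align} while the $U(1)_S$ charges $-2N_S$ of $Z$ and $+2N_S$ of $\det M_{SS}$ cancel.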
By adding the term $\eta Z$ from the KK-monopole, the 4d superpotentials \begin{align} W_{N_S \le 3}^{\mathbb{S}_1 \times \mathbb{R}_3} &= \left( \frac{1}{Z \det M_{SS}} \right)^{\frac{1}{4-N_S}}+\eta Z \longrightarrow ~~W_{N_S \le 3}^{4d}= \left( \frac{\eta}{\det \,M_{SS}} \right)^{\frac{1}{5-N_S}} \\ W_{N_S=4}^{\mathbb{S}_1 \times \mathbb{R}_3} &= X \left[ Z (\det M_{SS} -B_S^2) -1 \right] +\eta Z \longrightarrow ~~ W^{4d}_{N_S=4} = \frac{\eta}{\det \, M_{SS} -B_S^2} \\ W_{N_S=5}^{\mathbb{S}_1 \times \mathbb{R}_3} &=Z \left( \det \, M_{SS} - B_{S}^i B_{S}^j M_{SS,ij} \right) +\eta Z \nonumber \\ & \qquad \longrightarrow ~~ W^{4d}_{N_S=5}= X \left[ \det \, M_{SS} - B_{S}^i B_{S}^j M_{SS,ij} -\eta \right] \end{align} are correctly reproduced. Next let us introduce a complex mass deformation. We restrict ourselves to the case with $N_S=5$ and introduce a complex mass for the last flavor. By integrating out the massive modes, we arrive at the quantum constraint for $N_S=4$ as follows. \begin{align} W=W_{N_S=5} +m M_{SS,55} \rightarrow \begin{cases} B_S^i=0 & (i=1,2,3,4) \\ M_{SS,i5} =0 &(i=1,2,3,4) \\ m=Z (\det \, \hat{M}_{SS} -B_{S}^5 B_S^{5}) \end{cases} \end{align} We can also test the Higgs branch. When a spinor gets a vev $\braket{M_{SS,N_SN_S}} = v^2$, the theory flows to the 3d $\mathcal{N}=2$ $G_2$ gauge theory with $N_S-1$ fundamentals \cite{Nii:2017npz}. The superpotential above correctly explains this flow. For $N_S=5$, we need the following identification between the $Spin(7)$ and $G_2$ moduli coordinates. \begin{gather} M_{SS,ij} =: M^{G_2}_{ij} ~(i,j=1,\cdots,4) \\ B_{S}^i =:vB^i_{G_2}~(i=1,\cdots,4),~~~B_{S}^5 =F_{G_2} \end{gather} The superpotential reduces to \begin{align} W=v^2 Z( \det \,M^{G_2} - B^i_{G_2} M_{ij}^{G_2}B^j_{G_2} -F_{G_2}^2 ) =: Z_{G_2}( \det \,M^{G_2} - B^i_{G_2} M_{ij}^{G_2}B^j_{G_2} -F_{G_2}^2 ), \end{align} where we absorbed the vev into the monopole operator. This superpotential was first obtained in \cite{Nii:2017npz}.
A similar argument applies also for $N_S \le 4$, and the $G_2$ superpotentials are reproduced. Finally we briefly discuss the theory with $N_S \ge 6$. In this case, one cannot write down a superpotential. From the analysis of the semi-classical Coulomb branch, it is expected that the Coulomb branch is still one-dimensional (it is labeled by $Z$) and that the quantum moduli space would be identical to the (semi-)classical one. If a fractional power in the superpotential is allowed, one can still write down an ``effective'' superpotential. For $N_S=6$, the superpotential \begin{align} W_{N_S=6}= \left[ Z \left( \det M_{SS} -M_{SS}^2 B_S^2 -\mathrm{Pf} \, B_S \right) \right]^{\frac{1}{2}} \end{align} is consistent with all the symmetries. By adding a term $\eta Z$, the 4d result \eqref{4dF6} is reproduced. However, the fractional power leads to branch-cut singularities at the origin of the moduli space and we have to introduce new massless degrees of freedom along the singularities. Presumably, some Seiberg dual descriptions would explain these massless modes and a certain superconformal fixed point is realized at the origin of the moduli space. We do not discuss it further in this paper and will tackle this problem elsewhere. \subsection{Superconformal Indices} Since the $Spin(7)$ theory with five spinors exhibits the s-confinement phase, the superconformal index is simple enough and it is computed from the dual side. This would be another check of our analysis. For the definitions of the superconformal indices, see for example \cite{Bhattacharya:2008bja,Kim:2009wb,Imamura:2011su,Imamura:2011uj,Kapustin:2011jm,Spiridonov:2009za,Bashkirov:2011vy,Kim:2013cma}. The index on the dual side has the contributions from the meson $M_{SS,ij}$, the baryon $B_S^i$ and the Coulomb branch operator $Z$. We set $R_S =\frac{1}{8}$ for simplicity and use a fugacity $u$ for the global $U(1)_S$ symmetry which rotates the spinors.
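With $R_S=\frac{1}{8}$, a chiral operator with $R$-charge $R$ and $U(1)_S$ charge $Q_S$ contributes $u^{Q_S} x^{R}$ at leading order, so Table \ref{Tspinor} predicts the lowest contributions \begin{align} M_{SS} \rightarrow u^2 x^{1/4}, \qquad B_S \rightarrow u^4 x^{1/2}, \qquad Z \rightarrow u^{-10} x^{3/4}, \end{align} where $R(Z)=-8-10(R_S-1)=\frac{3}{4}$ for $N_S=5$. These terms are indeed visible in the expansion below.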
The full index (or the index of the dual description) becomes \scriptsize \begin{align} I_{magnetic}^{N_S=5} &= 1+15 u^2 x^{1/4}+125 u^4 \sqrt{x}+\left(\frac{1}{u^{10}}+755 u^6\right) x^{3/4}+\left(3675 u^8+\frac{15}{u^8}\right) x+ \left(15252 u^{10}+\frac{125}{u^6}\right) x^{5/4} \nonumber \\ &\quad +\left(\frac{1}{u^{20}}+55880 u^{12}+\frac{750}{u^4}\right) x^{3/2}+ 5 \left(37004 u^{14}+ \frac{717}{u^2}+\frac{3}{u^{18}}\right) x^{7/4}+\left(562985 u^{16}+\frac{125}{u^{16}}+14402\right) x^2 \nonumber \\ &\quad +\left(\frac{1}{u^{30}}+1594185 u^{18}+\frac{750}{u^{14}}+50245 u^2\right) x^{9/4}+\left(4241879 u^{20}+155550 u^{4}+ \frac{3585}{ u^{12}}+\frac{15}{u^{28}}\right) x^{5/2} \nonumber \\ &\quad +\left(10688125 u^{22}+433550 u^{6}+\frac{14403}{ u^{10}}+\frac{125}{u^{26}}\right) x^{11/4}+\left(\frac{1}{u^{40}}+25661515 u^{24}+\frac{750}{u^{24}}+1097955 u^8+\frac{50270}{u^8}\right) x^3 +\cdots \end{align} \normalsize We will briefly explain the low-lying operators below. \begin{itemize} \item The first term is an identity operator. \item The second term $15 u^2 x^{1/4}$ is identified with a meson contribution $M_{SS,ij}$ which has $15$ independent components. \item The third term $125 u^4 \sqrt{x}$ consists of two operators. One is a baryonic operator $B_S^i$ which contributes to the index as $5 u^4 x^{1/2}$ and the other is a square of the mesons $M_{SS} \otimes M_{SS}$, whose flavor indices are symmetrized. Thus we have $\mathbf{15} \times \mathbf{15}|_{symmetric~part} = 120= \overline{\mathbf{50}}+\overline{\mathbf{70}}'$ in an $SU(5)$ notation. \item The fourth term $\left(\frac{1}{u^{10}}+755 u^6\right) x^{3/4}$ contains the monopole operator which is denoted by $Z$. The remaining parts are the symmetric products of the Higgs branch operators, $M_{SS}^3$ and $M_{SS} B_S$. \item The higher order contributions are recognized as composite operators of $M_{SS}, B_S$ and $Z$ by properly symmetrizing the flavor indices. 
\end{itemize} Let us move on to the electric side. The index on the electric side is decomposed into the index for each GNO charge $(m_1,m_2,m_3),~m_i \in \mathbb{Z}/2$. Since we now discuss the $Spin(7)$ gauge group, we have to sum up only the sectors with $m_1+m_2 +m_3 \in \mathbb{Z}$ \cite{Aharony:2013kma}. We need to consider the GNO charges $(0,0,0), \left( \frac{1}{2},\frac{1}{2},0\right),\left( 1,1,0\right),\left( \frac{3}{2},\frac{3}{2},0\right)$ and $(2,2,0)$ up to $O(x^3)$. The index with zero GNO charge becomes \footnotesize \begin{align} I_{electric}^{(0,0,0)} &=1+15 u^2 x^{1/4}+125 u^4 \sqrt{x}+755 u^6 x^{3/4}+3675 u^8 x+15252 u^{10} x^{5/4}+55880 u^{12} x^{3/2} +185020 u^{14} x^{7/4} \nonumber \\ &\quad +\left(562985 u^{16}-25\right) x^2+\left(1594185 u^{18}-400 u^2\right) x^{9/4} +\left(4241879 u^{20}-3450 u^4\right) x^{5/2} \nonumber \\ &\qquad +\left(10688125 u^{22}-21200 u^6\right) x^{11/4} +\left(25661515 u^{24}-103775 u^8\right) x^3+\cdots. \end{align} \normalsize \noindent The first term is an identity operator and is regarded as the state $\ket{0,0,0}$. Since the gauge group is not broken in this sector, we can freely act with the Higgs branch operators on the state $\ket{0,0,0}$. For example, $15 u^2 x^{1/4}$ is identified with $M_{SS,ij} \ket{0,0,0}$. Next let us study the sectors with non-zero GNO charges.
\footnotesize \begin{align} I_{electric}^{\left(\frac{1}{2},\frac{1}{2},0 \right)} &=\frac{x^{3/4}}{u^{10}}+\frac{15 x}{u^8}+\frac{125 x^{5/4}}{u^6}+\frac{750 x^{3/2}}{u^4}+\frac{3585 x^{7/4}}{u^2}+14427 x^2+50645 u^2 x^{9/4} \nonumber \\ &\qquad +159000 u^4 x^{5/2}+\left(454750 u^6-\frac{24}{u^{10}}\right) x^{11/4}+\frac{5 \left(240346 u^{16}-75\right) x^3}{u^8}+\cdots \\ I_{electric}^{(1,1,0)}&=\frac{x^{3/2}}{u^{20}}+\frac{15 x^{7/4}}{u^{18}}+\frac{125 x^2}{u^{16}}+\frac{750 x^{9/4}}{u^{14}}+\frac{3585 x^{5/2}}{u^{12}}+\frac{14427 x^{11/4}}{u^{10}}+\frac{50645 x^3}{u^8}+\cdots \\ I_{electric}^{\left(\frac{3}{2},\frac{3}{2},0 \right)} &=\frac{x^{9/4}}{u^{30}}+\frac{15 x^{5/2}}{u^{28}}+\frac{125 x^{11/4}}{u^{26}}+\frac{750 x^3}{u^{24}} + \cdots,~~~~I_{electric}^{(2,2,0)} =\frac{x^3}{u^{40}}+\cdots \end{align} \normalsize \noindent The sector with a GNO charge $\left( \frac{1}{2},\frac{1}{2},0\right)$ contains the monopole operator. The first term $\frac{x^{3/4}}{u^{10}}$ is $Z$ (see Table \ref{Tspinor}) and the corresponding state is expressed as $\ket{\frac{1}{2}, \frac{1}{2},0}$. The next two terms $\frac{15 x}{u^8}+\frac{125 x^{5/4}}{u^6}$ are $M_{SS}\ket{\frac{1}{2}, \frac{1}{2},0}$ and $(M_{SS}^2+B_S)\ket{\frac{1}{2}, \frac{1}{2},0}$ respectively. The term $\frac{750 x^{3/2}}{u^4}$ needs some explanation. The GNO charge assignment $\left( \frac{1}{2},\frac{1}{2},0\right)$ breaks the gauge group to $ Spin(3) \times SU(2) \times U(1)$. The spinor reduces to $(\mathbf{2},\mathbf{2})_0$, where we omitted the fields charged under the $U(1)$ since we cannot act with them on $\ket{\frac{1}{2}, \frac{1}{2},0}$ a la \cite{Bashkirov:2011vy}. Therefore we cannot totally anti-symmetrize the $SU(5)$ flavor indices of the reduced spinors (fourth-order anti-symmetrization is still allowed). Hence, in the product $M_{SS} \times B_S = \mathbf{15} \otimes \bar{\mathbf{5}} = \mathbf{5} +\mathbf{70} $, we have to discard the $\mathbf{5}$ representation.
As a result, $\frac{750 x^{3/2}}{u^4}$ is regarded as $(M_{SS}^3 +M_{SS}B_S)\ket{\frac{1}{2}, \frac{1}{2},0}$. A similar argument is available for the higher-order terms. By summing up these indices, we observe the complete agreement between the electric and magnetic sides. \section{3d $\mathcal{N}=2$ $Spin(7)$ with vector and spinor matters} Let us next study the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with $N_f$ vector matters and with $N_S$ spinor matters. From the analysis in Section 3, one might expect that the quantum Coulomb branch is two-dimensional. However, the previous argument was semi-classical and is generally modified quantum-mechanically. In fact, we will see that the dimension of the Coulomb branch depends sensitively on the matter content. In this section, we are mostly interested in s-confinement phases. We will find the s-confining descriptions for $(N_f,N_S)=(0,5),(1,4),(2,3),(3,2)$ and $(4,1)$. Since we have already discussed the $(N_f,N_S)=(0,5)$ case, we start with $(N_f, N_S) =(1,4) $. The dynamics for the theories with fewer matters can be obtained from the s-confinement description by integrating out massive fields. \subsection{$(N_f, N_S) =(1,4) $} From the (semi-)classical analysis of the Coulomb branch operators for the simple roots, the Coulomb moduli should be divided into two parts depending on the sign of $\phi_1 -\phi_3$. Thus we expect that two (quantum) Coulomb moduli $Z$ and $Y$ (or $Z$ and $Y_{spin}$) are necessary. However, in this phase with $(N_f,N_S)=(1,4)$, we can relate these two coordinates by acting with the Higgs branch coordinates on the monopole operator. For example, $Y$ has the same quantum numbers as $Z^2 \times (\alpha B'_S P_{A1}^2 +\beta M_{QQ}B_SB'_S)$ with some numerical coefficients $\alpha$ and $\beta$. The deep reason behind this identification is unclear, but it is allowed at least by the symmetry argument. We predict that the (quantum) Coulomb branch is described by a single $Z$ coordinate.
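Indeed, the abelian charges from Table \ref{T14} match: for $(U(1)_Q, U(1)_S)$ one finds \begin{align} Z^2 B'_S P_{A1}^2: \quad (-4+1+2,\, -16+4+4)=(-1,-8), \qquad Z^2 M_{QQ} B_S B'_S: \quad (-4+2+0+1,\, -16+0+4+4)=(-1,-8), \end{align} which coincide with the charges of $Y$. The $R$-charges also agree: $(4-4R_f-16R_S)+(3R_f+8R_S)=4-R_f-8R_S=R(Y)$.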
The validity of this prediction can be checked via various deformations and the superconformal indices below. For the description of the Higgs branch, we define the following operators. \begin{gather} M_{QQ}:=QQ,~~~ M_{SS} :=S^2 \\ P_{A1}:= SQS,~~~B_{S} := S^4 ,~~~B'_{S} := S^4Q \end{gather} We list the quantum numbers of the matter content and of the moduli coordinates in Table \ref{T14}. From the table, one can write down the superpotential \begin{align} W=Z \left[ M_{QQ} (\det \, M_{SS}- B_S^2 ) +B'^2_{S} +B_S P_{A1}^2 +M_{SS}^2 P_{A1}^2 \right] +\eta Z, \end{align} where the last term is generated by a KK-monopole and absent in the 3d limit. By integrating out the Coulomb branch operator, we obtain a quantum constraint in 4d. This IR description gives the same parity anomalies as the UV theory. In addition, we cannot satisfy the parity anomaly matching if we introduce two Coulomb branch operators. This is a first non-trivial check of our prediction. \begin{table}[H]\caption{Quantum numbers for $(N_f,N_S)=(1,4)$} \begin{center} \scalebox{0.76}{ \begin{tabular}{|c||c||c|c|c|c| } \hline &$Spin(7)$&$SU(4)$&$U(1)_Q$&$U(1)_S$&$U(1)_R$ \\ \hline Q& ${\tiny \yng(1)}$&1&1&0& $R_f$ \\ $S$ & $\mathbf{2^{N}}=\mathbf{8}$&${\tiny \yng(1)}$&0&1& $R_S$ \\ $\lambda$ &$\mathbf{Adj.}$&1&0&0&$1$ \\ \hline $\eta=\Lambda_{N_f,N_S}^b$&1&1&$2$&$8$&$2(R_f-1)+8(R_S-1) +10=2R_f+8R_S$ \\ \hline $M_{QQ}:=QQ$&1&1&2&0&$2R_f$ \\ $M_{SS}:=SS$&1&$\tiny \yng(2)$&0&2&$2R_S$ \\ $B_S:=S^4$&1&1&0&$4$&$4R_S$ \\ $B'_S:=S^4 Q$&1&1&1&4& $R_f+4R_S$ \\ $P_{A1}:=SQS$&1&${\tiny \yng(1,1)}$&1&2&$R_f+2R_S$ \\ \hline $Z:=Y_1Y_2^2Y_3$&1&1&$-2$&$-8$& $-8 -8(R_S-1) -2(R_f-1)=2-2R_f-8R_S$ \\ $Y:=\sqrt{Y_1 Z}$ $(\phi_1 \ge \phi_3)$ &1&1&$-1$&$-8$& $-5-(R_f-1) -8(R_S-1)=4-R_f-8R_S$ \\ $Y_{spin}:=Y_1^2Y_2^2Y_3$ $(\phi_1 \ge \phi_3)$&1&1&$-2$&$-16$&$-10-2(R_f-1)-16(R_S-1)=8-2R_f-16R_S$ \\ \hline \end{tabular}} \end{center}\label{T14} \end{table} In order to test the superpotential above, let us consider various directions of the
Higgs branch, which would justify our analysis. First, we consider introducing the vectorial vev $\braket{M_{QQ}} =v^2 $ which breaks the $Spin(7)$ group to $Spin(6) \cong SU(4)$. The low-energy theory is a 3d $\mathcal{N}=2$ $SU(4)$ gauge theory with four flavors in a (anti-)fundamental representation, which is s-confining \cite{Aharony:1997bx}. Since the global symmetries are enhanced to $SU(4) \times SU(4)$ in the low-energy limit, we have to rename and decompose the fields as \begin{gather} M_{SS,ij} =: M_i^j+M_j^i ~~~(i,j=1,\cdots,4)\\ B_{S}=:\frac{v}{4}(B+\bar{B}),~~~B'_S =:\frac{v}{4}(B-\bar{B}) \\ P_{A1}=:v (M_i^j -M_j^i), \end{gather} where $ M_i^j$ is regarded as a meson and $B,\bar{B}$ are (anti-)baryonic operators in the $SU(4)$ theory. By absorbing the vev into the redefinition of the monopole operator, the superpotential reduces to \begin{align} W= v^2 Z \left[ \det M-B\bar{B} \right] =:Y_{SU(4)} \left[ \det M-B\bar{B} \right], \end{align} which is precisely the superpotential of the 3d $\mathcal{N}=2$ $SU(4)$ theory with four flavors \cite{Aharony:1997bx}. Next, let us focus on the $G_2$ direction of the Higgs branch, which is achieved by introducing a vev for a single spinorial field as $\braket{M_{SS,44}}=v^2$. The low-energy theory becomes a 3d $\mathcal{N}=2$ $G_2$ gauge theory with four fundamental matters, which is again s-confining \cite{Nii:2017npz}. Although the vev breaks the global $SU(4)$ symmetry to $SU(3)$, we again have the enhanced $SU(4)$ symmetry in the low-energy limit since the vector and the spinors fall into the same representation of $G_2$. We need the following identification between the $Spin(7)$ and $G_2$ moduli coordinates.
\begin{gather} M_{SS,ij} =: M_{ij}^{G_2} ~~(i,j=1,2,3),~~~~ M_{QQ}=: M_{44}^{G_2}\\ B_{S}=:vB_{G_2}^4,~~~B'_S =:v F \\ \epsilon^{ijk}P_{A1, jk}=: B^i_{G_2},~~~~P_{A1,i4}=: vM_{i,4}^{G_2}~~(i,j,k=1,2,3) \end{gather} The superpotential reduces to \begin{align} W=v^2 Z \left[ \det M^{G_2} +F^2 +B^a_{G_2} M^{G_2}_{ab} B^b_{G_2} \right]=:Z_{G_2} \left[ \det M^{G_2} +F^2 +B^a_{G_2} M^{G_2}_{ab} B^b_{G_2} \right], \end{align} which is the superpotential observed in \cite{Nii:2017npz}. We can also consider a complex mass deformation for a vectorial matter. By introducing the mass term $m M_{QQ}$, we find that $B'_S$ and $P_{A1}$ are integrated out. The equation of motion for $M_{QQ}$ leads to a quantum constraint \begin{align} m+Z(\det \, M_{SS} -B_{S}^2)=0, \end{align} which was observed in the previous section for $N_S=4$. \subsubsection*{Superconformal Indices} As an additional test of our analysis, we compute the superconformal index of the 3d $\mathcal{N}=2$ $Spin(7)$ theory with $(N_f, N_S) =(1,4) $. Since the theory is s-confining, the dual description does not contain any gauge group.
The index on the dual side is \scriptsize \begin{align} I_{dual} &=1+x^{1/4} \left(t^2+10 u^2\right)+6 t u^2 x^{3/8}+\sqrt{x} \left(t^4+10 t^2 u^2+56 u^4\right)+x^{5/8} \left(6 t^3 u^2+61 t u^4\right) \nonumber \\ &\quad +x^{3/4} \left(t^6+10 t^4 u^2+\frac{1}{t^2 u^8}+77 t^2 u^4+230 u^6\right)+x^{7/8} \left(6 t^5 u^2+61 t^3 u^4+346 t u^6\right) \nonumber \\ &\quad+x \left(t^8+10 t^6 u^2+77 t^4 u^4+446 t^2 u^6+\frac{10}{t^2 u^6}+771 u^8+\frac{1}{u^8}\right)+x^{9/8} \left(6 t^7 u^2+61 t^5 u^4+402 t^3 u^6+1436 t u^8+\frac{6}{t u^6}\right) \nonumber \\ &\quad +x^{5/4} \left(t^{10}+10 t^8 u^2+77 t^6 u^4+446 t^4 u^6+2007 t^2 u^8+\frac{t^2}{u^8}+\frac{56}{t^2 u^4}+2232 u^{10}+\frac{10}{u^6}\right) \nonumber \\ &\quad +x^{11/8} \left(6 t^9 u^2+61 t^7 u^4+402 t^5 u^6+2017 t^3 u^8+4856 t u^{10}+\frac{6 t}{u^6}+\frac{60}{t u^4}\right) \nonumber \\ &\quad +x^{3/2} \left(t^{12}+10 t^{10} u^2+77 t^8 u^4+446 t^6 u^6+\frac{1}{t^4 u^{16}}+2133 t^4 u^8+\frac{t^4}{u^8}+7398 t^2 u^{10}+\frac{10 t^2}{u^6}+\frac{230}{t^2 u^2}+5776 u^{12}+\frac{76}{u^4}\right) \nonumber \\ &\quad +\cdots, \end{align} \normalsize \noindent where we set $R_f=R_S=\frac{1}{8}$ for simplicity. $t$ and $u$ are the fugacities for the $U(1)_Q$ and $U(1)_S$ symmetries respectively. The magnetic index has the contributions from $M_{QQ},M_{SS},B_S,B'_S,P_{A1}$ and $Z$. For the index on the electric side, we have to sum up the indices from the GNO charges $(0,0,0),\left( \frac{1}{2},\frac{1}{2},0 \right)$ and $(1,1,0)$ up to $O(x^{3/2})$. Remember that the GNO charge $(m_1,m_2,m_3)$ must satisfy the relation $\sum m_i \in \mathbb{Z}$. 
The electric indices are \tiny \begin{align} I_{electric}^{(0,0,0)} &=1+x^{1/4} \left(t^2+10 u^2\right)+6 t u^2 x^{3/8}+\sqrt{x} \left(t^4+10 t^2 u^2+56 u^4\right)+x^{5/8} \left(6 t^3 u^2+61 t u^4\right) \nonumber \\ &\qquad +x^{3/4} \left(t^6+10 t^4 u^2+77 t^2 u^4+230 u^6\right)+x^{7/8} \left(6 t^5 u^2+61 t^3 u^4+346 t u^6\right) \nonumber \\ &\qquad +x \left(t^8+10 t^6 u^2+77 t^4 u^4+446 t^2 u^6+771 u^8\right)+x^{9/8} \left(6 t^7 u^2+61 t^5 u^4+402 t^3 u^6+1436 t u^8\right) \nonumber \\ & \qquad +x^{5/4} \left(t^{10}+10 t^8 u^2+77 t^6 u^4+446 t^4 u^6+2007 t^2 u^8+2232 u^{10}\right)\nonumber \\ &\qquad +x^{11/8} \left(6 t^9 u^2+61 t^7 u^4+402 t^5 u^6+2017 t^3 u^8+4856 t u^{10}\right) \nonumber \\ &\qquad+x^{3/2} \left(t^{12}+10 t^{10} u^2+77 t^8 u^4+446 t^6 u^6+2133 t^4 u^8+7398 t^2 u^{10}+5776 u^{12}\right)+\cdots \\ I^{\left( \frac{1}{2},\frac{1}{2},0 \right)}_{electric} &=\frac{x^{3/4}}{t^2 u^8}+x \left(\frac{10}{t^2 u^6}+\frac{1}{u^8}\right)+\frac{6 x^{9/8}}{t u^6}+x^{5/4} \left(\frac{t^2}{u^8}+\frac{56}{t^2 u^4}+\frac{10}{u^6}\right)+x^{11/8} \left(\frac{6 t}{u^6}+\frac{60}{t u^4}\right)+x^{3/2} \left(\frac{t^4}{u^8}+\frac{10 t^2}{u^6}+\frac{230}{t^2 u^2}+\frac{76}{u^4}\right)+\cdots \\ I_{electric}^{(1,1,0)} &=\frac{x^{3/2}}{t^4 u^{16}} +\cdots \end{align} \normalsize \noindent From the sector with zero GNO charge, we can read off the Higgs branch operators. The second term $x^{1/4} \left(t^2+10 u^2\right)$ represents the mesonic operators $M_{QQ}$ and $M_{SS,ij}$. The third term $6 t u^2 x^{3/8}$ corresponds to $P_{A1,ij}$. The baryonic operators $B_S$ and $B'_S$ are represented as $u^4 x^{1/2}$ and $tu^4 x^{5/8}$ respectively. The higher order terms are the products of the Higgs branch operators, whose flavor indices have to be symmetrized. The index with a GNO charge $\left( \frac{1}{2},\frac{1}{2},0 \right)$ contains the monopole operator. The first term $\frac{x^{3/4}}{t^2 u^8}$ can be regarded as $Z$ (see Table \ref{T14}). 
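This identification can be checked quantitatively: from Table \ref{T14}, $Z$ carries the charges $(U(1)_Q,U(1)_S)=(-2,-8)$ and, for $R_f=R_S=\frac{1}{8}$, \begin{align} R(Z)=2-2R_f-8R_S = 2-\frac{1}{4}-1=\frac{3}{4}, \end{align} so $Z$ contributes exactly $\frac{x^{3/4}}{t^2 u^8}$ to the index.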
The index with a GNO charge $(1,1,0)$ represents $Z^2$. By summing up these three sectors, we observe exact matching between the magnetic and electric indices. \subsection{$(N_f, N_S) =(2,3) $} Let us next consider the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with two vectors and three spinors (see Table \ref{T23}). In this case, we also have a similar relation between $Z^3$ and $Y_{spin}$. Therefore, we expect that the quantum Coulomb branch is one-dimensional although the (semi-)classical analysis suggested a two-dimensional Coulomb branch. We use the coordinate $Z$ to parametrize the Coulomb branch. The Higgs branch is described by the following operators \begin{gather} M_{QQ}:=QQ,~~~M_{SS}:=S^2 \\ P_{A1}:= SQS,~~~ P_{A2}:=SQ^2S. \end{gather} Notice that the spinors can now be anti-symmetrized in their flavor indices, and we omitted the gamma matrices above for simplicity. The superpotential consistent with all the symmetries is \begin{align} W=Z \left[ \det \, M_{QQ} \det \, M_{SS} +P_{A1}^2 M_{QQ} M_{SS}+P_{A1}^2 P_{A2} +P_{A2}^2 M_{SS} \right] +\eta Z, \label{W23} \end{align} where the last term exists only when we put the theory on $\mathbb{S}^1 \times \mathbb{R}^3$. By integrating out the Coulomb branch operator, we obtain a 4d quantum constraint.
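One can verify that each term in the bracket of \eqref{W23} carries the charges $(U(1)_Q,U(1)_S)=(4,6)$ and the $R$-charge $4R_f+6R_S$, which cancel against those of $Z$. For example, \begin{align} Q[P_{A1}^2 P_{A2}]=2 \cdot 1+2=4, \qquad S[P_{A1}^2 P_{A2}]=2 \cdot 2+2=6, \qquad R[Z P_{A1}^2 P_{A2}]=(2-4R_f-6R_S)+(4R_f+6R_S)=2. \end{align}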
\begin{table}[H]\caption{Quantum numbers for $(N_f, N_S) =(2,3) $} \begin{center} \scalebox{0.71}{ \begin{tabular}{|c||c||c|c|c|c|c| } \hline &$Spin(7)$&$SU(2)$&$SU(3)$&$U(1)_Q$&$U(1)_S$&$U(1)_R$ \\ \hline Q& ${\tiny \yng(1)}$&${\tiny \yng(1)}$&1&1&0& $R_f$ \\ $S$ & $\mathbf{2^{N}}=\mathbf{8}$&1&${\tiny \yng(1)}$&0&1& $R_S$ \\ $\lambda$ &$\mathbf{Adj.}$&1&1&0&0&$1$ \\ \hline $\eta=\Lambda_{N_f,N_S}^b$&1&1&1&$4$&$6$&$4(R_f-1)+6(R_S-1) +10$ \\ \hline $M_{QQ}:=QQ$&1&$\tiny \yng(2)$&1&2&0&$2R_f$ \\ $M_{SS}:=SS$&1&1&$\tiny \yng(2)$&0&2&$2R_S$ \\ $P_{A1}:=SQS$&1&$\tiny \yng(1)$&$\tiny \overline{\yng(1)}$&1&2&$R_f+2R_S$ \\ $P_{A2}:=SQ^2S$&1&1&$\tiny \overline{\yng(1)}$&2&2&$2R_f+2R_S$ \\ \hline $Z:=Y_1Y_2^2Y_3$&1&1&1&$-4$&$-6$& $-8 -4 (R_f-1)-6 (R_S-1)=2-4R_f-6R_S$ \\ $Y_{spin}:=Y_1^2Y_2^2Y_3$ $(\phi_1 \ge \phi_3)$&1&1&1&$-4$&$-12$&$-10-4(R_f-1)-12(R_S-1)=6-4R_f-12R_S$ \\ \hline \end{tabular}} \end{center}\label{T23} \end{table} Let us confirm the validity of the superpotential \eqref{W23}. The UV and IR descriptions yield the same parity anomalies. As in the previous case, we can test the $SU(4)$ Higgs branch with $\braket{M_{QQ,22}} =v^2$, where the theory reduces to a 3d $\mathcal{N}=2$ $SU(4)$ gauge theory with one antisymmetric matter and with three (anti-)fundamental flavors. It is not known in the literature whether this low-energy theory is s-confining or not. However, we can show that this theory indeed exhibits an s-confinement phase. Table \ref{Tsu41anti} shows the matter content and quantum numbers of the $SU(4)$ theory.
\begin{table}[H]\caption{Quantum numbers for $SU(4)$ with ${\tiny \protect\yng(1,1)}$ and 3 $({\tiny \protect\yng(1)}+{\tiny \overline{\protect\yng(1)}})$} \begin{center} \scalebox{1}{ \begin{tabular}{|c||c||c|c|c|c|c|c| } \hline &$SU(4)$&$SU(3)$&$SU(3)$&U(1)&$U(1)$&$U(1)$&$U(1)_R$ \\ \hline $A$ &${\tiny \yng(1,1)}$&1&1&1&0&0&$R_A$ \\ $Q$&{\tiny \yng(1)}&${\tiny \yng(1)}$&1&0&1&1&$R_Q$ \\ $\tilde{Q}$&${\tiny \overline{\yng(1)}}$&1&${\tiny \yng(1)}$&0&$-1$&1&$R_Q$ \\ \hline $T:=A^2$ &1&1&1&2&0&0&$2R_A$ \\ $M:=Q\tilde{Q}$&1&${\tiny \yng(1)}$&${\tiny \yng(1)}$&0&0&2&$2R_Q$ \\ $B_A:=AQ^2$ &1&${\tiny \overline{ \yng(1)}}$&1&1&2&2&$R_A+2R_Q$ \\ $\bar{B}_A:=A \tilde{Q}^2$ &1&1&${\tiny \overline{\yng(1)}}$&1&$-2$&2&$R_A+2R_Q$ \\ \hline $Y_{SU(4)}$ &1&1&1&$-2$&0&$-6$&$2-2R_A-6R_Q$ \\ \hline \end{tabular}} \end{center}\label{Tsu41anti} \end{table} The Coulomb branch $Y_{SU(4)}$ corresponds to the breaking $SU(4) \rightarrow SU(2) \times U(1) \times U(1)$. The non-perturbative superpotential becomes \begin{align} W=Y_{SU(4)} (T \det \, M+M B_A \bar{B}_A). \label{su41anti} \end{align} In deriving the above, we assumed that the Coulomb branch is one-dimensional. This is plausible because the theory flows to theories with a one-dimensional Coulomb branch along the Higgs branch directions. For instance, when $M$ gets a vev with rank 1, the low-energy theory becomes a 3d $\mathcal{N}=2$ $SU(3)$ gauge theory with three (anti-)fundamental flavors. This theory has one Coulomb branch coordinate and is also s-confining \cite{Aharony:1997bx}. When $B_A$ or $\bar{B}_A$ gets an expectation value, the theory flows to a 3d $\mathcal{N}=2$ $SU(2)$ gauge theory with four fundamental matters, which is again s-confining and has a one-dimensional Coulomb branch. Finally, when $T$ gets a vev, the theory flows to a 3d $\mathcal{N}=2$ $USp(4)$ theory with six fundamentals, which is s-confining and has a one-dimensional Coulomb branch. We can derive the superpotential \eqref{su41anti} from \eqref{W23}.
Since the global symmetry is enhanced to $SU(3) \times SU(3)$, we decompose the Higgs branch operators as \begin{gather} M_{QQ,11} =: T,~~~M_{SS,ij}=M_{i \underline{j}} +M_{j \underline{i}} \\ P_{A1} = \left( \begin{array}{c} B_A^i+\bar{B}_A^{\underline{i}} \\ v \epsilon^{ijk} (M_{j \underline{k}} - M_{k \underline{j}} ) \end{array} \right),~~~P_{A2}^i = v (B_A^i -\bar{B}_A^{\underline{i}}). \end{gather} By properly rescaling the Coulomb branch operator $Z$, we arrive at the $SU(4)$ superpotential \eqref{su41anti}. We can also test the $G_2$ Higgs branch $ \braket{M_{SS,33}}=v^2$, where the theory reduces to a 3d $\mathcal{N}=2$ $G_2$ gauge theory with four fundamental matters, which is s-confining. We can derive the matter content and the superpotential of the $G_2$ theory from our superpotential. We have to decompose the fields as follows. \begin{gather} M_{QQ,ij}=M_{ij}^{G_2}~~(i,j=1,2),~~~~M_{SS,ab}=M_{a+2,b+2}^{G_2}~~(a,b=1,2) \\ P_{A1} = \begin{pmatrix} vM_{14}^{G_2} & -vM_{13}^{G_2} & B^2_{G_2}\\ vM_{24}^{G_2} & -vM_{23}^{G_2} & -B_{G_2}^1\\ \end{pmatrix},~~~P_{A2}=(v B_{G_2}^3 ,v B_{G_2}^4 ,F_{G_2}) \end{gather} By substituting these expressions into the superpotential \eqref{W23}, the $G_2$ superpotential is reproduced although unnecessary terms like $F_{G_2}(M^{G_2}_{13}M^{G_2}_{24}-M^{G_2}_{14}M^{G_2}_{23})$ are also generated. Presumably, this is because our description only respects $SU(2) \times SU(2) \times U(1) \subset SU(4)$ of the $G_2$ theory. In the RG flow, these terms are supposed to be suppressed. \subsubsection*{Superconformal Indices of $SU(4)$ with ${\tiny \protect\yng(1,1)}$ and 3 $({\tiny \protect\yng(1)}+{\tiny \overline{\protect\yng(1)}})$} We start with the index of the 3d $\mathcal{N}=2$ $SU(4)$ gauge theory with one anti-symmetric matter and with three (anti-)fundamental flavors. Since the theory is s-confining, the confined description also yields the same index.
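With $R_A=R_Q=\frac{1}{6}$, the quantum numbers in Table \ref{Tsu41anti} predict the leading contributions \begin{align} T \rightarrow u^2 x^{1/3}, \qquad M \rightarrow t^2 x^{1/3}, \qquad B_A, \bar{B}_A \rightarrow t^2 u \, x^{1/2}, \qquad Y_{SU(4)} \rightarrow \frac{x^{2/3}}{t^6 u^2}, \end{align} where $R(Y_{SU(4)})=2-2R_A-6R_Q=\frac{2}{3}$. These terms appear in the expansions below.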
The dual index has the contributions from $T,M,B_A,\bar{B}_A$ and $Y_{SU(4)}$. The dual index becomes \tiny \begin{align} I_{dual}^{SU(4)} &=1+x^{1/3} \left(9 t^2+u^2\right)+6 t^2 u \sqrt{x}+x^{2/3} \left(\frac{1}{t^6 u^2}+45 t^4+9 t^2 u^2+u^4\right)+6 t^2 u x^{5/6} \left(9 t^2+u^2\right) \nonumber \\ &\qquad +x \left(165 t^6+\frac{1}{t^6}+66 t^4 u^2+\frac{9}{t^4 u^2}+9 t^2 u^4+u^6\right)+x^{7/6} \left(\frac{6}{t^4 u}+6 t^2 u \left(45 t^4+9 t^2 u^2+u^4\right)\right) \nonumber \\ & \qquad +x^{4/3} \left(\frac{1}{t^{12} u^4}+495 t^8+353 t^6 u^2+\frac{u^2}{t^6}+66 t^4 u^4+\frac{9}{t^4}+9 t^2 u^6+\frac{45}{t^2 u^2}+u^8\right) \nonumber \\ &\qquad +x^{3/2} \left(990 t^8 u+326 t^6 u^3+54 t^4 u^5+\frac{6 u}{t^4}+6 t^2 u^7+\frac{48}{t^2 u}\right) \nonumber \\ &\qquad +x^{5/3} \left(\frac{1}{t^{12} u^2}+\frac{9}{t^{10} u^4}+1287 t^{10}+1431 t^8 u^2+353 t^6 u^4+\frac{u^4}{t^6}+66 t^4 u^6+\frac{9 u^2}{t^4}+9 t^2 u^8+\frac{57}{t^2}+u^{10}+\frac{164}{u^2}\right)+ \cdots, \end{align} \normalsize \noindent where we set $R_A=R_Q=\frac{1}{6}$ for simplicity. $t$ is a fugacity for the $U(1)$ axial symmetry and $u$ counts the number of the anti-symmetric tensor. We did not include the fugacity for $U(1)$ baryon symmetry. For the electric side, we have to sum up the following sectors up to $O(x^{5/3})$. 
\tiny \begin{align} I^{(0,0,0)}_{electric}&= 1+x^{1/3} \left(9 t^2+u^2\right)+6 t^2 u \sqrt{x}+x^{2/3} \left(45 t^4+9 t^2 u^2+u^4\right)+x^{5/6} \left(54 t^4 u+6 t^2 u^3\right)+x \left(165 t^6+66 t^4 u^2+9 t^2 u^4+u^6\right)\nonumber \\ &\qquad +x^{7/6} \left(270 t^6 u+54 t^4 u^3+6 t^2 u^5\right)+x^{4/3} \left(495 t^8+353 t^6 u^2+66 t^4 u^4+9 t^2 u^6+u^8\right) \nonumber \\ &\qquad +x^{3/2} \left(990 t^8 u+326 t^6 u^3+54 t^4 u^5+6 t^2 u^7\right)+x^{5/3} \left(1287 t^{10}+1431 t^8 u^2+353 t^6 u^4+66 t^4 u^6+9 t^2 u^8+u^{10}\right)+\cdots \\ I^{\left(\frac{1}{2},0,0 \right)}_{electric}&=\frac{x^{2/3}}{t^6 u^2}+x \left(\frac{1}{t^6}+\frac{9}{t^4 u^2}\right)+\frac{6 x^{7/6}}{t^4 u}+x^{4/3} \left(\frac{u^2}{t^6}+\frac{9}{t^4}+\frac{45}{t^2 u^2}\right)+x^{3/2} \left(\frac{6 u}{t^4}+\frac{48}{t^2 u}\right)+x^{5/3} \left(\frac{u^4}{t^6}+\frac{9 u^2}{t^4}+\frac{57}{t^2}+\frac{164}{u^2}\right)+\cdots \\ I^{(1,0,0)}_{electric} &=\frac{x^{4/3}}{t^{12} u^4}+\frac{x^{5/3} \left(9 t^2+u^2\right)}{t^{12} u^4}+\cdots \end{align} \normalsize \noindent The sector with zero GNO charge contains the Higgs branch operators. The second term $x^{1/3} \left(9 t^2+u^2\right)$ corresponds to $M$ and $T$. The third term $6 t^2 u \sqrt{x}$ is the baryonic operators $B_{A}$ and $\bar{B}_A$. The sector with a GNO charge $\left(\frac{1}{2},0,0 \right)$ contains the Coulomb branch operator $Y_{SU(4)}$. We observe exact matching of the indices between the electric and magnetic sides. \subsubsection*{Superconformal Indices of $Spin(7)$ with $(N_f,N_S)=(2,3)$} Let us also examine the superconformal indices of the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with $(N_f,N_S)=(2,3)$, which is s-confining. First, the magnetic index has the contributions from $M_{QQ},M_{SS},P_{A1},P_{A2}$ and $Z$. 
The index can be expanded as \tiny \begin{align} I_{magnetic}&=1+x^{1/4} \left(3 t^2+6 u^2\right)+6 t u^2 x^{3/8}+\sqrt{x} \left(6 t^4+21 t^2 u^2+21 u^4\right)+x^{5/8} \left(18 t^3 u^2+36 t u^4\right) \nonumber \\ &+x^{3/4} \left(10 t^6+\frac{1}{t^4 u^6}+45 t^4 u^2+102 t^2 u^4+56 u^6\right)+x^{7/8} \left(36 t^5 u^2+126 t^3 u^4+126 t u^6\right) \nonumber \\ &+x \left(15 t^8+78 t^6 u^2+249 t^4 u^4+\frac{6}{t^4 u^4}+357 t^2 u^6+\frac{3}{t^2 u^6}+126 u^8\right) \nonumber \\ &+x^{9/8} \left(60 t^7 u^2+270 t^5 u^4+542 t^3 u^6+\frac{6}{t^3 u^4}+336 t u^8\right) \nonumber \\ &+x^{5/4} \left(21 t^{10}+120 t^8 u^2+462 t^6 u^4+1001 t^4 u^6+\frac{21}{t^4 u^2}+987 t^2 u^8+\frac{21}{t^2 u^4}+252 u^{10}+\frac{6}{u^6}\right) \nonumber \\ &+x^{11/8} \left(90 t^9 u^2+468 t^7 u^4+1284 t^5 u^6+1722 t^3 u^8+\frac{36}{t^3 u^2}+756 t u^{10}+\frac{18}{t u^4}\right) \nonumber \\ &+x^{3/2} \left(28 t^{12}+171 t^{10} u^2+\frac{1}{t^8 u^{12}}+741 t^8 u^4+1998 t^6 u^6+3207 t^4 u^8+\frac{56}{t^4}+2310 t^2 u^{10}+\frac{10 t^2}{u^6}+\frac{99}{t^2 u^2}+462 u^{12}+\frac{45}{u^4}\right) +\cdots, \label{Imag23} \end{align} \normalsize \noindent where $t$ and $u$ are the fugacities for the $U(1)_Q$ and $U(1)_S$ symmetries. We set $R_f=R_S=\frac{1}{8}$ for simplicity. For the electric side, the index is decomposed into the sectors with different GNO charges. We have to sum up the following sectors up to $O(x^{3/2})$. 
\scriptsize \begin{align} I_{electric}^{(0,0,0)} &=1+x^{1/4} \left(3 t^2+6 u^2\right)+6 t u^2 x^{3/8}+\sqrt{x} \left(6 t^4+21 t^2 u^2+21 u^4\right)+x^{5/8} \left(18 t^3 u^2+36 t u^4\right) \nonumber \\& \qquad +x^{3/4} \left(10 t^6+45 t^4 u^2+102 t^2 u^4+56 u^6\right)+x^{7/8} \left(36 t^5 u^2+126 t^3 u^4+126 t u^6\right) \nonumber \\ &\qquad +x \left(15 t^8+78 t^6 u^2+249 t^4 u^4+357 t^2 u^6+126 u^8\right) +2 x^{9/8} \left(30 t^7 u^2+135 t^5 u^4+271 t^3 u^6+168 t u^8\right) \nonumber \\ &\qquad +x^{5/4} \left(21 t^{10}+120 t^8 u^2+462 t^6 u^4+1001 t^4 u^6+987 t^2 u^8+252 u^{10}\right)\nonumber \\ &\qquad+x^{11/8} \left(90 t^9 u^2+468 t^7 u^4+1284 t^5 u^6+1722 t^3 u^8+756 t u^{10}\right) \nonumber \\ &\qquad +x^{3/2} \left(28 t^{12}+171 t^{10} u^2+741 t^8 u^4+1998 t^6 u^6+3207 t^4 u^8+2310 t^2 u^{10}+462 u^{12}\right)+\cdots \\ I_{electric}^{\left(\frac{1}{2},\frac{1}{2},0 \right)} &= \frac{x^{3/4}}{t^4 u^6}+x \left(\frac{6}{t^4 u^4}+\frac{3}{t^2 u^6}\right)+\frac{6 x^{9/8}}{t^3 u^4}+x^{5/4} \left(\frac{21}{t^4 u^2}+\frac{21}{t^2 u^4}+\frac{6}{u^6}\right)+x^{11/8} \left(\frac{36}{t^3 u^2}+\frac{18}{t u^4}\right) \nonumber \\ &\qquad +x^{3/2} \left(\frac{56}{t^4}+\frac{10 t^2}{u^6}+\frac{99}{t^2 u^2}+\frac{45}{u^4}\right) +\cdots \\ I_{electric}^{(1,1,0)} &= \frac{x^{3/2}}{t^8 u^{12}} +\cdots \end{align} \normalsize \noindent The sector with zero GNO charge contains the Higgs branch operators. The second term $x^{1/4} \left(3 t^2+6 u^2\right)$ corresponds to the mesons $M_{QQ}$ and $M_{SS}$. The third term $6 t u^2 x^{3/8}$ represents $P_{A1}$, while $P_{A2}$ appears as $3 t^2 u^2 x^{1/2}$. The monopole operator is contained in the sector with a GNO charge $\left(\frac{1}{2},\frac{1}{2},0 \right)$. The first term $\frac{x^{3/4}}{t^4 u^6}$ is identified with the monopole operator $Z$. The subsequent terms are regarded as products of $Z$ with the Higgs branch operators. By summing up these three sectors, we reproduce the magnetic index \eqref{Imag23}.
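The sector sum can also be verified symbolically at low orders. The sketch below (our check, using the series transcribed above and truncated at $O(x)$) confirms that the GNO sectors $(0,0,0)$ and $\left(\frac{1}{2},\frac{1}{2},0\right)$ reproduce the magnetic index \eqref{Imag23} to this order:

```python
# Term-by-term check, truncated at O(x): the magnetic index of Spin(7) with
# (N_f, N_S) = (2, 3) equals the sum of the displayed electric GNO sectors.
import sympy as sp

t, u, x = sp.symbols('t u x', positive=True)
R = sp.Rational

I_mag = (1 + x**R(1,4)*(3*t**2 + 6*u**2) + 6*t*u**2*x**R(3,8)
         + sp.sqrt(x)*(6*t**4 + 21*t**2*u**2 + 21*u**4)
         + x**R(5,8)*(18*t**3*u**2 + 36*t*u**4)
         + x**R(3,4)*(10*t**6 + 1/(t**4*u**6) + 45*t**4*u**2
                      + 102*t**2*u**4 + 56*u**6)
         + x**R(7,8)*(36*t**5*u**2 + 126*t**3*u**4 + 126*t*u**6)
         + x*(15*t**8 + 78*t**6*u**2 + 249*t**4*u**4 + 6/(t**4*u**4)
              + 357*t**2*u**6 + 3/(t**2*u**6) + 126*u**8))

I_000 = (1 + x**R(1,4)*(3*t**2 + 6*u**2) + 6*t*u**2*x**R(3,8)
         + sp.sqrt(x)*(6*t**4 + 21*t**2*u**2 + 21*u**4)
         + x**R(5,8)*(18*t**3*u**2 + 36*t*u**4)
         + x**R(3,4)*(10*t**6 + 45*t**4*u**2 + 102*t**2*u**4 + 56*u**6)
         + x**R(7,8)*(36*t**5*u**2 + 126*t**3*u**4 + 126*t*u**6)
         + x*(15*t**8 + 78*t**6*u**2 + 249*t**4*u**4
              + 357*t**2*u**6 + 126*u**8))

I_half = x**R(3,4)/(t**4*u**6) + x*(6/(t**4*u**4) + 3/(t**2*u**6))

diff = sp.simplify(I_mag - (I_000 + I_half))
print(diff)  # 0
```

Extending the check to higher orders only requires transcribing more terms of each sector; the $(1,1,0)$ sector first contributes at $O(x^{3/2})$.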
\subsection{$(N_f,N_S) =(3,2) $} Let us move on to the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with three vectors and two spinors. This case will require two Coulomb branch coordinates even at the quantum level. First, we enumerate the Higgs branch coordinates. \begin{gather} M_{QQ} :=QQ,~~~M_{SS} :=S^2\\ P_{S3} := SQ^3S,~~~P_{A1}:=SQS,~~~ P_{A2} :=SQ^2S \end{gather} Table \ref{T32} below shows the matter contents and their quantum numbers. We also list the 4d dynamical scale and the moduli coordinates. \begin{table}[H]\caption{Quantum numbers for $(N_f,N_S) =(3,2) $} \begin{center} \scalebox{0.75}{ \begin{tabular}{|c||c||c|c|c|c|c| } \hline &$Spin(7)$&$SU(3)$&$SU(2)$&$U(1)_Q$&$U(1)_S$&$U(1)_R$ \\ \hline $Q$& ${\tiny \yng(1)}$&${\tiny \yng(1)}$&1&1&0& $R_f$ \\ $S$ & $\mathbf{2^{N}}=\mathbf{8}$&1&${\tiny \yng(1)}$&0&1& $R_S$ \\ $\lambda$ &$\mathbf{Adj.}$&1&1&0&0&$1$ \\ \hline $\eta=\Lambda_{N_f,N_S}^b$&1&1&1&$2N_f$&$2N_S$&$6(R_f-1)+4(R_S-1) +10$ \\ \hline $M_{QQ}:=QQ$&1&$\tiny \yng(2)$&1&2&0&$2R_f$ \\ $M_{SS}:=SS$&1&1&$\tiny \yng(2)$&0&2&$2R_S$ \\ $P_{S3}:=SQ^3S$&1&1&${\tiny \yng(2)}$&3&2& $3R_f+2R_S$ \\ $P_{A1}:=SQS$&1&${\tiny \yng(1)}$&1&1&2&$R_f+2R_S$ \\ $P_{A2}:=SQ^2S$ &1&${\tiny \bar{ \yng(1)}}$&1&2&2& $2R_f+2R_S$ \\ \hline $Z:=Y_1Y_2^2Y_3$&1&1&1&$-6$&$-4$& $-8 -6 (R_f-1) -4(R_S-1)=2-6R_f-4R_S$ \\ $Y:=\sqrt{Y_1 Z}$ $(\phi_1 \ge \phi_3)$ &1&1&1&$-3$&$-4$& $-5-3(R_f-1) -4(R_S-1)=2-3R_f-4R_S$ \\ \hline \end{tabular}} \end{center}\label{T32} \end{table} From the zero-mode counting of the Coulomb branch operators, we expect that two Coulomb branch directions remain unlifted. One coordinate, denoted by $Z$, is globally defined on the whole Weyl chamber, while the other, $Y$ or $Y_{spin}$, is defined only on the region $\phi_1 > \phi_3$. From various consistency checks, we assume that these two directions are $Z$ and $Y$ in this case.
Unlike the other examples, we cannot find any simple relation between them (we would need to include at least a fractional power of the Higgs branch operators). Consequently, two coordinates are necessary for the quantum Coulomb branch of $(N_f,N_S)=(3,2)$. One can write down the superpotential consistent with all the symmetries listed in Table \ref{T32}. \begin{align} W &= Z \left( \det \, M_{QQ} \det \, M_{SS} -\det \, P_{S3}+ P_{A2}^2 M_{QQ} -\frac{1}{2} P_{A1}^2 M_{QQ}^2 \right) \nonumber \\ & \qquad +Y \left( P_{A1}P_{A2} -M_{SS} P_{S3} \right)+\eta Z, \end{align} where the last term appears when we put the theory on $\mathbb{S}^1 \times \mathbb{R}^3$ and is absent in a purely 3d discussion. We can check the validity of this s-confinement phase in various ways. First, remember that the 4d superpotential for $(N_f,N_S) =(3,2) $ takes the following form \cite{Cho:1997kr}, \begin{align} W_{4d}^{(N_f,N_S)=(3,2)} &=X_1 \left( \det \, M_{QQ} \det \, M_{SS} - \det \, P_{S3}+ P_{A2}^2 M_{QQ} -\frac{1}{2} P_{A1}^2 M_{QQ}^2 +\eta \right) \nonumber \\ &\qquad +X_2 \left( P_{A1}P_{A2} - M_{SS} P_{S3} \right) \end{align} and this is easily reproduced by integrating out the two Coulomb branch operators. Second, we consider the Higgs branch along which the gauge group is broken to $G_2$. This can be achieved by higgsing the spinorial matter, say $\braket{M_{SS,22}}=v^2$. In order to properly obtain the $G_2$ superpotential, we have to rename the fields as \begin{gather} M_{QQ} =: M^{G_2}_{ij} ~(i,j \le 3),~~~~M_{SS,11} =: M^{G_2}_{44},\\ P_{A,1}=:2v M^{G_2}_{i4},~~~P_{A,2}=: v B_{G_2}^i \\ P_{S3,12}=:v F_{G_2},~~~P_{S3,11}=: \sum_{i=1,2,3} 2B^i_{G_2} M_{i4}^{G_2} + B^4_{G_2} M_{44}^{G_2},~~~P_{S3,22}=:-v^2B_{G_2}^4, \end{gather} where $Y$ and $P_{S3,11}$ become massive and are integrated out. By substituting these expressions into the superpotential, we arrive at the $G_2$ superpotential \cite{Nii:2017npz}.
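As a sanity check (our own bookkeeping), every term of the $(N_f,N_S)=(3,2)$ superpotential above should be neutral under $U(1)_Q \times U(1)_S$ and carry $U(1)_R$ charge $2$, with the charges read off from Table \ref{T32}. A minimal sketch:

```python
# Abelian-charge bookkeeping for the (N_f, N_S) = (3, 2) superpotential:
# each term should carry charges (0, 0, 2) under (U(1)_Q, U(1)_S, U(1)_R).
import sympy as sp

Rf, Rs = sp.symbols('R_f R_S')

# (U(1)_Q, U(1)_S, U(1)_R) charges transcribed from Table T32
charge = {
    'Z':   (-6, -4, 2 - 6*Rf - 4*Rs),
    'Y':   (-3, -4, 2 - 3*Rf - 4*Rs),
    'MQQ': ( 2,  0, 2*Rf),
    'MSS': ( 0,  2, 2*Rs),
    'PS3': ( 3,  2, 3*Rf + 2*Rs),
    'PA1': ( 1,  2, Rf + 2*Rs),
    'PA2': ( 2,  2, 2*Rf + 2*Rs),
    'eta': ( 6,  4, 6*Rf + 4*Rs),
}

def total(*ops):
    return tuple(sp.expand(sum(charge[o][i] for o in ops)) for i in range(3))

terms = [
    total('Z', *3*['MQQ'], *2*['MSS']),      # Z det M_QQ det M_SS (3x3, 2x2 dets)
    total('Z', *2*['PS3']),                  # Z det P_S3 (2x2 det)
    total('Z', 'PA2', 'PA2', 'MQQ'),         # Z P_A2^2 M_QQ
    total('Z', 'PA1', 'PA1', 'MQQ', 'MQQ'),  # Z P_A1^2 M_QQ^2
    total('Y', 'PA1', 'PA2'),                # Y P_A1 P_A2
    total('Y', 'MSS', 'PS3'),                # Y M_SS P_S3
    total('eta', 'Z'),                       # eta Z
]
print(terms)  # each (0, 0, 2)
```

The same bookkeeping also confirms the identification used above: the charges of $\eta$ are exactly opposite to those of $Z$, up to the shift of the R-charge by 2.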
Next, let us study another Higgs branch $\braket{M_{QQ,33}}=v^2$ along which the $Spin(7)$ group is broken to $SU(4)$. The low-energy theory becomes a 3d $\mathcal{N}=2$ $SU(4)$ gauge theory with two antisymmetric matters and with two (anti-)fundamental flavors. This was studied in \cite{Csaki:2014cwa} (see also \cite{Nii:2016jzi}). Table \ref{Tsu42anti} shows the matter contents, moduli coordinates and their quantum numbers. This theory has a two-dimensional Coulomb branch parametrized by $Y$ and $\tilde{Y}$. These two monopole operators correspond to the breaking $SU(4) \rightarrow SU(2) \times U(1) \times U(1)$ and $SU(4) \rightarrow SU(2) \times SU(2) \times U(1)$ respectively. This theory is known to be s-confining and the effective superpotential becomes \begin{align} W= Y(T^2 \det M_0 +TB \bar{B} +\det M_2 ) +\tilde{Y} (M_0M_2 +B \bar{B}) \label{Wsu4}, \end{align} where we neglected the relative coefficients for simplicity. \begin{table}[H]\caption{$SU(4)$ with 2 ${\tiny \protect\yng(1,1)}$ and 2 $({\tiny \protect\yng(1)} +\protect\overline{{\tiny \protect\yng(1)}})$} \begin{center} \scalebox{1}{ \begin{tabular}{|c||c||c|c|c|c|c|c|c| } \hline &$SU(4)$&$SU(2)$& $SU(2)$&$SU(2)$&$U(1)$&$U(1)$&$U(1)$&$U(1)_R$ \\ \hline $A$&${\tiny \yng(1,1)}$&${\tiny \yng(1)}$&1&1&1&0&0 &0\\ $Q$&${\tiny \yng(1)}$&1&${\tiny \yng(1)}$&1&0&1&1&0 \\ $\tilde{Q}$&${\tiny \overline{\yng(1)}}$&1&1&${\tiny \yng(1)}$&0&1&$-1$&0 \\ \hline $T:=A^2$&1&${\tiny \yng(2)}$&1&1&2&0&0&0 \\ $M_0:=Q \tilde{Q}$&1&1&${\tiny \yng(1)}$&${\tiny \yng(1)}$&0&2&0&0 \\ $M_2:=Q A^2\tilde{Q}$&1&1&${\tiny \yng(1)}$&${\tiny \yng(1)}$&2&2&0&0 \\ $B:=AQ^2$&1&${\tiny \yng(1)}$&1&1&1&2&2&0 \\ $\bar{B}:=A\tilde{Q}^2$ &1&${\tiny \yng(1)}$&1&1&1&2&$-2$&0 \\ \hline $Y$&1&1&1&1&$-4$&$-4$&0&2 \\ $\tilde{Y}$&1&1&1&1&$-2$&$-4$&0&2 \\ \hline \end{tabular}} \end{center}\label{Tsu42anti} \end{table} \noindent Since the global symmetries are enhanced to $SU(2) \times SU(2) \times SU(2) $, the fields are decomposed into 
\begin{gather} M_{QQ,ij \le 2} =: T, ~~M_{SS,ii} =:M_{0 ii},~~ M_{SS,12} =: \frac{M_{0,12} +M_{0,21}}{2}, \\ P_{S3,ii} =: vM_{2,ii}, ~~P_{S3,12}=: \frac{v(M_{2,12} +M_{2,21}) }{2},\\ P_{A,1i}=: \frac{B_i+\bar{B}_i}{\sqrt{2}}~(i=1,2),~~P_{A,1,3} = \frac{v(M_{0,12} -M_{0,21})}{\sqrt{2}}\\ P_{A,2,i} =: \frac{v\epsilon^{ij}(B-\bar{B})_j }{ \sqrt{2}},~~P_{A,2,3} =: \frac{M_{2,12} -M_{2,21}}{\sqrt{2}}. \end{gather} By substituting these expressions into the superpotential, we reproduce the superpotential \eqref{Wsu4}. \subsubsection*{Superconformal Indices} As another non-trivial check of our analysis, we study the superconformal indices of the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with $(N_f,N_S)=(3,2)$. Since the dual description has no gauge group, the index is simple and expanded as \tiny \begin{align} I_{dual}&=1+x^{1/3} \left(\frac{1}{t^6 u^4}+6 t^2+3 u^2\right)+3 t u^2 \sqrt{x}+x^{2/3} \left(\frac{1}{t^{12} u^8}+\frac{3}{t^6 u^2}+\frac{6}{t^4 u^4}+21 t^4+21 t^2 u^2+6 u^4\right) \nonumber \\ &\quad +x^{5/6} \left(\frac{3}{t^5 u^2}+\frac{1}{t^3 u^4}+21 t^3 u^2+9 t u^4\right)+x \left(\frac{1}{t^{18} u^{12}}+\frac{3}{t^{12} u^6}+\frac{6}{t^{10} u^8}+56 t^6+\frac{6}{t^6}+81 t^4 u^2+\frac{21}{t^4 u^2}+51 t^2 u^4+\frac{21}{t^2 u^4}+10 u^6\right) \nonumber \\ & \quad +x^{7/6} \left(\frac{3}{t^{11} u^6}+\frac{1}{t^9 u^8}+81 t^5 u^2+\frac{9}{t^5}+71 t^3 u^4+\frac{21}{t^3 u^2}+18 t u^6+\frac{6}{t u^4}\right) +x^{4/3} \biggl(\frac{1}{t^{24} u^{16}}+\frac{3}{t^{18} u^{10}}+\frac{6}{t^{16} u^{12}} \nonumber \\ &\qquad \quad +\frac{6}{t^{12} u^4}+\frac{21}{t^{10} u^6}+\frac{21}{t^8 u^8}+126 t^8+231 t^6 u^2+\frac{10 u^2}{t^6}+231 t^4 u^4+\frac{51}{t^4}+96 t^2 u^6+\frac{81}{t^2 u^2}+15 u^8+\frac{56}{u^4}\biggr)+\cdots, \label{dual} \end{align} \normalsize \noindent where $t$ and $u$ are the fugacities for the $U(1)_Q$ and $U(1)_S$ symmetries. We set $R_f=R_S=\frac{1}{6}$ for simplicity. Next, let us consider the index on the electric side for each GNO charge.
We start with the sector with zero GNO charge. \tiny \begin{align} I_{electric}^{(0,0,0)} &=1+x^{1/3} \left(6 t^2+3 u^2\right)+3 t u^2 \sqrt{x}+x^{2/3} \left(21 t^4+21 t^2 u^2+6 u^4\right)+x^{5/6} \left(21 t^3 u^2+9 t u^4\right)+x \left(56 t^6+81 t^4 u^2+51 t^2 u^4+10 u^6\right) \nonumber \\ &+x^{7/6} \left(81 t^5 u^2+71 t^3 u^4+18 t u^6\right)+x^{4/3} \left(126 t^8+231 t^6 u^2+231 t^4 u^4+96 t^2 u^6+15 u^8\right)+x^{3/2} \left(231 t^7 u^2+300 t^5 u^4+160 t^3 u^6+30 t u^8\right)\nonumber \\ &+x^{5/3} \left(252 t^{10}+546 t^8 u^2+746 t^6 u^4+486 t^4 u^6+156 t^2 u^8+21 u^{10}\right)+3 t x^{11/6} \left(182 t^8 u^2+305 t^6 u^4+250 t^4 u^6+96 t^2 u^8+15 u^{10}\right) +\cdots \end{align} \normalsize \noindent This sector contains only the Higgs branch operators. $M_{QQ}, M_{SS}, P_{S3}, P_{A1}$ and $ P_{A2}$ appear as $6t^2 x^{1/3}, 3 u^2 x^{1/3}, 3t^3u^2 x^{5/6}, 3 t u^2 x^{1/2}$ and $3 t^2 u^2 x^{2/3}$, respectively. These are consistent with our analysis (see Table \ref{T32}). The next contribution comes from the sector with a GNO charge $\left(\frac{1}{2},\frac{1}{2},0 \right)$.
\tiny \begin{align} I_{electric}^{\left(\frac{1}{2},\frac{1}{2},0 \right)} &=\frac{x^{1/3}}{t^6 u^4}+x^{2/3} \left(\frac{3}{t^6 u^2}+\frac{6}{t^4 u^4}\right)+x^{5/6} \left(\frac{3}{t^5 u^2}+\frac{1}{t^3 u^4}\right)+x \left(\frac{6}{t^6}+\frac{21}{t^4 u^2}+\frac{21}{t^2 u^4}\right)+x^{7/6} \left(\frac{9}{t^5}+\frac{21}{t^3 u^2}+\frac{6}{t u^4}\right)\nonumber \\ &\quad+x^{4/3} \left(\frac{10 u^2}{t^6}+\frac{51}{t^4}+\frac{81}{t^2 u^2}+\frac{56}{u^4}\right)+x^{3/2} \left(\frac{18 u^2}{t^5}+\frac{68}{t^3}+\frac{21 t}{u^4}+\frac{81}{t u^2}\right)+x^{5/3} \left(\frac{15 u^4}{t^6}+\frac{96 u^2}{t^4}+\frac{126 t^2}{u^4}+\frac{216}{t^2}+\frac{231}{u^2}\right)\nonumber \\ &\quad +x^{11/6} \left(\frac{30 u^4}{t^5}+\frac{56 t^3}{u^4}+\frac{152 u^2}{t^3}+\frac{231 t}{u^2}+\frac{270}{t}\right)+x^2 \left(\frac{21 u^6}{t^6}+\frac{156 u^4}{t^4}+\frac{252 t^4}{u^4}+\frac{441 u^2}{t^2}+\frac{546 t^2}{u^2}+650\right)+\cdots \end{align} \normalsize \noindent This sector contains two Coulomb branch operators. From Table \ref{T32}, the first term $\frac{x^{1/3}}{t^6 u^4}$ is identified with the operator $Z$. The other operator $Y$ also appears in this sector as $\frac{x^{5/6}}{t^3 u^4}$. The GNO charge (or the vev of $Z$) breaks the gauge group to $Spin(3) \times U(2)$. Under this breaking, the vector matters supply a $\mathbf{3}$ representation of $Spin(3)$. Consequently, $Y$ is understood also as $Y \sim Z t^3 x^{1/2} \sim Z Q^3$, where $ZQ^3$ is regarded as the product with the monopole and the $Spin(3)$ baryon. We cannot UV-complete $Q^3$ into a gauge invariant operator in terms of the UV elementary fields. Therefore, we need two monopole operators for the quantum Coulomb moduli. Up to $O(x^2)$, we have to also include the following sectors. 
\tiny \begin{align} I_{electric}^{(1,1,0)} &=\frac{x^{2/3}}{t^{12} u^8}+x \left(\frac{3}{t^{12} u^6}+\frac{6}{t^{10} u^8}\right)+x^{7/6} \left(\frac{3}{t^{11} u^6}+\frac{1}{t^9 u^8}\right)+x^{4/3} \left(\frac{6}{t^{12} u^4}+\frac{21}{t^{10} u^6}+\frac{21}{t^8 u^8}\right)+x^{3/2} \left(\frac{9}{t^{11} u^4}+\frac{21}{t^9 u^6}+\frac{6}{t^7 u^8}\right) \nonumber \\ &+x^{5/3} \left(\frac{10}{t^{12} u^2}+\frac{51}{t^{10} u^4}+\frac{81}{t^8 u^6}+\frac{56}{t^6 u^8}\right)+x^{11/6} \left(\frac{18}{t^{11} u^2}+\frac{68}{t^9 u^4}+\frac{81}{t^7 u^6}+\frac{21}{t^5 u^8}\right)+x^2 \left(\frac{15}{t^{12}}+\frac{96}{t^{10} u^2}+\frac{216}{t^8 u^4}+\frac{231}{t^6 u^6}+\frac{126}{t^4 u^8}\right)+\cdots \\ I_{electric}^{\left(\frac{3}{2},\frac{3}{2},0 \right)} &=\frac{x}{t^{18} u^{12}}+x^{4/3} \left(\frac{3}{t^{18} u^{10}}+\frac{6}{t^{16} u^{12}}\right)+x^{3/2} \left(\frac{3}{t^{17} u^{10}}+\frac{1}{t^{15} u^{12}}\right)+x^{5/3} \left(\frac{6}{t^{18} u^8}+\frac{21}{t^{16} u^{10}}+\frac{21}{t^{14} u^{12}}\right) \nonumber \\ &+x^{11/6} \left(\frac{9}{t^{17} u^8}+\frac{21}{t^{15} u^{10}}+\frac{6}{t^{13} u^{12}}\right)+x^2 \left(\frac{10}{t^{18} u^6}+\frac{51}{t^{16} u^8}+\frac{81}{t^{14} u^{10}}+\frac{56}{t^{12} u^{12}}\right)+\cdots \\ I_{electric}^{(2,2,0)}&=\frac{x^{4/3}}{t^{24} u^{16}}+x^{5/3} \left(\frac{3}{t^{24} u^{14}}+\frac{6}{t^{22} u^{16}}\right)+x^{11/6} \left(\frac{3}{t^{23} u^{14}}+\frac{1}{t^{21} u^{16}}\right)+x^2 \left(\frac{6}{t^{24} u^{12}}+\frac{21}{t^{22} u^{14}}+\frac{21}{t^{20} u^{16}}\right)+\cdots \\ I_{electric}^{\left(\frac{5}{2},\frac{5}{2},0 \right)}&=\frac{x^{5/3}}{t^{30} u^{20}}+\frac{3 x^2 \left(2 t^2+u^2\right)}{t^{30} u^{20}} +\cdots,~~I_{electric}^{(3,3,0)}=\frac{x^2}{t^{36} u^{24}} +\cdots,~~ I_{electric}^{(1,0,0)} = \frac{x^{5/3}}{t^6 u^8}+\frac{6 x^2}{t^4 u^8} +\cdots,~~ I_{electric}^{(3/2,1/2,0)} =\frac{x^2}{t^{12} u^{12}}+\cdots. \end{align} \normalsize \noindent These indices are consistent with the index of the dual side \eqref{dual}. 
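The lowest orders of this consistency can again be verified symbolically. The sketch below (ours, using the series transcribed above and truncated at $O(x^{5/6})$) confirms that the sectors $(0,0,0)$, $\left(\frac{1}{2},\frac{1}{2},0\right)$ and $(1,1,0)$ reproduce the dual index \eqref{dual} to this order:

```python
# Sector-sum check, truncated at O(x^{5/6}), for Spin(7) with (N_f, N_S) = (3, 2).
import sympy as sp

t, u, x = sp.symbols('t u x', positive=True)
R = sp.Rational

I_dual = (1 + x**R(1,3)*(1/(t**6*u**4) + 6*t**2 + 3*u**2) + 3*t*u**2*sp.sqrt(x)
          + x**R(2,3)*(1/(t**12*u**8) + 3/(t**6*u**2) + 6/(t**4*u**4)
                       + 21*t**4 + 21*t**2*u**2 + 6*u**4)
          + x**R(5,6)*(3/(t**5*u**2) + 1/(t**3*u**4) + 21*t**3*u**2 + 9*t*u**4))

I_000 = (1 + x**R(1,3)*(6*t**2 + 3*u**2) + 3*t*u**2*sp.sqrt(x)
         + x**R(2,3)*(21*t**4 + 21*t**2*u**2 + 6*u**4)
         + x**R(5,6)*(21*t**3*u**2 + 9*t*u**4))

I_hh0 = (x**R(1,3)/(t**6*u**4)
         + x**R(2,3)*(3/(t**6*u**2) + 6/(t**4*u**4))
         + x**R(5,6)*(3/(t**5*u**2) + 1/(t**3*u**4)))

I_110 = x**R(2,3)/(t**12*u**8)

diff = sp.simplify(I_dual - (I_000 + I_hh0 + I_110))
print(diff)  # 0
```

Note that the $Z^n$ towers from the $(n/2,n/2,0)$ sectors enter one by one as the truncation order is raised, matching the $\frac{1}{t^{6n}u^{4n}}$ terms of the dual index.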
\if0 \begin{align} I_{electric}^{(3,2)} &=1+x^{1/3} \left(\frac{1}{t^6 u^4}+6 t^2+3 u^2\right)+3 t u^2 \sqrt{x}+x^{2/3} \left(\frac{1}{t^{12} u^8}+\frac{3}{t^6 u^2}+\frac{6}{t^4 u^4}+21 t^4+21 t^2 u^2+6 u^4\right)\nonumber \\ &+x^{5/6} \left(\frac{3}{t^5 u^2}+\frac{1}{t^3 u^4}+21 t^3 u^2+9 t u^4\right) \nonumber \\ &+x \left(\frac{1}{t^{18} u^{12}}+\frac{3}{t^{12} u^6}+\frac{6}{t^{10} u^8}+56 t^6+\frac{6}{t^6}+81 t^4 u^2+\frac{21}{t^4 u^2}+51 t^2 u^4+\frac{21}{t^2 u^4}+10 u^6\right) \nonumber \\ &+x^{7/6} \left(\frac{3}{t^{11} u^6}+\frac{1}{t^9 u^8}+81 t^5 u^2+\frac{9}{t^5}+71 t^3 u^4+\frac{21}{t^3 u^2}+18 t u^6+\frac{6}{t u^4}\right) \nonumber \\ &+x^{4/3} \left(\frac{1}{t^{24} u^{16}}+\frac{3}{t^{18} u^{10}}+\frac{6}{t^{16} u^{12}}+\frac{6}{t^{12} u^4}+\frac{21}{t^{10} u^6}+\frac{21}{t^8 u^8}+126 t^8+231 t^6 u^2+\frac{10 u^2}{t^6}+231 t^4 u^4+\frac{51}{t^4}+96 t^2 u^6+\frac{81}{t^2 u^2}+15 u^8+\frac{56}{u^4}\right) \nonumber \\ &+x^{3/2} \left(\frac{3}{t^{17} u^{10}}+\frac{1}{t^{15} u^{12}}+\frac{9}{t^{11} u^4}+\frac{21}{t^9 u^6}+\frac{6}{t^7 u^8}+231 t^7 u^2+300 t^5 u^4+\frac{18 u^2}{t^5}+160 t^3 u^6+\frac{68}{t^3}+30 t u^8+\frac{21 t}{u^4}+\frac{81}{t u^2}\right) \nonumber \\ &+x^{5/3} \left(\frac{1}{t^{30} u^{20}}+\frac{3}{t^{24} u^{14}}+\frac{6}{t^{22} u^{16}}+\frac{6}{t^{18} u^8}+\frac{21}{t^{16} u^{10}}+\frac{21}{t^{14} u^{12}}+\frac{10}{t^{12} u^2}+\frac{51}{t^{10} u^4}+252 t^{10}+\frac{81}{t^8 u^6}+546 t^8 u^2+\frac{57}{t^6 u^8}+746 t^6 u^4+\frac{15 u^4}{t^6}+486 t^4 u^6+\frac{96 u^2}{t^4}+156 t^2 u^8+\frac{126 t^2}{u^4}+\frac{216}{t^2}+21 u^{10}+\frac{231}{u^2}\right) \nonumber \\ &+x^{11/6} \left(\frac{3}{t^{23} u^{14}}+\frac{1}{t^{21} u^{16}}+\frac{9}{t^{17} u^8}+\frac{21}{t^{15} u^{10}}+\frac{6}{t^{13} u^{12}}+\frac{18}{t^{11} u^2}+\frac{68}{t^9 u^4}+546 t^9 u^2+\frac{81}{t^7 u^6}+915 t^7 u^4+\frac{21}{t^5 u^8}+750 t^5 u^6+\frac{30 u^4}{t^5}+288 t^3 u^8+\frac{56 t^3}{u^4}+\frac{152 u^2}{t^3}+45 t u^{10}+\frac{231 t}{u^2}+\frac{270}{t}\right)\nonumber 
\\ &+x^2 \left(\frac{1}{t^{36} u^{24}}+\frac{3}{t^{30} u^{18}}+\frac{6}{t^{28} u^{20}}+\frac{6}{t^{24} u^{12}}+\frac{21}{t^{22} u^{14}}+\frac{21}{t^{20} u^{16}}+\frac{10}{t^{18} u^6}+\frac{51}{t^{16} u^8}+\frac{81}{t^{14} u^{10}}+\frac{57}{t^{12} u^{12}}+462 t^{12}+\frac{15}{t^{12}}+1134 t^{10} u^2+\frac{96}{t^{10} u^2}+1941 t^8 u^4+\frac{216}{t^8 u^4}+1725 t^6 u^6+\frac{21 u^6}{t^6}+\frac{231}{t^6 u^6}+861 t^4 u^8+\frac{132}{t^4 u^8}+\frac{252 t^4}{u^4}+\frac{156 u^4}{t^4}+231 t^2 u^{10}+\frac{546 t^2}{u^2}+\frac{441 u^2}{t^2}+28 u^{12}+637\right)+O\left(x^{13/6}\right) \end{align} \begin{align} I_{magnetic}^{(3,2)} &=1+x^{1/3} \left(\frac{1}{t^6 u^4}+6 t^2+3 u^2\right)+3 t u^2 \sqrt{x}+x^{2/3} \left(\frac{1}{t^{12} u^8}+\frac{3}{t^6 u^2}+\frac{6}{t^4 u^4}+21 t^4+21 t^2 u^2+6 u^4\right)\\&+x^{5/6} \left(\frac{3}{t^5 u^2}+\frac{1}{t^3 u^4}+21 t^3 u^2+9 t u^4\right)+x \left(\frac{1}{t^{18} u^{12}}+\frac{3}{t^{12} u^6}+\frac{6}{t^{10} u^8}+56 t^6+\frac{6}{t^6}+81 t^4 u^2+\frac{21}{t^4 u^2}+51 t^2 u^4+\frac{21}{t^2 u^4}+10 u^6\right) \\ &+x^{7/6} \left(\frac{3}{t^{11} u^6}+\frac{1}{t^9 u^8}+81 t^5 u^2+\frac{9}{t^5}+71 t^3 u^4+\frac{21}{t^3 u^2}+18 t u^6+\frac{6}{t u^4}\right) \\ &+x^{4/3} \left(\frac{1}{t^{24} u^{16}}+\frac{3}{t^{18} u^{10}}+\frac{6}{t^{16} u^{12}}+\frac{6}{t^{12} u^4}+\frac{21}{t^{10} u^6}+\frac{21}{t^8 u^8}+126 t^8+231 t^6 u^2+\frac{10 u^2}{t^6}+231 t^4 u^4+\frac{51}{t^4}+96 t^2 u^6+\frac{81}{t^2 u^2}+15 u^8+\frac{56}{u^4}\right)+x^{3/2} \left(\frac{3}{t^{17} u^{10}}+\frac{1}{t^{15} u^{12}}+\frac{9}{t^{11} u^4}+\frac{21}{t^9 u^6}+\frac{6}{t^7 u^8}+231 t^7 u^2+300 t^5 u^4+\frac{18 u^2}{t^5}+160 t^3 u^6+\frac{68}{t^3}+30 t u^8+\frac{21 t}{u^4}+\frac{81}{t u^2}\right)+x^{5/3} \left(\frac{1}{t^{30} u^{20}}+\frac{3}{t^{24} u^{14}}+\frac{6}{t^{22} u^{16}}+\frac{6}{t^{18} u^8}+\frac{21}{t^{16} u^{10}}+\frac{21}{t^{14} u^{12}}+\frac{10}{t^{12} u^2}+\frac{51}{t^{10} u^4}+252 t^{10}+\frac{81}{t^8 u^6}+546 t^8 u^2+\frac{57}{t^6 u^8}+746 t^6 u^4+\frac{15 
u^4}{t^6}+486 t^4 u^6+\frac{96 u^2}{t^4}+156 t^2 u^8+\frac{126 t^2}{u^4}+\frac{216}{t^2}+21 u^{10}+\frac{231}{u^2}\right)+x^{11/6} \left(\frac{3}{t^{23} u^{14}}+\frac{1}{t^{21} u^{16}}+\frac{9}{t^{17} u^8}+\frac{21}{t^{15} u^{10}}+\frac{6}{t^{13} u^{12}}+\frac{18}{t^{11} u^2}+\frac{68}{t^9 u^4}+546 t^9 u^2+\frac{81}{t^7 u^6}+915 t^7 u^4+\frac{21}{t^5 u^8}+750 t^5 u^6+\frac{30 u^4}{t^5}+288 t^3 u^8+\frac{56 t^3}{u^4}+\frac{152 u^2}{t^3}+45 t u^{10}+\frac{231 t}{u^2}+\frac{270}{t}\right)+x^2 \left(\frac{1}{t^{36} u^{24}}+\frac{3}{t^{30} u^{18}}+\frac{6}{t^{28} u^{20}}+\frac{6}{t^{24} u^{12}}+\frac{21}{t^{22} u^{14}}+\frac{21}{t^{20} u^{16}}+\frac{10}{t^{18} u^6}+\frac{51}{t^{16} u^8}+\frac{81}{t^{14} u^{10}}+\frac{57}{t^{12} u^{12}}+462 t^{12}+\frac{15}{t^{12}}+1134 t^{10} u^2+\frac{96}{t^{10} u^2}+1941 t^8 u^4+\frac{216}{t^8 u^4}+1725 t^6 u^6+\frac{21 u^6}{t^6}+\frac{231}{t^6 u^6}+861 t^4 u^8+\frac{132}{t^4 u^8}+\frac{252 t^4}{u^4}+\frac{156 u^4}{t^4}+231 t^2 u^{10}+\frac{546 t^2}{u^2}+\frac{441 u^2}{t^2}+28 u^{12}+637\right)+O\left(x^{13/6}\right) \end{align} \fi \normalsize \subsection{$(N_f,N_S )=(4,1) $} In this subsection, we will investigate the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with four vectors and one spinor. In order to describe the Higgs branch of the moduli space, we need to define the following gauge invariant operators % \begin{gather} M_{QQ}:=QQ,~~~M_{SS}:=SS \\ P:=S Q^3 S,~~~R:=S Q^4 S . \end{gather} Notice that only the symmetric product of the spinor is available. The theory has the $SU(4) \times U(1)_Q \times U(1)_S \times U(1)_R$ global symmetries. Table \ref{T41} shows the quantum numbers of the moduli coordinates. 
\begin{table}[H]\caption{Quantum numbers for $(N_f,N_S )=(4,1) $} \begin{center} \scalebox{0.88}{ \begin{tabular}{|c||c||c|c|c| } \hline &$SU(4)_Q$&$U(1)_Q$&$U(1)_S$&$U(1)_R$ \\ \hline $\eta=\Lambda_{N_f,N_S}^b$&1&$8$&$2$&$8(R_f-1)+2(R_S-1) +10=8R_f+2R_S$ \\ \hline $M_{QQ}:=QQ$&$\tiny \yng(2)$&2&0&$2R_f$ \\ $M_{SS}:=SS$&1&0&2&$2R_S$ \\ $P:=SQ^3S$&${\tiny \bar{\yng(1)}}$&3&2& $3R_f+2R_S$ \\ $R:=SQ^4S$&1&4&2& $4R_f+2R_S$\\ \hline $Z:=Y_1Y_2^2Y_3$&1&$-8$&$-2$& $-8 -2 (R_S-1) -8 (R_f-1)=2-8R_f-2R_S$ \\ $Y_{spin}:= Y_1 Z$ $(\phi_1 \ge \phi_3)$&1&$-8$&$-4$& $-10 -8(R_f-1)-4(R_S-1)=2-8R_f -4R_S$ \\ \hline \end{tabular}} \end{center}\label{T41} \end{table} From the analysis of the Coulomb branch corresponding to the semi-classical monopoles, one might expect that a two-dimensional subspace of the classical Coulomb moduli remains flat, parametrized by $Z$ and $Y_{spin}$. In this case, however, one can identify these two Coulomb branch operators via $Z \sim Y_{spin}M_{SS}$. Therefore, it is plausible to expect that the quantum Coulomb branch is one-dimensional. The superpotential consistent with all the symmetries takes the form \begin{align} W &=Y_{spin} [M_{SS}^2 \det \, M_{QQ} +P^2 M_{QQ} - R^2 ] +\eta Y_{spin} M_{SS}, \label{W41} \end{align} where the term proportional to $\eta$ is generated by a KK-monopole and is absent in the 3d limit. Originally the KK-monopole contribution is $\eta Z$, but it is now expressed in terms of $Y_{spin}$. We can easily check the parity anomaly matching between the UV theory and the IR description \eqref{W41}. One might consider that the quantum Coulomb branch is described by $Y$ instead of $Y_{spin}$. However, in this case, we cannot satisfy the parity anomaly matching for $k_{U(1)_R U(1)_R}$. By integrating out the Coulomb branch $Y_{spin}$, we reproduce the 4d result with a single quantum constraint \cite{Cho:1997kr} \begin{align} M_{SS}^2 \det \, M_{QQ} +P^2 M_{QQ} - R^2+\eta M_{SS}=0.
\end{align} Therefore, the identification $Z \sim Y_{spin}M_{SS}$ properly reduces the 3d result to the 4d constraint. Let us check the complex mass deformation for the spinorial matter, which leads to the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with four vector matters. The superpotential becomes \begin{align} W=Y_{spin} [M_{SS}^2 \det \, M_{QQ} +P^2 M_{QQ} - R^2 ] +m M_{SS} \end{align} and the equations of motion for $M_{SS},P$ and $R$ are \begin{gather} m+2Y_{spin} M_{SS} \det M_{QQ} =0, \\ Y_{spin}PM_{QQ} =0, \\ RY_{spin} =0, \end{gather} which lead to $P^i=R=0$, while $M_{SS}$ is integrated out. The low-energy superpotential becomes \begin{align} W=\frac{1}{Y_{spin} \det M_{QQ}}. \end{align} This is consistent with the observation in \cite{Aharony:2011ci} up to a modification of the Coulomb branch operator. This difference is due to the fact that we deal not with an $SO(7)$ gauge group but with a $Spin(7)$ group. Next, we will test the Higgs branch. When the spinor gets a vev $\braket{M_{SS}}=v^2$, the gauge group is broken to $G_2$. The low-energy theory becomes a 3d $\mathcal{N}=2$ $G_2$ gauge theory with four fundamentals coming from the vector matters. Under this breaking, we have the following identification between the $Spin(7)$ and $G_2$ theories \begin{align} P^i=:v^2 B_{G_2}^i,~~R=:vF_{G_2},~~Y_{spin}v^2 =:Z_{G_2}. \end{align} The superpotential reduces to \begin{align} W=Z_{G_2} \left[ \det M_{QQ} -F^2 +BM_{QQ}B \right], \end{align} which is precisely the $G_2$ superpotential observed in \cite{Nii:2017npz}. Let us consider a different direction of the Higgs branch $\braket{M_{QQ,44}} =v^2$, along which the gauge group is broken as $Spin(7) \rightarrow SU(4)$. The low-energy theory becomes a 3d $\mathcal{N}=2$ $SU(4)$ gauge theory with three antisymmetric matters and one (anti-)fundamental flavor. Since the UV theory is s-confining, the low-energy $SU(4)$ theory is also confining. We can directly show that this theory indeed exhibits an s-confinement phase.
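As in the previous cases, one can sketch a symbolic check (our own bookkeeping) that each term of \eqref{W41} is neutral under $U(1)_Q \times U(1)_S$ and carries $U(1)_R$ charge $2$, with the charges read off from Table \ref{T41}:

```python
# Abelian-charge bookkeeping for the (N_f, N_S) = (4, 1) superpotential
# W = Y_spin [M_SS^2 det M_QQ + P^2 M_QQ - R^2] + eta Y_spin M_SS.
import sympy as sp

Rf, Rs = sp.symbols('R_f R_S')

# (U(1)_Q, U(1)_S, U(1)_R) charges from Table T41
charge = {
    'Yspin': (-8, -4, 2 - 8*Rf - 4*Rs),
    'MQQ':   ( 2,  0, 2*Rf),
    'MSS':   ( 0,  2, 2*Rs),
    'P':     ( 3,  2, 3*Rf + 2*Rs),
    'R':     ( 4,  2, 4*Rf + 2*Rs),
    'eta':   ( 8,  2, 8*Rf + 2*Rs),
}

def total(*ops):
    return tuple(sp.expand(sum(charge[o][i] for o in ops)) for i in range(3))

terms = [
    total('Yspin', 'MSS', 'MSS', *4*['MQQ']),  # Y_spin M_SS^2 det M_QQ (4x4 det)
    total('Yspin', 'P', 'P', 'MQQ'),           # Y_spin P^2 M_QQ
    total('Yspin', 'R', 'R'),                  # Y_spin R^2
    total('eta', 'Yspin', 'MSS'),              # eta Y_spin M_SS
]
print(terms)  # each (0, 0, 2)
```

The last entry also makes the identification $Z \sim Y_{spin} M_{SS}$ manifest: the charges of $Y_{spin} M_{SS}$ coincide with those of $Z$ in Table \ref{T41}.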
Table \ref{su43anti} shows the matter contents of the $SU(4)$ theory and their quantum numbers. \begin{table}[H]\caption{$SU(4)$ with 3 ${\tiny \protect\yng(1,1)}$ and $({\tiny \protect\yng(1)}+{\tiny \overline{\protect\yng(1)}})$} \begin{center} \scalebox{1}{ \begin{tabular}{|c||c||c|c|c|c|c| } \hline &$SU(4)$&$SU(3)$&$U(1)$&$U(1)_B$&$U(1)_A$&$U(1)_R$ \\ \hline $A$ &${\tiny \yng(1,1)}$&${\tiny \yng(1)}$&1&0&0&$R_A$ \\ $Q$&${\tiny \yng(1)}$&1&0&1&1&$R_Q$ \\ $\tilde{Q}$&${\tiny \overline{\yng(1)}}$&1&0&1&$-1$&$R_Q$ \\ \hline $T:=A^2=\mathrm{Pf}\,A$&1&${\tiny \yng(2)}$&2&0&0&$2R_A$ \\ $M_0:=Q\tilde{Q}$&1&1&0&2&0&$2R_Q$ \\ $M_2:=QA^2\tilde{Q}$&1&${\tiny \overline{\yng(1)}}$&2&2&0&$2R_A+2R_Q$ \\ $B:=QA^3Q$&1&1&3&2&2&$3R_A+2R_Q$ \\ $\bar{B}:=\tilde{Q}A^3\tilde{Q}$&1&1&3&2&$-2$&$3R_A+2R_Q$ \\ \hline $Y$&1&1&$-6$&$-2$&0&$2-2R_Q-6R_A$ \\ $\hat{Y}$&1&1&$-6$&$-4$&$0$&$2-4R_Q-6R_A$ \\ \hline \end{tabular}} \end{center}\label{su43anti} \end{table} From the classical analysis of the $SU(4)$ Coulomb branch (see \cite{Csaki:2014cwa,Amariti:2015kha}), one might expect that there are two types of Coulomb branch corresponding to \begin{align} Y \leftrightarrow \begin{pmatrix} \sigma & && \\ &0&& \\ &&0& \\ &&& -\sigma\\ \end{pmatrix} ,~~~~~~ \hat{Y} \leftrightarrow \begin{pmatrix} \sigma & && \\ &\sigma&& \\ &&-\sigma& \\ &&& -\sigma\\ \end{pmatrix}. \end{align} However, Table \ref{su43anti} suggests that these two variables are related as $Y \sim \hat{Y}M_0$. Consequently, the quantum Coulomb branch becomes one-dimensional. We obtain the confining superpotential \begin{align} W= \hat{Y}(T^3M_0^2+TM_2^2+B\bar{B}). \label{Wsu431} \end{align} One can also flow to this superpotential from the UV description \eqref{W41}. In order to show this, we have to rename the fields as follows \begin{gather} M_{QQ,ij} =: T ~~(i,j=1,2,3),~~~M_{SS}=:M_0\\ P^{i=1,2,3} =:v M_2,~~~P^{i=4}=:\frac{B+\bar{B}}{2},~~~R=: \frac{v(B-\bar{B})}{2}.
\end{gather} By substituting these expressions, we reproduce the superpotential \eqref{Wsu431}. \subsubsection*{Superconformal Indices of $SU(4)$ with 3 ${\tiny \protect\yng(1,1)}$ and $({\tiny \protect\yng(1)}+{\tiny \overline{\protect\yng(1)}})$} Let us first study the superconformal indices of the 3d $\mathcal{N}=2$ $SU(4)$ gauge theory with three antisymmetric matters and with one (anti-)fundamental flavor. Since the theory is s-confining, the index must be equivalent to the index of the dual description with $T,M_0,M_2,B,\bar{B}$ and $\hat{Y}$ (not including $Y$). The full index (or the index on the magnetic side) is \scriptsize \begin{align} I_{mag}&=1+x^{1/3} \left(\frac{1}{t^4 u^6}+t^2+6 u^2\right)+x^{2/3} \left(\frac{1}{t^8 u^{12}}+\frac{6}{t^4 u^4}+t^4+\frac{1}{t^2 u^6}+9 t^2 u^2+21 u^4\right)+2 t^2 u^3 x^{5/6} \nonumber \\ &\quad +x \left(\frac{1}{t^{12} u^{18}}+\frac{6}{t^8 u^{10}}+\frac{1}{t^6 u^{12}}+t^6+9 t^4 u^2+\frac{21}{t^4 u^2}+39 t^2 u^4+\frac{9}{t^2 u^4}+56 u^6+\frac{1}{u^6}\right)+2 t^2 u^3 x^{7/6} \left(t^2+6 u^2\right)\nonumber \\ &\quad +x^{4/3} \biggl(\frac{1}{t^{16} u^{24}}+\frac{6}{t^{12} u^{16}}+\frac{1}{t^{10} u^{18}}+\frac{21}{t^8 u^8}+t^8+\frac{9}{t^6 u^{10}} \nonumber \\ &\qquad \qquad +9 t^6 u^2+\frac{\frac{1}{u^{12}}+56}{t^4}+45 t^4 u^4+t^2 \left(119 u^6+\frac{1}{u^6}\right)+\frac{36}{t^2 u^2}+126 u^8+\frac{9}{u^4}\biggr)+\cdots, \label{ISUF} \end{align} \normalsize \noindent where $t$ represents the $U(1)$ charge of the (anti-)fundamental matters and $u$ counts the anti-symmetric matters. We set $R_A=R_Q=\frac{1}{6}$ for simplicity. Next, we show the index for each GNO charge; only the important sectors are listed below.
\scriptsize \begin{align} I_{electric}^{(0,0,0)} &=1+x^{1/3} \left(t^2+6 u^2\right)+x^{2/3} \left(t^4+9 t^2 u^2+21 u^4\right)+2 t^2 u^3 x^{5/6}+x \left(t^6+9 t^4 u^2+39 t^2 u^4+56 u^6\right) \nonumber \\ &+x^{7/6} \left(2 t^4 u^3+12 t^2 u^5\right)+x^{4/3} \left(t^8+9 t^6 u^2+45 t^4 u^4+119 t^2 u^6+126 u^8\right)+x^{3/2} \left(2 t^6 u^3+18 t^4 u^5+42 t^2 u^7\right) \nonumber \\ &+x^{5/3} \left(t^{10}+9 t^8 u^2+45 t^6 u^4+157 t^4 u^6+294 t^2 u^8+252 u^{10}\right)+x^{11/6} \left(2 t^8 u^3+18 t^6 u^5+78 t^4 u^7+112 t^2 u^9\right) \nonumber \\ & \qquad +x^2 \left(t^{12}+9 t^{10} u^2+45 t^8 u^4+167 t^6 u^6+432 t^4 u^8+630 t^2 u^{10}+462 u^{12}-11\right)+\cdots, \\ I_{electric}^{\left(\frac{1}{2},0,0 \right)} &=\frac{x^{2/3}}{t^2 u^6}+x \left(\frac{9}{t^2 u^4}+\frac{1}{u^6}\right)+x^{4/3} \left(\frac{t^2}{u^6}+\frac{36}{t^2 u^2}+\frac{9}{u^4}\right)+x^{5/3} \left(\frac{t^4}{u^6}+\frac{9 t^2}{u^4}+\frac{100}{t^2}+\frac{36}{u^2}\right) \nonumber \\ &+x^2 \left(\frac{t^6}{u^6}+\frac{9 t^4}{u^4}+\frac{36 t^2}{u^2}+\frac{225 u^2}{t^2}+100\right)+\cdots, \\ I_{electric}^{\left(\frac{1}{2},\frac{1}{2},-\frac{1}{2} \right)} &=\frac{x^{1/3}}{t^4 u^6}+\frac{6 x^{2/3}}{t^4 u^4}+\frac{21 x}{t^4 u^2}+\frac{56 x^{4/3}}{t^4}+\frac{126 u^2 x^{5/3}}{t^4}+\frac{252 u^4 x^2}{t^4}+\cdots. \end{align} \normalsize \noindent The sector with zero GNO charge contains the Higgs branch operators. The second term $x^{1/3} \left(t^2+6 u^2\right)$ represents the mesons $M_0$ and $T$. The third term $x^{2/3} \left(t^4+9 t^2 u^2+21 u^4\right)$ contains $M_0^2,T^2, M_0T$ and $M_2$. The fourth term corresponds to the baryonic operators $B$ and $\bar{B}$. The sector with a GNO charge $\left(\frac{1}{2},0,0 \right)$ classically represents the Coulomb branch operator $Y$ as $\frac{x^{2/3}}{t^2 u^6}$, which is not a quantum Coulomb branch operator. In this sector, the gauge group is broken to $SU(2) \times U(1) \times U(1)$. 
Therefore, the BPS scalar states are $Y M_0$ and $Y A^2$, where $M_0$ and $A^2$ are constructed from the fields not interacting with the monopole background. Hence $Y A^2$ contains the nine contributions $\frac{9x}{t^2 u^4}$, while $T:=A^2$ has six components. Quantum mechanically, these nine contributions are decomposed into $\hat{Y} M_0 T$ and $\hat{Y}M_2$. This observation is consistent with our prediction $Y \sim \hat{Y} M_0$. The sector with a GNO charge $\left(\frac{1}{2},\frac{1}{2},-\frac{1}{2} \right)$ contains the genuine Coulomb branch operator $\hat{Y}$ as $\frac{x^{1/3}}{t^4 u^6}$, and this is consistent with Table \ref{su43anti}. Since the gauge group is broken to $SU(2) \times SU(2) \times U(1)$ in this sector, we cannot take a product between $\hat{Y}$ and the (anti-)fundamental fields, which are all charged under the $U(1)$. Therefore, the subsequent terms are identified with $\hat{Y}T^n$. By summing up all the other sectors contributing to the lower orders in the index, we reproduce the full index \eqref{ISUF}. \subsubsection*{Superconformal Indices of $Spin(7)$ with $(N_f,N_S)=(4,1)$} We also discuss the superconformal indices for the 3d $\mathcal{N}=2$ $Spin(7)$ theory with $(N_f,N_S)=(4,1)$. Since the theory is s-confining, the full index should be equivalent to the index of the dual description \eqref{W41} without the last term. The R-charges of the elementary chiral superfields are all set to be $R_f=R_S=\frac{1}{8}$.
The full index is given by \scriptsize \begin{align} I_{magnetic}^{(N_f,N_S)=(4,1)} &=1+x^{1/4} \left(10 t^2+u^2\right)+\sqrt{x} \left(\frac{1}{t^8 u^4}+55 t^4+10 t^2 u^2+u^4\right)+4 t^3 u^2 x^{5/8} \nonumber \\ &\qquad +x^{3/4} \left(220 t^6+56 t^4 u^2+10 t^2 u^4+\frac{10 t^2+u^2}{t^8 u^4}+u^6\right)+4 t^3 u^2 x^{7/8} \left(10 t^2+u^2\right) \nonumber \\ &\qquad+x \left(\frac{1}{t^{16} u^8}+715 t^8+\frac{1}{t^8}+230 t^6 u^2+\frac{10}{t^6 u^2}+56 t^4 u^4+\frac{55}{t^4 u^4}+10 t^2 u^6+u^8\right) \nonumber \\ &\qquad +4 t^3 u^2 x^{9/8} \left(\frac{1}{t^8 u^4}+55 t^4+10 t^2 u^2+u^4\right) \nonumber \\ &\qquad+x^{5/4} \left(\frac{1}{t^{16} u^6}+\frac{10}{t^{14} u^8}+2002 t^{10}+770 t^8 u^2+\frac{u^2}{t^8}+240 t^6 u^4+\frac{10}{t^6}+56 t^4
u^6+\frac{55}{t^4 u^2}+10 t^2 u^8+\frac{220}{t^2 u^4}+u^{10}\right) +\cdots, \label{I41m} \end{align} \normalsize \noindent where $t$ and $u$ are the fugacities for the $U(1)_Q$ and $U(1)_S$ symmetries. The index is decomposed into the sectors with different GNO charges on the electric side. Since the $Spin(7)$ gauge group is considered, the following sectors are necessary up to $O(x^{5/4})$. \scriptsize \begin{align} I_{electric}^{(0,0,0)} &= 1+x^{1/4} \left(10 t^2+u^2\right)+\sqrt{x} \left(55 t^4+10 t^2 u^2+u^4\right)+4 t^3 u^2 x^{5/8}+x^{3/4} \left(220 t^6+56 t^4 u^2+10 t^2 u^4+u^6\right) \nonumber \\ &\qquad +x^{7/8} \left(40 t^5 u^2+4 t^3 u^4\right)+x \left(715 t^8+230 t^6 u^2+56 t^4 u^4+10 t^2 u^6+u^8\right) +4 x^{9/8} \left(55 t^7 u^2+10 t^5 u^4+t^3 u^6\right) \nonumber \\ &\qquad+ x^{5/4} \left(2002 t^{10}+770 t^8 u^2+240 t^6 u^4+56 t^4 u^6+10 t^2 u^8+u^{10}\right) +\cdots \\ I_{electric}^{ \left( \frac{1}{2},\frac{1}{2},0 \right)} &=\frac{x^{3/4}}{t^8 u^2}+x \left(\frac{1}{t^8}+\frac{10}{t^6 u^2}\right)+\frac{4 x^{9/8}}{t^5 u^2}+x^{5/4} \left(\frac{u^2}{t^8} +\frac{10}{t^6}+\frac{55}{t^4 u^2}\right) +\cdots, \\ I_{electric}^{(1,0,0)} &=\frac{\sqrt{x}}{t^8 u^4}+\frac{10 x^{3/4}}{t^6 u^4}+\frac{55 x}{t^4 u^4}+\frac{220 x^{5/4}}{t^2 u^4} +\cdots, \\ I_{electric}^{(2,0,0)} &=\frac{x}{t^{16} u^8}+\frac{10 x^{5/4}}{t^{14} u^8}+\cdots,\qquad I_{electric}^{\left( \frac{3}{2} ,\frac{1}{2},0 \right)} =\frac{x^{5/4}}{t^{16} u^6} +\cdots. \end{align} \normalsize \noindent The summation of these indices precisely matches the index \eqref{I41m} on the magnetic side. The index with a GNO charge $(1,0,0)$ contains the monopole operator $Y_{spin}$, whose R-charge is $\frac{1}{2}$. This sector breaks the gauge group to $Spin(5) \times U(1)$. The spinor matters are all charged under this $U(1)$. Therefore, the BPS scalar states do not contain the spinorial fields on the $Y_{spin}$ background.
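The consistency of these identifications with the leading powers of $x$ can be checked by a few lines of arithmetic, using only the R-charge assignments $R_f=R_S=\frac{1}{8}$ and the stated R-charge of $Y_{spin}$ (this is our own cross-check, not part of the original computation):

```python
from fractions import Fraction as F

R_f = R_S = F(1, 8)   # elementary R-charges used in the index above
R_Yspin = F(1, 2)     # R-charge of the monopole, as stated in the text

# Composites enter the index at x^R to leading order
R_MQQ = 2*R_f         # meson QQ -> the 10 t^2 piece of x^{1/4}(10t^2 + u^2)
R_MSS = 2*R_S         # meson SS -> the u^2 piece
assert R_MQQ == R_MSS == F(1, 4)

# GNO (1,0,0): the leading term x^{1/2}/(t^8 u^4) is Y_spin itself
assert R_Yspin == F(1, 2)
# its dressing by M_QQ lands at x^{3/4}, cf. 10 x^{3/4}/(t^6 u^4)
assert R_Yspin + R_MQQ == F(3, 4)
# GNO (1/2,1/2,0): Z ~ Y_spin M_SS also sits at x^{3/4}, cf. x^{3/4}/(t^8 u^2)
assert R_Yspin + R_MSS == F(3, 4)
```

In particular, $R[Y_{spin} M_{QQ}] = R[Y_{spin} M_{SS}] = \frac{3}{4}$, matching the powers of $x$ in the two sectors just listed.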
The second term $\frac{10 x^{3/4}}{t^6 u^4}$ only includes $Y_{spin} M_{QQ}$. However, one can generally consider the products of the chiral superfields $Y_{spin}$ and $M_{SS}$ in the chiral ring. These are contained in the sector with a GNO charge $\left( \frac{1}{2},\frac{1}{2},0 \right)$. This sector corresponds to the operator $Z \sim Y_{spin} M_{SS}$. The second term $x \left(\frac{1}{t^8}+\frac{10}{t^6 u^2}\right)$ is regarded as $Z(M_{SS}+M_{QQ}) \sim Y_{spin}(M_{SS}^2 +M_{SS}M_{QQ})$. The third term $\frac{4 x^{9/8}}{t^5 u^2}$ corresponds to $ZQ^3 \sim Y_{spin}SSQ^3 \sim Y_{spin} P$. This is again consistent with our prediction $Z \sim Y_{spin} M_{SS}$. \section{Summary and Discussion} In this paper, we investigated the 3d $\mathcal{N}=2$ supersymmetric $Spin(7)$ gauge theories with spinorial and vectorial matters. The 3d $\mathcal{N}=2$ $Spin(7)$ gauge theory with only the spinor matters has the one-dimensional (quantum) Coulomb branch parametrized by $Z$. For $N_S \le 3$, we found no stable SUSY vacuum. For $N_S=4$, the Higgs branch and the Coulomb branch are merged. For $N_S=5$, the theory is s-confining. For the theory with both spinors and vectors, the Coulomb branch becomes two-dimensional, at least semi-classically, and needs two coordinates $Z$ and $Y$ (or $Y_{spin}$). However, sometimes we can relate these two coordinates quantum-mechanically by taking the product of the Higgs and Coulomb branch coordinates. If this is possible, the Coulomb branch becomes one-dimensional. In particular, we focused on the s-confinement phases which appear for $(N_f,N_S)=(0,5),(1,4),(2,3),(3,2)$ and $(4,1)$. We found and tested various s-confinement phases for the $Spin(7)$ theories. As a byproduct, we could obtain the s-confinement phases for the 3d $\mathcal{N}=2$ $SU(4)$ gauge theories with $n$ anti-symmetric matters and with $4-n$ (anti-)fundamental flavors. For $n=1,3$, the s-confinement phases were not known in the literature.
We also tested the validity of our analysis by computing the superconformal indices. The indices are perfectly consistent with our prediction for the Coulomb branch coordinates and also consistent with the s-confinement phases which we found. In this paper, we assumed that the two-dimensional Coulomb branch is semi-classically described by $Z$ and $Y$ (or $Z$ and $Y_{spin}$). Since the $Z$ coordinate is globally defined without depending on the sign of $\phi_1-\phi_3$, it is plausible to expect that $Z$ is necessary in any case. However, we could not find an a priori way of choosing $Y$ or $Y_{spin}$ for the description of the remaining Coulomb branch. Based on various consistency checks (including the SCI calculation, parity anomaly matching, and deformations), we decided which one is more appropriate. For instance, $Z$ and $Y$ are presumably the natural coordinates for $(N_f,N_S)=(3,2)$, while $Z$ and $Y_{spin}$ are chosen for $(N_f,N_S)=(4,1)$ and $Z$ was equivalent to $Y_{spin} M_{SS}$. However, these decisions and reasoning were not conclusive. It would be desirable to gain a clearer understanding of the quantum Coulomb branch. It is interesting to study 3d $\mathcal{N}=2$ $Spin(N)$ $(N>7)$ theories with vector matters and with spinor matters. In the case of $Spin(2N)$ groups, two types of spinor representations are available. Hence the phase diagrams would be much richer than in the $Spin(2N+1)$ cases. We will come back to this generalization elsewhere. It is worth searching for Seiberg dual descriptions for the 3d $\mathcal{N}=2$ $Spin(7)$ gauge theories with spinorial matters. In 4d, the dual theory has an $SU(N_S-4)$ gauge group with $N_S$ anti-fundamental matters and with a matter in a symmetric representation. When we naively put a dual theory on a circle, the resulting Coulomb branch would be more than one-dimensional because a symmetric tensor divides the (classical) Coulomb branch.
Furthermore, the Coulomb branch operators are dressed by Higgs branch operators \cite{Amariti:2015kha} because the matter contents are ``chiral'' in a 4d sense. Deriving the 3d duality from the 4d duality becomes very complicated in this case. We do not yet have a simple 3d dual for the $Spin(7)$ theories, but we hope to report progress in this direction in the near future. \section*{Acknowledgments} This work is supported by the Swiss National Science Foundation (SNF) under grant number PP00P2\_157571/1. \bibliographystyle{ieeetr}
\section{Introduction} Ever since its discovery, the adiabatic technique of electromagnetically induced transparency (EIT) \cite{harris1997electromagnetically,*boller1991observation} has been the cornerstone of light storage and manipulation. The interaction of a $\Lambda$ atomic system with a control field opens a transparency window for a resonant signal field to propagate through without absorption. As the group velocity is directly related to the properties of the optical susceptibility, this also gives a way to control the speed of propagation of the signal field all the way to where it can be stopped, and thus mapped into a spin wave. This procedure (and the retrieval of the stored signal) is best described in terms of dark-state polaritons (light-matter excitations) as was done in \cite{fleischhauer2000dark}. This opened the door to numerous techniques for optical information storage, where the main contenders are a far-off-resonant Raman scheme \cite{nunn2007mapping,*reim2011single} and a photon-echo based procedure \cite{moiseev2001complete}. In the first, the fields are highly detuned, thus allowing the adiabatic elimination of the upper atomic level. The field is then mapped into the ground states via stimulated Raman transitions. The second is an extension of the well-known phenomenon of photon echo in inhomogeneously broadened two-level atoms \cite{kurnit1964observation,*mossberg1982time}. A resonant signal photon wave-packet is absorbed and subsequently mapped into the stable ground states by means of a short $\pi$-control pulse. The stored field can be recovered by a second counter-propagating $\pi$-control pulse. Furthermore, in a series of papers \cite{gorshkov2007universal,*gorshkov2007photon,*gorshkov2007photon2,*gorshkov2007photon3,*gorshkov2008photon}, Gorshkov \emph{et al.} brought all of these techniques into a ``universal approach to optimal photon storage'' and devised a procedure to maximize the efficiency given any signal field.
The enormous success of the $\Lambda$ configuration motivated the study of more complicated atomic systems, such as the double $\Lambda$ system, where enhancement of nonlinear effects can be achieved as well as the storage of two signal pulses \cite{lukin2000resonant,*raczynski2004electromagnetically}. An $N$-type system has also been proposed for the control of two-photon absorption via quantum interference, which can lead to an improvement in EIT \cite{harris1998photon,*yan2001nonlinear,*jiang2006optical}, as well as a giant enhancement of the Kerr nonlinearity \cite{niu2005giant}. One of the most prolific extensions has been the four-level system in a tripodal configuration (see Fig.~\ref{fig:tripod}), where many advances have been made for light control and storage (e.g., double EIT \cite{paspalakis2002transparency,*wang2006large,*li2007two}). The formalism of dark-state polaritons \cite{fleischhauer2000dark} has been extended for this four-level system in different kinds of scenarios. Depending on the initial preparation of the medium, there can be either two signal pulses and one control, which leads to the possibility of storage of a photonic quantum bit \cite{karpa2008resonance}, or there can be just one signal pulse and two control pulses that allow two-channel-light storage \cite{raczynski2006polariton}. The existence of temporal and spatial vector solitons in this four-level system has also been shown \cite{hang2009weak,*liu2009ultraslow,*qi2011spatial}. Furthermore, it has been demonstrated that under the influence of a classical (intense) control field a propagating probe field acquires a phase that affects its state of polarization. There are proposals for using this effect to enhance the sensitivity of Faraday magnetometers \cite{petrosyan2004magneto} or to serve as a polarization phase gate \cite{rebic2004polarization}.
In addition, some of these new phenomena have been demonstrated experimentally, such as dark resonance switching \cite{ham2000coherence}, the existence of two transparency windows, enhanced cross-phase modulation \cite{li2008enhanced}, the propagation of matched slow pulses \cite{macrae2008matched}, and two-field storage \cite{karpa2008resonance,wang2009slowing} (we refer the interested reader to these manuscripts for examples of experimental realizations of the tripodal scheme). The original work on pulse storage was tied to a requirement for adiabaticity (effectively near-constant intensity) of the control pulse, which limits the speed of the process. This point was carefully studied by Matsko \emph{et al.}~\cite{matsko2001nonadiabatic}. They showed that storage and retrieval are still possible by ``instantaneously'' switching the control field off and on. This was further studied by Shakhmuratov \emph{et al.}~\cite{shakhmuratov2007instantaneous}, where they added the effect of an rf field in an $N$ configuration. Even though most proposals work with cw control fields that are turned off and on, this might not be the best strategy. When one considers the optimization problem, the resulting optimal field acquires a temporal structure that clearly deviates from the standard cw field \cite{gorshkov2007universal,*gorshkov2007photon,*gorshkov2007photon2,*gorshkov2007photon3,*gorshkov2008photon,nunn2007mapping}. Another usual assumption is that the signal field is of quantum nature (low intensity). In \cite{dey2003storage}, the authors go beyond these assumptions by means of a series of numerical experiments. However, they restrict themselves to cw control fields and use the adiabatic theory of Grobe \emph{et al.}~\cite{grobe1994formation} to interpret their results, as their signal pulses are long enough (about 100 times the excited-state lifetime) that spontaneous emission needs to be included.
In the present work, we depart from the usual considerations for pulse storage, as we consider the joint evolution of resonant, intense, broadband pulses (these are much shorter and about two orders of magnitude stronger than those considered in \cite{dey2003storage}). This is the realm of self-induced transparency (SIT), which was introduced by McCall and Hahn in their seminal papers \cite{mccall1967self,*mccall1969self}. They showed the crucial role that the total pulse area, defined as \begin{equation} \theta(x)=\int^{\infty}_{-\infty}\Omega(x,t)dt, \label{eq:area} \end{equation} plays in this type of coherent interaction. Recent research has shown how the interaction of broadband pulses with matter opens the door to high-speed switching. In this framework, it has been shown that storage, manipulation and retrieval of a signal pulse in a $\Lambda$ system are possible \cite{groves2013jaynes} even in non-idealized conditions \cite{gutierrez2015manipulation} and how the methodology presented there can be extended to accommodate the storage of multiple pulses and added control of the information stored \cite{gutierrez2015multi}. A generalized two-pulse area was shown to play a role \cite{clader2007two}: $\Theta_{12}(x)=\sqrt{|\theta_1(x)|^2+|\theta_2(x)|^2}$. Now, we extend this exploration to the tripodal atom interacting with three fields in resonance. We show how a vector soliton can be stored in the coherences of the ground states and then retrieved. We also find that the competing process of stealing population from the common excited state leads to a constraint on the area of each field, as determined by a three-pulse area defined in Eq.~\eqref{eq:3area}.
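For orientation, the area integral of Eq.~\eqref{eq:area} applied to the canonical SIT pulse $\Omega(t)=(2/\tau)\,\mathrm{sech}(t/\tau)$ gives exactly $2\pi$ for any duration $\tau$. A short numerical check of this standard fact (illustrative, with an arbitrary choice of $\tau$):

```python
import numpy as np

# Area of the canonical 2*pi sech pulse, Omega(t) = (2/tau) sech(t/tau),
# computed with the definition theta = \int Omega dt of Eq. (area).
tau = 1.0
t = np.linspace(-40*tau, 40*tau, 200001)   # tails are negligible beyond 40*tau
dt = t[1] - t[0]
omega = (2.0/tau) / np.cosh(t/tau)
area = float(np.sum(omega) * dt)           # Riemann sum of the area integral

assert abs(area - 2*np.pi) < 1e-6          # exact answer is 2*pi
```

Because the integrand decays exponentially, the simple Riemann sum converges to the exact value $2\pi$ to high precision.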
This is in alignment with previous results for the $\Lambda$ and double-$\Lambda$ systems \cite{clader2007two,groves2009multipulse}. \section{Theoretical model} For the tripodal atom (see Fig.~\ref{fig:tripod}), each ground state is only connected to the excited state $\ket 0$ by the dipole moment operator \begin{figure} \centering \includegraphics[scale=1]{trip} \caption{\label{fig:tripod}(Color online) Four-level atom in a tripodal configuration, interacting with three fields in resonance: (a) with the population initially distributed equally but incoherently among the ground states $\ket 1$ and $\ket 2$, which leads to the presence of two signal pulses and one control, and (b) with all the population in state $\ket 1$, which leads to one signal pulse and two control pulses.} \end{figure} \noindent and the fields are given by \begin{equation} \vec{E}(x,t)=\sum_{j=1}^3\vec{\mathcal E}_{j}(x,t)e^{i(k_{j0}x-\omega_{j0} t)} +c.c. \end{equation} Here, $\omega_{10}$, $\omega_{20}$ and $\omega_{30}$ are the field frequencies, $k_{10}$, $k_{20}$ and $k_{30}$ are the vacuum wave numbers, and $\vec{\mathcal E}_{1}(x, t )$, $\vec{\mathcal E}_{2}(x, t )$ and $\vec{\mathcal E}_{3}(x,t)$ are the slowly-varying field envelopes. For simplicity, the fields are taken to be at resonance and we will neglect the effects of Doppler broadening by considering a gas of cold atoms (given some minor substitutions, the results presented here will remain valid even in the presence of Doppler broadening \cite{gutierrez2016storage}). We will further assume that the fields are pulses short enough to neglect spontaneous emission but long enough so that the variation of the envelopes is much slower than the oscillations given by the optical frequency. This justifies the use of the slowly-varying-envelope approximation (SVEA).
Considering the dipole and rotating-wave approximations, the Hamiltonian takes the form \begin{equation} H=-\frac{\hbar}{2} \left( \Omega_1 \ket 0 \bra 1 + \Omega_2 \ket 0 \bra 2 + \Omega_3 \ket 0 \bra 3\right) + c.c., \label{hrwa} \end{equation} where we defined the Rabi frequencies $\Omega_{j}(x,t)=2\vec{d}_{0j}\cdot \vec{\mathcal E}_{j}(x, t )/\hbar$, and the non-zero off-diagonal elements of $H$ can then be written as $H_{0j}=-\hbar \Omega_j(x,t)/2$ for $j=1,\,2,\,3$. The dynamics of the field-matter system are described in terms of the von Neumann equation for the density matrix, \begin{equation} i\hbar \pd{\rho}{t}=[ H, \rho], \end{equation} and Maxwell's wave equations in the slowly-varying envelope approximation \begin{subequations} \label{meqs} \begin{align} \left(\pd{ }{x}+\frac{1}{c}\pd{}{t}\right)\Omega_{1}&=i\mu_{10} \rho_{01},\\ \left(\pd{ }{x}+\frac{1}{c}\pd{}{t}\right)\Omega_{2}&=i\mu_{20} \rho_{02},\\ \left(\pd{ }{x}+\frac{1}{c}\pd{}{t}\right)\Omega_{3}&=i\mu_{30} \rho_{03}, \end{align} \end{subequations} where we defined the atom-field coupling parameters as $\mu_{j0}=N\omega_{j0}|d_{j0}|^2/ \hbar \epsilon_0c$ with $j=1,\,2,\,3$. These give a set of nonlinear partial differential equations that need to be solved simultaneously. Therefore, we cannot move forward, at least analytically, without making further assumptions. The key lies in considering equal atom-field coupling parameters for all three transitions, $\mu_{10}=\mu_{20}=\mu_{30}=\mu$ (this assumption might not be realistic, but given the stability of this type of solution in non-idealized conditions shown in \cite{gutierrez2015manipulation}, we expect the results to remain valid for the most part).
Doing so, and introducing the constant matrix $W=i\ket 0 \bra 0$, we can write the evolution equations as \begin{equation} \label{eq:mb} i\hbar \pd{\rho}{T}=[ H, \rho], \quad \text{and} \quad \pd{H}{Z}=- \frac{\hbar \mu}{2}[W, \rho], \end{equation} in terms of the traveling-wave coordinates $T=t-x/c$ and $Z=x$. By computing the commutator, one can verify that the matrix equation for the field [Eqs.~\eqref{eq:mb}] indeed reduces to Eqs.~\eqref{meqs} plus some (irrelevant) trivial equations. From the form in which the evolution equations were written in Eqs.~(\ref{eq:mb}), it is easy to show that the system is integrable (Appendix \ref{app:meth}) and therefore solvable by standard methods such as inverse scattering \cite{gardner1967method,*ablowitz1973nonlinear,*lamb1980elements, *chakravarty2014inverse}, the B\"acklund transformation \cite{lamb1971analytical,*miura1976backlund,*park1998field} and the Darboux transformation \cite{gu2006darboux,cieslinski2009algebraic}, to name a few. This study contrasts with many previous results in that it considers the simultaneous evolution of the three fields instead of assuming one of them to be a strong constant field that induces some extra nonlinearities in the evolution of the other two \cite{petrosyan2004magneto,rebic2004polarization,hang2009weak,*liu2009ultraslow,*qi2011spatial}. In addition, we consider the full nonlinear interaction, in contrast to the first-order approximation in the fields done in \cite{paspalakis2002transparency}. \section{Two-pulse storage} We now proceed to solve the Maxwell-Bloch equations for a tripodal atomic system [Eqs.~(\ref{eq:mb})] using the single-soliton Darboux transformation and the nonlinear superposition rule. This method allows us to start from a trivial solution and obtain complicated pulse dynamics by some integration and algebraic manipulation.
A review of this method can be found in Refs.~\cite{cieslinski2009algebraic,gutierrez2015multi} and is similar to the one presented by Clader and Eberly in \cite{clader2007two}. For the interested reader, an outline of the principal steps is presented in Appendix \ref{app:meth}. We will start by taking the situation depicted in Fig.~\ref{fig:tripod}(a) as our trivial solution, that is, an incoherent preparation of the medium given by $\rho=1/2(\ket 1 \bra 1 + \ket 2 \bra 2)$ and no fields. Applying the Darboux transformation to this seed solution, we obtain a first-order solution for which the analytic expressions can be simplified in the limits of infinitely long negative and positive times (the expression of the involution matrix, used to compute the solution in these limits, is presented in Table I of Appendix \ref{app:lim}). This provides us with the state of the system before and after the interaction. An example of the pulse dynamics is represented in Fig.~\ref{fig:pulses1}. \begin{figure} \centering \includegraphics[scale=1.]{pulses1} \caption{\label{fig:pulses1}(Color online) Pulse evolution dictated by the second-order solution obtained from the medium preparation shown in Fig.~\ref{fig:tripod}(a). The encoding of the vector soliton is separated by the ellipses from its retrieval and displacement.} \end{figure} Initially ($T/\tau_a\ll -1$), we have two SIT pulses of duration $\tau_a$ propagating through the medium at a reduced group velocity for the two signal fields.
Their shapes are given by \begin{subequations}\label{eq:om1} \begin{equation} \Omega_1=\frac{2}{\tau_a}\cos \left( \frac{\nu}{2}\right)\text{sech} \left(\frac{T}{\tau_a}-\frac{\kappa_a}{4} Z+\eta \right), \end{equation} \begin{equation} \Omega_2=\frac{2}{\tau_a}e^{i\phi}\sin \left(\frac{\nu}{2}\right)\text{sech} \left(\frac{T}{\tau_a}-\frac{\kappa_a}{4} Z+\eta \right), \end{equation} \end{subequations} where we introduced the absorption coefficient $\kappa_a=\mu \tau_a/2$ and the constants of integration $\eta$, $\phi$ and $\nu$. The angles $\phi$ and $\nu$ define the area of each signal pulse, $\theta_1=2\pi\cos \left(\nu/2\right)$ and $\theta_2=2\pi e^{i\phi}\sin \left(\nu/2\right)$ (the area of the first pulse was taken to be real in order to fix the global phase). Note that the signal pulses are ``normalized'' by the total two-pulse area (as defined by Clader and Eberly in \cite{clader2007two}), that is, $\sqrt{|\theta_1|^2+|\theta_2|^2}=2\pi$. In this limit, the control pulse tends towards zero. If we assume that the two fields address different transitions due to their polarization, as in \cite{hang2009weak,*liu2009ultraslow,*qi2011spatial}, then the stored field can be written as \begin{align} \label{eq:vect} \vec{E}_s(x,t)=&\frac{\hbar}{d \tau_a}\left[\cos \left(\frac{\nu}{2}\right) \vec{p_1}+e^{i\phi}\sin \left(\frac{\nu}{2}\right)\vec{p_2}\right] \nonumber \\ &\times\text{sech} \left(\frac{T}{\tau_a}-\frac{\kappa_a}{4} Z+\eta \right)e^{i(k_{s}x-\omega_{s}t)} +c.c. \end{align} to show that it can be seen as a vector soliton ($\vec{p_1}$ and $\vec{p_2}$ are two orthogonal polarization vectors). Given the free parameters $\phi$ and $\nu$, the polarization can occupy any point on the surface of the Poincar\'e sphere. As the signal pulses propagate, the control pulse starts to drive some of the population to the ground state $\ket 3$, thus amplifying its asymptotically small amplitude while depleting the signal pulses.
This transfer slowly takes over until we are left with a decoupled control pulse propagating away at the light's phase velocity. The full solution for the pulses is provided in Eqs.~\eqref{app:comvec}. During the interaction, the information of the vector soliton is imprinted into the ground state elements of the density matrix in the form of a spin wave. This ``imprint'' is depicted in Fig.~\ref{fig:den1}. Even if the coherences are non-zero everywhere (because of the infinitely long tails of the sech function), it is clear that they are well localized around their center, which we identify as their location. Computing the total pulse area for each pulse [see Eqs.~\eqref{app:areavec}], we get \begin{equation} \label{eq:imp1} \frac{\sqrt{|\theta_1(x)|^2+|\theta_2(x)|^2}}{|\theta_3(x)|}=e^{- \frac{\kappa_a}{2} (x-x_1)}, \end{equation} where $x_1$ gives the location of the imprint. Therefore, the imprint is located at the position where the two-signal-pulse area is equal to the control pulse area. \begin{figure} \centering \includegraphics[width=1.\linewidth]{den1} \caption{\label{fig:den1}(Color online) Imprint as it has been encoded by the pulse dynamics shown in Fig.~\ref{fig:pulses1} in the ground state coherences (a) before and (b) after the displacement.} \end{figure} This two-pulse storage, when taken to low intensities (quantum states of light), can be seen as a qubit storage procedure, as the phase between the two pulses is also encoded within the atomic medium. This was actually done in \cite{karpa2008resonance}, but was based on the usual EIT procedure of slowing down the quantum signal by means of a classical control pulse. Taking a closer look at the expressions for the area of each pulse in the first-order solution, we see that a role for a three-pulse area is implied: \begin{equation} \label{eq:3area} \Theta_{123}(x)=\sqrt{|\theta_1(x)|^2+|\theta_2(x)|^2+|\theta_3(x)|^2}=2\pi.
\end{equation} This result is a clear statement of the relationship between this solution and the original SIT solution. The pulse area is a key quantity, controlling not only the reshaping of pulses but also the storage and manipulation of the information stored in the medium. Its importance extends well beyond the two-level atom, remaining a constant of the pulse dynamics in multi-level systems \cite{clader2007two,groves2009multipulse}. For non-idealized input pulse shapes, these multi-level systems follow a behavior similar to the predictions of the area theorem for a two-level system \cite{clader2007two,clader2008two,groves2009multipulse}, even in the absence of Doppler broadening \cite{gutierrez2015manipulation}. Other results referring to pulse area theorems have been worked out for the $\Lambda$ configuration. The first was done by Tan-no \emph{et al.}~\cite{tan1972two} but is limited to highly-detuned fields in two-photon resonance. Clader and Eberly presented a clear comparison of this stimulated Raman scattering (SRS) with the exact Maxwell-Bloch equations \cite{clader2007two}. Another famous result is the dark area theorem \cite{eberly2002wave}, where the dark area is not defined in the usual manner (it involves time derivatives of the Rabi frequencies). Nevertheless, it provides a simple spatial evolution of a quantity involving both Rabi frequencies, which helps in understanding the interaction of the two fields. This result is very much in line with the original area theorem \cite{mccall1967self,*mccall1969self}. A more recent result was presented by Shchedrin \emph{et al.}~in \cite{shchedrin2015analytic}. There, they consider pulse interaction beyond the rotating-wave approximation, extending the applicability of their results. However, their pulse area theorem lacks any mention of spatial evolution and is in fact formulated more like an energy conservation equation that still involves the Rabi frequencies.
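Combining Eq.~\eqref{eq:imp1} with the three-pulse area constraint of Eq.~\eqref{eq:3area} fixes the individual areas as functions of position: writing $r(x)=e^{-\kappa_a(x-x_1)/2}$ for the ratio of the two-signal area to the control area, one finds $\sqrt{|\theta_1|^2+|\theta_2|^2}=2\pi r/\sqrt{1+r^2}$ and $|\theta_3|=2\pi/\sqrt{1+r^2}$. The following sketch (our own consistency check, with made-up values of $\kappa_a$ and $x_1$) verifies this algebra numerically:

```python
import numpy as np

# Areas vs. position implied by Eqs. (imp1) and (3area); kappa_a and x1
# are illustrative placeholder values, not taken from the paper.
kappa_a, x1 = 1.0, 0.0
x = np.linspace(-10.0, 10.0, 101)
r = np.exp(-kappa_a*(x - x1)/2)                # two-signal area / control area
theta_s = 2*np.pi * r / np.sqrt(1 + r**2)      # sqrt(|theta_1|^2 + |theta_2|^2)
theta_3 = 2*np.pi / np.sqrt(1 + r**2)          # control-pulse area

assert np.allclose(theta_s**2 + theta_3**2, (2*np.pi)**2)  # Eq. (3area)
assert np.allclose(theta_s/theta_3, r)                      # Eq. (imp1)
# at the imprint location x = x1 the two areas coincide
i = np.argmin(np.abs(x - x1))
assert np.isclose(theta_s[i], theta_3[i])
```

Deep inside the medium ($x \gg x_1$) the signal area decays away while the control area saturates at $2\pi$, in line with the pulse dynamics described above.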
To this we could add the conservation laws that can be deduced from the integrability of the Maxwell-Bloch equations (see for example \cite{chakravarty2015soliton}). As we already mentioned, our result is similar to the ones presented in \cite{clader2007two,groves2009multipulse}, and thus we expect homologous results for the case of partially mixed states \cite{clader2008two}. Another valid solution, which we can obtain from the same seed solution (by an appropriate choice of integration constants, see Appendix \ref{app:lim}), is that of a sech-shaped-$2\pi$-control pulse propagating through the medium at the speed of light in vacuum. Superimposing this solution with the one previously described, we obtain a second-order solution that (by an appropriate choice of parameters) describes a well-defined pulse sequence. Here, we first have the storage of the signal pulses and then the collision of another control pulse, of different duration $\tau_b$, with the imprint. Upon interaction with the imprint, the control pulse retrieves the stored signal pulses along with the pulse area and relative phase information. The retrieved pulses then propagate farther into the medium and are again stored, but in a displaced location (see Figs.~\ref{fig:pulses1} and \ref{fig:den1}). The displacement of the imprint is controlled by the duration of the pulses via \begin{equation} \label{eq:dis1} \delta =\kappa_a(x_2-x_1)=2\ln \left| \frac{\tau_a+\tau_b}{\tau_a-\tau_b} \right|, \end{equation} where $x_2$ is the new location of the imprint. In reality, we are not going to have an infinite medium, but this can be used to our advantage. Given a finite medium, Eq.~\eqref{eq:imp1} tells us how to map this solution to initial conditions, as the ratio of the areas of the entering pulses controls the location of the imprint.
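The two relations above can be combined into a small numerical sketch. All pulse areas, durations, and the value of $\kappa_a$ below are illustrative assumptions, not values taken from the text:

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the text)
kappa_a = 1.0            # absorption coefficient kappa_a
tau_a, tau_b = 1.0, 0.6  # durations of the first and second control pulses

# Eq. (imp1): sqrt(|th1|^2+|th2|^2)/|th3| = exp(-kappa_a/2 (x - x1)).
# Given hypothetical areas of the entering pulses at x = 0,
# solve for the initial imprint location x1:
theta1, theta2, theta3 = 2.0, 2.0, 1.0
ratio = np.hypot(theta1, theta2) / abs(theta3)
x1 = 2.0 / kappa_a * np.log(ratio)

# Eq. (dis1): displacement after the collision with the second control pulse
delta = 2.0 * np.log(abs((tau_a + tau_b) / (tau_a - tau_b)))
x2 = x1 + delta / kappa_a   # new location of the imprint

print(x1, x2)
```

A larger ratio of entering signal area to control area pushes the imprint deeper into the medium, while choosing $\tau_b$ so that $\delta/\kappa_a$ exceeds the remaining medium length expels the imprint and thus retrieves the pulses.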
Now, if the duration $\tau_b$ is tailored so that the displacement given by Eq.~\eqref{eq:dis1} is larger than the length of the medium, the control pulse will move the imprint outside. This would frustrate the re-encoding of the signal pulses and thus effectively retrieve the information stored in the atomic medium. Therefore, the finiteness of the medium provides us with the means to retrieve the stored pulses. Additionally, there could be some residual coherence between the ground states $\ket 1$ and $\ket 2$ from the initial preparation. However, using the same formalism, we can show that the storage-retrieval procedure is still viable (an extended discussion of the effects of partial coherence in a $\Lambda$ system is presented in \cite{clader2008two}, and one would expect similar results for this case). \section{Two-channel memory} \label{sec:tcm} The three-pulse area is, of course, not limited to the specific solution of the storage of a vector soliton. We can consider another possibility for the seed solution, such as the one depicted in Fig.~\ref{fig:tripod}(b). The medium is prepared in the ground state $\ket 1$ and again with no fields. In this case, the first-order solution describes a signal pulse that is stored via the interaction of two control pulses with the atomic system as an intermediary. The pulse dynamics start with an SIT signal pulse of duration $\tau_a$ propagating at a reduced velocity. As this pulse propagates, the asymptotically small control pulses start to drive some of the population to the ground states $ \ket 2$ and $\ket 3$, thus amplifying them while depleting the signal pulse. The full solution for the pulses is provided in Eqs.~\eqref{app:comch}. During this interchange, the signal pulse is encoded in the coherences of the ground states. We can think of this system as having two channels, represented by the coherences $\rho_{12}$ (channel 1-2) and $\rho_{13}$ (channel 1-3), connected by the coherence $\rho_{23}$.
Computing the total pulse area for each pulse [see Eqs.~\eqref{app:areach}] we get the following relation, \begin{equation} \label{eq:cimp1} \frac{|\theta_1(x)|}{\sqrt{|\theta_2(x)|^2+|\theta_3(x)|^2}}=e^{-\kappa_a(x- x_1)}, \end{equation} where $x_1$ denotes the location of the imprint. In a similar fashion to the previous solution, the initial location of the imprint is determined by the ratio between the signal pulse area and the two-control-pulse area. The amount of information stored in channel 1-2 is determined by the ratio of the squared area of the control pulse $\Omega_2$ to the squared two-control-pulse area, $r_{1-2}=|\theta_2(x)|^2/(|\theta_2(x)|^2+|\theta_3(x)|^2)$, and similarly for channel 1-3, $r_{1-3}=|\theta_3(x)|^2/(|\theta_2(x)|^2+|\theta_3(x)|^2) $. The particular case of storing in just one channel can be seen in Fig.~\ref{fig:pulses2}. The possibility of two-channel storage had already been mentioned in \cite{raczynski2006polariton}, but that study was based on an EIT-type interaction. \begin{figure} \centering \includegraphics[scale=1.]{pulses2} \caption{\label{fig:pulses2}(Color online) Pulse evolution dictated by the second order solution obtained from the medium preparation shown in Fig.~\ref{fig:tripod} (b). The encoding of the signal pulse is separated by the ellipses from the channel-switching and displacement.} \end{figure} Here again, by studying the expressions for the individual pulse areas, we find that by summing them as prescribed by Eq.~\eqref{eq:3area} we obtain $ \Theta_{123} (x)=2\pi$. This goes to show that the three-pulse area is not limited to a specific type of preparation of the system. The solution just presented is interesting in itself; therefore, its interaction with subsequent pulses also deserves mention.
Consider the second-order solution born out of the superposition of the solution previously discussed and that of two control pulses, of total two-pulse area equal to $2\pi$ and duration $\tau_b$, decoupled from the medium. We discover that the information can be displaced between channels by the subsequent interaction of the imprint with other control pulses. For simplicity, we will consider the case in which the imprint was only encoded into channel 1-2 and we want to move all the information into the other channel (this case is depicted in Figs.~\ref{fig:pulses2} and \ref{fig:den2}). In this scenario, simple expressions can be worked out for the pulse areas necessary to achieve the channel-switching of the information encoded in the medium and its displacement. These are given by \begin{equation} |\theta_2|=2\pi \sqrt{\frac{1+\tau_b/\tau_a}{2}} \quad \text{and} \quad | \theta_3|=2\pi \sqrt{\frac{1-\tau_b/\tau_a}{2}}, \end{equation} and the corresponding displacement of the imprint is \begin{equation} \delta=\frac{1}{2}\ln\left(\frac{\tau_a+\tau_b}{\tau_a-\tau_b}\right). \end{equation} We note that the switching and the displacement are related: depending on the duration of the control pulses, we will have to choose the appropriate pulse areas. Here, we assumed that $\tau_a>\tau_b$. The relative phase between the two control pulses determines the phase of the coherence, which in turn dictates the phase of the retrieved pulse. The channel-switching of the imprint is shown in Fig.~\ref{fig:den2}. Additionally, as long as the signal pulse is only stored in one channel, we can displace the imprint and retrieve it (if we consider a finite medium) by the results already obtained for a $\Lambda$ system \cite{groves2013jaynes,gutierrez2015manipulation}.
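As a quick consistency check of these expressions, one can verify numerically that the required control-pulse areas always keep the two-control-pulse area at $2\pi$, and compute the channel ratios and displacement for an assumed pair of durations (the numbers below are illustrative):

```python
import numpy as np

tau_a, tau_b = 1.0, 0.6   # illustrative durations, tau_a > tau_b assumed

# Control-pulse areas required for the channel switch
theta2 = 2*np.pi*np.sqrt((1 + tau_b/tau_a)/2)
theta3 = 2*np.pi*np.sqrt((1 - tau_b/tau_a)/2)

# The two-control-pulse area remains 2*pi for any tau_b < tau_a
two_pulse_area = np.hypot(theta2, theta3)

# Channel ratios r_{1-2}, r_{1-3} and the accompanying displacement
r12 = theta2**2/(theta2**2 + theta3**2)
r13 = theta3**2/(theta2**2 + theta3**2)
delta = 0.5*np.log((tau_a + tau_b)/(tau_a - tau_b))
```

Note that $r_{1-2}=(1+\tau_b/\tau_a)/2$, so the ratio of durations alone fixes how the information splits between the two channels.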
\begin{figure} \centering \includegraphics[scale=1]{den2} \caption{\label{fig:den2}(Color online) Imprint as it has been encoded by the pulse dynamics shown in Fig.~\ref{fig:pulses2} in the ground state coherences before (continuous lines) and after (dashed lines) the channel-switching.} \end{figure} It is also interesting to note that, together with the displacement of the imprint, the intensity of the control pulses is inverted (see Fig.~\ref{fig:pulses2}). If the two control pulses address different transitions due to their polarization, then this inversion of intensity is actually a rotation in the polarization state of the vector-control pulse. This change in the polarization in the tripodal configuration has already been studied in different regimes of light \cite{petrosyan2004magneto,rebic2004polarization}. Of course, the change of polarization is directly tied to the initial imprint, namely its phase and the ratios $r_{1-2}$ and $r_{1-3}$. It is also relevant to mention that if we had considered an initial preparation of the medium analogous to the one in \cite{byrne2003polarization} (i.e., an incoherent superposition of the ground states), we would have obtained a three-color switching analogous to the polarization switching mentioned in their work, along with similar stochastic dynamics when dealing with disordered populations \cite{atkins2012stochastic,*newhall2013random}. \section{Summary} In summary, we have presented analytic solutions to the Maxwell-Bloch equations for a tripodal system which suggest the possibility of high-speed storage and retrieval of a vector soliton, as well as that of a two-channel memory. These results are similar to the ones derived in \cite{raczynski2006polariton,karpa2008resonance}, which are based on the dark-state polariton formalism; that formalism inevitably carries with it the adiabatic approximation, limiting the speed of the processes involved.
Still, it is interesting to note the similarities of our results to previous treatments dealing with quantum states of light, which entail free-space propagation of the control pulse and thus the omission of any change of its shape during propagation. We also defined an extension of the area theorem for this three-pulse scenario which is not tied to a specific preparation of the medium. This result raises new questions, such as: What is the appropriate way to combine the individual pulse areas in given multi-level and multi-pulse systems? Clearly, there must be a dependence on the connections between atomic levels and the numbers of ground and excited states. We expect these results to be of scientific importance for the continuing development of the field of pulse storage and manipulation. \section*{Acknowledgments} This work was supported by NSF through Grants No. PHY-1203931 and No. PHY-1505189, and by a CONACYT fellowship awarded to R.~G.~C.
\section{Introduction} Quantum entanglement is a very important physical resource in quantum information \cite{RH09}. Recently, it has been shown that the quantum discord \cite{HO01,LH01}, as another kind of quantum correlation, can depict the quantumness of a quantum state more deeply than quantum entanglement in some quantum information processing tasks, such as the DQC1 model \cite{AD08}, and is more robust against a decohering environment. By using different methods, several definitions of quantum correlation were proposed, based, for example, on quantum information theory \cite{HO01,LH01,Luo08}, the Hilbert-Schmidt norm \cite{Dakic10}, relative entropy \cite{Modi10}, the trace norm \cite{FP13}, and so on. However, a definition of quantum correlation that captures both good physical properties and computability is the local quantum uncertainty \cite{DG13}, which is based on the skew information \cite{EW63,SL03prl}. The local quantum uncertainty (LQU) can not only measure quantum correlation, but can also be applied in the field of quantum metrology \cite{DG13,Yu14epl}, which is widely used for quantum frequency standards \cite{Santarelli99}, gravitational wave detection \cite{Ligo16}, quantum clock synchronization \cite{VG01}, and so on. The goal of parameter estimation is not only to determine the values of the unknown parameters, but also to investigate the accuracy of the measurements \cite{VG06,VG11}. Using only classical methods, the limit of the precision of measurements is the standard quantum limit, or shot-noise limit, i.e., $1/\sqrt{N}$, where $N$ is the number of experimental repetitions or the number of particles used in the experiments \cite{VG11}.
However, utilizing quantum resources, such as quantum squeezing \cite{CC81}, quantum entanglement \cite{SH97}, NOON states \cite{Dorner09}, entangled coherent states \cite{Joo11}, and so on, the precision of quantum parameter estimation can be enhanced and surpass the standard quantum limit, even reaching the Heisenberg limit $1/N$. Other quantum technologies, such as weak measurement \cite{SP15,LZ15}, dynamical decoupling \cite{QT13}, and quantum error correction \cite{WD14}, can also be used to enhance the precision of quantum metrology. Estimation of multiple quantum parameters has also been reported \cite{PH13,JY14}. In quantum metrology, the precision of quantum parameter estimation is governed by the quantum Cram\'{e}r-Rao inequality $\Delta(\theta)\geq 1/\sqrt{N\mathcal{F}_{\theta}}$, where the quantum Fisher information (QFI) is given by $\mathcal{F}_{\theta}=\mathrm{Tr}\{\rho_{\theta} L_{\theta}^2\}$ with the symmetric logarithmic derivative $L_{\theta}$ determined by $2\partial_{\theta}\rho=\rho L_{\theta}+L_{\theta}\rho$. The QFI plays the central role in quantum metrology, and a larger value of the QFI means a higher precision of the parameter estimation when the number of repetitions $N$ is fixed. In the unitary evolution \cite{VG06}, a parameter $\theta$ can be encoded into the probe state through the unitary transformation, i.e., $\rho_{\theta}=U_{\theta}^{\dagger}\rho U_{\theta}$ with $U_{\theta}=e^{ik\theta}$. The quantum state $\rho_{\theta}$ satisfies the von Neumann-Landau equation $i\partial\rho_{\theta}/\partial\theta=k\rho_{\theta}-\rho_{\theta}k$, and the QFI and skew information satisfy the inequality relation \cite{SL03} \begin{equation} \mathcal{I}(\rho,k)\leq\frac{1}{4}\mathcal{F}_{\theta}(\rho,k)\leq2\mathcal{I}(\rho,k) .\label{1} \end{equation} Through optimizing the operator $k$, the precision of the parameter can be bounded by the local quantum uncertainty \cite{DG13}. However, the effect of the environment is not considered in Refs.
\cite{SL03,DG13}. In open systems, due to the unavoidable interaction with their surroundings \cite{HB07,HB16,AR12}, the unitary dynamics of a quantum system will be distorted by noise, and the decoherence effects will influence the precision of the quantum parameter estimation; for example, the metrological advantage of using quantum entanglement will be weakened \cite{Zhangym13} or will disappear \cite{SH97}. Quantum metrology in open systems has been investigated intensively \cite{AC12,XL10,Escher11}. A natural question arises whether a relation similar to that given in Eq. (\ref{1}) still holds in open systems. In this paper, we will reexamine the relation between the precision of quantum parameter estimation (QFI) and the quantum correlation (LQU) in open systems. We employ two coupled two-level systems, each interacting with an independent non-Markovian environment; assume that the initial state, with an embedded phase parameter $\theta$, is an entangled state; and investigate the effects of the environment and of the coupling interaction between the two subsystems on the QFI and LQU. We find that the QFI does not depend on the estimated parameter, and its decay can be restrained by enhancing the coupling strength or the non-Markovianity. The LQU, in contrast, is related to the phase parameter $\theta$ and exhibits richer phenomena. We find that a relation similar to Eq. (\ref{1}) is satisfied if the coupling between the two systems is switched off or if the initial state is a Bell state, although such a bound relation does not exist in general. This paper is organized as follows. In Sec. 2, we give the preliminaries about the LQU and QFI. In Sec. 3, the employed model is described. In Sec. 4, the dynamics of the QFI and LQU are investigated, and their relationship is also considered. The conclusion is given in Sec. 5. \section{The Preliminaries} In this section, we will give a brief introduction to the local quantum uncertainty and the quantum Fisher information.
The LQU based on the skew information is given by \cite{DG13} \begin{eqnarray} \mathcal{U}(\rho)=\min_{K^{\Lambda}} \mathcal{I}(\rho,K^{\Lambda}), \label{eq:lqu} \end{eqnarray} where $K^{\Lambda}=K_{a}\otimes \mathbb{I}_{b}$ denotes a Hermitian operator acting on subsystem $A$ with non-degenerate spectrum $\Lambda$. The skew information $\mathcal{I}(\rho,K^{\Lambda})$ quantifies the non-commutativity between the quantum state $\rho$ and the operator $K^{\Lambda}$, and is defined as \begin{eqnarray} \mathcal{I}(\rho,K^{\Lambda})=-\frac{1}{2}\mathrm{Tr}[\rho^{1/2},K^{\Lambda}]^{2}.\label{dingyi:skew} \end{eqnarray} For a $2\otimes d$-dimensional system, the LQU can be written in closed form as \begin{eqnarray} \mathcal{U}(\rho)=1-\lambda_{\max}(W_{ab}),\label{lqudingyi} \end{eqnarray} where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue and $W_{ab}$ is the $3\times3$ symmetric matrix $ (W_{ab})_{ij}=\mathrm{Tr}\{\rho^{1/2}(\sigma_{i}\otimes \mathbb{I}_{b})\rho^{1/2}(\sigma_{j}\otimes \mathbb{I}_{b})\}$ with $i,j=x,y,z$. The quantum Fisher information is the maximum information about the estimated parameter $\theta$ that can be obtained from optimal measurements. In this paper, the QFI is chosen as the SLD (symmetric logarithmic derivative) definition $\mathcal{F}_{\theta}(\rho_{\theta})=\mathrm{Tr}\{\rho_{\theta} L_{\theta}^2\}$, where the SLD $L_{\theta}$ is given by $2\partial_{\theta}\rho_{\theta}=\rho_{\theta} L_{\theta}+L_{\theta}\rho_{\theta}$.
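The closed form of Eq. (\ref{lqudingyi}) is straightforward to evaluate numerically. The following sketch (our own illustration, not code from the original work) computes the LQU of a $2\otimes d$ state and reproduces two standard limits: a Bell state gives $\mathcal{U}=1$ and a product state gives $\mathcal{U}=0$:

```python
import numpy as np

def sqrtm_psd(rho):
    """Square root of a positive-semidefinite density matrix."""
    w, v = np.linalg.eigh(rho)
    return (v*np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def lqu(rho, d=2):
    """Local quantum uncertainty of a 2 x d state, Eq. (lqudingyi)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ops = [np.kron(s, np.eye(d)) for s in (sx, sy, sz)]
    r = sqrtm_psd(rho)
    # (W_ab)_{ij} = Tr{ rho^{1/2} (sigma_i x I) rho^{1/2} (sigma_j x I) }
    W = np.array([[np.trace(r @ si @ r @ sj).real for sj in ops]
                  for si in ops])
    return 1 - np.linalg.eigvalsh(W).max()

bell = np.array([1, 0, 0, 1], dtype=complex)/np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())          # maximally entangled state
rho_prod = np.diag([1.0, 0, 0, 0]).astype(complex)  # product state |00><00|
```

For a pure state $\rho^{1/2}=\rho$, so $W_{ij}$ reduces to $\langle\sigma_i\otimes\mathbb{I}\rangle\langle\sigma_j\otimes\mathbb{I}\rangle$, which makes the two limits above easy to verify by hand as well.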
For a quantum state $\rho_{\theta}$, the QFI $\mathcal{F}_{\theta}$ for the estimated parameter $\theta$ is given as follows \cite{MP09,JM11} \begin{eqnarray} \mathcal{F}_{\theta}(\rho_{\theta})=\sum_{i}\frac{{(\partial_{\theta}\lambda_{i})^{2}}}{\lambda_{i}} +\sum_{i\neq j}\frac{{2(\lambda_{i}-\lambda_{j})^{2}}}{\lambda_{i}+\lambda_{j}} \vert\langle\varphi_{i}\vert\partial_{\theta}\varphi_{j}\rangle\vert^{2},\label{formula:qfi} \end{eqnarray} where $\lambda_{i}$ is an eigenvalue of the estimated state $\rho_{\theta}$, $\vert\varphi_{i}\rangle$ is the corresponding eigenvector, and $\partial_{\theta}(\cdot)$ denotes the partial derivative. For a non-full-rank density matrix, the expression of the QFI $\mathcal{F}_{\theta}$ can be rewritten as \cite{Zhangym13,LiuJ13,LiuJ14,LiuJ14a} \begin{eqnarray} \mathcal{F}_\theta(\rho_\theta)=\sum_{i=1}^r\frac{(\partial_\theta\lambda_i)^2}{\lambda_i} +\sum_{i=1}^r4\lambda_i\langle\partial_\theta\varphi_i\vert\partial_\theta\varphi_i\rangle -\sum_{i,j=1}^r\frac{8\lambda_i\lambda_j}{\lambda_i+\lambda_j}\vert\langle\varphi_i\vert\partial_\theta\varphi_j\rangle\vert^2, \end{eqnarray} where $r$ is the rank of the density matrix. For a pure state $\vert\varphi\rangle$, the QFI simplifies to $\mathcal{F}_\theta(\vert\varphi\rangle)=4\left(\langle\partial_\theta\varphi\vert\partial_\theta\varphi\rangle -\vert\langle\varphi\vert\partial_\theta\varphi\rangle\vert^2\right)$. \section{The model and its solution} \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{tu1.eps}\\ \caption{(Color online) The model of the phase parameter estimation in open systems. Two coupled atoms $A$ and $B$ interact independently with non-Markovian environments, and the coupling strength between the two atoms is $J$. The phase parameter $\theta$ is embedded into the state $\rho_{\theta}$ through the unitary operation $U_{\theta}$.
The quantum correlation and the quantum Fisher information can be obtained after measurement.}\label{tu1} \end{figure} The model for the phase parameter estimation is shown in Fig. \ref{tu1}. The coupled two-level atoms $A$ and $B$ interact with independent non-Markovian environments, respectively. The transition frequencies are assumed to be the same, i.e., $\omega_{a}=\omega_{b}=\omega_{0}$; the reservoir modes are $\omega_{k}$; the coupling between each subsystem and its reservoir is denoted $g_k$; and the coupling strength between the subsystems $A$ and $B$ is $J$. Due to the interaction between the quantum system and its surroundings, the dynamics of the quantum state $\rho$ consisting of subsystems $A$ and $B$ will not be unitary. The quantum Fisher information and the quantum correlation can be obtained after measurements. In natural units, i.e., $\hbar=1$, the total Hamiltonian of the system and environment is \begin{equation} H=H_a+H_b+H_{ab} \end{equation} with \begin{eqnarray*} H_{a} & = & \omega_{0}\sigma_{a}^{+}\sigma_{a}^{-}+\sum_{k} \omega_{k}a_{k}^{\dagger}a_{k} +\sum_{k}(g_{k}^{*}\sigma_{a}^{+}a_{k}+\mathrm{h.c.}),\\ H_{b} & = & \omega_{0}\sigma_{b}^{+}\sigma_{b}^{-}+\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k} +\sum_{k}(g_{k}^{*}\sigma_{b}^{+}b_{k}+\mathrm{h.c.}),\\ H_{ab} & = & J(\sigma_{a}^{+}\sigma_{b}^{-}+\sigma_{a}^{-}\sigma_{b}^{+}), \end{eqnarray*} where $\sigma_{a}^+(\sigma_b^+)=\vert e\rangle\langle g\vert$ and $\sigma_{a}^-(\sigma_b^-)=\vert g\rangle\langle e\vert$ denote the raising and lowering operators for subsystem $A(B)$; $a_k^\dag(b_k^\dag)$ and $a_k(b_k)$ represent the creation and annihilation operators for the $k$-th reservoir mode with frequency $\omega_k$ for atom $A(B)$; and $H_{ab}$ describes the hopping interaction between the subsystems $A$ and $B$. The abbreviation h.c. denotes the Hermitian conjugate.
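To see how such a Hamiltonian behaves, one can discretize each reservoir into $N$ modes and evolve the single-excitation sector exactly. The sketch below is our own illustration; the mode grid, spectral-density parameters, and evolution time are arbitrary assumptions (a Lorentzian density anticipates the choice made later in the text):

```python
import numpy as np

# Single-excitation sector of H = H_a + H_b + H_ab with each reservoir
# discretized into N modes (illustrative sketch; parameters are assumptions).
w0, J, gamma0, lam, N = 1.0, 0.2, 0.1, 0.05, 800
wk = np.linspace(w0 - 1.0, w0 + 1.0, N)
dw = wk[1] - wk[0]
# couplings sampled from a Lorentzian spectral density of width lam
gk = np.sqrt(gamma0*lam**2/(2*np.pi*(lam**2 + (wk - w0)**2))*dw)

# Basis: |eg;vac>, |ge;vac>, |gg;1_k>_a (N states), |gg;1_k>_b (N states)
dim = 2 + 2*N
H = np.zeros((dim, dim))
H[0, 0] = H[1, 1] = w0
H[0, 1] = H[1, 0] = J                       # hopping term H_ab
idx_a, idx_b = 2 + np.arange(N), 2 + N + np.arange(N)
H[idx_a, idx_a] = H[idx_b, idx_b] = wk      # reservoir mode energies
H[0, idx_a] = H[idx_a, 0] = gk              # atom A <-> reservoir a
H[1, idx_b] = H[idx_b, 1] = gk              # atom B <-> reservoir b

# Evolve (a0|eg> + b0|ge>) x |vac> by exact diagonalization
a0 = b0 = 1/np.sqrt(2)
psi0 = np.zeros(dim, dtype=complex); psi0[0], psi0[1] = a0, b0
w, v = np.linalg.eigh(H)
psit = v @ (np.exp(-1j*w*30.0)*(v.conj().T @ psi0))

atomic_pop = abs(psit[0])**2 + abs(psit[1])**2   # |A(t)|^2 + |B(t)|^2
```

Because the total excitation number is conserved, the norm stays exactly one while the atomic population leaks irreversibly (before the finite-grid recurrence time $\sim2\pi/\Delta\omega$) into the reservoir modes.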
Without loss of generality, we will assume that the initial state of the quantum system is a superposition state and that both independent reservoirs interacting with the atoms $A$ and $B$ are in the vacuum state, i.e., $\vert 0\rangle_a\vert 0\rangle_b$. The initial state of the system and environment is \begin{equation} \vert\psi_0\rangle=\left(a_0\vert eg\rangle+b_0\vert ge\rangle\right)\vert0\rangle_{a}\vert0\rangle_{b}, \end{equation} where $\vert e\rangle$ is the excited state and $\vert g\rangle$ is the ground state, and the coefficients $a_0$ and $b_0$ satisfy $\vert a_0\vert^2+\vert b_0\vert^2=1$. Before the evolution, an estimated phase parameter $\theta$ will be embedded into the initial state through the unitary operation $U_{\theta}$. The initial state then becomes $\left(a_0e^{i\theta}\vert eg\rangle+b_0\vert ge\rangle\right)\vert0\rangle_{a}\vert0\rangle_{b}$. Throughout the whole process, there exists only one excitation. In this paper, we consider only the case in which each subsystem is resonant with its reservoir, and the structure of the reservoir is assumed to have the Lorentzian form \begin{eqnarray} I(\omega)=\frac{1}{2\pi}\frac{\gamma_{0}\lambda^{2}}{\lambda^{2}+(\omega-\omega_{0})^{2}}, \end{eqnarray} where $\omega_0$ is the transition frequency of the subsystems $A$ and $B$; the parameter $\lambda$, determined by the spectral width of the system-reservoir coupling, sets the reservoir correlation time $\tau_B=1/\lambda$; and the parameter $\gamma_0$ is related to the relaxation time scale of the system, $\tau_R=1/\gamma_0$. When $\tau_B\gg\tau_R$ is satisfied, non-Markovian phenomena will strongly affect the dynamics of the quantum system.
The evolved quantum state at time $t$ is \begin{eqnarray} \vert\psi(t)\rangle &=& A(t)\vert eg\rangle\vert0\rangle_a\vert0\rangle_b+\sum_{k}C_{k}(t)\vert gg\rangle\vert1_{k}\rangle_a\vert0\rangle_b\notag \\ &&+B(t)\vert ge\rangle\vert0\rangle_a\vert0\rangle_b+\sum_{k}D_{k}(t)\vert gg\rangle\vert0\rangle_a\vert1_{k}\rangle_b.\label{eq:psi-t} \end{eqnarray} Substituting $\vert\psi(t)\rangle$ into the Schr\"{o}dinger equation $i\vert\dot{\psi}(t)\rangle=H\vert\psi(t)\rangle$, similarly to Ref. \cite{HS16}, one can obtain the following differential equations for the coefficients $A(t),B(t),C_k(t)$ and $D_k(t)$: \begin{subequations} \begin{eqnarray} i\dot{A}(t) & = &\omega_{0}A(t)+J\cdot B(t)+\sum_{k}g_{k}C_{k}(t),\label{eq:At-independent}\\ i\dot{B}(t) & = &\omega_{0}B(t)+J\cdot A(t)+\sum_{k}g_{k}D_{k}(t),\label{eq:Bt-independent}\\ i\dot{C}_{k}(t) & =& \omega_{k}C_{k}(t)+g_{k}^{*}A(t),\label{eq:ck-independent}\\ i\dot{D}_{k}(t) & = &\omega_{k}D_{k}(t)+g_{k}^{*}B(t).\label{eq:dk-independent} \end{eqnarray}\label{A1} \end{subequations} The solutions for the coefficients $A(t)$ and $B(t)$ can be given as \begin{eqnarray} A(t)=a(t)e^{-i\omega_0t},B(t)=b(t)e^{-i\omega_0t},\label{11} \end{eqnarray} where \begin{eqnarray} a(t)&=&\frac{h}{2}e^{-\frac{1}{2}(\lambda+iJ)t}\left[a_0e^{i\theta}+b_0\right]+\frac{h^{\ast}}{2}e^{-\frac{1}{2}(\lambda-iJ)t}[a_0e^{i\theta}-b_0],\notag\\ b(t)&=&\frac{h}{2}e^{-\frac{1}{2}(\lambda+iJ)t}\left[a_0e^{i\theta}+b_0\right]-\frac{h^{\ast}}{2}e^{-\frac{1}{2}(\lambda-iJ)t}[a_0e^{i\theta}-b_0],\notag\\ h&=&\cosh\left(\frac{d\cdot t}{2}\right)+\frac{\lambda-iJ}{d}\sinh\left(\frac{d\cdot t}{2}\right),\notag\\ d&=&\sqrt{-J^{2}-2iJ\lambda+\lambda(-2\gamma_0+\lambda)}.\label{eq:atbt} \end{eqnarray} The detailed derivation is given in the Appendix.
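The closed-form amplitudes above are easy to implement and check numerically. The sketch below (our own illustration) verifies that $a(0)=a_0e^{i\theta}$, $b(0)=b_0$, and that the total excited-state population $|a(t)|^2+|b(t)|^2$ never exceeds one; the parameter values are arbitrary assumptions in units of $\gamma_0$:

```python
import numpy as np

def amplitudes(t, theta, a0, b0, J, lam, gamma0=1.0):
    """Amplitudes a(t), b(t) of Eq. (atbt); gamma0 = 1 sets the unit."""
    d = np.sqrt(-J**2 - 2j*J*lam + lam*(lam - 2*gamma0) + 0j)
    h = np.cosh(d*t/2) + (lam - 1j*J)/d*np.sinh(d*t/2)
    ep = 0.5*np.exp(-0.5*(lam + 1j*J)*t)*h           # x/2
    em = 0.5*np.exp(-0.5*(lam - 1j*J)*t)*np.conj(h)  # x*/2
    s = a0*np.exp(1j*theta) + b0
    r = a0*np.exp(1j*theta) - b0
    return ep*s + em*r, ep*s - em*r

# Illustrative parameters: J = 1.5, lam = 0.1 (in units of gamma_0)
a0 = b0 = 1/np.sqrt(2)
at0, bt0 = amplitudes(0.0, np.pi/2, a0, b0, J=1.5, lam=0.1)
at, bt = amplitudes(5.0, np.pi/2, a0, b0, J=1.5, lam=0.1)
pop = abs(at)**2 + abs(bt)**2   # lambda_1 = |a(t)|^2 + |b(t)|^2
```

The `+ 0j` in the definition of `d` forces a complex square root, since the argument is a negative real number when $J=0$ and $\lambda<2\gamma_0$.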
After tracing out the effects of the environment, the reduced density matrix of the quantum system $\rho(t)$ can be given by \begin{eqnarray} \rho(t)=\left(\begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & \vert a(t)\vert^{2} & a(t) b(t)^{*} & 0\\ 0 & b(t)a(t)^{*} & \vert b(t)\vert^{2} & 0\\ 0 & 0 & 0 & 1-\vert a(t)\vert^{2}-\vert b(t)\vert^{2} \end{array}\right).\label{rho} \end{eqnarray} In the following, we will investigate the QFI for the estimated parameter $\theta$, the LQU for the quantum state $\rho(t)$, and the relationship between them. \section{The dynamics and relationship between QFI and LQU} \begin{figure}[h!] \centering \includegraphics[width=1\columnwidth]{sanweitu.eps}\\ \caption{The dynamics of the quantum Fisher information and the local quantum uncertainty. The variations of the QFI and LQU with the initial-state parameter $a_0$ are shown in panels (a) and (b). The parameters are $J=1.5\gamma_0$ (coupling strength), $\lambda=0.1\gamma_0$ (environment parameter) and $\theta=\pi$ (phase parameter). The dynamics of the QFI and LQU as functions of the coupling strength $J$ are shown in panels (c) and (d). The initial state is chosen as a Bell state, and the other parameters are chosen as $\lambda=0.15\gamma_0$ and $\theta=\pi/2$. The evolution of the QFI and LQU with the environment parameter $\lambda$ is shown in panels (e) and (f). The other parameters are $J=1.5\gamma_0$, $a_0=0.5$ and $\theta=0$. In all sub-panels, the parameter $\gamma_0$ is chosen as the unit.}\label{Fig:1} \end{figure} Through a tedious calculation, one can obtain the LQU for the quantum state $\rho(t)$: \begin{eqnarray} \mathcal{U}(\rho)=1-\max\{W_1,W_2\},\label{eq:lqu} \end{eqnarray} where $W_1$ and $W_2$ are given by \begin{eqnarray} W_1=2\vert a(t)\vert^2\sqrt{\frac{\lambda_2}{\lambda_1}},W_2=1-\frac{4\vert a(t)b(t)\vert^2}{\lambda_1},\label{w1w2} \end{eqnarray} and $\lambda_i$ is an eigenvalue of the quantum state $\rho(t)$ in Eq.
(\ref{rho}) with $\lambda_1=\vert a(t)\vert^2+\vert b(t)\vert^2, \lambda_2=1-\lambda_1$. The QFI for the phase parameter $\theta$ of the quantum state $\rho(t)$ can be obtained as follows \begin{eqnarray} \mathcal{F}_{\theta}(\rho)=\frac{4\vert b(t)\partial_{\theta}a(t)-a(t)\partial_{\theta}b(t)\vert^2}{\lambda_1}.\label{eq:Fq} \end{eqnarray} The amplitude parameters $a(t)$ and $b(t)$ in Eq. (\ref{eq:atbt}) can be rewritten as \begin{eqnarray} a(t)&=&\frac{x+x^*}{2}a_0e^{i\theta}+\frac{x-x^*}{2}b_0,\notag\\ b(t)&=&\frac{x-x^*}{2}a_0e^{i\theta}+\frac{x+x^*}{2}b_0, \end{eqnarray} where the parameter $x$ is defined as \begin{eqnarray} x=e^{-\frac{1}{2}(\lambda+iJ)t} h,\label{eq:x} \end{eqnarray} with the parameter $h$ given in Eq. (\ref{eq:atbt}). Thus, the QFI $\mathcal{F}_{\theta}(\rho)$ in Eq. (\ref{eq:Fq}) can be simplified as \begin{eqnarray} \mathcal{F}_{\theta}(\rho)=4\vert a_0b_0\vert^2\vert x\vert^2.\label{eq:Frho} \end{eqnarray} One can conclude that the value of the QFI does not depend on the estimated phase parameter $\theta$. The dynamics of the QFI and LQU for the reduced system $\rho(t)$ are shown in Fig. \ref{Fig:1}. The evolution of the QFI and LQU with the initial-state parameter $a_0$ and time $t$ is plotted in panels (a) and (b), respectively. The QFI reaches its maximum value at $a_0=1/\sqrt{2}$, i.e., when the initial state is the Bell state $\vert\psi_0\rangle=\frac{1}{\sqrt{2}}\left(\vert eg\rangle+\vert ge\rangle\right)$, which can also be concluded from Eq. (\ref{eq:Frho}). Unlike the quantum Fisher information, the LQU strongly depends on the phase parameter $\theta$ and exhibits richer phenomena than the quantum Fisher information. Because the value of the local quantum uncertainty is determined by the competition between $1-W_1$ and $1-W_2$, the influence of the surrounding environment and of the coupling interaction between the subsystems leads to strong periodic oscillations in the dynamics of the quantum correlation. In both Fig.
\ref{Fig:1} (a) and (b), the coupling strength is $J=1.5\gamma_0$ and the non-Markovian parameter is chosen as $\lambda=0.1\gamma_0$. The phase parameter in the dynamics of the quantum correlation is chosen as $\theta=\pi$. Since the value of the QFI is independent of the estimated phase parameter $\theta$, the phase parameter in panel (a) can be chosen arbitrarily. Similar phenomena can be found for other choices of parameters. The evolution of the QFI and LQU with the coupling strength $J$ is shown in Fig. \ref{Fig:1} (c) and (d), where the initial state is chosen as the Bell state $\vert\psi_0\rangle=\frac{1}{\sqrt{2}}\left(\vert eg\rangle+\vert ge\rangle\right)$, for which the QFI reaches its maximum once the other conditions are fixed. The phase parameter is selected as $\theta=\pi/2$ and the non-Markovian parameter is chosen as $\lambda=0.15\gamma_0$. We find that the decay of the QFI is rapid when the coupling interaction $J$ is small. However, as the interaction between the subsystems increases, the decay of the QFI can be suppressed dramatically. The coupling interaction $J(\sigma_a^+\sigma_b^-+\sigma_a^-\sigma_b^+)$ induces a hopping transition between the quantum states $\vert eg\rangle$ and $\vert ge\rangle$, and a stronger coupling interaction keeps more information about the phase $\theta$ in the states $\vert eg\rangle$ and $\vert ge\rangle$, so one can conclude that a strong coupling interaction can significantly restrain the decay of the QFI. The LQU, however, displays a stronger periodicity as the coupling strength $J$ increases. This can be explained by noting that the competition between $1-W_1$ and $1-W_2$ becomes more intense when the coupling $J$ is larger, with the consequence that the (quasi-)period of the oscillation becomes shorter. In Fig. \ref{Fig:1} (e) and (f), we plot the dynamics of the QFI and LQU along with the environment parameter $\lambda$.
The initial-state coefficient is chosen as $a_0=0.5$, the phase parameter $\theta$ is chosen as $0$, and the coupling strength $J$ is chosen as $1.5\gamma_0$. Due to the interaction between the system and environment, the quantum state $\vert eg\rangle$ or $\vert ge\rangle$ will be transformed into the state $\vert gg\rangle$ and the information of the quantum state will be lost. The parameter $\lambda$ is connected to the spectral width of the reservoir; a small $\lambda$ implies not only a weak interaction between the system and the reservoir, but also a strong non-Markovianity. The process of losing information therefore slows down, and strong non-Markovianity means that information can flow back from the environment to the quantum system \cite{XL10}. Thus, one finds that the decay of the QFI and LQU can be suppressed when the environment parameter $\lambda$ is smaller, in other words, when the non-Markovianity is more pronounced. When the quantum state $\rho_{\theta}$ satisfies the von Neumann-Landau equation $i\partial\rho_{\theta}/\partial\theta=k\rho_{\theta}-\rho_{\theta}k$ ($\rho_\theta$ can be generated by the operator $k$ through $\rho_{\theta}=e^{-i\theta k}\rho e^{i\theta k}$), the quantum Fisher information and the skew information satisfy the inequality \begin{eqnarray} \mathcal{I}(\rho,k)\leq\frac{1}{4}\mathcal{F}_{\theta}(\rho,k)\leq2\mathcal{I}(\rho,k).\label{ineq:luo} \end{eqnarray} For a two-qubit system, if the operator $k$ is a local observable on subsystem $A$, such as $k=K_{a}\otimes \mathbb{I}_{b}$, then optimizing over all local observables $k$, one arrives at the definition of the local quantum uncertainty in Eq. (\ref{lqudingyi}). Using the quantum Cram\'{e}r-Rao inequality $\Delta(\theta)\geq 1/\sqrt{N\mathcal{F}_{\theta}}$, the parameter precision can be bounded by the LQU, $\Delta(\theta)<1/\sqrt{4\mathcal{U}}$, when the repeat number is $N=1$ \cite{DG13}.
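Inequality (\ref{ineq:luo}) for the closed (unitary) case is easy to test numerically. The sketch below (our own check, not code from the cited works) draws random qubit states and Hermitian generators $k$, computes the skew information and the QFI of the family $\rho_\theta=e^{-i\theta k}\rho e^{i\theta k}$ from the spectral decomposition, and records both quantities for every trial:

```python
import numpy as np

rng = np.random.default_rng(7)

def skew_info(rho, k):
    """I(rho,k) = -Tr([rho^{1/2}, k]^2)/2."""
    w, v = np.linalg.eigh(rho)
    sq = (v*np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    c = sq @ k - k @ sq
    return -0.5*np.trace(c @ c).real

def qfi_unitary(rho, k):
    """QFI of rho_theta = e^{-i theta k} rho e^{i theta k}: the eigenvalues
    are theta-independent, so only the second term of Eq. (formula:qfi)
    survives, with |<phi_i|d_theta phi_j>| = |k_ij| for i != j."""
    w, v = np.linalg.eigh(rho)
    kij = v.conj().T @ k @ v
    F = 0.0
    for i in range(len(w)):
        for j in range(len(w)):
            if i != j and w[i] + w[j] > 1e-12:
                F += 2*(w[i] - w[j])**2/(w[i] + w[j])*abs(kij[i, j])**2
    return F

pairs = []
for _ in range(200):
    G = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
    rho = G @ G.conj().T
    rho /= np.trace(rho).real          # random mixed qubit state
    B = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
    k = (B + B.conj().T)/2             # random Hermitian generator
    pairs.append((skew_info(rho, k), qfi_unitary(rho, k)))
```

The inequality holds term by term in the eigenbasis, since $(\lambda_i-\lambda_j)^2/(\lambda_i+\lambda_j)$ lies between $(\sqrt{\lambda_i}-\sqrt{\lambda_j})^2$ and twice that quantity.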
A natural question is whether the QFI and LQU still satisfy a similar relationship in open quantum systems, for example in the model depicted in Fig. \ref{tu1}. With the help of numerical simulation, one can see that the QFI and LQU do not satisfy the inequality relation (\ref{ineq:luo}) in general. However, in some special cases, one finds that the QFI is still bounded by the LQU. In Fig. \ref{Fig3}, the difference between the QFI and LQU is sampled $10^4$ times, where the coupling strength is $J=0$ and the other parameters are chosen randomly. The difference $\mathcal{F}_{\theta}-\mathcal{U}$ is shown in panel (a); its maximum is $1/4$ and its minimum is $0$. The difference $2\mathcal{U}-\mathcal{F}_{\theta}$ is shown in panel (b), and its range is from $0$ to $1$. A simple proof can be given as follows. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{Jis0.eps}\\ \caption{(Color online) The difference between the QFI and LQU. The coupling strength $J$ is $0$, the parameter $\gamma_0$ is chosen as the unit, and the other parameters are chosen randomly. The number of runs is $10^4$.} \label{Fig3} \end{figure} When the coupling strength $J$ between the two subsystems is $0$, in other words, when the coupling between the two systems is switched off, the parameter $x$ in Eq. (\ref{eq:x}) simplifies to \begin{eqnarray} x_0=e^{-\frac{1}{2}\lambda t} h_0, \end{eqnarray} where $h_0=\cosh(d_0t/2)+\lambda/d_0 \sinh(d_0t/2)$ with $d_0= \sqrt{\lambda(-2\gamma_0+\lambda)}$. It is easy to see that the parameter $x_0$ is real, and the amplitude parameters $a(t)$ and $b(t)$ simplify to \begin{eqnarray} a(t)=a_0e^{i\theta}x_0,b(t)=b_0x_0. \end{eqnarray} The eigenvalues $\lambda_1$ and $\lambda_2$ of the reduced density matrix of the quantum state $\rho(t)$ in Eq. (\ref{rho}) simplify to $\lambda_1=\vert x_0\vert^2$ and $\lambda_2=1-\vert x_0\vert^2$.
Firstly, we prove that the value of the LQU is smaller than the value of the QFI. The QFI $\mathcal{F}_{\theta}(\rho)$ in Eq. (\ref{eq:Frho}) simplifies to \begin{eqnarray} \mathcal{F}_{\theta}(\rho)=4\vert a_0b_0\vert^2\vert x_0\vert^2. \end{eqnarray} The LQU $\mathcal{U}(\rho)$ in Eq. (\ref{eq:lqu}) can be rewritten as \begin{eqnarray} \mathcal{U}(\rho)=\min\{\mathcal{U}_1,\mathcal{U}_2\}, \end{eqnarray} where $\mathcal{U}_1, \mathcal{U}_2$ are given by \begin{eqnarray} \mathcal{U}_1=1-2\vert a_0\vert^2\sqrt{\vert x_0\vert^2(1-\vert x_0\vert^2)}, \mathcal{U}_2=4\vert a_0b_0\vert^2\vert x_0\vert^2. \end{eqnarray} Obviously, the value of $\mathcal{F}_{\theta}$ equals $\mathcal{U}_2$. The LQU always takes the smaller of $\mathcal{U}_1$ and $\mathcal{U}_2$, so the LQU equals $\mathcal{U}_1$ if and only if $\mathcal{U}_1<\mathcal{U}_2$; otherwise it is determined by $\mathcal{U}_2$. Therefore, the LQU $\mathcal{U}$ and QFI $\mathcal{F}_{\theta}$ satisfy the relation \begin{eqnarray} \mathcal{U}\leq\mathcal{F}_{\theta}.\label{ineq1} \end{eqnarray} In the following, we find the maximum difference between the QFI and LQU. Since $\mathcal{U}_2$ equals $\mathcal{F}_{\theta}$, we only need to consider the case in which the LQU equals $\mathcal{U}_1$. For simplicity, we set $\vert a_0\vert^2=m$ and $\vert x_0\vert^2=n$, where both $m$ and $n$ range from $0$ to $1$. The difference between the QFI and LQU can then be expressed as \begin{eqnarray} \delta_1&=&{\mathcal{F}_{\theta}-\mathcal{U}}\notag\\ &=&4m(1-m)n-\left(1-2m\sqrt{n(1-n)}\right).\label{qfiminuslqu} \end{eqnarray} The second derivative with respect to $m$ is negative, i.e., $\partial^2\delta_1/\partial m^2<0$, so $\delta_1$ reaches its maximum at the point where $\partial\delta_1/\partial m=0$.
After some algebra, one obtains \begin{eqnarray} \max\delta_1=\frac{1}{4}, \end{eqnarray} which is displayed in Fig. \ref{Fig3}(a). Next, we prove that $\mathcal{F}_{\theta}\leq2\mathcal{U}$. With the LQU again chosen as $\mathcal{U}_1$, the function $2\mathcal{U}-\mathcal{F}_{\theta}$ can be expressed as \begin{eqnarray} \delta_2&=&2\mathcal{U}-\mathcal{F}_{\theta}\notag\\ &=&2\left(1-2m\sqrt{n(1-n)}\right)-4m(1-m)n. \end{eqnarray} One finds that the second derivative of $\delta_2$ is positive ($\partial^2\delta_2/\partial m^2>0$), so $\delta_2$ attains its minimum at the point where $\partial\delta_2/\partial m=0$, and $\min\delta_2=\left(\sqrt{1-n}-\sqrt{n}\right)^2\geqslant0$. This leads to the inequality \begin{eqnarray} \mathcal{F}_{\theta}\leq2\mathcal{U}.\label{ineq2} \end{eqnarray} Combining Eqs. (\ref{ineq1}) and (\ref{ineq2}), one concludes that the QFI and LQU still satisfy the relation \begin{eqnarray} \mathcal{U}(\rho)\leq\mathcal{F}_{\theta}(\rho)\leq2\mathcal{U}(\rho),\label{eq23} \end{eqnarray} when the two subsystems have no interaction. In the other case, the initial state is chosen as the Bell state $\vert\psi_0\rangle=\frac{1}{\sqrt{2}}(\vert eg\rangle+\vert ge\rangle)$, i.e., $a_0=b_0=1/\sqrt{2}$. The QFI of the phase parameter $\theta$ is \begin{eqnarray} \mathcal{F}_{\theta}(\rho)=\vert x\vert^2, \end{eqnarray} where the parameter $x$ is given in Eq. (\ref{eq:x}).
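The extrema of $\delta_1$ and $\delta_2$ derived above for the $J=0$ case can be checked by a brute-force scan of $(m,n)$ over the unit square; a minimal sketch:

```python
import numpy as np

# m = |a_0|^2 and n = |x_0|^2 scan the open unit square
m = np.linspace(1e-4, 1 - 1e-4, 1001)
n = np.linspace(1e-4, 1 - 1e-4, 1001)
M, N = np.meshgrid(m, n)

root = np.sqrt(N * (1 - N))
delta1 = 4 * M * (1 - M) * N - (1 - 2 * M * root)
delta2 = 2 * (1 - 2 * M * root) - 4 * M * (1 - M) * N

print(delta1.max())  # close to 1/4, attained near (m, n) = (0.625, 0.8)
print(delta2.min())  # nonnegative, approaching 0 as m -> 1 at n = 1/2
```

The scan confirms $\max\delta_1=1/4$ and $\delta_2\geq 0$ over the whole square, consistent with the stationary-point analysis in the text.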
The LQU for this quantum state can be expressed as \begin{eqnarray} \mathcal{U}(\rho)=\min\{\mathcal{U}_1,\mathcal{U}_2\}, \end{eqnarray} where \begin{eqnarray} \mathcal{U}_1=1-\frac{2\vert a(t)\vert^2\sqrt{\vert x\vert^2(1-\vert x\vert^2)}}{\vert x \vert^2}, \mathcal{U}_2=\frac{4\vert a(t)\vert^2(\vert x\vert^2-\vert a(t)\vert^2)}{\vert x\vert^2},\notag \end{eqnarray} with $\vert a(t)\vert^2=\vert x\vert^2/2+\mathfrak{R}\mathfrak{I}\sin\theta$, where $\mathfrak{R}$ and $\mathfrak{I}$ denote the real and imaginary parts of $x$, respectively. Obviously, the values of $\mathcal{U}_2$ and $\mathcal{F}_{\theta}$ satisfy the inequality $\mathcal{U}_2\leq\mathcal{F}_{\theta}$. Hence, when the LQU is chosen as $\mathcal{U}_2$, one concludes that $\mathcal{U}\leq\mathcal{F}_{\theta}$. When $\mathcal{U}_1$ is smaller than $\mathcal{U}_2$, the LQU is chosen as $\mathcal{U}_1$ and the inequality $\mathcal{U}\leq\mathcal{F}_{\theta}$ is satisfied automatically. However, the QFI and LQU do not satisfy the relationship $\mathcal{F}_{\theta}(\rho)\leq 2\mathcal{U}(\rho)$ in general, as can be checked by numerical simulation. When the state satisfies the von Neumann-Landau equation, the amount of quantum correlation present in a bipartite mixed state guarantees a minimum precision in the optimal phase estimation protocol \cite{DG13}. However, when the effects of the environment and the coupling interaction between subsystems are taken into account, the quantum Fisher information and the local quantum uncertainty do not satisfy a similar relationship in general.
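For the Bell-state case, the claim $\mathcal{U}_2\leq\mathcal{F}_{\theta}$ follows from the identity $\mathcal{F}_{\theta}-\mathcal{U}_2=(\vert x\vert^2-2\vert a(t)\vert^2)^2/\vert x\vert^2\geq 0$, and can also be checked numerically. In the sketch below $x$ is treated as a free complex amplitude (an assumption made purely for illustration, since the actual $x(t)$ depends on the model parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
checked = 0
while checked < 10_000:
    # x stands in for the model-dependent amplitude; |x| <= 1 by construction
    x = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) / np.sqrt(2)
    if abs(x) < 1e-6:
        continue
    theta = rng.uniform(0, 2 * np.pi)
    ax2 = abs(x)**2 / 2 + x.real * x.imag * np.sin(theta)   # |a(t)|^2
    F = abs(x)**2                                           # QFI
    U2 = 4 * ax2 * (abs(x)**2 - ax2) / abs(x)**2
    # identity: F - U2 = (|x|^2 - 2|a(t)|^2)^2 / |x|^2 >= 0
    assert U2 <= F + 1e-12
    checked += 1
print("U_2 <= F_theta on", checked, "random samples")
```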
However, when the coupling strength between the two subsystems is zero or the initial state is a Bell state, we still obtain the inequality $\mathcal{U}(\rho)\leq\mathcal{F}_{\theta}(\rho)$, and the optimal detection strategy which asymptotically saturates the quantum Cram\'{e}r-Rao bound yields the best precision of the parameter, $\theta _{\mathrm{best}}$, \begin{eqnarray} \Delta({\theta _{\mathrm{best}}}) = \frac{1}{\sqrt{\mathcal{F}_{\theta}}}\le\frac{1}{\sqrt{ \mathcal{U}}}. \end{eqnarray} Obviously, the quantum parameter precision can still be bounded by the quantum correlation in the above two special cases. \section{Conclusion} In summary, the local quantum uncertainty is not only a good measure of quantum correlation, but also an effective bound on the precision of parameter estimation, through the relation between the quantum Fisher information and the skew information. In this paper, we reexamined the relationship between the local quantum uncertainty and the precision of parameter estimation by extending the parameter estimation protocol to open quantum systems. We employed coupled two-level systems interacting with independent non-Markovian reservoirs, in which the initial state is an entangled state with the phase parameter $\theta$ embedded through a unitary operation. In general, the precision of the phase parameter cannot be bounded by the quantum correlation. However, in some special cases, for example when the coupling between the two subsystems is switched off or the initial state is a Bell state, the phase parameter precision can still be bounded by the quantum correlation. In this paper, we only investigated environments with a Lorentzian spectral structure; the general relationship between the QFI and LQU in open systems deserves further investigation.
\section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China under Grants No.11747022, 11775040 and 11375036, the Xinghai Scholar Cultivation Plan, the Doctoral Startup Foundation of North University of China (No.130088), and the Science Foundation of North University of China (No.2017031). \renewcommand{\theequation}{A\arabic{equation}} \setcounter{equation}{0} \section*{Appendix: The derivation of $a(t)$ and $b(t)$ in Eq. (\ref{eq:atbt})} In order to solve the equations (\ref{A1}), a rotating frame is used: $A(t)=a(t)e^{-i\omega_{0}t}$, $B(t)=b(t)e^{-i\omega_{0}t}$, $C_{k}(t)=c_{k}(t)e^{-i\omega_{k}t}$, $D_{k}(t)=d_{k}(t)e^{-i\omega_{k}t}$. From Eq. (\ref{eq:ck-independent}), one obtains \begin{eqnarray} \dot{c}_{k}(t)=-i g_{k}^{*}a(t)e^{-i(\omega_{0}-\omega_{k})t}. \end{eqnarray} The formal integral for $c_k(t)$ is \begin{equation} c_{k}(t)=-i\int_{0}^{t}g_{k}^{*}a(\tau)e^{-i(\omega_{0}-\omega_{k})\tau}\text{d}\tau. \label{eq:ck-jifen} \end{equation} Substituting Eq. (\ref{eq:ck-jifen}) into Eq. (\ref{eq:At-independent}), one obtains the following integro-differential equation: \begin{eqnarray} \dot{a}(t)=-iJ\cdot b(t)-\sum_{k}|g_{k}|^{2}\int_{0}^{t}a(\tau)e^{-i(\omega_{0}-\omega_{k})(t-\tau)}\text{d}\tau. \end{eqnarray} Assuming that the coupling between the atom and the reservoir is characterized by the spectral density $I(\omega)$ and taking the continuum limit of the reservoir modes, one obtains the final integro-differential equation for $a(t)$, \begin{eqnarray} \dot{a}(t)=-iJ\cdot b(t)-\int_{0}^{t}\text{d}\tau f(t-\tau)a(\tau),\label{A5} \end{eqnarray} where the kernel $f(t-\tau)$ can be expressed in terms of the spectral density of the reservoir $I(\omega)$, \begin{equation} f(t-\tau)=\int\text{d}\omega I(\omega)e^{i(\omega_{0}-\omega)(t-\tau)}.
\label{eq:fttau} \end{equation} Similarly, one arrives at the integro-differential equation for $b(t)$, \begin{eqnarray} \dot{b}(t)=-iJ\cdot a(t)-\int_{0}^{t}\text{d}\tau f(t-\tau)b(\tau).\label{A7} \end{eqnarray} In order to solve the integro-differential equations (\ref{A5}) and (\ref{A7}), the spectral density of the reservoir is assumed to take the Lorentzian form \[ I(\omega)=\frac{{1}}{2\pi}\frac{\gamma_{0}\lambda^{2}}{\lambda^{2}+(\omega-\omega_{0})^{2}}. \] Using the Laplace transformation $F(s)=\int_0^{\infty}f(t)e^{-st}\text{d}t$, the integro-differential equations (\ref{A5}) and (\ref{A7}) can be solved as \begin{eqnarray} a(s)&=&\frac{-iJ\cdot b_0+[s+f(s)]a_0e^{i\theta}}{J^{2}+[s+f(s)]^{2}},\notag\\ b(s)& = & \frac{-iJ\cdot a_0e^{i\theta}+[s+f(s)]b_0}{J^{2}+[s+f(s)]^{2}}.\label{Abs} \end{eqnarray} In order to obtain the amplitude parameters $a(t)$ and $b(t)$ in Eq. (\ref{eq:atbt}), the inverse Laplace transform of Eq. (\ref{Abs}) is used, i.e., $f(t)=\frac{{1}}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}F(s)e^{st}\text{d}s$, where the arbitrary positive constant $\gamma$ is chosen to lie to the right of all singularities of $F(s)$.
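For the $J=0$ case, the Lorentzian spectral density gives the exponential kernel $f(t)=\frac{\gamma_0\lambda}{2}e^{-\lambda t}$, so the memory integral can be traded for an auxiliary variable $z(t)=\int_0^t f(t-\tau)a(\tau)\,\text{d}\tau$ obeying $\dot z=\frac{\gamma_0\lambda}{2}a-\lambda z$. The sketch below (with illustrative parameter values) integrates this local system and compares the result with the closed form $x_0$ quoted in the main text:

```python
import numpy as np

gamma0, lam = 1.0, 0.15  # illustrative values, in units of gamma_0

def closed_form(t):
    # x_0(t) = e^{-lam t / 2} [cosh(d0 t / 2) + (lam / d0) sinh(d0 t / 2)]
    d0 = np.sqrt(complex(lam * (lam - 2 * gamma0)))
    return np.exp(-lam * t / 2) * (np.cosh(d0 * t / 2)
                                   + (lam / d0) * np.sinh(d0 * t / 2))

def rhs(y):
    # a' = -z,  z' = (gamma0 lam / 2) a - lam z   (J = 0, Lorentzian kernel)
    a, z = y
    return np.array([-z, gamma0 * lam / 2 * a - lam * z], dtype=complex)

T, steps = 20.0, 20000
h = T / steps
y = np.array([1.0, 0.0], dtype=complex)  # a(0) = 1, no accumulated memory
for _ in range(steps):                   # classical RK4
    k1 = rhs(y); k2 = rhs(y + h / 2 * k1)
    k3 = rhs(y + h / 2 * k2); k4 = rhs(y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(y[0] - closed_form(T)))  # agreement between ODE solution and x_0
```

For $\lambda<2\gamma_0$ the quantity $d_0$ is imaginary, so $x_0$ oscillates under the exponential envelope, which is the non-Markovian regime discussed in the main text.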
\section{Introduction} Cancer is a complex disease arising from molecular alterations that interact with and obstruct normal biological processes and produce phenotypic changes. Imaging modalities, such as microscopic imaging of hematoxylin-and-eosin (H\&E) stained slides and high-throughput genomics provide complementary information about the phenotypic traits (such as cell morphology) and molecular traits (such as gene expression and mutations) in a tumor. The Cancer Genome Atlas (TCGA)~\cite{Koboldt2012} provides rich resources of imaging, genomic, and clinical data and exemplifies the growing interest in comprehensive phenotypic and genomic data sets for disease understanding. To explore connections between multiple data modalities, correlation analysis is the most straightforward approach. However, since gene expression is the product of complex interacting cellular processes, cross-modality connections should be made that consider their relative levels and not just pairwise relationships. A recent effort to explore correlation analysis for paired image and genomic data is the work by Cooper \textit{et al.}~\cite{Cooper2015}. The authors extracted human-annotated measures of necrosis and angiogenesis, along with cellular features, from histopathological images of glioblastomas and studied their correlation with gene expression. To incorporate interactions between genes, each patient's gene expression was represented as a mixture of clustered gene signatures derived from the data. Canonical correlation analysis (CCA)~\cite{hotelling1936relations} is an alternative approach for exploration that extends pairwise correlation analysis by considering linear combinations of the variables of each modality. An advantage of CCA is that it requires no preprocessing of gene expression or image features to incorporate their interactions, but rather learns them via the linear model. 
A sparsity-based extension is Sparse CCA (SCCA)~\cite{witten2009penalized}, which makes CCA possible for high-dimensional data with few samples and is particularly suited for gene expression. We explored the use of CCA and SCCA for unbiased discovery of connections between histopathological image features and gene expression of breast cancer tumors. In particular, we extracted cellular features of shape, color, and texture from images using CellProfiler~\cite{Carpenter2006} and a reliable, efficient patch-based approach for nuclear segmentation using convolutional neural networks (CNNs)~\cite{Chidester2018}. Using CCA, we discovered a significant correlation of 0.763 ($p \approx 10^{-14}$) between the texture and shape features of cells and the expression of PAM50 genes, and enabled a separation of patients based on subtypes without leveraging specific subtype information. Using SCCA, we discovered a correlation of 0.471 ($p \approx 7\times10^{-3}$) between a subset of image features and genes. Pathway analysis of the selected subset of genes using DAVID~\cite{Huang2008} revealed a meaningful connection between cell size and several genes related to immune response. Based upon these findings, we propose the use of CCA and its sparse variant as a preliminary discovery tool for imaging-genomic connections. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{images/tcga2} \caption{CCA workflow for imaging-genomics} \label{workflow} \vspace{-1.25em} \end{figure} \section{Imaging-Genomics and CCA} Our overall CCA workflow on paired histopathological images and gene expression of patients is shown in Fig~\ref{workflow}, which consists of nuclear and cellular segmentation and feature extraction, and CCA and SCCA to discover significant connections between images and gene expression. \subsection{Nuclear Segmentation and Feature Extraction} \label{cell_features} In diagnosing breast cancer, the morphology and granularity of nuclei is an important indicator.
Therefore, we employed computational image analysis methods to extract quantitative features describing these qualities of cells and their nuclei. We have developed a reliable and efficient patch-based CNN approach for segmentation~\cite{Chidester2018}, similar to that recently proposed by Janowczyk and Madabhushi~\cite{Janowczyk2016}, which scans an image patch-by-patch and produces a binary label for each patch, indicating if the center pixel of the patch is contained within a nucleus or not. Our patch-based CNN produces a full binary nuclear segmentation mask, which is then fed to CellProfiler~\cite{Carpenter2006}, along with the corresponding H\&E image, to refine the segmentation and extract quantitative features describing the shape, texture, and color of the nuclei and cells. To summarize these features across all of the cells of the image, the mean, standard deviation, and percentiles at increments of 10\% of the distribution of each feature are calculated. This yielded $\sim 2400$ unique statistics of image features, which defined the image feature vector for the corresponding patient. Since analyzing entire whole-slide images (WSIs) is computationally demanding, we manually selected several representative patches from the tumor regions of each WSI and calculated the image feature vectors based on these patches, for each patient.
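The per-patient summarization described above can be sketched as follows (the feature matrix and its columns are illustrative stand-ins, not CellProfiler's actual output names):

```python
import numpy as np

def summarize_patient(cell_features):
    """Collapse an (n_cells, n_features) matrix into one patient vector:
    mean, standard deviation, and percentiles at 10% increments."""
    stats = [cell_features.mean(axis=0), cell_features.std(axis=0)]
    for q in range(10, 100, 10):                # 10th, 20th, ..., 90th
        stats.append(np.percentile(cell_features, q, axis=0))
    return np.concatenate(stats)

# e.g. 500 segmented cells with 3 illustrative features
cells = np.random.default_rng(0).gamma(2.0, 1.0, size=(500, 3))
v = summarize_patient(cells)
print(v.shape)  # (33,) = 3 features x (mean + std + 9 percentiles)
```

With the full CellProfiler feature set this construction scales to the $\sim 2400$ statistics per patient reported in the text.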
Given two sets of variables, $\mathbf{X}$ and $\mathbf{Y}$, attributed to the same $n$ samples, CCA seeks linear combinations of the variables in each domain that are maximally correlated with each other. Formally, CCA seeks $\mathbf{\alpha} \in \mathbb{R}^p$ and $\mathbf{\beta} \in \mathbb{R}^q$ that maximize the objective function $$\max_{\mathbf{\alpha}, \mathbf{\beta}} \mathbf{\alpha}^T \mathbf{X}^T \mathbf{Y}\mathbf{\beta} \ \ \text{such that} \ \ \mathbf{\alpha}^T \mathbf{X}^T \mathbf{X\alpha} = \mathbf{\beta}^T \mathbf{Y}^T \mathbf{Y \beta} = 1,$$ where the columns of $\mathbf{X}$ and $\mathbf{Y}$ are standardized to mean zero and unit variance. The vectors $\mathbf{\alpha}$ and $\mathbf{\beta}$ are referred to as the canonical weights and $\mathbf{X\alpha}$ and $\mathbf{Y\beta}$ are the canonical variates. This process can be repeated to find $k$ dimensions of canonical variates. Similar to principal component analysis, orthogonality constraints are imposed such that corresponding variates are orthogonal to previously found pairs. The correlations of each variable of each domain with its corresponding canonical variate are called the \textit{canonical loadings}. For example, for image feature $f_1 $ and the first variate $\mathbf{Y}\beta_1$, both in $\mathbb{R}^n$, the loading $L(f_1, \mathbf{Y}\beta_1) =~\text{corr}(f_1, \mathbf{Y}\beta_1)$ where $\text{corr}(\cdot)$ is the Pearson correlation. Here, we employ CCA to obtain the canonical weights, and hence the canonical variates, and look to identify the genes and image features of most importance in the variate space. For most of the genomic data used today, $n~\ll~\max(p,q)$, while CCA is only suitable when $n~\geq~\max(p,q)$. Applying CCA to high-dimensional, low-sample data therefore requires selecting a subset of the features in advance, or first mapping the features to a lower dimensional space, limiting the utility of the approach.
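For completeness, classical CCA can be sketched via whitening followed by an SVD of the cross-covariance matrix; the toy data below (a shared latent signal, not the TCGA data) shows the first canonical correlation recovering a cross-modality link:

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-8):
    """Classical CCA via whitening + SVD (assumes n >= max(p, q));
    a tiny ridge `reg` keeps the covariance inverses stable."""
    X = (X - X.mean(0)) / X.std(0)
    Y = (Y - Y.mean(0)) / Y.std(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):  # inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    alpha = Wx @ U[:, :k]        # canonical weights for X
    beta = Wy @ Vt[:k].T         # canonical weights for Y
    return alpha, beta, s[:k]    # s[:k] are the canonical correlations

# toy check: one shared latent signal hidden among noise columns
rng = np.random.default_rng(0)
z = rng.normal(size=(300, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(300, 1)), rng.normal(size=(300, 4))])
Y = np.hstack([-z + 0.1 * rng.normal(size=(300, 1)), rng.normal(size=(300, 3))])
_, _, corr = cca(X, Y)
print(corr[0])  # close to 1: the shared signal is recovered
```

The canonical loadings are then simply the Pearson correlations of each original column with the variates $\mathbf{X\alpha}$ and $\mathbf{Y\beta}$.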
To overcome this issue, many versions of \textit{penalized CCA} have been proposed, which can work for high-dimensional data, while preserving interpretability. We work with the formulation described by Witten \textit{et al.}~\cite{witten2009penalized}. Called \textit{SCCA}, this method optimizes the objective function $$\max_{\mathbf{\alpha}, \mathbf{\beta}} \mathbf{\alpha}^T \mathbf{X}^T \mathbf{Y}\mathbf{\beta},$$ $$\text{such that} \ \| \alpha \|^2 \leq 1, \| \beta \|^2 \leq 1, P_x(\alpha)\leq c_x, P_y(\beta) \leq c_y,$$ where $P_x$ and $P_y$ are convex penalty functions, often chosen to impose sparsity. For our analysis, we chose the $L_1$ penalty function. For multiple variates, the algorithm is iterated. \section{Results and Discussion} \begin{figure*}[t] \vspace{-1.5em} \centering \hspace*{0cm} \begin{subfigure}[c]{0.45\textwidth} \centering \hspace{-0.5cm} \includegraphics[width=\textwidth]{images/heatmap_image_regular} \hspace{-0.5cm}\caption{ } \end{subfigure}% ~ \begin{subfigure}[c]{0.18\textwidth} \centering \includegraphics[trim={7cm 0 0 0},clip,width=\textwidth]{images/heatmap_genes_regular} \caption{ } \end{subfigure}% ~ \begin{subfigure}[c]{0.37\textwidth} \centering \includegraphics[width=\textwidth]{images/patients_cca} \caption{ } \end{subfigure}% \vspace*{0.5em} \begin{subfigure}[t]{0.58\textwidth} \centering \hspace*{-0.5cm} \includegraphics[width=\textwidth]{images/heatmap_image} \caption{ } \end{subfigure}% ~ \begin{subfigure}[t]{0.42\textwidth} \centering \hspace*{-0.5cm} \includegraphics[trim={13.55cm 0 0 0},clip, width=\textwidth]{images/heatmap_genes} \hspace*{-0.5cm} \caption{ } \end{subfigure}% \caption{Canonical loadings of image features (a)(d) and expression of genes (b)(e) based on CCA(top row) and SCCA(bottom row), horizontal axis:~genes/image-features, vertical axis: variate number, (c) shows the mapping of patients onto the 1st variate} \vspace{-1.0em} \label{fig:heatmap} \end{figure*} We applied the overall method on 615 breast 
invasive carcinoma (BRCA) patients from TCGA. Histopathological images for TCGA patients are in the form of whole slides (WSIs), and in order to reduce the computational burden of image analysis and to avoid contamination in the analysis by normal cells near the tumor, we manually selected up to fifteen representative patches of 1000$\times$1000 pixels from each WSI in the tumor region for segmentation and feature extraction. Gene expression was retrieved from TCGA using cBioPortal, which normalized expression levels to z-scores. The analyses are done in R using the default CCA package and the SCCA package provided by Witten \textit{et al.}~\cite{witten2009penalized}. \subsection{Using CCA} To apply CCA to the extracted image features and gene expression of TCGA-BRCA patients, we first had to select a smaller subset of both image features and genes such that $\max(p,q)< n$. Of the image features, we used only the mean and standard deviation of the shape, texture and color features, which resulted in 84 image features per patient. As a meaningful subset of genes to analyze, we chose the PAM50 set of 50 genes, which has been shown to be discriminative of the general grouping of patients into molecular subtypes~\cite{parker2009supervised}. Using CCA on these restricted sets of variables, we found four canonical variates of statistical significance (p-value less than 0.05, computed using Wilks' lambda statistic) with strong correlation ($\left\{0.76, 0.64, 0.61, 0.59 \right\}$ respectively). Beyond the first four variates, the significance of the correlation quickly dropped. To interpret the learned canonical variates, we examined the canonical loadings of each image feature and gene with each variate, which are shown in Fig.~\ref{fig:heatmap}(a)(b). We observe that the first canonical variate is highly correlated with many PAM50 genes, with correlations as high as $0.8$, which implies that this variate is highly representative of PAM50 expression.
The loadings of the image features in Fig.~\ref{fig:heatmap}(a) are grouped by category, which reveals that the strongest correlation for most variates is with several texture features of the hematoxylin stain, area, and shape. The first variate shows a strong positive correlation particularly with texture features describing the entropy and variance of the hematoxylin stain within the nucleus and shape features describing the nucleus. Subsequent variates showed much lower loadings, so while still significantly correlated within their imaging counterpart, the interpretation is not as clear. To further understand the first variate, the $615$ patients are mapped into the corresponding variate space. The scatter plot of the mappings ($\alpha^T\mathbf{X}$, $\beta^T\mathbf{Y}$ on $x$ and $y$ axes, respectively) is shown in Fig.~\ref{fig:heatmap}(c), with the color representing the true subtype. Luminal~A patients are clustered towards the left, and Basal patients to the right, while HER2 and Luminal~B patients are spread out in between. This spread of the subtypes is, interestingly, in accordance with the expected prognosis of the patients. It is also noted that the range of values in the image variate is considerably smaller than those of the genes, suggesting that we should consider a more diverse set of image features. Though CCA could in principle pick any linear combination of features, resulting in an arbitrary ordering of the subtypes, combining information from both modalities produced a meaningful ordering of subtypes: Luminal~A, Luminal~B, HER2 and Basal. Thus, while it is known that the PAM50 gene set is indicative of molecular subtype, CCA was able to identify the particular combination of genes and image features that maps patients onto subtypes, without leveraging particular subtype information.
\begin{figure*}[t] \vspace{-1.0cm} \centering \hspace*{0cm} \vspace{-0.5em} \includegraphics[width=\textwidth]{images/p_value_genes} \vspace{-1.5em} \caption{Plot of variates vs pathways defined by genes with correlation~$>$~0.35 based on SCCA (Intensity of color represents the -$\log $(p-value), the number is the percentage of pathway genes overlapping with $0$ meaning not computed).} \vspace{-1.0em} \label{fig:p_value_genes} \end{figure*} \subsection{Using SCCA} In contrast to CCA, we were able to analyze all image features and genes using SCCA, allowing the algorithm to discover which subset of each is most correlated. Using an L1 penalty factor of 0.1 for both image and genomic variables, we obtained sets of 45-60 genes and 30-45 image features with non-zero weights for each of the ten canonical variates, respectively, with correlations in the range of 0.35-0.47, with an overall p-value of 0.001. To interpret the learned canonical variates of SCCA, we make use of the loadings as before, as shown in Fig.~\ref{fig:heatmap}(d)(e). The category of `cell' indicates that the feature is of the cytoplasmic region surrounding the nucleus, which mostly describe area and shape. All other features are extracted from the nucleus only. Since SCCA can consider all genes and image features, it can reveal novel, unbiased phenotype-genotype associations. We selected genes whose expression levels were highly correlated ($>~0.35$) with the canonical variates discovered by SCCA and investigated their collective function using the online functional annotation tool DAVID~\cite{Huang2008}, which can test for association of gene sets with KEGG pathways. The KEGG pathways significantly associated are shown in Fig~\ref{fig:p_value_genes}. The first variate and others showed a similar correlation pattern with both image features and gene expressions, which is likely a result of the lack of enforcement of orthogonality by SCCA. 
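The alternating soft-thresholding scheme underlying SCCA can be sketched as follows; this is a simplified single-variate version of Witten \textit{et al.}'s penalized matrix decomposition applied to synthetic data, with fixed threshold levels instead of exact $L_1$ budgets:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def scca_single(X, Y, t1, t2, iters=50):
    """Single-pair sparse CCA in the spirit of the penalized matrix
    decomposition: alternating soft-thresholded updates of the weights.
    For brevity, threshold levels t1, t2 are fixed instead of being
    tuned to satisfy exact L1 budgets c_x, c_y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    C = X.T @ Y                    # cross-covariance (up to a 1/n factor)
    beta = np.linalg.svd(C)[2][0]  # warm start: leading singular vector
    alpha = np.zeros(X.shape[1])
    for _ in range(iters):
        alpha = soft(C @ beta, t1)
        alpha /= max(np.linalg.norm(alpha), 1e-12)
        beta = soft(C.T @ alpha, t2)
        beta /= max(np.linalg.norm(beta), 1e-12)
    return alpha, beta

# toy data: one shared signal among pure-noise columns
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 1))
X = np.hstack([z, rng.normal(size=(100, 49))])
Y = np.hstack([z, rng.normal(size=(100, 29))])
alpha, beta = scca_single(X, Y, t1=30.0, t2=30.0)
print(np.count_nonzero(alpha), np.count_nonzero(beta))  # sparse supports
```

Unlike classical CCA, the data matrices are not whitened, which is what makes the method usable when $n \ll \max(p,q)$; it also means successive variates are not exactly orthogonal, consistent with the repeated correlation patterns noted above.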
DAVID revealed that, for the first variate, the highly correlated genes were strongly associated with pathways related to immune response, including primary immunodeficiency, natural killer cell mediated cytotoxicity, and to lymphocytes, including Th1 and Th2 cell differentiation, T-cell and B-cell receptor signaling, and NF-kappa B signaling. Fig.~\ref{fig:heatmap}(d)(e) shows that the expression of these genes has a strong correlation with area and shape features through the latent canonical variates. Given that lymphocytes are easily distinguished by their small size and circular shape, we could hypothesize that these canonical variates are capturing image and genomic descriptions of the presence of lymphocytes within the tumor, which is indeed a biologically relevant association for cancer. Variates five, six, and ten capture texture and cell hematoxylin features, which are indicative of DNA content, and intensity features of both stains. These variates were found to be correlated with gene sets associated with the cell cycle and p53 signaling pathways (related to DNA damage repair and apoptosis), all of which have important implications for tumor development. The second variate too could have implications for cancer, as it was associated with pathways involved in cell processes such as cell maintenance (ECM-receptor interaction), adhesion (focal adhesion), proliferation (Wnt signaling and proteoglycans in cancer), and the cell cycle (PI3K-Akt signaling), though the lack of strong correlation with particular image features necessitates further investigation for a clear interpretation. \section{Conclusions and Future Work} We have demonstrated the utility of CCA and SCCA in discovering connections between cellular features and gene expressions for breast cancer. The learned canonical variates represent latent spaces that link the two modalities and provide insight into their joint variation.
Their biological relevance was shown through their association with diverse pathways with implications for cancer, and could benefit from a more diverse range of image features. We envision that such a correlation analysis could be a preliminary step in studies of phenotypic and genomic traits, with follow-up affirmation and testing by pathologists and biologists, toward new insights into genetic diseases. \bibliographystyle{IEEEbib}
\section{Computational Algorithm} \label{section:algo} \vspace{-0.1in} Our proposed algorithm contains three nested loops: (1) a homotopy optimization scheme, (2) an active set strategy, and (3) minimization within the active coordinates. \vspace{0.05in} \noindent\textbf{(I) Outer Loop}: The homotopy optimization scheme adopts a geometrically decreasing sequence of regularization parameters $\{\lambda_t\}_{t=0}^N$ with a decay ratio $\alpha \in (0, 1)$, i.e., $\lambda_t = \alpha^t \lambda_0$. Specifically, we choose $\lambda_0=\norm{\nabla \cL(0)}_\infty$. By verifying the KKT condition, $$\min_{\xi\in\partial \mathcal{R}_{\lambda_0}(0)}\norm{\nabla\cL(0)+\xi}_{\infty} = 0,$$ we have $\hat{\theta}^{\{0\}}=0$ as the optimum to \eqref{general-loss} for $\lambda=\lambda_0$. For $t\geq 1$, we initialize the solution for the $t$-th outer loop using the output solution $\hat{\theta}^{\{t-1\}}$ of the $(t-1)$-th outer loop. This is also known as the \textit{pathwise optimization scheme}. \vspace{0.05in} \noindent\textbf{(II) Middle Loop}: The active set identification iteratively updates the active set. Specifically, at the $k$-th iteration of the middle loop, we denote the solution by $\theta^{[k]}$, the active set containing the indices of all active coordinates by ${\mbox{$\mathcal{A}$}}_{k}$, and the inactive set by $\Ac_{k} = [d] \backslash {\mbox{$\mathcal{A}$}}_{k}$. For notational simplicity, we omit the outer loop index $t$. We update the solution by \begin{align} \theta^{[k+1]} = \mathop{\rm argmin}_{\theta \in \RR^d}~\mathcal{F}_{\lambda} (\theta) ~~\textrm{subject to}~~\theta_{\Ac_k} = 0. \label{eqn:inner} \end{align} The subroutine for solving \eqref{eqn:inner} will be introduced later (inner loop). We then update the active set by the sequential screening rule. Particularly, we first shrink the active set by removing coordinates with zero values (a.k.a.
the backward step), \begin{align*} {\mbox{$\mathcal{A}$}}_{k+0.5} = {\mbox{$\mathcal{A}$}}_{k}\setminus\{j\in{\mbox{$\mathcal{A}$}}_k|&\theta_j^{[k+1]}=0\},~~\Ac_{k+0.5} = [d] \backslash {\mbox{$\mathcal{A}$}}_{k}; \end{align*} Then we expand the active set by adding inactive coordinates (a.k.a. the forward step), \begin{align*} {\mbox{$\mathcal{A}$}}_{k+1} = {\mbox{$\mathcal{A}$}}_{k}\cup\{j\in\Ac_{k+0.5}|~|\nabla_j\cL(\theta^{[k+1]})|\geq (1-\zeta)\lambda\}, \end{align*} and $\Ac_{k+1} = [d] \backslash {\mbox{$\mathcal{A}$}}_{k+1}$. As can be seen, an inactive coordinate is selected only when the corresponding coordinate gradient of the loss function is sufficiently large in magnitude. The middle loop is terminated when the active set no longer changes. An illustrative example is provided in Figure \ref{active-set}. \begin{figure}[H] \vspace{-0.05in} \centering \includegraphics[width=0.475\textwidth]{Forward_Backward.pdf} \caption{\small An illustration of the active set strategy. The active set is iteratively shrunken and expanded. Such a behavior is very similar to stepwise variable selection in classical statistics.} \label{active-set} \vspace{-0.05in} \end{figure} \begin{remark} As can be seen, our proposed sequential screening rule has an explicit global control of coordinate selection based on the magnitude of the coordinate gradients of the loss function. Compared with the cyclic rule in \cite{friedman2007pathwise}, this prevents overselection of active coordinates. Compared with the greedy rule used in PICASSO, \begin{align*} {\mbox{$\mathcal{A}$}}_{k+1} = {\mbox{$\mathcal{A}$}}_{k}\cup\{j\in\Ac_{k+0.5}|j = \argmax_j|\nabla_j\cL(\theta^{[k+1]})|\}, \end{align*} our proposed sequential screening rule can aggressively select multiple coordinates. \end{remark} \vspace{0.05in} \noindent\textbf{(III) Inner Loop}: Recall that we need to solve the subproblem \eqref{eqn:inner} in each iteration of the middle loop.
Since all coordinates in $\Ac_k$ stay at zero values, for notational simplicity, we rewrite \eqref{eqn:inner} as \begin{align}\label{general-loss-active} \tilde{\beta}^{[k+1]} = \mathop{\rm argmin}_{\beta \in \RR^s}~\tilde{\mathcal{F}}_{\lambda} (\beta), \end{align} where $s = |{\mbox{$\mathcal{A}$}}_k|$, $\tilde{\beta}^{[k+1]} = \theta^{[k+1]}_{{\mbox{$\mathcal{A}$}}_k}$, $\theta^{[k+1]}_{\Ac_k} = 0$, and $\tilde{\mathcal{F}}_{\lambda}(\beta)$ denotes the same function as $\mathcal{F}_{\lambda}(\theta)$ by only considering $\beta$ as the input. Similar notation applies to $\tilde{\cL}(\beta)$ and $\tilde{\ell}_i(\beta)$. We then show how we apply the proximal subsampled Newton algorithm to \eqref{general-loss-active} in the inner loop. Again for notational simplicity, we omit the index of the middle loop. Specifically, at the $m$-th iteration of the inner loop, we take \begin{align}\label{subproblem-quadratic} \beta^{(m+0.5)} = \mathop{\rm argmin}_{\beta \in \RR^s}\cQ(\beta;\beta^{(m)}) + \mathcal{R}_{\lambda} (\beta), \end{align} where $\cQ(\beta;\beta^{(m)})$ denotes the second order Taylor approximation of $\tilde{\cL}(\beta)$ at $\beta^{(m)}$, i.e., \begin{align}\label{quadratic-subsampled} &\cQ(\beta;\beta^{(m)}) = \tilde{\cL}(\beta^{(m)}) + \nabla\tilde{\cL}(\beta^{(m)})^\top(\beta-\beta^{(m)})\notag\\ &\hspace{1in} + \frac{1}{2}\norm{\beta-\beta^{(m)}}_{H^{(m)}}^2. \end{align} Note that $H^{(m)}$ in \eqref{quadratic-subsampled} is the subsampled Hessian matrix, i.e., given $\mathcal{I}_m$ sampled from $[n]$ uniformly at random, \begin{align*} H^{(m)} = \frac{1}{|\mathcal{I}_m|}\sum_{i\in\mathcal{I}_m} \nabla^2\tilde{\ell}_i(\beta^{(m)}). \end{align*} Compared with the exact Hessian matrix, the subsampled Hessian matrix can significantly reduce the computational cost when $n\gg |\mathcal{I}_m|$.
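As a concrete (hypothetical) instance, for the least-squares loss $\tilde{\ell}_i(\beta)=\frac{1}{2}(x_i^\top\beta-y_i)^2$ the per-sample Hessian is $x_ix_i^\top$, so the subsampled Hessian is a batch average of rank-one terms; a minimal sketch with synthetic data:

```python
import numpy as np

def subsampled_hessian(X, batch):
    """Average of per-sample Hessians x_i x_i^T over a uniformly sampled batch."""
    Xb = X[batch]
    return Xb.T @ Xb / len(batch)

rng = np.random.default_rng(0)
n, s = 2000, 4
X = rng.standard_normal((n, s))
batch = rng.choice(n, size=200, replace=False)  # I_m sampled uniformly from [n]
H_sub = subsampled_hessian(X, batch)            # cost scales with |I_m|
H_full = X.T @ X / n                            # cost scales with n
```

With $n\gg|\mathcal{I}_m|$ the subsampled matrix is cheaper by a factor of roughly $n/|\mathcal{I}_m|$, while remaining close to the full Hessian.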
Once we get $\beta^{(m+0.5)}$, we take \begin{align*} \beta^{(m+1)} = \beta^{(m)}+\eta_m(\beta^{(m+0.5)}-\beta^{(m)}), \end{align*} where $\eta_m$ is obtained by backtracking line search to ensure monotone decrease of the objective. \begin{remark} Note that \eqref{subproblem-quadratic} is a regularized quadratic program, which can be efficiently solved by coordinate descent algorithms. See more details in \cite{li2016faster}. \end{remark} \section{Introduction}\label{sec:introduction} In this work, we consider the following linearly constrained optimization problem: \begin{align}\label{eq:original} \min_{x\in\mathbb{R}^N}\; f(x) \quad {\rm s.t.} \quad A x=b \end{align} where $f(x):\mathbb{R}^{N}\to \mathbb{R}$ is a smooth (possibly non-convex) function; $A\in\mathbb{R}^{M\times N}$ is not full column rank; $b\in\mathbb{R}^{M}$ is a known vector. An important application of problem \eqref{eq:original} is non-convex distributed optimization and learning -- a problem that has gained considerable attention recently, and has found applications in training neural networks \cite{Lian17decentralized}, distributed information processing and machine learning \cite{Forero11,hong14nonconvex_admm}, and distributed signal processing \cite{Lorenzo16}. In distributed optimization and learning, the common setup is that a network of $N$ distributed agents collectively optimizes the following problem \begin{align}\label{eq:global:consensus} \min_{v\in\mathbb{R}} \quad \sum_{i=1}^{N} f_i(v) + g(v), \end{align} where $f_i(v): \mathbb{R}\to\mathbb{R}$ is a function local to agent $i$ (for notational simplicity we assume that $v$ is a scalar); $g(v)$ represents some smooth regularization function known to all agents. Below we present two problem formulations based on different topologies and application scenarios. \noindent{\bf Scenario 1: The Global Consensus.} Suppose that all the agents are connected to a single central node.
The distributed agents can communicate with the central controller, but they are not able to directly communicate among themselves. In this case problem \eqref{eq:global:consensus} can be equivalently formulated into the following global consensus problem \cite{BoydADMMsurvey2011,hong14nonconvex_admm} \begin{align}\label{eq:global:consensus:1} \min_{\{x_i\}_{i=0}^{N}}\; \sum_{i=1}^{N}f_i(x_i) + g(x_0), \quad {\rm s.t.} \quad x_i =x_0,\; \forall~i. \end{align} The setting of the above global consensus problem is popular in applications such as parallel computing, in which a central controller orchestrates the activity of all agents; see \cite{icml2015_zhangb15, li2013distributed}. To cast the problem into the form of \eqref{eq:original}, define \begin{align}\label{eq:global:consensus:equiv:0} f(x) & = \sum_{i=1}^{N}f_i(x_i) + g(x_0), \nonumber\\ &A_1 = I_{N}, \; A_2 = 1_N, \; A=[A_1, -A_2], \; b=0, \end{align} where $I_N\in\mathbb{R}^{N\times N}$ is the identity matrix and $1_N\in\mathbb{R}^{N}$ is the all-one vector. \noindent{\bf Scenario 2: Distributed Optimization Over Networks.} Suppose that there is no central controller, and the $N$ agents are connected by a network defined by an {\it undirected} graph ${\mbox{$\mathcal{G}$}}=\{\mathcal{V}, \mathcal{E}\}$, with $|\mathcal{V}|=N$ vertices and $|\mathcal{E}|=E$ edges. Each agent can only communicate with its immediate neighbors, and it can access one component function $f_i$. This problem has wide applications ranging from distributed communication networking \cite{liao15semi-async}, distributed and parallel machine learning \cite{Forero11,mateos_dlasso, shalev14proximaldual}, to distributed signal processing \cite{Schizas08}. Define the node-edge incidence matrix $A\in\mathbb{R}^{E\times N}$ as follows: if $e\in\mathcal{E}$ connects vertices $i$ and $j$ with $i>j$, then $A_{ev}=1$ if $v=i$, $A_{ev}=-1$ if $v=j$, and $A_{ev}=0$ otherwise.
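A small sketch of this construction (following the $i>j$ orientation convention above; the node and edge indices are illustrative):

```python
import numpy as np

def incidence_matrix(num_nodes, edges):
    """Node-edge incidence matrix: row e carries +1 at the larger-indexed
    endpoint and -1 at the smaller one, so A @ x = 0 iff neighbors agree."""
    A = np.zeros((len(edges), num_nodes))
    for e, (u, v) in enumerate(edges):
        i, j = max(u, v), min(u, v)
        A[e, i], A[e, j] = 1.0, -1.0
    return A

# Path graph on 3 nodes; note that A^T A is the (signed) graph Laplacian.
A = incidence_matrix(3, [(0, 1), (1, 2)])
```

For a connected graph, $Ax=0$ holds exactly for the consensus vectors $x = c\,1_N$, which is what makes the reformulation below equivalent to the global consensus problem.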
Introduce $N$ local variables $x=[x_1,\cdots, x_N]^T$. Then as long as the graph $\{{\mbox{$\mathcal{V}$}}, \mathcal{E}\}$ is connected, the following formulation is equivalent to the global consensus problem, and is precisely in the form of problem \eqref{eq:original} \begin{align}\label{eq:global:consensus:equiv} \hspace{-0.5cm}\min_{x\in\mathbb{R}^N} \; f(x):=\sum_{i=1}^{N} \left(f_i(x_i)+\frac{1}{N} g(x_i)\right),\; {\rm s.t.}\; Ax = 0. \end{align} \subsection{The objective of this work} The research question we attempt to address in this work is: \begin{center} \noindent\fcolorbox{black}[rgb]{0.9,0.9,0.9}{\begin{minipage}{5in} \begin{center} {\bf(Q)} ~Can we design primal-dual algorithms capable of computing second-order stationary solutions for \eqref{eq:original}? \end{center} \end{minipage}} \end{center} Let us first analyze the first-order stationary (ss1) and second-order stationary (ss2) solutions for problem \eqref{eq:original}. For a general smooth nonlinear problem in the following form \begin{align}\label{eq:nonlinear} \min_{x\in\mathbb{R}^N}\; g(x) \quad {\rm s.t.} \quad h_i(x)=0, \quad i=1,\cdots, m, \end{align} the first-order necessary condition is given as \begin{align}\label{eq:first_order} \hspace{-0.5cm}\nabla g(x^*) + \sum_{i=1}^{m}\langle \lambda_i^*, \nabla h_i(x^*)\rangle = 0, \quad h_i(x^*) = 0, \; \forall~i. \end{align} The second-order necessary condition is given below [see Proposition 3.1.1 in \cite{bertsekas99}]. Suppose $x^*$ is regular; then \begin{align}\label{eq:necessary} \begin{split} &\langle y, (\nabla^2 g(x^*) +\sum_{i=1}^{m}\lambda^*_i \nabla^2 h_i(x^*))y\rangle \ge 0,\\ &\forall~y\in \{y\ne 0\mid \langle \nabla h_i(x^*), y\rangle =0, \; \forall~i=1,\cdots, m \}.
\end{split} \end{align} Applying the above result to our problem, we obtain the following first- and second-order necessary conditions for problem \eqref{eq:original} \footnote{Note that for linear constraints no further regularity is needed for the existence of multipliers.} \begin{subequations}\label{eq:first:and:second} \begin{align} &\nabla f(x^*) + A^T \lambda^* = 0, \quad A x^* = b. \label{eq:stationarity:ss1}\\ &\langle y, \nabla^2 f(x^*) y\rangle \ge 0,\quad \forall\;\; y \in \{y\mid A y =0\}.\label{eq:stationarity} \end{align} \end{subequations} In other words, the second-order necessary condition is equivalent to the condition that $\nabla^2 f(x^*)$ is positive semi-definite on the null space of $A$. Similarly, the sufficient condition for a {\it strict} local minimizer is given by \begin{align}\label{eq:sufficient:1} &\nabla f(x^*) + A^T \lambda^* = 0, \quad A x^* = b. \\ &\langle y, \nabla^2 f(x^*) y\rangle > 0,\quad \forall\;\; y\ne 0, \; \mbox{and}\; y\in \{y\mid A y =0\}.\nonumber \end{align} To proceed, we need the following claim [see Lemma 3.2.1 in \cite{bertsekas99}] \begin{claim}\label{claim:null:space} Let $P$ and $Q$ be two symmetric matrices. Assume that $Q$ is positive semidefinite and $P$ is positive definite on the null space of $Q$, that is, $x^T P x >0$ for all $x\ne 0$ with $x^T Q x =0$. Then there exists a scalar $\bar{c}$ such that \begin{align}\label{eq:pd:1} P+cQ \succ 0, \quad \forall ~ c\ge \bar{c}. \end{align} Conversely, if there exists a scalar $\bar{c}$ such that \eqref{eq:pd:1} is true, then we have $x^T P x >0$ for all $x\ne 0$ with $x^T Q x =0$. \end{claim} % By Claim \ref{claim:null:space}, the sufficient condition \eqref{eq:sufficient:1} can be equivalently written as: \begin{align}\label{eq:sufficient} &\nabla f(x^*) + A^T \lambda^* = 0, \quad A x^* = b. \\ &\nabla^2 f(x^*) + \gamma A^T A \succ 0, \; \mbox{for~some~}\gamma>0.
\end{align} It is worth mentioning that checking both of the above sufficient and necessary conditions can be done in polynomial time; however, when there are inequality constraints, checking second-order conditions can be NP-hard; see \cite{Murty1987}. In the following we will refer to a solution satisfying condition \eqref{eq:stationarity:ss1} as an ss1 solution, and one satisfying both \eqref{eq:stationarity:ss1} and \eqref{eq:stationarity} as an ss2 solution. According to the above definition, we define a {\it strict saddle} point to be a solution $x^*$ such that \begin{align}\label{eq:strict} \begin{split} &\nabla f(x^*) + A^T \lambda^* = 0, \quad A x^* = b, \\ &\exists~ y\in\{y\mid Ay=0, y\ne 0\}, \; \mbox{and}\; \sigma>0\quad \mbox{such that}\; \langle y, \nabla^2 f(x^*) y\rangle \le -\sigma \|y\|^2. \end{split} \end{align} It is easy to verify using Claim \ref{claim:null:space} that the above condition implies that for the same $\sigma>0$, the following is true \begin{align}\label{eq:strict:2} \begin{split} &\nabla f(x^*) + A^T \lambda^* = 0, \quad A x^* = b, \\ &\sigma_{\min}\left(\gamma A^T A + \nabla^2 f(x^*) \right) \le -\sigma, \quad \forall~\gamma>0 \end{split} \end{align} where $\sigma_{\min}$ denotes the smallest eigenvalue of a matrix. Clearly, if an ss1 solution $x^*$ does not satisfy \eqref{eq:strict}, i.e., \begin{align} \forall~y, \; \mbox{s.t.} \; A y =0, \quad \langle y, \nabla^2 f(x^*) y\rangle \ge 0, \end{align} then \eqref{eq:stationarity} is true. In this work, we will develop primal-dual algorithms that avoid converging to the strict saddles \eqref{eq:strict}. \subsection{Existing literature} Many recent works have focused on designing algorithms with convergence guarantees to local minimum points/ss2 for non-convex unconstrained problems. These include second-order methods such as the trust region method \cite{conn2000trust}, the cubic regularized Newton's method \cite{nesterov2006cubic}, and a hybrid of first-order and second-order methods \cite{sama17}.
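Returning to the polynomial-time checkability noted above: the null-space condition \eqref{eq:stationarity} can be verified numerically by projecting the Hessian onto a basis of ${\rm null}(A)$ and inspecting eigenvalues. A small sketch with illustrative data (the matrices below are toy examples, not from the paper):

```python
import numpy as np

def nullspace_basis(A, tol=1e-10):
    """Orthonormal basis of null(A), read off from the SVD."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span null(A)

def satisfies_ss2(H, A, tol=1e-10):
    """Second-order necessary condition: H is PSD on the null space of A."""
    Z = nullspace_basis(A)
    if Z.shape[1] == 0:
        return True  # trivial null space: condition holds vacuously
    return np.linalg.eigvalsh(Z.T @ H @ Z).min() >= -tol

# Consensus-type constraint x1 = x2: null(A) is spanned by (1, 1).
A = np.array([[1.0, -1.0]])
H_saddle = np.diag([1.0, -3.0])  # curvature -1 along (1, 1): a strict saddle
H_min = np.eye(2)                # PSD everywhere: passes the ss2 test
```
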
When only gradient information is available, it has been shown that with random initialization, gradient descent (GD) converges to ss2 for unconstrained smooth problems with probability one \cite{jlee16jordan}. Recently, a perturbed version of GD which occasionally adds noise to the iterates has been proposed \cite{jin2017jordan}; such a method converges to ss2 at a faster rate than ordinary gradient descent with random initialization. When manifold constraints are present, it is shown in \cite{lee17first_order} that manifold gradient descent converges to ss2, provided that the iterates remain feasible at every iteration (ensured by performing a potentially expensive second-order retraction operation). However, there has been no work analyzing whether classical primal-dual gradient type methods based on Lagrangian relaxation are also capable of computing ss2. The consensus problems \eqref{eq:global:consensus} and \eqref{eq:global:consensus:equiv} have been studied extensively in the literature when the objective functions are all convex; see for example \cite{Nedic09subgradient, nedic2015distributed, shi2014extra, aybat2016primal}. Primal methods such as the distributed subgradient method \cite{Nedic09subgradient} and the EXTRA method \cite{shi2014extra}, as well as primal-dual based methods such as the Alternating Direction Method of Multipliers (ADMM) \cite{BoydADMMsurvey2011,Schizas09,chang14distributed}, have been studied. In contrast, only recently have there been works addressing the more challenging problems without assuming convexity of the $f_i$'s; see recent developments in \cite{bianchi2013convergence, hong14nonconvex_admm,Lorenzo16,hong17icml}. In particular, reference \cite{hong14nonconvex_admm} develops non-convex ADMM based methods (with global sublinear convergence rate) for solving the global consensus problem \eqref{eq:global:consensus:1}.
Reference \cite{hong17icml} proposes a primal-dual based method for unconstrained non-convex distributed optimization over a connected network (without a central controller), and derives the first global convergence rate for distributed non-convex optimization. In \cite{Lorenzo16} the authors utilize a certain gradient tracking idea to solve a constrained nonsmooth distributed problem over possibly time-varying networks. It is worth noting that the distributed algorithms proposed in all these works converge to ss1. There have been no distributed schemes that can provably converge to ss2 for smooth non-convex problems in the form of \eqref{eq:global:consensus}. \section{The Gradient Primal-Dual Algorithm} In this section, we introduce the gradient primal-dual algorithm (GPDA) for solving the non-convex problem \eqref{eq:original}. Let us introduce the augmented Lagrangian (AL) as \begin{align} L(x,\lambda) = f(x)+ \langle \lambda, Ax-b\rangle +\frac{\rho}{2}\|Ax-b\|^2\label{eq:augmented}, \end{align} where $\lambda\in\mathbb{R}^{M}$ is the dual variable. The steps of the GPDA are described in Algorithm 1 below. Each iteration of the GPDA performs a gradient descent step on the AL (with stepsize $1/\beta$), followed by one step of approximate dual gradient ascent (with stepsize $\rho>0$). The GPDA is closely related to the classical Uzawa primal-dual method \cite{UZAWA58}, which has been utilized to solve {\it convex} saddle point problems and linearly constrained {\it convex} problems \cite{Nedic2009saddle}. It is also related to the proximal method of multipliers (Prox-MM) first developed by Rockafellar in \cite{rockafellar1976augmented}, in which a proximal term is added to the augmented Lagrangian in order to make it strongly convex in each iteration. The latter method has also been applied, for example, to solving certain large-scale linear programs; see \cite{wright_proximal}.
However, the theoretical results derived for Prox-MM in \cite{rockafellar1976augmented, wright_proximal} are only developed for convex problems. Further, such an algorithm requires the proximal Lagrangian to be optimized with {\it increasing} accuracy as the algorithm progresses. Finally, we note that both steps \eqref{eq:x:update} and \eqref{eq:mu:update} are decomposable over the variables, and therefore easy to implement in a distributed manner (as will be explained shortly). \begin{center} \vspace{-0.5cm} \fbox{ \begin{minipage}{\columnwidth} \smallskip \centerline{\bf {Algorithm 1. The gradient primal-dual algorithm}} \smallskip At iteration $0$, initialize $\lambda^0 $ and $x^0$. At each iteration $r+1$, update variables by: \begin{subequations} \begin{align} x^{r+1}& =\arg\min\; \langle \nabla f(x^r) + A^T\lambda^r + \rho A^T (A x^r-b), x-x^r\rangle +\frac{\beta}{2}\|x-x^r\|^2\label{eq:x:update}\\ \lambda^{r+1}& = \lambda^r +\rho \left(A x^{r+1} - b \right) \label{eq:mu:update} \end{align} \end{subequations} \end{minipage} } \end{center} \subsection{Application to distributed optimization} To see how the GPDA can be specialized to the problem of distributed optimization over the network \eqref{eq:global:consensus:equiv}, let us begin by writing the optimality condition of \eqref{eq:x:update}. We have \begin{align}\label{eq:optimality} \nabla f(x^r) + A^T \lambda^r + \rho A^T A x^r + \beta (x^{r+1}-x^r) = 0. \end{align} Subtracting from \eqref{eq:optimality} its counterpart at iteration $r$, we obtain \begin{align*} &\nabla f(x^r) - \nabla f(x^{r-1}) + A^T (\lambda^r-\lambda^{r-1}) + \rho A^T A (x^r-x^{r-1}) +\beta w^{r+1} =0, \end{align*} where we have defined $w^{r+1} = (x^{r+1}-x^r) -(x^r-x^{r-1})$.
Rearranging, and using the fact that $A^T A =L_{-}\in\mathbb{R}^{N\times N}$ is the {\it signed Laplacian matrix} and that $b=0$ in \eqref{eq:global:consensus:equiv}, we obtain \begin{align}\label{eq:distributed:iteration} x^{r+1} & = x^r + (x^{r}-x^{r-1}) +\frac{1}{\beta}\big(-\nabla f(x^r)+ \nabla f(x^{r-1}) -\rho L_{-}x^r - \rho L_{-} (x^r-x^{r-1})\big). \end{align} For problem \eqref{eq:global:consensus:equiv} (for simplicity, assume $g\equiv 0$), the above iteration can be implemented in a distributed manner, where each agent $i$ performs \begin{align} x_i^{r+1} & = x_i^r + (x_i^{r}-x_i^{r-1}) +\frac{1}{\beta}\bigg(-\nabla f_i(x_i^r)+ \nabla f_i(x_i^{r-1}) -2\rho \left(d_i x^r_i-\sum_{j\in \mathcal{N}_i} x^r_j\right) + \rho \big(d_ix_i^{r-1} -\sum_{j\in\mathcal{N}_i} x_j^{r-1} \big)\bigg),\nonumber \end{align} where $\mathcal{N}_i:=\{j\mid j\ne i, (i,j)\in\mathcal{E}\}$ is the set of neighbors of node $i$, and $d_i$ is the degree of node $i$. Clearly, to implement this iteration each node only needs to know the information from the past two iterations about its immediate neighbors. \subsection{Convergence to ss1 solutions} We first state our main assumptions. \vspace{-0.2cm} \begin{itemize} \item[A1.] The function $f(x)$ is smooth and has Lipschitz continuous gradient, as well as Lipschitz continuous Hessian: \begin{align} \hspace{-0.9cm}\|\nabla f(x) - \nabla f(y)\|&\le L\|x-y\|, \; \forall~x,y\in\mathbb{R}^N\; \label{eq:Lip}\\ \hspace{-0.9cm}\|\nabla^2 f(x) - \nabla^2 f(y)\|&\le M \|x-y\|, \; \forall~x,y\in \mathbb{R}^N\; \label{eq:Lip:Hessian}. \vspace{-0.3cm} \end{align} \vspace{-0.4cm} \item [A2.] The function $f(x)$ is lower bounded over $x\in\mathbb{R}^N$. Without loss of generality, assume that $f(x)\ge 0$. \vspace{-0.2cm} \item [A3.] The constraint $Ax = b$ is feasible over $x\in\mathbb{R}^N$. Further, $A^T A$ is {not} full rank. \vspace{-0.2cm} \item [A4.] The function $f(x)+\frac{\rho}{2}\|Ax-b\|^2$ is coercive. \vspace{-0.2cm} \item [A5.]
The function $f$ is proper and satisfies the Kurdyka-{\L}ojasiewicz (K{\L}) property at $\hat{x}\in\mathbb{R}^N$: there exist $\eta \in (0,\;\infty]$, a neighborhood $V$ of $\hat{x}$ and a continuous concave function $\phi: [0, \; \eta)\to \mathbb{R}_{+}$ such that: 1) $\phi(0) = 0$ and $\phi$ is continuously differentiable on $(0,\; \eta)$ with positive derivatives; 2) for all $x\in\mathbb{R}^N$ satisfying $f(\hat{x})<f(x)<f(\hat{x})+\eta$, it holds that \begin{align} \phi'(f(x)-f(\hat{x}))\mbox{dist}(0,\partial f(x))\ge 1, \end{align} where $\partial f(x)$ is the limiting subdifferential defined as \begin{align*} &\partial f(x) = \bigg\{v\in\mathbb{R}^N: \exists x^t\to x, v^t\to v, ~\mbox{with}~\liminf_{z\to x^t}\frac{f(z)-f(x^t)-\langle v^t,z-x^t\rangle}{\|z-x^t\|}\ge 0, \forall~t\bigg\}. \end{align*} \vspace{-0.2cm} \end{itemize} \vspace{-0.2cm} We comment that a wide class of functions enjoys the K{\L} property; for example, every semi-algebraic function is a K{\L} function. For detailed discussions of the K{\L} property we refer the readers to \cite{Bolte14, Li14splitting}. Below we will use $\sigma_{i}(\cdot)$, $\sigma_{\max}(\cdot)$, $\sigma_{\min}(\cdot)$ and $\tilde{\sigma}_{\min}(\cdot)$ to denote the $i$th, the maximum, the minimum, and the smallest {\it non-zero} eigenvalues of a matrix, respectively. The convergence of the GPDA to ss1 solutions is similar to Theorem 3.1 and Corollary 4.1 in \cite{hong16decomposing}. Algorithmically, the main difference is that the algorithms analyzed in \cite{hong16decomposing} do not linearize the penalty term $\frac{\rho}{2}\|A x-b\|^2$, and they make use of the same penalty and proximal parameters, that is, $\rho=\beta$. In this work, in order to show convergence to ss2, we need the freedom of tuning $\beta$ while fixing $\rho$, so $\beta$ and $\rho$ have to be chosen differently. However, in terms of analysis, there is no major difference between these versions.
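As a concrete illustration of Algorithm 1, the updates \eqref{eq:x:update}--\eqref{eq:mu:update} admit the closed form below. This is a simplified sketch on a toy consensus instance; the stepsizes are illustrative and not tied to the constants in \eqref{eq:first:order:condition}:

```python
import numpy as np

def gpda(grad_f, A, b, x0, lam0, rho=1.0, beta=10.0, iters=500):
    """Gradient primal-dual iterations: a linearized primal step on the
    augmented Lagrangian (stepsize 1/beta) followed by dual ascent (stepsize rho)."""
    x, lam = x0.astype(float), lam0.astype(float)
    for _ in range(iters):
        g = grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)
        x = x - g / beta                  # closed form of the x-update
        lam = lam + rho * (A @ x - b)     # dual gradient ascent
    return x, lam

# Toy consensus: f(x) = (x1 - 1)^2 + (x2 - 3)^2 subject to x1 = x2.
A, b = np.array([[1.0, -1.0]]), np.zeros(1)
grad_f = lambda x: 2.0 * (x - np.array([1.0, 3.0]))
x, lam = gpda(grad_f, A, b, np.zeros(2), np.zeros(1))
# The iterates approach the constrained minimizer x = (2, 2).
```
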
For completeness, we only outline the key proof steps in the Appendix. \begin{claim}\label{claim:convergence} Suppose Assumptions [A1] -- [A5] are satisfied. For appropriate choices of $\rho$ and $\beta$ satisfying \eqref{eq:first:order:condition} given in the appendix, and starting from any feasible point $(x^0,\lambda^0)$, the GPDA converges to the set of ss1 solutions. Further, if $L(x^{r},\lambda^{r})$ is a K{\L} function, then $(x^{r+1},\lambda^{r+1})$ converges globally to a unique point $(x^{*},\lambda^{*})$. \end{claim} \subsection{Convergence to ss2} Claim \ref{claim:convergence} can be viewed as a variation of known results. In contrast, in this section we present one of the main contributions of this work, by demonstrating that the GPDA converges to solutions beyond ss1. To this end, first let us rewrite the $x$ update step using its first-order optimality condition as follows \begin{align*} x^{r+1} = x^r - \frac{1}{\beta}\left(\nabla f(x^r)+ A^T \lambda^r + \rho A^T (A x^r-b)\right). \end{align*} Therefore the iteration can be written as \begin{align*} &\begin{bmatrix} x^{r+1}\\ \lambda^{r+1} \end{bmatrix} = \begin{bmatrix} x^{r} - \frac{1}{\beta}\left(\nabla f(x^r)+ A^T \lambda^r + \rho A^T (A x^r-b)\right)\\ \lambda^{r} + \rho (A x^{r+1}-b) \end{bmatrix}\nonumber\\ &= \begin{bmatrix} x^{r} - \frac{1}{\beta}\left(\nabla f(x^r)+ A^T \lambda^r + \rho A^T (A x^r-b)\right)\\ \lambda^{r} + \rho \left(A \left(x^{r} - \frac{1}{\beta}\left(\nabla f(x^r)+ A^T \lambda^r + \rho A^T (A x^r-b)\right)\right) - b \right).
\end{bmatrix} \end{align*} The above iteration can be written compactly as \begin{align}\label{eq:recursion} &\begin{bmatrix} I_N & 0_{N\times M}\\ -\rho A & I_M \end{bmatrix} \begin{bmatrix} x^{r+1}\\ \lambda^{r+1} \end{bmatrix} \nonumber\\ & = \begin{bmatrix} x^{r} - \frac{1}{\beta}\left(\nabla f(x^r)+ A^T \lambda^r + \rho A^T (A x^r-b)\right)\\ \lambda^{r}-\rho b \end{bmatrix}, \end{align} where $I_N$ denotes the $N$-by-$N$ identity matrix and $0_{N\times M}$ denotes the $N$-by-$M$ all zero matrix. Next let us consider approximating $\nabla f(x)$ near a first-order stationary solution $x^*$. Let us define $$H:=\nabla^2 f(x^*), \quad d^{r+1} := -x^*+x^{r+1}.$$ Claim \ref{claim:convergence} implies that when $\rho,\beta$ are chosen appropriately, then $d^{r+1}\to 0$. Therefore for any given $\xi>0$ there exists an iteration index $R(\xi)>0$ such that the following holds \begin{align}\label{eq:d:bounded} \|d^{r+1}\|\le \xi, \quad \forall~r\ge R(\xi). \end{align} Next let us approximate the gradients around $\nabla f(x^*)$: \begin{align}\label{eq:f:app} &\nabla f(x^{r+1}) =\nabla f(x^{*}+ d^{r+1}) \nonumber\\ &=\nabla f(x^*) +\int^{1}_{0} \nabla^2 f(x^*+t d^{r+1})d^{r+1} dt \nonumber\\ & = \nabla f(x^*) +\int^{1}_{0} (\nabla^2 f(x^*+t d^{r+1}) - H)d^{r+1} dt + H d^{r+1}\nonumber\\ &:= \nabla f(x^*) +\Delta^{r+1}d^{r+1} + H d^{r+1}, \end{align} where in the last equality we have defined \begin{align}\label{eq:Delta} \Delta^{r+1} := \int^{1}_{0} \left(\nabla^2 f(x^*+t d^{r+1}) - H\right) dt. \end{align} From Assumption [A1] and \eqref{eq:d:bounded} we have \begin{align*} \|\Delta^{r+1}\|\le M \|d^{r+1}\| \le M \xi, \quad \forall~r\ge R(\xi). \end{align*} Therefore we have \begin{align}\label{eq:delta:0} \lim_{r\to\infty}\|\Delta^{r+1}\| = 0.
\end{align} Using the approximation \eqref{eq:f:app}, we obtain \begin{align}\label{eq:diff:gradf} \nabla f(x^{r}) = \nabla f(x^*) +\Delta^{r}d^{r} + H d^{r}. \end{align} Plugging \eqref{eq:diff:gradf} into \eqref{eq:recursion}, the iteration \eqref{eq:recursion} can be written as {\begin{align}\label{eq:x:compact:2} & \begin{bmatrix} x^{r+1}\\ \lambda^{r+1} \end{bmatrix} = \begin{bmatrix} I_N & 0_{N\times M}\\ \rho A & I_M \end{bmatrix}\begin{bmatrix} I_N - \frac{1}{\beta}\left(H + \rho A^T A \right)& -\frac{1}{\beta} A^T\\ 0_{M\times N} & I_M \end{bmatrix}\begin{bmatrix} x^{r}\\ \lambda^{r} \end{bmatrix}\nonumber\\ &+\begin{bmatrix} I_N & 0_{N\times M}\\ \rho A & I_M \end{bmatrix} \begin{bmatrix} -\frac{1}{\beta}\left(\nabla f(x^*) + \Delta^r d^r -H x^* - \rho A^T b\right)\\ -\rho b \end{bmatrix} \end{align}} Then the above iteration can be compactly written as \begin{align}\label{eq:system} z^{r+1} = Q^{-1}T z^r + Q^{-1}c^r \end{align} for some appropriately defined vectors $z^{r+1}, z^r, c^r$ and matrices $Q, T$, which are given below \begin{align}\label{eq:perturb} T&:= \begin{bmatrix} I_N- \frac{1}{\beta}\left(H + \rho A^T A \right)& -\frac{1}{\beta} A^T \\ 0_{M\times N} & I_M \end{bmatrix}\in\mathbb{R}^{(N+M)\times (N+M)}\nonumber\\ Q &: =\begin{bmatrix} I_N & 0_{N\times M}\\ -\rho A & I_M \end{bmatrix}\in\mathbb{R}^{(N+M)\times (N+M)} \\ c^r &:= \begin{bmatrix} -\frac{1}{\beta}\left(\nabla f(x^*) + \Delta^r d^r -H x^* - \rho A^T b\right)\\ -\rho b \end{bmatrix}, \; z := \begin{bmatrix} x\\ \lambda \end{bmatrix} \end{align} It is clear that $c^r$ is a bounded sequence. As a direct result of Claim \ref{claim:convergence}, we can show that every fixed point of the above iteration is an ss1 solution for problem \eqref{eq:original}. \begin{corollary}\label{cor:fixed:point} Suppose that Assumptions [A1]--[A5] are satisfied, and the parameters are chosen according to \eqref{eq:first:order:condition}. Then every fixed point of the mapping $g(z)$ defined below is a first-order stationary solution for problem \eqref{eq:original}.
\begin{align}\label{eq:g} &g(z):= g([z_1, z_2]) = \begin{bmatrix} I_N & 0_{N\times M}\\ \rho A & I_M \end{bmatrix} \begin{bmatrix} z_1 - \frac{1}{\beta}\left(\nabla f(z_1)+ A^T z_2 + \rho A^T (A z_1 - b)\right)\\ z_2-\rho b \end{bmatrix} \nonumber. \end{align} \end{corollary} To proceed, we analyze the dynamics of the system \eqref{eq:system}. The following claim is a key result that characterizes the eigenvalues of the matrix $Q^{-1}T$. We refer the readers to the appendix for the detailed proof. \begin{claim}\label{claim:sigma:T} Suppose Assumptions [A1] -- [A5] hold, and that \begin{align*} \beta > \sigma_{\max} (H + \rho A^T A). \end{align*} Let $(x^*, \lambda^*)$ be an ss1 solution satisfying \eqref{eq:first_order}, and suppose that $x^*$ is a strict saddle \eqref{eq:strict}. Let $\sigma_i(Q^{-1}T)$ be the $i$th eigenvalue of the matrix $Q^{-1}T$. Then $Q^{-1}T$ is invertible, and there exists a real scalar $\delta^*>0$ which is independent of the iteration index $r$, such that the following holds: \begin{align*} \exists~i\in [N], \;\; \mbox{\rm s.t.}\;\; \sigma_i(Q^{-1}T) = 1+ \delta^*. \end{align*} \end{claim} % % \begin{theorem}\label{thm:main} Suppose that Assumptions [A1]--[A5] hold true, and that the following parameters are chosen \begin{align} \beta >\sigma_{\max}(\rho A^T A) + L, \quad \mbox{and}\quad \beta, \rho~\mbox{satisfy}~\eqref{eq:first:order:condition}. \end{align} Suppose that $(x^{0}, \lambda^0)$ are initialized randomly. Then with probability one, the iterates $\{(x^{r+1},\lambda^{r+1})\}$ generated by the GPDA converge to an ss2 solution \eqref{eq:first:and:second}. \end{theorem} \vspace{-0.3cm} \noindent{\bf Proof.} We utilize the stable manifold theorem \cite{Shub87,lee16}. We will verify the conditions given in Theorem 7 of \cite{lee16} to show that the system \eqref{eq:system} is not stable around strict saddle points. \noindent{\bf Step 1.} We will show that the mapping $g(z)$ defined in \eqref{eq:g} is a diffeomorphism.
First, suppose there exist $w_1 =(x_1, y_1)$ and $w_2 = (x_2,y_2)$ such that $g(w_1) = g(w_2)$. Using the definition of $g$, and the fact that the matrix $[I\; 0; -\rho A \; I]$ is invertible, we obtain $y_1 = y_2$. Using these facts, we obtain \begin{align*} \hspace{-0.3cm}- x_1+ \frac{1}{\beta}(\rho A^T A x_1 + \nabla f (x_1)) = - x_2+ \frac{1}{\beta}(\rho A^T A x_2 + \nabla f (x_2)). \end{align*} Then we have \begin{align*} (x_1-x_2) = \frac{1}{\beta}\left(\nabla f(x_1)-\nabla f(x_2)\right) + \frac{\rho}{\beta} A^T A (x_1-x_2). \end{align*} This implies that \begin{align} \|x_1-x_2\|\le \left(\frac{L}{\beta} +\frac{\rho}{\beta}\sigma_{\max}(A^T A)\right)\|x_1-x_2\|\nonumber. \end{align} Suppose that the following is true \begin{align} \beta > \sigma_{\max}(\rho A^T A) + L.\label{eq:beta} \end{align} Then we have $x_1=x_2$, implying $y_1=y_2$. Hence the mapping $g$ is injective. To show that the mapping is surjective, observe that for a given tuple $(x^{r+1}, \lambda^{r+1})$, the iterate $x^{r}$ satisfies \begin{align*} \ell(x^{r+1},\lambda^{r+1}) = - x^{r}+ \frac{1}{\beta}(\rho A^T A x^{r} + \nabla f (x^{r})), \end{align*} where $\ell(x^{r+1},\lambda^{r+1})$ is some function of $(\lambda^{r+1}, x^{r+1})$. It is clear that $x^{r}$ is the unique solution to the following strongly convex problem [with $\beta$ satisfying \eqref{eq:beta}] \begin{align*} x^{r} = \arg\min_{x}\frac{1}{2}\|x - \ell(x^{r+1},\lambda^{r+1})\|^2 - \frac{1}{\beta}\left( f(x) + \frac{\rho}{2}\| A x\|^2\right). \end{align*} Additionally, using the definition of the mapping $g$ in \eqref{eq:g}, the Jacobian matrix of the mapping $g$ is given by \begin{align} D g(z)&= \begin{bmatrix} I_N & 0_{N\times M}\\ \rho A & I_M \end{bmatrix}\begin{bmatrix} I - \frac{1}{\beta}\left(H + \rho A^T A \right)& -\frac{1}{\beta} A^T\\ 0_{M\times N} & I_M \end{bmatrix} \nonumber\\ &= Q^{-1}T.
\end{align} It has been shown in Claim \ref{claim:sigma:T} that as long as the following is true \begin{align} \beta> {L} + {\rho}\sigma_{\max}(A^T A) \end{align} the Jacobian matrix $D g(z)$ is invertible. By applying the inverse function theorem, $g^{-1}$ is continuously differentiable. \noindent{\bf Step 2.} We show that at a strict saddle point $x^*$, for the Jacobian matrix $Dg(z^*)$ evaluated at $z^*=(x^*,\lambda^*)$, the span of the eigenvectors corresponding to the eigenvalues of magnitude less than or equal to 1 is not the full space. This follows directly from Claim \ref{claim:sigma:T}, since $Dg(z^*) = Q^{-1} T$ has one eigenvalue that is strictly greater than 1. \noindent{\bf Step 3.} Combining the previous two steps, and utilizing Theorem 7 of \cite{lee16}, we conclude that with random initialization, the GPDA converges to the second-order stationary solutions with probability one. \hfill{\bf Q.E.D.}\smallskip \section{The Gradient ADMM Algorithm} In this section, we extend the argument in the previous section to an algorithm belonging to the class of methods called the alternating direction method of multipliers (ADMM). Although the main idea of the analysis extends that of the previous section, the presence of {\it two} blocks of primal variables instead of one significantly complicates the analysis. Consider the following problem \begin{align}\label{eq:original:block} \min\; f(x) + g(y) \quad \mbox{s.t.}\quad Ax+By = b \end{align} where $x\in\mathbb{R}^{N_1}$, $y\in\mathbb{R}^{N_2}$ and $N_1 + N_2 = N$; $b\in\mathbb{R}^{M}$. Clearly the global consensus problem \eqref{eq:global:consensus:1} can be formulated into the above two-block problem, with the following identification: $x:=\{x_1,\cdots, x_N\}$, $y:=x_0$, $f(x):=\sum_{i=1}^{N}f_i(x_i)$, $g(y):=g(x_0)$, $A=I_N$, $B=-1_N$, $b=0$. For this problem, the first- and second-order necessary conditions are given by [cf.
\eqref{eq:first:and:second}] \begin{align} \hspace{-0.4cm} &\nabla f(x^*)+ (\lambda^*)^T A =0,\quad \nabla g(y^*)+ (\lambda^*)^T B =0, \label{eq:second-order-admm}\\ &z^T \begin{bmatrix} \nabla^2 f(x^*) & \hspace{-0.2cm}0\\ 0 & \hspace{-0.2cm}\nabla^2 g(y^*) \end{bmatrix} z\ge 0, \forall z\in \left\{z \mid \begin{bmatrix} A^T A & \hspace{-0.2cm}A^T B \\ B^T A & \hspace{-0.2cm}B^T B \end{bmatrix} z =0\right\}.\nonumber \end{align} As before, we will refer to solutions satisfying the first line as ss1 solutions, and those that satisfy both as ss2 solutions. Therefore, a strict saddle point is defined as a point $(x^*,y^*,\lambda^*)$ that satisfies the following conditions \begin{align} &\nabla f(x^*)+ (\lambda^*)^T A =0,\quad \nabla g(y^*)+ (\lambda^*)^T B =0,\nonumber\\ &z^T \begin{bmatrix} \nabla^2 f(x^*) & 0\\ 0 & \nabla^2 g(y^*) \end{bmatrix} z\le -\sigma\|z\|^2,\; \mbox{for some}\; \sigma>0, \; z \; \mbox{satisfying} \; \begin{bmatrix} A^T A & A^T B \\ B^T A & B^T B \end{bmatrix} z = 0.\label{eq:strict:admm} \end{align} Define the AL function as \begin{align*} \hspace{-0.3cm}L(x,y;\lambda) = f(x)+g(y) +\langle \lambda, Ax+By-b\rangle +\frac{\rho}{2}\|Ax +By-b\|^2. \end{align*} The gradient ADMM (G-ADMM) algorithm that we propose is given below. \begin{center} \vspace{-0.2cm} \fbox{ \begin{minipage}{\columnwidth} \smallskip \centerline{\bf {Algorithm 2. The gradient ADMM}} \smallskip At iteration $0$, initialize $x^0$, $y^0$ and $\lambda^0$. At each iteration $r+1$, update variables by: \begin{subequations} \begin{align} x^{r+1}& =\arg\min_{x}\; \langle \nabla f(x^r) + A^T \lambda^r + \rho A^T (A x^r + By^r-b), x-x^r\rangle +\frac{\beta}{2}\|x-x^r\|^2\label{eq:x:update:admm}\\ y^{r+1}& =\arg\min_{y}\; \langle \nabla g(y^r) + B^T \lambda^r + \rho B^T (A x^{r+1} + By^r-b), y-y^r\rangle +\frac{\beta}{2}\|y-y^r\|^2 \label{eq:y:update:admm}\\ \lambda^{r+1}& = \lambda^r +\rho \left(A x^{r+1} + By^{r+1} - b \right) \label{eq:mu:update:admm}.
\end{align} \end{subequations} \end{minipage} } \end{center} We note that in the G-ADMM, the $x$ and $y$ steps perform gradient steps to optimize the AL, instead of performing the exact minimization as the original convex version of ADMM does \cite{BoydADMMsurvey2011,EcksteinBertsekas1992}. The reason is that the direct minimization may not be possible because the non-convexity of $f$ and $g$ makes the subproblem of minimizing the AL w.r.t. $x$ and $y$ also non-convex. Note that gradient steps have been used in the primal updates of ADMM when dealing with convex problems, see \cite{gao14}, but those analyses do not extend to the non-convex setting. It is also worth noting that the key difference between Algorithms 2 and 1 is that, in the $y$ update step \eqref{eq:y:update:admm} of Algorithm 2, the newly updated $x^{r+1}$ is used. If in this step $x^{r}$ is used instead of $x^{r+1}$, then Algorithm 2 is equivalent to Algorithm 1. Also there are quite a few recent works applying ADMM-type methods to solve a number of non-convex problems; see, e.g., \cite{li14nonconvex, wang15nonconvexadmm, Goncalves17} and the references therein. However, to the best of our knowledge, these algorithms do not take exactly the same form as Algorithm 2 described above, despite the fact that their analyses all appear to be quite similar (i.e., some potential function based on the AL is shown to be descending at each iteration of the algorithm). In particular, in \cite{Goncalves17}, both the $x$ and $y$ subproblems are solved using a proximal point method; in \cite{jiang16admm}, the $x$-step is solved using a gradient step, while the $y$-step is solved using the conventional exact minimization. Of course, none of these works analyzed the convergence of these methods to ss2 solutions. \subsection{Application in global consensus problem} We discuss how Algorithm 2 can be applied to solve the global consensus problem \eqref{eq:global:consensus:1}.
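As a concrete numerical illustration of the three updates above, the following sketch runs the G-ADMM iteration on a tiny convex instance; the quadratic choices of $f$ and $g$, the matrices $A,B,b$, and all parameter values are our own illustrative assumptions (chosen so that $\beta$ exceeds $L+\rho\sigma_{\max}(A^TA)$), not data from the paper:

```python
import numpy as np

# Illustrative sketch of the gradient ADMM (G-ADMM) updates for
#   min f(x) + g(y)  s.t.  A x + B y = b,
# here with f(x) = 0.5||x - a||^2 and g(y) = 0.5||y - c||^2 (assumed for illustration).
a, c = np.array([1.0, 3.0]), np.array([-1.0, 1.0])
A, B, b = np.eye(2), -np.eye(2), np.zeros(2)  # constraint reads x = y
rho, beta = 1.0, 5.0                          # beta > L + rho*sigma_max(A^T A) = 2

x, y, lam = np.zeros(2), np.zeros(2), np.zeros(2)
for _ in range(5000):
    # x-step: gradient step on the augmented Lagrangian at (x^r, y^r, lam^r)
    x = x - (1.0 / beta) * ((x - a) + A.T @ lam + rho * A.T @ (A @ x + B @ y - b))
    # y-step: uses the *new* x -- the key difference to Algorithm 1
    y = y - (1.0 / beta) * ((y - c) + B.T @ lam + rho * B.T @ (A @ x + B @ y - b))
    # dual ascent on the constraint residual
    lam = lam + rho * (A @ x + B @ y - b)

print(np.round(x, 4), np.round(y, 4))  # both approach (a + c) / 2 = (0, 2)
```

For this strongly convex instance the minimizer subject to $x=y$ is $x^*=y^*=(a+c)/2$, which the iterates approach linearly.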
For this problem, the distributed nodes and the master node alternate between their updates: \begin{subequations} \begin{align*} x^{r+1}_i & = \arg\min_{x_i} \; \left\langle \nabla f_i(x^r_i) + \lambda^r_i + \rho (x^r_i - x^r_0), x_i-x^r_i \right\rangle +\frac{\beta}{2}\|x_i-x^r_i\|^2, \; \forall~i\\ x^{r+1}_0 & = \arg\min_{x_0}\; \langle \nabla g(x^r_0) - \sum_{i=1}^{N}(\lambda^r_i + \rho (x^{r+1}_i - x^r_0)), x_0-x^r_0\rangle +\frac{\beta}{2}\|x_0-x^r_0\|^2. \end{align*} \end{subequations} Clearly, for fixed $x_0$, the distributed nodes are able to perform their computation completely in parallel. \subsection{Convergence to first-order stationary solutions} First we make the following assumptions. \begin{itemize} \vspace{-0.3cm} \item[B1.] The functions $f(x)$ and $g(y)$ are smooth and both have Lipschitz continuous gradients and Hessians, with constants $L_f$, $L_g$, $M_f$ and $M_g$. \vspace{-0.2cm} \item [B2.] $f(x)$ and $g(y)$ are lower bounded over $\mathbb{R}^N$. Without loss of generality, assume $f(x)\ge 0, g(y)\ge 0$. \vspace{-0.2cm} \item [B3.] $Ax +By = b$ is feasible over $x\in \mbox{dom\,}(f)$ and $y\in \mbox{dom\,}(g)$; the matrix $[A\; B]\in\mathbb{R}^{M\times N}$ is {\it not} full rank. \vspace{-0.2cm} \item [B4.] $f(x) + g(y)+\frac{\rho}{2}\|Ax +By-b\|^2$ is a coercive function. \vspace{-0.2cm} \item [B5.] $f(x)+g(y)$ is a K{\L} function as given in [A5]. \end{itemize} \vspace{-0.3cm} Based on the above assumptions, the convergence of Algorithm 2 to the ss1 solutions can be shown following a similar line of argument as in \cite{li14nonconvex, wang15nonconvexadmm, Goncalves17,jiang16admm}. However, since the exact form of this algorithm has not appeared before, for completeness we provide the proof outline in the appendix. \begin{claim}\label{claim:convergence:admm} Suppose Assumptions [B1] -- [B5] are satisfied.
For appropriate choices of $\beta, \rho$ [see \eqref{eq:condition:admm} in the Appendix for the precise expression], and starting from any point $(x^0, y^0, \lambda^0)$, Algorithm 2 converges to the set of ss1 points. Further, if $L(x^{r+1}, y^{r+1},\lambda^{r+1})$ is a K{\L} function, then Algorithm 2 converges globally to a unique point $(x^{*}, y^*, \lambda^{*})$. \end{claim} \subsection{Convergence to ss2 solutions} The optimality conditions for the $(x,y)$ updates are given by \begin{align*} &\nabla f(x^r) + A^T \lambda^r + \rho A^T (A x^r+ B y^r-b) + \beta (x^{r+1}-x^r)=0\nonumber\\ &\nabla g(y^r) + B^T \lambda^r + \rho B^T (A x^{r+1}+ B y^r-b) + \beta (y^{r+1}-y^r)=0\nonumber. \end{align*} These conditions combined with the update rule of the dual variable give the following compact form of the algorithm \begin{align*} \hspace{-0.5cm}\begin{bmatrix} x^{r+1}\\y^{r+1}\\\lambda^{r+1} \end{bmatrix} = \begin{bmatrix} x^r-\frac{1}{\beta}\left(\nabla f(x^r) + A^T \lambda^r + \rho A^T (A x^r+ By^r -b)\right)\\ y^r-\frac{1}{\beta}\left(\nabla g(y^r) + B^T \lambda^r + \rho B^T (A x^{r+1}+ By^r -b)\right)\\ \lambda^r + \rho\left(A x^{r+1} + B y^{r+1} -b\right) \end{bmatrix}. \end{align*} To compactly write the iterations in the form of a linear dynamical system, define \begin{align*} z^{r+1}: = [x^{r+1}; y^{r+1}; \lambda^{r+1}]\in\mathbb{R}^{2N+M}. \end{align*} (For notational simplicity we assume $N_1=N_2=N$ in this section.) Next we approximate the iteration around a stationary solution $(x^*, y^*, \lambda^*)$. Suppose that $\nabla^2 f(x^*) = H$ and $\nabla^2 g(y^*) = G$.
Then, similarly to the derivation of \eqref{eq:x:compact:2}, we can write \begin{align*} P z^{r+1} = (T+E^{r}) z^r + d^r \end{align*} where we have defined \begin{subequations} \begin{align} \hspace{-0.5cm}P &:= \begin{bmatrix} I_N & 0 & 0 \\\frac{\rho}{\beta} B^T A & I_N & 0 \\ -\rho A & -\rho B & I_M \end{bmatrix}, \; E^r := -\frac{1}{\beta}\begin{bmatrix} \Delta^{r}_{H} & 0 & 0\\ 0 & \Delta^{r}_G & 0\\ 0 & 0 & 0 \end{bmatrix}\\ \hspace{-0.5cm} d^r &:= \begin{bmatrix} \frac{\rho}{\beta} A^T b -\frac{1}{\beta}\left(\nabla f(x^*) - \Delta^r_H x^* - H x^*\right)\\ \frac{\rho}{\beta} B^T b -\frac{1}{\beta}\left(\nabla g(y^*) - \Delta^r_G y^*- G y^*\right)\\ -\rho b \end{bmatrix} \label{eq:M}\\ \hspace{-0.5cm}T&:=\begin{bmatrix} I_N -\frac{1}{\beta}H-\frac{\rho}{\beta}A^T A & \hspace{-0.5cm}-\frac{\rho}{\beta} A^T B & \hspace{-0.3cm} -\frac{1}{\beta} A^T \\ 0 & \hspace{-0.5cm} I_N-\frac{1}{\beta} G -\frac{\rho}{\beta}B^T B & \hspace{-0.2cm}-\frac{1}{\beta}B^T\\ 0 & \hspace{-0.3cm} 0 & \hspace{-0.2cm} I_M \end{bmatrix}\label{eq:T} \end{align} \end{subequations} with \begin{align*} &\Delta_H^{r} := \int^{1}_{0} (\nabla^2 f(x^*+t d_x^{r}) - H)\, dt,\\ &\Delta_G^{r} := \int^{1}_{0} (\nabla^2 g(y^*+t d_y^{r}) - G)\, dt,\\ &\mbox{where}\quad d_{x}^{r} := x^{r}-x^*, \quad d_{y}^{r} := y^{r}-y^*. \end{align*} By noting that $P$ is an invertible matrix, we conclude that the new iterate $z^{r+1}$ can be expressed as \begin{align} z^{r+1} = P^{-1}(T+ E^{r}) z^r + P^{-1}d^r\label{eq:z:iteration}. \end{align} Now in order to analyze the stability at a point $(x^*,y^*)$, similarly as before we need to analyze the eigenvalues of the matrix $P^{-1}T$ at a stationary solution. We note that $P$ is a lower triangular matrix with unit diagonal, so $\det P=1$. This implies that $\det(P^{-1} T-\mu I) =\det(T-\mu P)$. We have the following characterization of the determinant of $T-\mu P$; see the Appendix for a detailed proof.
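The identity $\det(P^{-1}T-\mu I)=\det(T-\mu P)$, which drives the analysis below, is easy to sanity-check numerically since $\det P = 1$; the random problem data in this sketch are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, rho, beta = 3, 2, 1.0, 10.0
A = rng.standard_normal((M, N))
B = rng.standard_normal((M, N))
sym = lambda S: (S + S.T) / 2
H, G = sym(rng.standard_normal((N, N))), sym(rng.standard_normal((N, N)))  # Hessian stand-ins

I_N, I_M = np.eye(N), np.eye(M)
P = np.block([
    [I_N,                    np.zeros((N, N)), np.zeros((N, M))],
    [(rho / beta) * B.T @ A, I_N,              np.zeros((N, M))],
    [-rho * A,               -rho * B,         I_M],
])
T = np.block([
    [I_N - H / beta - (rho / beta) * A.T @ A, -(rho / beta) * A.T @ B,                  -A.T / beta],
    [np.zeros((N, N)),                        I_N - G / beta - (rho / beta) * B.T @ B, -B.T / beta],
    [np.zeros((M, N)),                        np.zeros((M, N)),                        I_M],
])

# P is lower triangular with unit diagonal, hence det P = 1 and
# det(P^{-1} T - mu I) = det(P^{-1} (T - mu P)) = det(T - mu P).
mu = 1.7
lhs = np.linalg.det(np.linalg.inv(P) @ T - mu * np.eye(2 * N + M))
rhs = np.linalg.det(T - mu * P)
print(abs(np.linalg.det(P) - 1.0) < 1e-9, abs(lhs - rhs) <= 1e-8 * max(1.0, abs(rhs)))
```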
\begin{claim}\label{claim:real} We have the following for $\det[T-\mu P]$: \noindent 1) $\det[T-P]=0$, i.e., $1$ is an eigenvalue of $P^{-1}T$. \noindent 2) Suppose that the following condition is satisfied \begin{align*} \beta> \rho\sigma_{\max}(A^T A) + L_f, \quad \beta> \rho\sigma_{\max}(B^T B) + L_g. \end{align*} Then $\det[T]\ne 0$, i.e., the matrix $P^{-1}T$ is invertible. \noindent 3) Define a $2N\times 2N$ matrix $U(\mu) = [U_{11}(\mu)\; U_{12}(\mu); U_{21}(\mu)\; U_{22}(\mu)]$, with \begin{subequations} \begin{align}\label{eq:Q} U_{11}(\mu) &= -\mu\left(2I - \frac{2\rho}{\beta}A^T A-\frac{1}{\beta} H -\mu I\right) + I - \frac{\rho}{\beta}A^T A -\frac{1}{\beta} H\\ U_{12}(\mu) & = \mu \frac{2\rho}{\beta}A^T B -\frac{\rho}{\beta}A^T B = (2\mu-1)\frac{\rho}{\beta}A^T B \\ U_{21}(\mu) & = \mu^2\frac{\rho}{\beta}B^T A\\ U_{22}(\mu) & = -\mu\left(2 I -\frac{1}{\beta}G -\frac{2\rho}{\beta}B^T B -\mu I\right) + I-\frac{1}{\beta} G- \frac{\rho}{\beta}B^T B. \end{align} \end{subequations} Then we have $\det[U(\mu)] = \det[T-\mu P]$, and for any $\delta\in\mathbb{R}_{+}$ the eigenvalues of $U(1+\delta)$ are the same as those of the following symmetric matrix \begin{align}\label{eq:symmetric} \begin{bmatrix} U_{11}(1+\delta) & \hspace{-0.3cm}{(\delta+1)}{\sqrt{2\delta+1}}\frac{\rho}{\beta}A^T B\\ {(\delta+1)}{\sqrt{2\delta+1}} \frac{\rho}{\beta}B^T A & \hspace{-0.3cm} U_{22}(1+\delta) \end{bmatrix}. \end{align} \end{claim} Based on Claim \ref{claim:real}, we will show that the matrix $P^{-1}T$ has a {\it real} eigenvalue $\mu=1+\delta$ for some $\delta>0$.
To this end, plugging $\mu=1+\delta$ into the expression of the $U$ matrix in \eqref{eq:Q} we have \begin{align*} U_{11}(1+\delta) &= \delta^2 I + \frac{\rho}{\beta}(1+2\delta) A^T A + \frac{\delta}{\beta} H\\ U_{21}(1+\delta) & = (1+\delta)^2 \frac{\rho}{\beta} B^T A , \quad U_{12}(1+\delta) = (1+2\delta) \frac{\rho}{\beta}A^T B\\ U_{22}(1+\delta) & = \delta^2 I + \frac{\rho}{\beta}(1+2\delta) B^T B + \frac{\delta}{\beta} G. \end{align*} Therefore, in this case we can express $U(1+\delta)$ as \begin{align*} U(1+\delta) = (2\delta +1) U(1) + \frac{\delta}{\beta} \begin{bmatrix} H & 0\\ 0 & G \end{bmatrix} + \delta^2\begin{bmatrix} I & 0 \\ \frac{\rho}{\beta}B^T A & I \end{bmatrix} \nonumber. \end{align*} It remains to show that there exists $\delta^*>0$ such that the determinant of the above matrix is zero. To this end, we rewrite the above expression as follows \begin{align}\label{eq:Q:delta} U(1+\delta) &= \delta\left( \frac{2\delta+1}{\delta} U(1) + \frac{1}{\beta} \begin{bmatrix} H & 0\\ 0 & G \end{bmatrix} + \delta \begin{bmatrix} I & 0 \\ \frac{\rho}{\beta}B^T A & I \end{bmatrix}\right)\nonumber\\ &:= \delta\left(F(\delta) + E(\delta)\right) \end{align} where for notational simplicity, we have defined \begin{align*} F(\delta)& = \frac{(2\delta +1)}{\delta} U(1) + \frac{1}{\beta} \begin{bmatrix} H & 0\\ 0 & G \end{bmatrix} , \;E(\delta) = \delta\begin{bmatrix} I & 0 \\ \frac{\rho}{\beta}B^T A & I \end{bmatrix}. \end{align*} Note that from \eqref{eq:strict:admm}, we know that at a strict saddle point, there exists a nonzero $z$ such that \begin{align} U(1) z =0, \quad z^T\begin{bmatrix} H & 0\\ 0 & G \end{bmatrix} z\le -\sigma\|z\|^2, \end{align} which implies \begin{align} z^T \left(\gamma U(1) + \begin{bmatrix} H & 0\\ 0 & G \end{bmatrix} \right)z \le -\sigma\|z\|^2, \; \forall~\gamma. \end{align} This further implies that the matrix $F(\delta)$ has an eigenvalue no greater than $-\sigma/\beta$ for any $\delta>0$.
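The decomposition of $U(1+\delta)$ just derived is a purely algebraic identity and can be checked numerically; the random matrices below (symmetric stand-ins for $H$ and $G$) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, rho, beta, delta = 3, 2, 1.3, 7.0, 0.4
A, B = rng.standard_normal((M, N)), rng.standard_normal((M, N))
sym = lambda S: S + S.T
H, G = sym(rng.standard_normal((N, N))), sym(rng.standard_normal((N, N)))
I = np.eye(N)

def U(mu):
    # blocks of U(mu) in the form derived for mu = 1 + delta
    d = mu - 1.0
    U11 = d**2 * I + (rho / beta) * (1 + 2 * d) * A.T @ A + (d / beta) * H
    U12 = (1 + 2 * d) * (rho / beta) * A.T @ B
    U21 = (1 + d)**2 * (rho / beta) * B.T @ A
    U22 = d**2 * I + (rho / beta) * (1 + 2 * d) * B.T @ B + (d / beta) * G
    return np.block([[U11, U12], [U21, U22]])

blkdiag = np.block([[H, np.zeros((N, N))], [np.zeros((N, N)), G]])
lower = np.block([[I, np.zeros((N, N))], [(rho / beta) * B.T @ A, I]])
rhs = (2 * delta + 1) * U(1.0) + (delta / beta) * blkdiag + delta**2 * lower
print(np.allclose(U(1.0 + delta), rhs))
```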
Next we invoke a matrix perturbation result \cite{Stewart90} to argue that the matrix $F(\delta)+ E(\delta)$ also has a negative eigenvalue as long as the parameter $\delta>0$ is small enough. For a given matrix $\tilde{F} = F+ E\in\mathbb{R}^{N\times N}$, let us define the following quantity, which is referred to as the optimal matching distance between $F$ and $\tilde{F}$ [see Chapter 4, Section 1, Definition 1.2 in \cite{Stewart90}] \begin{align}\label{eq:md} \mbox{md}(F, \tilde{F}): = \min_{\Pi} \max_{j\in[N]}| \tilde{\sigma}_{\Pi(j)}-\sigma_j| \end{align} where $\Pi$ is taken over all permutations of $[N]$, and $\sigma_j$ (resp.\ $\tilde{\sigma}_j$) is the $j$th eigenvalue of $F$ (resp.\ $\tilde{F}$). We have the following result characterizing the matching distance of two matrices $F$ and $\tilde{F}$ \cite{Stewart90}: \begin{claim}\label{claim:md1} Suppose that $F$ is diagonalizable, i.e., $X^{-1} F X =\Upsilon$. Then the following is true \begin{align} \mbox{\rm md}(F, \tilde{F})\le (2N-1)\|X\|\|X^{-1}\|\|E\|. \end{align} \end{claim} Let us apply Claim \ref{claim:md1} to the matrices $F(\delta)$ and $F(\delta)+E(\delta)$. Note that \begin{align*} \|E\|_2 &= \delta\,\sigma^{1/2}_{\max}\left(\begin{bmatrix} I & \frac{\rho}{\beta}A^T B\\ \frac{\rho}{\beta}B^T A & \frac{\rho^2}{\beta^2} B^T A A^T B +I \end{bmatrix}\right) := \delta d \end{align*} where $d$ is a fixed number independent of $\delta$. By applying Claim \ref{claim:md1}, and using the fact that $F(\delta)$ is symmetric, so that $X$ can be taken orthogonal with $\|X\|=\|X^{-1}\|=1$, we obtain the following \begin{align} \mbox{md}(F(\delta),F(\delta)+E(\delta))\le (2N-1) \delta d. \end{align} Clearly, we can pick $\delta = \frac{\sigma}{2 d \beta(2N-1)} $, which implies that \begin{align} \mbox{md}(F(\delta), F(\delta)+ E(\delta))\le \frac{\sigma}{2\beta}. \end{align} Combining this with the fact that $F(\delta)$ has an eigenvalue smaller than or equal to $-\sigma/\beta$ regardless of the choice of $\delta$, and that all the eigenvalues of $F(\delta)+ E(\delta)$ are real (cf.
Claim \ref{claim:real}), we conclude that there exists an index $i\in [2N]$ such that \begin{align} \sigma_{i}(F(\delta)+ E(\delta))\le -\frac{\sigma}{2\beta}. \end{align} This implies that \begin{align*} \sigma_i(U(1+\delta)) &\stackrel{\eqref{eq:Q:delta}}= \delta\sigma_{i}(F(\delta)+ E(\delta)) \le -\frac{\sigma\delta}{2\beta} = -\frac{\sigma^2}{4 d \beta^2(2N-1)}. \end{align*} In conclusion, we have the following claim. \begin{claim}\label{eq:positive:negative} There exist $\hat{\delta}>0$ and $\tilde\delta>0$ such that \begin{align} \sigma_{\min}(U(1+\hat{\delta}))<0,\; \sigma_i (U(1+\tilde{\delta}))>1, \quad \forall~i. \end{align} \end{claim} \vspace{-0.2cm} \noindent{\bf Proof.} The first claim comes directly from the above discussion. The second claim is also easy to see by analyzing the eigenvalues of the symmetric matrix in \eqref{eq:symmetric} for large positive $\delta$. \hfill{\bf Q.E.D.}\smallskip \vspace{-0.2cm} Using the results in Claim \ref{claim:real} and Claim \ref{eq:positive:negative}, and using the fact that the eigenvalues of $U(1+\delta)$ are continuous functions of $\delta$, we conclude that there exists $\delta^*>0$ such that $\det[U(1+\delta^*)]=0$. The result below summarizes the preceding discussion. \begin{claim}\label{claim:sigma:MT} Suppose Assumptions [B1]--[B5] hold true. Let $(x^*, y^*, \lambda^*)$ be a first-order stationary solution satisfying \eqref{eq:first_order} which is also a strict saddle point satisfying \eqref{eq:strict:admm}. Let $\sigma_i(P^{-1}T)$ be the $i$th eigenvalue of the matrix $P^{-1}T$. Then the following holds: \begin{align} \exists~i\in [2N+M], \;\; \mbox{\rm s.t.}\;\; |\sigma_i(P^{-1}T)|>1. \end{align} Further, when $\beta$ satisfies \begin{align} \label{eq:beta:admm:invertible} \hspace{-0.2cm}\beta> \rho\sigma_{\max}(A^T A) + L_f, \; \beta> \rho\sigma_{\max}(B^T B) + L_g, \end{align} the matrix $P^{-1}T$ is invertible. \end{claim} The rest of the proof uses a similar argument as in Theorem \ref{thm:main}.
We have the following result for the G-ADMM algorithm. \begin{theorem}\label{thm:main:2} Suppose that Assumptions [B1] -- [B5] hold, and $\beta, \rho$ are chosen according to \eqref{eq:beta:admm:invertible} and \eqref{eq:condition:admm} in the Appendix. Suppose that $(x^{0}, y^{0}, \lambda^0)$ is initialized randomly. Then with probability one, the iterates generated by the G-ADMM converge to an ss2 solution satisfying \eqref{eq:second-order-admm}. \end{theorem} \section{Conclusion} The main contribution of this work is to show that primal-dual based first-order methods are capable of converging to second-order stationary solutions for linearly constrained non-convex problems. The main technique that we have leveraged is the Stable Manifold Theorem, together with its recently developed connection to first-order optimization methods. One important implication of our result is that properly designed distributed non-convex optimization methods (for both the global consensus problem and the distributed optimization problem over a multi-agent network) can also converge to second-order stationary solutions. To the best of our knowledge, this is the first algorithm for non-convex distributed optimization that is capable of computing second-order stationary solutions. Some preliminary numerical results (included in the appendix) also show that the proposed algorithms work well and are able to avoid strict saddle points.
\section*{Methods} TaAs has 12 pairs of Weyl nodes, which are divided into 4 pairs of W1 and 8 pairs of W2. The Fermi level $E_F$ of as-grown single-crystalline TaAs is close to the two types of Weyl nodes, which are separated by 13 meV in energy; therefore its Fermi surface consists of pairs of Weyl electron pockets (see Fig. S7) \cite{zcl,arnold2016chiral}. We prepared the single crystals of TaAs by standard chemical vapor transport (CVT) \cite{scheafer1964chemical,MurrayTA_mainGrowth} in this study. The large single crystal used for the magnetic torque measurements is shown in the inset of Fig. S1. Its polished surface is the (001) plane, as confirmed by X-ray diffraction measurements. The single crystal used for the parallel magnetization measurements in the pulsed field measures $1\,\mathrm{mm}\times 1\,\mathrm{mm}\times 5\,\mathrm{mm}$, chosen to acquire the data with higher resolution. The magnetic torque measurements were performed using a capacitive cantilever in a water-cooled magnet with steady fields up to 33 T at the Chinese High Magnetic Field Laboratory (CHMFL), Hefei. In order to estimate the background signal from the cantilever and the cable, the empty cantilever was calibrated under the same conditions. The details of the calibration and measurements are given in the SI. \begin{acknowledgments} C.-L. Z. appreciates Lu Li's crucial comments on small-angle torque theory and treats his research work as precious homework from Shui-Fen Fan. J.-L. Z. thanks Dr. Lin Jiao for sharing his analysis program with us. C.-M. W. thanks Yuriy Sharlai for many discussions. S.J. is supported by the National Basic Research Program of China (Grant No. 2014CB239302) and the National Natural Science Foundation of China (Grant No. 11774007). J.-L. Z. is supported by the National Natural Science Foundation of China (Grant No. 11504378). H.Z.L. was supported by the National Key R \& D Program (Grant No. 2016YFA0301700) and the National Natural Science Foundation of China under Grant No. 11574127. C.-M. W.
is supported by the National Natural Science Foundation of China (Grant No. 11474005). H.L. acknowledges the Singapore National Research Foundation for support under NRF Award No. NRF-NRFF2013-03. The National High Magnetic Field Laboratory is supported by the National Science Foundation Cooperative Agreement No. DMR-1157490, the State of Florida, and the US Department of Energy. Work at Los Alamos National Laboratory was supported by the Department of Energy, Office of Science, Basic Energy Sciences program LABLF100 ``Science of 100 Tesla''. C.-L. Z. and C.-M. W. contributed equally to this work. \end{acknowledgments} \bibliographystyle{unsrt}
\section{Introduction} An old result due to Lov\'asz \cite{lov67} states that a graph $G$ can be characterized by counting homomorphisms from all graphs $F$ to $G$. That is, two graphs $G$ and $H$ are isomorphic if and only if, for all $F$, the number $\hom(F,G)$ of homomorphisms from $F$ to $G$ equals the number $\hom(F,H)$ of homomorphisms from $F$ to $H$. This simple result has far-reaching consequences, because mapping graphs $G$ to their \emph{homomorphism vectors} $\vechom(G):=\big(\hom(F,G)\big)_{F\text{ graph}}$ (or suitably scaled versions of these infinite vectors) allows us to apply tools from functional analysis in graph theory. This is the foundation of the beautiful theory of graph limits, developed by Lov\'asz and others over the last 15 years (see \cite{lov12}). However, from a computational perspective, representing graphs by their homomorphism vectors has the disadvantage that the problem of computing the entries of these vectors is~NP-complete. To avoid this difficulty, we may want to restrict the homomorphism vectors to entries from a class of graphs for which counting homomorphisms is tractable. That is, instead of considering the full homomorphism vector $\vechom(G)$ we consider the vector $\vechom_{\mathcal{F}}(G):=\big(\hom(F,G)\big)_{F\in\mathcal{F}}$ for a class $\mathcal{F}$ of graphs such that the problem of computing $\hom(F,G)$ for given graphs $F\in\mathcal{F}$ and $G$ is solvable in polynomial time. Arguably the most natural example of such a class $\mathcal{F}$ is the class of all trees. More generally, computing $\hom(F,G)$ for given graphs $F\in\mathcal{F}$ and $G$ is possible in polynomial time for all classes $\mathcal{F}$ of bounded tree width, and under a natural assumption from parameterized complexity theory, it is not possible in polynomial time for any class $\mathcal{F}$ of unbounded tree width \cite{daljon04}.
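For trees, the tractability can be seen directly: $\hom(T,G)$ admits a simple dynamic program over a rooted copy of $T$, running in time polynomial in both graphs. A minimal sketch (the adjacency-list encodings and function names are ours, not from the paper):

```python
# Count homomorphisms hom(T, G) from a tree T to a graph G by dynamic
# programming: root T, then for a tree node t and target vertex v,
#   hom_t(v) = prod over children c of t of (sum over neighbors u of v of hom_c(u)).
def hom_tree(tree_adj, root, g_adj):
    def count(t, parent):
        # table[v] = number of homomorphisms of the subtree rooted at t mapping t to v
        table = {v: 1 for v in g_adj}
        for c in tree_adj[t]:
            if c == parent:
                continue
            child = count(c, t)
            for v in g_adj:
                table[v] *= sum(child[u] for u in g_adj[v])
        return table
    return sum(count(root, None).values())

# Example: T = path with 3 vertices (two edges), G = triangle K3.
path3 = {0: [1], 1: [0, 2], 2: [1]}
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(hom_tree(path3, 0, k3))  # 12 = number of walks of length 2 in K3
```

Each tree node is processed once with work polynomial in $|V(G)|$, which is exactly the bounded-tree-width phenomenon specialized to width 1.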
This immediately raises the question of what the vector $\vechom_{\mathcal{F}}(G)$, for a class $\mathcal{F}$ of bounded tree width, tells us about the graph $G$. A first nice example (Proposition~\ref{prop:spectrum}) is that the vector $\vechom_{\mathcal{C}}(G)$ for the class $\mathcal{C}$ of all cycles characterizes the spectrum of a graph, that is, for graphs $G,H$ we have $\vechom_{\mathcal{C}}(G)=\vechom_{\mathcal{C}}(H)$ if and only if the adjacency matrices of $G$ and $H$ have the same eigenvalues with the same multiplicities. This equivalence is a basic observation in spectral graph theory (see~\cite[Lemma~1]{van2003graphs}). Before we state deeper results along these lines, let us describe a different (though related) motivation for this research. Determining the similarity between two graphs is an important problem with many applications, mainly in machine learning, where it is known as ``graph matching''~(e.g.~\cite{confogsanven04}). But how can the similarity between graphs be measured? An obvious idea is to use the \emph{edit distance}, which simply counts how many edges and vertices have to be deleted from or added to one graph to obtain the other. However, two graphs that have a small edit distance can nevertheless be structurally quite dissimilar (e.g.~\cite[Section 1.5.1]{lov12}). The edit distance is also very hard to compute, as it is closely related to the notoriously difficult quadratic assignment problem~{(e.g.~\cite{arvkobkuhvas12,nagsvi09})}. Homomorphism vectors offer an alternative, more structurally oriented approach to measuring graph similarity. After suitably scaling the vectors, we can compare them using standard vector norms. This idea is reminiscent of the ``graph kernels'' used in machine learning~{(e.g.\ \cite{visschrakonbor10})}.
Like the homomorphism vectors, many graph kernels are based on the idea of counting certain patterns in graphs, such as paths, walks, cycles or subtrees, and in fact any inner product on the homomorphism vectors yields a graph kernel. A slightly different type of graph kernel is the so-called Weisfeiler-Leman (subtree) kernel~\cite{sheschlee+11}. This kernel is derived from the \emph{color refinement} algorithm (a.k.a.\ the \emph{1-dimensional Weisfeiler-Leman algorithm}), which is a simple and efficient heuristic to test whether two graphs are isomorphic (e.g.\ \cite{grokermlaschwe17+}). The algorithm computes a coloring of the vertices of a graph based on the iterated degree sequences; we give the details in Section~\ref{sec:tree}. To use it as an isomorphism test, we compare the color patterns of two graphs. If they are different, we say that color refinement \emph{distinguishes} the graphs. If the color patterns of the two graphs turn out to be the same, the graphs may still be non-isomorphic, but the algorithm fails to detect this. Whether color refinement is able to distinguish two graphs~$G$ and~$H$ has a very nice linear-algebraic characterization due to Tinhofer~\cite{tin86,tin91}. Let~$V$ and $W$ be the vertex sets and let $A\in\{0,1\}^{V\times V}$ and $B\in\{0,1\}^{W\times W}$ be the adjacency matrices of~$G$ and $H$, respectively. Now consider the system~$\Fiso(G,H)$ of linear equations: \begin{center} \vspace{-1em} \begin{minipage}{7cm} \begin{empheq}[ left={\Fiso(G,H):\quad\empheqlbrace}, box=\colbox ]{align} AX &=XB \tag{F1}\label{eq:F1}\\ X\boldsymbol 1_W &=\boldsymbol 1_V \tag{F2}\label{eq:F2}\\ \boldsymbol 1_V^TX &=\boldsymbol 1_W^T \tag{F3}\label{eq:F3} \end{empheq} \end{minipage} \end{center} In these equations, $X$ denotes a $(V\times W)$-matrix of variables and $\boldsymbol 1_{U}$ denotes the all-1 vector over the index set~$U$. Equations~\eqref{eq:F2} and~\eqref{eq:F3} simply state that all row and column sums of~$X$ are supposed to be $1$.
Thus the nonnegative integer solutions to $\Fiso(G,H)$ are permutation matrices, which due to~\eqref{eq:F1} describe isomorphisms between~$G$ and~$H$. The nonnegative real solutions to $\Fiso(G,H)$, which in fact are always rational, are called \emph{fractional isomorphisms} between~$G$ and~$H$. Tinhofer proved that two graphs are fractionally isomorphic if and only if color refinement does not distinguish them. For every $k\ge 2$, color refinement has a generalization, known as the \emph{$k$-dimensional Weisfeiler-Leman algorithm ($k$-WL)}, which colors not the vertices of the given graph but $k$-tuples of vertices. Atserias and Maneva~\cite{atsman13} (also see~\cite{mal14}) generalized Tinhofer's theorem by establishing a close correspondence between $k$-WL and the level-$k$ Sherali-Adams relaxation of $\Fiso(G,H)$. \subsection*{Our results} How expressive are homomorphism vectors $\vechom_{\mathcal{F}}(G)$ for restricted graph classes $\mathcal{F}$\,? We consider the class $\mathcal{T}$ of trees first, where the answer is surprisingly clean. \begin{theorem}\label{theo:1} For all graphs $G$ and $H$, the following are equivalent: \begin{enumerate}[i] \item\label{it:homvec trees} $\vechom_{\mathcal{T}}(G)=\vechom_{\mathcal{T}}(H)$. \item\label{it:colref} Color refinement does not distinguish $G$ and $H$. \item\label{it:fraciso} $G$ and $H$ are fractionally isomorphic, that is, the system $\Fiso(G,H)$ of linear equations has a nonnegative real solution. \end{enumerate} \end{theorem} As mentioned before, the equivalence between \ref{it:colref} and \ref{it:fraciso} is due to Tinhofer~\cite{tin86,tin91}. An unexpected consequence of our theorem is that we can decide in time $O((n+m)\log n)$ whether $\vechom_{\mathcal{T}}(G)=\vechom_{\mathcal{T}}(H)$ holds for two given graphs $G$ and $H$ with $n$ vertices and $m$~edges. (If two graphs have a different number of vertices or edges, then their homomorphism counts already differ on the 1-vertex or 2-vertex trees.) 
This is remarkable, because every known algorithm for computing the entry $\hom(T,G)$ of the vector $\vechom_{\mathcal{T}}(G)$ requires quadratic time when~$T$ and~$G$ are given as input. It is a consequence of the proof of Theorem~\ref{theo:1} that, in order to characterize an $n$-vertex graph~$G$ up to fractional isomorphisms, it suffices to restrict the homomorphism vector~$\vechom_{\mathcal{T}}(G)$ to trees of height at most $n-1$. What happens if we restrict the structure of trees even further? In particular, let us restrict the homomorphism vector to its path entries, that is, consider~$\vechom_{\mathcal{P}}(G)$ for the class~$\mathcal{P}$ of all paths. Figure~\ref{fig:path} shows an example of two graphs~$G$ and~$H$ with $\vechom_{\mathcal{P}}(G)=\vechom_{\mathcal{P}}(H)$ and $\vechom_{\mathcal{T}}(G)\neq\vechom_{\mathcal{T}}(H)$. \begin{figure} \centering \begin{tikzpicture} \path[use as bounding box] (120:1.6) -- (-120:1.6) -- (0:1.6); \node[bullet] (n) at (0 :0 ) {}; \node[bullet] (n1) at (0 :0.7) {}; \node[bullet] (n1b) at (0 :1.4) {}; \node[bullet] (n2) at (120 :0.7) {}; \node[bullet] (n2b) at (120 :1.4) {}; \node[bullet] (n3) at (-120:0.7) {}; \node[bullet] (n3b) at (-120:1.4) {}; \draw[thick] (n) -- (n1) -- (n1b); \draw[thick] (n) -- (n2) -- (n2b); \draw[thick] (n) -- (n3) -- (n3b); \end{tikzpicture} \hspace{2cm} \begin{tikzpicture} \path[use as bounding box] (120:1.6) -- (-120:1.6) -- (0:1.6); \node[bullet] (n1) at (0 :0.7) {}; \node[bullet] (n2) at (60 :0.7) {}; \node[bullet] (n3) at (120 :0.7) {}; \node[bullet] (n4) at (180 :0.7) {}; \node[bullet] (n5) at (240 :0.7) {}; \node[bullet] (n6) at (300 :0.7) {}; \node[bullet] (n) at (0:1.4) {}; \draw[thick] (n1) -- (n2) -- (n3) -- (n4) -- (n5) -- (n6) -- (n1); \end{tikzpicture} \caption{Two fractionally non-isomorphic graphs with the same path homomorphism counts.} \label{fig:path} \end{figure} Despite their weaker distinguishing capabilities, the vectors $\vechom_{\mathcal{P}}(G)$ are quite interesting. 
They are related to graph kernels based on counting walks, and they have a clean algebraic description: it is easy to see that $\hom(P_k,G)$, the number of homomorphisms from the path $P_k$ of length $k$ to $G$, is equal to the number of length-$k$ walks in~$G$, which in turn is equal to~$\boldsymbol 1^TA^k\boldsymbol 1$, where~$A$ is the adjacency matrix of $G$ and $\boldsymbol 1$ is the all-$1$ vector of appropriate length. \begin{theorem}\label{theo:2} For all graphs $G$ and $H$, the following are equivalent: \begin{enumerate}[i] \item\label{it:hom paths} $\vechom_{\mathcal{P}}(G)=\vechom_{\mathcal{P}}(H)$. \item\label{it:Fiso} The system $\Fiso(G,H)$ of linear equations has a real solution. \end{enumerate} \end{theorem} While the proof of Theorem~\ref{theo:1} is mainly graph-theoretic---we establish the equivalence between the assertions \ref{it:homvec trees} and \ref{it:colref} by expressing the ``colors'' of color refinement in terms of specific tree homomorphisms---the proof of Theorem~\ref{theo:2} is purely algebraic. We use spectral techniques, but with a twist, because neither does the spectrum of a graph $G$ determine the vector $\vechom_{\mathcal{P}}(G)$ nor does the vector determine the spectrum. This is in contrast with~$\vechom_{\mathcal{C}}(G)$ for the class~$\mathcal{C}$ of all cycles, which, as we already mentioned, fails to distinguish two graphs if and only if they have the same spectrum. Let us now turn to homomorphism vectors $\vechom_{\mathcal{T}_k}(G)$ for the class $\mathcal{T}_k$ of all graphs of tree width at most~$k$. We will relate these to $k$-WL, the $k$-dimensional generalization of color refinement. We also obtain a corresponding system of linear equations. Let $G$ and $H$ be graphs with vertex sets $V$ and $W$, respectively. Instead of variables $X_{vw}$ for vertex pairs ${(v,w)\in V \times W}$, as in the system $\Fiso(G,H)$, the new system has variables $X_{\pi}$ for $\pi\subseteq V\times W$ of size $|\pi|\le k$.
We call $\pi=\{(v_1,w_1),\ldots,(v_\ell,w_\ell)\}\subseteq V\times W$ a \emph{partial bijection} if $v_i=v_j\iff w_i=w_j$ holds for all $i,j$, and we call it a \emph{partial isomorphism} if in addition ${v_iv_j\in E(G)\iff w_iw_j\in E(H)}$ holds for all~$i,j$. Now consider the following system~$\Liso[k](G,H)$ of linear equations: \vspace{-1.5em} \begin{center} \begin{minipage}{13cm} \begin{empheq}[ left={\Liso[k](G,H):\quad\empheqlbrace}, box=\colbox ]{align} \sum_{v\in V} X_{\pi\cup\{(v,w)\}}&= X_\pi && \parbox[t]{4cm}{for all $\pi\subseteq V\times W$ of size $|\pi|\le k-1$ and all $w\in W$} \tag{L1}\label{eq:ck1}\\ \sum_{w\in W} X_{\pi\cup\{(v,w)\}}&=X_\pi && \parbox[t]{4cm}{for all $\pi\subseteq V\times W$ of size $|\pi|\le k-1$ and all $v\in V$} \tag{L2}\label{eq:ck2}\\ X_\pi&=0 && \parbox[t]{5cm}{for all $\pi\subseteq V\times W$ of size $|\pi|\le k$ such that $\pi$ is not a partial isomorphism from $G$ to $H$} \tag{L3}\label{eq:pk}\\ X_\emptyset&=1 \tag{L4}\label{eq:ck3} \end{empheq} \end{minipage} \end{center} This system is closely related to the Sherali-Adams relaxations of $\Fiso(G,H)$: Every solution for the level-$k$ Sherali-Adams relaxation of $\Fiso(G,H)$ yields a solution to $\Liso[k](G,H)$, and every solution to $\Liso[k](G,H)$ yields a solution to the level $k-1$ Sherali-Adams relaxation of $\Fiso(G,H)$ \cite{atsman13,groott15}. Our result is this: \begin{theorem}\label{theo:3} For all $k\ge 1$ and for all graphs $G$ and $H$, the following are equivalent: \begin{enumerate}[i] \item\label{it:hom twk} $\vechom_{\mathcal{T}_k}(G)=\vechom_{\mathcal{T}_k}(H)$. \item\label{it:kWL} $k$-WL does not distinguish $G$ and $H$. \item\label{it:Liso nonneg} $\Liso[k+1](G,H)$ has a nonnegative real solution. \end{enumerate} \end{theorem} The equivalence between \ref{it:kWL} and \ref{it:Liso nonneg} is implicit in previous work~\cite{immlan90,atsman13,groott15}. 
The system~$\Liso[k](G,H)$ has another nice interpretation related to the proof complexity of graph isomorphism: it is shown in \cite{bergro15} that $\Liso[k](G,H)$ has a real solution if and only if a natural system of polynomial equations encoding the isomorphisms between $G$ and $H$ has a degree-$k$ solution in the Hilbert Nullstellensatz proof system \cite{beaimpkra+94,bus98}. In view of Theorem~\ref{theo:2}, it is tempting to conjecture that the solvability of $\Liso[k+1](G,H)$ characterizes the expressiveness of the homomorphism vectors $\vechom_{\mathcal{P}_k}(G)$ for the class $\mathcal{P}_k$ of all graphs of path width $k$. Unfortunately, we only prove one direction of this conjecture. \begin{theorem}\label{theo:4} Let $k$ be an integer with $k\ge 2$ and let $G,H$ be graphs. If $\Liso[k+1](G,H)$ has a real solution, then $\vechom_{\mathcal{P}_k}(G)=\vechom_{\mathcal{P}_k}(H)$. \end{theorem} Combining this theorem with a recent result from \cite{gropak17} separating the nonnegative from arbitrary real solutions of our systems of equations, we obtain the following corollary. \begin{corollary} For every $k$, there are graphs $G$ and $H$ with $\vechom_{\mathcal{P}_k}(G)=\vechom_{\mathcal{P}_k}(H)$ and $\vechom_{\mathcal{T}_2}(G)\neq\vechom_{\mathcal{T}_2}(H)$. \end{corollary} \begin{comment} \begin{table} \renewcommand{\arraystretch}{1.1} \resizebox{\columnwidth}{!}{% \begin{tabular}{llllll} \multicolumn{2}{l}{Homomorphisms from~$\mathcal{F}$} & Algorithm & Linear system & \\\midrule $\mathcal{T}$ & trees & Color-refinement & $\Fiso$ [$\ge 0$] & Thm.~\ref{theo:1} \\ $\mathcal{P}$ & paths & & $\Fiso$ & Thm.~\ref{theo:2}\\ $\mathcal{C}$ & cycles & Spectrum & & Prop.~\ref{prop:spectrum}\\ $\mathcal{T}_k$ & tree width $k$ & $k$-dimensional Weisfeiler--Leman & $\Liso[k+1]$ [$\ge 0$] & Thm.~\ref{theo:3}\\ $\mathcal{P}_k$ & path width $k$ & & $\Liso[k+1]$ & Thm.~\ref{theo:4} ``$\Leftarrow$''\\ \end{tabular} } \caption{Summary of our results. 
Each non-empty entry of the table represents an incomplete graph isomorphism test, and each row indicates tests with the same distinguishing power. For example, Thm.~\ref{theo:1} states that $\vechom_\mathcal{T}(G)=\vechom_\mathcal{T}(H)$ holds if and only if color-refinement does not distinguish~$G$ and~$H$ if and only if~$\Fiso(G,H)$ has a non-negative solution. An exception is the last row, for which we only have one implication.} \end{table} \end{comment} \section{Preliminaries} \subparagraph*{Basics.} Graphs in this paper are simple, undirected, and finite (even though our results transfer to directed graphs and even to weighted graphs). For a graph~$G$, we write~$V(G)$ for its vertex set and~$E(G)$ for its edge set. For $v\in V(G)$, the set of neighbors of~$v$ is denoted by~$N_G(v)$. For $S\subseteq V(G)$, we denote by~$G[S]$ the subgraph of~$G$ induced by the vertices of~$S$. A \emph{rooted graph} is a graph~$G$ together with a designated root vertex~$r(G)\in V(G)$. We write multisets using the notation $\mset{1,1,6,2}$. \subparagraph*{Matrices.} An $LU$-decomposition of a matrix~$A$ consists of a lower triangular matrix~$L$ and an upper triangular matrix~$U$ such that $A=LU$ holds. Not every matrix over~$\mathbf{R}$ has an $LU$-decomposition, although every finite square matrix admits one after a suitable permutation of its rows. We also use \emph{infinite matrices} over~$\mathbf{R}$, which are functions~$A:I\times J\to\mathbf{R}$ where~$I$ and~$J$ are countable, locally finite posets. The matrix product~$AB$ is defined in the natural way via $(AB)_{ij} = \sum_{k} A_{ik} B_{kj}$, provided that each of these sums has only finitely many non-zero terms; otherwise, we leave the product undefined. An $n\times n$ real symmetric matrix has real eigenvalues and a corresponding set of orthogonal eigenspaces. The spectral decomposition of a real symmetric matrix $M$ is of the form $M = \lambda_1 P_1 + \dots + \lambda_l P_l$, where $\lambda_1,\dots,\lambda_l$ are the distinct eigenvalues of~$M$ with corresponding eigenspaces $W_1,\dots,W_l$.
Moreover, each~$P_j$ is the projection matrix corresponding to the projection onto the eigenspace $W_j$. Usually, $P_j$ is expressed as $P_j = UU^T$ for a matrix $U$ whose columns form an orthonormal basis of~$W_j$. \subparagraph*{Homomorphism numbers.} Recall that a mapping $h:V(F)\to V(G)$ is a homomorphism if $h(e)\in E(G)$ holds for all $e\in E(F)$ and that $\hom(F,G)$ is the number of homomorphisms from~$F$ to~$G$. Let $\surj(F,G)$ be the number of homomorphisms from~$F$ to~$G$ that are surjective on both the vertices and edges of~$G$. Let $\inj(F,G)$ be the number of injective homomorphisms from~$F$ to~$G$. Let $\sub(F,G)=\inj(F,G)/\aut(F)$, where $\aut(F)$ is the number of automorphisms of $F$. Observe that $\sub(F,G)$ is the number of subgraphs of~$G$ that are isomorphic to $F$. Where convenient, we view the objects $\hom$, $\surj$, $\inj$, and $\sub$ as infinite matrices; the matrix indices are all unlabeled graphs, sorted by their size. However, we only use one representative of each isomorphism class, called the \emph{isomorphism type} of the graphs in the class, as an index in the matrix. Then $\surj$ is lower triangular and $\inj$ (and hence $\sub$) is upper triangular, so $\hom = \surj \cdot \sub$ is an $LU$-decomposition of $\hom$. Additionally, $\ind(F,G)$ is the number of times~$F$ occurs as an induced subgraph in~$G$. Similarly to the homomorphism vectors $\vechom_{\mathcal{F}}(G)$, we define vectors $\vecinj_\mathcal{F}(G)$ and $\vecind_\mathcal{F}(G)$. Finally, let $G,H$ be rooted graphs. A homomorphism from~$G$ to~$H$ is a graph homomorphism that maps the root of~$G$ to the root of~$H$. Moreover, two rooted graphs are isomorphic if there is an isomorphism mapping the root to the root. \section{Homomorphisms from trees} \label{sec:tree} \subsection{Color refinement and tree unfolding} Color refinement iteratively colors the vertices of a graph in a sequence of \emph{refinement rounds}. Initially, all vertices get the same color.
In each refinement round, any two vertices~$v$ and~$w$ that still have the same color get different colors if there is some color $c$ such that~$v$ and~$w$ have a different number of neighbors of color $c$; otherwise they keep the same color. We stop the refinement process if the vertex partition that is induced by the colors does not change anymore, that is, all pairs of vertices that have the same color before the refinement round still have the same color after the round. More formally, we define the sequence $C^G_0,C^G_1,C^G_2,\ldots$ of colorings as follows. We let $C^G_0(v) = 1$ for all $v\in V(G)$, and for $i\ge 0$ we let $C^G_{i+1}(v) = \msetc{C^G_i(u)}{u\in N_G(v)}$. We say that color refinement \emph{distinguishes} two graphs~$G$ and~$H$ if there is an $i\ge 0$ with \begin{equation}\label{eq:colref distinguishes} \msetc{C_{i}^G(v)}{v\in V(G)}\neq\msetc{C_i^H(v)}{v\in V(H)}\,. \end{equation} We argue now that the color refinement algorithm implicitly constructs, for every vertex~$v$, a tree obtained by simultaneously taking all possible walks starting at~$v$ (and not remembering nodes visited in the past). For a rooted tree~$T$ with root~$r$, a graph~$G$, and a vertex~$v\in V(G)$, we say that $T$ is \emph{a tree at~$v$} if there is a homomorphism $f$ from $T$ to $G$ such that $f(r)=v$ and, for all non-leaves $t\in V(T)$, the function~$f$ induces a bijection between the set of children of $t$ in $T$ and the set of neighbors of~$f(t)$ in $G$. In other words, $f$ is a homomorphism from~$T$ to~$G$ that is \emph{locally bijective}. If~$T$ is an infinite tree at~$v$ and does not have any leaves, then~$T$ is uniquely determined up to isomorphism, and we call this \emph{the infinite tree at~$v$} (or the \emph{tree unfolding} of~$G$ at~$v$), denoted by~$T(G,v)$. For an infinite rooted tree~$T$, let~$T_{\le d}$ be the finite rooted subtree of~$T$ where all leaves are at depth exactly~$d$.
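The refinement rounds just described are easy to simulate. The following sketch implements the distinguishing criterion of \eqref{eq:colref distinguishes}; it is a self-contained illustration with an ad-hoc graph encoding, where we run the refinement on the disjoint union of the two graphs so that their colors stay comparable, and cap the number of rounds at the total number of vertices, which suffices for these small examples:

```python
from collections import Counter

def colref_distinguishes(edges_g, n_g, edges_h, n_h):
    """Color refinement on the disjoint union of G and H; returns True if
    some round yields different color multisets for the two graphs."""
    adj = {v: [] for v in range(n_g + n_h)}
    for u, v in edges_g:
        adj[u].append(v); adj[v].append(u)
    for u, v in edges_h:
        adj[n_g + u].append(n_g + v); adj[n_g + v].append(n_g + u)
    colors = {v: 0 for v in adj}
    for _ in range(n_g + n_h + 1):
        if (Counter(colors[v] for v in range(n_g))
                != Counter(colors[n_g + v] for v in range(n_h))):
            return True
        # refinement round: the new color of v is the multiset of its
        # neighbors' current colors, renamed to small integers
        sig = {v: tuple(sorted(colors[u] for u in adj[v])) for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: ids[sig[v]] for v in adj}
    return False

c6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(colref_distinguishes(c6, 6, two_triangles, 6))  # False: both 2-regular
print(colref_distinguishes([(0, 1), (1, 2), (2, 3)], 4,
                           [(0, 1), (0, 2), (0, 3)], 4))  # True
```

The first call illustrates the limits of the method: a $6$-cycle and two disjoint triangles are both $2$-regular, so every round assigns all vertices the same color.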
For all finite trees~$T$ of depth~$d$, define $\crr(T,G)\in\set{0,\dots,\abs{V(G)}}$ to be the number of vertices~$v\in V(G)$ for which~$T$ is isomorphic to $T(G,v)_{\le d}$. Note that this number is zero if not all leaves of~$T$ are at the same depth~$d$ or if some node of~$T$ has more than~$n-1$ children, where $n=\abs{V(G)}$. The $\veccrr$-vector of~$G$ is the vector~$\veccrr(G)=(\crr(T,G))_{T\in\mathcal{T}_r}$, where~$\mathcal{T}_r$ denotes the family of all rooted trees. The following connection between the color refinement algorithm and the $\veccrr$-vector is known. \begin{lemma}[Angluin~\cite{DBLP:conf/stoc/Angluin80}, see also Krebs and Verbitsky~{\cite[Lemma~2.5]{DBLP:conf/lics/KrebsV15}}]\label{lem: colref cr} For all graphs $G$ and $H$, color refinement distinguishes $G$ and $H$ if and only if $\veccrr(G)\neq\veccrr(H)$ holds. \end{lemma} \subsection{Proof of Theorem~\ref{theo:1}} Throughout this section, we work with rooted trees. For a rooted tree $T$ and an (unrooted) graph $G$, we simply let $\hom(T,G)$ be the number of homomorphisms from the plain tree underlying $T$ to $G$, ignoring the root. Let~$T$ and $T'$ be rooted trees. A homomorphism~$h$ from $T$ to $T'$ is \emph{depth-preserving} if, for all vertices~$v\in V(T)$, the depth of~$v$ in~$T$ is equal to the depth of~$h(v)$ in~$T'$. Moreover, a homomorphism~$h$ from~$T$ to~$T'$ is \emph{depth-surjective} if the image of~$T$ under~$h$ contains vertices at every depth present in~$T'$. We define~$\dhom(T,T')$ as the number of homomorphisms from~$T$ to~$T'$ that are both depth-preserving and depth-surjective. Note that $\dhom(T,T')=0$ holds if and only if~$T$ and $T'$ have different depths. \begin{lemma}\label{lem:hom dhom cr} Let $T$ be a rooted tree and let $G$ be a graph. We have \begin{equation}\label{eq:hom dhom cr} \hom( T, G ) = \sum_{T'} \dhom(T,T') \cdot \crr(T',G) \,, \end{equation} where the sum is over all unlabeled rooted trees~$T'$. In other words, the matrix identity $\hom = \dhom \cdot \crr$ holds.
\end{lemma} \begin{proof} Let~$d$ be the depth of~$T$ and let~$r$ be the root of~$T$. Every~$T'$ with $\dhom(T,T')\ne 0$ has depth~$d$ too, and there are at most~$n$ non-isomorphic rooted trees~$T'$ of depth~$d$ with $\crr(T',G)\ne 0$. Thus the sum in \eqref{eq:hom dhom cr} has only finitely many non-zero terms and is well-defined. For a rooted tree~$T'$ and a vertex $v\in V(G)$, let $H(T',v)$ be the set of all homomorphisms~$h$ from~$T$ to~$G$ such that $h(r)=v$ holds and the tree unfolding~$T(G,v)_{\le d}$ is isomorphic to~$T'$. Let $H(T')=\bigcup_{v\in V(G)}H(T',v)$ and observe that $|H(T',v)|=\dhom(T,T')$ holds whenever $T(G,v)_{\le d}\cong T'$, while $H(T',v)=\emptyset$ otherwise. Since $\crr(T',G)$ is the number of $v\in V(G)$ with $T(G,v)_{\le d}\cong T'$, we thus have $|H(T')|=\dhom(T,T')\cdot \crr(T',G)$. Since each homomorphism from~$T$ to~$G$ is contained in exactly one set~$H(T')$, we obtain the desired equality \eqref{eq:hom dhom cr}. \end{proof} For rooted trees $T$ and $T'$, let $\dsurj(T,T')$ be the number of depth-preserving and surjective homomorphisms from~$T$ to~$T'$. In particular, these homomorphisms are not merely depth-surjective: they have to hit every vertex of~$T'$. For rooted trees $T$ and $T'$ of the same depth, let $\dsub(T,T')$ be the number of subgraphs of~$T'$ that are isomorphic to~$T$ (under an isomorphism that maps the root to the root); if $T$ and $T'$ have different depths, we set $\dsub(T,T')=0$. \begin{lemma}\label{lem: dhom dsurj sub} $\dhom = \dsurj \cdot \dsub$ is an $LU$-decomposition of~$\dhom$, and $\dsurj$ and $\dsub$ are invertible. \end{lemma} As is the case for finite matrices, the inverse of a lower (upper) triangular matrix is lower (upper) triangular. As the matrix $\dsurj$ is lower triangular and the matrix $\dsub$ is upper triangular, their inverses are as well. We are ready to prove our first main theorem. \begin{proof}[Proof of Theorem~\ref{theo:1}] We only need to prove the equivalence between assertions~\ref{it:homvec trees} and~\ref{it:colref}.
For every graph~$G$, let $\vechom_r(G):=\big(\hom(T,G)\big)_{T\in\mathcal{T}_r}$. By our convention that for a rooted tree~$T$ and an unrooted graph $G$ we let $\hom(T,G)$ be the number of homomorphisms of the plain tree underlying $T$ to $G$, for all $G$ and $H$ we have $\vechom_r(G)=\vechom_r(H)\iff\vechom(G)=\vechom(H)$. By Lemma~\ref{lem: colref cr}, it suffices to prove for all graphs $G,H$ that \begin{equation}\label{eq:homcr1} \veccrr(G)=\veccrr(H)\iff\vechom_r(G)=\vechom_r(H)\,. \end{equation} We view the vectors $\vechom_r(G)$ and $\veccrr(G)$ as infinite column vectors. By Lemma~\ref{lem:hom dhom cr}, we have \begin{equation} \vechom_r(G)=\dhom\cdot\veccrr(G) \text{ and } \vechom_r(H)=\dhom \cdot \veccrr(H)\,. \end{equation} The forward direction of \eqref{eq:homcr1} now follows immediately. It remains to prove the backward direction. Since $\dhom=\dsurj\cdot\dsub$ holds by Lemma~\ref{lem: dhom dsurj sub} for two invertible matrices $\dsurj$ and $\dsub$, we can first left-multiply with $\dsurj^{-1}$ to obtain the equivalent identities \begin{equation} \dsurj^{-1}\cdot\vechom_r(G)=\dsub\cdot\veccrr(G) \text{ and } \dsurj^{-1}\cdot\vechom_r(H)=\dsub\cdot\veccrr(H)\,. \end{equation} Now suppose $\vechom_r(G)=\vechom_r(H)$ holds, and set $\boldsymbol v=\vechom_r(G)$. Then $\dsurj^{-1}\cdot\boldsymbol v$ is well-defined, because $\dsurj$ and its inverse are lower triangular, so each entry of the product is a finite sum. Thus we obtain $\dsub\cdot\veccrr(G)=\dsub\cdot\veccrr(H)$, and we set $\boldsymbol w=\dsub\cdot\veccrr(G)$. Unfortunately, $\dsub^{-1}\cdot \boldsymbol w$ may be undefined, since $\dsub^{-1}$ is upper triangular. While we can still use a matrix inverse, the argument becomes a bit subtle. The crucial observation is that $\crr(T',G)$ is non-zero for at most~$n$ different trees~$T'$, and all such trees have maximum degree at most $n$. Thus we do not need to look at \emph{all} trees but only those with maximum degree at most~$n$.
Let $\widetilde{\mathcal{T}}$ be the set of all unlabeled rooted trees of maximum degree at most~$n$. Let $\veccrr'=\veccrr|_{\widetilde{\mathcal{T}}}$, let $\boldsymbol w'=\boldsymbol w|_{\widetilde{\mathcal{T}}}$, and let $\dsub'=\dsub|_{\widetilde{\mathcal{T}} \times \widetilde{\mathcal{T}}}$. Then we still have the following for all $T\in\widetilde{\mathcal{T}}$ and $G$: \begin{equation} \boldsymbol w'_{T} = \sum_{T'\in\widetilde{\mathcal{T}}} \dsub'(T,T')\cdot\crr'(T',G)\,. \end{equation} The new matrix $\dsub'$ is a principal submatrix of~$\dsub$ and thus remains invertible. Moreover, $\dsub'^{-1}\cdot \boldsymbol w'$ is well-defined, since \begin{equation}\label{eq: dsub inverse expanded} \sum_{T'\in\widetilde{\mathcal{T}}} \dsub'^{-1}(T,T')\cdot \boldsymbol w'_{T'} \end{equation} is a finite sum for each~$T$: the number of (unlabeled) trees $T'\in\widetilde{\mathcal{T}}$ that have the same depth~$d$ as~$T$ is bounded by a function of~$n$ and~$d$. Thus~$\dsub'^{-1}\cdot \boldsymbol w'=\veccrr'(G)$. By a similar argument, we obtain $\dsub'^{-1}\cdot \boldsymbol w'=\veccrr'(H)$. This implies $\veccrr'(G)=\veccrr'(H)$ and thus $\veccrr(G)=\veccrr(H)$. \end{proof} \section{Homomorphisms from cycles and paths}\label{sec:path} While the arguments we saw in the proof of Theorem~\ref{theo:1} are mainly graph-theoretic, the proof of Theorem~\ref{theo:2} uses spectral techniques. To introduce the techniques, we first prove a simple, known result already mentioned in the introduction. We call two square matrices \emph{co-spectral} if they have the same eigenvalues with the same multiplicities, and we call two graphs \emph{co-spectral} if their adjacency matrices are co-spectral. \begin{prop}[e.g. {\cite[Lemma~1]{van2003graphs}}]\label{prop:spectrum} Let $\mathcal{C}$ be the class of all cycles (including the degenerate cycle of length $0$, which is just a single vertex).
For all graphs $G$ and $H$, we have $\vechom_{\mathcal{C}}(G)=\vechom_{\mathcal{C}}(H)$ if and only if $G$ and $H$ are co-spectral. \end{prop} For the proof, we review a few simple facts from linear algebra. The trace $\op{tr}(A)$ of a square matrix $A\in\mathbb R^{n\times n}$ is the sum of the diagonal entries. If the eigenvalues of $A$ are $\lambda_1,\ldots,\lambda_n$, then $\op{tr}(A)=\sum_{i=1}^n\lambda_i$. Moreover, for each $\ell\ge0$ the eigenvalues of the matrix~$A^\ell$ are $\lambda_1^\ell,\ldots,\lambda_n^\ell$, and thus $\op{tr}(A^\ell)=\sum_{i=1}^n\lambda_i^\ell$. The following technical lemma encapsulates the fact that the information $\op{tr}(A^\ell)$ for all~$\ell\in\mathbf{N}$ suffices to reconstruct the spectrum of~$A$ with multiplicities. We use the same lemma to prove Theorem~\ref{theo:2}, but for Proposition~\ref{prop:spectrum} a less general version would suffice. \begin{lemma}\label{lem: limit lemma} Let $X,Y\subseteq\mathbf{R}$ be two finite sets and let $c\in\mathbf{R}^X_{\ne 0}$ and $d\in\mathbf{R}^{Y}_{\ne 0}$ be two vectors. If the equation \begin{equation}\label{eq:sumidentity} \sum_{x\in X} c_x x^\ell = \sum_{y\in Y} d_{y} y^\ell \end{equation} holds for all~$\ell\in\mathbf{N}$, then $X=Y$ and $c=d$. \end{lemma} \begin{proof} We prove the claim by induction on~$k:=\abs{X}+\abs{Y}$. For $k=0$, the claim is trivially true since both sums in~\eqref{eq:sumidentity} are equal to zero by convention. Let $\hat x=\arg\max\setc{\abs{x}}{x\in X\cup Y}$ and let $\hat x\in X$ without loss of generality. If~$\hat x=0$, then $X=\set{0}$ and we claim that~$Y=\set{0}$ holds. Clearly \eqref{eq:sumidentity} for~$\ell=0$ yields $0\ne c_0 = \sum_{y\in Y} d_y$. In particular, $Y\ne\emptyset$ holds. Since~$\hat x=0$ is the maximum of~$X\cup Y$ in absolute value, we have~$Y=\set{0}$ and thus also $c=d$. Now suppose that $\hat x\ne 0$ holds. 
We consider the sequences~$(a_\ell)_{\ell\in\mathbf{N}}$ and $(b_\ell)_{\ell\in\mathbf{N}}$ with \begin{align} a_\ell = \frac{1}{\hat x^{\ell}}\cdot \sum_{x\in X} c_x x^\ell \quad\text{and}\quad b_\ell = \frac{1}{\hat x^{\ell}}\cdot \sum_{y\in Y} d_y y^\ell \,. \end{align} Note that $a_\ell=b_\ell$ holds for all $\ell\in\mathbf{N}$ by assumption. Observe the following simple facts: \begin{enumerate} \item[1)] If $-\hat x\not\in X$, then $\lim_{\ell\to\infty} a_\ell=c_{\hat x}$. \item[2)] If $-\hat x\in X$, then $\lim_{\ell\to\infty} a_{2\ell}=c_{\hat x}+c_{-\hat x}$ and $\lim_{\ell\to\infty} a_{2\ell+1}=c_{\hat x}-c_{-\hat x}$. \end{enumerate} As well as the following exhaustive case distinction for~$Y$: \begin{enumerate} \item[a)] If $\hat x,-\hat x\not\in Y$, then $\lim_{\ell\to\infty} b_\ell = 0$. \item[b)] If $\hat x\in Y$ and $-\hat x\not\in Y$, then $\lim_{\ell\to\infty} b_{\ell} = d_{\hat x}$. \item[c)] If $\hat x\not\in Y$ and $-\hat x\in Y$, then $\lim_{\ell\to\infty} b_{2\ell} = d_{-\hat x}$ and $\lim_{\ell\to\infty} b_{2\ell+1} = -d_{-\hat x}$. \item[d)] If $\hat x,-\hat x\in Y$, then $\lim_{\ell\to\infty} b_{2\ell} = d_{\hat x}+d_{-\hat x}$ and $\lim_{\ell\to\infty} b_{2\ell+1} = d_{\hat x}-d_{-\hat x}$. \end{enumerate} If~$-\hat x\not\in X$ holds, we see from 1) that~$a_\ell$ converges to the non-zero value~$c_{\hat x}$. Since the two sequences are equal, the sequence~$b_\ell$ also converges to a non-zero value. The only case for~$Y$ where this happens is b), and we get $\hat x\in Y$, $-\hat x\not\in Y$, and $c_{\hat x}=d_{\hat x}$. On the other hand, if~$-\hat x\in X$, we see from 2) that $a_\ell$ does not converge, but the even and odd subsequences do. The only cases for~$Y$ where this happens for~$b_\ell$ too are c) and d). We cannot be in case c), since the two accumulation points of~$b_\ell$ just differ in their sign, while the two accumulation points of~$a_\ell$ do not have the same absolute value. 
Thus we must be in case d) and obtain $\hat x,-\hat x\in Y$ as well as \[ c_{\hat x} + c_{-\hat x} = d_{\hat x} + d_{-\hat x} \quad\text{and}\quad c_{\hat x} - c_{-\hat x} = d_{\hat x} - d_{-\hat x}\,. \] This linear system has full rank and implies $c_{\hat x}=d_{\hat x}$ and $c_{-\hat x}=d_{-\hat x}$. Either way, we can remove $\set{\hat x}$ or $\set{\hat x,-\hat x}$ from both~$X$ and~$Y$ and apply the induction hypothesis to the resulting instance $X',Y',c',d'$. Then~$(X,c)=(Y,d)$ follows as claimed. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:spectrum}] For all $\ell\ge 0$, the number of homomorphisms from the cycle~$C_\ell$ of length~$\ell$ to a graph $G$ with adjacency matrix~$A$ is equal to the number of closed length-$\ell$ walks in~$G$, which in turn is equal to the trace of $A^\ell$. Thus for graphs $G,H$ with adjacency matrices $A,B$, we have $\vechom_{\mathcal{C}}(G)=\vechom_{\mathcal{C}}(H)$ if and only if $\op{tr}(A^\ell)=\op{tr}(B^\ell)$ holds for all~$\ell\ge 0$. If~$A$ and~$B$ have the same spectrum~$\lambda_1,\dots,\lambda_n$, then $\op{tr}(A^\ell)=\lambda_1^\ell+\dots+\lambda_n^\ell=\op{tr}(B^\ell)$ holds for all~$\ell\in\mathbf{N}$. For the reverse direction, suppose~$\op{tr}(A^\ell)=\op{tr}(B^\ell)$ for all~$\ell\in\mathbf{N}$. Let~$X\subseteq\mathbf{R}$ be the set of eigenvalues of~$A$ and for each~$\lambda\in X$, let~$c_\lambda\in\set{1,\dots,n}$ be the multiplicity of the eigenvalue~$\lambda$. Let~$Y\subseteq\mathbf{R}$ and $d_\lambda$ for~$\lambda\in Y$ be the corresponding eigenvalues and multiplicities for~$B$. Then for all~$\ell\in\mathbf{N}$, we have \[ \sum_{\lambda\in X} c_\lambda \lambda^\ell = \op{tr}(A^\ell) = \op{tr}(B^\ell) = \sum_{\lambda\in Y} d_\lambda \lambda^\ell \,. \] By Lemma~\ref{lem: limit lemma}, this implies~$(X,c)=(Y,d)$, that is, the spectra of~$A$ and~$B$ are identical.
\end{proof} In the following example, we show that the vectors $\vechom_{\mathcal{C}}$ for the class $\mathcal{C}$ of cycles and $\vechom_{\mathcal{T}}$ for the class $\mathcal{T}$ of trees are incomparable in their expressiveness. \begin{figure} \centering \begin{tikzpicture} \begin{scope} \node[bullet] (a) at (0 ,0 ) {}; \node[bullet] (b1) at (1 ,0) {}; \node[bullet] (b2) at (0 ,1) {}; \node[bullet] (b3) at (-1 ,0) {}; \node[bullet] (b4) at (0 ,-1) {}; \draw[thick] (a) edge (b1) edge (b2) edge (b3) edge (b4); \end{scope} \begin{scope}[xshift=6cm] \node[bullet] (a) at (0 ,0 ) {}; \node[bullet] (b1) at (1 ,0) {}; \node[bullet] (b2) at (0 ,1) {}; \node[bullet] (b3) at (-1 ,0) {}; \node[bullet] (b4) at (0 ,-1) {}; \draw[thick] (b1) edge (b2) (b2) edge (b3) (b3) edge (b4) (b4) edge (b1); \end{scope} \end{tikzpicture} \caption{Two co-spectral graphs.} \label{fig:cospectral} \end{figure} \begin{example} The graphs $G$ and $H$ shown in Figure~\ref{fig:cospectral} are co-spectral and thus $\vechom_{\mathcal{C}}(G)= \vechom_{\mathcal{C}}(H)$, but it is easy to see that $\vechom_{\mathcal{P}}(G)\neq \vechom_{\mathcal{P}}(H)$ for the class $\mathcal{P}$ of all paths; since paths are trees, this implies $\vechom_{\mathcal{T}}(G)\neq \vechom_{\mathcal{T}}(H)$. Let $G'$ be a cycle of length $6$ and $H'$ the disjoint union of two triangles. Then obviously, $\vechom_{\mathcal{C}}(G')\neq \vechom_{\mathcal{C}}(H')$; for example, $\hom(C_3,G')=0\neq\hom(C_3,H')$. However, color refinement does not distinguish $G'$ and $H'$ and thus $\vechom_{\mathcal{T}}(G')= \vechom_{\mathcal{T}}(H')$. \end{example} Let us now turn to the proof of Theorem \ref{theo:2}. \begin{proof}[Proof of Theorem~\ref{theo:2}] Let~$A$ and~$B$ be the adjacency matrices of~$G$ and~$H$, respectively. Since~$A$ is a real symmetric matrix, its eigenvalues are real and the corresponding eigenspaces are orthogonal and span~$\mathbf{R}^n$. Let~$\boldsymbol 1$ be the~$n$-dimensional all-$1$ vector, and let~$X=\set{\lambda_1,\dots,\lambda_k}$ be the set of all eigenvalues of~$A$ whose corresponding eigenspaces are not orthogonal to~$\boldsymbol 1$.
We call these eigenvalues the \emph{useful} eigenvalues of~$A$ and without loss of generality assume $\lambda_1>\dots>\lambda_k$. The~$n$-dimensional all-$1$ vector~$\boldsymbol 1$ can be expressed as a sum of eigenvectors of~$A$ corresponding to useful eigenvalues. In particular, there is a unique decomposition $\boldsymbol 1=\sum_{i=1}^k u_i$ such that each~$u_i$ is a non-zero eigenvector in the eigenspace of~$\lambda_i$. Moreover, the vectors~$u_1,\dots,u_k$ are orthogonal. For the matrix~$B$, we analogously define its set of useful eigenvalues~$Y=\set{\mu_1,\dots,\mu_{k'}}$ and the decomposition~$\boldsymbol 1=\sum_{i=1}^{k'} v_i$. We prove the equivalence of the following three assertions (of which the first and third appear in the statement of Theorem~\ref{theo:2}). \begin{enumerate} \item $\vechom_{\mathcal{P}}(G)=\vechom_{\mathcal{P}}(H)$. \item $A$ and $B$ have the same set of useful eigenvalues~$\lambda_1,\dots,\lambda_k$ and $\|u_i\|=\|v_i\|$ holds for all~$i\in\set{1,\dots,k}$. Here,~$\|\cdot\|$ denotes the Euclidean norm with~$\|x\|^2=\sum_j x_j^2$. \item The system $\Fiso(G,H)$ of linear equations has a real solution. \end{enumerate} Note that in 2, we do not require that the useful eigenvalues occur with the same multiplicities in~$A$ and~$B$. We show the implications (1 $\Rightarrow$ 2), (2 $\Rightarrow$ 3), and (3 $\Rightarrow$ 1). \medskip (1 $\Rightarrow$ 2): Suppose that $\hom(P_\ell,G)=\hom(P_\ell,H)$ holds for all paths~$P_\ell$. Equivalently, this can be stated in terms of the adjacency matrices $A$ and $B$: for all $\ell\in\mathbf{N}$, we have $\trans{\boldsymbol 1} A^{\ell} \boldsymbol 1 = \trans{\boldsymbol 1} B^{\ell} \boldsymbol 1$. We claim that~$A$ and~$B$ have the same useful eigenvalues, and that the projections of~$\boldsymbol 1$ onto the corresponding eigenspaces have the same lengths. Note that~$A^\ell\boldsymbol 1=\sum_{i=1}^k \lambda^\ell_i u_i$ holds.
Thus we have \begin{equation} \trans{\boldsymbol 1} A^{\ell} \boldsymbol 1 = \paren*{\sum_{i=1}^k \trans{u_i}} \paren*{\sum_{i=1}^k \lambda^\ell_i u_i} = \sum_{i=1}^k \|u_i\|^2 \cdot \lambda^\ell_i \,. \end{equation} The term $\trans{\boldsymbol 1} B^{\ell} \boldsymbol 1$ can be expanded analogously, which together yields \begin{equation} \sum_{i=1}^k \|u_i\|^2 \cdot \lambda^\ell_i = \sum_{i=1}^{k'} \|v_i\|^2 \cdot \mu^\ell_i \quad\text{for all $\ell\in\mathbf{N}$.} \end{equation} Since all coefficients $c_{\lambda_i}=\|u_i\|^2$ and $d_{\mu_i}=\|v_i\|^2$ are non-zero, we are in the situation of Lemma~\ref{lem: limit lemma}. We obtain $k=k'$ and, for all $i\in\set{1,\dots,k}$, we obtain $\lambda_i=\mu_i$ and $\|u_i\|=\|v_i\|$. This is exactly the claim that we want to show. \medskip (2 $\Rightarrow$ 3): We claim that the $(n\times n)$-matrix~$X$ defined via \begin{equation} X = \displaystyle\sum_{i=1}^k\frac{1}{\|u_i\|^2} \cdot u_i{v^T_i} \end{equation} satisfies the $\Fiso$ equations $AX=XB$ and $X \boldsymbol 1 = \boldsymbol 1=X^T\boldsymbol 1$. Indeed, we have \begin{align} AX = \sum_{i=1}^k\frac{1}{\|u_i\|^2} \cdot A u_i{v^T_i} = \sum_{i=1}^k\frac{\lambda_i}{\|u_i\|^2} \cdot u_i{v^T_i} = \sum_{i=1}^k\frac{1}{\|u_i\|^2} \cdot u_i{v^T_i}B^T = XB^T = XB\,. \end{align} This follows since~$Au_i=\lambda_iu_i$, $Bv_i=\lambda_iv_i$, and $B$ is symmetric. Moreover, we have \begin{align} X\boldsymbol 1 = \sum_{i=1}^k\frac{1}{\|u_i\|^2} \cdot u_i{v^T_i}\boldsymbol 1 = \sum_{i=1}^k\frac{1}{\|u_i\|^2} \cdot u_i{v^T_i}\sum_{j=1}^k v_j = \sum_{i=1}^k\frac{1}{\|u_i\|^2} \cdot u_i \cdot {v^T_i}v_i = \boldsymbol 1\,. \end{align} This holds by definition of~$u_i$ and~$v_i$ and from $v_i^Tv_i=\|v_i\|^2=\|u_i\|^2$. The claim $X^T\boldsymbol 1 = \boldsymbol 1$ follows analogously. \medskip (3 $\Rightarrow$ 1): Suppose there is a matrix~$X$ with $X^T{\boldsymbol 1} = X{\boldsymbol 1}= {\boldsymbol 1}$ and $AX=XB$.
We obtain $A^{\ell} X = X B^{\ell}$ by induction for all~$\ell\in\mathbf{N}_{>0}$. For $\ell=0$, this also holds since $A^0=I_n$ by convention. As a result, we have $\trans{{\boldsymbol 1}} A^\ell {\boldsymbol 1} =\trans{{\boldsymbol 1}} A^\ell X {\boldsymbol 1} = \trans{{\boldsymbol 1}} X B^\ell {\boldsymbol 1} = \trans{{\boldsymbol 1}} B^\ell {\boldsymbol 1}$ for all~$\ell\in\mathbf{N}$. Since these scalars count the length-$\ell$ walks in $G$ and $H$, respectively, we obtain $\hom(P_\ell,G)=\hom(P_\ell,H)$ for all paths~$P_\ell$ as claimed. \end{proof} \section{Homomorphisms from bounded tree width and path width} We briefly outline the main ideas of the proofs of Theorems~\ref{theo:3} and \ref{theo:4}; the technical details are deferred to the appendix. In Theorem~\ref{theo:3}, the equivalence between \ref{it:kWL} and~\ref{it:Liso nonneg} is essentially known, so we focus on the equivalence between~\ref{it:hom twk} and~\ref{it:kWL}. The proof is similar to the proof of Theorem~\ref{theo:1} in Section \ref{sec:tree}. Let us fix $k\ge 2$. The idea of the $k$-WL algorithm is to iteratively color $k$-tuples of vertices. Initially, each $k$-tuple $(v_1,\ldots,v_k)$ is colored by its \emph{atomic type}, that is, the isomorphism type of the labeled graph $G[\{v_1,\ldots,v_k\}]$. Then in the refinement step, to define the new color of a $k$-tuple $\bar v$ we look at the current color of all $k$-tuples that can be reached from~$\bar v$ by adding one vertex and then removing one vertex. Similar to the tree unfolding of a graph $G$ at a vertex $v$, we define the \emph{Weisfeiler-Leman tree unfolding} at a $k$-tuple $\bar v$ of vertices. These objects have some resemblance to the pebbling comonad, which was defined by Abramsky, Dawar, and Wang~\cite{DBLP:conf/lics/AbramskyDW17} in the language of category theory.
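For $k=2$, this refinement is easy to simulate. The sketch below is ours and uses the common formulation in which the color of a pair $(u,v)$ is refined by the multiset of color pairs of $(u,w)$ and $(w,v)$ over all vertices $w$; the graph encoding and the round bound are ad hoc, and colors are renamed jointly for both input graphs so that their color histograms remain comparable:

```python
from collections import Counter

def two_wl_distinguishes(n_g, edges_g, n_h, edges_h):
    """2-WL run side by side on G and H. Colors are renamed jointly after
    every round, so the color histograms of the two graphs stay comparable."""
    def init(n, edges):
        e = {frozenset(x) for x in edges}
        # atomic type of the pair (u, v): equality and adjacency
        return {(u, v): (u == v, frozenset((u, v)) in e)
                for u in range(n) for v in range(n)}
    col = {'G': init(n_g, edges_g), 'H': init(n_h, edges_h)}
    sizes = {'G': n_g, 'H': n_h}
    for _ in range(n_g * n_g + n_h * n_h):
        # refine: keep the old color and add the multiset of color pairs
        # obtained by routing through every vertex w
        sig = {side: {(u, v): (c[(u, v)],
                               tuple(sorted((c[(u, w)], c[(w, v)])
                                            for w in range(sizes[side]))))
                      for (u, v) in c}
               for side, c in col.items()}
        all_sigs = sorted(set(sig['G'].values()) | set(sig['H'].values()))
        ids = {s: i for i, s in enumerate(all_sigs)}
        col = {side: {p: ids[s[p]] for p in s} for side, s in sig.items()}
    return Counter(col['G'].values()) != Counter(col['H'].values())

c6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(two_wl_distinguishes(6, c6, 6, two_triangles))  # True
```

In contrast to color refinement, $2$-WL distinguishes the $6$-cycle from two disjoint triangles: already after one round, an adjacent pair in a triangle sees a common neighbor, while an adjacent pair on the $6$-cycle does not.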
The WL-tree unfolding describes the color of $\bar v$ computed by $k$-WL; formally, it may be viewed as a pair $(T,F)$ consisting of a graph $F$ together with a ``rooted'' tree decomposition (potentially infinite, but again we cut it off at some finite depth). Similar to the numbers $\crr(T,G)$ and the vector $\veccrr(G)$, we now have numbers $\wl((T,F),G)$ and a vector $\wl(G)$ such that $\wl(G)=\wl(H)$ holds if and only if $k$-WL does not distinguish~$G$ and $H$. Then we define a linear transformation $\Phi$ with $\vechom_{\mathcal{T}_k}(G)=\Phi \wl(G)$. The existence of this linear transformation directly yields the implication \ref{it:kWL}$\implies$\ref{it:hom twk} of Theorem~\ref{theo:3}. To prove the converse, we show that the transformation $\Phi$ is invertible by giving a suitable $LU$-decomposition of full rank. This completes our sketch of the proof of Theorem~\ref{theo:3}. \medskip The proof of Theorem~\ref{theo:4} requires a different argument, because now we have to use a solution~$(X_\pi)$ of the system $\Liso[k+1](G,H)$ to prove that the path width $k$ homomorphism vectors $\vechom_{\mathcal{P}_k}(G)$ and $\vechom_{\mathcal{P}_k}(H)$ are equal. The key idea is to express entries of a suitable variant of $\vechom_{\mathcal{P}_k}(G)$ as linear combinations of entries of the corresponding vector for $H$, using the values $X_\pi$ as coefficients. \section{Conclusions} We have studied the homomorphism vectors $\vechom_{\mathcal{F}}(G)$ for various graph classes $\mathcal{F}$, focusing on classes $\mathcal{F}$ where it is tractable to compute the entries $\hom(F,G)$ of the vector. Our main interest was in the ``expressiveness'' of these vectors, that is, in the question of what $\vechom_{\mathcal{F}}(G)$ tells us about the graph $G$.
For the classes $\mathcal{C}$ of cycles, $\mathcal{T}$ of trees, $\mathcal{T}_k$ of graphs of tree width at most $k$, and $\mathcal{P}$ of paths, we have obtained surprisingly clean answers to this question, relating the homomorphism vectors to various other well-studied formalisms that on the surface have nothing to do with homomorphism counts. Some interesting questions remain open. The most obvious is whether the converse of Theorem~\ref{theo:4} holds, that is, whether for two graphs $G$, $H$ with $\vechom_{\mathcal{P}_k}(G)=\vechom_{\mathcal{P}_k}(H)$, the system $\Liso[k+1](G,H)$ has a real solution (and hence the Nullstellensatz propositional proof system has no degree-$(k+1)$ refutation of $G$ and $H$ being isomorphic). Another related open problem in spectral graph theory is to characterize the graphs that are determined by their spectrum up to isomorphism. In our framework, Proposition \ref{prop:spectrum} ensures that we can equivalently ask for the following characterization: for which graphs $G$ does the vector $\vechom_{\mathcal{C}}(G)$ determine the entire homomorphism vector $\vechom(G)$? Despite the computational intractability, it is also interesting to study the vectors $\vechom_{\mathcal{F}}(G)$ for classes $\mathcal{F}$ of unbounded tree width. Are there natural classes $\mathcal{F}$ (except of course the class of all graphs) for which the vectors $\vechom_{\mathcal{F}}(G)$ characterize $G$ up to isomorphism? For example, what about classes of bounded degree or the class of planar graphs? And what is the complexity of deciding whether $\vechom_{\mathcal{F}}(G)=\vechom_{\mathcal{F}}(H)$ holds when $G$ and $H$ are given as input? Our results imply that this problem is in polynomial time for the classes $\mathcal{T}$, $\mathcal{T}_k$, and~$\mathcal{P}$. For the class of all graphs, it is in quasi-polynomial time by Babai's quasi-polynomial isomorphism test~\cite{bab16}.
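The link between cycle homomorphism counts and the spectrum invoked above is concrete: $\hom(C_\ell,G)$ counts closed walks of length $\ell$, so it equals $\operatorname{tr}(A^\ell)=\sum_i \lambda_i^\ell$, which is exactly the information carried by the adjacency eigenvalues. A minimal numerical check (the test graph is chosen arbitrarily):

```python
import numpy as np

# hom(C_l, G) = number of closed length-l walks = tr(A^l), which equals the
# sum of l-th powers of the adjacency eigenvalues.
def hom_cycle(l, A):
    return int(round(np.trace(np.linalg.matrix_power(A, l))))

# The "paw" graph: a triangle 0-1-2 with a pendant vertex 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
eigs = np.linalg.eigvalsh(A)
moments = [sum(ev**l for ev in eigs) for l in range(1, 8)]
```

So knowing $\vechom_{\mathcal{C}}(G)$ is the same as knowing the adjacency spectrum of $G$.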
Yet it seems plausible that there are classes~$\mathcal{F}$ (even natural classes decidable in polynomial time) for which the problem is co-NP-hard. Maybe the most interesting direction for further research is to study the graph similarity measures induced by homomorphism vectors. A simple way of defining an inner product on the homomorphism vectors is by letting \[ \Big\langle \vechom_{\mathcal{F}}(G) , \vechom_{\mathcal{F}}(H)\Big\rangle:= \sum_{\substack{k\ge 1\\\mathcal{F}_k\neq\emptyset}}\frac{1}{k^k|\mathcal{F}_k|}\sum_{F\in\mathcal{F}_k}\hom(F,G)\hom(F,H), \] where $\mathcal{F}_k$ denotes the class of all graphs $F\in\mathcal{F}$ with~$k$ vertices. The mapping $(G,H)\mapsto \langle \vechom_{\mathcal{F}}(G) , \vechom_{\mathcal{F}}(H)\rangle$ is what is known as a \emph{graph kernel} in machine learning. It induces a (pseudo)metric $d_{\mathcal{F}}$ on the class of graphs. It is an interesting question how it relates to other graph similarity measures, for example, the metric induced by the Weisfeiler-Leman graph kernel. Our Theorem~\ref{theo:1} implies that the metric $d_{\mathcal{T}}$ for the class $\mathcal{T}$ of trees and the metric induced by the Weisfeiler-Leman graph kernel have the same graphs of distance zero.
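For the class $\mathcal{P}$ of paths this inner product is easy to compute explicitly, since $\mathcal{F}_k=\{P_k\}$ is a singleton for every $k\ge 1$. A truncated numerical sketch (the cutoff $k_{\max}$ is an arbitrary choice for illustration; the untruncated series converges thanks to the $1/k^k$ weights):

```python
import numpy as np

def hom_path(k, A):
    """hom(P_k, G), with P_k the path on k vertices (k - 1 edges)."""
    ones = np.ones(A.shape[0])
    return float(ones @ np.linalg.matrix_power(A, k - 1) @ ones)

def path_kernel(A, B, k_max=8):
    """Truncation of <hom_P(G), hom_P(H)>; here |F_k| = 1 for every k."""
    return sum(hom_path(k, A) * hom_path(k, B) / k**k
               for k in range(1, k_max + 1))

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path on 3 vertices
K3 = np.ones((3, 3)) - np.eye(3)                  # triangle
k_cross = path_kernel(P3, K3)
```

Being an inner product of (weighted) feature vectors, the kernel is symmetric, positive on the diagonal, and satisfies the Cauchy-Schwarz inequality.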
\section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~Department of Chemistry and Biochemistry, University of California San Diego, La Jolla, California 92093, United States. E-mail: jyuenzhou@ucsd.edu}} \section{Introduction} Light plays a fundamental role in chemistry. It is an essential ingredient in many biological and synthetic chemical reactions and a basic tool for the investigation of molecular properties \cite{turro1991modern, balzani2014photochemistry}. However, in most cases where photons participate in a chemical event, the interaction of light and matter is weak enough that it may be treated as a small perturbation. In this \textit{weak-coupling} regime, radiation provides only a gateway for a molecular system to change its quantum state. This paradigm is the basis of spectroscopy and photochemical dynamics, but fails entirely when the interaction of a single photon with many molecules is intense enough to overcome their dissipative processes. In this case, just as strongly-interacting atoms form molecules, hybrid states (\textit{molecular polaritons}) with mixed molecular and photonic character result from the strong coupling of the electromagnetic (EM) field with molecules \cite{hopfield_theory_1958, agranovich1960dispersion, ebbesen_hybrid_2016}. In this regime, both photons and molecular excited-states lose their individuality, just as atoms do when they form molecules. The generic setup of a device with polaritonic excitations consists of a medium that confines electromagnetic fields to the microscale (hereafter we will refer to this as an optical \textit{microcavity} \cite{kavokin2017microcavities, baranov2018}, although strong coupling has also been achieved with metal layers supporting plasmons \cite{torma_strong_2015, vasa2018}) and a condensed-phase molecular ensemble with one or more \textit{bright} (optical) transitions nearly-resonant with the optical cavity. 
Typically, molecular transitions can be well approximated to have no wave vector dependence (since optical wavelengths are much larger than molecular length scales) with linewidths dominated by the coupling to intra- and intermolecular degrees of freedom. Conversely, the microcavity spectrum shows a well-defined wave vector dependence (Fig. \ref{cavmod}) and homogeneous broadening due to the weak interaction with the external EM fields \cite{kavokin2017microcavities}. The strong coupling regime is achieved when the rate of energy exchange between microcavity photons and the molecular system is larger than the rate of cavity-photon leakage and molecular dephasing. In this case, the elementary optical excitations (quasiparticles) of the heterostructure consist of superpositions of delocalized (coherent) molecular states and photonic degrees of freedom, with energies and lifetimes which can significantly differ from those of the bare excitations. The capacity to tune the energies and photon/molecular content of polariton states is a main attraction of the strong coupling regime (Sec. \ref{bas}). Nevertheless, the hybrid cavity also contains a large number of incoherent, or \textit{dark} molecular states, which may be more or less localized depending on disorder \cite{houdre_vacuum-field_1996, agranovich2003} and geometry \cite{gonzalez-ballestero2016} of the hybrid material. Despite their weak contribution to the optical response, the dark states are fundamentally important for a description of the novel chemical dynamics emergent in the strong coupling regime.
While theoretical studies of hybrid states of light and matter date back to the 1950s \cite{hopfield_theory_1958, agranovich1960dispersion}, and observations of atomic and solid-state cavity-polaritons first happened in the 1980s \cite{meschede1985, raizen1989} and 1990s \cite{weisbuch_observation_1992, houdre_room_1993}, respectively, it is only recently that experimental \cite{lidzey1998, schouwink2001, holmes2004a, dintinger_strong_2005, kena-cohen_strong_2008, kena-cohen2010, virgili2011, schwartz2011, aberra_guebrou_coherent_2012, hutchison_modifying_2012, schwartz_polariton_2013, simpkins_spanning_2015, thomas_ground-state_2016, zhong2017, chikkaraddy2016, melnikau2017, baieva_dynamics_2017, crum2018, rozenman2018, dunkelberger2018b, cheng2018} and theoretical \cite{cwik2014, pino2015, herrera2016, galego2016, kowalewski2016, bennett2016a, wu2016, herrera_dark_2017, zhang2017, feist2017, dimitrov2017, flick_cavity_2017, zeb2018, yuen-zhou2017, martinez-martinez2018, du2017, martinez-martinez2017b, ribeiro2017c} activity has flourished in the field of strongly coupled chemistry. This attention can be attributed in part to the experimental observations of polariton effects on chemical dynamics, which thus offer novel pathways for the control of molecular processes \cite{ebbesen_hybrid_2016}. In this review we provide a theoretical perspective on the recent advances in molecular polaritons arising from electronic (\textit{organic exciton-polaritons}) or vibrational (\textit{vibrational-polaritons}) degrees of freedom. Our discussions are primarily based on quantum-mechanical effective models, which are general enough to be applied in regimes where a classical description is inaccurate \cite{carusotto_quantum_2013}. Furthermore, the models discussed here describe only the relevant low-energy degrees of freedom probed by experiments.
This allows a consistent and predictive description of the behavior of strongly coupled ensembles including a macroscopic number of molecules. First-principles approaches are explored in Refs. \cite{ruggenthaler2014, dimitrov2017, flick_cavity_2017}. Finally, it is not our intent to provide a complete review of the fast-growing molecular polariton literature. We have decided to present the basic theory and illustrate it with examples that, we believe, show general principles that might be useful for future investigations of polariton chemistry. For reviews on other aspects of molecular polaritons not emphasized here, see Refs. \cite{holmes2007, agranovich2011, kena-cohen2012, torma_strong_2015, ebbesen_hybrid_2016, sukharev_optics_2017, herrera2018, vasa2018, s.dovzhenko2018}. This review is organized as follows. In Sec. \ref{bas} we present the general concepts that form the basis for molecular polaritonics. Sec. \ref{orgp} provides an overview of the theory of organic exciton-polaritons. This is illustrated with applications to polariton-mediated chemical reactivity (Sec. \ref{orgappI}), energy transfer (Sec. \ref{orgappII}) and singlet fission (Sec. \ref{orgappIII}). In Sec. \ref{vibp} we discuss the theory and phenomenology of vibrational-polaritons. We focus on the effects of vibrational anharmonicity on their nonlinear response, and revisit exciting experimental results probing the thermodynamics and kinetics under vibrational strong coupling in Secs. \ref{vib_trans} and \ref{vib_app}, respectively. The ultrastrong regime of light-matter interaction is briefly introduced in Sec. \ref{ultra}. This review is concluded in Sec. \ref{epi}. \section{Polariton basics}\label{bas} We introduce the basic notions of polariton behavior in this section by examining the simplest models displaying strong coupling between light and matter. The bare microcavity modes are reviewed in Sec.
\ref{optspec}, and the spectrum resulting from the strong coupling of two-level systems with a single cavity mode is discussed in Sec. \ref{jctcs}. The intuition given by the results discussed in this section will guide all later developments. \subsection{Optical microcavity spectra}\label{optspec} The optical cavities employed for molecular strong coupling studies generally consist of two highly-reflective (at the frequencies of interest) parallel metallic or dielectric mirrors separated by a distance $L$ on the order of $\mu$m \cite{kavokin2017microcavities}. The length of the cavity is typically chosen so that one of its modes is resonant with a molecular transition. The EM modes of these devices are classified by the in-plane wave vector $\mathbf{q}$, the integer band number $m$ (where $q_z = m\pi/L$) associated with the transverse confinement direction, and the electric field polarization [transverse magnetic (TM) or transverse electric (TE)] \cite{steck2007quantum, kavokin2017microcavities} (see Fig. \ref{cavmod}a). The TE polarization is perpendicular to the incidence plane, while the TM lies within it. The former vanishes at the mirrors, in contrast to the latter. However, in lossless microcavities, the TM and TE modes are degenerate, and when $|\mathbf{q}| \rightarrow 0$ their spatial distributions become identical. The microcavity is typically engineered to have a single band (Fig. \ref{cavmod}b) containing a resonance with the material (though there exist exceptions \cite{simpkins_spanning_2015, coles2014}). The remaining bands are highly off-resonant, and thus, may be neglected in a low-energy theory of polaritons. It is also sometimes useful to perform a long wavelength approximation which disregards the spatial variation of the electric field, and includes explicitly only a single microcavity mode which interacts with a macroscopic collection of optically-active molecular transitions.
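For orientation, the band sketched in Fig. \ref{cavmod}b follows, for an ideal lossless planar cavity, the dispersion $E_m(q) = (\hbar c/\sqrt{\epsilon})\sqrt{q^2 + (m\pi/L)^2}$; note that this closed form is a standard idealization assumed here rather than a result quoted above. A minimal numerical sketch:

```python
import numpy as np

# Standard ideal planar-microcavity dispersion (an assumption of lossless
# mirrors): E_m(q) = (hbar*c / sqrt(eps)) * sqrt(q^2 + (m*pi/L)^2).
HBAR_C = 197.327  # hbar*c in eV*nm

def cavity_energy(q, m=1, L=150.0, eps=1.0):
    """Photon energy in eV for in-plane wave vector q (in nm^-1)."""
    return (HBAR_C / np.sqrt(eps)) * np.sqrt(q**2 + (m * np.pi / L)**2)

# At q = 0 the band bottom is hbar*c*m*pi/(sqrt(eps)*L); for small q the
# band is approximately parabolic, defining an effective photon mass.
E0 = cavity_energy(0.0)  # roughly 4.1 eV for L = 150 nm, m = 1, eps = 1
```

The cavity length $L$ thus sets the band bottom, which is tuned into resonance with a molecular transition.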
This is appropriate whenever the density of accessible molecular states is much larger than that of the accessible photonic modes (total internal reflection of incident radiation with $\theta > \theta_c$ (Fig. \ref{cavmod}a) establishes a natural cut-off frequency for the cavity modes which can be accessed by excitation with external radiation; alternatively, a cutoff can be imposed on cavity photons which are highly-detuned from the molecular transition \cite{pino2015, Daskalakis2017}). Such a condition is fulfilled in most observations of molecular polaritons in condensed-phase media. \begin{figure} \includegraphics[width=\columnwidth]{cavmod} \caption{(a) Representation of microcavity modes excited by radiation (purple) incident at angle $\theta$. The electric field polarization of the TE modes (red) lies along $\hat{n}_{\mathbf{q}}$ while that of TM (blue and green) has $\hat{e}_{\mathbf{q}}$ and $\hat{e}_{z}$ components. (b) Dispersion (energy as a function of mirror-plane wave vector $\mathbf{q}$) of the photonic mode of index $m$ in a microcavity with transverse length $L$. }\label{cavmod} \end{figure} \subsection{Jaynes-Cummings and Tavis-Cummings models}\label{jctcs} When we introduce a bright two-level system to a single-mode microcavity we obtain the Jaynes-Cummings (JC) model \cite{jaynes_comparison_1963}. In particular, it consists of a \textit{lossless} cavity-mode of frequency $\omega_c$ interacting with a two-level system of transition energy $\hbar\omega_s$.
Thus, the effective Hamiltonian of the JC model is given by \begin{equation} H_{\text{JC}} = \hbar \omega_c a^\dagger a + \hbar \omega_s \sigma_+ \sigma_- - \hbar g_s\left(a^\dagger \sigma_- + a\sigma_+\right), \label{jceq}\end{equation} where $a \left(a^\dagger\right)$ is the cavity photon annihilation (creation) operator, $\sigma_+$ $(\sigma_-)$ creates (annihilates) excitons, and $\hbar g_s = \mu \cdot E\sqrt{\hbar \omega_c/2\epsilon V_c}$ is the strength of the radiation-matter interaction, where $\epsilon$ is the dielectric constant of the intracavity medium, $V_c$ is the effective mode volume of the (cavity) photon \cite{ujihara1991}, $E$ is the photon electric field amplitude at the emitter position, and $\mu$ is the transition dipole moment of the latter. Notably, this Hamiltonian implicitly assumes that the light-matter interaction is \textit{strong} relative to the damping of each degree of freedom, yet \textit{weak} compared to both $\omega_c$ and $\omega_s$. Thus, only states with equal total number of excitations \begin{equation} N_{\text{exc}} = a^\dagger a + \sigma_+ \sigma_- \end{equation} are coupled by the light-matter interaction. As a result, the ground-state of the hybrid system is equivalent to that of the decoupled system. The same is clearly not true for the excited-states. For any $N_{\text{exc}} = N > 0$, $H_{\text{JC}}$ has two hybrid photon-matter eigenstates. For example, the lowest-lying excited-states of the system have $N_{\text{exc}} = 1$. They are given by (Fig.
\ref{jctc}a) \begin{align} & \ket{\rm LP} = \text{cos}(\theta_{\text{JC}})\ket{g,1}+\text{sin}(\theta_{\text{JC}})\ket{e,0}, \nonumber \\ & \ket{\rm UP} = -\text{sin}(\theta_{\text{JC}})\ket{g,1} + \text{cos}(\theta_{\text{JC}})\ket{e,0}, \label{jcp}\end{align} where $\ket{g,N} (\ket{e,N})$ denotes a state where the material is in the ground(excited)-state and the cavity has $N$ photons, and $\theta_{\text{JC}} = \text{tan}^{-1}\left[2g_s/(\omega_c-\omega_s)\right]$ is the polariton mixing-angle, which determines the probability amplitude for a photon or emitter to be observed when the state of the system is either $\ket{\text{LP}}$ or $\ket{\text{UP}}$. The state $\ket{\rm LP}$ is called \textit{lower polariton}, while $\ket{\rm UP}$ is the \textit{upper polariton}. Their energy difference is $\hbar\Omega_R = 2\hbar\sqrt{\Delta^2/4+g_s^2}$, where $\Delta = \omega_c - \omega_s$ is the \textit{detuning} between the photon and emitter frequencies. At resonance $(\omega_c = \omega_s)$, the LP and UP become a maximally entangled superposition of emitter and cavity photon, with \textit{vacuum Rabi splitting} $\hbar\Omega_R = 2\hbar g_s$ [the terminology refers to the process by which introduction of an emitter to the cavity \textit{vacuum} (initial-state $\ket{e,0}$) leads to coherent (Rabi) oscillations with frequency $\Omega_R/2$ in the probability to detect a photon or a material excited-state inside the cavity \cite{rabi1937, agarwal1984}]. For positive detunings, the UP (LP) has a higher photon (emitter) character, while the opposite is true when $\omega_c < \omega_s$. The JC model shows other interesting features, such as photon blockade \cite{birnbaum2005}. 
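As a concrete check, the $N_{\text{exc}}=1$ block of $H_{\text{JC}}$ is a $2\times 2$ matrix whose eigenvalues reproduce the splitting $2\hbar\sqrt{\Delta^2/4+g_s^2}$ quoted above. A minimal numerical sketch (with $\hbar = 1$ and arbitrary energy units):

```python
import numpy as np

# N_exc = 1 block of the JC Hamiltonian in the basis {|g,1>, |e,0>}, hbar = 1.
def jc_block(omega_c, omega_s, g):
    return np.array([[omega_c, -g],
                     [-g,      omega_s]])

g = 0.1
# At resonance: eigenvalues omega -/+ g, i.e. vacuum Rabi splitting 2*g.
E_res = np.linalg.eigvalsh(jc_block(2.0, 2.0, g))
splitting_res = E_res[1] - E_res[0]

# Detuned case: splitting 2*sqrt(delta^2/4 + g^2), delta = omega_c - omega_s.
delta = 0.3
E_det = np.linalg.eigvalsh(jc_block(2.0 + delta, 2.0, g))
splitting_det = E_det[1] - E_det[0]
```

Diagonalizing the same block off resonance also reproduces the asymmetric photon/emitter content encoded by the mixing angle.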
However, because it contains no more than a \textit{single} structureless [\textit{i.e.}, the internal (vibronic, vibro-rotational, etc.) structure of molecular excitations is not considered] emitter, the JC model is only of pedagogical significance for chemistry, although conditions in which a single molecule is strongly coupled to a microcavity have only been achieved in a few studies \cite{chikkaraddy2016, benz_single-molecule_2016, wang2017}. In fact, for reasons we will discuss next, most experiments which probe the strong coupling regime employ a molecular ensemble including a macroscopic number of emitters. \begin{figure} \includegraphics[width=\columnwidth]{jctc} \caption{(a) Jaynes-Cummings (JC) model: emitter and photon strongly couple to form hybridized states termed lower and upper polaritons (LP and UP, respectively) separated in energy by $2\hbar g_{s}$; (b) Tavis-Cummings (TC) model: $N$ emitters interact strongly with a photon to yield polariton (LP, UP) and $N-1$ dark states. The latter do not couple to light and thus maintain the original emitter energy.}\label{jctc}\end{figure} The generalization of the JC model for the case where $N$ \textit{identical} two-level emitters interact strongly with a lossless cavity mode is denoted the Tavis-Cummings (TC) \cite{tavis_exact_1968, tavis_approximate_1969} or Dicke model \cite{dicke_coherence_1954}. It is described with the Hamiltonian \begin{equation} H_{\text{TC}} = \hbar \omega_c a^\dagger a + \hbar \omega_s\sum_{i=1}^N \sigma_+^{(i)}\sigma_-^{(i)} - \hbar g_s \sum_{i=1}^N \left(a^\dagger \sigma_-^{(i)} + a\sigma_+^{(i)}\right), \label{tceq}\end{equation} where the superscript $i$ labels each of the $N$ emitters. Note that while $N$ can be very large, the emitters are assumed to occupy a region of space where the variation of the electric field amplitude can be neglected. The spectrum of $H_{\text{TC}}$ differs markedly from that of $H_{\text{JC}}$.
However, the total number of excitations of the system $N_{\text{exc}} = a^\dagger a + \sum_{i=1}^N \sigma_+^{(i)}\sigma_-^{(i)}$ remains a constant of motion. Thus, similar to $H_{\text{JC}}$, the TC model only allows hybrid states in which all components share the same $N_{\text{exc}}$. Specifically, there exist $N+1$ basis states with $N_{\text{exc}} = 1$: a single state with all emitters in the ground-state and a cavity photon, $\ket{g,1} \equiv \ket{0,0,0...,0;1} $, and $N$ states where a single molecule is excited and the cavity EM field is in its ground-state, $\ket{e^{(i)},0} \equiv \ket{0,0,...,e^{(i)},...,0,0;0}, i \in \{1,...,N\}$. The stationary states with $N_{\text{exc}}=1$ (Fig. \ref{jctc}b) not only include the polaritons, but also a degenerate manifold of \textit{dark states} $\ket{\rm D_\mu},~\mu \in \{1,...,N-1\}$, for which an orthogonal basis may be given by delocalized \textit{non-totally-symmetric} (under any permutation of the emitters) molecular excited-states orthogonal to the \textit{permutationally-invariant} bright state \cite{vetter2016single, agarwal1998}. This can be easily seen by rewriting $H_{\text{TC}}$ in terms of bright- and dark-state operators, $\sigma^{(\text{B})} = \frac{1}{\sqrt{N}}\sum_{i=1}^{N} \sigma^{(i)}$ and $\sigma^{(\rm D_\mu)}$, respectively, where $\sigma$ can be any operator acting on the total Hilbert space of the two-level system ensemble, and the normalization is chosen so that the commutation relations of the $\sigma$ matrices are preserved. In this basis, $H_{\text{TC}}$ is given by \begin{equation} H_{\text{TC}} = \hbar \omega_c a^\dagger a + \hbar \omega_s \sigma_+^{(\text{B})}\sigma_-^{(\text{B})} - \hbar \sqrt{N}g_s \left(a^\dagger \sigma_-^{(\text{B})} + a\sigma_+^{(\text{B})}\right) + H_{\rm D}, \label{htc} \end{equation} where $H_{\rm D} = \hbar \omega_s \sum_{\mu=1}^{N-1} \sigma_+^{(\rm D_\mu)}\sigma_-^{(\rm D_\mu)}$ is the dark Hamiltonian. From Eq. \ref{htc} and its similarity with Eq.
\ref{jceq}, it is clear that the hybrid eigenstates with $N_{\text{exc}}=1$ are given by \begin{align} & \ket{\rm LP} = \text{cos}(\theta_{\text{TC}})\ket{g,1}+\text{sin}(\theta_{\text{TC}})\ket{\text{B},0}, \nonumber \\ & \ket{\rm UP} = -\text{sin}(\theta_{\text{TC}})\ket{g,1} + \text{cos}(\theta_{\text{TC}})\ket{\text{B},0}, \end{align} where $\ket{\text{B},0}$ is the totally-symmetric bright emitter state \begin{equation} \ket{\text{B},0} = N^{-1/2}\sum_{i=1}^N\ket{e^{(i)},0}, \end{equation} and $\theta_{\text{TC}} = \text{tan}^{-1}\left[2\sqrt{N}g_s/(\omega_c-\omega_s)\right]$. Notably, these states are simple generalizations of the JC polaritons provided in Eq. \ref{jcp}. However, the vacuum Rabi splitting given by Eq. \ref{htc} is significantly enhanced compared to JC, as a result of the \textit{collective} light-matter coupling $g_s \sqrt{N}$ which couples a cavity photon to a delocalized bright emitter state. In fact, at resonance, the (collective) vacuum Rabi splitting in the TC model is given by $\hbar\Omega_R = 2\hbar\sqrt{N}g_s$. Since $g_s \propto V_c^{-1/2}$, it follows that $\hbar\Omega_R$ scales with the square root of the density of emitters in the optical mode volume. Thus, it is much easier to reach strong coupling between light and matter with a large concentration of optically-active material. \begin{figure} \includegraphics[width=\columnwidth]{orgcav} \caption{The l.h.s. gives a pictorial representation of a set of molecular emitters embedded in a resonant planar microcavity; the r.h.s. presents the cavity, emitter and polariton dispersions [energy as a function of wave vector $(\mathbf{q})$] according to the multimode generalization of the TC model.}\label{orgsk}\end{figure} Introduction of emitter disorder and cavity losses to the JC and TC models does not change the essential conclusions of the above discussion as long as $\hbar\Omega_R$ remains larger than the broadening due to the photonic and emitter damping.
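The $\sqrt{N}$ enhancement and the $N-1$ dark states can also be checked directly by diagonalizing the $N_{\text{exc}}=1$ block of $H_{\text{TC}}$. A minimal sketch ($\hbar = 1$, arbitrary units):

```python
import numpy as np

# N_exc = 1 block of the TC Hamiltonian for N identical emitters, in the
# basis {|g,1>, |e^(1),0>, ..., |e^(N),0>}, hbar = 1.
def tc_block(omega_c, omega_s, g, N):
    H = omega_s * np.eye(N + 1)
    H[0, 0] = omega_c
    H[0, 1:] = H[1:, 0] = -g
    return H

N, g = 100, 0.01
E = np.linalg.eigvalsh(tc_block(2.0, 2.0, g, N))  # sorted ascending
# Lowest/highest eigenvalues are the LP/UP, split by 2*sqrt(N)*g; the
# remaining N - 1 eigenvalues sit at the bare emitter energy (dark states).
splitting = E[-1] - E[0]
```

With $N=100$ the splitting is ten times the single-emitter value, while the dark manifold remains pinned at $\omega_s$.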
For instance, while inhomogeneous broadening breaks the degeneracy and delocalization of the dark manifold and leads to photonic transfer (intensity borrowing) from the LP and UP to these states, the fraction of transferred photon character is proportional to $|\eta/\hbar \Omega_R|^2$, where $\eta$ is the energetic width of the inhomogeneous disorder. For $\hbar \Omega_R \gg \eta$, the photonic contribution to the TC dark states is very small and may be neglected in most cases \cite{houdre_vacuum-field_1996} (though it can be significant in cavity-absorption measurements). The same analysis allows us to conclude that inhomogeneous broadening is suppressed in polariton spectra. In fact, this has been repeatedly observed in atomic and inorganic semiconductor quantum well cavity-polaritons where the emitters are approximately structureless (see e.g., \cite{manceau2017}). We emphasize that the above discussion disregards any dependence of the hybrid cavity Hamiltonian on real-space position or wave vector. In reality, the emitters are spatially distributed within the cavity volume and interact differently with the cavity-mode continuum (Fig. \ref{cavmod}) according to their positions (in the case of molecular systems there is also a dependence on the orientation of the transition dipole moment). Thus, the LP and UP define bands with dispersion given by $\omega_{\rm LP}(\mathbf{q})$ and $\omega_{\rm UP}(\mathbf{q})$, respectively (Fig. \ref{orgsk}). The effects of disorder are more complex in this case, since they may lead to strong polariton localization and scattering \cite{agranovich2003, litinskaya_loss_2006, agranovich_nature_2007, litinskaya_propagation_2008}. Nevertheless, if the emitter density of states (DOS) is much larger than that of the cavity EM field, then the dark molecular modes still constitute the majority of the states of the hybrid microcavity.
This observation is crucial for the investigation of relaxation dynamics in molecular polaritons, and we devote more attention to it in Sec. \ref{orgrel}. The aspects of the strong coupling regime discussed in this section are essential for a description of molecular polaritons. Nonetheless, the TC model is still too primitive for most chemistry purposes since the two-level systems mimicking electronic states carry no internal (for example, vibronic) structure which is fundamental for the description of molecular dynamics. \section{Organic exciton-polaritons}\label{orgp} Organic semiconductors are suitable materials for strong coupling due to their large transition dipole moments and sufficiently narrow linewidths \cite{agranovich2009excitations}. In fact, the first observations of molecular cavity-polaritons originated from the coupling of an optical microcavity with the organic excitons of a Zn-porphyrin dye \cite{lidzey1998}, and later with J-aggregate films \cite{lidzey1999}. Recent years have seen many remarkable developments including demonstrations of reversible optical switching \cite{schwartz2011}, suppression of photochemical reactivity \cite{hutchison_modifying_2012}, room-temperature polariton lasing \cite{kena-cohen2010} and Bose-Einstein condensation \cite{plumhof_room-temperature_2014}, enhanced charge conductivity \cite{orgiu_conductivity_2015}, and long-range excitation energy transfer \cite{coles2014b, zhong2017, georgiou2018}. We introduce the effective Hamiltonian of organic polaritons in Sec. \ref{orgbas}, review their relaxation dynamics in Sec. \ref{orgrel}, and discuss some of their applications in light of the presented theoretical framework in Secs. \ref{orgappI}, \ref{orgappII}, and \ref{orgappIII}.
\begin{table*} \caption{Timescales relevant for the description of organic (J-aggregate) microcavity relaxation dynamics} \label{tbl:compare}\centering % \begin{tabular}{>{\centering}m{3.5cm}>{\centering}m{3cm}>{\centering}m{3cm}>{\centering}m{3.0cm}>{\centering}m{2.0cm}} \hline Process & Initial state(s) & Final state(s) & Timescale & Ref.\tabularnewline \hline Rabi splitting & \textemdash{} & \textemdash{} & $15-50~\rm{fs}$ ($80-300~\rm{meV}$) & \cite{hobson2002}\tabularnewline \hline Cavity leakage & cavity photon & \textemdash{} & $ 35-100~\rm{fs}$ & \cite{coles2011a, michetti2008}\tabularnewline \hline & exciton & \textemdash{} & $10-1000~\rm{fs}$ & \cite{taylor1984, reiser1982} \tabularnewline & UP & incoherent excitons & $\sim 50~\rm{fs}$ & \cite{agranovich2003}\tabularnewline Vibrational relaxation & incoherent excitons & LP & $\sim 10~\rm{ps}$ & \cite{litinskaya2004}\tabularnewline & incoherent excitons & UP & $\sim 1~\mu$s & \cite{coles2011}\tabularnewline \hline & UP & \textemdash{} & $\sim 100$ fs & \cite{michetti2008}\tabularnewline Photoluminescence & LP & \textemdash{} & $\sim 1 ~\rm{ps}$ & \cite{wang2014}\tabularnewline & bare exciton & \textemdash{} & $\sim 1-20 ~\rm{ps}$ & \cite{furuki2001, song2004, schwartz_polariton_2013, wang2014}\tabularnewline \hline \end{tabular} \end{table*} \subsection{Effective descriptions} \label{orgbas} The main novelty introduced by organic (Frenkel) exciton-polaritons \cite{may2004charge, agranovich2009excitations} is the significant local \textit{vibronic coupling} of the molecular excited-states with inter and intramolecular vibrational modes. This gives rise to inhomogeneously broadened linewidths, vibronic progressions, and Stokes shifts in the optical spectra of organic systems \cite{may2004charge, agranovich2009excitations}. It also gives rise to photochemical reactivity. 
Thus, it is unsurprising that vibronic coupling\textemdash absent from the JC and TC models\textemdash is a source of novel organic exciton-polariton behavior. The simplest way exciton-phonon coupling affects polariton behavior is by introducing an \textit{efficient} channel for nonradiative polariton decay (Table I). This happens because observed organic microcavity Rabi splittings are of the order of a few hundred meVs (Table I). In this energy interval, it is common for the molecular environment to have significant phonon DOS, which therefore plays an important role in assisting polariton relaxation. Agranovich et al. first recognized in a seminal work \cite{agranovich2003} the effects of inhomogeneities on the organic polariton spectrum, and similarly, the role of resonant phonon emission and absorption on polariton relaxation dynamics. In particular, in Ref. \cite{agranovich2003}, the authors employed a macroscopic electrodynamics model to show that when the main source of disorder is inhomogeneous broadening of the molecular system (typically the case for organic microcavities), only the LP and UP states within a specific region of wave vector space (near the photon-exciton resonance) achieve large coherence lengths (typically a few $\mu m$) \cite{agranovich2003, aberra_guebrou_coherent_2012}. The remaining states with significant molecular character may be considered for practical purposes to form an incoherent reservoir [containing both the dark states described in Sec. \ref{jctcs} (Fig. \ref{jctc}) and also polaritons localized due to inhomogeneities] which is weakly-coupled to the cavity. Given that the latter are much more numerous than molecular polaritons, they form energy \textit{traps} fundamentally important in relaxation dynamics, as we discuss in Secs. \ref{orgrel}, \ref{orgappII}, and \ref{orgappIII}. Here we note that while the treatment in Ref. 
\cite{agranovich2003} is phenomenological, its fundamental conclusions were later confirmed by various numerical simulations and experimental data \cite{litinskaya2004, michetti_polariton_2005, litinskaya_loss_2006, agranovich_nature_2007, michetti_polariton_2008}. We discuss the relaxation dynamics of organic microcavities according to this picture in more detail in Sec. \ref{orgrel}. An alternative approach to the investigation of organic cavity-polaritons was introduced by {\'C}wik et al. \cite{cwik2014}, who investigated their properties with a generalization of the Holstein Hamiltonian \cite{holstein1959} appropriate for the study of strongly coupled systems. In this \textit{Holstein-Tavis-Cummings} (HTC) model, the TC emitters (Eq. \ref{htc}) are assigned one or more independent vibrational degrees of freedom; these are linearly coupled to each organic exciton in accordance with the displaced oscillator model of vibronic coupling \cite{holstein1959} \begin{equation} H_{\text{exc-ph}} = \sum_{i=1}^N \sum_{j=1}^{N_{\text{ph}}}\lambda_j \hbar \omega_j \sigma_{+}^{(i)} \sigma_{-}^{(i)}(b_{ij}+b_{ij}^\dagger), \label{excph} \end{equation} where the exciton operators follow the notation of the previous section, $b_{ij} (\omega_j)$ is the annihilation operator (natural frequency) of a harmonic phonon mode coupled to the $i$th-exciton, and $\lambda_j$ is the dimensionless vibronic coupling constant \cite{may2004charge}. Thus, in the absence of disorder, the (single-cavity mode) HTC Hamiltonian is given by \begin{equation} H_{\text{HTC}} = H_{\text{TC}} + H_{\text{ph}} + H_{\text{exc-ph}}, \label{htceq} \end{equation} where $H_{\text{ph}} = \sum_{i=1}^N \sum_{j=1}^{\text{N}_{\text{ph}}} \hbar \omega_j b_{ij}^\dagger b_{ij}$ generates the free dynamics of $N_{\text{ph}}$ phonon modes per exciton. The single-photon (exciton) eigenstates of Eq. \ref{htceq} (with a single phonon mode per molecule) were systematically investigated by Herrera and Spano in Refs. 
\cite{herrera2016, herrera_absorption_2017, herrera_dark_2017, herrera2018} (see also \cite{zeb2018, cwik_excitonic_2016}). These authors reported qualitatively distinct stationary states for $H_{\text{HTC}}$ depending on the ratio of Rabi splitting to phonon frequency $\Omega_R/\omega_v$. An important limit (with consequences discussed in Secs. \ref{orgappI} and \ref{orgappIII}) occurs when the light-matter interaction is much stronger than the local vibronic coupling, \textit{i.e.}, $\Omega_R/\lambda_v \omega_{v} \gg 1$. In this case, the phenomenon of \textit{polaron decoupling} is manifested \cite{spano_optical_2015, herrera2016}. This refers to a significant suppression of the vibronic coupling in the polariton states of a molecular ensemble strongly coupled to a microcavity (as discussed in Sec. \ref{jctcs}). It occurs as a consequence of the delocalized character of the polariton states (inherited from the photonic coherence volume and forced by the strong light-matter interaction); when the Rabi splitting is a few times larger than the considered vibronic couplings, the polaritons become (to a large extent) immune to the local (vibronic) perturbations acting on the excitonic states. This intuitive effect was studied long ago, as it is also the reason that delocalized excitations of organic J-aggregates have narrower lineshapes and weaker Stokes shift than the corresponding monomers \cite{knapp1984}. Further discussion of the different regimes of the HTC model is given in Refs. \cite{herrera2018, zeb2018}. The HTC model was also applied to the study of polariton effects on electron transfer \cite{herrera2016} (Sec. \ref{orgappI}), Raman spectra \cite{strashko2016}, and organic polariton photoluminescence \cite{herrera_absorption_2017}.
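To make the structure of Eq. \ref{htceq} concrete, the following minimal numerical sketch diagonalizes its single-excitation block for a single molecule with one phonon mode coupled to one cavity mode (a drastic truncation of the HTC model; all parameter values below are illustrative and are not taken from any of the cited works):

```python
import numpy as np

# Single-excitation block of the HTC Hamiltonian (Eq. htceq) for ONE molecule,
# ONE phonon mode, and ONE cavity mode. Basis: |photon; n phonons> for n < M,
# followed by |exciton; n phonons>. All parameters are illustrative (eV).
hbar_wc, hbar_w0 = 2.0, 2.0   # cavity and exciton energies (resonant, assumed)
hbar_wv = 0.2                 # phonon quantum (assumed)
lam = 1.0                     # dimensionless vibronic coupling lambda (assumed)
g = 0.05                      # single-molecule light-matter coupling (assumed)
M = 10                        # phonon-basis truncation

dim = 2 * M
H = np.zeros((dim, dim))
for n in range(M):
    H[n, n] = hbar_wc + n * hbar_wv          # photon branch: free phonons
    e = M + n
    H[e, e] = hbar_w0 + n * hbar_wv          # exciton branch
    H[n, e] = H[e, n] = g                    # TC coupling (phonon-conserving)
    if n + 1 < M:                            # vibronic term: lam*wv*(b + b^dag)
        H[e, e + 1] = H[e + 1, e] = lam * hbar_wv * np.sqrt(n + 1)

evals = np.linalg.eigvalsh(H)                # sorted ascending
print("lowest eigenvalues (eV):", np.round(evals[:3], 4))
```

Setting `lam = 0` recovers the bare TC doublet at $\hbar\omega_c \pm \hbar g$ (plus phonon replicas), while increasing `lam` dresses each polariton with a vibronic progression, illustrating the competition between $\Omega_R$ and $\lambda_v\omega_v$ discussed above.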
Notably, when vibrational relaxation and cavity leakage happen at rates comparable to the Rabi frequency \cite{herrera2018}, the behavior of the HTC eigenstates is essentially similar to that given by the theory first introduced by Agranovich et al. \cite{agranovich2003}. In this case, a simpler kinetic approach \cite{agranovich2003, herrera2018} where vibronic coupling acts as a weak perturbation inducing incoherent scattering (see next Secs.) is well-suited to the description of organic polariton relaxation dynamics and photoluminescence. In particular, simulations of both phenomena are consistent with the LP being the main source of photoluminescence in microcavity experiments (though it was recently shown that with surface plasmons as the electromagnetic component, van Hove singularities arise and enable ultrafast photoluminescence from the UP \cite{yuen-zhou2017}). Given the timescales presented in Table I, the incoherent treatment of polariton-phonon dynamics is well-justified in many cases. Our further considerations will be based on it unless otherwise mentioned. \subsection{Relaxation dynamics}\label{orgrel} \begin{figure} \centering \includegraphics[scale=0.5]{dos} \caption{Effect of DOS on vibrational-relaxation dynamics in the regime of strong coupling between $N$ excitons and a single photon mode. For large $N$, decay from UP to dark states (DOS $ \approx (N-1)/\text{exciton linewidth}$) is much faster than that from dark states to LP (DOS $\approx 1/\text{LP linewidth}$) because the transition rate scales with the final-state DOS. When many cavity modes are considered, the polariton bands have a larger DOS, though one still much smaller than that of their dark-state counterpart.} \label{dosf} \end{figure} Given the lossy character of the microcavities and plasmonic layers routinely employed in strong coupling experiments \cite{kavokin2017microcavities}, any practical use of organic polariton devices must account for the dissipative processes which may affect their performance.
Typically, the damping of both bare cavity-photons and organic excitons can be reasonably approximated with a Markovian master equation treatment \cite{nitzan2006chemical, gardiner1991quantum}. This approach assumes these degrees of freedom interact weakly with a macroscopic bath characterized by its system-dependent spectral density \cite{nitzan2006chemical}. A choice needs to be made of whether each molecule has an independent bath, or a single set of environment modes interacts with all excitons. Both situations were explored in the work of del Pino et al. \cite{pino2015} (the pure-dephasing rates given in this work were corrected in Ref. \cite{martinez-martinez2018a}). Our discussion will assume the independent-baths scenario, which is the more realistic description for the study of disordered organic aggregates. The main dissipation channel for molecular polaritons involves the coupling of their photonic part to the external EM field modes via transmission through the cavity mirrors \cite{kavokin2017microcavities, savona1995}. This happens because most experiments employ optical resonators with low quality factor $Q$ ($=\omega_c/\kappa$, where $\kappa$ is the cavity leakage rate, equal to the full width at half maximum of the cavity mode of interest), thus leading to cavity-photon escape rates which are faster than molecular fluorescence or nonradiative decay (Table I). As mentioned, organic exciton-polaritons may also decay nonradiatively by vibronic coupling with molecular phonon modes (Table I). Such relaxation occurs between polariton and dark states and is well described with Fermi's golden rule (FGR) \cite{dirac1927}. According to this framework, a quantum transition with higher density of final states will exhibit a faster rate than one with lower DOS if both processes are mediated by the same perturbations. The prominence of this DOS dependence in organic polariton relaxation dynamics was first characterized in Ref.
\cite{agranovich2003}, which showed that via local phonon emission (Eq. \ref{excph}), the UP decays to the dark manifold much faster than the latter decays to the LP (Fig. \ref{dosf}). Agranovich et al. \cite{agranovich2003} considered a single Raman-active phonon mode \cite{tartakovskii2001} with frequency nearly matching the Rabi splitting in Eq. \ref{excph} and an inhomogeneously broadened spectral distribution of incoherent excitons for the dark band. The resulting vibrational-relaxation time to these ``dark'' states (with one phonon) from the UP (with zero phonons and formed from the exciton-resonant cavity mode) was determined to be $\sim 50~\text{fs}$ (Table I). This timescale is in good agreement with the low UP photoluminescence observed experimentally, given that the typical resolution of these measurements is on the order of $100~\rm{fs}$ \cite{zhong2017}. In contrast, a timescale of $\sim 10$ ps (Table I) for the transition from the dark states to the LP band was obtained in Ref. \cite{litinskaya2004}. Qualitatively, the difference between the rates of these relaxation processes is a direct manifestation of the final DOS in each case (Fig. \ref{dosf}). Indeed, even when considering light-matter coupling to the entire cavity-mode continuum (Sec. \ref{optspec}), the vast majority ($70-99\%$)\cite{agranovich2002, agranovich2003, litinskaya2004} of states with significant exciton character are dark/incoherent. Therefore, the latter form a reservoir which acts as an energy \textit{sink}. While inelastic scattering of dark modes may also increase the UP population, this process is relatively suppressed (timescale $\sim 1~\mu\text{s}$; Table I) as dictated by detailed balance. To further corroborate the association of vibrational relaxation with photoluminescence, Michetti and LaRocca simulated organic microcavity emission with a kinetic model based on rates obtained with FGR \cite{michetti2008, michetti2009}.
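The orders-of-magnitude asymmetry between these rates follows directly from the FGR scaling with the final-state DOS; a back-of-the-envelope sketch (all numbers below are assumed, for illustration only):

```python
import math

# Illustrative Fermi-golden-rule estimate of the DOS asymmetry: since
# rate ~ |V|^2 * rho(E_final), and the same vibronic perturbation mediates
# both processes, UP -> dark outpaces dark -> LP roughly by the ratio of
# final-state densities.
N = 1e8                  # excitons in a photon coherence volume (assumed)
sigma_exc = 50e-3        # inhomogeneous exciton linewidth, eV (assumed)
gamma_LP = 5e-3          # LP linewidth, eV (assumed)

rho_dark = (N - 1) / sigma_exc   # ~N dark states spread over exciton linewidth
rho_LP = 1.0 / gamma_LP          # a single LP state per linewidth
ratio = rho_dark / rho_LP
print(f"UP->dark vs dark->LP DOS ratio ~ {ratio:.1e}")

# Detailed balance: upward dark -> UP scattering is thermally suppressed.
kT = 0.025               # room temperature, eV
delta_E = 0.15           # dark-to-UP gap ~ Omega_R/2, eV (assumed)
boltzmann = math.exp(-delta_E / kT)
print(f"thermal suppression factor exp(-dE/kT) ~ {boltzmann:.1e}")
```

With these (hypothetical) numbers the DOS ratio alone spans many orders of magnitude, consistent with the fs-vs-ps timescale contrast quoted above, and the Boltzmann factor rationalizes the additional suppression of the dark-to-UP channel.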
These simulations accurately reproduced experimental results, specifically the ratio of photoluminescence intensity of both polariton bands, as well as their temperature dependence \cite{michetti2008, michetti2009}. \subsection{Polariton-mediated photochemical reactivity}\label{orgappI} The first report of drastic effects of polaritons on photochemistry was given by Hutchison et al. \cite{hutchison_modifying_2012}. In particular, a reduced rate was observed for spiropyran-merocyanine photoisomerization under conditions where the \textit{product} of the transformation is resonant with the optical cavity. Later, Galego et al. \cite{galego2016} proposed a mechanism for the suppression of polariton-mediated photochemical reactions where the \textit{reactants} are the strongly coupled species. In this case, the reaction rate decreases because the effective LP potential energy surface (PES) has a contribution from the (largely) non-reactive electronic ground-state PES (the reaction was assumed to proceed through the LP) \cite{galego2016}. Yet another example of polariton-mediated chemical reactivity was presented by Herrera and Spano \cite{herrera2016}. In this work, the regime of polaron decoupling (Sec. \ref{orgbas}) was assumed in order to show that nonadiabatic intramolecular electron transfer (ET) rates can be enhanced or suppressed when the electron-donor is strongly coupled with an optical cavity. A lower (higher) ET rate was shown to arise when the bare excited-donor and acceptor equilibrium geometries are displaced along the same (opposite) direction(s) relative to the electronic ground-state. In this case, the strong light-matter interaction induces a reduction (increase) of the difference between the electronically excited donor and acceptor equilibrium geometries, which effectively accelerates (inhibits) the reaction. Given that the energetics of the electronically excited-states determines the ET driving force, the manipulation of the polariton energies (Sec.
\ref{jctcs}) provides yet another knob for the control of ET processes. \subsection{Polariton-assisted remote long-range energy transfer}\label{orgappII} \begin{figure} \includegraphics[width=\columnwidth]{paret} \caption{Representations of polariton-assisted remote energy transfer (PARET), where donor-acceptor separation $\Delta z\approx 1\text{ }\mu\text{m}$ for various cases of strong coupling to surface plasmons. (a) ``Carnival effect'' (\textit{i.e.}, role-reversed) PARET from dense slab of acceptors (featuring SC to SPs) to dilute monolayer of donors. Inset: cartoon highlighting the ``carnival effect'', or role reversal, between donors and acceptors. (b) PARET from dense slab of strongly coupled donors to dilute monolayer of acceptors. (c) PARET from dense slab of donors to dense slab of acceptors (both are strongly coupled to SPs). Inset: cartoon illustrating the vibrational relaxation that mediates PARET in this case. } \label{fgr:paret} \end{figure} Excitation energy transfer (EET) converts the excitation of a donor (D) molecular species into that of a resonant acceptor (A) species \cite{may2004charge}. In most cases, this process is mediated by nonradiative dipole-dipole interactions and referred to as F\"orster resonance energy transfer (FRET) \cite{forster_zwischenmolekulare_1948}. However, it is limited to molecular separations of $\sim 1-10~\text{nm}$ \cite{medintz2013fret}. Recently, it has been shown experimentally that efficient \textit{long-range} EET can be achieved in organic microcavities in the strong coupling regime \cite{coles2014b, zhong2017, georgiou2018}. A variant of this process was first studied by Basko et al. \cite{basko2000}, who investigated the effects of acceptor strong coupling on the decay of weakly-coupled donor excited-states without emphasis on the distance-independent character of the microcavity energy transfer.
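The nm-scale range of conventional FRET quoted above follows from the $(R_0/R)^6$ rate law; a short sketch with typical, assumed values of the F\"orster radius and donor lifetime makes the contrast with $\mu$m-scale PARET explicit:

```python
import math

# Forster distance dependence: k_FRET(R) = (1/tau_D) * (R0/R)^6, so the
# nonradiative rate is negligible at the ~micron separations bridged by PARET.
# tau_D and R0 are typical assumed values, not fits to any experiment.
tau_D = 1e-9        # donor excited-state lifetime, s (assumed)
R0 = 5e-9           # Forster radius, m (assumed, within the 1-10 nm range)

def k_fret(R):
    """FRET rate (s^-1) at donor-acceptor separation R (m)."""
    return (1.0 / tau_D) * (R0 / R) ** 6

for R in (5e-9, 10e-9, 1e-6):
    print(f"R = {R * 1e9:7.1f} nm -> k_FRET = {k_fret(R):.3e} s^-1")
```

At $R = R_0$ the transfer rate matches the donor decay rate (50\% efficiency), while at $1~\mu$m it is suppressed by $\sim\!14$ orders of magnitude, which is why micron-range transfer requires the polaritonic mechanisms discussed next.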
In a recent work \cite{du2017}, we provided a comprehensive theory of this phenomenon, which we denoted \textit{polariton-assisted remote energy transfer} (PARET, Fig. \ref{fgr:paret}). The setup included separated donor and acceptor molecular slabs placed above a plasmonic layer. The distance-dependence of energy transfer rates was examined for \textit{exclusive} donor or acceptor strong coupling, and also for the case where \textit{both} chemical species are strongly coupled to the plasmonic layer. The effective Hamiltonian we employed is a simple generalization of the previously discussed models; it is given by \begin{equation} H=H_{\text{D}}+H_{\text{A}}+H_{\text{P}}+H_{\text{DA}}+H_{\text{DP}}+H_{\text{AP}}, \label{hparet} \end{equation} where the first two terms on the right-hand side are HTC Hamiltonians (Eq. \ref{htceq})\textemdash modified to include spatial variations in the light-matter coupling\textemdash for donors and acceptors. Similarly, the SPs are described by $H_{\text{P}}$, which contains terms that describe both coherent and lossy plasmon dynamics. The interaction part of Eq. \ref{hparet} includes the weak dipole-dipole coupling between donor and acceptor states ($H_{\text{DA}}$) and plasmon resonance energy transfer (PRET) between excitons and SPs ($H_{\text{DP}}, H_{\text{AP}}$) \cite{liu2007}. Each strong coupling scenario (whether only one or both molecular slabs are strongly coupled) is associated with a distinct partitioning of $H$ into a zeroth-order Hamiltonian $H_{0}$ and a (weak) perturbation $V$. For instance, when the donor is the only strongly coupled species (Fig. \ref{fgr:paret}b), $H_{0} = H_{\text{A}} + H_{\text{D}} + H_{\text{P}}+H_{\text{DP}}$ and $V = H_{\text{DA}}+H_{\text{AP}}$. Given the partitioning appropriate for each scenario, the EET rates are obtained with FGR.
In this case, EET from donor polaritons to bare acceptors was theoretically predicted to happen even at micron donor-acceptor separations \cite{du2017}. Such PARET is attributable to the PRET contribution, which decays evanescently from the metal surface over distances of up to microns, depending on the wave vector of the resonant SP. In contrast, the EET rate from the purely excitonic (donor) dark states to acceptors approaches that obtained in bare FRET for a sufficiently thick donor slab, as intuitively expected from a dense set of purely excitonic states. Conversely, strong coupling to only acceptors actually leads to a donor-to-polariton rate that is significantly smaller than the bare FRET rate. In analogy to our discussion of relaxation dynamics in Sec. \ref{orgrel}, this arises because the polariton band onto which the transfer is expected to happen has a much lower DOS than the dark state manifold. Furthermore, as in donor-exclusive strong coupling, the donor-to-dark-acceptors EET rate converges to the bare FRET rate (for a sufficiently thick acceptor slab). However, for intense enough acceptor-SP coupling, the donors and acceptors actually reverse roles (``carnival effect'', Fig. \ref{fgr:paret}a) \cite{du2017}. In contrast, in a different regime where strong coupling is realized with both donors and acceptors (Fig. \ref{fgr:paret}c), long-range EET is mediated by vibrational relaxation \cite{du2017}. This induces transitions among polaritons\textemdash delocalized across donors and acceptors\textemdash and dark states with common excitonic character. By the same DOS arguments just discussed, EET to polaritons is much slower than that to the dark state manifolds. Nevertheless, the former is calculated to outcompete fluorescence, and the latter occurs as fast as molecular vibrational relaxation \cite{du2017}.
Consequently, PARET from a mainly-donor to a mostly-acceptor state is theoretically attainable for chromophoric-slab separations of at least hundreds of nm. In fact, the computed rates for this case are in qualitative agreement with experimental data, even when the nature of the electromagnetic modes differs from one study to the other \cite{coles2014b}. It is worth mentioning that other schemes have been theoretically proposed to enhance excitation energy transport by exploiting strong light-matter coupling \cite{feist_extraordinary_2015, schachenmayer_cavity-enhanced_2015} (in conjunction with novel methods of topological protection \cite{yuen-zhou2014, yuen-zhou_plexciton_2016}). \subsection{Polariton-assisted singlet fission}\label{orgappIII} \begin{figure} \includegraphics[width=\columnwidth]{scheme_sf} \caption{Pictorial representation of singlet fission in pentacene. Blue (red) denotes a singlet (triplet) exciton.}\label{sf}\end{figure} Singlet fission (SF) is a spin-allowed process where a (one-body) singlet exciton is converted into a (two-body) triplet-triplet (TT) state with vanishing total spin (Fig. \ref{sf}) \cite{smith2010, smith2013}. This phenomenon is of fundamental importance to the energy sciences, as it has been proven to enhance the efficiency of organic solar cells \cite{congreve2013, yost2014} by increasing the number of excitons produced per photon absorbed by an organic photovoltaic device, \textit{i.e.}, the external quantum yield (EQY). Given the demonstrated ability of molecular polaritons to influence chemical dynamics, it is natural to enquire what possibilities exist for the control of singlet fission in organic microcavities. In Ref. \cite{martinez-martinez2017b}, we proposed a model for the investigation of polariton-assisted SF of acene chains in a microcavity which, for comparison purposes, also considered the competition of SF with other singlet quenching mechanisms.
In order to quantitatively establish the effects of strong coupling on TT yield, Mart\'inez-Mart\'inez et al. employed the Pauli master equation formalism \cite{nitzan2006chemical, may2004charge}. The results highlight again (Secs. \ref{orgrel} and \ref{orgappII}) the essential influence (Fig. \ref{dosf}) of strong coupling on the DOS of donor and acceptor manifolds: the polariton manifold has a small DOS in comparison to the dark and TT manifolds. As a consequence, polariton decay to either dark or TT states is significantly faster than the reverse process. Another important finding is that to achieve polariton-based enhancement in the TT yield of an arbitrary SF material, the ideal candidate must have $\Delta G_{\text{SF}} = E_{\text{TT}}-E_{\text{S}}\ll0$ (see Fig. \ref{sff}). In this way, for sufficiently large Rabi splitting, the LP can be tuned close to resonance with a high-frequency bath mode (known as the inner-sphere in Marcus theory literature \cite{jortner1976}) of the TT states. This reduces the energy barrier between the donor (LP) and acceptor states with respect to the bare material. Moreover, detailed balance implies thermal suppression of vibrational relaxation upward from the LP to dark states at large Rabi splittings, and the most favorable decay channel directs singlet excited-state population to the TT manifold. In summary, Ref. \cite{martinez-martinez2017b} indicates that under experimentally accessible conditions polariton-assisted SF can outcompete SF quenching mechanisms, and turn materials with poor EQY into highly efficient sensitizers. \begin{figure} \includegraphics[width=\columnwidth]{scheme_SCf} \caption{Scheme of the transfer processes relevant to SF under normal (top) and strong coupling (bottom) conditions. Solid (jagged) arrows indicate radiative (nonradiative) decay processes. Dashed arrows account for transitions between states with different electronic character.
Thicker lines indicate larger DOS.} \label{sff} \end{figure} \section{Vibrational-polaritons}\label{vibp} Vibrational-polaritons occur when dipole-active molecular vibrations interact strongly with the EM field of a microcavity (Fig. \ref{ircavf}). Studies of these novel excitations are stimulated by the possibilities they may offer for the \textit{selective} control of chemical bonds. In particular, there is interest in employing vibrational strong coupling (VSC) to, e.g., catalyze or inhibit chemical reactions \cite{thomas_ground-state_2016}, suppress or enhance intramolecular vibrational relaxation, and control the nonlinear optical response of molecular systems in the infrared (IR) \cite{xiang2017, dunkelberger2018b}. Furthermore, vibrational-polaritons might also provide much-desired novel sources of coherent mid-IR light. Solid-state phonon-polaritons have been investigated since the 1960s \cite{henry1965, mills1974}, and some early studies of liquid-phase molecular vibrational-polaritons date back to the 1980s \cite{denisov1987}. However, it is only recently that \textit{cavity (or surface-plasmon) vibrational-polaritons} have been observed and systematically studied in the polymeric, liquid, and solution phases \cite{shalabney_coherent_2015, george_liquid-phase_2015, long_coherent_2015, muallem2016a, muallem2016, simpkins_spanning_2015, casey_vibrational_2016, vergauwe_quantum_2016, thomas_ground-state_2016, dunkelberger_modified_2016, ahn2018, kapon_vibrational_2017, memmi2017, crum2018}. This is significant because many important chemical reactions happen in the liquid phase. Notably, under illumination with weak fields, the response of vibrational microcavities is similar to that of organic exciton-polaritons \cite{fano1956}.
However, fundamentally novel behavior of vibrational-polaritons can be observed when higher excitations \cite{mukamel1999principles, hamm2011concepts} of the hybrid system are optically \cite{dunkelberger_modified_2016, xiang2017, ribeiro2017c, dunkelberger2018b} or thermally \cite{thomas_ground-state_2016, chervy2018} accessed. In this Sec. we provide an overview of the properties of vibrational-polaritons with emphasis on those features that are qualitatively distinct from those of exciton-polaritons. We review the basic theory of VSC in Sec. \ref{vib_bas}, and discuss recently reported experimental and theoretical results on the \textit{nonlinear} interactions of vibrational-polaritons in Sec. \ref{vib_trans}. We conclude our discussion of IR strong coupling with some comments on recent tantalizing experimental observations of non-trivial VSC effects on chemical reactivity and IR emission which have been reported by the Ebbesen group \cite{thomas_ground-state_2016, chervy2018}. \subsection{Basic features of vibrational strong coupling}\label{vib_bas} In contrast to the electronic case, the dynamics of a bare vibrational degree of freedom can be well approximated at low energies by a \textit{weakly} anharmonic oscillator \cite{herzberg1939molecular}. This implies, e.g., that the $v = 1\rightarrow v = 2$ vibrational transition frequency $\omega_{12}$ is only weakly detuned from $\omega_{01}$, and the \textit{effective} transition dipole moment $\mu_{12}$ can be expressed as $\mu_{12} = \sqrt{2}\mu_{01}(1+\beta)$, where $\beta$ is typically a small number. However, these anharmonic properties can only be manifested in experiments that probe the \textit{nonlinear} \cite{mukamel1999principles, yuen2014ultrafast} optical response of vibrational-polaritons (Sec. \ref{vib_trans}). Still, there are important differences between the linear optical response of vibrational- and organic exciton-polaritons.
For instance, while the excitonic strong coupling of organic aggregates is facilitated by their large transition dipole moments (e.g., $\mu_{01} \approx 5-15~\rm{D}$ in the case of J-aggregates \cite{valleau2012}), the intensity of vibrational transitions is often much weaker in comparison (in general $\mu_{01} <1.5~\rm{D}$). Thus, the Rabi splittings of vibrational-polaritons ($5-20$ meV \cite{shalabney_coherent_2015, long_coherent_2015, george_liquid-phase_2015, casey_vibrational_2016, thomas_ground-state_2016, vergauwe_quantum_2016, crum2018, memmi2017}) are generally smaller than those of organic microcavities (Table I). However, vibrational linewidths are often much smaller than those of organic excitons. In addition, resonant IR microcavities have lower photon leakage rates (lifetimes of $0.1-5~\text{ps}$ \cite{shalabney_coherent_2015, long_coherent_2015, george_liquid-phase_2015, casey_vibrational_2016, thomas_ground-state_2016, vergauwe_quantum_2016, crum2018, memmi2017}) than organic microcavities (Table I), since the wavelengths of vibrational transitions generally lie in the mid-IR ($\lambda = 3-30~\mu\text{m}$) \cite{kavokin2017microcavities}. Thus, there exist many opportunities for strong coupling of cavity EM fields with molecular vibrational degrees of freedom. Typically, molecular vibrations with large absorptivity involve polar functional groups such as carbonyl (C$=$O), amide ($\text{H}_2\text{N}-\text{C$=$O}$) and cyanide ($\rm{C}\equiv N$). In fact, most of the observed vibrational-polaritons arose from the strong coupling of IR cavities with the $\rm C=O$ or $\rm C\equiv N$ bonds of organic polymers \cite{shalabney_coherent_2015, long_coherent_2015, muallem2016}, neat organic liquids \cite{george_liquid-phase_2015}, polypeptides \cite{vergauwe_quantum_2016}, transition metal complexes \cite{casey_vibrational_2016, simpkins_spanning_2015}, and liquid crystals \cite{hertzog2017}.
Yet, given the dependence of the collective Rabi splitting on the molecular density, the strongly coupled bonds need not be highly polar; in fact, vibrational-polaritons have also been reported for alkene ($\text{C}=\text{C}$) \cite{george_liquid-phase_2015} and silane ($\text{C}-\text{Si}$) \cite{thomas_ground-state_2016} bonds. \begin{figure}[h] \centering \includegraphics[scale=0.35]{ircav} \caption{Representation of strong coupling between a planar optical cavity and the carbonyl bonds of polyvinyl acetate chains.}\label{ircavf}\end{figure} Based on the discussion above, the \textit{harmonic} Hamiltonian describing VSC of a lossless single-mode IR cavity with an ensemble of $N$ independent \textit{identical} molecular vibrational modes is given by \begin{equation} H^{(0)} = \hbar \omega_c a^\dagger a + \hbar\omega_0 \sum_{i=1}^N b_i^\dagger b_i - \hbar g_s \sum_{i=1}^N \left(b_i^\dagger a + a^\dagger b_i\right), \label{h0v} \end{equation} where $b_i$ ($b_i^\dagger$) denotes the \textit{bosonic} annihilation (creation) operator for the vibration localized at molecule $i$, and the other constants are defined in Sec. \ref{bas}. A more realistic description of the system would include the dissipative dynamics of both cavity and matter degrees of freedom. However, here simplifications arise relative to the description of organic exciton-polaritons: vibrational spectra show no Stokes shift, and their absorption bands are in some cases dominated by homogeneous broadening. Thus, it is in general easier to model the effects of cavity and vibrational damping on the polariton linewidths. In particular, the vibrational environment (represented by both intra- and intermolecular degrees of freedom) may be accurately modeled as a thermal distribution of harmonic oscillators (bath) which interact weakly with the system \cite{nitzan2006chemical}.
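The linear response implied by Eq. \ref{h0v} can be sketched by diagonalizing its single-excitation block with phenomenological complex frequencies for the damped photon and bright vibrational modes (a common reduction; all parameter values below are illustrative, not taken from any experiment):

```python
import cmath

# Single-excitation spectrum of Eq. (h0v) with phenomenological damping: the
# cavity couples only to the bright (totally symmetric) vibrational
# superposition with collective strength g_s*sqrt(N); complex frequencies
# encode leakage/linewidths. Illustrative numbers, cm^-1.
w_c, kappa = 1983.0, 10.0   # cavity frequency and leakage rate (FWHM, assumed)
w_0, gamma = 1983.0, 3.0    # vibration frequency and linewidth (FWHM, assumed)
g_s, N = 0.05, 1.0e6        # single-molecule coupling and molecule number

g_N = g_s * (N ** 0.5)                  # collective (bright-mode) coupling
wc_t = w_c - 0.5j * kappa               # complex frequencies of damped modes
w0_t = w_0 - 0.5j * gamma
mean = 0.5 * (wc_t + w0_t)
disc = cmath.sqrt(g_N**2 + (0.5 * (wc_t - w0_t))**2)
lam_LP, lam_UP = mean - disc, mean + disc
print(f"LP peak {lam_LP.real:.2f} cm^-1, FWHM {-2 * lam_LP.imag:.2f} cm^-1")
print(f"UP peak {lam_UP.real:.2f} cm^-1, FWHM {-2 * lam_UP.imag:.2f} cm^-1")
print(f"Rabi splitting ~ {lam_UP.real - lam_LP.real:.1f} cm^-1")
# The remaining N-1 dark combinations stay at w_0 with linewidth gamma.
```

On resonance the polariton linewidths average those of the bare photon and vibration, and the splitting reduces to $2g_s\sqrt{N}$ in the lossless limit, which is why collective density can compensate weak single-molecule dipoles.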
It is again reasonable to assume that the bath of each vibrational degree of freedom is independent (Sec. \ref{orgrel}). Under these conditions and the usual assumptions of dissipative Markovian dynamics \cite{gardiner1991quantum, may2004charge}, it can be shown that the IR cavity optical response is determined by the normal mode frequencies and dissipation rates of the classical problem of two coupled damped oscillators representing the cavity photon and the bright (totally-symmetric) superposition of molecular excited-states (see, e.g., the Supporting Information of \cite{ribeiro2017c}). \subsection{Transient vibrational-polaritons}\label{vib_trans} The first pump-probe (pp) spectra of vibrational-polaritons were obtained by Dunkelberger et al. \cite{dunkelberger_modified_2016}. These experiments employed liquid-phase solutions of W(CO)$_6$ in hexane. The T$_{1u}$ triply-degenerate carbonyl mode was chosen to couple to the cavity as its effective transition dipole moment is relatively large ($\approx 1~\rm{D}$), and its linewidth is sufficiently small ($\approx 3 ~\text{cm}^{-1}$). Further insight into the nonlinear behavior of $\text{W(CO)}_6$ vibrational-polaritons was reported recently by Xiang et al. \cite{xiang2017}, who also provided the first 2D spectra of vibrational-polaritons. Both the pp and 2D spectra showed unambiguous evidence of \textit{asymmetric} polariton-polariton and polariton-dark-state interactions \cite{dunkelberger_modified_2016, xiang2017}, \textit{even} when linear reflectivity measurements showed equally intense LP and UP response. In particular, the IR cavity differential probe transmission (pp transmission minus linear probe transmission) displayed a consistently large (small) negative feature at the linear LP (UP) frequency, and positive shifted (relative to the linear spectrum) transmission resonances for the LP and UP (Fig. \ref{trvib}). These observations were interpreted in Ref.
\cite{ribeiro2017c} with a microscopic model of vibrational-polaritons which included the effect of vibrational anharmonicity on the polariton optical response. Both \textit{mechanical} and \textit{electrical} nonlinearities were added to the model described by Eq. \ref{h0v}. Mechanical (or bond) anharmonicity represents the tendency of bonds to break at high energies, while electrical anharmonicity occurs due to nonlinearity of the effective vibrational transition dipole moment with respect to small displacements of the nuclei from equilibrium (e.g., due to non-Condon effects \cite{gerhard1968infrared, khalil_coherent_2003, ishii2009}). In practice, the main effect of mechanical and electrical nonlinearities is to redshift overtone transitions from the fundamental and give band absorption intensities which violate the harmonic oscillator scaling, respectively. Thus, the effective Hamiltonian of anharmonic vibrational-polaritons interacting with a single cavity-mode can be written as \cite{ribeiro2017c} \begin{equation} H = H^{(0)} - \hbar \alpha \sum_{i=1}^N b_i^\dagger b_i^\dagger b_i b_i -\hbar \beta \sum_{i=1}^{N}\left(b_i^\dagger b_i^\dagger b_i a +a^\dagger b_i^\dagger b_i b_i \right), \label{hv} \end{equation} where $\alpha$ characterizes mechanical anharmonicity, \textit{i.e.}, $2\alpha = \omega_{01}-\omega_{12}$, and $\beta$ parametrizes the deviation of $\mu_{12}$ from that predicted for a harmonic dipole. This theory provided pp spectra with the same essential features as experimentally reported (Fig. \ref{trvib}) \cite{xiang2017, ribeiro2017c}. It shows that the pump-probe transmission contains three resonances resulting from the interaction of cavity photons with a population of molecular vibrations in the ground and first excited-states (the latter of which is a byproduct of the pump excitation of the system at earlier times). The largely suppressed probe-transmission (large negative signal in Fig.
\ref{trvib}) in a neighborhood of the linear LP frequency is a result of its near-resonance with the $1 \rightarrow 2$ transition of dark states. \begin{figure}[h] \centering \includegraphics[scale=0.5]{vibtr} \caption{Experimental and theoretical pump-probe (differential) transmission spectra of strongly coupled $\text{W(CO)}_6$ in an optical microcavity \cite{ribeiro2017c}. These results correspond to the case where the cavity photon and the molecular vibration (asymmetric C$=$O stretch) are resonant.} \label{trvib} \end{figure}The effect of the nonlinearity is weaker in UP since its frequency is highly off-resonant with $\omega_{12}$ (Fig. \ref{orbf}). Given that vibrational anharmonicity generally manifests as $\omega_{12} < \omega_{01}$, the much larger anharmonicity of LP (compared to UP) is expected to be a \textit{generic} feature of IR microcavities. In other words, the studies discussed in this Sec. indicate that vibrational-LP modes are \textit{softer} than the UP. Further corroboration of this theory came in a recent study by Dunkelberger et al. \cite{dunkelberger2018b} who measured the pump-probe spectra of the $\text{W(CO)}_6-\text{hexane}$ system at low concentrations such that the Rabi splitting was small enough for the LP to be off-resonant with $\omega_{12}$ by nearly $10~\text{cm}^{-1}$ (in this case, given that the LP and UP are both significantly off-resonant with $\omega_{12}$, the asymmetry in the transient polariton response is diminished). \begin{figure} \centering \includegraphics[scale=0.4]{supp-diagram} \caption{Energy level hybridization diagram including the photonic ($\omega_c$) and vibrational transitions ($\omega_{01}, \omega_{12}$) involved in the formation of linear $[\text{LP}^{(1)},\text{UP}^{(1)}]$ and (effective) transient vibrational-polaritons $[\text{UP}^{(2)}$, and the combination of $\text{LP}^{(1)}$ and the $1\rightarrow 2$ vibrational transition] \cite{xiang2017}. 
Note the significant interaction between the linear LP mode and the $1\rightarrow 2$ excitations (represented by the coefficient $c$) arising from the incoherent population of vibrational modes induced by a pump after sufficiently long probe-delay times (see text).} \label{orbf} \end{figure} \subsection{Applications: Chemical kinetics and thermal emission}\label{vib_app} We conclude our discussion of vibrational-polaritons by mentioning two recent observations of the effects of VSC on chemical reactions and thermal emission. First, Thomas et al. \cite{thomas_ground-state_2016} provided conclusive evidence that an organic silane deprotection reaction proceeds via a different mechanism under conditions where the C$-$Si bond is strongly coupled to an optical microcavity, \textit{even in the absence of external photon pumping of the polariton system}. Specifically, the reaction rate was measured as a function of temperature under normal and VSC conditions, and the resulting kinetic curves provided transition-state theory estimates for the entropy and enthalpy of activation. The entropy of activation was reported to be positive under VSC, but negative otherwise. In addition, the kinetics was strongly dependent on the Rabi splitting and, e.g., under weak coupling, the reaction rate was indistinguishable from that measured outside the cavity. Similarly puzzling results were shown recently by Chervy et al. \cite{chervy2018}, who reported non-thermalized thermal emission of cavity ($\text{C}=\text{O}$) vibrational-polaritons of an organic polymer at $373 ~\text{K}$. It was also observed that while the bare polymer and cavity emission spectra matched the theoretical thermal emission, the strongly coupled system showed emission peaks at frequencies displaced from those expected based on the linear optical spectra. These experiments show the rich dynamics featured by vibrational-polaritons.
They have in common the fact that both investigated phenomena arise from thermally-activated \textit{anharmonic} excited-state dynamics of vibrational-polaritons. Further work is needed to understand the sources of the observed behaviors. We expect that their microscopic interpretation will likely shed light on novel ways to control chemical bonds with VSC. \section{Ultrastrong coupling}\label{ultra} All of our previous considerations assumed that the (collective) Rabi splitting was stronger than the dissipative couplings of the bare molecule (or cavity), but also much weaker than the transition energy of interest. The ultrastrong coupling (USC) regime is characterized by the violation of the latter assumption \cite{ciuti_quantum_2005, ciuti_input-output_2006}. In particular, the onset of USC is conventionally defined to arise for vacuum Rabi splittings that satisfy $\Omega_R/{\omega_0} > 10\%$ \cite{moroz2014, ciuti_quantum_2005}. When this condition is fulfilled, significant deviations from the approximate light-matter coupling assumed in Eqs. \ref{jceq} and \ref{tceq} become relevant. Specifically, at USC, states with different excitation number (number of photons + molecular excited-states) are allowed to hybridize, while our previous discussion assumed that the interaction of radiation with bright-molecular states preserves the total excitation number of the system. An essential consequence is that while the ground-state of a strongly coupled system $(\ket{0})$ is indistinguishable from that of the decoupled system, in which all degrees of freedom are in their ground states, the lowest-energy state of an ultrastrongly coupled system is a superposition of states consisting of correlated photons and delocalized bright molecular excitations \cite{ciuti_quantum_2005, ciuti_input-output_2006}. Notably, molecular USC was first achieved less than ten years ago \cite{schwartz2011}.
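The contrast drawn above can be made concrete with a minimal numerical sketch of our own (a single two-level emitter and a single cavity mode, not the collective model of the cited works): diagonalizing the coupled Hamiltonian in a truncated Fock space, with and without the excitation-number-violating (counter-rotating) terms, shows that only the latter builds virtual photons into the ground state.

```python
import numpy as np

ncut = 12                 # Fock-space truncation (a choice of this sketch)
wc = wq = 1.0             # resonant cavity and emitter frequencies
g = 0.3                   # USC: coupling at 30% of the transition energy

a = np.diag(np.sqrt(np.arange(1.0, ncut + 1)), k=1)   # photon annihilation
sm = np.array([[0.0, 1.0], [0.0, 0.0]])               # emitter lowering operator
A = np.kron(a, np.eye(2))
Sm = np.kron(np.eye(ncut + 1), sm)

H0 = wc * A.T @ A + wq * Sm.T @ Sm
H_jc = H0 + g * (A.T @ Sm + A @ Sm.T)          # number-conserving coupling only
H_rabi = H0 + g * (A + A.T) @ (Sm + Sm.T)      # counter-rotating terms retained

def ground_state_photons(H):
    # photon number expectation in the lowest-energy eigenstate
    vals, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]
    return float(psi @ (A.T @ A) @ psi)

n_jc = ground_state_photons(H_jc)
n_rabi = ground_state_photons(H_rabi)
assert n_jc < 1e-10       # number-conserving ground state is the bare vacuum
assert n_rabi > 1e-3      # USC ground state contains virtual photons
```

At $g/\omega_0 = 0.3$ the full Hamiltonian's ground state carries a few percent of a photon on average, while the number-conserving model remains exactly in the bare vacuum.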
While this field has seen considerable progress including recent reports of organic exciton \cite{schwartz2011, kena-cohen2013, balci2013, gambino2014, george_ultra-strong_2015} and vibrational USC \cite{george_multiple_2016}, the effects of USC on chemical transformations are only beginning to be explored. We discuss a specific case below. In a recent work \cite{martinez-martinez2018}, we studied the effects of USC on the electronic ground-state energy landscape of a molecular ensemble. In particular, we considered a simplified model of a molecular slab interacting with a plasmonic field in the USC regime. The Born-Oppenheimer Hamiltonian of the system is given by \begin{equation} H_{\text{BO}} = T_{\text{nuc}}+H_{\text{el}}(\mathbf{R}), \end{equation} where $T_{\text{nuc}}$ is the total nuclear kinetic energy operator, $\mathbf{R}$ denotes the nuclear configuration of all molecules, and \begin{equation} H_{\text{el}}(\mathbf{R})= H_{\text{g}}(\mathbf{R}) + H_{\text{pl}}+H_{\text{e}}(\mathbf{R})+H_{\text{pl-e}}(\mathbf{R}), \label{hult} \end{equation} where $H_{\text{g}}(\mathbf{R})=\sum_{\mathbf{n}}\hbar \omega_{g}(R_{\mathbf{n}})$ is the Born-Oppenheimer electronic ground-state energy of the ensemble at an arbitrary nuclear configuration $\{R_{\mathbf{n}}\}$ ($R_{\mathbf{n}}$ is the nuclear coordinate of the molecule located at site $\mathbf{n}$ within the molecular slab), $H_{\text{pl}}=\sum_{\mathbf{k}}\hbar \omega_{\mathbf{k}}a_{\mathbf{k}}^{\dagger}a_{\mathbf{k}}$ is the Hamiltonian of bare plasmon modes with dispersion $E(\mathbf{k}) = \hbar \omega_{\mathbf{k}}$ (where $\mathbf{k}$ is the plasmon in-plane wave vector) and creation (annihilation) operators $a_{\mathbf{k}}^{\dagger}$ ($a_{\mathbf{k}}$).
The exciton Hamiltonian for ensemble nuclear configuration $\{R_{\mathbf{n}}\}$ is given by $H_{\text{e}}(\mathbf{R})=\sum_{\mathbf{n}}\left[\hbar \omega_{e}(R_{\mathbf{n}})-\hbar\omega_{g}(R_{\mathbf{n}})\right]b_{\mathbf{n}}^{\dagger}(R_{\mathbf{n}})b_{\mathbf{n}}(R_{\mathbf{n}})$, where $b_{\mathbf{n}}^{\dagger}(R_{\mathbf{n}})$ $[b_{\mathbf{n}}(R_{\mathbf{n}})]$ is the exciton creation (annihilation) operator at site $\mathbf{n}$. The exciton-plasmon interaction is given by \begin{equation} H_{\text{pl-e}}(\mathbf{R})=\sum_{\mathbf{k}}\sum_{\mathbf{n}}\hbar g_{\mathbf{k}}^{\mathbf{n}}(R_{\mathbf{n}})\left(a_{\mathbf{k}}^{\dagger}+a_{\mathbf{k}}\right)\left[b_{\mathbf{n}}^{\dagger}(R_{\mathbf{n}})+b_{\mathbf{n}}(R_{\mathbf{n}})\right], \label{ple-u} \end{equation} where $g_{\mathbf{k}}^\mathbf{n}(R_{\mathbf{n}})$ is the interaction between the plasmon with wave vector $\mathbf{k}$ and the $\mathbf{n}$th exciton. It depends on the position and geometry of the molecule since the plasmonic electric fields vary in space and the molecular transition dipole moment is assumed to depend on $R_\mathbf{n}$. Notice the explicit inclusion in $H_{\text{pl-e}}$ of terms that do not preserve the total number of excitations (see also Fig. \ref{ultraf}). We also note that the maximal value of the collective couplings obtained with this model does not surpass $20\%$ of the exciton gap; this justifies the neglect of the EM field diamagnetic terms in Eq. \ref{hult} \cite{deliberato2014}. The molecules that constitute this ensemble can undergo isomerization. This is described by the electronic ground and excited adiabatic PESs, $\hbar \omega_{g}(R_{\mathbf{n}})$ and $\hbar \omega_{e}(R_{\mathbf{n}})$, respectively. The former has a double-well structure and an avoided-crossing with the latter. Ref. \cite{martinez-martinez2018} analyzed various cross-sections of the dressed (collective) ground-state PES arising under USC.
This included the cut where all molecular coordinates were frozen at the reactant configuration except for a single molecule. Thus, this reaction coordinate represents an \textit{effective} single-molecule PES. However, it was observed that the effective reaction barrier is almost unaffected by the collective light-matter coupling. Rather, the maximal energetic shifts induced by USC were identified as Lamb shifts, which are small in comparison to the thermal energy. However, these results do not discourage further application of ultrastrong coupling to chemical systems. The conditions studied in Ref. \cite{martinez-martinez2018} were such that the light-matter interaction was near the edge of the USC regime, where the total exciton (photon) population in the ground-state is small, and a perturbative treatment of their effects is valid. In this case, the USC ground-state deviation from the bare system is nearly inconsequential. We believe that future theoretical and experimental studies of USC in the non-perturbative regime will present novel possibilities for electronic ground-state chemical dynamics. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{ultra} \caption{Feynman diagram for spontaneous production of correlated exciton-photon pairs from the bare system ground-state $\ket{0}$. This process is significant in the ultrastrong coupling regime where light-matter couplings of the form $(a_{-\mathbf{k}}^\dagger b_{\mathbf{k}}^\dagger + \text{h.c.})$ become relevant (see Eq. \ref{ple-u}).}\label{ultraf}\end{figure} \section{Epilogue}\label{epi} We hope to have convinced the reader that: (i) the phenomena emergent from the (ultra)strong coupling regime present novel opportunities for the control of chemical transformations induced by electronic and vibrational dynamics, and (ii) there remains much experimental and theoretical work to be done to unravel all of the intricacies and possibilities of polariton-mediated chemistry.
Future experimental work will certainly entertain creative ways to steer chemical events using optical cavities in various regimes of external pumping and thermodynamic conditions, as well as new opportunities to harness many-body quantum effects towards the control of physicochemical properties of molecules. From the theoretical perspective, we expect novel applications and further development of effective condensed matter theories that describe the emergent phenomenology afforded by molecular polaritons. As we have shown here, these theories are particularly powerful in predicting nontrivial thermodynamic-limit behavior which can be directly employed to guide experiments. Lastly, there is a push towards the development of \textit{ab initio} quantum chemistry and quantum and semiclassical dynamics methodologies to simulate molecular polaritonic systems with atomistic detail \cite{ruggenthaler2014, bennett2016a, flick2017, luk_multiscale_2017, vendrell2017coherent}. Future studies of molecular polariton theory are expected to integrate quantum optics with the standard toolbox of chemical dynamics including e.g., surface-hopping methods \cite{subotnik2016}, quantum master equations and path-integral approaches \cite{makri1995, tanimura2006}. Still, the complex interplay between electronic, nuclear, and photonic degrees of freedom in complex dissipative environments presents a whole new set of challenges for computational methods, which will require novel solutions. \section*{Conflicts of interest} There are no conflicts to declare. \section*{Acknowledgments} RFR, MD, JCGA, and JYZ acknowledge support from the NSF CAREER Award CHE-164732. LAMM and JCGA are grateful for the support of the UC-Mexus CONACyT scholarship for doctoral studies. All authors were partially supported by generous UCSD startup funds. 
We acknowledge illuminating discussions we had throughout our collaborations with Wei Xiong, Stephane Kena-Cohen, Vinod Menon, Jeff Owrutsky, Adam Dunkelberger, Blake Simpkins, and Bo Xiang. \balance
\section{Introduction} \label{S:intro} Let $\Gamma(S_n)$ be the Cayley graph of the symmetric group $S_n$ with generators given by the adjacent transpositions $\pi_i = (i, i + 1)$ for $i = 1, \mathellipsis, n -1$. A {\bf sorting network} is a shortest path from the identity permutation $\text{id}_n = 12 \cdots n$ to the reverse permutation $\text{rev}_n = n \cdots 21$ in $\Gamma(S_n)$. The length of any sorting network is $N = {n \choose 2}$. \medskip Sorting networks are known as {\bf reduced decompositions} of the reverse permutation, as any sorting network can equivalently be represented as a minimal length decomposition of the reverse permutation as a product of adjacent transpositions: $\text{rev}_n = \pi_{k_N} \mathellipsis \pi_{k_1}$. The combinatorics of sorting networks have been studied in detail under this name. There are connections between sorting networks and Schubert calculus, quasisymmetric functions, zonotopal tilings of polygons, and aspects of representation theory. For more background on these topics, see Stanley \cite{stanley1984number}; Bj\"orner and Brenti \cite{bjorner2006combinatorics}; Garsia \cite{garsia2002saga}; Tenner \cite{tenner2006reduced}; and Manivel \cite{manivel2001symmetric}. \begin{figure} \centering \includegraphics[scale=0.7]{Wiring-3.pdf} \caption{A ``wiring diagram" for a sorting network with $n = 4$. In this diagram, trajectories are drawn as continuous curves for clarity.} \label{fig:wiring} \end{figure} \medskip In computer science, sorting networks are viewed as $N$-step algorithms for sorting a list of $n$ numbers. At step $i$, the sorting network algorithm sorts the numbers at positions $k_i$ and $k_i + 1$ into increasing order. This process sorts any list in $N$ steps. \medskip As in this algorithmic interpretation of sorting networks, we will think of the elements $\{1, \mathellipsis, n \}$ as particles being sorted in time (see Figure \ref{fig:wiring}).
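This algorithmic reading is easy to make concrete; the following sketch (function names are ours) applies a swap sequence to the identity and checks that the classical bubble-sort sequence is a sorting network.

```python
def apply_swaps(n, swaps):
    """Apply adjacent transpositions pi_{k_1}, ..., pi_{k_N} to the identity.

    perm[p - 1] holds the number currently at position p; the swap at step i
    exchanges the contents of positions k_i and k_i + 1.
    """
    perm = list(range(1, n + 1))
    for k in swaps:
        perm[k - 1], perm[k] = perm[k], perm[k - 1]
    return perm

def bubble_swaps(n):
    # the classical "bubble sort" swap sequence (1, ..., n-1, 1, ..., n-2, ..., 1)
    return [k for m in range(n - 1, 0, -1) for k in range(1, m + 1)]

n = 6
swaps = bubble_swaps(n)
assert len(swaps) == n * (n - 1) // 2                     # N = n choose 2
assert apply_swaps(n, swaps) == list(range(n, 0, -1))     # id_n is carried to rev_n
```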
We use the notation $\sigma(x, t) = \pi_{k_\floor{t}} \mathellipsis \pi_{k_2}\pi_{k_1}(x)$ for the position of particle $x$ at time $t$. We call $(k_1, \dots, k_N)$ the \textbf{swap sequence} for $\sigma$. \subsection{Main limit theorems} \label{SS:main-thms} Angel, Holroyd, Romik, and Vir\'ag \cite{angel2007random} initiated the study of uniform random $n$-element sorting networks. They studied the limiting behaviour of the space-time swap distribution, rescaled particle trajectories, time-$t$ permutation matrices, and the Cayley graph path itself. \medskip They proved a law of large numbers for the space-time swap distribution, and based on strong numerical evidence, made conjectures about the limiting behaviour of the other three objects. In this paper, we prove these conjectures. \bigskip \noindent {\bf I. Rescaled particle trajectories.} \qquad For a sorting network $\sigma$, define the global trajectory $$ \sigma_G(x, t) = \frac{2\sigma(x, Nt)}n - 1. $$ The function $\sigma_G(x, \cdot)$ is the trajectory of particle $x$, with time rescaled so that the sorting process finishes at time $1$, and space rescaled so that the trajectory stays in the interval $[-1, 1]$. In \cite{angel2007random}, the authors conjectured that with high probability, all particle trajectories in a uniform random sorting network are close to sine curves (see Figure \ref{fig:sinecurves}). \medskip They proved that limiting trajectories are $1/2$-H\"older with H\"older constant $\sqrt{8}$. Dauvergne and Vir\'ag \cite{dauvergne1} improved upon this by showing that limiting trajectories are $\pi$-Lipschitz, and that trajectories of particles starting within $\epsilon$ of the edge are distance $2 \sqrt{\epsilon}$ away from a sine curve in uniform norm. \medskip Our first theorem proves the sine curve conjecture from \cite{angel2007random}. Here and throughout the paper we use the notation $\sigma^n$ for a uniform random $n$-element sorting network. 
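The sine curves $A \sin(\pi t + \Theta)$ appearing in Theorem \ref{T:sine-curves} form the same family of paths as $X\cos(\pi t) + Z\sin(\pi t)$, parametrized instead by the values at times $0$ and $1/2$; a quick numerical check of this elementary identity (illustrative only, with function names of our choosing):

```python
import math

def two_point_form(x, z, t):
    # a sine curve taking value x at t = 0 and value z at t = 1/2
    return x * math.cos(math.pi * t) + z * math.sin(math.pi * t)

def amplitude_phase_form(x, z, t):
    # the same curve written as A sin(pi t + theta)
    amp, theta = math.hypot(x, z), math.atan2(x, z)
    return amp * math.sin(math.pi * t + theta)

max_err = max(
    abs(two_point_form(x, z, k / 10) - amplitude_phase_form(x, z, k / 10))
    for (x, z) in [(0.3, -0.7), (-0.5, 0.1), (0.0, 0.9)]
    for k in range(11)
)
assert max_err < 1e-12
```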
\begin{customthm}{1}[Sine curve limit] \label{T:sine-curves} For each $n$ there exist random variables $\{(A^n_i, \Theta^n_i) \in [0, 1] \times [0, 2\pi]\}_{i \in \{1, \mathellipsis, n\}}$ such that for any $\epsilon > 0$, we have that $$ \mathbb{P} \left( \max_{i \in [1, n] } \max_{t \in [0, 1]} \card{ \sigma^n_G(i, t) - A_i^n \sin(\pi t + \Theta_i^n) } > \epsilon \right) \to 0 \qquad \;\text{as}\; \;\; n \to \infty. $$ \end{customthm} \begin{figure} \centering \includegraphics[scale=0.8]{RSN-sinecurves.pdf} \caption{This is a diagram of selected particle trajectories in a 2000 element sorting network. This image is taken from \cite{angel2007random}.} \label{fig:sinecurves} \end{figure} \bigskip \noindent {\bf II. Permutation Matrices.} \qquad For a uniform $n$-element sorting network $\sigma^n$, define $$ \rho^n_t= \frac{1}n \sum_{i=1}^n \delta \left( \sigma^n_G(i, 0), \sigma^n_G(i, t) \right). $$ Here $\delta(x, y)$ is a $\delta$-mass at the point $(x, y)$. The measure $\rho^n_t$ rescales the time-$t$ permutation matrix of $\sigma^n$, placing atoms of weight $1/n$ at the positions of the $1$'s. Define the {\bf Archimedean measure} $\mathfrak{Arch}$ on the square $[-1, 1]^2$ to be the measure with Lebesgue density $$ f(x, y) = \frac{1}{2\pi\sqrt{1 - x^2 - y^2}} $$ on the unit ball $B(0, 1)$, and $0$ elsewhere. This is simply the projected surface area measure of the $2$-sphere. Define the {\bf time-$t$ Archimedean measure} by $$ (X, X \cos (\pi t) + Y \sin (\pi t)) \stackrel{d}{=} \mathfrak{Arch}_{t}, \qquad \text{ where } \;\; (X, Y) \stackrel{d}{=} \mathfrak{Arch}. $$ Note that $\mathfrak{Arch} = \mathfrak{Arch}_{1/2}$. In \cite{angel2007random}, the authors conjectured that $\rho^n_t$ converges weakly to $\mathfrak{Arch}_{t}$ (see Figure \ref{fig:circles}). They proved that for any $t$, the support of $\rho^n_t$ lies in a particular octagon with high probability. 
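The description of $\mathfrak{Arch}$ as the projected surface measure of the $2$-sphere can be checked directly; the sketch below (ours, with grid sizes chosen for illustration) compares the mass that the density $f$ assigns to a disk of radius $\rho$ with the corresponding spherical cap area, using the classical fact of Archimedes that the height of a uniform point on the sphere is uniform on $[-1, 1]$.

```python
import numpy as np

rho = 0.8
# radial mass of f(x, y) = 1 / (2 pi sqrt(1 - x^2 - y^2)): the angular factor
# 2 pi r cancels the 2 pi, leaving the one-dimensional integrand r / sqrt(1 - r^2)
h = 1e-5
r = np.arange(h / 2, rho, h)                  # midpoint grid on [0, rho]
mass = np.sum(r / np.sqrt(1.0 - r * r)) * h
assert abs(mass - (1.0 - np.sqrt(1.0 - rho ** 2))) < 1e-4

# Archimedes: the same mass is the area fraction of the 2-sphere projecting into
# the disk of radius rho. For a uniform point on the sphere, the height z is
# uniform on [-1, 1], and x^2 + y^2 <= rho^2 iff |z| >= sqrt(1 - rho^2).
z = np.linspace(-1 + 1e-6, 1 - 1e-6, 200001)
frac = float(np.mean(np.abs(z) >= np.sqrt(1.0 - rho ** 2)))
assert abs(frac - (1.0 - np.sqrt(1.0 - rho ** 2))) < 1e-3
```

Both computations return $1 - \sqrt{1 - \rho^2}$, as the density formula predicts.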
In \cite{dauvergne1}, the authors showed that the support of any limit of $\rho^n_t$ lies within the elliptical support of $\mathfrak{Arch}_t$. \begin{figure} \centering \includegraphics[scale=0.9]{RSN-circles.pdf} \caption{This is a diagram of the measures $\{\rho^n_t : t \in \{0, 1/10, 2/10, \mathellipsis, 1\} \}$ in a $500$-element sorting network. The blue octagons are the octagonal support bounds proved in \cite{angel2007random}, and the red ellipses are the support bounds proved in \cite{dauvergne1}. This figure is from \cite{angel2007random}.} \label{fig:circles} \end{figure} \medskip Our second theorem proves the weak convergence of the random measures $\rho^n_t$. We also show that the supports of $\rho^n_t$ and $\mathfrak{Arch}_t$ are close. To state this theorem, recall that the Hausdorff distance between two sets $A, B \subset \mathbb{R}^2$ is $$ d_H(A, B) = \max \left\{\sup_{a \in A} \inf_{b \in B} d(a, b), \;\sup_{b \in B} \inf_{a \in A} d(a, b) \right\}. $$ \begin{customthm}{2}[Permutation matrix limit] \label{T:matrices} For any $t \in [0, 1]$, $ \rho^n_t \to \mathfrak{Arch}_t $ in probability as $n \to \infty$. That is, for any weakly open set $U$ in the space of probability measures on $[-1, 1]^2$ containing $\mathfrak{Arch}_t$, we have that $$ \mathbb{P}(\rho^n_t \in U) \to 1 \qquad \;\text{as}\; \quad n \to \infty. $$ Moreover, $$ d_H(\text{supp}(\rho^n_t), \text{supp}(\mathfrak{Arch}_t)) \to 0 \qquad \text{ in probability } \;\text{as}\; n \to \infty. $$ \end{customthm} Angel, Holroyd, Romik, and Vir\'ag \cite{angel2007random} also considered a permutation matrix evolution for $\sigma^n$. Let $j \in \{1, \mathellipsis, n\}$, and consider the random complex-valued function $$ Z^n_j(t) = e^{\pi i t} \left[\sigma^n_G(j, t) + i\sigma^n_G(j, t + 1/2) \right], \qquad t \in [0, 1/2].
$$ For a fixed $t$, $(Z^n_1(t), \mathellipsis, Z^n_n(t))$ is the set of points in the scaled permutation matrix for \[ \sigma^n(\cdot, t + 1/2)(\sigma^n)^{-1}(\cdot, t) \] after a counterclockwise rotation by $\pi t$ (see Figure \ref{fig:window}). Theorem \ref{T:sine-curves} guarantees that each of the paths $Z^n_j$ localizes. \begin{customthm}{3}[Path Localization] \label{T:unif-rotation} Let $\sigma^n$ be a uniform random $n$-element sorting network. Then $$ \max_{j \in [1, n]} \max_{s, t \in [0, 1]} |Z^n_j(t) - Z^n_j(s)| \to 0 \qquad \text{in probability as } \;\; n \to \infty. $$ \end{customthm} \bigskip \noindent {\bf III. Great Circles.} \qquad We can embed the vertices of $\Gamma(S_n)$ into $\mathbb{R}^n$ by sending the permutation $\tau \in S_n$ to the point $\close{\tau} = (\tau(1), \mathellipsis, \tau(n)) \in \mathbb{R}^n$. For any $\tau \in S_n$, the point $\close{\tau}$ lies on the $(n-2)$-sphere $\mathbb{S}^{n-2} = \mathbb{L}_n \cap \mathbb{K}_n$, where \begin{align*} \mathbb{L}_n =& \left\{ (x_1, \mathellipsis x_n) \in \mathbb{R}^n : \sum_{i=1}^n x_i = \frac{n(n+1)}2 \right\} \qquad \;\text{and}\; \\ \mathbb{K}_n =& \left\{ (x_1, \mathellipsis x_n) \in \mathbb{R}^n : \sum_{i=1}^n x_i^2 = \frac{n(n+1)(2n+1)}6\right\}. \end{align*} If we also embed the edges of $\Gamma(S_n)$ as straight lines between the embedded vertices, we get an object called the {\bf permutahedron} (see Figure \ref{fig:permutahedron}). \medskip \begin{figure} \centering \includegraphics[scale=0.3]{Permutahedron.png} \caption{The permutahedron for $S_4$.} \label{fig:permutahedron} \end{figure} In \cite{angel2007random}, the authors conjectured that a uniform random sorting network is close to a great circle in $\mathbb{S}^{n-2}$ under this embedding. They showed that this conjecture implies the other global limiting results for uniform random sorting networks. Our strongest theorem proves this conjecture.
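That every embedded permutation lies on $\mathbb{S}^{n-2} = \mathbb{L}_n \cap \mathbb{K}_n$ is immediate, since the coordinate sum and the sum of squares of $(\tau(1), \mathellipsis, \tau(n))$ do not depend on $\tau$; an exhaustive check for small $n$ (illustrative only):

```python
from itertools import permutations

n = 5
# every vertex of the permutahedron satisfies both defining equations
on_sphere = all(
    sum(tau) == n * (n + 1) // 2 and                          # hyperplane L_n
    sum(x * x for x in tau) == n * (n + 1) * (2 * n + 1) // 6  # sphere K_n
    for tau in permutations(range(1, n + 1))
)
assert on_sphere
```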
\medskip For two functions $f, g: [0, 1] \to \mathbb{R}^n$, define the distance function $$ d_\infty(f, g) = \sup_{t \in [0, 1]} ||f(t) - g(t)||_\infty. $$ This is the uniform norm on functions, where the pointwise distance is the $L^\infty$ distance. Also, for an $n$-element sorting network $\sigma$, we define the embedded and time-rescaled path $ \bar{\sigma}:[0, 1] \to \mathbb{S}^{n-2} $ by letting $$ \bar{\sigma}(t) = (\sigma(1, Nt), \sigma(2, Nt), \dots, \sigma(n, Nt)). $$ \begin{customthm}{4}[Great circles] \label{T:geom-limit} Let $\bar{\sigma}^n$ be the embedding of $\sigma^n$ into $\mathbb{S}^{n-2}$. For every $n$, there exists a random path $C_n:[0, 1] \to \mathbb{S}^{n-2}$ such that $C_n$ is a constant-velocity parametrization of an arc of a great circle in $\mathbb{S}^{n-2}$ starting at $(1, \mathellipsis, n)$ and finishing at $(n, \mathellipsis, 1)$, and such that $$ \frac{d_\infty(\bar{\sigma}^n, C_n)}{n} \to 0 \qquad \text{in probability as} \;\; n \to \infty. $$ \end{customthm} Note that it is easy to find sorting networks that are not close to a great circle in $\mathbb{S}^{n-2}$. For example, the ``bubble sort" sorting network given by the swap sequence $$ (1, 2, \mathellipsis, n-1, 1, 2, \mathellipsis, n-2, \mathellipsis, 1, 2, 1) $$ is $d_\infty$-distance $n - 1 - o(1)$ from any great circle. \subsection{Random trajectory limits} To prove the main theorems of Section \ref{SS:main-thms}, we first analyze the limit of a random particle trajectory. This approach was first considered by Rahman, Vir\'ag, and Vizer \cite{rahman2016geometry}, and continued in \cite{dauvergne1}. \medskip Let $\mathcal{D}$ be the closure in the uniform norm $||\cdot||_u$ of the space of all possible sorting network trajectories $\sigma_G(x, \cdot):[0, 1] \to [-1, 1]$. The space $(\mathcal{D}, ||\cdot||_u)$ is a complete separable metric space. The only functions in $\mathcal{D}$ are continuous functions and the sorting network trajectories themselves.
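The bubble-sort network also fails the sine-curve behaviour of Theorem \ref{T:sine-curves}: its particle $1$ reaches the top position within the first $n - 1$ swaps and then stays there, so no sine curve $A\sin(\pi t + \theta)$ with $A \in [0, 1]$ stays uniformly close to its rescaled trajectory. A brute-force sketch of ours (grid resolutions are arbitrary choices):

```python
import math

def particle_one_path(n):
    """Positions of particle 1 after each swap of the bubble-sort network."""
    swaps = [k for m in range(n - 1, 0, -1) for k in range(1, m + 1)]
    perm, traj = list(range(1, n + 1)), [1]
    for k in swaps:
        perm[k - 1], perm[k] = perm[k], perm[k - 1]
        traj.append(perm.index(1) + 1)
    return traj

n = 40
N = n * (n - 1) // 2
path = [2 * p / n - 1 for p in particle_one_path(n)]   # sigma_G(1, j / N)
ts = [j / N for j in range(N + 1)]

# distance from the best sine curve A sin(pi t + theta), over a coarse grid
best = min(
    max(abs(x - (a / 20) * math.sin(math.pi * t + 2 * math.pi * w / 126))
        for t, x in zip(ts, path))
    for a in range(21) for w in range(126)
)
assert best > 0.25   # every sine curve misses this trajectory by a fixed amount
```

Since the trajectory equals $1$ at both $t = 1/2$ and $t = 1$, no sine curve of amplitude at most one can be within $1 - \tfrac{\sqrt{2}}{2} \approx 0.29$ of it in uniform norm, which the grid search confirms.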
\medskip Let $Y_n \in \mathcal{D}$ be a uniform $n$-element sorting network trajectory. That is, if $\sigma^n$ is a uniform $n$-element sorting network, and $I_n$ is an independent uniform random variable on $\{1, \mathellipsis n\}$, then $$ Y_n = \sigma^n_G(I_n, \cdot). $$ We refer to $Y_n$ as the {\bf trajectory random variable} of $\sigma^n$. In \cite{dauvergne1}, Dauvergne and Vir\'ag proved that the sequence $\{Y_n\}_{n \in \mathbb{N}}$ is precompact in distribution, and that any subsequential limit is almost surely Lipschitz (we state their results precisely in Section \ref{S:local}). In this paper, we show that $\{Y_n\}_{n \in \mathbb{N}}$ converges in distribution, and identify its limit. \begin{customthm}{5}[The weak trajectory limit] \label{T:weak-limit} Let $(X, Z) \sim \mathfrak{Arch}$, and define the Archimedean path $\mathcal{A} \in \mathcal{D}$ by $\mathcal{A}(t) = X\cos(\pi t) + Z\sin(\pi t)$. Then $$ Y_n \stackrel{d}{\to} \mathcal{A} \qquad \;\text{as}\; \; n \to \infty. $$ \end{customthm} Theorem \ref{T:weak-limit} will be used in the proof of all our main theorems from Section \ref{SS:main-thms}. Most of the paper is devoted to its proof. We note here that we can equivalently write $$ \mathcal{A}(t) = \sqrt{1 - V^2}\sin(\pi t + 2\pi U), $$ where $V$ and $U$ are independent uniform random variables on $[0, 1]$. \bigskip \noindent \textbf{Random $m$-out-of-$n$ sorting networks.} \qquad We will also use Theorem \ref{T:weak-limit} to identify the limit of random $m$-out-of-$n$ subnetworks. This answers a question of Angel and Holroyd \cite{angel2010random}. This limit can also be found by using the stronger great circle theorem (Theorem \ref{T:geom-limit}), as was noted in \cite{angel2010random}. \medskip Let $\sigma$ be an $n$-element sorting network. For $A \subset \{1, \dots, n\}$, let $\sigma|_A$ be the $|A|$-element sorting network given by restricting $\sigma$ to the set $A$.
Specifically, for $i \in \{1, \dots, N\}$ define $\sigma^*_A(\cdot, i)$ to be the relative ordering of the particles in $A$ in the permutation $\sigma(\cdot, i)$. This gives a sequence of $N$ permutations $\{\sigma^*_A(\cdot, i) \in S_{|A|} : i \in \{1, \dots, N\} \}$. Removing duplicates gives the permutation sequence for the sorting network $\sigma|_A$. For $m < n$, let the \textbf{random $m$-out-of-$n$ subnetwork $\tau^n_m$} be the restriction of $\sigma^n$ to a uniform $m$-element subset of $\{1, \mathellipsis n\}$, chosen independently from $\sigma^n$. \medskip Let $\{x_1, \mathellipsis x_n\}$ be a set of points in $\mathbb{R}^2$ in general position, and such that no two pairs of points determine parallel lines. Label the points in order of increasing $x$-coordinate. For all but finitely many angles $\theta$, listing the labels of the points $\{x_1, x_2, \mathellipsis x_n\}$ in increasing order of their horizontal projections after rotation by $\pi \theta$ gives a permutation $\tau_\theta$. The \textbf{geometric sorting network} associated to $\{x_1, \mathellipsis x_n\}$ is simply the sequence of permutations $\{\tau_\theta, \theta \in [0, 1] \}$ listed in order of increasing $\theta$. \begin{customthm}{6}[The subnetwork limit] \label{T:subnetwork} Let $\{X_1, X_2, \mathellipsis, X_m \}$ be random points in the unit ball $B(0, 1)$ sampled from the Archimedean distribution $\mathfrak{Arch}$, and let $\tau_m$ be the associated geometric sorting network. Then $$ \tau^n_m \stackrel{d}{\to} \tau_m \qquad \;\text{as}\; \; n \to \infty. $$ \end{customthm} \subsection{The local speed distribution} As a by-product of the proof of Theorem \ref{T:weak-limit}, we find the distribution of speeds in the local limit of random sorting networks. To state this result, we first give an informal description of this limit (a precise description is given in Section \ref{S:local}).
The existence of this limit was established independently by Angel, Dauvergne, Holroyd, and Vir\'ag \cite{angel2017local}, and by Gorin and Rahman \cite{gorin2017}. Define $$ U_n (x, t) = \sigma^n(\floor{n/2} + x, nt) - \floor{n/2}. $$ Each path $U_n(x, \cdot)$ is a locally scaled particle trajectory. With an appropriate notion of convergence, we have that $$ U_n \stackrel{d}{\to} U, $$ where $U$ is a random function from $\mathds{Z} \times [0, \infty) \to \mathds{Z}$. $U$ is the local limit at the centre of the sorting network. We can also take a local limit centred at particle $\floor{\alpha n}$ for any $\alpha \in (0, 1)$. The result is the process $U$ with time rescaled by a semicircle factor $2\sqrt{\alpha(1 - \alpha)}$. \medskip In \cite{dauvergne1}, Dauvergne and Vir\'ag proved that particles in $U$ have asymptotic speeds. Specifically, they showed that for every $x \in \mathds{Z}$, the limit $$ S(x) = \lim_{t \to \infty} \frac{U(x, t) - U(x, 0)}{t} \qquad \text{exists }\text{almost surely}. $$ They showed that $S(x)$ has distribution $\mu$ independent of $x$. In this paper, we identify $\mu$. \begin{customthm}{7} \label{T:main-2} The measure $\mu$ is the arcsine distribution on $[-\pi, \pi]$ given by the Lebesgue density $$ f(x) = \frac{1}{\pi\sqrt{\pi^2 - x^2}}. $$ \end{customthm} \subsection*{Related work} In addition to the papers mentioned above, the local behaviour of random sorting networks has also been studied by Angel, Gorin, and Holroyd \cite{angel2012pattern}. The frequency of particular substrings in the swap sequence of a random sorting network has been analyzed by Reiner \cite{reiner2005note} and Tenner \cite{tenner2014expected}. Fulman and Stein \cite{fulman2014stein} have analyzed the distribution of the first swap in a random sorting network. \medskip All of this work relies on a bijection of Edelman and Greene \cite{edelman1987balanced} between sorting networks and Young tableaux of a particular shape. 
Little \cite{little2003combinatorial} found another bijection between these two sets, and Hamaker and Young \cite{HY} proved that these bijections coincide. \medskip Problems involving limits of sorting networks under different measures have been considered by Angel, Holroyd, and Romik \cite{angel2009oriented}, and also by Young \cite{young2014markov}. Uniform ``relaxed" random sorting networks have been analyzed by Kotowski and Vir\'ag \cite{kotowski2016limits} (see also \cite{rahman2016geometry}). \subsection*{Outline of the paper} Most of the paper is devoted to proving Theorem \ref{T:weak-limit}. The key to proving this theorem is the notion of particle flux across a curve $h$. Heuristically, particle flux measures the number of times that particles cross $h$ in a large-$n$ sorting network. It is defined in terms of the local speed distribution $\mu$. \medskip Particle flux is a useful quantity because any limit of $Y_n$ must minimize this quantity among curves $h$ with $h(0) = -h(1)$. This is due to the fact that in any sorting network, every pair of particles swaps \textit{exactly} once, whereas any particle must cross a curve with $h(0) = -h(1)$ \textit{at least} once. \medskip In Section \ref{S:local}, we introduce the necessary background for the paper. In Section \ref{S:flux-intro}, we formally define particle flux and establish its properties. In Section \ref{S:unique}, we then use these properties to partially characterize minimal flux paths $h$ with $h(0) = -h(1)$. In Section \ref{S:path-speed}, we use this characterization to relate the local speed distribution $\mu$ to the derivative distribution for subsequential limits of $Y_n$. \medskip This allows us to derive an integral transform formula for $\mu$, which we do in Section \ref{S:integral-formula}. In Section \ref{S:transform}, we invert this integral transform in order to find $\mu$. This allows us to immediately prove Theorem \ref{T:weak-limit}.
In this section, we also prove a slightly stronger version of Theorem \ref{T:weak-limit}, allowing us to establish Theorem \ref{T:subnetwork}. \medskip In the remainder of the paper, we use Theorem \ref{T:weak-limit} to establish our stronger limit theorems. In Section \ref{S:strong-limit}, we combine Theorem \ref{T:weak-limit} with bounds from \cite{angel2007random} to prove Theorems \ref{T:sine-curves}, \ref{T:matrices}, and \ref{T:unif-rotation}. Finally, in Section \ref{S:geom-limit}, we combine Theorem \ref{T:weak-limit} with Theorem \ref{T:sine-curves} to prove Theorem \ref{T:geom-limit}. \section{Preliminaries} \label{S:local} In this section, we introduce the necessary background about sorting networks. The most basic observation about uniform $n$-element sorting networks is that they exhibit a type of time-stationarity. This was first observed in \cite{angel2007random}. \begin{theorem} \label{T:time-stat} Let $(K^n_1, K^n_2, \mathellipsis K^n_N)$ be the swap sequence of a uniform random $n$-element sorting network $\sigma^n$. Then $$ \left(K^n_1, K^n_2, \mathellipsis K^n_N\right) \stackrel{d}{=} \left(n - K^n_N, K^n_1, K^n_2, \mathellipsis K^n_{N - 1}\right). $$ \end{theorem} We will repeatedly use time-stationarity of sorting networks to reduce proofs to statements about the beginning of a sorting network. Using the Edelman-Greene bijection between sorting networks and Young tableaux, Angel, Holroyd, Romik, and Vir\'ag \cite{angel2007random} also found an explicit formula for the distribution of $K^n_1$. \begin{theorem} \label{T:dist-k1} Let $K^n_1$ be the location of the first swap of $\sigma^n$. For any $i \in \{1, \dots, n-1\}$, we have that $$ \mathbb{P}(K^n_1 = i) = \frac{1}N \frac{[3\cdot5\cdot7\cdots(2i - 1)][3\cdot 5 \cdots (2(n-i) - 1)]}{[2\cdot4\cdot6\cdots(2i - 2)][2\cdot 4 \cdots (2(n-i) - 2)]} \le \frac{3}n.
$$ Moreover, if $\{i_n\}_{n \in \mathbb{N}}$ is any sequence of integers such that $i_n/n\to (\alpha + 1)/2$ for some $\alpha \in (-1, 1)$, then $$ n\mathbb{P}(K^n_1 = i_n) \to \frac{4\sqrt{1 - \alpha^2}}{\pi} \qquad \;\text{as}\; n \to \infty. $$ \end{theorem} \bigskip \noindent \textbf{The local limit.} \qquad Define a {\bf swap function} as a map $V:\mathds{Z} \times [0, \infty) \to \mathds{Z}$ with the following properties: \smallskip \begin{enumerate}[nosep,label=(\roman*)] \item For each $x$, $V(x,\cdot)$ is cadlag with nearest neighbour jumps. \item For each $t$, $V(\cdot,t)$ is a bijection from $\mathds{Z}$ to $\mathds{Z}$. \item Define $V^{-1}(x, t)$ by $V(V^{-1}(x, t),t) = x$. Then for each $x$, $V^{-1}(x, \cdot)$ is a cadlag path with nearest neighbour jumps. \item For any time $t \in (0, \infty)$ and any $x \in \mathds{Z}$, $$ \lim_{s \to t^-} V^{-1}(x, s) = V^{-1}(x + 1, t) \qquad \text{if and only if} \qquad \lim_{s \to t^-} V^{-1}(x +1, s) = V^{-1}(x, t). $$ \end{enumerate} We think of a swap function as a collection of particle trajectories $\{V(x, \cdot) : x \in \mathds{Z}\}$. Condition (iv) guarantees that the only way that a particle at position $x$ can move up at time $t$ is if the particle at position $x+1$ moves down. That is, particles move by swapping with their neighbours. \medskip Let $\mathcal{A}$ be the space of swap functions endowed with the following topology. A sequence of swap functions $V_n \to V$ if each of the cadlag paths $V_n(x, \cdot) \to V(x, \cdot)$ and $V^{-1}_n(x, \cdot) \to V^{-1}(x, \cdot)$. Convergence of cadlag paths is convergence in the Skorokhod topology. We refer to a random swap function as a \textbf{swap process}. \medskip For a swap function $V$ and a time $t \in (0, \infty)$, define $$ V(\cdot, t, s) = V(V^{-1}(\cdot, t), t + s). $$ The function $V(\cdot, t, s)$ is the increment of $V$ from time $t$ to time $t + s$.
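As an illustrative aside (not part of the formal development), a finite analogue of a swap function can be sketched in a few lines of Python; the names and the event-list convention below are ours, chosen only to mirror the trajectory interpretation of $V(x, \cdot)$:

```python
# A minimal finite analogue of a swap function (illustrative only; the
# names and conventions here are ours, not the paper's).  Particles are
# labelled 1..n by initial position, and an event (t_j, k_j) swaps the
# particles currently at positions k_j and k_j + 1 at time t_j.

def trajectory(n, events, x, t):
    """Return V(x, t): the position of particle x at time t (cadlag)."""
    pos = {p: p for p in range(1, n + 1)}   # particle -> position
    at = {p: p for p in range(1, n + 1)}    # position -> particle
    for s, k in sorted(events):
        if s > t:
            break
        a, b = at[k], at[k + 1]             # the two swapping particles
        pos[a], pos[b] = k + 1, k
        at[k], at[k + 1] = b, a
    return pos[x]

# The swap sequence (2, 1, 3, 2, 1, 3), one swap per unit time, is a
# sorting network for n = 4: every particle ends at the mirrored position.
events = list(zip(range(1, 7), (2, 1, 3, 2, 1, 3)))
print([trajectory(4, events, x, 6) for x in (1, 2, 3, 4)])  # [4, 3, 2, 1]
```

Condition (iv) is automatic in this encoding: an event moves exactly one particle up and its neighbour down, so particles can only move by swapping.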
\medskip Now for $i \in \{1, \dots, n\}$, define $$ r_n(i) = \sqrt{1 - (2i/n - 1)^2}, $$ and consider the shifted, time-scaled swap process $$ U_{n}^{i}(x, s) = \sigma^n \left(i + x, \frac{ns}{r_n(i)} \right) - i. $$ To ensure that $U_{n}^{i}$ fits the definition of a swap process, we can extend it to a random function from $\mathds{Z} \times [0, \infty)$ to $\mathds{Z}$ by letting $U_{n}^{i}$ be constant after time $\frac{n-1}{2r_n(i)}$, and with the convention that $U_{n}^{i}(x, s)= x$ whenever $x \notin \{1 -i, \mathellipsis, n - i\}$. In the swap processes $U_{n}^{i}$, all particles are labelled by their initial positions. The following is shown in \cite{angel2017local}, and also essentially in \cite{gorin2017}. \begin{theorem} \label{T:local} There exists a swap process $U$ such that the following holds. For any $\alpha \in (-1, 1)$, and any sequence of integers $\{i_n\}_{n \in \mathbb{N}}$ such that $i_n/n \to (\alpha + 1)/2$, we have that $$ U_{n}^{i_n} \stackrel{d}{\to} U \qquad \;\text{as}\; \; n \to \infty. $$ The swap process $U$ has the following properties: \begin{enumerate}[nosep,label=(\roman*)] \item $U$ is stationary and mixing of all orders with respect to the spatial shift $\tau U(x, t) = U(x + 1, t) - 1$. \item $U$ has stationary increments in time: for any $t \ge 0$, the process $U(\cdot, t, s)_{s\ge 0}$ has the same law as $U(\cdot,s)_{s\geq 0}$. \item $U$ is symmetric: $U(\cdot, \cdot) \stackrel{d}{=} - \; U(- \; \cdot, \cdot)$. \item For any $t \in [0, \infty)$, $\mathbb{P}($There exists $x \in \mathds{Z}$ such that $U(x, t) \neq \lim_{s \to t^-} U(x, s)) = 0$. \end{enumerate} \smallskip Moreover, for any sequence of times $\{t_n : n \in \mathbb{N}\}$ such that $\frac{n-1}{2r_n(i_n)} - t_n \to \infty$ as $n \to \infty$, $$ U^{i_n}_n (\cdot, t_n, \cdot) \stackrel{d}{\to} U \qquad \;\text{as}\; n \to \infty.
$$ \end{theorem} Now, for a swap function $V$, let $W(V, t)$ be the number of times that the particles at locations $0$ and $1$ swap in the interval $[0, t]$. That is, $$ W(V, t) = \card {\left\{s \in (0, t] : \lim_{r \to s^-} V^{-1}(0, r) = V^{-1}(1, s) \right\}}. $$ As a by-product of the proof of convergence of $U_n^{i_n}$ to $U$, the authors of \cite{angel2017local} also found the expectation of $W(U, t)$. \begin{theorem}[Proposition 7.10, \cite{angel2017local}] \label{T:swaps} Let $\alpha \in (-1, 1)$, and let $\{i_n\}_{n \in \mathbb{N}}$ be any sequence of integers such that $i_n/n \to (\alpha + 1)/2.$ Then for any $t \in [0, \infty)$, we have that $$ \mathbb{E} W(U_n^{i_n}, t) \to \mathbb{E} W(U, t) = \frac{4t}{\pi}. $$ \end{theorem} Now let $Q(V, t)$ be the number of swaps that particle $0$ makes by time $t$ in a swap function $V$. That is, $$ Q(V, t) = \card {\Big\{ s \in (0, t] : \lim_{r \to s^-} V(0, r) \neq V(0, s) \Big\}}. $$ Dauvergne and Vir\'ag \cite{dauvergne1} used a stationarity argument to prove a result analogous to Theorem \ref{T:swaps} for $Q(U, t)$. \begin{theorem}[Lemma 3.2, \cite{dauvergne1}] \label{T:swaps-2} For any $t \in [0, \infty)$, we have that $$ \mathbb{E} Q(U, t) = \frac{8t}{\pi}. $$ \end{theorem} The fact that $U$ is stationary in both time and space implies that the point process of swaps of a given particle $x$ in $U$ is also stationary. This fact, combined with the ergodic theorem, allows us to conclude that all particles in $U$ have asymptotic speeds. This observation was used in \cite{dauvergne1} to prove results about the relationship between the local and global limit. We use their results as a starting point in our proofs. \begin{theorem}[Theorem 1.7, \cite{dauvergne1}] \label{T:local-speeds} For every $x \in \mathds{Z}$, the limit $$ S(x) = \lim_{t \to \infty} \frac{U(x, t) - U(x, 0)}{t} \qquad \text{exists } \text{almost surely}. $$ $S(x)$ is a symmetric random variable with distribution $\mu$ independent of $x$.
The support of $\mu$ is contained in the interval $[-\pi, \pi]$. \end{theorem} We refer to $\mu$ as the {\bf local speed distribution}. Dauvergne and Vir\'ag \cite{dauvergne1} also used Theorem \ref{T:local-speeds} to find limiting swap rates in $U$. Define \begin{equation*} D_\mu^+(c) = \int(y - c)^+d \mu(y), \;\;\;\; D_\mu^-(c) = \int(y - c)^-d \mu(y) \;\;\;\;\;\text{and}\; \;\;\;\; D_\mu(c) = \int|y - c|d \mu(y). \end{equation*} \begin{theorem}[Theorem 1.8, \cite{dauvergne1}] \label{T:particle-swaps} Let $S(0)$ be the asymptotic speed of particle $0$ in $U$. Then, as $t \to \infty$, $$ \frac{Q(U, t)}t \to D_\mu(S(0)) \qquad \text{ almost surely and in } L^1. $$ In particular, the random variables $Q(U, t)/t$ are uniformly integrable and $\mathbb{E} D_\mu(S(0)) = 8/\pi$. \end{theorem} We also need an analogous result regarding crossings of lines in the local limit. Let $L(t) = ct + d$ be a line of constant slope $c$. For a swap function $V$, define $$ C^+(V, L, t) = \card{\Big\{x \in \mathds{Z}: V(x, 0) \le L(0), V(x, t) > L(t)\Big\}}. $$ $C^+(V, L, t)$ is the total number of particles that are below $L$ at time $0$ and above $L$ at time $t$. We symmetrically define $C^-(V, L, t)$ as the total number of particles that are above $L$ at time $0$ and below $L$ at time $t$, and let $C(V, L, t) = C^-(V, L, t) + C^+(V, L, t)$. \begin{theorem}[Theorem 5.7, \cite{dauvergne1}] \label{T:line-rate} Let $L(t) = ct + d$. Then almost surely and in $L^1$, we have that \begin{align*} \lim_{t \to \infty} \frac{C^+(U, L, t)}t = D_\mu^+(c), \qquad \lim_{t \to \infty} \frac{C^-(U, L, t)}t = D_\mu^-(c), \qquad \lim_{t \to \infty} \frac{C(U, L, t)}t = D_\mu(c). \end{align*} \end{theorem} We also record here a few basic facts about the functions $D_\mu$ and $D_\mu^+$ that will be used throughout the paper. These properties can be proven using basic facts about integrals, and the fact that $\mu$ is symmetric and supported in $[-\pi, \pi]$.
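Several of these properties depend only on $\mu$ being symmetric and supported in $[-\pi, \pi]$, so they can be sanity-checked numerically. The sketch below (ours, and not a proof) does this with a toy uniform measure standing in for $\mu$; the actual local speed distribution is not uniform:

```python
import math

# Numerical sanity check of properties of D_mu and D_mu^+ that hold for
# ANY symmetric mu supported in [-pi, pi].  As a stand-in for mu we use a
# toy measure: the discrete uniform measure on a symmetric grid.
ys = [math.pi * (2 * i / 1000 - 1) for i in range(1001)]

def D(c):        # D_mu(c) = int |y - c| dmu(y)
    return sum(abs(y - c) for y in ys) / len(ys)

def D_plus(c):   # D_mu^+(c) = int (y - c)^+ dmu(y)
    return sum(max(y - c, 0.0) for y in ys) / len(ys)

assert abs(D(1.0) - D(-1.0)) < 1e-9            # D_mu is symmetric
assert D_plus(math.pi) == 0.0                  # D_mu^+(pi) = 0
assert abs(D(math.pi) - math.pi) < 1e-9        # D_mu(pi) = pi
assert 2 * D(0.0) <= D(-0.5) + D(0.5)          # midpoint convexity
assert abs(D(0.3) - D(0.2)) <= 0.1 + 1e-9      # 1-Lipschitz
```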
\begin{lemma} \label{L:convex} \begin{enumerate}[nosep, label=(\roman*)] \item Both $D_\mu$ and $D_\mu^+$ are convex, $1$-Lipschitz functions. \item For all $x$, we have that $D_\mu^+(x) \le D_\mu(x) \le |x| \vee \pi$. \item Suppose that $L(t) = at + b$ is tangent to either $D_\mu$ or $D_\mu^+$. Then $b \in [0, \pi]$. \item $D_\mu$ is a symmetric function, and hence minimized at $0$. \item $D_\mu^+(\pi) = 0$, and $D_\mu(\pm \pi) = \pi$. \end{enumerate} \end{lemma} \bigskip \noindent \textbf{Subsequential limits of $Y_n$.} \qquad Recall that $Y_n$ is the trajectory random variable of $\sigma^n$. We record here the main result of \cite{dauvergne1} regarding subsequential limits of $Y_n$. Here and throughout the paper, the phrase ``subsequential limit of $Y_n$'' always refers to a subsequential limit of $Y_n$ in distribution. \medskip We say that a path $y \in \mathcal{D}$ is $g(y)$-Lipschitz if $y$ is absolutely continuous and if for almost every $t$, $|y'(t)| \le g(y(t))$. \begin{theorem}[Theorem 1.4, \cite{dauvergne1}] \label{T:bounded-speed} \begin{enumerate}[nosep, label=(\roman*)] \item The sequence $\{Y_n\}$ is precompact in distribution. \item Suppose that $Y$ is a subsequential limit of $Y_n$ (in distribution). Then $$ \mathbb{P}\bigg(Y \text{ is } \pi\sqrt{1-y^2}\text{-Lipschitz},\;Y(0) = -Y(1) \bigg) = 1. $$ Moreover, $Y(t)$ is uniformly distributed on $[-1, 1]$ for every $t$. \end{enumerate} \end{theorem} In addition, we observe here that any subsequential limit $Y$ of $Y_n$ inherits certain symmetries from $\sigma^n$. \begin{prop} \label{P:Y-symmetries} Let $Y$ be any subsequential limit of $Y_n$. \smallskip \begin{enumerate}[nosep, label=(\roman*)] \item Define $Y_t \in \mathcal{D}$ by $$ Y_t(s) = \begin{cases} \;\;Y(s + t), \quad &s \le 1 - t.\\ \;\; -Y(s + t - 1), \quad &s > 1 - t. \end{cases} $$ For any $t \in [0, 1]$, we have that $Y_t \stackrel{d}{=} Y$. \item $Y \stackrel{d}{=} - Y$. \item Define $Z \in \mathcal{D}$ by $Z(t) = Y(1 - t)$.
Then $Z \stackrel{d}{=} Y$. \end{enumerate} \end{prop} \begin{proof} Property (i) follows from time stationarity of random sorting networks (Theorem \ref{T:time-stat}). Properties (ii) and (iii) follow from the corresponding properties of the swap sequence $(K^n_1, \dots, K^n_N)$ of $\sigma^n$: \[ \left(K^n_1, K^n_2, \mathellipsis K^n_N\right) \stackrel{d}{=} \left(n - K^n_1, n - K^n_2, \mathellipsis, n - K^n_{N}\right) \stackrel{d}{=} \left(K^n_N, \mathellipsis, K^n_2, K^n_1\right). \qquad \qedhere \] \end{proof} \section{Particle flux for Lipschitz paths} \label{S:flux-intro} In this section, we introduce particle flux and prove that it measures the number of particles that cross a path in a typical sorting network. Let $\text{\fontfamily{ppl}\selectfont Lip}$ be the set of Lipschitz paths from $[0, 1]$ to $[-1, 1]$ (we will use this notation throughout the paper). Define the \textbf{local speed} of a function $h$ at time $t$ by $$ s(t) = \frac{d(\arcsin(h))}{dt} = \begin{cases} \frac{h'(t)}{\sqrt{1 - h^2(t)}}, \quad &|h(t)| < 1 \\ 0, \quad &|h(t)| = 1. \end{cases} $$ When $h \in \text{\fontfamily{ppl}\selectfont Lip}$, the local speed $s(t)$ exists for almost every time $t$. Recalling the definition of $D_\mu$ from the previous section, we then define the {\bf particle flux} of $h$ over a set $A$ by \begin{equation} \label{E:path-flux} J(h; A) = \frac{1}2 \int_A D_\mu(s(t)) \sqrt{1-h^2(t)}dt. \end{equation} We define $J(h) = J(h; [0, 1])$. Note that $J(h) < \infty$ for any Lipschitz function $h$. This follows from Lemma \ref{L:convex} (ii), which implies that $$ D_\mu(s(t)) \sqrt{1-h^2(t)} \le [\pi \vee |s(t)|] \sqrt{1-h^2(t)} \le \pi \vee |h'(t)|. $$ We will also consider {\bf positive particle flux} $J^+(h; A)$ and {\bf negative particle flux} $J^-(h; A)$ defined by $$ J^+(h; A) = \frac{1}2 \int_A D^+_\mu(s(t)) \sqrt{1-h^2(t)}dt, \qquad J^-(h; A) = \frac{1}2 \int_A D^-_\mu(s(t)) \sqrt{1-h^2(t)}dt.
$$ Again, we let $J^+(h) = J^+(h; [0, 1])$ and $J^-(h) = J^-(h; [0, 1])$. \medskip We now connect flux to random sorting networks. If a random sorting network resembles the local limit in a local window around the global space-time position $(t, h(t))$, then by Theorem \ref{T:line-rate}, the number of distinct particles that cross $h$ in this window should be proportional to $$ D_\mu\left(\frac{h'(t)}{\sqrt{1 -h^2(t)}}\right) \sqrt{1-h^2(t)}. $$ The scaling factors of $\sqrt{1-h^2}$ come from the semicircle rescaling of time in the local limit away from the center. \medskip Therefore in a typical large-$n$ sorting network, where most local windows resemble the local limit, $J(h)$ should be proportional to the number of particles that cross the path $h$, counting multiple crossings for a given particle if and only if the crossings happen at globally distinguishable locations. \medskip The factor of $1/2$ is to account for the difference between the ${n \choose 2}^{-1}$ scaling in the global limit and the $n^{-1}$ scaling in the local limit. Similarly, $J^+(h)$ and $J^-(h)$ should be proportional to the number of upcrossings and downcrossings of the path $h$ in a large-$n$ sorting network. \medskip Now let $\text{\fontfamily{ppl}\selectfont Lip}_r$ be the set of paths $h \in \text{\fontfamily{ppl}\selectfont Lip}$ with $h(0) = -h(1)$. If $h \in \text{\fontfamily{ppl}\selectfont Lip}_{r}$, then in any sorting network, every particle must cross $h$ at least once unless the particle starts at $h(0)$. Therefore $J(h)$ should be bounded below by $1$ for such paths. Since any two particles in a sorting network cross each other exactly once, $J(h)$ should achieve this lower bound when $h$ is a trajectory limit. \medskip The next theorem makes rigorous this intuition behind the definition of particle flux.
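Before stating it, we note that the flux functional \eqref{E:path-flux} is straightforward to evaluate numerically. The following sketch (ours, purely illustrative) discretizes the integral; the toy uniform measure again stands in for the actual local speed distribution $\mu$:

```python
import math

# A discretization of the particle flux
#     J(h) = (1/2) * int_0^1 D_mu(s(t)) * sqrt(1 - h(t)^2) dt,
# where s = h' / sqrt(1 - h^2).  Illustrative sketch only: mu below is a
# TOY uniform measure on [-pi, pi], not the paper's local speed
# distribution.
ys = [math.pi * (2 * i / 1000 - 1) for i in range(1001)]

def D(c):
    return sum(abs(y - c) for y in ys) / len(ys)

def flux(h, steps=400):
    total, dt = 0.0, 1.0 / steps
    for j in range(steps):
        t = (j + 0.5) * dt
        ht = h(t)
        dh = (h(t + dt / 2) - h(t - dt / 2)) / dt   # midpoint derivative
        w = math.sqrt(max(1.0 - ht * ht, 0.0))
        s = dh / w if w > 0 else 0.0
        total += 0.5 * D(s) * w * dt
    return total

# For the constant path h == 0 the local speed vanishes, so
# J(h) = D_mu(0) / 2, which equals pi/4 for the toy uniform measure.
print(flux(lambda t: 0.0))   # close to pi/4
```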
To state the theorem, for a random variable $Y \in \mathcal{D}$ and a path $h \in \text{\fontfamily{ppl}\selectfont Lip}$, we define $$ P^+_Y(h; [a, b]) = \mathbb{P} \Big( \exists s < t \in [a, b] \text{ such that } Y(s) < h(s) \;\text{and}\; Y(t) > h(t) \Big). $$ In other words, $P^+_Y(h; [a, b])$ is the probability that $Y$ upcrosses $h$ in the interval $[a, b]$. We similarly define the downcrossing probability $$ P^-_Y(h; [a, b]) = \mathbb{P} \Big( \exists s < t \in [a, b] \text{ such that } Y(s) > h(s) \;\text{and}\; Y(t) < h(t) \Big). $$ \begin{theorem} \label{T:particle-crossings} Let $Y$ be any subsequential limit of $Y_n$. \begin{enumerate}[label=(\roman*)] \item Let $h \in \text{\fontfamily{ppl}\selectfont Lip}$ and $[a, b] \subset [0, 1]$. Then $P^+_Y(h; [a, b]) \le J^+(h; \; [a, b])$ and $P^-_Y(h; [a, b]) \le J^-(h; [a, b])$. \item Let $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$. Then $J(h) \ge 1$. \item $\mathbb{P}(J(Y) = 1) = 1$. \end{enumerate} \end{theorem} To prove this theorem, we first need two key lemmas about convergence to the local limit. Recall that $C^+(V, L, t)$ is the number of upcrossings of a line $L(s) = cs + d$ in the interval $[0, t]$ in a swap function $V$. Recall also the definition of $U_n^{i}$ from Section \ref{S:local}.\begin{lemma} \label{L:L1-swap-rate} Let $\alpha \in (-1, 1)$, and suppose that $\{i_n\}_{n \in \mathbb{N}}$ is a sequence of integers such that $i_n/n \to (1 + \alpha)/2$. Let $X$ be a uniform random variable on $[0, 1]$ that is independent of $U, U_n^{i_n}$, let $\{c_n\}_{n \in \mathbb{N}}$ be a sequence of real numbers converging to $c \in \mathbb{R}$, and let $\{d_n\}_{n \in \mathbb{N}}$ be a sequence of real numbers in $[0, 1]$. Define $$ L_n(s) = c_ns + d_n + X, \quad \text{and} \quad L(s) = cs + X. $$ Then for any time $t \in (0, \infty)$, we have that \begin{enumerate}[label=(\roman*)] \item $C^+(U_n^{i_n}, L_n, t) \stackrel{d}{\to} C^+(U, L, t)$ as $n \to \infty$. 
\item $\mathbb{E} C^+(U_n^{i_n}, L_n, t) \to \mathbb{E} C^+(U, L, t)$ as $n \to \infty$. \item $\mathbb{E} C^+(U_n^{i_n}, L_n, t) \le 3t + |c_n t| + 2$ for all $n$. \end{enumerate} \end{lemma} \begin{proof} First note that $d_n + X \Mod 1 \stackrel{d}{=} X$ for all $n$. Therefore by the stationarity of $U$ with respect to integer-valued spatial shifts (Theorem \ref{T:local} (i)), we have that \begin{equation} \label{E:dn-shift} C^+(U^{i_n}_n, L_n, t) \stackrel{d}{\to} C^+(U, L, t) \quad \text{if and only if} \quad C^+(U^{i_n}_n, c_ns + X, t) \stackrel{d}{\to} C^+(U, L, t). \end{equation} Now if $V_n \in \mathcal{A}$ is a sequence of swap functions converging to swap function $V$, then $$ C^+(V_n, c_ns + X, t) \to C^+(V, L, t) $$ unless $V$ either has a swap at time $t$, or $ct + X \in \mathds{Z}$. By Theorem \ref{T:local} (iv), for any time $t$, the event where $U$ has a swap at time $t$ has probability $0$. Moreover, the probability that $ct + X \in \mathds{Z}$ is also $0$. Therefore $$ C^+(U^{i_n}_n, c_ns + X, t) \stackrel{d}{\to} C^+(U, L, t), $$ and hence (i) follows by statement \eqref{E:dn-shift}. Now recall that $W(V, t)$ is the number of swaps at location $0$ in a swap function $V$ in the interval $[0, t]$. For any swap function $V$ and any line $H(s) = as+b$, we have that \begin{equation} \label{E:C-W} C^+(V, H, t) \le W(V, t) + |a t| + 2. \end{equation} To see why this is true, observe that every particle $x$ with $x \le 0$ and $V(x, t) > 1$ must move from position $0$ to position $1$ at least once in the interval $[0, t]$, therefore contributing to $W(V, t)$. Every particle $x$ that upcrosses $H$ in the interval $[0, t]$ fits this description, unless $$ at \le V(x, t) \le 1. $$ There are at most $|a t| + 2$ such values of $x$, proving \eqref{E:C-W}. \medskip Now again since $U$ has no swap at time $t$ almost surely (Theorem \ref{T:local} (iv)), $W(U_n^{i_n}, t) \stackrel{d}{\to} W(U, t)$. 
Also, by Theorem \ref{T:swaps}, we have that $$ \mathbb{E} W(U_n^{i_n}, t) \to \mathbb{E} W(U, t). $$ Hence, the random variables $W(U_n^{i_n}, t)$ are uniformly integrable (see Proposition 3.12, \cite{kallenberg2006foundations}). Therefore by \eqref{E:C-W} applied to the swap processes $U_n^{i_n}$ and the lines $L_n$, the random variables $C^+(U_n^{i_n}, L_n, t)$ are also uniformly integrable, and hence converge in expectation since they converge in distribution (again by Proposition 3.12, \cite{kallenberg2006foundations}). \medskip Finally, the bound on the probability distribution for the first swap location in a random sorting network (Theorem \ref{T:dist-k1}) and time stationarity (Theorem \ref{T:time-stat}) allow us to conclude that for any $n$, $i_n$, and $t$, $$ \mathbb{E} W(U_n^{i_n}, t) \le 3t, $$ which in turn proves the bound on the expectation of $C^+(U_n^{i_n}, L_n, t)$ via \eqref{E:C-W}. \end{proof} The next lemma is similar to Lemma \ref{L:L1-swap-rate}, but deals with particle speeds rather than upcrossing rates. For a swap function $V$ we define $$ S(V, t) = \frac{V(0, t)}{t}. $$ This is the average speed of particle $0$ in the interval $[0, t]$. \begin{lemma} \label{L:L1-flux} Let $\{U_n^{i} : i \in \{1, \dots, n\}, n \in \mathbb{N}\}$ be the array of locally scaled random sorting networks defined in Section \ref{S:local}, and let $U$ be the local limit. Let $I$ be a uniform random variable on $(-1, 1)$, independent of everything else. For each $n$, define $I_n = \ceil{n(I + 1)/2}$, let $$ R_n = \sqrt{1 - \left[\frac{2I_n}n - 1\right]^2}, \qquad \;\text{and}\; \qquad R = \sqrt{1 - I^2}. $$ Then the following statements hold. \begin{enumerate}[label=(\roman*)] \item For any $t \in (0, \infty)$, we have that $R_nD_\mu(S(U_n^{I_n}, R_nt)) \xrightarrow[n \to \infty]{d} RD_\mu(S(U, R t))$.
\item For any $t \in (0, \infty)$, we have that $\mathbb{E} R_n D_\mu(S(U_n^{I_n}, R_nt)) \xrightarrow[n \to \infty]{} \mathbb{E} R D_\mu \left(S(U, R t) \right)$. \item $\mathbb{E} R D_\mu \left(S(U, R t) \right) \to 2$ as $t \to \infty$. \end{enumerate} \end{lemma} \begin{proof} Fix $t \in (0, \infty)$. If $V_n$ is a sequence of swap functions converging to a swap function $V$ with no swap at time $t$, then $D_\mu(S(V_n, t)) \to D_\mu(S(V, t))$. Now condition on $I$. This fixes $R_n, I_n,$ and $R$. For any fixed time $t \in (0, \infty)$, the process $U$ has no swap at time $t$ almost surely (Theorem \ref{T:local} (iv)). Therefore for any bounded continuous test function $f$, we have that $$ \mathbb{E} \bigg[f\Big(R_nD_\mu(S(U_n^{I_n}, R_nt))\Big) \; \Big | \; I \bigg] \to \mathbb{E} \bigg[f\Big(RD_\mu(S(U,Rt))\Big) \; \Big| \; I \bigg]. $$ Taking expectations proves the distributional convergence in (i). Now, by Lemma \ref{L:convex}, we have that $D_\mu(s) \le |s| + \pi$ for any $s \in \mathbb{R}$. Recalling that $Q(V, t)$ is the number of swaps made by particle $0$ in the interval $[0, t]$ in a swap process $V$, this implies that \begin{equation} \label{E:Q-bd} R_nD_\mu(S(U_n^{I_n}, R_nt)) \le R_n\left(|S(U_n^{I_n}, R_nt)| + \pi\right) \le \frac{Q(U_n^{I_n}, R_nt)}{t} + \pi. \end{equation} Now, we similarly have that $Q(U_n^{I_n}, R_nt) \stackrel{d}{\to} Q(U, Rt)$, again since $U$ has no swap at time $t$ almost surely. Moreover, $$ \mathbb{E} Q(U_n^{I_n}, R_nt) = \frac{2\floor{nt}}n, $$ since this expectation is simply the expected number of swaps made by a random particle in a sorting network after $\floor{nt}$ steps. By Theorem \ref{T:swaps-2}, we also have that $$ \mathbb{E} Q(U, Rt) = \frac{4t}{\pi} \int_{-1}^1 \sqrt{1 - x^2}dx = 2t, $$ so $\mathbb{E} Q(U_n^{I_n}, R_nt) \to \mathbb{E} Q(U, Rt)$.
Again, by Proposition 3.12 from \cite{kallenberg2006foundations}, this implies that the random variables $\{Q(U_n^{I_n}, R_nt)\}$ are uniformly integrable, and hence so are the random variables $R_n D_\mu(S(U_n^{I_n}, R_nt))$. Since these random variables converge in distribution to $R D_\mu(S(U, R t))$, they must also converge in expectation, proving (ii). \medskip Now we prove (iii). First, Theorem \ref{T:local-speeds} implies that \begin{equation} \label{E:D-mu-lim} R D_\mu(S(U, Rt)) \xrightarrow[t \to \infty]{} R D_\mu(S(0)) \quad \text{almost surely}, \end{equation} where $S(0)$ is the speed of particle $0$ in $U$. Analogously to \eqref{E:Q-bd}, we also have that $$ R D_\mu(S(U, R t)) \le \frac{Q(U, t)} {t} + \pi. $$ By Theorem \ref{T:particle-swaps}, the right hand side above is uniformly integrable, and hence so is the left hand side. Therefore since $RD_\mu(S(U, R t))$ has an almost sure limit by \eqref{E:D-mu-lim}, it also converges in expectation. Finally, Theorem \ref{T:particle-swaps} implies that \begin{equation*} \mathbb{E} R D_\mu(S(0)) = 2. \qedhere \end{equation*} \end{proof} To prove Theorem \ref{T:particle-crossings}, we also need two auxiliary results. The first is a simple lemma about uniform convergence of functions. This will be used in the proof of Theorem \ref{T:particle-crossings} (i). \begin{lemma} \label{L:mesh} Let $f_n:[0, 1] \to [-1, 1]$ be a sequence of functions converging uniformly to a continuous function $f$, and let $h:[0, 1] \to [-1, 1]$ be any continuous function. Let $$ \{\Pi_n = \{t_{n, 0} =0< t_{n, 1} < \dots < t_{n, m(n)}=1\}\}_{n \in \mathbb{N}} $$ be a sequence of partitions of $[0, 1]$ such that $$ \text{mesh}(\Pi_n) = \max_{i \in \{0, 1, \dots, m(n) - 1\}} |t_{n, i + 1} - t_{n, i}| \to 0 \qquad \;\text{as}\; n \to \infty. $$ Let $[a, b] \subset [0, 1]$. 
If there exist times $s < t \in [a, b]$ such that $f(s) < h(s)$ and $f(t) > h(t)$, then for all large enough $n$, there exists a time $t_{n, i} \in [a, b]$ such that $f_n(t_{n, i}) \le h(t_{n, i})$ and $f_n(t_{n, i + 1}) > h(t_{n, i + 1})$. \end{lemma} \begin{proof} By the continuity of $f$ and $h$, there exists an $\epsilon > 0$ and disjoint intervals $[s, s_+]$ and $[t_-, t]$ such that $f(r) < h(r) - \epsilon$ for all $r \in[s, s_+]$ and $f(r) > h(r) + \epsilon$ for all $r \in [t_-, t]$. Therefore by uniform convergence, for all large enough $n$ we have that $f_n(r) < h(r) - \epsilon/2$ for all $r \in[s, s_+]$ and $f_n(r) > h(r) + \epsilon/2$ for all $r \in [t_-, t]$. Now, for large enough $n$ we also have that $$ \text{mesh}(\Pi_n) < \min(s_+ - s, t - t_-). $$ Therefore for such $n$, there must exist indices $i_1 < i_2$ with $t_{n, i_1}, t_{n, i_2} \in [a, b]$ such that $f_n(t_{n, i_1}) < h(t_{n, i_1})$ and $f_n(t_{n, i_2}) > h(t_{n, i_2})$. Thus for some $i \in \{i_1, \dots, i_2 - 1\}$, we must have that $f_n(t_{n, i}) \le h(t_{n, i})$ and $f_n(t_{n, i + 1}) > h(t_{n, i + 1})$. \end{proof} To prove part (iii), we also need that $J(\cdot)$ is lower semicontinuous. \begin{prop} \label{P:lsc-E} Let $h_n \in \text{\fontfamily{ppl}\selectfont Lip}$ be a sequence converging uniformly to $h$. Then for any set $A \subset [0, 1]$, we have that $$ J(h; A) \le \liminf_{n \to \infty} J(h_n ; A). $$ \end{prop} \begin{proof} Note that $J(h; A)$ is of the form $\int_A G(h(t), h'(t))dt$, where $G$ is a continuous, positive function, such that for any fixed value of $a$, $G(a, \cdot)$ is convex. This follows from the convexity of $D_\mu$ (Lemma \ref{L:convex}). Functionals of this form are lower semicontinuous in the uniform norm by Theorem 1.6 from \cite{struwe1996variational}. \end{proof} \begin{proof}[Proof of Theorem \ref{T:particle-crossings}.]
\noindent \textbf{Proof of (i):} \qquad First note that, by the symmetry of $Y$ (Proposition \ref{P:Y-symmetries}), $$ P^-_Y(h; [a, b]) = P^+_{-Y}(-h; [a, b]) = P^+_{Y}(-h; [a, b]) $$ for any path $h \in \text{\fontfamily{ppl}\selectfont Lip}$ and any interval $[a, b]$. Also, $J^+(-h; [a, b]) = J^-(h; [a, b])$ by the symmetry of $\mu$ (Theorem \ref{T:local-speeds}). Therefore the assertion that $ P^-_Y(h; [a, b]) \le J^-(h; [a, b]) $ is equivalent to the assertion that $P^+_Y(-h; [a, b]) \le J^+(-h; [a, b])$, and so to prove Theorem \ref{T:particle-crossings} (i), it suffices to prove that $P^+_Y(h; [a, b]) \le J^+(h; [a, b])$ for every path $h \in \text{\fontfamily{ppl}\selectfont Lip}$. \medskip We first prove this for $h \in \text{\fontfamily{ppl}\selectfont Lip}$ with range in the open interval $(-1, 1)$. Let $t \in \mathds{Z} \cap (0, \infty)$, and for $s \in [0, 1]$, define $$ s_{n, t} = \frac{2t}{n-1} \floor{\frac{(n-1)s}{2t}}, \quad \;\text{and}\; \quad s^+_{n, t} = \min\left(s_{n, t} + \frac{2t}{n-1}, 1\right). $$ Let $X$ be a uniform random variable on $[0, 1]$, independent of the sequence $\{Y_n\}$, and let $$ A^+_{n, t, s} = \left\{ Y_n(s_{n, t}) < h(s_{n, t}) + \frac{2X}n \;\; \;\text{and}\; \;\; Y_n(s^+_{n, t}) \ge h(s^+_{n, t}) + \frac{2X}n \right\}. $$ Now observe that when $s < b_{n, t}$ (where $a_{n, t}$ and $b_{n, t}$ are defined analogously to $s_{n, t}$), then $s_{n, t}^+ \le b_{n, t}$, and so $s^+_{n, t} = s_{n, t} + 2t/(n-1)$. Therefore for such $s$, time stationarity of random sorting networks (Theorem \ref{T:time-stat}) implies that $A^+_{n, t, s}$ has the same probability as the event $$ \left\{ Y_n(0) < h(s_{n, t}) + \frac{2X}n \;\; \;\text{and}\; \;\; Y_n\left(\frac{2t}{n-1}\right) \ge h(s^+_{n, t}) + \frac{2X}n \right\}. $$ Here we have used that $t \in \mathds{Z}$ to apply time-stationarity. We can then express the upcrossing probability $\mathbb{P}(A^+_{n, t, s})$ in terms of the expected number of upcrossings in the local scaling.
For $u \in [-1, 1]$, define $$ [u]_n = \floor{\frac{n(u + 1)}2}, \quad \;\; \{u\}_n = \frac{n(u + 1)}2 - \floor{\frac{n(u + 1)}2}, \quad \;\; g_n(u) = \sqrt{1 - \left(\frac{2[u]_n}n - 1\right)^2}. $$ Then for $s < b_{n, t}$, we have that $$ n \mathbb{P}(A^+_{n, t, s}) = \mathbb{E} C^+\left(U_n^{[h(s_{n, t})]_n}, L_{n, s}, g_n(h(s_{n, t})) t \right), $$ where $$ L_{n, s}(r) = \frac{n\left(h(s^+_{n, t}) - h(s_{n, t})\right)}{2tg_n(h(s_{n, t}))} r + \{h(s_{n, t})\}_n + X. $$ Also, let $$ L_{s}(r) = \frac{h'(s)}{\sqrt{1 -h^2(s)}}r + X. $$ Since $h$ is Lipschitz and hence differentiable almost everywhere, Lemma \ref{L:L1-swap-rate} (ii) implies that for almost every $s \in [0, b)$, \begin{equation} \label{E:n-PA} \lim_{n \to \infty} n\mathbb{P}(A^+_{n, t, s}) = \lim_{n \to \infty} \mathbb{E} C^+\left(U_n^{[h(s_{n, t})]_n}, L_{n, s}, g_n(h(s_{n, t})) t \right) =\mathbb{E} C^+(U, L_s, \sqrt{1 - h^2(s)}t). \end{equation} Here we have used that the range of $h$ is in $(-1, 1)$ to ensure convergence. Moreover, there exists a constant $C$ such that \begin{equation} \label{E:finite-bd} n\mathbb{P}(A^+_{n, t, s}) \le Ct \end{equation} for all $n \in \mathbb{N}, s < b_{n, t}$. This follows from Lemma \ref{L:L1-swap-rate} (iii) and the fact that $h$ is Lipschitz. Now let $Z_{n, t}$ be the number of times $s$ of the form $s_{n, t}$ in the interval $[a_{n, t}, b_{n, t})$ such that the upcrossing event $A^+_{n, t, s}$ occurs. We have that $$ Z_{n, t} = \int_{a_{n, t}}^{b_{n, t}} \frac{(n-1)\mathbbm{1} (A^+_{n, t, s})}{2t} ds. $$ Therefore by the bounded convergence theorem, we have that \begin{equation} \label{E:cvg-ab-flux} \mathbb{E} Z_{n, t} = \int_{a_{n, t}}^{b_{n, t}} \frac{(n-1)\mathbb{P} (A^+_{n, t, s})}{2t} ds \xrightarrow[n \to \infty]{} \frac{1}{2} \int_{a}^b \frac{\mathbb{E} C^+(U, L_s, \sqrt{1 - h^2(s)}t)}t ds. \end{equation} Now, $\mathbb{E} Z_{n, t} \ge \mathbb{P}(Z_{n, t} \ge 1)$ by Markov's inequality.
Therefore for any subsequential limit $Y$ of $Y_n$, Lemma \ref{L:mesh} applied to a subsequence $Y_{n_i} - 2X/n_i \to Y$ implies that $$ \lim_{n \to \infty} \mathbb{E} Z_{n, t} \ge \liminf_{i \to \infty} \mathbb{P}(Z_{n_i, t} \ge 1) \ge P^+_Y(h; [a, b]). $$ The integrand on the right hand side of \eqref{E:cvg-ab-flux} is bounded above by $C$ for all $t \in \mathds{Z} \cap (0, \infty)$ by \eqref{E:n-PA} and \eqref{E:finite-bd}. Therefore by Theorem \ref{T:line-rate} and the bounded convergence theorem, the right hand side of \eqref{E:cvg-ab-flux} converges to $$ \frac{1}{2} \int_{a}^b D^+_\mu\left(\frac{h'(s)}{\sqrt{1 - h^2(s)}}\right) \sqrt{1 - h^2(s)} ds = J^+(h; \; [a, b]) \qquad \;\text{as}\; \;\; t \to \infty. $$ This proves Theorem \ref{T:particle-crossings} (i) for $h \in \text{\fontfamily{ppl}\selectfont Lip}$ with range in $(-1, 1)$. Now we extend this to all $h \in \text{\fontfamily{ppl}\selectfont Lip}$. Define $h_m = h \vee(-1 + 1/m) \wedge (1 - 1/m)$, and let $$ A_m = \{t \in [0, 1] : h(t) \ge 1- 1/m \text{ or } h(t) \le -1 + 1/m\}. $$ Letting $\mathcal{L}$ be Lebesgue measure on $[0, 1]$, we have that $$ J^+(h_m ; A_m) = \frac{1}2 D^+_\mu(0)\sqrt{1 - (1- 1/m)^2}\mathcal{L}(A_m), \qquad \;\text{and}\; \qquad J^+(h_m ; A_m^c) = J^+(h ; A_m^c). $$ The flux $J^+(h_m ; A_m) \to 0$ as $m \to \infty$, so $$ \limsup_{m \to \infty} J^+(h_m; [a, b]) \le J^+(h; [a, b]). $$ Moreover, $h_m$ converges uniformly to $h$, so if $Y$ upcrosses $h$, then $Y$ will eventually upcross $h_m$. Therefore $$ \liminf_{m \to \infty} P^+_Y(h_m; [a, b]) \ge P^+_Y(h; [a, b]). $$ Putting these two inequalities together completes the proof. \medskip \noindent \textbf{Proof of (ii):} \qquad Let $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$, and let $Y$ be a subsequential limit of $Y_n$.
Then by Theorem \ref{T:particle-crossings} (i), \begin{align*} \mathbb{P} \Big(\exists r < t \in [0, 1] \text{ such that either } Y(r) < h(r), \;Y(t) > h(t),\text{ or } &Y(r) > h(r), \;Y(t) < h(t) \Big) \\ &\le J^+(h) + J^-(h) = J(h). \end{align*} The event above holds unless $Y(0) = h(0)$, since $Y(0) = - Y(1)$ almost surely by Theorem \ref{T:bounded-speed}. Since $Y(0)$ is uniformly distributed (Theorem \ref{T:bounded-speed}), $Y(0) \ne h(0)$ with probability one, so the left hand side above is equal to $1$, and hence $J(h) \ge 1$. \medskip \noindent \textbf{Proof of (iii):} \qquad Let $Y$ be any subsequential limit of $Y_n$. Fix $t \in (0, \infty) \cap \mathds{Z}$, and define $$ t_{n, j} = \frac{2jt}{n-1} \quad \text{for } j \in \{0, \mathellipsis, \floor{(n-1)/(2t)}\} \quad \text{ and let } t_{n, \floor{\frac{n-1}{2t}} + 1} = 1. $$ Define $Y_{n, t}$ so that $ Y_{n, t}\left(t_{n, j}\right) = Y_{n}\left(t_{n, j}\right)$ for $j \in \{0, \mathellipsis, \floor{(n-1)/(2t)} + 1\}$, and so that $Y_{n, t}$ is linear at times in between. By time stationarity of random sorting networks (Theorem \ref{T:time-stat}), we can write \begin{equation} \label{E:JY-ineq} \mathbb{E} J(Y_{n, t}) = \sum_{j=0}^\floor{\frac{n-1}{2t}} \mathbb{E} J(Y_{n, t} ; \; [t_{n, j}, t_{n, j+1}] ) \le \left(\floor{\frac{n-1}{2t}} + 1\right) \mathbb{E} J(Y_{n, t} ; \; [0, t_{n, 1}]). \end{equation} Here we have used that $t \in \mathds{Z}$ to apply time stationarity of sorting networks. The final term in the sum above may be slightly smaller than the previous terms since the length of the interval may be less than $2t/(n-1)$; this gives rise to the inequality. \medskip Now we have that \begin{align} \nonumber J(Y_{n, t} ; \; [0, t_{n, 1}]) &= \frac{1}{2} \int_0^{t_{n, 1}} \int \left|Y_{n, t}'(0)- \sqrt{1 - Y_{n, t}^2(r)}x\right|d\mu(x) dr \\ \label{E:frac-en} &= \frac{1}{2} \int_0^{t_{n, 1}} \int \left|Y_{n, t}'(0)- \sqrt{1 - Y_{n, t}^2(0)}x\right|d\mu(x) dr + \epsilon_n \end{align} for some error term $\epsilon_n$.
Here we have used that $Y'_{n, t}(r) = Y'_{n, t}(0)$ for all $r \in [0, t_{n, 1}]$ by piecewise linearity of $Y_{n, t}$. Now if $|Y_{n, t}'(0)| \ge \pi$, then since $\mu$ is symmetric and supported in $[-\pi, \pi]$ (Theorem \ref{T:local-speeds}), the two inner integrals are the same. Therefore $\epsilon_n = 0$ in this case. Also, when $ |Y_{n, t}'(0)| < \pi $ and $x \in [-\pi, \pi]$, the difference of the integrands is bounded above by $$ \left|\sqrt{1 - Y_{n, t}^2(0)}x - \sqrt{1 - Y_{n, t}^2(r)}x\right| \le |x|\sqrt{1 - \left(1 - \frac{2\pi t}{n-1}\right)^2} \le \pi\sqrt{\frac{4\pi t}{n-1}}. $$ Hence $|\epsilon_n| \le k(t/n)^{3/2}$ for some constant $k$. Now, letting $I_n = n(Y_{n, t}(0) + 1)/2$, and using the notation of Lemma \ref{L:L1-flux}, \eqref{E:frac-en} is equal to $$ \frac{t}{n-1}R_nD_\mu\left(S(U_n^{I_n}, R_n t)\right) + \epsilon_n. $$ Therefore by Lemma \ref{L:L1-flux} (ii) and the bound on $\epsilon_n$, \eqref{E:JY-ineq} implies that \begin{align*} \mathbb{E} J(Y_{n, t}) \le \left(\floor{\frac{n - 1}{2t}} + 1 \right)\left[\frac{t}{n-1} \mathbb{E} R_nD_\mu\left(S(U_n^{I_n}, R_n t)\right) + \mathbb{E} \epsilon_n \right]\xrightarrow[n \to \infty]{} \frac{1}2 \mathbb{E} R D_\mu \left(S(U, Rt) \right). \end{align*} Letting $t \to \infty$, Lemma \ref{L:L1-flux} (iii) then implies that \begin{equation} \label{E:flux-lim} \limsup_{t \to \infty} \limsup_{n \to \infty} \mathbb{E} J(Y_{n, t}) \le 1. \end{equation} Since subsequential limits of $Y_n$ are Lipschitz by Theorem \ref{T:bounded-speed}, $Y$ is a subsequential limit of $Y_n$ if and only if $Y$ is a subsequential limit of $Y_{n, t}$. Therefore, by Fatou's lemma and the lower semicontinuity of $J(\cdot)$ (Proposition \ref{P:lsc-E}), \begin{equation*} \liminf_{n \to \infty} \mathbb{E} J(Y_{n, t}) \ge \mathbb{E} J(Y) \end{equation*} for all $t$, so $\mathbb{E} J(Y) \le 1$ by \eqref{E:flux-lim}. Moreover, $J(Y) \ge 1$ almost surely by Theorem \ref{T:particle-crossings} (i), so $J(Y) = 1$ almost surely. 
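For the reader's convenience, the elementary estimate behind the bound $|\epsilon_n| \le k(t/n)^{3/2}$ above can be spelled out (a routine verification; the constant $k$ absorbs the numerical factor, for $n \ge 2$):

```latex
% With x = 2\pi t/(n-1), we have 1 - (1-x)^2 = x(2-x) \le 2x, so
\[
\sqrt{1 - \left(1 - \frac{2\pi t}{n-1}\right)^2} \le \sqrt{\frac{4\pi t}{n-1}},
\]
% and integrating this pointwise bound (with the prefactor 1/2) over the
% interval [0, t_{n,1}] of length 2t/(n-1) gives
\[
|\epsilon_n| \le \frac{1}{2} \cdot \pi\sqrt{\frac{4\pi t}{n-1}} \cdot \frac{2t}{n-1}
= 2\pi^{3/2}\left(\frac{t}{n-1}\right)^{3/2} \le k\left(\frac{t}{n}\right)^{3/2}.
\]
```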
\end{proof} \section{Characterization of minimal flux paths} \label{S:unique} In this section, we show that any subsequential limit $Y$ of $Y_n$ is uniquely determined by its initial position, maximum absolute value, and whether it is initially increasing or decreasing. By Theorem \ref{T:particle-crossings} (iii), it is enough to characterize paths $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with $J(h) = 1$. \medskip Let $\text{\fontfamily{ppl}\selectfont Lip}_0$ be the set of paths $h \in \text{\fontfamily{ppl}\selectfont Lip}$ with $h(0) = h(1) = 0$. We first observe that to characterize minimal flux paths $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$, it is enough to characterize minimal flux paths $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$. This is a consequence of the following simple fact. \begin{lemma} \label{L:path-shift} Let $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$ and $t_0 = \inf \{t : h(t) = 0\}$. Define the path \begin{equation*} h_0(t) = \begin{cases} &h(t + t_0), \qquad \quad \quad t \le 1- t_0 \\ &-h(t + t_0 - 1), \qquad t > 1 - t_0. \end{cases} \end{equation*} Then $J(h) = J(h_0)$. In particular, every path $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with flux $J(h) = 1$ can be shifted to a path $h_0 \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $J(h_0) = 1$. \end{lemma} We build up to the following characterization of minimal flux paths in $\text{\fontfamily{ppl}\selectfont Lip}_0$. Define the maximum height $m(h)$ of a continuous path $h:[0, 1] \to [-1, 1]$ by $$ m(h) = \max \{ |h(t)| : t \in [0, 1] \}. $$ Recall also the definition of $g(y)$-Lipschitz from Section \ref{S:local}. \begin{theorem} \label{T:path-set} For every $m \in [0, 1]$, there exists exactly one $\pi\sqrt{1- h^2}$-Lipschitz path $h_m \in \text{\fontfamily{ppl}\selectfont Lip}_0$ such that $J(h_m) = 1$, $m(h_m) = m$, and $h_m \ge 0$. The paths $h_m$ satisfy the following properties. 
\smallskip \begin{enumerate}[nosep,label=(\roman*)] \item $h_m(1/2) = m$, and $h_m(t) = h_m(1 - t)$ for $t \in [0, 1/2]$. \item $h_m$ is strictly increasing on the interval $[0, 1/2]$. \item For any $\ell \in [0, m]$, we have that $h_{\ell}(s) \le h_{m}(s)$ for all $s \in [0, 1]$. \end{enumerate} \smallskip Now define $h_{-m} = -h_m$. If $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$ is a $\pi\sqrt{1 -h^2}$-Lipschitz path with $J(h) = 1$ and $m(h) = m$, then either $h = h_m$ or $h = h_{-m}$. \end{theorem} Theorem \ref{T:path-set} will be proven as Proposition \ref{P:monotone-traj}, Proposition \ref{P:unique-path}, Corollary \ref{C:sym}, Lemma \ref{L:no-cross}, and Proposition \ref{P:exist-m}. We also state an analogue of Theorem \ref{T:path-set} for minimal flux paths $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$. First, for a path $h \in \text{\fontfamily{ppl}\selectfont Lip}$, define $$ S(h) = \sup_{t \in [0, 1]} h(t) \qquad \;\text{and}\; \qquad I(h) = \inf_{t \in [0, 1]} h(t). $$ \begin{theorem} \label{T:unique-2} Fix $a \in [-1, 1]$ and $k \in [|a|, 1]$. Then the following hold: \begin{enumerate}[nosep,label=(\roman*)] \item There is exactly one $\pi\sqrt{1 - h^2}$-Lipschitz path $h_{a, k} \in \text{\fontfamily{ppl}\selectfont Lip}_r$ such that $h_{a, k}(0) = - h_{a, k}(1) = a$ and $m(h_{a, k}) = S(h_{a, k}) = k$. \item There is exactly one $\pi\sqrt{1 - h^2}$-Lipschitz path $h_{a, -k} \in \text{\fontfamily{ppl}\selectfont Lip}_r$ such that $h_{a, -k}(0) = - h_{a, -k}(1) = a$ and $-m(h_{a, -k}) = I(h_{a, -k}) = - k$. We have that $h_{a, -k}(t) = -h_{a, k}(1 - t)$ for all $t$. \item If $a \neq 0$, there is a unique time $t \in (0, 1)$ such that $h_{a, k}(t) = 0$. Moreover, the path $$ g(s) = \begin{cases} &h_{a, k}(t + s), \qquad s \le 1 - t, \\ &-h_{a, k}(t + s - 1), \qquad s > 1 - t \end{cases} $$ is equal to $h_k$ if $a < 0$ and $h_{-k}$ if $a > 0$. \item For $k_1 \le k_2 \in [-1, -|a|] \cup [|a|, 1]$, we have that $h_{a, k_1}(t) \le h_{a, k_2}(t)$ for all $t \in [0, 1]$. 
\item $h_{a, a} = h_{-a, a}$. \end{enumerate} \end{theorem} All parts of this theorem follow from applying Lemma \ref{L:path-shift} to Theorem \ref{T:path-set}, except part (iv). This will be proven in Lemma \ref{L:no-cross}. \subsection{Basic bounds on $D_\mu$} \label{SS:basic-bounds} In order to work with the functional $J(\cdot)$, we first prove a few basic bounds on $D_\mu$. \begin{lemma} \label{L:max-heights} Let $Y$ be a subsequential limit of $Y_n$. For all $a \in [0, 1]$, we have that $$ \mathbb{P}(m(Y) > a) \leq \frac{D_\mu(0)\sqrt{1- a^2}}{2}. $$ \end{lemma} \begin{proof} Since $Y(0) = -Y(1)$ and $Y(0)$ is uniformly distributed (Theorem \ref{T:bounded-speed}), the left hand side of this inequality can be written as \begin{align*} &\mathbb{P}\bigg(Y(0) < a \;\text{and}\; Y(t) > a \text{ for some } t > 0, \;\text{or}\; Y(0) > - a \;\text{and}\; Y(t) < - a \text{ for some } t > 0 \bigg). \end{align*} By Theorem \ref{T:particle-crossings}, this is bounded above by $J^+(g_a) + J^-(g_{-a})$, where $g_a$ is the path of constant height $a$. Finally, \begin{equation*} J^+(g_a) + J^-(g_{-a}) = \frac{D_\mu(0)\sqrt{1- a^2}}2. \qedhere \end{equation*} \end{proof} \begin{lemma} \label{L:expect-2} $D_\mu(0) = 2.$ That is, if $X$ is a random variable with distribution $\mu$, then $\mathbb{E}|X| = 2$. \end{lemma} \begin{proof} Let $Y$ be a subsequential limit of $Y_n$, and let $\alpha = D_\mu(0)$. By Lemma \ref{L:max-heights}, we have that $$ 1 = \mathbb{P}(m(Y) > 0) \leq \frac{\alpha}2, $$ so $\alpha \geq 2$. The equality above follows since $m(Y) \ge |Y(0)|$ and $Y(0)$ is uniformly distributed on $[-1, 1]$ (Theorem \ref{T:bounded-speed}). Now, for every height $a$ such that $$ a > \sqrt{1 - \left(\frac{2}{\alpha}\right)^2}, $$ Lemma \ref{L:max-heights} guarantees that $m(Y) \le a$ with positive probability. 
Therefore by Theorem \ref{T:particle-crossings} (iii), for any $\epsilon > 0$ there is a $\pi\sqrt{1-y^2}$-Lipschitz path $h_\epsilon \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with $J(h_\epsilon) = 1$ such that \begin{equation} \label{E:m-hep} m(h_\epsilon) \leq m_\epsilon := \sqrt{1 - \left(\frac{2}{\alpha} - \epsilon\right)^2}. \end{equation} Using that $D_\mu$ is minimized at $0$ (Lemma \ref{L:convex}), we have the bound \begin{equation} \label{E:flux-he} \begin{split} 1 = J(h_\epsilon) &\ge \frac{\alpha}2 \int_0^1\sqrt{1-h_\epsilon^2(t)}dt. \end{split} \end{equation} The above integrand is bounded below by $2/\alpha - \epsilon$, since $m(h_\epsilon) \le m_\epsilon$. Also, since $h_\epsilon(0) = -h_\epsilon(1)$ and $h_\epsilon$ is $\pi$-Lipschitz, the amount of time that $h_\epsilon$ spends in the interval $[-m_\epsilon/2, m_\epsilon/2]$ is at least $m_\epsilon/(2\pi)$. Therefore $$ \frac{2}{\alpha} \ge \int_0^1\sqrt{1-h_\epsilon^2(t)}dt \ge \left(\frac{2}\alpha - \epsilon\right)\left(1 - \frac{m_\epsilon}{2\pi}\right) + \frac{m_\epsilon}{2\pi}\sqrt{1 - \frac{m_\epsilon^2}{4}}. $$ Letting $\epsilon \to 0$ and cancelling common terms, we find that either $m_0 := \sqrt{1 - 4/\alpha^2}$ is $0$, or $2/\alpha \ge \sqrt{1 - m_0^2/4}$; in either case $\alpha \le 2$, and hence $\alpha = 2$. \end{proof} \begin{lemma} \label{L:crude-deviation-bd} There is a sequence $u_n \to 0$ such that $$ D_\mu(u_n) \leq \frac{2}{\cos(u_n/2)}. $$ \end{lemma} \begin{proof} For any subsequential limit $Y$ of $Y_n$ and any $\epsilon > 0$, Lemmas \ref{L:max-heights} and \ref{L:expect-2} imply that $$ \mathbb{P}(m(Y) \le \epsilon) \geq 1 - \sqrt{1-\epsilon^2} > 0. $$ Also, $Y(0)$ is uniformly distributed by Theorem \ref{T:bounded-speed}, so $\mathbb{P}(m(Y) = 0) = 0$. Therefore by Theorem \ref{T:particle-crossings} (iii), we can find a sequence of positive numbers $\alpha_n \to 0$ and a sequence of paths $h_n \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with $m(h_n) = \alpha_n$ and $J(h_n) = 1$. \medskip Let $s_n$ be the local speed of $h_n$. 
The total variation of each of the paths $h_n$ is at least $2\alpha_n$, and hence the average absolute local speed of each $h_n$ is at least $2 \arcsin(\alpha_n)$. The convexity and symmetry of $D_\mu$ (Lemma \ref{L:convex}) then give the following bound. \begin{align*} J(h_n) &\geq \frac{\sqrt{1- \alpha_n^2}}{2}\int_0^1 D_\mu(s_n(t)) dt \\ &\geq \frac{\sqrt {1-\alpha_n^2}}{2} D_\mu(2 \arcsin(\alpha_n)). \end{align*} Letting $u_n = 2\arcsin(\alpha_n)$, noting that $\cos(u_n/2) = \sqrt{1 - \alpha_n^2}$, and rearranging completes the proof. \end{proof} \subsection{Monotonicity and uniqueness for minimal flux paths} \label{SS:monotone} In this subsection, we prove that minimal flux paths $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with a particular maximum height $m(h)$ are unique up to sign, and that they satisfy a particular monotonicity relation. \medskip We start with two simple lemmas. The first lemma shows that minimal flux paths minimize flux on every subinterval of $[0, 1]$. \begin{lemma} \label{L:min-path} Let $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$ be a path with $J(h) = 1$. For any interval $[a, b] \subset [0, 1]$ and any path $g \in \text{\fontfamily{ppl}\selectfont Lip}$ with $g(a) = h(a)$ and $g(b) = h(b)$, we have that $$ J(h; [a, b]) \le J(g; [a, b]). $$ Moreover, if $f \in \text{\fontfamily{ppl}\selectfont Lip}_r$ is another path with $J(f) = 1$, $f(a) = h(a)$, and $f(b) = h(b)$, then we can form a new path $p \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with $J(p) = 1$ by letting \begin{equation*} p(t) = \begin{cases} &f(t) \qquad t \leq a, t \ge b \\ &h(t) \qquad t \in [a, b]. \end{cases} \end{equation*} \end{lemma} \begin{proof} If there is some $g$ with $J(h; [a, b]) > J(g; [a, b])$, then we can make a new path $p \in \text{\fontfamily{ppl}\selectfont Lip}_r$ which is equal to $g$ on $[a, b]$ and equal to $h$ on $[a, b]^c$. This path $p$ will have $J(p) < 1$, contradicting Theorem \ref{T:particle-crossings} (ii). 
The second part of the lemma is a consequence of the fact that \[ J(p) = J(p ; [a, b]) + J(p ; [a, b]^c). \qquad \qquad \qedhere \] \end{proof} The next lemma uses the bounds established in Section \ref{SS:basic-bounds} to eliminate plateaus in minimal flux paths. \begin{lemma} \label{L:no-plat} For any height $\alpha \in (0, 1)$ and any interval $[t_1, t_2] \subset [0, 1]$, we have that $$ \inf_{h \in \text{\fontfamily{ppl}\selectfont Lip}} \big\{J(h; [t_1, t_2]) : h(t_1) = \alpha, h(t_2) = \alpha \big\} < (t_2-t_1)\sqrt{1-\alpha^2}. $$ \end{lemma} \begin{proof} Without loss of generality, we may assume that $t_1 = 0$. Let $\{u_n\}_{n \in \mathbb{N}}$ be as in Lemma \ref{L:crude-deviation-bd}, and define a sequence of paths $h_n \in \text{\fontfamily{ppl}\selectfont Lip}$ by letting \begin{equation*} h_n(t) = \begin{cases} \;\sin (\arcsin(\alpha) + u_n t), \qquad &t \leq t_2/2 \\ \;h_n(t_2 - t), \qquad \qquad \quad \;\;\;\;\; &t \in [t_2/2, t_2] \\ \;\alpha, \qquad \qquad \qquad \quad \quad \; &t > t_2. \end{cases} \end{equation*} Lemma \ref{L:crude-deviation-bd} gives the following bound on the flux of $h_n$ on the interval $[0, t_2]$. \begin{align*} J(h_n; [0, t_2]) &\leq \frac{2}{\cos(u_n/2)} \int_0^{t_2/2} \cos(\arcsin(\alpha) + u_n t)dt \\ &= t_2 \sqrt{1-\alpha^2} - \frac{t_2^2 \alpha u_n}{4} + O(u_n^2). \end{align*} For small enough $u_n$, this calculation shows that $J(h_n; [0, t_2]) < t_2 \sqrt{1-\alpha^2}$, completing the proof. \end{proof} We can now prove that minimal flux paths $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$ are unimodal. \begin{prop} \label{P:monotone-traj} Let $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$ be such that $J(h) = 1$, and $m(h) \in (0, 1)$. Then $|h(1/2)| = m(h)$. Moreover, if $h(1/2) = m(h)$, then $h$ is strictly increasing on $[0, 1/2]$ and strictly decreasing on $[1/2, 1]$. If $-h(1/2) = m(h)$, then $h$ is strictly decreasing on $[0, 1/2]$ and strictly increasing on $[1/2, 1]$. \end{prop} \begin{proof} First consider the case where $h \ge 0$. 
Let $t_m$ be any time when $h(t_m) = m(h)$. Suppose that there exist times $s_1 < s_2 \in [0, t_m]$ such that $h(s_1) = h(s_2)$, and $h(t) \le h(s_1)$ on the interval $[s_1, s_2]$. Define a new path $$ r(t) = \begin{cases} h(t), \qquad &t \le s_1, t \ge t_m\\ h(t + (s_2 - s_1)), \qquad &t \in (s_1, t_m - (s_2 -s_1)]\\ m(h) \qquad &t \in [t_m -(s_2 - s_1), t_m]. \end{cases} $$ The path $r$ replaces the segment of $h$ in the interval $[s_1, s_2]$ with a plateau at height $m(h)$ at a later time. This operation cannot increase flux since $D_\mu$ is minimized at $0$ (Lemma \ref{L:convex}), so $r$ must be a minimal flux path in $\text{\fontfamily{ppl}\selectfont Lip}_r$. However $$ J(r; [t_m -(s_2 - s_1), t_m]) = (s_2 - s_1)\sqrt{1 - m^2(h)}, $$ which contradicts Lemmas \ref{L:min-path} and \ref{L:no-plat}. Therefore there is no interval $[s_1, s_2] \subset [0, t_m]$ where $h(s_1) = h(s_2)$ and $h(t) \le h(s_1)$ for $t \in [s_1, s_2]$, so $h$ must be strictly increasing on $[0, t_m]$. Similarly, $h$ is strictly decreasing on $[t_m, 1]$. The point $t_m$ is thus the unique time when $h$ achieves its maximum. \medskip We now show that $t_m = 1/2$. Without loss of generality, assume that $t_m \le 1/2$. Define a new path $g(t) = h(1 - t)$. The path $g \in \text{\fontfamily{ppl}\selectfont Lip}_0$ also satisfies $J(g) = 1$ and $m(g) = m$. We have that $g(1/2) = h(1/2)$, and so by Lemma \ref{L:min-path} we can create a path $p \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with $J(p) = 1$ and $m(p) = m$ by letting \begin{equation*} p(t) =\begin{cases} &h(t) \qquad t \leq 1/2\\ &g(t) \qquad t \geq 1/2. \end{cases} \end{equation*} The path $p(t)$ achieves its maximum height at both $t_m$ and $1- t_m$, so we must have that $t_m = 1 - t_m$, and hence $t_m = 1/2$. \medskip Now if $h$ is not a non-negative path, then we can create a non-negative path $p(t) = |h(t)|$. 
By the symmetry of $\mu$ (Theorem \ref{T:local-speeds}), the path $p$ must again have minimal flux, so by the above argument $p$ is strictly increasing on $[0, 1/2]$ and strictly decreasing on $[1/2, 1]$. Therefore either $h = p$ or $h = -p$, completing the proof. \end{proof} We can now use Proposition \ref{P:monotone-traj} to prove uniqueness of minimal flux paths with a given maximum height. \begin{prop} \label{P:unique-path} For every $m \in [0, 1]$, there is at most one $\pi\sqrt{1- y^2}$-Lipschitz path $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $J(h) = 1$, $m(h) = m$, and $h \ge 0$. If such a path $h$ exists, then the only other $\pi\sqrt{1- y^2}$-Lipschitz path $g \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $J(g) = 1$ and $m(g) = m$ is $g = -h$. \end{prop} \begin{proof} First observe that the only $\pi\sqrt{1- y^2}$-Lipschitz paths in $\text{\fontfamily{ppl}\selectfont Lip}_0$ with $m(h) = 1$ are $h = \pm \sin(\pi t)$. Similarly, the only path $h \in \text{\fontfamily{ppl}\selectfont Lip}$ with $m(h) = 0$ is $h = 0$. This proves the proposition for $m \in \{0, 1\}$. Now we assume $m \in (0, 1)$. \medskip Suppose that $h, g \in \text{\fontfamily{ppl}\selectfont Lip}_0$ are two distinct non-negative paths with $J(h) = J(g) = 1$ and $m(h) = m(g) = m$. By Proposition \ref{P:monotone-traj}, $h(1/2) = g(1/2) = m$. Without loss of generality, there exists a time $t_2 \in [0, 1/2)$ such that $g(t_2) < h(t_2)$. Let $t_1 < t_2$ be such that $h(t_1) = g(t_2)$. Define the shifted path \begin{equation*} g^*(t) = \begin{cases} \;\;g(t + (t_2 - t_1)), \qquad &t \leq 1 - (t_2 - t_1) \\ \;\;- g(t + (t_2 - t_1) - 1), \qquad& t \geq 1 - (t_2 - t_1). \end{cases} \end{equation*} By Proposition \ref{P:monotone-traj}, we have that $$ m = g^*(1/2 - (t_2 - t_1)) > h(1/2 - (t_2 - t_1)) \quad \;\text{and}\; \quad g^*(1/2) < h(1/2) = m. $$ Therefore there is some time $t_3 \in (1/2 - (t_2 - t_1), 1/2)$ such that $g^*(t_3) = h(t_3)$. 
Moreover, $g^* \in \text{\fontfamily{ppl}\selectfont Lip}_r$, $J(g^*) = 1$, and $g^*(t_1) = h(t_1)$. Therefore by Lemma \ref{L:min-path} we can create a path $p \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $J(p) = 1$ by letting \begin{equation*} p(t) = \begin{cases} \;\;h(t), \qquad& t \leq t_1 \;\text{or}\; t \geq t_3 \\ \;\;g^*(t), \qquad& t \in (t_1, t_3). \end{cases} \end{equation*} This new path $p$ is a non-negative minimal flux path in $\text{\fontfamily{ppl}\selectfont Lip}_0$ which does not uniquely achieve its maximum value at $1/2$, contradicting Proposition \ref{P:monotone-traj}. \medskip Now, if $g \in \text{\fontfamily{ppl}\selectfont Lip}_0$ is another minimal flux path with $J(g) = 1$ and $m(g) = m$, then $|g|$ is a non-negative minimal flux path in $\text{\fontfamily{ppl}\selectfont Lip}_0$, so $|g| = h$. Since $h(t) > 0$ for $t \in (0, 1)$ by Proposition \ref{P:monotone-traj}, either $g = h$ or $g = -h$. \end{proof} Proposition \ref{P:unique-path} gives us the following easy corollary. \begin{corollary} \label{C:sym} Any $\pi\sqrt{1-h^2}$-Lipschitz path $h \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $J(h) = 1$ has $h(t) = h(1-t)$ for all $t \in [0, 1]$. \end{corollary} \begin{proof} Without loss of generality, assume that $h \ge 0$. Both $h(t)$ and $h(1-t)$ are non-negative minimal flux paths satisfying all the conditions of Proposition \ref{P:unique-path} and with the same maximum height, so by that proposition, $h(t) = h(1-t)$ for all $t$. \end{proof} \begin{remark} \label{R:shift} Note that the uniqueness proofs in this section automatically give the uniqueness of the paths $h_{a, k}$ in Theorem \ref{T:unique-2}. The fact that uniqueness immediately carries over to shifted paths follows from the strict monotonicity in Proposition \ref{P:monotone-traj}. 
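Relatedly, the flux identity $J(h) = J(h_0)$ in Lemma \ref{L:path-shift} can be checked directly (a sketch; it uses only a change of variables and the symmetry of $\mu$ from Theorem \ref{T:local-speeds}, which makes $D_\mu$ even):

```latex
\[
J(h_0) = J(h_0; [0, 1 - t_0]) + J(h_0; [1 - t_0, 1])
       = J(h; [t_0, 1]) + J(-h; [0, t_0])
       = J(h; [t_0, 1]) + J(h; [0, t_0]) = J(h).
\]
% The second equality is the substitution t \mapsto t + t_0 (resp.
% t \mapsto t + t_0 - 1); the third holds because negating a path negates
% its local speed, and D_\mu(-s) = D_\mu(s).
```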
\end{remark} \subsection{Existence of minimal flux paths} \label{SS:unique} We now show that for every $m \in [0, 1]$ there exists a minimal flux path in $\text{\fontfamily{ppl}\selectfont Lip}_0$ with maximum height $m$. This is a consequence of the following proposition, which shows that the inequality in Theorem \ref{T:particle-crossings} is an equality for minimal flux paths. Recall the definition of the upcrossing probability $P^+_Y(h; [a, b])$ and the downcrossing probability $P^-_Y(h; [a, b])$ from Section \ref{S:flux-intro}. \begin{prop} \label{P:flux-upcross} Let $h \in \text{\fontfamily{ppl}\selectfont Lip}_r$ be a path with $J(h) = 1$. Let $Y$ be a subsequential limit of $Y_n$. Then for any interval $[a, b]$, we have that $$ P^+_Y(h; [a, b]) = J^+(h; [a, b]) \quad \;\text{and}\; \quad P^-_Y(h; [a, b]) = J^-(h; [a, b]). $$ \end{prop} \begin{proof} We will only prove the first equality, as the second one follows by the symmetry of $Y$ and $\mu$ (Theorem \ref{T:local-speeds}, Proposition \ref{P:Y-symmetries}). Since $Y(0) = -Y(1)$ almost surely by Theorem \ref{T:bounded-speed}, we have that \begin{equation*} 1 = P^+_Y(h; [0, 1]) + P^-_Y(h; [0, 1])= J^+(h) +J^-(h). \end{equation*} Therefore by Theorem \ref{T:particle-crossings} (i), we have that $P^+_Y(h; [0, 1]) = J^+(h)$. Now we have that \begin{equation} \label{E:big-J+} \begin{split} P^+_Y(h; [0, 1])& \le P^+_Y(h; [0, a]) + P^+_Y(h; [a, b]) + P^+_Y(h; [b, 1])\\ &\le J^+(h; [a, b]^c) + P^+_Y(h; [a, b]) \\ &\le J^+(h; [a, b]^c) + J^+(h; [a, b]) = J^+(h). \end{split} \end{equation} To see the first inequality above, note that the union of the three events on the right hand side contains the event on the left hand side minus the set $\{Y(a) = h(a) \;\text{or}\; Y(b) = h(b)\}$. This set has probability $0$ since $Y(t)$ is uniformly distributed for all $t$ (Theorem \ref{T:bounded-speed}). The second and third inequalities follow from Theorem \ref{T:particle-crossings} (i). 
\medskip Since $P^+_Y(h; [0, 1]) = J^+(h)$, all the inequalities in \eqref{E:big-J+} must in fact be equalities, proving the lemma. \end{proof} To apply the above proposition in order to prove the existence of minimal flux paths at every height, we need three lemmas. The first lemma shows that any two minimal flux paths in $\text{\fontfamily{ppl}\selectfont Lip}_r$ cannot cross each other more than once. This lemma also proves part (iii) of Theorem \ref{T:path-set}. \begin{lemma} \label{L:no-cross} Suppose that $h, g \in \text{\fontfamily{ppl}\selectfont Lip}_r$ are paths with $J(h) = J(g) = 1$. Then there cannot exist times $t_0 < t_1 < t_2 \in [0, 1]$ such that $h(t_0) < g(t_0)$, $h(t_1) > g(t_1)$, and $h(t_2) = g(t_2)$. In particular, if $g(0) = h(0)$, then either $h \le g$ or $g \le h$. \end{lemma} \begin{proof} Suppose there exist times $t_0 < t_1 < t_2 \in [0, 1]$ such that $h(t_0) < g(t_0)$, $h(t_1) > g(t_1)$, and $h(t_2) = g(t_2)$. First, by possibly shifting the paths as in Lemma \ref{L:path-shift}, we may assume that $t_2 = 1$. Hence $g(1) = h(1) = -g(0) = -h(0)$. Now let $s \in (t_0, t_1)$ be such that $h(s) = g(s)$. Letting $a = g(0)$, Remark \ref{R:shift} implies that $h = h_{a, k_1}$ and $g = h_{a, k_2}$ for some $k_1, k_2 \in [-1, -|a|] \cup [|a|, 1]$. Here the paths $h_{a, k}$ are as in Theorem \ref{T:unique-2}. Without loss of generality, assume that $k_2 > k_1$ and that $k_2 \ge |a|$; the other cases follow by symmetric arguments. \medskip Define $p = \max(h, g)$. By Lemma \ref{L:min-path}, $J(p) = 1$. Moreover, $p(0) = -p(1) = a$, and since $k_2 > k_1$ and $k_2 \ge |a|$, we have that $m(p) = S(p) = k_2$. This contradicts the uniqueness of the path $h_{a, k_2}$ established in Remark \ref{R:shift}. \end{proof} The next lemma establishes a strong concavity property for minimal flux paths. \begin{lemma} \label{L:concave-paths} For every $k \in [0, 1]$, the path $\arcsin(h_k)$ is concave. 
In other words, the local speed $s_k$ of $h_k$ is a non-increasing function of $t$. \end{lemma} \begin{proof} Write $h = h_k$. By the symmetry of $h_k$ (Corollary \ref{C:sym}), it suffices to prove that $\arcsin(h_k)$ is concave on $[0, 1/2]$. Suppose that there exist times $t_1 < t_2 < t_3 \le 1/2$ such that $h(t_2) < p(t_2)$, where $p$ is the path of constant local speed $\close{s} \in [0, \pi]$ with $p(t_1) = h(t_1)$ and $p(t_3) = h(t_3)$. Let $$ t_4 = \sup \{t \in [t_1, t_2] : p(t) \le h(t)\} \qquad \;\text{and}\; \qquad t_5 = \inf \{t \in [t_2, t_3] : p(t) \le h(t)\}. $$ Then $h(t_4) = p(t_4)$ and $h(t_5) = p(t_5)$, and for every $t \in (t_4, t_5)$, we have that $h(t) < p(t)$. By Lemma \ref{L:convex}, we can find a line $L(s) = as + b$ with $b \ge 0$ such that $L(s) \leq D_\mu(s)$ for all $s$, and such that $L(\close{s}) = D_\mu(\close{s}).$ Therefore \begin{align*} J(h; [t_4, t_5]) & \geq \frac{1}{2}\int_{t_4}^{t_5} (as(t) + b) \sqrt{1-h^2(t)} dt \\ &\ge \frac{1}{2}\int_{t_4}^{t_5} ah'(t) dt + \frac{1}{2}\int_{t_4}^{t_5} b \sqrt{1-p^2(t)} dt. \end{align*} The inequality in the second line follows since $p(t) > h(t) \ge 0$ for all $t \in (t_4, t_5)$. By the fundamental theorem of calculus, since $p(t_4) = h(t_4)$ and $p(t_5) = h(t_5)$, we have \begin{align*} \frac{1}{2}\int_{t_4}^{t_5} ah'(t) dt + \frac{1}{2}\int_{t_4}^{t_5} b \sqrt{1-p^2(t)} dt &= \frac{1}{2}\int_{t_4}^{t_5} ap'(t) dt + \frac{1}{2}\int_{t_4}^{t_5} b \sqrt{1-p^2(t)} dt \\ & = J(p; [t_4, t_5]). \end{align*} We can then create a non-negative $\pi\sqrt{1 - h^2}$-Lipschitz path $g \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $m(g) = k$ by letting $g(t) = p(t)$ for $t \in [t_4, t_5]$ and $g(t) = h(t)$ otherwise. Then $J(g) \le 1$, and hence $J(g) = 1$ by Theorem \ref{T:particle-crossings} (ii), contradicting the uniqueness of $h_k$ (Proposition \ref{P:unique-path}). \end{proof} The next lemma is a consequence of the symmetry and concavity of $Y$. \begin{lemma} \label{L:Y-sym-type} Let $Y$ be a subsequential limit of $Y_n$, let $t \in [0, 1)$, and let $[a, b] \subset [0, 1]$. 
Then $Y'(t)$ exists almost surely and \begin{enumerate}[label=(\roman*)] \item $\mathbb{P}(Y'(t) = 0) = 0.$ \item $\mathbb{P}(Y'(t) > 0 , \; Y(t) \in [a, b]) = \mathbb{P}(Y'(t) < 0 , \; Y(t) \in [a, b]) = \mathbb{P}(Y(t) \in [a, b])/2.$ \end{enumerate} \end{lemma} \begin{proof} By time-stationarity of $Y$ (Proposition \ref{P:Y-symmetries} (i)) it suffices to consider $t = 0$. This also implies that $$ Y'(0) \stackrel{d}{=} Y'(U), $$ for an independent uniform random variable $U$ on $[0, 1]$. Since $Y$ is Lipschitz and hence almost everywhere differentiable, this proves that $Y'(0)$ exists almost surely. By the concavity and strict monotonicity of minimal flux paths (Lemma \ref{L:concave-paths} and Proposition \ref{P:monotone-traj}), we have that \begin{equation} \label{E:prob-Y-0} \mathbb{P} (Y'(U) = 0) = \mathbb{P}( m(Y) = |Y(U)|) = 0. \end{equation} Define $Z(t) = -Y(1- t)$. Since $Y(0) = -Y(1)$ (Theorem \ref{T:bounded-speed}), we have that $ Z(0) = Y(0) $ and $Z'(0) = -Y'(0)$. By Proposition \ref{P:Y-symmetries}, $Y \stackrel{d}{=} Z$, so $$ \mathbb{P}(Y'(0) > 0 , Y(0) \in [a, b]) = \mathbb{P}(Y'(0) < 0 , Y(0) \in [a, b]). $$ Putting this together with \eqref{E:prob-Y-0} completes the proof. \end{proof} We are now ready to prove the existence of minimal flux paths at every height. \begin{prop} \label{P:exist-m} For every $m \in [0, 1]$, there exists a $\pi\sqrt{1- h^2}$-Lipschitz path $h_m \in \text{\fontfamily{ppl}\selectfont Lip}_0$ such that $m(h_m) = m$, $J(h_m) = 1$, and $h_m \ge 0$. \end{prop} \begin{proof} First observe that we can set $h_0(t) = 0$ and $h_1(t) = \sin (\pi t)$. Both of these paths satisfy all of the above properties. For this, we use that $D_\mu(\pm \pi) = \pi$ and $D_\mu(0) = 2$ (Lemma \ref{L:convex} and Lemma \ref{L:expect-2}). \medskip Let $A \subset [0, 1]$ be the set of all $m$ such that there is a $\pi\sqrt{1- h^2}$-Lipschitz path $h_m \in \text{\fontfamily{ppl}\selectfont Lip}_0$ with $h_m \ge 0$, $J(h_m) = 1$ and $m(h_m) = m$. 
Let $m \in \close{A}$, and let $m_k \in A$ be a sequence converging to $m$. \medskip By the Arzel\`a-Ascoli Theorem, there exists a $\pi\sqrt{1-h^2}$-Lipschitz subsequential limit $h_m \in \text{\fontfamily{ppl}\selectfont Lip}_0$ of $h_{m_k}$ with $m(h_m) = m$ and $h_m \ge 0$. By the lower semicontinuity of $J$ (Proposition \ref{P:lsc-E}), $J(h_m) \le 1$, and hence $J(h_m) = 1$ by Theorem \ref{T:particle-crossings} (ii). Therefore $A$ is closed. \medskip Now suppose that $A \ne [0, 1]$. Then there exists an interval $(\ell, m) \subset [0, 1]$ such that $m \in A$ and $(\ell, m) \cap A = \emptyset$. Let $k = (\ell + m)/2$, and choose $t_1 < t_2 \in [0, 1/2]$ so that $ h_m(t_1) = \ell $ and $h_m(t_2) = k.$ We will show that \begin{equation} \label{E:P=P} P^+_Y(h_m ; [t_1, t_2]) + P^-_Y(h_m ; [t_1, t_2]) = P^+_Y\left(k ; [t_1, t_2]\right) + P^-_Y\left(k; [t_1, t_2]\right). \end{equation} Here we use $k$ as shorthand for the line $h(t) = k$. \medskip Let $g \in \text{\fontfamily{ppl}\selectfont Lip}_r$ be any minimal flux $\pi\sqrt{1- y^2}$-Lipschitz path. By the monotonicity established in Proposition \ref{P:monotone-traj}, $g$ upcrosses $k$ at most once in the interval $[0, 1]$, and downcrosses $k$ at most once. Moreover, by Lemma \ref{L:no-cross}, $g$ crosses $h_m$ at most once. \medskip If $g(t_1) \notin [\ell, k]$ and $g(t_2) \ne k$, then the total number of times that $g$ crosses $h_m$ and the line $k$ is even. Hence, either $g$ crosses $h_m$ the same number of times as it crosses $k$, or $g$ first crosses $k$ going down, and then crosses $k$ going up. This second possibility cannot happen by the monotonicity of $g$ (Proposition \ref{P:monotone-traj}). \medskip If $g(t_1) \in (\ell, k)$ and $g'(t_1) < 0$, then again by the monotonicity properties of $g$, $g$ downcrosses $h_m$, does not upcross $h_m$, and neither upcrosses nor downcrosses $k$. \medskip If $g(t_1) \in (\ell, k)$, $g'(t_1) > 0$, and $g(t_2) \ne k$, then $m(g) \ge m$ since $(\ell, m) \cap A = \emptyset$. 
By the monotonicity of $g$ (Proposition \ref{P:monotone-traj}), $g(t^*) = m(g)$ for some $t^* > t_1$. If $g(t_2) < h_m(t_2)$, then since both $g$ and $h_m$ are minimal flux paths, the restrictions on crossings imposed by Lemma \ref{L:no-cross} imply that $t^* < t_2$. In this case, $g$ both upcrosses and downcrosses $k$ in the interval $[t_1, t_2]$, and downcrosses but does not upcross $h_m$. \medskip On the other hand, if $g(t_2) > h_m(t_2)$, then $g$ upcrosses $k$ in the interval $[t_1, t_2]$ but does not downcross $k$, and does not cross $h_m$ at all in this interval. \medskip Now let $Y$ be any subsequential limit of $Y_n$. By Theorem \ref{T:bounded-speed} and Lemma \ref{L:Y-sym-type} (i), we have that $$ \mathbb{P} \Big(Y(t_1) \notin \{\ell, k\}, Y(t_2) \ne k, Y'(t_1) \text{ exists and is not equal to $0$}\Big) = 1. $$ Therefore by the above analysis of minimal flux $\pi\sqrt{1- y^2}$-Lipschitz paths $g \in \text{\fontfamily{ppl}\selectfont Lip}_r$, we have that \begin{align*} P^+_Y(h_m ; [t_1, t_2]) - P^+_Y\left(k ; [t_1, t_2]\right) &= - \mathbb{P}(Y'(t_1) > 0, Y(t_1) \in [\ell, k]), \quad \;\text{and}\; \\ P^-_Y(h_m ; [t_1, t_2]) - P^-_Y\left(k ; [t_1, t_2]\right) &= \mathbb{P}(Y'(t_1) < 0, Y(t_1) \in [\ell, k]). \end{align*} Equation \eqref{E:P=P} then follows from the symmetry established in Lemma \ref{L:Y-sym-type} (ii). Now by Proposition \ref{P:flux-upcross}, the left hand side of \eqref{E:P=P} is equal to $J(h_m ; [t_1, t_2])$, and by Theorem \ref{T:particle-crossings} (i), the right hand side of \eqref{E:P=P} is bounded above by $J\left(k; [t_1, t_2] \right)$. However, since $D_\mu$ is minimized at $0$ by Lemma \ref{L:convex}, we can also easily calculate that \begin{align*} J(h_m ; [t_1, t_2]) &\ge \frac{D_\mu(0)}{2} \int_{t_1}^{t_2} \sqrt{1 - h^2_m(t)}dt \\ &> \frac{D_\mu(0)}2(t_2 - t_1)\sqrt{1 - {k^2}} = J\left(k; [t_1, t_2] \right). \end{align*} This is a contradiction, so $A$ must be the whole interval $[0,1]$. 
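As an aside (not part of the argument), the base cases $h_0(t) = 0$ and $h_1(t) = \sin(\pi t)$ used at the start of this proof can be checked numerically, assuming only the values $D_\mu(0) = 2$ and $D_\mu(\pm \pi) = \pi$ stated above; the sketch below approximates $J(h) = \frac{1}{2}\int_0^1 D_\mu(s(t))\sqrt{1 - h^2(t)}\,dt$ by a midpoint Riemann sum:

```python
# Numerical sanity check that J(h) = 1 for the two extremal paths.
# Uses only the values stated in the text: D_mu(0) = 2 and D_mu(pi) = pi.
import math

def flux_constant_speed(d_mu_value, heights):
    # Midpoint Riemann sum for (1/2) * d_mu_value * integral of
    # sqrt(1 - h^2) dt; valid when the path has constant |local speed|,
    # so that D_mu(s(t)) is a single value along the path.
    n = len(heights)
    return 0.5 * d_mu_value * sum(math.sqrt(max(0.0, 1 - h * h)) for h in heights) / n

n = 200_000
ts = [(i + 0.5) / n for i in range(n)]

# h_0(t) = 0: local speed 0 everywhere, so D_mu(s(t)) = D_mu(0) = 2.
flux_h0 = flux_constant_speed(2.0, [0.0] * n)

# h_1(t) = sin(pi t): local speed +/- pi, so D_mu(s(t)) = D_mu(pi) = pi.
flux_h1 = flux_constant_speed(math.pi, [math.sin(math.pi * t) for t in ts])
```

Both quantities equal $1$ up to discretization error, matching $J(h_0) = J(h_1) = 1$.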
\end{proof} \section{The derivative distribution} \label{S:path-speed} Let $ \mathbb{P}_{a}(Y \in \cdot) $ be the regular conditional distribution of $Y$ given that $Y(0) = a$. In this section, we use the structure of minimal flux paths to find $\mathbb{P}_a(Y'(0) \in \cdot)$. \begin{prop} \label{P:local-global-deriv} Let $Y$ be any subsequential limit of $Y_n$. With probability one, $Y'(0)$ exists. Moreover, \begin{equation} \label{E:X-Y'} \mathbb{P}_{a} \left(\frac{Y'(0)}{\sqrt{1 - a^2}} \in \cdot\right) = \mu, \end{equation} for Lebesgue-a.e. $a \in (-1, 1)$. \end{prop} As an immediate corollary of Proposition \ref{P:local-global-deriv}, we will be able to find the distribution of the maximum height function $m(Y)$ for any subsequential limit $Y$ of $Y_n$. This will allow us to show that the sequence $\{Y_n\}$ has a unique limit point. \medskip Moreover, Proposition \ref{P:local-global-deriv} will be used later to prove an integral transform formula for $\mu$, which will allow us to determine $\mu$, and then $Y$. This is done in Section \ref{S:integral-formula}. \medskip To prove Proposition \ref{P:local-global-deriv}, we first show that minimal flux paths fill space. For this lemma, we use the notation of Theorem \ref{T:unique-2}. \begin{lemma} \label{L:closed-int} Let $a \in [0, 1]$, and let $$ M_a = \left\{h_{a, k} : k \in [-1, -a] \cup [a, 1] \right\}. $$ Then for any $t \in [0, 1]$, we have that $$ \{h(t) : h \in M_a\} = [\sin(-\pi t + \arcsin(a)), \sin(\pi t + \arcsin(a))]. $$ \end{lemma} \begin{proof} Let $K_{a, t} = \{h(t) : h \in M_a\}.$ $K_{a, t}$ contains the endpoints of the above interval, since $M_a$ contains the two extremal functions $h_{a, -1}(t) = \sin(-\pi t + \arcsin(a))$ and $h_{a, 1}(t) = \sin(\pi t + \arcsin(a))$. Moreover, the ordering on minimal flux paths (Theorem \ref{T:unique-2} (iv)) implies that $$ K_{a, t} \subset [\sin(-\pi t + \arcsin(a)), \sin(\pi t + \arcsin(a))]. 
$$ Now suppose that $c \in (\sin(-\pi t + \arcsin(a)), \sin(\pi t + \arcsin(a)))$. Let $$ A = \{h_{a, k} \in M_a : h_{a, k}(t) \le c\} \qquad \;\text{and}\; \qquad B = \{h_{a, k} \in M_a : h_{a, k}(t) > c\}. $$ Both of these sets are non-empty. By Theorem \ref{T:unique-2} (iv), $M_a$ is a linearly ordered subset of the set of $\pi \sqrt{1-h^2}$-Lipschitz paths in $\text{\fontfamily{ppl}\selectfont Lip}$ with the usual partial order on functions. Therefore the sets $A$ and $B$ have a supremum and infimum in $\text{\fontfamily{ppl}\selectfont Lip}$ by the Arzel\`a-Ascoli Theorem. By the lower semicontinuity of flux (Proposition \ref{P:lsc-E}), $\sup A$ and $\inf B$ lie in the set $M_a$. This implies that \begin{align*} \sup A &= h_{a, k_A}, \quad \text{ where } \quad k_A = \sup \{ k : h_{a, k} \in A\}, \quad \;\text{and}\; \\ \inf B &= h_{a, k_B}, \quad \text{ where } \quad k_B = \inf \{ k : h_{a, k} \in B\}. \end{align*} Since $A \cup B = M_a$, we either have that $k_A = k_B$, or else $k_A = -a$ and $k_B = a$. Since $h_{a, a} = h_{-a, a}$ by Theorem \ref{T:unique-2} (v), this implies that $h_{a, k_A}(t) = h_{a, k_B}(t) = c$, so $c \in K_{a, t}$. \end{proof} \begin{proof}[Proof of Proposition \ref{P:local-global-deriv}] First note that $Y'(0)$ exists almost surely by Lemma \ref{L:Y-sym-type}. We now show that for every $q \in (-\pi, \pi)$ and $b < c \in (-1, 1)$, \begin{equation} \label{E:deriv-integrated} \int_{b}^c \mathbb{E}_{a} (Y'(0) - \sqrt{1 - a^2}q)^+ da = D_\mu^+(q) \int_b^c \sqrt{1-a^2}da. \end{equation} Here $\mathbb{E}_a$ is the expectation taken with respect to $\mathbb{P}_a$. Lemma \ref{L:closed-int} guarantees that there exists a time $t_0>0$ such that for any $r \in [0, t_0]$ and $a \in [b, c]$, there is a minimal flux $\pi\sqrt{1-g^2}$-Lipschitz path $g \in \text{\fontfamily{ppl}\selectfont Lip}_r$ with $g(0) = a$ and $g(r) = \sin(\arcsin(a) + qr)$. Call this path $g_{a, r}$. 
Letting $$ s_{a, r}(t) = \frac{g'_{a, r}(t)}{\sqrt{1 - g^2_{a, r}(t)}} $$ be the local speed of $g_{a, r}$, we have that \begin{align} \nonumber J^+(g_{a, r} ; [0, r]) &\ge \frac{1}2 \min_{t \in [0, r]} \sqrt{1 - g^2_{a, r}(t)} \int_0^r D^+_\mu(s_{a, r}(t)) dt \\ \label{E:J-lower} &\ge \frac{1}2 \min_{t \in [0, r]} {\sqrt{1 - g^2_{a, r}(t)}} \, r D^+_\mu(q). \end{align} Here the second inequality follows by Jensen's inequality, using that $D^+_\mu$ is convex (Lemma \ref{L:convex}) and that the average local speed of $g_{a, r}$ is $q$. Now letting $g_a(t) = \sin(\arcsin(a) + qt)$, Lemma \ref{L:min-path} implies that \begin{equation} \label{E:g-J-h} J^+(g_{a, r} ; [0,r]) \le J^+(g_a ; [0, r]) \le \frac{1}2 \max_{t \in [0, r]} {\sqrt{1 - g^2_{a}(t)}} r D^+_\mu(q). \end{equation} Combining this inequality with \eqref{E:J-lower} implies that $$ \lim_{r \to 0} \frac{J^+(g_{a, r} ; [0, r])}{r} = \frac{D_\mu^+(q)}2 \sqrt{1 - a^2}. $$ Moreover, \eqref{E:g-J-h} implies that $J^+(g_{a, r} ; [0, r]) \le rD^+_\mu(q)$ for all $a \in [b, c], r \in [0, t_0]$. Therefore by the bounded convergence theorem, we have that \begin{equation} \label{E:want-1} \lim_{r \to 0} \int_{b}^c \frac{J^+(g_{a, r} ; [0, r])}r da = \frac{D^+_\mu(q)}2\int_b^c \sqrt{1- a^2}da. \end{equation} Now recall that $P^+_Y(g_{a, r} ; [0, r])$ is the probability that $Y$ upcrosses $g_{a, r}$ in the interval $[0, r]$. Since minimal flux paths cross at most once (Lemma \ref{L:no-cross}), $$ \mathbb{P}(Y(0) < a, Y(r) > g_{a, r}(r))= P^+_Y(g_{a, r} ; [0, r]). $$ Therefore by Proposition \ref{P:flux-upcross}, we can write \begin{align} \nonumber \int_{b}^c \frac{J^+(g_{a, r} ; [0, r])}r da &= \int_{b}^c \frac{\mathbb{P}(Y(0) < a, Y(r) > g_{a, r}(r))}r da. \end{align} Then defining $P_x(a, r) = \mathbb{P}_x(Y(r) > g_{a, r}(r))/r$, the right hand side above is equal to \begin{align} \label{E:J-+-da} \int_{b}^c \frac{1}2 \int_{-1}^a P_x(a, r)dxda = \frac{1}2 \int_{-1}^c \int_{x \vee b}^c P_x(a, r) dadx. 
\end{align} Now define $S_{x, y} = \sin(\arcsin(x) + (\pi - q)y)$. Since $Y$ is $\pi\sqrt{1-h^2}$-Lipschitz almost surely (Theorem \ref{T:bounded-speed}), for almost every $x \in [-1, 1]$, we have that $$ \mathbb{P}_x(Y(r) > g_{a, r}(r)) = 0 $$ whenever $a > S_{x, r}.$ Therefore we can rewrite the right hand side of \eqref{E:J-+-da} as \begin{equation} \label{E:want-2} \begin{split} \frac{1}2 \left[\int_b^{c} \int_{x}^{S_{x, r}} P_x(a, r) da dx - \int_{S_{c, -r}}^c \int_c^{S_{x, r}} P_x(a, r) da dx + \int_{S_{b, -r}}^b \int_b^{S_{x, r}} P_x(a, r) da dx\right]. \end{split} \end{equation} In the second and third terms above, the integrand is of size $O(r^{-1})$, whereas the region of integration is of size $O(r^{2})$. Therefore these terms go to zero as $r \to 0$. Now letting $$ f(a, x, s, r) = \frac{\sin(\arcsin(a) + sr) - x}r, $$ and making the substitution $u = f(a, x, q, r)$, we can write the first term of \eqref{E:want-2} as \begin{equation} \label{E:b-c} \begin{split} \int_b^c \int_{f(x, x, q, r)}^{f(x, x, \pi, r)} \mathbb{P}_x\left(\frac{Y(r) - Y(0)}r > u \right) \frac{\cos(\arcsin(ru + x) - qr)}{\sqrt{1 - (ru + x)^2}}du dx. \end{split} \end{equation} Moreover, for every $x \in [-1, 1]$ such that $Y'(0)$ exists $\mathbb{P}_x$-almost surely, and for every $u \in [q\sqrt{1 - x^2}, \pi\sqrt{1 - x^2}]$ that is not an atom of the distribution $\mathbb{P}_x(Y'(0) \in \cdot)$, we have that $$ \mathbb{P}_x\left(\frac{Y(r) - Y(0)}r > u \right) \xrightarrow[r \to 0]{} \mathbb{P}_x(Y'(0) > u). $$ Therefore by the dominated convergence theorem, \eqref{E:b-c} converges to $$ \int_b^c \int_{q\sqrt{1 - x^2}}^{\pi\sqrt{1 - x^2}} \mathbb{P}_x(Y'(0) > u) du dx = \int_{b}^c \mathbb{E}_{x} (Y'(0) - \sqrt{1 - x^2}q)^+ dx \qquad \;\text{as}\; \quad r \to 0. $$ The equality above again uses that $|Y'(0)| \le \pi\sqrt{1- Y^2(0)}$ almost surely, since $Y$ is $\pi\sqrt{1- h^2}$-Lipschitz. 
Combining this with \eqref{E:want-1}, \eqref{E:J-+-da}, and \eqref{E:want-2} proves \eqref{E:deriv-integrated}. Now define \[ d(Y) = \frac{Y'(0)}{\sqrt{1 - Y^2(0)}}. \] By \eqref{E:deriv-integrated}, for almost every $a \in (-1, 1)$, for every $q \in \mathbb{Q} \cap (-\pi, \pi)$, we have that $$ \sqrt{1 - a^2}\mathbb{E}_{a} \left(d(Y) - q\right)^+ = \mathbb{E}_{a} \left(Y'(0) - \sqrt{1 - a^2} q\right)^+ = \sqrt{1- a^2}D_\mu^+(q). $$ Therefore by continuity in $s$ of the functions $\mathbb{E}_{a} \left(d (Y) - s\right)^+$ and $D_\mu^+(s)$, we have that \begin{equation} \label{E:daY} \mathbb{E}_{a} \left(d (Y) - s\right)^+ = D_\mu^+(s) \qquad \text{ for all } s \in (-\pi, \pi). \end{equation} Now, $D_\mu(s)$ is Lipschitz and hence differentiable almost everywhere (Lemma \ref{L:convex}), so we can differentiate both sides of the above equation to get that for almost every $s \in (-\pi, \pi)$, $$ \mathbb{P}_a(d(Y) > s) = \mu(s, \infty). $$ Finally, for almost every $a \in (-1, 1)$, we have that $$ \mathbb{P}_a(d(Y) \in [-\pi, \pi]) = \mu[-\pi, \pi] = 1 $$ since $Y$ is almost surely $\pi\sqrt{1 - h^2}$-Lipschitz (Theorem \ref{T:bounded-speed}). Therefore $\mathbb{P}_a(d(Y) \in \cdot) = \mu$. \end{proof} \subsection{The max-height distribution and the uniqueness of $Y$} As an application of Proposition \ref{P:local-global-deriv}, we can find $\mathbb{P}(m(Y) > a)$ for all $a \in [0, 1]$. This allows us to deduce the uniqueness of $Y$. \begin{theorem} \label{T:max-height} Let $Y$ be any subsequential limit of $Y_n$. For all $a \in [0, 1]$, we have that $ \mathbb{P}(m(Y) > a) = \sqrt{1 - a^2}$. That is, $m(Y) \stackrel{d}{=} \sqrt{1- U^2}$ for a uniform random variable $U$ on $[0, 1]$. \end{theorem} \begin{proof} Let $V(Y)$ be the total variation of $Y$ on the interval $[0, 1]$. 
We have that \begin{align*} \mathbb{E} V(Y) &= \mathbb{E} \int_0^1 |Y'(t)|dt = \mathbb{E} \left[\frac{|Y'(0)|}{\sqrt{1-Y^2(0)}} \sqrt{1 - Y^2(0)}\right] = 2 \mathbb{E} \sqrt{1 - U^2}, \end{align*} where $U$ is a uniform random variable on $[0, 1]$. Here the second equality follows from time stationarity of $Y$ (Proposition \ref{P:Y-symmetries} (i)). The third equality follows by Proposition \ref{P:local-global-deriv}, the fact that the first moment of $\mu$ is $2$ (Lemma \ref{L:expect-2}), and the fact that $Y(0)$ is uniformly distributed (Theorem \ref{T:bounded-speed}). \medskip Moreover, the characterization of minimal flux paths (Theorem \ref{T:path-set}) implies that $ V(Y) = 2 m(Y), $ so $\mathbb{E} m(Y) = \mathbb{E} \sqrt{1 - U^2}$. Finally, the bound in Lemma \ref{L:max-heights} implies that $$ \mathbb{P}(m(Y) > a) \le \frac{D_\mu(0)}2 \sqrt{1 - a^2} = \sqrt{1- a^2} = \mathbb{P}(\sqrt{1 - U^2} > a), $$ so the random variable $m(Y)$ is stochastically dominated by $\sqrt{1 - U^2}$. Since the two random variables also have equal means, this domination forces $m(Y) \stackrel{d}{=} \sqrt{1 - U^2}$. \end{proof} Now we can prove the existence of a unique limit $Y$ of $Y_n$. First extend the paths $h_m$ defined in Theorem \ref{T:path-set} to paths $h_m:[0, \infty) \to [-1, 1]$ by letting $h_m(t) = -h_m(t - 1)$ for all $t > 1$. \begin{theorem} \label{T:Y-unique} The sequence $Y_n$ has a distributional limit $Y$ given by $$ Y(t) = h_{\sqrt{1 - U^2}}(V + t). $$ Here $U$ is a uniform random variable on $[0, 1]$ and $V$ is uniform on $[0, 2]$, independent of $U$. \end{theorem} \begin{proof} By Theorem \ref{T:unique-2} and Theorem \ref{T:particle-crossings} (iii), any subsequential limit $Y$ of $Y_n$ is supported on the set of paths $$ \{h_m (\cdot + u) : m \in [0, 1], u \in [0, 2]\}, $$ so we can write $ Y(\cdot) = h_M(\cdot + V), $ for a pair of random variables $(M, V)$. By Theorem \ref{T:max-height}, $M \stackrel{d}{=} \sqrt{1 - U^2}$, for a uniform random variable $U$ on $[0, 1]$. 
By the time stationarity of $Y$ (Proposition \ref{P:Y-symmetries} (i)), for any $t \in [0, 2]$, we have that $$ h_M(\cdot + t + V) \stackrel{d}{=} h_M(\cdot + V), $$ which in turn implies that $(M, V) \stackrel{d}{=} (M, V + t \Mod 2 )$ for any $t \in [0, 2]$. This implies that $V$ has uniform distribution, and that $M$ is independent of $V$. \end{proof} \section{The integral transform formula} \label{S:integral-formula} Let $\mu_+$ be the pushforward of the local speed distribution $\mu$ under the map $x \mapsto |x|$. In this section we prove an integral transform formula for $\mu_+$. This transform formula allows us to identify $\mu$ as the arcsine distribution on $[-\pi, \pi]$. Once we know $\mu$, we can find the set of minimal flux paths $h_m$ introduced in Theorem \ref{T:path-set}, and then in turn identify $Y$. \medskip To find the integral transform formula, we will calculate $\mathbb{P}(m(Y) > k)$ in two different ways. We first give a heuristic explanation of how to do this when the local speed distribution $\mu$ has no atoms. By Theorem \ref{T:max-height}, we have that $$ \mathbb{P}(m(Y) > k) = \sqrt{1 - k^2}. $$ We can also calculate $\mathbb{P}(m(Y) > k)$ by integrating the marginal probabilities $\mathbb{P}_a(m(Y) > k)$. This gives that \begin{equation} \label{E:heur1} \sqrt{1 - k^2} = \frac{1}2 \int_{-1}^1 \mathbb{P}_a(m(Y) > k)da = 1 - k + \frac{1}2 \int_{-k}^k \mathbb{P}_a(m(Y) > k)da. \end{equation} Now we want to find an expression for $\mathbb{P}_a(m(Y) > k)$ when $|a| < k$. By the ordering on minimal flux paths, if $Y(0) = a$, then $m(Y) > k$ if and only if $|Y'(0)|$ is greater than some threshold value. Therefore for some $s_{a, k} \in \mathbb{R}$, we have that \begin{equation} \label{E:heur2} \mathbb{P}_a(m(Y) > k) = \mathbb{P}_a (|Y'(0)| > \sqrt{1 - a^2}s_{a, k}) = \mu_+(s_{a, k}, \infty). \end{equation} The final equality here follows from Proposition \ref{P:local-global-deriv}. 
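Although it is not needed for the argument, this heuristic can be checked against the limit that is eventually identified, where the minimal flux paths are sine curves and $\mu = \mathfrak{arc}$ (Theorem \ref{T:main-2} and the proof of Theorem \ref{T:weak-limit}). In that case the threshold speed is explicit: the sine path through height $a$ with maximum $k$ satisfies
\begin{equation*}
s_{a, k} = \frac{\pi\sqrt{k^2 - a^2}}{\sqrt{1 - a^2}}, \qquad \text{so that} \qquad \mathbb{P}_a(m(Y) > k) = 1 - \frac{2}{\pi}\arcsin\left(\frac{\sqrt{k^2 - a^2}}{\sqrt{1 - a^2}}\right),
\end{equation*}
since $\mathfrak{arc}_+$ has survival function $1 - \frac{2}{\pi}\arcsin(x/\pi)$ on $[0, \pi]$. Substituting this expression into \eqref{E:heur1} yields exactly the integral identity verified in Proposition \ref{P:arcsin-en}.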
To find $s_{a, k}$, we calculate $\mathbb{P}(m(Y) > k | \; m(Y) > a)$. If we let \[ T_a = \inf \{t \in [0, 1] : Y(t) = a\}, \] then we should have that $$ \mathbb{P}(m(Y) > k | \; m(Y) > a) = \mathbb{P}(|Y'(T_a)| > \sqrt{1 - a^2}s_{a, k}). $$ Now, the ``amount of time'' that $Y$ spends at height $a$ is inversely proportional to its speed at that location. Therefore the distribution of $|Y'(T_a)|$ should be a size-biased version of the distribution of $|Y'(0)|$ given that $Y(0) = a$. Hence, $$ \mathbb{P}(|Y'(T_a)| > \sqrt{1 - a^2}s_{a, k}) = \hat{\mu}_+(s_{a, k}, \infty), $$ where $\hat{\mu}_+$ is the size-biased distribution of $\mu_+$ (we define this formally in the next paragraph). Finally, we can also calculate $\mathbb{P}(m(Y) > k | \; m(Y) > a)$ using Theorem \ref{T:max-height}. This gives that $$ \hat{\mu}_+(s_{a, k}, \infty) = \mathbb{P}(m(Y) > k | \; m(Y) > a) = \frac{\mathbb{P}(m(Y) > k)}{\mathbb{P}(m(Y) > a)} = \frac{\sqrt{1 - k^2}}{\sqrt{1 - a^2}}. $$ We can combine this with \eqref{E:heur2} and \eqref{E:heur1} to get an integral transform formula involving the function $r_{\mu_+}(x) = S(\hat{S}^{-1}(x))$, where $S$ and $\hat{S}$ are the survival functions of $\mu_+$ and $\hat{\mu}_+$, respectively. \medskip We now precisely define everything that is needed to state the integral transform formula. For a probability measure $\nu$ on $(0, \infty)$ with finite mean, define the {\bf size-biased distribution} $\hat{\nu}$ on $(0, \infty)$ by the Radon-Nikodym derivative formula $$ \frac{d\hat{\nu}}{d \nu}(x) = \frac{x}{\int_0^\infty y \, d \nu(y)}. $$ In order to define the integral transform when $\mu$ has atoms, we define the \textbf{extended survival function} $S:\mathbb{R} \times [0,1] \to [0, 1]$ of a probability measure $\nu$ by $$ S(x, q) = \nu(x, \infty) + (1 - q) \nu(x). $$ $S$ is a non-increasing, continuous function in the lexicographic ordering on $\mathbb{R} \times [0,1]$. 
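As a concrete illustration of this definition (the example is not used elsewhere), take $\nu = \tfrac{1}{2}\delta_1 + \tfrac{1}{2}\delta_2$. Then
\begin{equation*}
S(x, q) = \begin{cases} 1, & x < 1, \\ 1 - \tfrac{q}{2}, & x = 1, \\ \tfrac{1}{2}, & 1 < x < 2, \\ \tfrac{1-q}{2}, & x = 2, \\ 0, & x > 2, \end{cases}
\end{equation*}
so as $(x, q)$ increases lexicographically through $\{1\} \times [0, 1]$, the value of $S$ decreases continuously from $1$ to $\tfrac{1}{2}$: the second coordinate interpolates linearly across each atom, which is what makes every level in $[0, 1]$ attainable.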
$S$ can be thought of as the survival function in the lexicographic ordering for $\nu \times \mathcal{L}$, where $\mathcal{L}$ is uniform measure on $[0, 1]$. Now define the \textbf{size-bias ratio function} $r_\nu:[0, 1] \to [0, 1]$ of a probability measure $\nu$ on $(0, \infty)$ with finite mean by \begin{equation*} r_\nu(x) = S(\hat{S}^{-1}(x)), \end{equation*} where $\hat{S}$ is the extended survival function of $\hat{\nu}$ and $S$ is the extended survival function of $\nu$. Here the inverse function is given by $$ \hat{S}^{-1}(x) = \sup \{(y, q) \in \mathbb{R} \times [0,1] : \hat{S}(y, q) = x \}, $$ where the supremum is taken with respect to the lexicographic ordering. When $\nu$ has no atoms, we can define $r_\nu$ in terms of the usual survival functions. \begin{prop}[The integral transform] \label{P:integral-equality} Let $\mu_+$ be the pushforward of the measure $\mu$ under the map $f(x) = |x|$. For every $k \in (0, 1)$, we have that \begin{equation} \label{E:integral-H} 1 - k + \int_0^k r_{\mu_+} \left(\frac{\sqrt{1 - k^2}}{\sqrt{1- x^2}}\right) dx = \sqrt{1-k^2}. \end{equation} \end{prop} To prove Proposition \ref{P:integral-equality}, we first establish that $\mu(0) = 0$. \begin{lemma} \label{L:X-atom} $\mu(0) = 0$. \end{lemma} \begin{proof} By Lemma \ref{L:Y-sym-type}, $\mathbb{P} \left(\frac{Y'(0)}{\sqrt{1 - Y^2(0)}} = 0 \right) = 0$. By Proposition \ref{P:local-global-deriv}, this is equal to $\mu(0)$. \end{proof} Now let $m < 1$, and let $h_m$ be as in Theorem \ref{T:path-set}. For any $a \in [0, m]$, define $$ s_{a, m} = \frac{1}{\sqrt{1 - a^2}} \liminf_{r \to 0} \frac{|h_m(t^* + r) - h_m(t^*)|}{r}, $$ where $t^*$ is any point where $h_m(t^*) = a$. Note that $s_{a, m}$ is independent of $t^*$ by the symmetry of minimal flux paths (Theorem \ref{T:path-set} (i)). We now prove an integral formula relating the speeds $s_{a, m}$ to the local speed distribution. \begin{lemma} \label{L:bias-unbias} Let $\nu$ be the law of $m(Y)$. 
For almost every $a \in [0, 1)$, for every $k \in (a, 1)$ there exists a constant $q_{a, k} \in [0, 1]$ such that \begin{equation} \begin{split} \mathbb{P}_a(m(Y) > k) &= \frac{1}{\sqrt{1 - a^2}} \int \mathbbm{1} \Big(s_{a, m} > s_{a, k} \;\; \;\text{or}\; \;\; s_{a, m} = s_{a, k}, \; m > k\Big) \frac{2}{s_{a, m}} d\nu(m) \\ \label{E:s-a-k} &= \mu_+(s_{a, k}, \infty) + q_{a, k}\mu_+(s_{a, k}). \end{split} \end{equation} Moreover, $\mathbb{P}_a(m(Y) > k)$ is a continuous function of $k \in (a, 1)$ for almost every $a \in [0, 1)$. \end{lemma} \begin{proof} Let $h_m$ be as in Theorem \ref{T:path-set}, and let $a < b \in [0, 1)$. We first compute the amount of time that $h_m$ spends in the interval $[a, b]$. Define $h_m^{-1}:[0, m] \to [0, 1/2]$ by $$ h_m^{-1}(x) = \inf \{t : h_m(t) = x \}. $$ By the strict monotonicity and symmetry of $h_m$ (Theorem \ref{T:path-set} (i), (ii)), we can write \begin{equation} \label{E:L-h} \mathcal{L}\{t: |h_m(t)| \in [a, b]\} = 2[h_m^{-1}(b) - h_m^{-1}(a)], \end{equation} where $\mathcal{L}$ is Lebesgue measure on $[0, 1]$. Now by the concavity of $h_m$ (Lemma \ref{L:concave-paths}), the inverse $h_m^{-1}(x)$ is almost everywhere differentiable with derivative $1/[s_{x, m} \sqrt{1 - x^2}]$. Therefore the left hand side of \eqref{E:L-h} is equal to $$ \int_a^{b} \frac{2}{s_{x, m} \sqrt{1 - x^2}}dx. $$ Now letting $U$ be a uniform random variable on $[0, 1]$ that is independent of $Y$, for any $a < b < k \in [0, 1)$, we have that \begin{equation*} \begin{split} \mathbb{P}\big(m(Y) > k \;\text{and}\; Y(0) \in [a, b]\big) &= \frac{1}2\mathbb{P}\big(m(Y) > k \;\text{and}\; |Y(U)| \in [a, b]\big) \\ &= \frac{1}2 \int_k^1 \mathcal{L}\{t: |h_m(t)| \in [a, b]\} d\nu(m) \\ &= \int_a^{b} \frac{1}{\sqrt{1 - x^2}} \int_k^1 \frac{1}{s_{x, m}} d\nu(m)dx. \end{split} \end{equation*} The first equality above follows by the time-stationarity and symmetry of $Y$ (Proposition \ref{P:Y-symmetries} (i) and (ii)). 
The second equality follows since $Y$ is supported on shifts of the minimal flux paths $h_m$ (Theorem \ref{T:unique-2}). This implies that for almost every pair $a < k \in [0, 1)$, \begin{equation} \label{E:f-amid} \mathbb{P}_a(m(Y) > k) = \frac{1}{\sqrt{1 - a^2}} \int_k^1 \frac{2}{s_{a, m}} d\nu(m). \end{equation} Now by the ordering on minimal flux paths (Theorem \ref{T:unique-2}(iv)), we have that \begin{equation} \label{E:speed-mono} \text{if} \;\; s_{a, m(Y)} < s_{a, k}, \qquad \text{then } m(Y) < k. \end{equation} This allows us to rewrite \eqref{E:f-amid} to get that \begin{equation} \label{E:f-amid-2} \mathbb{P}_a(m(Y) > k) = \frac{1}{\sqrt{1 - a^2}} \int \mathbbm{1} \Big(s_{a, m} > s_{a, k} \;\; \;\text{or}\; \;\; s_{a, m} = s_{a, k}, m > k\Big) \frac{2}{s_{a, m}} d\nu(m) \end{equation} for almost every $a < k \in [0, 1)$. Moreover, for almost every $a \in [0, 1)$, we have that $$ |Y'(0)| = \sqrt{1 - a^2}s_{a, m(Y)} \qquad \mathbb{P}_a\text{-almost surely}. $$ This follows by time stationarity of $Y$, since $Y$ is almost everywhere differentiable. Also, $Y'(0)/\sqrt{1- a^2}$ has distribution $\mu$ for almost every $a$ (Proposition \ref{P:local-global-deriv}). Therefore \eqref{E:speed-mono} implies that for almost every $a \in [0, 1)$, for every $k \in (a, 1)$ there exists a constant $q_{a, k} \in [0, 1]$ such that \begin{equation} \label{E:Pa-mY} \mathbb{P}_a(m(Y) > k) = \mu_+(s_{a, k}, \infty) + q_{a, k}\mu_+(s_{a, k}). \end{equation} Now let $a$ be such that \eqref{E:Pa-mY} holds for every $k \in (a, 1)$ and \eqref{E:f-amid} and \eqref{E:f-amid-2} hold for almost every $k \in (a, 1)$. \medskip By concavity of minimal flux paths (Lemma \ref{L:concave-paths}), $s_{a, m} > 0$ whenever $m > a$. Therefore since $\nu$ has a Lebesgue density by Theorem \ref{T:max-height}, the right hand side of \eqref{E:f-amid} is continuous and non-increasing. 
Since both sides of \eqref{E:f-amid-2} are also non-increasing, and are equal to the right hand side of \eqref{E:f-amid} for almost every $k \in (a, 1)$, they must be equal for every $k \in (a, 1)$. Combining this with \eqref{E:Pa-mY} proves the lemma. \end{proof} \begin{lemma} \label{L:equal-measure} Let $\mathfrak{s}_a$ be the conditional law $\mathbb{P}(s_{a, m(Y)} \in \cdot \mid m(Y) > a)$, and define the measure $\bar{\mathfrak{s}}_a$ by the Radon-Nikodym formula $$ \frac{2}{s} d\mathfrak{s}_a(s) = d\bar{\mathfrak{s}}_a(s). $$ Then for almost every $a \in [0, 1)$, we have that $\bar{\mathfrak{s}}_a = \mu_+$. In particular, for such $a$, we have that \begin{equation} \label{E:q-ak} q_{a, k}\mu_+(s_{a, k}) = \frac{2}{s_{a, k}\sqrt{1 - a^2}} \int \mathbbm{1} \Big(s_{a, m} = s_{a, k}, m > k\Big)d\nu(m). \end{equation} \end{lemma} \begin{proof} Let $a$ be such that \eqref{E:s-a-k} holds for every $k \in (a, 1)$ and such that $\mathbb{P}_a(m(Y) > k)$ is continuous. Further assume that $$ \mathbb{P}_a\left( Y'(0) \text{ exists and is non-zero }, m(Y) < 1, J(Y) = 1 \right) = 1. $$ These conditions hold for almost every $a \in [0, 1)$ (Proposition \ref{P:local-global-deriv}, Lemma \ref{L:bias-unbias}). Noting that $\sqrt{1 - a^2} = \mathbb{P}(m(Y) > a)$ by Theorem \ref{T:max-height}, equation \eqref{E:s-a-k} implies that for every $k \in (a, 1)$, there exists a $p_{a, k} \in [0, 1]$ such that \begin{equation} \label{E:comp} \mathbb{P}_a(m(Y) > k) = \bar{\mathfrak{s}}_a(s_{a, k}, \infty) + p_{a, k}\bar{\mathfrak{s}}_a(s_{a, k}) = \mu_+(s_{a, k}, \infty) + q_{a, k}\mu_+(s_{a, k}). \end{equation} Now, since $Y'(0) \ne 0$, $\mathbb{P}_a$-almost surely, the concavity of minimal flux paths (Lemma \ref{L:concave-paths}) implies that $\mathbb{P}_a(m(Y) = a) = 0$. 
Moreover, since $m(Y) < 1$, $\mathbb{P}_a$-almost surely, we have that $\mathbb{P}_a(m(Y) \in (a, 1)) = 1$, and so \begin{equation} \label{E:mY} \lim_{k \to 1} \mathbb{P}_a(m(Y) > k) = 0 \quad \;\text{and}\; \quad \lim_{k \to a} \mathbb{P}_a(m(Y) > k) = 1. \end{equation} Therefore, \eqref{E:comp} and the continuity of $\mathbb{P}_a(m(Y) > k)$ imply that \begin{equation} \label{E:full} \mu_+ \left( \{s_{a, k} : k \in (a, 1)\} \right) = \bar{\mathfrak{s}}_a \left( \{s_{a, k} : k \in (a, 1)\} \right) = 1. \end{equation} Now fix $k \in (a, 1)$. Let $$ k^* = \inf \{\ell \in [k, 1] : s_{a, k} < s_{a, m} \;\; \text{ for all } \;\;m > \ell \}. $$ Equation \eqref{E:comp} and the continuity of $\mathbb{P}_a(m(Y) > k)$ then imply that $$ \mathbb{P}_a(m(Y) > k^*) = \bar{\mathfrak{s}}_a(s_{a, k}, \infty) = \mu_+(s_{a, k}, \infty). $$ Combining this with \eqref{E:full} proves that $\mu_+ = \bar{\mathfrak{s}}_a$. Equation \eqref{E:q-ak} follows by using that $\mu_+ = \bar{\mathfrak{s}}_a$ to simplify equation \eqref{E:s-a-k}. \end{proof} \begin{proof}[Proof of Proposition \ref{P:integral-equality}] Fix $a$ so that the conclusion of Lemma \ref{L:equal-measure} holds, and let $k > a$. By Theorem \ref{T:max-height} and the ordering on minimal flux paths, we can write \begin{align*} \nonumber\mathbb{P}(m(Y) > k | \; m(Y) > a) &= \frac{1}{\sqrt{1 - a^2}} \left[\int \mathbbm{1}(s_{a, m} > s_{a, k} ) d\nu(m) + \int \mathbbm{1}(s_{a, m} = s_{a, k}, m > k) d\nu(m) \right]. \end{align*} By Lemma \ref{L:equal-measure}, we can rewrite the first integral above in terms of $\mathfrak{s}_a$, and then in terms of $d\mu_+$. This gives that $$ \frac{1}{\sqrt{1 - a^2}}\int \mathbbm{1}(s_{a, m} > s_{a, k} ) d\nu(m) = \int \mathbbm{1}(s > s_{a, k}) d\mathfrak{s}_a(s) = \int \mathbbm{1}(s > s_{a, k})\frac{s}2d\mu_+(s). $$ We can also rewrite the second integral using \eqref{E:q-ak}. 
This implies that $$ \mathbb{P}(m(Y) > k | \; m(Y) > a) = \int \mathbbm{1}(s > s_{a, k})\frac{s}2d\mu_+(s) + \frac{s_{a, k}}2 q_{a, k} \mu_+(s_{a, k}). $$ By Lemma \ref{L:expect-2}, we can recognize $\frac{s}{2}$ as the Radon-Nikodym derivative of the size-biased distribution $\hat{\mu}_+$ with respect to $\mu_+$, proving that $$ \frac{\sqrt{1 - k^2}}{\sqrt{1 - a^2}} = \mathbb{P}(m(Y) > k | m(Y) > a) = \hat{\mu}_+(s_{a, k}, \infty) + q_{a, k}\hat{\mu}_+(s_{a, k}) $$ for almost every $a < k \in [0, 1)$. Now, Lemma \ref{L:bias-unbias} allows us to conclude that \begin{align*} \mathbb{P}_a(m(Y) > k) &= S(s_{a, k}, 1 - q_{a, k}) = S\left(\hat{S}^{-1}\left(\frac{\sqrt{1- k^2}}{\sqrt{1- a^2}}\right)\right) \end{align*} for almost every $a < k \in [0, 1)$. Combining this with the symmetry of $Y$ (Proposition \ref{P:Y-symmetries}) and Theorem \ref{T:max-height} implies that \[ \sqrt{1 - k^2} = \mathbb{P}(m(Y) > k) = \int_{0}^1 \mathbb{P}_a(m(Y) > k) da = 1 - k + \int_0^k r_{\mu_+}\left(\frac{\sqrt{1- k^2}}{\sqrt{1- a^2}}\right) da. \qedhere \] \end{proof} \section{The weak trajectory limit} \label{S:transform} In this section we show that the integral transform in Proposition \ref{P:integral-equality} determines the local speed distribution. We begin with two basic lemmas about size-bias ratio functions. \begin{lemma} \label{L:loc-lip} For any probability distribution $\nu$ on $(0, \infty)$ with finite first moment $m$, the size-bias ratio function $r_\nu$ is locally Lipschitz on $[0, 1)$ and continuous on $[0, 1]$. \end{lemma} \begin{proof} Let $\pi_1(x, y) = x.$ By calculus, when $x \in (0, 1)$, we have that $$ \partial_+ r_\nu(x) = \lim_{h \to 0+} \frac{m}{\pi_1(\hat{S}^{-1}(x + h))} \qquad \;\text{and}\; \qquad \partial_- r_\nu(x) = \lim_{h \to 0+} \frac{m}{\pi_1(\hat{S}^{-1}(x - h))}. $$ The first equation also holds at $x = 0$. 
As $\pi_1(\hat{S}^{-1}(y))$ is a decreasing function of $y$, and strictly positive for all $y \in [0, 1)$, this shows that $r_\nu$ is locally Lipschitz on $[0, 1)$. It is straightforward to check that $r_\nu$ is continuous at $1$. \end{proof} \begin{lemma} \label{L:rX-det} Suppose that $\nu_1$ and $\nu_2$ are probability measures on $(0, \infty)$ with the same first moment, such that $r_{\nu_1} = r_{\nu_2}$. Then $\nu_1 = \nu_2$. \end{lemma} \begin{proof} We prove the contrapositive. Let $\nu_1$ and $\nu_2$ be measures with the same first moment, and suppose that $\nu_1 \ne \nu_2$. Since $\nu_1$ and $\nu_2$ have the same first moment, $\hat{\nu}_1 \ne \hat{\nu}_2$, so for some value of $(y, q) \in (0, \infty) \times [0, 1]$, we have that $\hat{S}_{\nu_1} (y, q) \ne \hat{S}_{\nu_2} (y, q)$. Since the functions $\hat{S}_{\nu_1}(y, \cdot)$ and $\hat{S}_{\nu_2}(y, \cdot)$ are linear for any fixed value of $y$, this implies that $\hat{S}_{\nu_1} (y, r) \ne \hat{S}_{\nu_2} (y, r)$ for some $r \in \{0, 1\}$. Moreover, since $$ \hat{S}_{\nu_i}(y, 1) = \sup \{\hat{S}_{\nu_i}(z, 0) : z > y \}, \qquad i = 1, 2, $$ we can conclude that $\hat{S}_{\nu_1} (z, 0) \ne \hat{S}_{\nu_2} (z, 0)$ for some $z \in (0, \infty)$. Without loss of generality, assume that $a_1 := \hat{S}_{\nu_1} (z, 0) > a_2 :=\hat{S}_{\nu_2} (z, 0)$. Therefore using the notation of the previous lemma, letting $b = (a_1 + a_2)/2$, we have that $$ \lim_{h \to 0^+} \pi_1(\hat{S}_{\nu_1}^{-1}(b + h)) \ge z, \qquad \text{whereas} \qquad \lim_{h \to 0^+} \pi_1(\hat{S}_{\nu_2}^{-1}(b + h)) < z. $$ Since $b \in (0, 1)$, the derivative computation in the previous lemma combined with the fact that $\nu_1$ and $\nu_2$ have the same first moment implies that $r_{\nu_1} \ne r_{\nu_2}$. \end{proof} Now let $\mathcal{X}$ be the space of continuous functions from $[0, 1]$ to $\mathbb{R}$ that are locally Lipschitz on $[0, 1)$. 
Define an integral transform $H$ on $\mathcal{X}$ by \begin{equation*} H(r)(k) = \int_0^k r \left(\frac{\sqrt{1 - k^2}}{\sqrt{1- x^2}}\right) dx, \end{equation*} for $k \in [0, 1]$. By Lemma \ref{L:loc-lip}, any linear combination of size-bias ratio functions is in $\mathcal{X}$, so if the integral transform $H$ is injective on $\mathcal{X}$, then $H(r_{\mu_+})$ determines $r_{\mu_+}$. \begin{lemma} \label{L:injective-transform-lip} $H$ is injective on $\mathcal{X}$. \end{lemma} \begin{proof} Let $r \in \mathcal{X}$ with $r \ne 0$. We will show that $H(r) \ne 0$. Without loss of generality, we may assume that $$ \max_{x \in [0, 1]} r(x) = \delta > 0. $$ Letting $u = \frac{\sqrt{1 - k^2}}{\sqrt{1- x^2}}$ and $y = \sqrt{1 -k^2}$, we have that \begin{equation*} H(r)(\sqrt{1-y^2}) = \int_y^1 r(u) \frac{y^2}{u^2\sqrt{u^2 - y^2}}du. \end{equation*} For $y \in (0, 1]$, we have that \begin{equation} \label{E:H-bar} \close{H}(r)(y) := \frac{H(r)(\sqrt{1 - y^2})}{y^2} = \int_y^1 \frac{r(u)}{u^2\sqrt{u^2 - y^2}}du. \end{equation} It suffices to show that $\close{H}(r)(y) \ne 0$ for some $y \in (0, 1]$. Observe that if $r(1) > 0$, then there would exist a $\gamma \in (0, 1)$ such that $r(x) > 0$ for all $x > \gamma$, so $\close{H}(r)(\gamma) > 0$. Also, if $r(0) > 0$, then $\close{H}(r)(y) \to \infty$ as $y \to 0$. Therefore there must exist $y \in (0,1)$ such that $r(y) = \delta$. Since $r$ is locally Lipschitz on $[0, 1)$, we can find $k, \epsilon > 0$ such that $r(x) \ge \delta - k \epsilon$ for all $x \in [y, y + \epsilon]$. Therefore \begin{align} \label{E:de-k-bd} \int_y^{y+\epsilon}\frac{r(u)}{u^2\sqrt{u^2 - y^2}}du \ge \frac{(\delta - k \epsilon)\sqrt{\epsilon^2 + 2y\epsilon}}{y^2(y + \epsilon)}. \end{align} Now consider the difference between $\close{H}(r)(y)$ and $\close{H}(r)(y + \epsilon)$. 
We have \begin{align} \nonumber \close{H}&(r)(y) - \close{H}(r)(y + \epsilon) \\ \nonumber &= \int_y^{y+\epsilon}\frac{r(u)}{u^2\sqrt{u^2 - y^2}}du + \int_{y+ \epsilon}^1 \frac{r(u)}{u^2\sqrt{u^2 - y^2}}du - \int_{y+ \epsilon}^1 \frac{r(u)}{u^2\sqrt{u^2 - (y+ \epsilon)^2}}du \\ \nonumber &\ge \int_y^{y+\epsilon}\frac{r(u)}{u^2\sqrt{u^2 - y^2}}du + \delta \int_{y + \epsilon}^1 \left[\frac{1}{u^2\sqrt{u^2 - y^2}} - \frac{1}{u^2\sqrt{u^2 - (y + \epsilon)^2}}\right]du \\ \label{E:int-alm} &\ge \frac{(\delta - k \epsilon) \sqrt{\epsilon^2 + 2y\epsilon}}{y^2(y + \epsilon)} + \delta \left[\frac{(y + \epsilon)^2\sqrt{1-y^2} -y^2\sqrt{1-(y + \epsilon)^2}- (y + \epsilon)\sqrt{\epsilon^2 + 2y\epsilon}}{(y + \epsilon)^2y^2}\right] . \end{align} In the above calculation, the first inequality comes from the fact that $r(x) \le \delta$ for all $x$, and the observation that $$ \frac{1}{u^2\sqrt{u^2 - y^2}} < \frac{1}{u^2\sqrt{u^2 - (y + \epsilon)^2}} $$ for all $u \in (y + \epsilon, 1)$. The second inequality follows by integration and by plugging in the bound in \eqref{E:de-k-bd}. Now expanding in $\epsilon$ about $\epsilon = 0$, we get that \eqref{E:int-alm} is equal to \begin{align*} \frac{(2-y^2)\delta\epsilon}{y^3\sqrt{1-y^2}} + O(\epsilon^{3/2}). \end{align*} This is strictly greater than 0 for small enough $\epsilon$, so $ \close{H}(r)(y) - \close{H}(r)(y + \epsilon) \ne 0 $ for such $\epsilon$. Hence $\close{H}(r)(x) \ne 0$ for some $x \in [0, 1)$. \end{proof} \begin{prop} \label{P:arcsin-en} Let $\mathfrak{arc}$ be the arcsine distribution on $[-\pi, \pi]$, and let $\mathfrak{arc}_+$ be the pushforward of $\mathfrak{arc}$ under the map $x \mapsto |x|$. Then for every $k \in [0, 1]$, we have that \begin{equation} \label{E:integral-H-2} 1 - k + \int_0^k r_{\mathfrak{arc}_+} \left(\frac{\sqrt{1 - k^2}}{\sqrt{1- x^2}}\right) dx = \sqrt{1-k^2}. \end{equation} \end{prop} \begin{proof} The distribution $\mathfrak{arc}$ has density $(\pi \sqrt{\pi^2 - x^2})^{-1}$ on $[-\pi, \pi]$. 
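The computation of the size-bias ratio function of $\mathfrak{arc}_+$ is routine, and we record a sketch of it here. The pushforward $\mathfrak{arc}_+$ has density $2(\pi \sqrt{\pi^2 - x^2})^{-1}$ on $[0, \pi]$, so its survival function and first moment are
\begin{equation*}
S(x) = 1 - \frac{2}{\pi}\arcsin\left(\frac{x}{\pi}\right), \qquad \int_0^\pi \frac{2x \, dx}{\pi\sqrt{\pi^2 - x^2}} = 2.
\end{equation*}
The size-biased distribution $\hat{\mathfrak{arc}}_+$ therefore has density $x(\pi\sqrt{\pi^2 - x^2})^{-1}$ on $[0, \pi]$, with survival function
\begin{equation*}
\hat{S}(x) = \int_x^\pi \frac{t \, dt}{\pi\sqrt{\pi^2 - t^2}} = \frac{\sqrt{\pi^2 - x^2}}{\pi}, \qquad \text{so that} \qquad \hat{S}^{-1}(y) = \pi\sqrt{1 - y^2}.
\end{equation*}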
From this, we can calculate that \begin{align*} r_{\mathfrak{arc}_+}(y) = 1 - \frac{2}{\pi}\arcsin(\sqrt{1-y^2}). \end{align*} We now use the connection between the arcsine distribution and the Archimedean measure $\mathfrak{Arch}$ to help evaluate the integral in \eqref{E:integral-H-2}. Let $(X_1, X_2) \sim \mathfrak{Arch}$ and let $$ \mathcal{A}(t) = X_1 \cos (\pi t) + X_2 \sin(\pi t) $$ be the Archimedean path. Writing $\mathbb{P}(m(\mathcal{A}) > k) = \mathbb{P}(X_1^2 + X_2^2 > k^2)$ in both polar and Cartesian coordinates, we get that $$ \int_k^1 \frac{rdr}{\sqrt{1 - r^2}} = 4 \int_{0}^1 \int_{\sqrt{k^2 - y^2} \vee 0}^{\sqrt{1 - y^2}} \frac{dxdy}{2 \pi \sqrt{1 - x^2 - y^2}}. $$ The left hand side can easily be evaluated as $\sqrt{1 - k^2}$. The right hand side is equal to \begin{align*} (1 - k) + \int_{0}^k \frac{2}{\pi} \left(\frac{\pi}{2} - \arcsin\left(\frac{\sqrt{k^2 - y^2}}{\sqrt{1- y^2}}\right)\right)dy = (1- k) + \int_{0}^k r_{\mathfrak{arc}_+}\left(\frac{\sqrt{1 - k^2}}{\sqrt{1- y^2}}\right) dy. \qquad \qedhere \end{align*} \end{proof} We can now prove Theorem \ref{T:main-2}, and in turn use that to prove Theorem \ref{T:weak-limit}. \begin{proof}[Proof of Theorem \ref{T:main-2}] By Proposition \ref{P:integral-equality}, Lemma \ref{L:injective-transform-lip} and Proposition \ref{P:arcsin-en}, we have that $r_{\mu_+} = r_{\mathfrak{arc}_+}$. Moreover, the first moment of $\mathfrak{arc}_+$ is $2$. By Lemma \ref{L:expect-2}, this matches the first moment of $\mu_+$. Therefore by Lemma \ref{L:rX-det}, $\mu_+ = \mathfrak{arc}_+$. Finally, symmetry of both measures implies that $\mu = \mathfrak{arc}$. \end{proof} \begin{proof}[Proof of Theorem \ref{T:weak-limit}] Fix $m > 0$. Let $g_m(t) = m \sin (\pi t)$, and let $g_m^{-1}$ be the inverse of $g_m$ on the interval $[0, 1/2]$. Let $$ s_m(t) = \frac{g_m'(t)}{\sqrt{1 - g_m^2(t)}} $$ be the local speed of $g_m$. 
We can calculate that \begin{align} \nonumber J(g_m) = 2J(g_m; [0, 1/2]) &= \int_0^{1/2} D_\mu(s_m(t)) \sqrt{1 - g_m^2(t)}dt \\ \label{E:s-h} &= \int_0^m \frac{D_\mu(s_m(g_m^{-1}(x)))}{s_m(g_m^{-1}(x))}dx. \end{align} Here we have made the substitution $x = g_m(t)$ to go from the first to the second line. Now using Theorem \ref{T:main-2}, we can calculate $$ D_\mu(c) = D_\mathfrak{arc}(c) = \frac{2}{\pi} \left(c \arcsin \left(\frac{c}{\pi}\right) + \sqrt{\pi^2 - c^2}\right). $$ From here we can use that $s_m(g_m^{-1}(x)) = \frac{\pi \sqrt{m^2 - x^2}}{\sqrt{1 - x^2}}$ to compute that \begin{equation*} J(g_m) = \int_0^m \frac{2}{\pi} \left[ \frac{\sqrt{1-m^2}}{\sqrt{m^2 - x^2}} + \arcsin\left(\frac{\sqrt{m^2 - x^2}}{\sqrt{1- x^2}}\right)\right]dx. \end{equation*} The first part of this integral can be easily evaluated, and the second part can be evaluated by comparing with the integral in the final line of the proof of Proposition \ref{P:arcsin-en}. Putting this all together yields that $J(g_m) = 1$, as desired. Therefore by Theorem \ref{T:Y-unique}, we can write $$ Y(t) = \sqrt{1 - V^2}\sin(\pi t + 2 \pi U), $$ where $U$ and $V$ are independent uniform random variables on $[0, 1]$. This is the Archimedean path. \end{proof} \subsection{The empirical distribution of trajectories} \label{SS:slightly} By a compactness argument, we can immediately prove a slightly stronger version of Theorem \ref{T:weak-limit}. This will allow us to conclude Theorem \ref{T:subnetwork}. This theorem will also be necessary for establishing the stronger limits in Sections \ref{S:strong-limit} and \ref{S:geom-limit}. \medskip For a Polish space $S$, let $\mathcal{M}(S)$ be the space of probability measures on $S$ with the topology of weak convergence. Note that $\mathcal{M}(S)$ is itself a Polish space. Recall the notation $\sigma_G$ introduced in Section \ref{S:intro}.
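As an informal aside, the computation $J(g_m) = 1$ from the proof of Theorem \ref{T:weak-limit} above admits a quick numerical sanity check (a midpoint-rule sketch, not part of the proof; the integrand has an integrable square-root singularity at $x = m$):

```python
import math

def integrand(x, m):
    # (2/pi) * [ sqrt(1-m^2)/sqrt(m^2-x^2) + arcsin( sqrt(m^2-x^2)/sqrt(1-x^2) ) ]
    return (2.0 / math.pi) * (
        math.sqrt(1.0 - m * m) / math.sqrt(m * m - x * x)
        + math.asin(math.sqrt(m * m - x * x) / math.sqrt(1.0 - x * x)))

def J(m, n=200000):
    # midpoint rule on [0, m]; midpoints avoid evaluating exactly at x = m
    h = m / n
    return sum(integrand((i + 0.5) * h, m) for i in range(n)) * h

print(J(0.3), J(0.7))  # both close to 1
```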
For a fixed sorting network $\sigma$, let $\nu_\sigma \in \mathcal{M}(\mathcal{D})$ be the uniform measure on the set $\{\sigma_G(i, \cdot)\}_{i \in \{1, \dots, n\}}$. Now let $\Omega_n$ be the space of all $n$-element sorting networks, and define $$ \nu_n = \frac{1}{\card{\Omega_n}} \sum_{\sigma \in \Omega_n} \delta(\nu_\sigma). $$ For each $n$, $\nu_n \in \mathcal{M}(\mathcal{M}(\mathcal{D}))$. We now extend Theorem \ref{T:weak-limit} to give a limit theorem for the sequence $\nu_n$. \begin{theorem} \label{T:weak-limit'} Let $\nu_n \in \mathcal{M}(\mathcal{M}(\mathcal{D}))$ be as above, and let $\mathbb{P}_\mathcal{A}$ be the law of the Archimedean path $\mathcal{A}(t) = X_1\cos(\pi t) + X_2 \sin (\pi t)$, where $(X_1, X_2) \sim \mathfrak{Arch}$. Then $\nu_n \to \delta_{\mathbb{P}_\mathcal{A}}$. \end{theorem} Note that the law of $Y_n$ is the expectation of the random measure $\nu_{\sigma^n}$. Therefore Theorem \ref{T:weak-limit} gives convergence in expectation for $\nu_n$, whereas Theorem \ref{T:weak-limit'} gives pointwise convergence. \begin{proof} By Remark 2.4 from \cite{dauvergne1}, the sequence $\nu_n$ is precompact. For any subsequential limit $\nu$ of $\nu_n$, Theorem \ref{T:weak-limit} implies that $\nu$-almost every $\rho$ is supported on curves of the form $a\sin(\pi t) + b \cos(\pi t)$. Hence if $Z$ is a random path with law $\rho$, then $$ Z(t) = X_1 \cos(\pi t) + X_2 \sin(\pi t) $$ for some random variables $(X_1, X_2)$. Moreover, $Z(t)$ is uniform for every $t$ since each measure $\nu_n$ is supported on the set $\{\nu_\sigma\}_{\sigma \in \Omega_n}$. Therefore $(X_1, X_2) \sim \mathfrak{Arch}$, so $Z$ is the Archimedean path, and hence $\nu = \delta_{\mathbb{P}_\mathcal{A}}$. \end{proof} \begin{proof}[Proof of Theorem \ref{T:subnetwork}] Fix $m$.
With the notation of Theorem \ref{T:subnetwork}, we have that \begin{equation} \label{E:2-cvg} \tau^n_m \stackrel{d}{\to} \tau_m \quad \;\text{as}\; n \to \infty \quad \text{ if and only if } \quad \frac{1}{|\Omega_n|} \sum_{\sigma \in \Omega_n} \delta(\nu_\sigma^m) \to \delta_{\mathbb{P}_\mathcal{A}^m} \;\text{as}\; n \to \infty \end{equation} in the space $\mathcal{M}(\mathcal{M}(\mathcal{D}^m))$. Here $\nu_\sigma^m$ is the $m$-fold product measure $\nu_\sigma \times \nu_\sigma \times \dots \times \nu_\sigma$. The reason for this is that if $I_1, \dots, I_m$ are independent uniform random variables on $\{1, \dots, n\}$, then $I_j \ne I_k$ for all $j \ne k$ with high probability. Hence to find the limit of an $m$-out-of-$n$ sorting network, it suffices to find the limit of $m$ independently chosen trajectories. Finally, the second convergence statement in \eqref{E:2-cvg} follows from Theorem \ref{T:weak-limit'}. \end{proof} \begin{section}{The strong sine curve limit} \label{S:strong-limit} Theorem \ref{T:weak-limit} shows that for any $\epsilon > 0$, with high probability $(1- \epsilon)n$ particle trajectories in a random sorting network are close to sine curves. In this section, we extend this result to all particle trajectories, thus proving Theorem \ref{T:sine-curves}. By combining Theorem \ref{T:sine-curves} and Theorem \ref{T:weak-limit}, we also prove Theorems \ref{T:matrices} and \ref{T:unif-rotation}. \medskip The idea behind the proof is as follows. By Theorem \ref{T:weak-limit}, we know that most trajectories in a typical large-$n$ sorting network are close to sine curves. Since any two trajectories must cross exactly once, this restricts the type of behaviour that the remaining trajectories can have. Specifically, this forces all other trajectories to be either sine curves themselves, or to spend a lot of time at the edge of the sorting network. We can eliminate this second case by using the ``octagon" bound from \cite{angel2007random}.
To state this bound, let $A_{n, \gamma}$ be the event where \begin{align*} \left|\sigma^n_G(i, t) - \sigma^n_G(i, 0)\right| < 2\sqrt{2t - t^2} + \gamma \;\; &\;\text{and}\; \;\; \left|\sigma^n_G(i, t) - \sigma^n_G(i, 1)\right| < 2\sqrt{1 - t^2} + \gamma \\ &\text{for all } \;\; t \in [0, 1], i \in \{1, \mathellipsis n\}. \end{align*} \begin{theorem}[Octagon bound, \cite{angel2007random}] \label{T:octagon} For any $\gamma > 0$, we have that $$ \lim_{n \to \infty} \mathbb{P}(A_{n, \gamma}) = 1. $$ \end{theorem} Now define $$ L^n_{i}(\delta) = \mathcal{L} \{ t: |\sigma^n_G(i, t)| \ge 1 - \delta\}, $$ where $\mathcal{L}$ is Lebesgue measure on $[0, 1]$. This is the amount of time that particle $i$ spends within $\delta$ of the edge in the random sorting network $\sigma^n$. \begin{lemma} \label{L:edge-time} For every $\epsilon > 0$, we have that $$ \lim_{n \to \infty} \mathbb{P} \left( \max_{i \in [1, n] } L^n_i(\epsilon^2/16) > \epsilon \right) =0. $$ \end{lemma} \begin{proof} Fix $\epsilon > 0, n \in \mathbb{N}$, and let $i$ be such that $|\sigma_G(i, 0)| \ge 1 - \epsilon^2/16$. On the event $A_{n, \epsilon^4/64}$, a simple calculation shows that $$ -1 + \epsilon^2/16 < \sigma_G(i, t) < 1 - \epsilon^2/16 \qquad \text{ for all } t \in [\epsilon/2, 1 - \epsilon/2]. $$ Therefore by Theorem \ref{T:octagon}, we have that \begin{equation} \label{E:time-shift} \lim_{n \to \infty} \mathbb{P} \bigg( \max \left\{L^n_i(\epsilon^2/16) : \; i \in [1, n], \;|\sigma^n_G(i, 0)| \ge 1 - \epsilon^2/16 \right\} > \epsilon \bigg) = 0. \end{equation} By time stationarity of random sorting networks (Theorem \ref{T:time-stat}), for each $n$ the above probability is greater than or equal to $$ \epsilon \mathbb{P} \left( \max_{i \in [1, n] } L^n_i(\epsilon^2/16) > \epsilon \right). $$ Therefore \eqref{E:time-shift} implies the lemma. 
\end{proof} Recall that $\nu_\sigma$ is uniform measure on the trajectories of a sorting network $\sigma$, and that $\mathbb{P}_\mathcal{A}$ is the law of the Archimedean path. To prove Theorem \ref{T:sine-curves} from here, we must show that if $\nu_\sigma$ is close to $\mathbb{P}_\mathcal{A}$ in the weak topology, then for any particle $i$, either $\sigma_G(i, \cdot)$ spends a lot of time at the edge of the sorting network, or else $\sigma_G(i, \cdot)$ is close to a sine curve. \medskip Recall that $\mathcal{D}$ is the closure of all sorting network trajectories in the uniform norm. To metrize weak convergence on the space of probability measures on $\mathcal{D}$, we use the L\'evy-Prokhorov metric $d_{LP}$. For a set $A \subset \mathcal{D}$, define $$ A^\epsilon = \{ f \in \mathcal{D} : ||f - g||_u < \epsilon \;\; \text{for some} \; g \in A \}. $$ Here and throughout the remaining proofs, $||\cdot||_u$ is the uniform norm. For two probability measures $\nu_1, \nu_2$ on $\mathcal{D}$, define $$ d_{LP}(\nu_1, \nu_2) = \inf \big\{ \epsilon > 0 : \nu_1(A) \le \nu_2(A^\epsilon) + \epsilon \text{ for all Borel sets } A \subset \mathcal{D} \big\}. $$ We now prove two lemmas characterizing particle behaviour when $\nu_\sigma$ is close to ${\mathbb{P}_\mathcal{A}}$. The first gives conditions under which a particle spends time close to the edge of a sorting network. Let $a_{n, i} = 2i/n - 1$, and define $$ C_{n, i}(t) = \sin(\arcsin(a_{n, i}) + \pi t) \qquad \;\text{and}\; \qquad c_{n, i}(t) = \sin(\arcsin(a_{n, i}) - \pi t). $$ Note that these are simply the paths $h_{a_{n, i}, \pm 1}$ in Theorem \ref{T:unique-2}. Recalling the definition of the Archimedean measure $\mathfrak{Arch}$ from Section \ref{S:intro}, for $\epsilon \in (0, 1]$ define $$ L(\epsilon) =\mathfrak{Arch}( r \ge 1 - \epsilon, \theta \in [2\pi x, 2\pi (x + \epsilon)]) =\epsilon\sqrt{2\epsilon - \epsilon^2}, $$ where $(r, \theta)$ are polar coordinates. 
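The closed form for $L(\epsilon)$ comes from the radial integral $\int_{1-\epsilon}^1 r(1-r^2)^{-1/2}\,dr = \sqrt{2\epsilon - \epsilon^2}$ times the angular fraction $\epsilon$; as an informal numerical check (not part of the argument):

```python
import math

def L_numeric(eps, n=200000):
    # angular fraction eps times the radial integral int_{1-eps}^1 r/sqrt(1-r^2) dr,
    # evaluated by the midpoint rule (the singularity at r = 1 is integrable)
    a = 1.0 - eps
    h = eps / n
    total = 0.0
    for i in range(n):
        r = a + (i + 0.5) * h
        total += r / math.sqrt(1.0 - r * r)
    return eps * total * h

for eps in (0.1, 0.3):
    print(L_numeric(eps), eps * math.sqrt(2 * eps - eps * eps))  # pairs agree
```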
Note that $L(\epsilon)$ is independent of $x$ by the rotational symmetry of $\mathfrak{Arch}$. \begin{lemma} \label{L:edge-cond} Let $\gamma \in (0, 2]$ and $\epsilon \in (0, \gamma^2/100)$. Suppose that for a fixed $n$-element sorting network $\sigma$ and a particle $i \in \{1, \mathellipsis n \}$, we have $d_{LP}(\nu_\sigma, {\mathbb{P}_\mathcal{A}}) < L(\epsilon)/2$, and either $$ \max_{t \in [0, 1]} [\sigma_G(i, t) - C_{n,i}(t)]\ge \gamma \quad \text{or} \quad \max_{t \in [0, 1]} [c_{n, i}(t) - \sigma_G(i, t)] \ge \gamma. $$ Then $$ \mathcal{L}\left\{ t: |\sigma_G(i, t)| \ge 1- 3\epsilon \right\} \ge \gamma/6. $$ \end{lemma} \begin{proof} Without loss of generality, we can assume that there exists an $s \in [0, \arccos(a_{n, i})/\pi]$ such that \begin{equation} \label{E:sig-ga} \sigma_G(i, s) - C_{n, i}(s) \ge \gamma. \end{equation} The other cases follow by symmetric arguments. Since $d_{LP}(\nu_\sigma, {\mathbb{P}_\mathcal{A}}) < L(\epsilon)/2$, for every $x \in [0, 1]$, there exists a particle $j(x)$ such that $$ ||\sigma_G(j(x), t) - r\sin(\pi t + 2 \pi (x + \alpha)) ||_u < L(\epsilon)/2 $$ for some $r \ge 1 - \epsilon$ and $\alpha \in [0, \epsilon]$. Now since $$ ||r\sin(\pi t + 2 \pi (x + \alpha)) - \sin(\pi t + 2 \pi x)||_u \le 2 \epsilon, $$ this implies that \begin{equation} \label{E:sig-jx} ||\sigma_G(j(x), t) - \sin(\pi t + 2 \pi x)||_u < L(\epsilon)/2 + 2 \epsilon \end{equation} for all $x \in [0, 1]$. Let $\delta = \gamma - \frac{L(\epsilon)}2 - 2 \epsilon$. Note that $\delta$ is positive since $\epsilon < \gamma^2/100$. For all $\alpha \in [0, \delta]$, the inequality \eqref{E:sig-ga} implies that \begin{align} \label{E:cond-1} \sigma_G(i, s)- \sin(\arcsin(a_{n, i}) + \alpha + \pi s) > \gamma - \delta = L(\epsilon)/2 + 2 \epsilon. \end{align} Now define $j_{\alpha} = j\left((\arcsin(a_{n, i}) + \alpha)/(2\pi)\right).$ Combining \eqref{E:cond-1} with \eqref{E:sig-jx}, we have that \begin{align} \nonumber \sigma_G(i, s) > \sigma_G\left(j_\alpha, s\right).
\end{align} Moreover, if $1 - \cos(\alpha) > L(\epsilon)/2 + 2 \epsilon$, then $$ \sigma_G(i, 0) = a_{n, i} < \sin(\arcsin(a_{n, i}) + \alpha) - L(\epsilon)/2 - 2 \epsilon, $$ and so $$ \sigma_G(i, 0) < \sigma_G\left(j_\alpha, 0\right). $$ Thus for every $\alpha \in [\arccos(1 - L(\epsilon)/2 - 2 \epsilon), \delta]$, the particles $i$ and $j_\alpha$ must cross during the interval $[0, s]$. Therefore \begin{equation} \label{E:sig-G} \sigma_G(i, t) > \sigma_G\left(j_\alpha, t\right)\qquad \text{ for all } t > s, \;\; \alpha \in [\arccos(1 - L(\epsilon)/2 - 2 \epsilon), \delta]. \end{equation} We now show that this forces the particle $i$ to spend a large amount of time close to the edge of the sorting network. For all $\alpha \in [0, \delta]$, there must be some time $t_\alpha \in [0, 1]$ such that $$ \sin(\pi t_\alpha +\arcsin(a_{n, i}) + \alpha) = 1. $$ The time $t_\alpha \notin [0, s]$ since for every $\alpha \in [0, \delta]$, we have that $$ ||\sin(\pi t +\arcsin(a_{n, i}) + \alpha) - C_{n, i}(t)||_u < \gamma, \quad \;\text{and}\; \;\; C_{n, i}(t) \le C_{n, i}(s) \le 1 - \gamma \text{ for all } t \in [0, s]. $$ We have used \eqref{E:sig-ga} to get the second statement above. Therefore for all $\alpha \in [\arccos(1 - L(\epsilon)/2 - 2 \epsilon), \delta]$, \eqref{E:sig-G} implies that $$ \sigma_G(i, t_\alpha) > \sigma_G(j_\alpha, t_\alpha) \ge 1 - L(\epsilon)/2 - 2\epsilon \ge 1 - 3\epsilon. $$ Using the fact that $\arccos(1 - x) \le 2\sqrt{x}$ for $x \in [0, \pi/2)$, we have that \[ \mathcal{L} \big\{t_\alpha : \alpha \in [\arccos(1 - L(\epsilon)/2 - 2 \epsilon), \delta] \big\} \ge \frac{\delta - 2\sqrt{L(\epsilon)/2 + 2 \epsilon}}{\pi} \ge \frac{\gamma - 3\epsilon - 2\sqrt{3\epsilon}}{\pi} \ge \frac{\gamma}{6}. \] Here the final bound follows from the fact that $\epsilon < \gamma^2/100$.
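The elementary inequality $\arccos(1 - x) \le 2\sqrt{x}$ invoked above can be confirmed numerically on the stated range (an informal grid check, not part of the proof):

```python
import math

# arccos(1 - x) <= 2*sqrt(x) on (0, pi/2); check on a fine grid
grid = [i / 10000.0 for i in range(1, 15700)]
worst = max(math.acos(1.0 - x) - 2.0 * math.sqrt(x) for x in grid)
print(worst)  # negative, so the inequality holds on the grid
```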
\end{proof} The second lemma shows that if a curve $\sigma_G(i, \cdot)$ stays close to the region between the curves $c_{n, i}$ and $C_{n, i}$, then it must be close to a sine curve. \begin{lemma} \label{L:sine-close} Let $\sigma$ be an $n$-element sorting network, $i \in \{1, \mathellipsis n \}$, and $\gamma \in (0, 1)$. Suppose that $$ \max_{t \in [0, 1]} [\sigma_G(i, t) - C_{n,i}(t)] < \gamma \quad \text{and} \quad \max_{t \in [0, 1]} [c_{n, i}(t) - \sigma_G(i, t)] < \gamma, $$ and that $d_{LP}(\nu_\sigma, {\mathbb{P}_\mathcal{A}}) < \frac{\gamma^4}{128}$. Then there exist constants $a \in [0, 1]$ and $\theta \in [0, 2\pi]$ such that $$ ||\sigma_G(i, t) - a\sin(\pi t + \theta) ||_u < 2\gamma + 2/n. $$ \end{lemma} \begin{proof} Suppose first that for some $s \in [0,1]$ we have \begin{equation} \label{E:sig-GG} \sigma_G(i, s) \in [c_{n, i}(s) + \gamma, C_{n, i}(s) - \gamma]. \end{equation} Observe that $s \in [\arcsin(\gamma)/\pi, 1- \arcsin(\gamma)/\pi]$, since otherwise the above interval is empty. By \eqref{E:sig-GG}, we can find a point $(x_0, y_0) \in B(0, 1 - \gamma)$ such that $$ x_0 = \sigma_G(i, 0) \qquad \;\text{and}\; \qquad x_0\cos(\pi s) + y_0 \sin(\pi s) = \sigma_G(i, s). $$ Since $s \le 1- \arcsin(\gamma)/\pi$, we can find another point $ (x_1, y_1) \in B((x_0, y_0), \gamma) \subset B(0, 1) $ such that \begin{equation} \label{E:x1-x0} x_1 > x_0 + \gamma^2/3 \quad \;\text{and}\; \quad x_1\cos(\pi s) + y_1\sin(\pi s) > x_0\cos(\pi s) + y_0\sin(\pi s) + \gamma^2/3. \end{equation} Now observe that $$ \gamma^4/64\le \inf \left\{ \mathfrak{Arch}(B(x, \gamma^2/6)) : x \in B(0, 1) \right\}.
$$ Therefore since $d_{LP}(\nu_\sigma, \mathbb{P}_\mathcal{A}) < \gamma^4/128$, there must be a point $(x_2, y_2) \in B(0, 1) \cap B((x_1, y_1),\gamma^2/6)$ and a particle $j \in \{1, \dots, n\}$ such that \begin{equation} \label{E:G-jj} \begin{split} ||\sigma_G(j, t) - (x_1\cos(\pi t) &+ y_1\sin(\pi t))||_u \\ &\le ||\sigma_G(j, t) - (x_2\cos(\pi t) + y_2\sin(\pi t))||_u + \gamma^2/6 < \gamma^2/3. \end{split} \end{equation} By \eqref{E:x1-x0}, this implies that $$ \sigma_G(j, 0) > \sigma_G(i, 0) \qquad \;\text{and}\; \qquad \sigma_G(j, s) > \sigma_G(i, s), $$ and hence $\sigma_G(j, t) > \sigma_G(i, t)$ for all $t \in [0, s]$. Combining this with \eqref{E:G-jj} and the fact that $(x_1, y_1) \in B((x_0, y_0), \gamma)$ implies that $$ \sigma_G(i, t) - [x_0\cos(\pi t) + y_0 \sin(\pi t)] \le \gamma^2/3 + \gamma < 2\gamma \qquad \text{ for all } t \in [0, s]. $$ Symmetric arguments give the same upper bound on this difference over the interval $[s, 1]$, and on the difference $x_0\cos(\pi t) + y_0 \sin(\pi t) - \sigma_G(i, t)$ over the interval $[0, 1]$. This implies that $$ ||\sigma_G(i, t) - [x_0\cos(\pi t) + y_0 \sin(\pi t)]||_u \le 2\gamma. $$ Now suppose that there does not exist an $s \in [0, 1]$ such that $\sigma_G(i, s) \in [c_{n, i}(s) + \gamma, C_{n, i}(s) - \gamma]$. Then either $$ ||\sigma_G(i, \cdot) - C_{n, i}(\cdot)||_u < 2\gamma + \frac{2}n \qquad \;\text{or}\; \qquad ||\sigma_G(i, \cdot) - c_{n, i}(\cdot)||_u < 2\gamma + \frac{2}n. $$ This can be seen by simply analyzing the paths $C_{n, i}$ and $c_{n, i}$, noting that the trajectory $\sigma_G(i, \cdot)$ can ``jump" distances of size $2/n$. \medskip As all the functions $C_{n, i}$, $c_{n, i}$, and $x_0\cos(\pi t) + y_0 \sin(\pi t)$ are of the form $a\sin(\pi t + \theta)$, for some $(a, \theta) \in [0, 1] \times[0, 2\pi]$, this proves the lemma. \end{proof} We can now put together all the ingredients to prove Theorem \ref{T:sine-curves}.
\begin{proof}[Proof of Theorem \ref{T:sine-curves}] Fix $\gamma \in (0, 1)$, and recall from Section \ref{SS:slightly} that $\nu_n$ is the uniform measure on the set $\{\nu_\sigma\}_{\sigma \in \Omega_n}$. For small enough $\epsilon > 0$, Lemma \ref{L:edge-time} and Theorem \ref{T:weak-limit'} imply that $$ \mathbb{P} \left ( \max_{i \in [1, n]} \mathcal{L}\left\{t: |\sigma^n_G(i, t)| \ge 1 -3\epsilon \right\} < \frac{\gamma}{6}, \;\; d_{LP}(\nu_{\sigma^n}, \mathbb{P}_\mathcal{A}) < \frac{\gamma^4}{128} \wedge \frac{L(\epsilon)}2 \right) \to 1 \qquad \;\text{as}\; n \to \infty. $$ Combining Lemmas \ref{L:edge-cond} and \ref{L:sine-close}, this implies that there exist random variables $A_{n, i, \gamma} \in [0, 1]$ and $\Theta_{n, i, \gamma} \in [0, 2\pi]$ such that $$ P_{n, \gamma} := \mathbb{P} \left(\max_{i \in [1, n]} ||\sigma^n_G(i, t) - A_{n, i, \gamma} \sin(\pi t + \Theta_{n, i, \gamma})||_u < 2\gamma + \frac{2}n \right) \to 1 \qquad \;\text{as}\; n \to \infty. $$ As constructed, the random variables $A_{n, i, \gamma}$ and $\Theta_{n, i, \gamma}$ depend on $\gamma$. To remove this dependence, let $\gamma_n \to 0$ be a sequence such that $P_{n, \gamma_n} \to 1$ as $n \to \infty$. Let $A_{n, i} = A_{n, i, \gamma_n}$ and $\Theta_{n, i} = \Theta_{n, i, \gamma_n}$, and define \begin{equation} \label{E:Bnga} B_{n, \gamma} = \left\{ \max_{i \in [1, n]} ||\sigma^n_G(i, t)- A_{n, i} \sin(\pi t + \Theta_{n, i})||_u < \gamma \right\}. \end{equation} Then for any $\gamma > 0$, we have that $\mathbb{P}(B_{n, \gamma}) \to 1$ as $n \to \infty$. \end{proof} We can also prove Theorem \ref{T:matrices} and Theorem \ref{T:unif-rotation}. For these we use the notation $B_{n, \gamma}$ from \eqref{E:Bnga}. \begin{proof}[Proof of Theorem \ref{T:matrices}] Fix $t \in [0, 1]$. Since $\nu_{n} \to \delta_{\mathbb{P}_\mathcal{A}}$ (Theorem \ref{T:weak-limit'}), the random measure $\rho_t^n$ converges in probability to the law of $(X, X \cos (\pi t )+ Y \sin (\pi t))$, where $(X, Y) \sim \mathfrak{Arch}$.
This law is simply $\mathfrak{Arch}_t$. \medskip Now on the event $B_{n, \gamma}$, the support of the measure $\rho_t^n$ is contained in the set $$ \text{supp}(\mathfrak{Arch}_t)^\gamma = \{ x \in [-1, 1]^2 : d(x, \text{supp}(\mathfrak{Arch}_t)) < \gamma \}. $$ Moreover, since $\mathfrak{Arch}_t$ has a Lebesgue density that is bounded below, the weak convergence of $\rho^n_t$ to $\mathfrak{Arch}_t$ implies that with high probability, $ \text{supp}(\mathfrak{Arch}_t) \subset \text{supp}(\rho^n_t)^\gamma $ for any $\gamma > 0$. Therefore since $\mathbb{P}(B_{n, \gamma}) \to 1$ as $n \to \infty$ by Theorem \ref{T:sine-curves}, we have that \[ \lim_{n \to \infty} d_H(\text{supp}(\rho^n_t), \text{supp}(\mathfrak{Arch}_t)) = 0 \qquad \text{ in probability. } \quad \qedhere \] \end{proof} \begin{proof}[Proof of Theorem \ref{T:unif-rotation}] First observe that for any $a \in [0, 1]$, $t \in [0, 1/2]$, and $\theta \in [0, 2\pi]$, we have $$ e^{\pi i t} \left[a\sin(\pi t + \theta) + ia\sin(\pi t + \pi/2 + \theta) \right] = a \sin(\theta) + ia\cos(\theta). $$ Therefore on the event $B_{n, \gamma}$, we have that $$ \max_{j \in [1, n]} \max_{s, t \in [0, 1]} |Z^n_j(t) - Z^n_j(s)| \le 2\gamma. $$ Since $\mathbb{P}(B_{n, \gamma}) \to 1$ as $n \to \infty$ for any $\gamma > 0$ by Theorem \ref{T:sine-curves}, this proves the theorem. \end{proof} \end{section} \section{The geometric limit} \label{S:geom-limit} In this section, we use Theorem \ref{T:sine-curves} and Theorem \ref{T:weak-limit} to prove Theorem \ref{T:geom-limit}. Recall from Section \ref{S:intro} that $d_\infty(f, g)$ is the uniform norm between two $\mathbb{R}^n$-valued functions $f$ and $g$, where the pointwise distance is the $L^\infty$-distance. Recall also that $\close{\sigma}$ is the embedding of a sorting network $\sigma$ into the $(n-2)$-dimensional sphere $\mathbb{S}^{n-2} \subset \mathbb{R}^n$.
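Returning briefly to the trigonometric identity used in the proof of Theorem \ref{T:unif-rotation} above: one can confirm numerically that the product there does not depend on $t$ (an informal check with illustrative values of $a$ and $\theta$):

```python
import cmath, math

a, theta = 0.7, 1.1
# e^{i pi t} * [ a*sin(pi t + theta) + i*a*sin(pi t + pi/2 + theta) ] at several t
vals = [cmath.exp(1j * math.pi * t)
        * (a * math.sin(math.pi * t + theta)
           + 1j * a * math.sin(math.pi * t + math.pi / 2 + theta))
        for t in (0.0, 0.2, 0.5)]
print(vals)  # all three values coincide: the product is constant in t
```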
\medskip By Theorem \ref{T:sine-curves} and a change of variables, there exist random vectors $\mathbf{X^n} = (X^n_1, \mathellipsis X^n_n)$ and $\mathbf{V^n} = (V^n_1, \mathellipsis V^n_n)$ such that $d_\infty(f_n, \close{\sigma}^n)/n \to 0$ in probability, where $$ f_n(t) = \mathbf{X^n} \cos(\pi t) + \mathbf{V^n} \sin(\pi t) + \mathbf{c}, \qquad \text{with} \qquad \mathbf{c} = \left( \frac{n + 1}2, \mathellipsis , \frac{n + 1}2\right). $$ Moreover, we can assume that $X^n_i = i - (n + 1)/2$ and that $V^n_i = \sigma^n(i, N/2) - (n + 1)/2$, as these changes only shift the curve $\mathbf{X^n} \cos(\pi t) + \mathbf{V^n} \sin(\pi t)$ by $d_\infty$-distance $o(n)$ in probability. \medskip The point $\mathbf{c}$ is the center of $\mathbb{S}^{n-2}$. It remains to show that we can shift the curve $f_n$ by $d_\infty$-distance $o(n)$ to obtain a great circle in $\mathbb{S}^{n-2}$. For this we need the following lemma. \begin{lemma} \label{L:dot-prod-small} Let the vectors $\mathbf{X^n}$ and $\mathbf{V^n}$ be as above. Then $$ \lim_{n \to \infty} \frac{\left\langle\mathbf{X^n} , \mathbf{V^n} \right\rangle}{n^3} = 0 \qquad \text{in probability.} $$ \end{lemma} \begin{proof} Let $I_n$ be a uniform random variable on $\{1, \dots, n\}$, independent of $\mathbf{X^n}$ and $\mathbf{V^n}$, and define $$ (\tilde{X}^n, \tilde{V}^n) = \left(\frac{2X^n_{I_n}}n, \frac{2V^n_{I_n} }n\right). $$ We have that $$ (\tilde{X}^n, \tilde{V}^n) \stackrel{d}{=} (Y_n(0) - 1/n, Y_n(1/2) - 1/n), $$ where $Y_n$ is the trajectory random variable of $\sigma^n$. Therefore by Theorem \ref{T:weak-limit}, $$ (\tilde{X}^n, \tilde{V}^n) \stackrel{d}{\to} (X, V) \sim \mathfrak{Arch}. $$ By the bounded convergence theorem, this implies that the conditional expectation $\mathbb{E} [\tilde{X}^n\tilde{V}^n \mid \sigma^n] \to \mathbb{E} XV$ in probability. Observing that $n^3 \mathbb{E} [\tilde{X}^n\tilde{V}^n \mid \sigma^n] = 4\left\langle\mathbf{X^n} , \mathbf{V^n} \right\rangle$ and that $\mathbb{E} XV = 0$ completes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{T:geom-limit}.] Fix $n \in \mathbb{N}$. For ease of bounding errors, we assume that $n \ge 27$. Let $\sigma$ be a fixed $n$-element sorting network. Our goal is to perturb the vector $\mathbf{V^n}$ to a new vector $\mathbf{W^n}$ so that the path \begin{equation} \label{E:C-nep} C_{n}(t) := \mathbf{X^n} \cos(\pi t) + \mathbf{W^n} \sin(\pi t) + \mathbf{c} \end{equation} follows a great circle on $\mathbb{S}^{n-2}$, and so that $d_\infty(C_{n}, f_n)$ is small. In order to do this, we just need to find $\mathbf{W^n}$ such that $$ \sum_{i=1}^n W^n_i = 0, \;\left\langle\mathbf{X^n} , \mathbf{W^n} \right\rangle = 0, \quad \;\text{and}\; \quad ||\mathbf{W^n}||_2^2 = (n^3 - n)/12. $$ The first of these conditions guarantees that $C_n$ lies in the correct hyperplane $\mathbb{L}_n \subset \mathbb{R}^n$, and the second and third conditions guarantee that $C_n$ traces out a great circle on the sphere $\mathbb{S}^{n-2}$ within that hyperplane. We first perturb $\mathbf{V^n}$ to a vector $\mathbf{Z^n}$ satisfying the first two properties. Define the random variable $$ A_n = \frac{\left|\left\langle\mathbf{X^n} , \mathbf{V^n} \right\rangle\right|}{n^3}. $$ Without loss of generality, assume that $\left\langle\mathbf{X^n} , \mathbf{V^n} \right\rangle > 0$. \medskip Choose any $i \ge 3(n + 1)/4$. Let $k > 0$, and decrease the value of $V^n_i$ by $kn$ and increase the value of $V^n_{n + 1 - i}$ by $kn$. Call this new vector $\mathbf{V^n_1}$. Note that $X^n_i \ge n/4$, and that $X^n_i = -X^n_{n + 1 - i}$. Therefore $\left\langle\mathbf{X^n} , \mathbf{V^n_1} \right\rangle \le\left\langle\mathbf{X^n} , \mathbf{V^n} \right\rangle - kn^2/2$. \medskip By iterating the above procedure to repeatedly lower the dot product $\left\langle\mathbf{X^n} , \mathbf{V^n} \right\rangle$, we can obtain a vector $\mathbf{K^n} = (K_1 n, \mathellipsis, K_n n)$ with the following properties (here is where we require that $n \ge 27$).
\smallskip \begin{enumerate}[nosep,label=(\roman*)] \item For all $i$, $|K_i| \le 9A_n$, $- K_i = K_{n + 1 - i}$, and $K_i = 0$ if $i \in \left(\frac{n+1}4, \frac{3(n+1)}4\right)$. \item $\left\langle\mathbf{X^n} , \mathbf{V^n + K^n} \right\rangle = 0.$ \end{enumerate} \smallskip Let $\mathbf{Z^n} = \mathbf{V^n + K^n}$. Observe that $\sum_{i=1}^n Z^n_i = 0$ since both $\sum_{i=1}^n K_i = 0$ and $\sum_{i=1}^n V^n_i = 0$. The fact that $\sum_{i=1}^n V^n_i = 0$ follows since $\mathbf{V^n} + \mathbf{c}$ is a permutation of the vector $(1, 2, \mathellipsis n)$. For any $t$, we have that \begin{align*} ||f_n(t) - (\mathbf{X^n} \cos(\pi t) + \mathbf{Z^n} \sin(\pi t) + \mathbf{c})||_\infty = \max_{i \in [1, n]} |V^n_i \sin(\pi t) - Z^n_i \sin(\pi t)| \le 9 A_n n. \end{align*} We can now define $\mathbf{W^n} = M_n\mathbf{Z^n}$, where $M_n$ is a random constant chosen so that $||\mathbf{W^n}||^2_2 = (n^3 - n)/12$. Using the definition of $C_n$ in \eqref{E:C-nep}, for every $t \in [0, 1]$, we have that \begin{equation} \label{E:fC} \begin{split} ||f_n(t) - C_n(t)||_\infty &\le 9 A_n n + \max_{i \in [1, n]} |W^n_i \sin(\pi t) - Z^n_i \sin(\pi t)| \\ &< 9 A_n n + |M_n - 1|\left[\frac{n + 1}2 + 9 A_n n\right]. \end{split} \end{equation} In the last inequality, we have used that $|Z^n_i - V^n_i| \le 9 A_n n$. The inequality \eqref{E:fC} implies that \begin{equation} \label{E:d-inf-comp} \frac{d_\infty(C_n, \close{\sigma}^n)}n \le 9 A_n + |M_n - 1|\left[\frac{n + 1}{2n} + 9 A_n \right] + \frac{d_\infty(f_n, \close{\sigma}^n)}n. \end{equation} Now again using that $|Z^n_i - V^n_i| \le 9 A_n n$, we have that \begin{align*} \big|\;||\mathbf{Z^n}||^2_2 - ||\mathbf{V^n}||^2_2 \; \big| \le \sum_{i=1}^n |Z^n_i - V^n_i||Z^n_i + V^n_i| \le 9A_n n^2(n + 1 + 9 A_n n). \end{align*} Therefore since $||\mathbf{V^n}||^2_2 = (n^3 - n)/12$, we have that $$ M_n \in \left[\sqrt{1 - 217A_n}, \sqrt{1 + 217A_n} \right] $$ whenever $A_n < 1 /217$.
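As a small aside, the normalization $||\mathbf{V^n}||^2_2 = (n^3 - n)/12$ used above is just the identity $\sum_{i=1}^n (i - (n+1)/2)^2 = (n^3 - n)/12$; an informal check at the smallest admissible value $n = 27$:

```python
n = 27
# V^n + c is a permutation of (1, ..., n), so ||V^n||_2^2 = sum_i (i - (n+1)/2)^2
v = [i - (n + 1) / 2.0 for i in range(1, n + 1)]
print(sum(x * x for x in v), (n ** 3 - n) / 12.0)  # both equal 1638.0
```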
Lemma \ref{L:dot-prod-small} implies that $A_n \to 0$ as $n \to \infty$ in probability, and hence $M_n \to 1$ in probability. Therefore since $d_\infty(f_n, \close{\sigma}^n)/n \to 0$ in probability as $n \to \infty$, \eqref{E:d-inf-comp} implies that $d_\infty(C_n, \close{\sigma}^n)/n \to 0$ in probability as well. \end{proof} \section*{Acknowledgements} The author would like to thank B\'alint Vir\'ag for many fruitful discussions about the problem, and for many constructive comments about previous drafts. \bibliographystyle{alpha}
\section{Introduction} Let $\X_1 \ldot \X_m$ be independent random vectors with a continuous distribution function (df) $F$. We shall assume without loss of generality that the marginal df's of $F$ are the uniform df on $[0,1]$. Define the empirical df $F_m$ of $F$ by \BQNY F_m(\vk{x}):=\frac{1}{m}\sum_{i=1}^m \indi_{[\vk{0},\vk{x}]}(\X_i), \quad \x=(x_1 \ldot x_n) \in [0,1]^n, \EQNY where $\indi_A$ denotes the indicator of the set $A$ and $[\vk{0},\vk{x}]=\Pi_{i=1}^n[0,x_i]$. In the following, let $W$ denote the (unpinned) Brownian sheet determined by $F$, i.e., this is a centered Gaussian random field with covariance function \BQNY R(\vk{x},\vk{y})=\E{W(\vk{x})W(\vk{y})}=F(\vk{x}\wedge\vk{y}),\quad \vk{x},\vk{y}\in [0,1]^n. \EQNY Further $W_F$ is the pinned Brownian sheet, which is a centered Gaussian random field with covariance function \BQNY R_F(\vk{x},\vk{y})=\E{W_F(\vk{x})W_F(\vk{y})}=F(\vk{x}\wedge\vk{y})-F(\vk{x})F(\vk{y}), \quad \vk{x},\vk{y}\in [0,1]^n. \EQNY It is well known, see e.g., \cite{TBSEP1986,JEDKS1952,WCPES1966,MNMS1967}, that $\sqrt{m}(F_m-F)$ converges weakly to $W_F$ as $m\to \IF$ in the space of all bounded functions on $[0,1]^n$ under the topology of uniform convergence. Consequently, if \BQNY T_m^n(F):=\sup_{\vk{x}\in [0,1]^n}\sqrt{m}(F_m(\vk{x})-F(\vk{x})) \EQNY is the one-sided Kolmogorov-Smirnov (KS) statistic, then we have the convergence in distribution \BQNY T_m^n(F)\rw \sup_{\vk{x}\in [0,1]^n}W_F(\vk{x}),\ m\rw\IF. \EQNY For the two-sided KS statistic we have a similar approximation. For $W_F$, the pinned version of $W$ on $[0,1]^n$, we have the following representation \BQNY W_F(\vk{x})=W(\vk{x})-F(\vk{x})W(\vk{1}),\ \vk{x}\in [0,1]^n. \EQNY Similarly to the Brownian bridge, for $W_F$ we have another conditional representation (see \cite{TBSEP1986}), namely \BQNY W_F(\vk{x})=W(\vk{x})\Big| W(\vk{1})=0,\ \vk{x}\in [0,1]^n.
\EQNY In \cite{ChanLai2006}, as $u\rw\IF$, the asymptotics of \BQNY \pk{\sup_{\vk{x}\in [0,1]^n}W_F(\vk{x})>u}=\pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})-F(\vk{x})W(\vk{1})\RT)>u}=\pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big| W(\vk{1})=0\RT)>u} \EQNY are studied. In this paper, we consider a more general case, the asymptotics of \BQN\label{MMr} \pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big| W(\vk{1})=w\RT)>u} \EQN as $u\rw\IF$ for some constant $w\in\R$.\\ Moreover, we give some special cases which cannot be covered by the former scenarios.\\ This paper is organized as follows: In Section 2 we present our main results, and some examples are given in Section 3. The proofs, together with some auxiliary lemmas, are given in Sections 4 and 5, respectively. \COM{Further, if \BQNY \widetilde{T}_m^n=\widetilde{T}_m^n(F):=\sup_{\vk{x}\in I^n}\sqrt{m}\abs{F_m(\vk{x})-F(\vk{x})} \EQNY is the standard KS statistic, then \BQNY \widetilde{T}_m^n(F)\rw \widetilde{M}_F:=\sup_{\vk{x}\in I^n}\abs{W_F(\vk{x})},\ m\rw\IF. \EQNY } \def\seE{ \eE{\mathcal{D}}} \def\seEd{ \seE_\delta} \COM{\section{Two dimensional cases} \BT\label{Thm1} Suppose that $F(\vk x),x\in [0,1]^2$ is a continuous distribution function. If there \eE{a strictly monotone decreasing} function $h(\eE{x}) \ x\in[S,T]$ for some $0\leq S<T\leq 1$ satisfying \BQNY \seE:=\{ \vk x\in [0,1]^2: F(\vk x)=1/2\}=\{\vk{x}=(\eE{x},h(\eE{x})): \forall \eE{x}\in[S,T]\} \EQNY and \BQN \label{Fex} \lim_{\delta\rw 0}\sup_{\vk{z}\in \seE} \underset{\vk{x}\neq \vk{y}}{\sup_{\abs{\vk{x}-\vk{z}},\abs{\vk{y}-\vk{z}}\leq \delta}}\frac{\abs{ F(\vk{x})-F(\vk{y})-a_1(\vk{z})(x_1-y_1)-a_2(\vk{z})(x_2-y_2)}} {\abs{x_1-y_1}+\abs{x_2-y_2}}=0, \EQN with $a_i(\cdot), i=1,2$ two positive continuous functions, then for any $w\in\R$ \BQN\label{Thm1re1} \pk{\sup_{\vk{x}\in [0,1]^2}W(\vk{x})>u\Big|W(1,1)=w}\sim K u^2e^{-2u^2+2uw}, \quad K:= 8\int_{S}^{T} a_1(x,h(x))d x \in (0,\IF).
\EQN \ET \BK\label{Corr2} \eE{If the distribution function $F(\x),\x \in [0,1]^2$ has a bounded positive density function $f$, \eE{then there exists a strictly monotone decreasing function $h$ on $[S,T] \subset [0,1]$} such that \eqref{Thm1re1} holds with $a_1(x_1,x_2)=\int_{0}^{x_2}f(x_1,t)d t$}. \EK \begin{remarks}\label{Rem1} i) In view of \eqref{Fex} for any $x\in (S,T)$ \BQN \frac{d h(x)}{d x}=-\frac{a_1(x, h(x))}{a_2(x,h(x))},\ \eE{x\in( S,T)} \EQN \eE{implying} \BQNY \int_{S}^{T} a_1(x,h(x))d x= \int_{h(T)}^{h(S)} a_2(\overleftarrow{h}(x),x) d x, \EQNY where $\overleftarrow{h}$ is the inverse function of $h$.\\ ii) \cLa{ In \netheo{Thm1}, by the proof, we know that $[0,1]^2$ can be generalised to a manifold $\mathcal{E}\subseteq [0,1]^2$, i.e. \BQNY \seE:=\{ \vk x\in \mathcal{E} : F(\vk x)=1/2\}=\{\vk{x}=(\eE{x},h(\eE{x})): \forall \eE{x}\in[S,T]\}. \EQNY Then the results are \BQNY \pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u\Big|W(1,1)=w}\sim K u^2e^{-2u^2+2uw}, \quad K:= 8\int_{S}^{T} a_1(x,h(x))d x \in (0,\IF). \EQNY } iii)\eE{In \nekorr{Corr2} the assumption of the existence of the density $f$ on $[0,1]^2$ can be weekend to the existence of a positive bounded density on some rectangular set $\mathcal{K}$ that includes $\seE=\{\x\in [0,1]^2:F(\x)=\frac{1}{2}\}$}. \end{remarks}} \def\wx{\widetilde{\vk{x}}} \def\wy{\widetilde{\vk{y}}} \def\wz{\widetilde{\vk{z}}} \section{Main Results} Before stating our main results, we need to introduce some notation. For $\x,\y\in\R^n$, \BQNY &&\vk{x}<\vk{y}\Leftrightarrow x_i<y_i,\quad i=1\ldot n, \quad \vk{x}\wedge\vk{y}=(x_1\wedge y_1\ldot x_n\wedge y_n),\\ && \vk{x}\pm\vk{y}=(x_1\pm y_1\ldot x_n\pm y_n), \quad \abs{\vk{x}}=\sum_{i=1}^n\abs{x_i},\ \x*\y=(x_1\times y_1\ldot x_n\times y_n). 
\EQNY Further, for $\vk{x}<\vk{y}$ we write $[\vk{x},\vk{y}]$ for the set $\Pi_{i=1}^n[x_i,y_i]$ and use $\indi_{A}(\cdot)$ for the indicator function of the set $A\subset [0,1]^n$.\\ Let $\Psi(\cdot)$ denote the survival function of an $N(0,1)$ random variable. We write below $\lambda_k(A)$ for the Lebesgue measure on $\R^k$ of some measurable set $A \subset \R^k$. \BT\label{ThmM1} Let $F(\vk{x}), \vk x\in [0,1]^n, n\geq 2,$ be a continuous distribution function. \eE{Suppose that there exists a function $h(\wx),\ \wx=(x_1\ldot x_{n-1}) \in L\subseteq [0,1]^{n-1}$ with $\lambda_{n-1}(L)\neq 0$} such that \BQNY \seE:=\LT\{\vk{x}=(\wx,h(\wx)):\wx\in L\RT\}=\Bigl\{\vk{x}\in [0,1]^n:F(\vk{x})=\frac{1}{2}\Bigr\} \EQNY and $\lambda_{n-1}(\partial L)=0$ (i.e., $L$ is a Jordan measurable set). If $h$ is continuously differentiable in the interior of $L$ and further \BQN\label{FEXM} \lim_{\delta\rw 0}\sup_{\vk{z}\in \seE }\underset{\vk{x}\neq \vk{y}}{\sup_{\abs{\vk{x}-\vk{z}},\abs{\vk{y}-\vk{z}}\leq \delta}}\frac{\abs{F(\vk{x})-F(\vk{y})-\sum_{i=1}^{n}a_i(\vk{z})(x_i-y_i)}} {\sum_{i=1}^{n}\abs{x_i-y_i}}=0, \EQN where $a_i$'s are positive continuous functions, then for $w\in\R$ we have as $u\rw\IF$ \BQN\label{Mulre1} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}\sim Ku^{2(n-1)}e^{-2u^2+2uw}, \EQN where \BQN\label{KKt} \cLa{K:= 2^{3(n-1)}\int_{L} \LT(\Pi_{i=1}^{n-1}a_i(\wx, h(\wx))\RT)d \wx \in \eE{(0,\IF)}}. \EQN \ET \BK\label{Corr3} If the distribution function $F(\x),\x \in [0,1]^n$ has a bounded positive density function $f$, then there exists a continuously differentiable function $h$ defined on the interior of a Jordan measurable set $L \subset [0,1]^{n-1}$ with $\lambda_{n-1}(L)>0$ such that \eqref{Mulre1} holds with $a_i(\x)= \partial F(\x) / \partial x_i$. \EK Consider the random field $W^*(\vk x) = B_0(F(\vk x)),x\in [0,1]^n$ for a given distribution $F(\vk x), \vk x\in [0,1]^n$ with $B_0$ a standard Brownian bridge. 
We have that $W^*$ is a centered Gaussian random field with covariance function $F( \vk x) \wedge F(\vk y) -F( \vk x) F(\vk y)$. In the special case that $F(\vk{x})=F(x_1),\vk{x}\in [0,1]^n$ with \eE{$F$ a univariate distribution}, then $F( \vk x) \wedge F(\vk y)= F( \vk x \wedge \vk y)$, hence $W^*$ has the same law as $W_F$. This observation motivates the result of the next theorem, where essentially we use the fact that the tail asymptotics of the supremum of a Brownian bridge with trend are known, see \cite{GauTrend16}. \BT\label{Thm2} Let $F(\x),\x \in [0,1]^n$ be an $n$-dimensional distribution function. If there exists some $\delta\in (0,\eE{1/2})$ such that \BQN\label{condition1} F(\vk{x}\wedge \vk{y})=F(\vk{x})\wedge F(\vk{y}) \EQN holds for $\vk{x}, \vk{y}\in \seEd$ with \BQNY \seEd:=\LT\{\vk{x}\eE{\in [0,1]^n}: \frac{1}{2}-\delta \leq F(\vk{x})\leq \frac{1}{2}+\delta \RT\}, \EQNY then for any $w\inr \BQN\label{Thm2re1} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}\sim e^{-2u^2+2uw}. \EQN \ET \cLa{\begin{remark} From the proof we see that if \eqref{condition1} holds for any $\vk{x}, \vk{y}\in [0,1]^n$, then for any $w\in\R$ and $u>0$ we have \BQNY\label{remeq1} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}= e^{-2u^2+2uw}. \EQNY \end{remark}} \COM{If \eqref{condition1} holds for any $\vk{x}, \vk{y}\in [0,1]^n$, then for any $w\in\R$ and $u>0$ we have \BQN\label{remeq1} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}= e^{-2u^2+2uw}. \EQN and \BQN\label{remeq2} \pk{\sup_{\vk{x}\in [0,1]^n}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\sim e^{-2u^2+2uw}, \ u\rw\IF. \EQN \eE{(14) is exactly what we have above} } Next we give a theorem which further derives the approximation for the two-sided KS statistic. \BT\label{Corr4} Let $F(\vk{x}), \vk x\in [0,1]^n$ be a continuous distribution function.
If further there exists some $\delta\in (0,1/4)$ such that \BQN\label{Con2} \inf_{\x,\y\in\seEd} F(\x\wedge\y)>\delta \EQN holds for $\seEd:=\LT\{\vk{x}\eE{\in \mathcal{E}}: \frac{1}{2}-\delta \leq F(\vk{x})\leq \frac{1}{2}+\delta \RT\}$ where $\mathcal{E}\subseteq[0,1]^n$ satisfies that there exists $\x_0\in\mathcal{E} $ such that $F(\x_0)=\frac{1}{2}$, then we have for $w\in\R$ as $u\rw\IF$ \BQN\label{Mulre2} &&\pk{\sup_{\vk{x}\in \mathcal{E}}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\nonumber\\ &&\quad\quad\sim \pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u\Big|W(\vk{1})=w}+\pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u\Big|W(\vk{1})=-w}. \EQN \ET \begin{remarks}\label{Rem2} i) When $\mathcal{E}=[0,1]^n$, for $F(\x)$ satisfying \eqref{condition1} in \netheo{Thm2}, \eqref{Con2} always holds. In fact, for $\x,\y\in\seEd$ with $\delta\in(0,\frac{1}{4})$ \BQNY F(\x\wedge\y)= F(\x)\wedge F(\y)\geq \frac{1}{2}-\delta>\frac{1}{4}>\delta. \EQNY Further by \eqref{Thm2re1} and \eqref{Mulre2}, under the conditions of \netheo{Thm2}, we have as $u\rw\IF$ \BQN\label{Thm2re2} \pk{\sup_{\vk{x}\in [0,1]^n}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\sim c e^{-2u^2+2u\abs{w}}, \EQN where $c=1$ if $w\neq 0$ and $c=2$ if $w=0$.\\ ii) Under the conditions of \netheo{ThmM1}, if further \eqref{Con2} holds for $\mathcal{E}=[0,1]^n$, we have by \eqref{Mulre2} in \netheo{Corr4} \BQN\label{Mainre2} \pk{\sup_{\vk{x}\in [0,1]^n}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w} \sim cKu^{2(n-1)}e^{-2u^2+2uw}, \EQN where $K$ is the same as in \eqref{KKt} and $c=1$ if $w\neq 0$ and $c=2$ if $w=0$. \end{remarks} \section{Applications} In this section, we give the asymptotics of \eqref{MMr} for several special choices of $F$. First, we consider some two-dimensional cases.
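As a sanity check, the two-dimensional constants appearing in this section can be evaluated numerically from the general formula $K=2^{3(n-1)}\int_L\Pi_{i=1}^{n-1}a_i(\wx,h(\wx))\,d\wx$, which for $n=2$ reads $K=8\int_L a_1(x,h(x))\,dx$. The following Python sketch (ours, purely illustrative and not part of the formal development; all function names are ours) recovers the curve $\{F=1/2\}$ by bisection and the partial derivative $a_1$ by central finite differences for the two distributions treated next, $F(\x)=(x_1+x_2-1)^+$ and $F(\x)=x_1x_2/(1+(1-x_1)(1-x_2))$, whose projections $L$ are both $[1/2,1]$.

```python
import math

def solve_h(F, x, lo=0.0, hi=1.0):
    # Bisection: find y with F(x, y) = 1/2 (F(x, .) is nondecreasing).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if F(x, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def K_numeric(F, S, T, n_grid=2000, eps=1e-6):
    # Midpoint rule for K = 8 * int_S^T a_1(x, h(x)) dx, with a_1 = dF/dx_1
    # evaluated on the curve {F = 1/2} by a central finite difference.
    total = 0.0
    step = (T - S) / n_grid
    for i in range(n_grid):
        x = S + (i + 0.5) * step
        y = solve_h(F, x)
        a1 = (F(x + eps, y) - F(x - eps, y)) / (2.0 * eps)
        total += a1 * step
    return 8.0 * total

# Case i): F(x1,x2) = (x1 + x2 - 1)^+, curve x1 + x2 = 3/2, L = [1/2, 1].
K1 = K_numeric(lambda x, y: max(x + y - 1.0, 0.0), 0.5, 1.0)
# Case ii): F(x1,x2) = x1 x2 / (1 + (1-x1)(1-x2)), L = [1/2, 1].
K2 = K_numeric(lambda x, y: x * y / (1.0 + (1.0 - x) * (1.0 - y)), 0.5, 1.0)
print(K1, K2, 3 * math.log(3.0))
```

The computed values agree with the constants $K=4$ and $K=3\ln 3$ stated in the proposition below.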
\BPR\label{PROP00} For $i)-ii)$ below and $w\in \R$, we have that both \eqref{Mulre1} and \eqref{Mainre2} with $n=2$ hold.\\ i) If $F(\vk{x})=\LT(x_1+x_2-1\RT)^+,\ \vk x \in [0,1]^2$, $K=4$.\\ ii) If $F(\x)=\frac{x_1x_2}{1+(1-x_1)(1-x_2)},\ \vk x \in [0,1]^2$, $K=3\ln 3$. \EPR \begin{remarks} i) In \cite{TBSEP1986}[Theorem 3.1] the following upper bound for case $i)$ in \neprop{PROP00} is given \BQNY \pk{\sup_{\vk{x}\in [0,1]^2}\LT(W(\vk{x})\Big| W(\vk{1})=0\RT)>u}\leq \sum_{i=1}^{\IF}(8i^2u^2-2)e^{-2i^2u^2},\quad u>0. \EQNY Comparing with our exact result, we see that the prefactor of this upper bound is twice our constant $K=4$.\\ ii) In light of \cite{TBSEP1986}[Theorem 3.1] for any two-dimensional distribution $F$ on $[0,1]^2$ and any $u>0$ \BQNY \pk{\sup_{\vk{x}\in [0,1]^2}W_F(\vk{x})>u}\leq \pk{\sup_{\vk{x}\in [0,1]^2}W_G(\vk{x})>u}, \EQNY with $G(\vk{x})=(x_1+x_2-1)^{+},\vk x \in [0,1]^2$. Consequently, our result in \neprop{PROP00} gives an asymptotic upper bound for any $2$-dimensional distribution $F$ on $[0,1]^2$. \end{remarks} Next we consider several multi-dimensional cases. \BPR\label{PROP0} For $i)-ii)$ below both \eqref{Mulre1} and \eqref{Mainre2} hold for any $w\in \R$. Moreover we have: \\ i) If $F(\vk{x})=\Pi_{i=1}^nx_i, \vk x \in [0,1]^n$, then \BQNY K=2^{2(n-1)} \int_{L}(\Pi_{i=1}^{n-1}x_i)^{-1}d\wx, \EQNY and \BQNY L=\LT\{\wx\in [0,1]^{n-1}:\frac{1}{2}\leq \Pi_{i=1}^{n-1}x_i\leq 1\RT\}. \EQNY In particular, $K=4\ln 2$ if $n=2$ and $K=16 (\ln2)^2$ if $n=3$.\\ \COM{ii) If $F(\vk{x})=\frac{1}{n}\sum_{i=1}^nx_i, \vk x \in [0,1]^n$, with \BQNY \widetilde{K}=2^{3(n-1)} n^{1-n} \lambda_{n-1}(L),\quad L=\LT\{\wx\in [0,1]^{n-1}:\frac{n-2}{2}\leq \sum_{i=1}^{n-1}x_i\leq \frac{n}{2}\RT\}. \EQNY In particular, $K=4$ if n=2 and $K=\frac{16}{3} $ if $n=3$.
\\} ii) If $$F(\x)=d\min_{1\leq i\leq n}x_i+(1-d)\Pi_{i=1}^{n}x_i,\ d\in(0,1),$$ then \BQNY K= n(4d)^{(n-1)} \int_{L} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx, \EQNY and \BQNY L=\LT\{\wx\in[0,1]^{n-1}:\frac{1}{2d+2(1-d)\Pi_{i=1}^{n-1}x_i}\leq\min_{1\leq i\leq n-1}x_i\RT\}. \EQNY In particular, when $n=2$, we have \BQNY K=\frac{8d}{1-d}\ln \LT(\sqrt{1+\frac{1}{(1-d)^2}}-\frac{d}{1-d}\RT). \EQNY \EPR \begin{remark} i) \eE{The result of \neprop{PROP0}, $i)$ for $n=2$ and $w=0$ agrees with the claim of \cite{LDMSRF1986}[Theorem 1]}. \COM{\eE{$ii)$ \cite{TBBKI1982} derives the upper and lower bounds for the case $n=2$ in \neprop{PROP0}, $i)$ Comparing with our results we see that }} \cLa{ii) In \cite{TBSEP1986}[Theorem 2.1], a lower bound for the $n$-dimensional case is given by \BQNY \pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big| W(\vk{1})=w\RT)>u}\geq e^{-2u^2+2uw} \sum_{i=0}^{n-1}\frac{(2u^2-2uw)^i}{i !},\quad u>w. \EQNY Since $$ \sum_{i=0}^{n-1}\frac{(2u^2-2uw)^i}{i !} \sim \frac{(2u^2)^{n-1}}{(n-1)!}, \quad u\to \IF$$ \eE{comparing with \neprop{PROP0} we obtain a lower bound for the constant $K$. In the particular case $n=3$ we have $16 (\ln 2)^2 \ge 2$.} \\ } \end{remark} \COM{ \BPR\label{KS1} When $n\geq 2$, we have for $w\in \R$ as $u\rw\IF$ \eqref{Mulre1} and \eqref{Mulre2} hold with $K=2^{3(n-1)}\Hn \int_{L}n^{1-n}d\wx$ and $L=\LT\{\wx\in I^{n-1}:\frac{n-2}{2}\leq \sum_{i=1}^{n-1}x_i\leq \frac{n}{2}\RT\}$. In particular, $K=4\ln 2$ if n=2 and $K=16\Hn (\ln2)^2$ if $n=3$. \EPR Next we consider the distribution of $M_F$ when $F(\vk{x})=\frac{\sum_{i=1}^nx_i}{n},\ \vk{x}\in I^n$. Let $W_2(\vk{x})$ denote the (unpinned) Brownian sheet on $I^n$, i.e. the zero means Gaussian fields with covariance function \BQNY \E{W(\vk{x})W(\vk{y})}= \frac{1}{n}\sum_{i=1}^n(x_i\wedge y_i),\quad \vk{x},\vk{y}\in I^n, \EQNY (which means $W(\vk{x})=\frac{1}{\sqrt{n}}\sum_{i=1}^nB^i(x_i)$ where $B^i(x_i)$ are independent Brownian motions.)
and write $W_F$ for the pinned version of $W$ on $I^n$. Then a version of $W_F$ can be obtained from $W$ by the correspondence \BQNY W_F(\vk{x})=W(\vk{x})-\LT(\frac{1}{n}\sum_{i=1}^nx_i\RT)W(\vk{1}),\ \ \vk{x}\in I^n. \EQNY \BPR\label{KS2} When $n\geq 2$, we have for $w\in \R$ as $u\rw\IF$ \eqref{Mulre1} and \eqref{Mulre2} hold with $K=2^{3(n-1)}\Hn \int_{L}n^{1-n}d\wx$ and $L=\LT\{\wx\in I^{n-1}:\frac{n-2}{2}\leq \sum_{i=1}^{n-1}x_i\leq \frac{n}{2}\RT\}$. In particular, $K=4$ if n=2 and $K=\frac{16}{3}\Hn $ if $n=3$. \EPR The lower copula is denoted by $F(x)$, i.e., \BQN\label{G} F(\vk{x})=\LT(\sum_{i=1}^nx_i-1+n\RT)^+, \quad \vk{x}\in I^n. \EQN \cite{TBSEP1986} shows for any two-dimensional d.f. $G$ on $I^2$ and for any $u>0$, \BQNY \pk{\sup_{\vk{x}\in I^2}W_G(\vk{x})>u}\leq \pk{\sup_{\vk{x}\in I^2}W_F(\vk{x})>u}. \EQNY Let $W(\vk{x})$ denote the (unpinned) Brownian sheet on $I^n$, i.e. the zero means Gaussian fields with covariance function \BQNY \E{W(\vk{x})W(\vk{y})}= \LT(\sum_{i=1}^nx_i\wedge y_i-1+n\RT)^+,\quad \vk{x},\vk{y}\in I^n, \EQNY and write $W_F$ for the pinned version of $W$ on $I^n$. Then a version of $W_F$ can be obtained from $W$ by the correspondence \BQNY W_F(\vk{x})=W(\vk{x})-\LT(\sum_{i=1}^nx_i-1+n\RT)^+W(\vk{1}),\ \ \vk{x}\in I^n. \EQNY \BPR\label{KS3} When $n\geq 2$, we have for $w\in \R$ as $u\rw\IF$ \eqref{Mulre1} and \eqref{Mulre2} hold with $K=2^{3(n-1)}\Hn \int_{L}d\wx$ and $L=\LT\{\wx\in I^{n-1}:n-\frac{3}{2}\leq \sum_{i=1}^{n-1}x_i\leq n-\frac{1}{2}\RT\}$. Especially, $K=4$ if n=2 and $K=8\Hn $ if $n=3$. \EPR Finially we consider the distribution of $M_F$ when $F(\vk{x})=\min_{1\leq i\leq n}x_i,\ \vk{x}\in I^n$. Let $W(\vk{x})$ denote the (unpinned) Brownian sheet on $I^n$, i.e. the zero means Gaussian fields with covariance function \BQNY \E{W(\vk{x})W(\vk{y})}= \min_{1\leq i\leq n}(x_i\wedge y_i),\quad \vk{x},\vk{y}\in I^n. \EQNY and write $W_F$ for the pinned version of $W$ on $I^n$. 
Then a version of $W_F$ can be obtained from $W$ by the correspondence \BQNY W_F(\vk{x})=W(\vk{x})-\min_{1\leq i\leq n}(x_i\wedge y_i)W(\vk{1}),\ \ \vk{x}\in I^n. \EQNY \BPR\label{KS4} We have that for $w\in\R$, as $u\rw\IF$ \eqref{Thm2re1} and \eqref{Thm2re2} hold. \EPR} The next proposition provides a case which satisfies \eqref{condition1} in \netheo{Thm2}. \BPR\label{PROP1} For $d\in (0,1)$ and \BQNY F(\x)= \left\{ \begin{array}{ll} \frac{1}{2d}\min_{1\leq i\leq n}x_i,&\ \text{if}\ \min_{1\leq i\leq n}x_i\leq d,\\ \frac{1}{2(1-d)}\min_{1\leq i\leq n}x_i+\frac{1-2d}{2(1-d)},&\ \text{if}\ \min_{1\leq i\leq n}x_i\geq d, \end{array} \right. \EQNY we have that both \eqref{Thm2re1} and \eqref{Thm2re2} hold for any $w\in \R$, i.e. \BQNY \pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}= e^{-2u^2+2uw}, \ u>0, \EQNY and \BQNY \pk{\sup_{\vk{x}\in [0,1]^n}\LT(\abs{W(\vk{x})}\Big|W(\vk{1})=w\RT)>u}\sim \cLa{c}e^{-2u^2+2u\abs{w}}, \ u\rw\IF, \EQNY with $c=1$ if $w\neq0$ and $c=2$ if $w=0$. \COM{iii) For $d\in(0,1)$ and $$F(\x)=d\min_{1\leq i\leq n}x_i+\frac{(1-d)}{n}\sum_{i=1}^{n}x_i,$$ we have that \eqref{Mulre1} holds for any $w\in \R$ with \BQNY \widetilde{K}= \frac{(8(1-d))^{n-1}}{n^{n-2}}\Hn \lambda_{n-1}(L), \text{and}\ L=\LT\{\wx\in[0,1]^{n-1}:\frac{n-2(1-d)\sum_{i=1}^{n-1}x_i}{2+2(n-1)d}\leq\min_{1\leq i\leq n-1}x_i\RT\}. \EQNY Specially, when $n=2$, we have that \eqref{Thm1re1} holds for any $w\in \R$ with \BQNY K=4(1-d). \EQNY} \EPR \begin{remark} In \neprop{PROP1}, if $d=\frac{1}{2}$, then $F(\x)=\min_{1\leq i\leq n}x_i$, i.e., the upper copula.
\end{remark} \section{Proofs} \prooftheo{ThmM1} Hereafter, we denote by $\mathbb{Q}_i,\ i\in\mathbb{N}$ some positive constants that may differ from line to line.\\ By the monotonicity and continuity of the distribution function $F(\vk{x}), \vk{x}\in [0,1]^n$, $h(\wx)$ is a continuous function over $L\subset[0,1]^{n-1}$ which is strictly decreasing along every line parallel to the axes (and so on all increasing paths).\\ Then for $\wx\in L$ and $x_n=h(\wx)$ we set \BQNY \tla_i(\wx):=a_i(\wx,h(\wx)),\ i=1\ldot n, \EQNY and \BQNY \underline{a}_i:=\inf_{\x\in \seE}a_i(\x)>0,\ \overline{a}_i:=\sup_{\vk{x}\in \seE}a_i(\vk{x})<\IF, i=1\ldot n, \EQNY where we use the fact that the $a_i(\x)$'s are continuous and positive functions. We have for $u>w$ \begin{align*} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w} &=\pk{\sup_{\vk{x}\in [0,1]^n}(W_F(\vk{x})+F(\vk{x})w)>u}, \end{align*} where $W_F(\vk{x}):=W(\vk{x})-F(\vk{x})W(\vk{1})$. The variance function of $W_F(\vk{x})$ is \BQNY \sigma^2_F(\vk{x}):=\Var(W_F(\vk{x}))=F(\vk{x})(1-F(\vk{x})),\quad \vk{x}\in [0,1]^n, \EQNY which attains its maximum equal to $\frac{1}{4}$ over $[0,1]^n$ at $\vk{z}$ with $F(\vk{z})=\frac{1}{2}$, i.e., at $\seE$, and as $F(\x)\rw \frac{1}{2}$ \BQN\label{var11} \frac{1}{2}-\sigma_F(\x)\sim\LT(F(\x)-\frac{1}{2}\RT)^2. \EQN \COM{Further, by \eqref{FEXM1}, we have that \BQN\label{hhB} \lim_{\delta\rw 0}\sup_{\z\in\seE}\sup_{\abs{\x-\z}<\delta}\abs{\frac{\frac{1}{2}-\sigma_F(\vk{x})} {\LT(\sum_{i=1}^na_i(\vk{z})(x_i-z_i)\RT)^2}-1}=0. \EQN} By \eqref{FEXM}, there exists $\vn_1\in\LT(0,\min_{1\leq i\leq n}\underline{a}_i\RT)$ such that for any $\vk{z}\in \seE$, if $\abs{\x-\z},\abs{\y-\z}<\delta$, then \BQN F(\x)-F(\y)\leq \sum_{i=1}^{n}a_i(\z)(x_i-y_i)+\vn_1\sum_{i=1}^n\abs{x_i-y_i}\leq \Q_1\abs{\x-\y},\label{ThmMbo1}\\ F(\x)-F(\y)\geq \sum_{i=1}^{n}a_i(\z)(x_i-y_i)-\vn_1\sum_{i=1}^n\abs{x_i-y_i}\geq \Q_2\abs{\x-\y}\label{ThmMbo2}.
\EQN Thus $F(\x)$ is strictly increasing along every line parallel to the axes in \BQN\label{ED} E(\delta):=\{\x\in [0,1]^n:\abs{\x-\z}\leq \delta, \z\in \seE\}\supseteq \seE, \EQN and for any $\delta_0\in (0,1/2)$ we can take $\delta\in(0,\frac{1}{2})$ small enough such that \BQN\label{Fb2} \sup_{\x\in E(\delta)}\abs{F(\vk{x})-1/2}\leq \delta_0, \EQN with $\delta_0\rw 0$ as $\delta\rw 0$ and \BQNY \LT(\frac{1}{2}-\delta_0\RT)^2\leq \sigma^2_F(\vk{x})\leq 1,\ \x\in E(\delta). \EQNY For the correlation function $r_F(\vk{x},\vk{y}):=Cov\LT(\frac{W_F(\vk{x})}{\sigma_F(\vk{x})}, \frac{W_F(\vk{y})}{\sigma_F(\vk{y})}\RT)$, we have for any $\z\in\seE$, if $\abs{\x-\z},\abs{\y-\z}\leq\delta$ \BQN 1-r_F(\vk{x},\vk{y}) &=&1-\frac{\E{W_F(\vk{x})W_F(\vk{y})}}{\sigma_F(\vk{x})\sigma_F(\vk{y})}\label{rrr1}\\ &=&\frac{\E{\LT(W_F(\vk{x})-W_F(\vk{y})\RT)^2} -\LT(\sigma_F(\vk{x})-\sigma_F(\vk{y})\RT)^2} {2\sigma_F(\vk{x})\sigma_F(\vk{y})}.\nonumber \EQN By \eqref{Fb2}, we have \BQNY \LT(\sigma_F(\vk{x})-\sigma_F(\vk{y})\RT)^2 &=&\frac{\LT(\sigma^2_F(\vk{x})-\sigma^2_F(\vk{y})\RT)^2} {\LT(\sigma_F(\vk{x})+\sigma_F(\vk{y})\RT)^2}\\ &\leq&\frac{1}{(1-2\delta_0)^2}\LT((F(\vk{x})-F(\vk{y}))-(F^2(\vk{x})-F^2(\vk{y}))\RT)^2\\ &=&\frac{1}{(1-2\delta_0)^2}(F(\vk{x})-F(\vk{y}))^2(1-(F(\vk{x})+F(\vk{y})))^2\\ &\leq&\frac{4\delta_0^2\Q_2^2}{(1-2\delta_0)^2}\abs{\x-\y}^2, \EQNY and \BQNY \E{\LT(W_F(\vk{x})-W_F(\vk{y})\RT)^2} &=&\E{\LT(W(\vk{x})-W(\vk{y})-(F(\vk{x})-F(\vk{y}))W(\vk{1})\RT)^2}\\ &=&F(\vk{x})+F(\vk{y})-2F(\vk{x}\wedge \vk{y})-(F(\vk{x})-F(\vk{y}))^2. \EQNY By \eqref{FEXM}, there exists $\vn_1\in(0,\min_{1\leq i\leq n}\underline{a}_i)$ such that for any $\vk{z}\in \seE$, if $\abs{\x-\z},\abs{\y-\z}<\delta$ \BQNY F(\vk{x})+F(\vk{y})-2F(\vk{x}\wedge \vk{y})\leq \sum_{i=1}^{n}(\tla_i(\wz)+\vn_1)\abs{x_i-y_i},\\ F(\vk{x})+F(\vk{y})-2F(\vk{x}\wedge \vk{y})\geq \sum_{i=1}^{n}(\tla_i(\wz)-\vn_1)\abs{x_i-y_i}.
\EQNY \eE{Consequently, for any $\x,\y\in E(\delta) $} \BQNY 1-r_F(\x,\y)&\leq& \frac{2}{(1-2\delta_0)^2}\LT(F(\vk{x})+F(\vk{y})-2F(\vk{x}\wedge \vk{y})\RT)\\ &\leq& \frac{2}{(1-2\delta_0)^2}\LT(\sum_{i=1}^n(\tla_i(\wz)+\vn_1)\abs{x_i-y_i} \RT)\\ &\leq&2(1+\vn_2)\LT(\sum_{i=1}^n(\tla_i(\wz)+\vn_1)\abs{x_i-y_i} \RT) \EQNY and \BQNY 1-r_F(\x,\y)&\geq& \frac{2}{(1+2\delta_0)^2}\E{\LT(W_F(\vk{x})-W_F(\vk{y})\RT)^2} -\frac{4\delta_0^2\Q_2^2}{(1-2\delta_0)^2}\abs{\x-\y}^2\\ &\geq& \frac{2}{(1+2\delta_0)^2}\LT(F(\vk{x})+F(\vk{y})-2F(\vk{x}\wedge \vk{y})-\Q_2^2\abs{\x-\y}^2\RT) -\frac{4\delta_0^2\Q_2^2}{(1-2\delta_0)^2}\abs{\x-\y}^2\\ &\geq& \frac{2}{(1+2\delta_0)^2}\LT(\sum_{i=1}^n\tla_i(\wz)\abs{x_i-y_i}\RT) -\LT(\Q_2^2+\frac{4\delta_0^2\Q_2^2}{(1-2\delta_0)^2}\RT)\abs{\x-\y}^2\\ &\geq& 2(1-\vn_4)\LT(\sum_{i=1}^n\tla_i(\wz)\abs{x_i-y_i}\RT), \EQNY where we use the fact that for $\x,\y\in E(\delta)$ \BQNY \frac{2}{(1+2\delta_0)^2}\leq \frac{1}{2\sigma_F(\vk{x})\sigma_F(\vk{y})}\leq\frac{2}{(1-2\delta_0)^2}. \EQNY Hence \BQN\label{r11} \lim_{\delta\rw 0}\sup_{\vk{z}\in \seE}\underset{\x\neq\y}{\sup_{\abs{\x-\z},\abs{\y-\z}<\delta}} \abs{\frac{1-r_F(\vk{x},\vk{y})}{2\sum_{i=1}^na_i(\vk{z})\abs{x_i-y_i}}-1}=0. \EQN Since for $x,y\in(0,1)$ \BQNY \sqrt{(x-x^2)(y-y^2)}\geq (x\wedge y-xy), \EQNY where the \eE{equality} holds only when $x=y$, then for $\vk{x}, \vk{y}\in E(\delta)$ \BQN\label{eqsig} \sigma_F(\vk{x})\sigma_F(\vk{y})&\geq& F(\vk{x})\wedge F(\vk{y})-F(\vk{x})F(\vk{y})\nonumber\\ &\geq&F(\vk{x}\wedge \vk{y})-F(\vk{x})F(\vk{y})\nonumber\\ &=&\E{W_F(\vk{x})W_F(\vk{y})}, \EQN and for $\vk{x}\neq \vk{y}$, $\vk{x},\vk{y}\in E(\delta)$, if $F(\vk{x})=F(\vk{y})$, then $F(\vk{x}\wedge \vk{y})<F(\vk{x})\wedge F(\vk{y})$, \eE{since $F(\vk{x})$ is strictly increasing along every line parallel to the axes in $E(\delta)$}.
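For concreteness, the local expansion \eqref{r11} can be illustrated numerically for the product copula $F(x_1,x_2)=x_1x_2$ ($n=2$); near a point $\z$ on $\{F=1/2\}$ one has $1-r_F(\x,\y)\approx 2\sum_i a_i(\z)\abs{x_i-y_i}$ with $a_1(\z)=z_2$, $a_2(\z)=z_1$. The sketch below (ours, not used in the proof; the chosen point and perturbations are arbitrary) compares the exact quantity with the expansion:

```python
import math

# Product copula F(x1, x2) = x1 * x2 and the standard deviation of W_F.
F = lambda x1, x2: x1 * x2

def sigma(x1, x2):
    return math.sqrt(F(x1, x2) * (1.0 - F(x1, x2)))

s = 0.8
z = (s, 1.0 / (2.0 * s))              # F(z) = 1/2, so z lies on the curve
h = k = 1e-3
x = (z[0] + h, z[1])                  # small perturbations of z
y = (z[0], z[1] + k)
# E[W_F(x) W_F(y)] = F(x ^ y) - F(x) F(y); here x ^ y = z componentwise.
cov = F(z[0], z[1]) - F(*x) * F(*y)
r = cov / (sigma(*x) * sigma(*y))
# Predicted first-order term 2*(a_1(z)|x1-y1| + a_2(z)|x2-y2|).
approx = 2.0 * (z[1] * h + z[0] * k)
print(1.0 - r, approx)
```

The two printed numbers agree to first order in the perturbation size, as \eqref{r11} asserts.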
Then in \eqref{eqsig} at least one of the two inequalities is strict, implying that \BQN\label{Thm31ru} r_F(\vk{x},\vk{y})< 1 \EQN holds for $\vk{x}, \vk{y}\in E(\delta)$ and $\vk{x}\neq \vk{y}$.\\ \COM{By \eqref{ThmMbo1} and \eqref{ThmMbo2}, for $\wx,\ \wy\in L$ with $x_i=y_i, i=2\ldot n-1$, \BQNY &&0=F(\wx,h(\wx))-F(\wy,h(\wy))\leq \tla_1(\wx)(x_1-y_1)+\tla_n(\wx)(h(\wx)-h(\wy))+\vn_1\abs{x_1-y_1}+\vn\abs{h(\wx)-h(\wy)},\\ &&0=F(\wx,h(\wx))-F(\wy,h(\wy))\geq \tla_1(\wx)(x_1-y_1)+\tla_n(\wx)(h(\wx)-h(\wy))-\vn_1\abs{x_1-y_1}-\vn\abs{h(\wx)-h(\wy)}, \EQNY i.e. \BQNY &&-\tla_n(\wx)(h(\wx)-h(\wy))-\vn\abs{h(\wx)-h(\wy)}\leq\tla_1(\wx)(x_1-y_1)+\vn_1\abs{x_1-y_1},\\ &&-\tla_n(\wx)(h(\wx)-h(\wy))+\vn\abs{h(\wx)-h(\wy)}\geq\tla_1(\wx)(x_1-y_1)-\vn_1\abs{x_1-y_1}. \EQNY Then if we set $x_1>y_1$, the right hand side of the last inequality is great than zero and $h(\wx)<h(\wy)$ and \BQN\label{hhB} \frac{\tla_1(\wx)-\vn_1}{\tla_n(\wx)+\vn_1}\abs{x_1-y_1} \leq \abs{h(\wx)-h(\wy)}\leq \frac{\tla_1(\wx)+\vn_1}{\tla_n(\wx)-\vn_1}\abs{x_1-y_1}. \EQN Thus $h(\wx), \wx\in L^o$ with $L^o$ the interior of $L$ is strictly decreasing continuous differentiable about $x_1$ on $L^o$ with \BQNY \frac{\partial h(\wx)}{\partial x_1}=-\frac{\tla_1(\wx)}{\tla_n(\wx)}. \EQNY Further, $h(\wx), \wx\in L^o$ is strictly decreasing continuous differentiable on $L^o$ with \BQN \frac{\partial h(\wx)}{\partial x_i}=-\frac{\tla_i(\wx)}{\tla_n(\wx)},\ i=1\ldot n-1. \EQN} Set for $\x\in [0,1]^n$ and $A,\ B\subseteq [0,1]^n$ $$\rho(\x,A)=\inf_{\y\in A}\abs{\x-\y},\ \rho(A, B)=\inf_{\x\in A, \y\in B}\abs{\x-\y} $$ and \BQNY E_0(\delta)=\{\vk{x}:\rho(\wx, L)\leq \delta, \abs{x_n-q(\wx)}\leq \delta\} \EQNY where \BQNY q(\wx)=\LT\{ \begin{array}{ll} h(\wx),\ \text{if} \ \wx\in L,\\ h(\wy)+\sum_{i=1}^{n-1}\tla(\wy)(x_i-y_i) , \ \text{if} \ \wx\notin L\ \text{and}\ \wy\in\{\wz:\abs{\wz-\wx}=\rho(\wx,L)\}. \end{array} \RT. \EQNY Then $E_0(\delta)\supset \seE$.
Since $\sigma^2_F(\vk{x})$ is a continuous function, we have \BQNY \sigma^2_m=\sup_{\vk{x}\in [0,1]^n\setminus E_0(\delta)}\sigma^2_F(\vk{x})< \frac{1}{4}. \EQNY By the Borell-TIS inequality (see \cite{AdlerTaylor}), as $u\rw\IF$ \BQN\label{Thm31p2} \pk{\sup_{\vk{x}\in [0,1]^n\setminus E_0(\delta) }\LT(W_F(\vk{x})+F(\vk{x})w\RT)>u} &\leq&\pk{\sup_{\vk{x}\in [0,1]^n\setminus E_0(\delta)}W_F(\vk{x})>u-\abs{w}}\nonumber\\ &\leq&\exp\LT(-\frac{(u-\abs{w}-\mathbb{Q}_2)^2}{2\sigma^2_m}\RT)\nonumber\\ &=&o\LT(u^{2(n-1)}e^{-2u^2+2uw}\RT), \EQN where $\mathbb{Q}_2=\E{\sup_{\x\in[0,1]^n}W_F(\x)}<\IF$.\\ We have \begin{align} \pk{\sup_{\vk{x}\in [0,1]^n}(W_F(\vk{x})+F(\vk{x})w)>u} &\leq \pk{\sup_{\vk{x}\in [0,1]^n\setminus E_0(\delta)}(W_F(\vk{x})+F(\vk{x})w)>u} +\Pi_1(u),\label{Thm31ub1} \end{align} and \begin{align} \pk{\sup_{\vk{x}\in [0,1]^n}(W_F(\vk{x})+F(\vk{x})w)>u} &\geq\Pi_1(u),\label{Thm31lb2} \end{align} where \BQNY \Pi_1(u)=\pk{\sup_{\vk{x}\in E_0(\delta)}(W_F(\vk{x})+F(\vk{x})w)>u}. \EQNY Next we consider $\Pi_1(u)$. We have \BQNY \Pi_1(u)=\pk{\sup_{\vk{x}\in E_0(\delta)}(X(\vk{x})+g(\vk{x}))>\mu} \EQNY where $\mu=2u-w,\ \ g(\vk{x})=w(2F(\x)-1)$ and $X(\x)=2W_F(\x).$ \def\wz{\widetilde{\vk{z}}} Notice that the variance function $\sigma^2_X(\vk{x})$ of $X(\vk{x})$ attains its maximum equal to $1$ at $\seE$. Taking $\y=\z=(\wx,h(\wx))$ in \eqref{FEXM} for $\x\in E_0(\delta)$ leads to \BQN\label{con1} \lim_{\delta\rw 0}\sup_{\wx\in L }\underset{x_n\neq h(\wx)}{\sup_{\abs{x_n-h(\wx)}\leq \delta}}\frac{\abs{F(\vk{x})-1/2-\tla_n(\wx)\LT(x_n-h(\wx)\RT)}} {\abs{x_n-h(\wx)}}=0, \EQN which combined with \eqref{var11} implies \BQN\label{Thm31sig} \lim_{\delta\rw 0}\sup_{\wx\in L}\underset{x_n\neq h(\wx)}{\sup_{\abs{x_n-h(\wx)}<\delta}}\abs{\frac{1-\sigma_X(\vk{x})} {2\tla_n^2(\wx)(x_n-h(\wx))^2}-1}=0 \EQN and \BQN\label{Thm31gg} \lim_{\delta\rw0}\sup_{\wx\in L}\underset{x_n\neq h(\wx)}{\sup_{\abs{x_n-h(\wx)}<\delta}} \abs{\frac{g(\vk{x})}{2w\tla_n(\wx)(x_n-h(\wx))}-1}=0.
\EQN Further, taking $\z=\x$ in \eqref{FEXM} for $\x,\y\in \seE$ leads to \BQN\label{con1} \lim_{\delta\rw 0}\sup_{\x,\y\in \seE}\underset{\x\neq\y}{\sup_{\abs{\x-\y}\leq \delta}}\frac{\abs{\sum_{i=1}^{n}\tla_i(\wx)\LT(x_i-y_i\RT)}} {\sum_{i=1}^n\abs{x_i-y_i}}=0, \EQN which implies that for any small $\delta\in(0,1)$ there exists a constant $\mathbb{Q}_3$ such that \BQNY \sup_{\wx,\wy\in L}\underset{\wx\neq\wy}{\sup_{\abs{\wx-\wy}\leq \delta}}\frac{\abs{h(\wx)-h(\wy)}}{\sum_{i=1}^{n-1} \abs{x_i-y_i}} \leq \mathbb{Q}_3. \EQNY Thus for small $\delta\in(0,1)$ there exists a constant $ \mathbb{Q}_4$ such that \BQN \sup_{\x,\y\in E_0(\delta)}\underset{\x\neq\y}{\sup_{\abs{\x-\y}\leq \delta}}\frac{\abs{(x_n-h(\wx))-(y_n-h(\wy))}}{\sum_{i=1}^{n} \abs{x_i-y_i}} \leq \mathbb{Q}_4. \EQN \COM{Recall that \BQNY \sup_{\x\in E_0(\delta)}\abs{F(\x)-1/2}\leq \delta_0, \EQNY which together with \eqref{ThmMbo1} and \eqref{con1} derive that for $\x,\y\in E_0(\delta)$ \BQN \abs{\frac{1}{\sigma_X(\x)}-\frac{1}{\sigma_X(\y)}} &=&\frac{1}{2}\LT(\frac{\sqrt{F(\y)(1-F(\y))}-\sqrt{F(\x)(1-F(\x))}} {\sqrt{F(\x)F(\y)(1-F(\x))(1-F(\y))}}\RT)\nonumber\\ &\leq&\frac{1}{2}\LT(\frac{F(\y)(1-F(\y))-F(\x)(1-F(\x))} {2(1-2\delta_0)^3}\RT)\nonumber\\ &\leq&\frac{1}{2}\LT(\frac{(F(\y)-F(\x))(\abs{1/2-F(\x)}+\abs{1/2-F(\y)})} {2(1-2\delta_0)^3}\RT)\nonumber\\ &\leq&\mathbb{Q}_3 \LT(\abs{x_n-h(\wx)}+\abs{y_n-h(\wy)}\RT)\LT(\sum_{i=1}^n\abs{x_i-y_i}\RT), \EQN and \BQN \abs{g(\x)-g(\y)}=2\abs{w}\abs{F(\x)-F(\y)}\leq \mathbb{Q}_4\LT(\sum_{i=1}^n\abs{x_i-y_i}\RT).
\EQN} For the correlation function $r_X(\x,\y)$ of $X(\x)$, by \eqref{r11} \BQN\label{Thm31r2} \lim_{\delta\rw 0}\sup_{\z\in \seE}\underset{\x\neq\y}{\sup_{\abs{\x-\z},\abs{\y-\z}<\delta}} \abs{\frac{1-r_X(\vk{x},\vk{y})}{2\sum_{i=1}^{n}a_i(\z)\abs{x_i-y_i} }-1}=0 \EQN and further for any $\abs{\x-\z},\abs{\y-\z}<\delta$ with $\z\in \seE$ \BQN\label{Thm31r3} 2(1-\vn)\sum_{i=1}^{n}a_i(\z)\abs{x_i-y_i}\leq 1-r_X(\vk{x},\vk{y})\leq 2(1+\vn)\sum_{i=1}^{n}a_i(\z)\abs{x_i-y_i}. \EQN \def\wk{\vk{k}} \def\wl{\vk{l}} Set for some $\delta\in(0,\frac{1}{2})$ \BQNY &&\wk=(k_1\ldot k_{n-1})\in \N^{n-1},\quad \wl=(l_1\ldot l_{n-1})\in \N^{n-1},\quad J_{\wk}=\Pi_{j=1}^{n-1}[k_j\delta,(k_j+1)\delta],\\ &&\mathcal{L}_1=\LT\{\wk:\rho(J_{\wk}, L)\leq \frac{\delta}{3}\RT\},\quad \mathcal{L}_2=\LT\{\wk:J_{\wk}\subset L\RT\},\quad D_{\wk}=\{\vk{x}: \abs{x_n-h(\wx)}\leq \delta,\ \wx\in J_{\wk}, \wk\in\mathcal{L}_1 \},\\ &&\mathcal{K}_1=\{(\wk,\wl):J_{\wk}\cap J_{\wl}\neq \emptyset, \wk,\wl\in\mathcal{L}_2 \},\quad \mathcal{K}_2=\{(\wk,\wl):J_{\wk}\cap J_{\wl}= \emptyset,\wk,\wl\in\mathcal{L}_2\},\\ && c_{\wk}=(k_1\delta\ldot k_{n-1}\delta). \EQNY Note that $\wk$ and $\wl$ are $(n-1)$-dimensional vectors.\\ We have \BQNY \bigcup_{\wk\in \mathcal{L}_2 } J_{\wk} \subset L \subset\bigcup_{\wk\in \mathcal{L}_1 } J_{\wk}.
\EQNY The Bonferroni inequality leads to \BQN \pk{\sup_{\vk{x}\in E_0(\delta)}(X(\vk{x})+g(\vk{x}))>\mu} &\leq& \sum_{\wk\in\mathcal{L}_1}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu},\label{Thm31upperbound1}\\ \pk{\sup_{\vk{x}\in E_0(\delta)}(X(\vk{x})+g(\vk{x}))>\mu} &\geq& \sum_{\wk\in\mathcal{L}_2}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu}-\sum_{i=1}^2\Lambda_i(u),\label{Thm31lowerbound2} \EQN where \BQNY &&\Lambda_1(u)=\sum_{(\wk,\wl)\in\mathcal{K}_1}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu,\sup_{\vk{x}\in D_{\wl}}(X(\vk{x})+g(\vk{x}))>\mu},\\ &&\Lambda_2(u)=\sum_{(\wk,\wl)\in\mathcal{K}_2}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu,\sup_{\vk{x}\in D_{\wl}}(X(\vk{x})+g(\vk{x}))>\mu}. \EQNY By \eqref{Thm31sig}--\eqref{Thm31r3} and \nelem{lem1}, we have \BQN\label{Thm31p3} &&\sum_{\wk\in\mathcal{L}_1}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu}\nonumber\\ &&\sim\sum_{\wk\in\mathcal{L}_1}2^n\delta^{n-1} \LT(\Pi_{i=1}^{n-1}\tla_i(c_{\wk})\RT) \sqrt{\frac{\pi}{2}}e^{\frac{(2w)^2}{8}} \mu^{2n-1}\Psi(\mu)\nonumber\\ &&\sim\sum_{\wk\in\mathcal{L}_1}2^{3(n-1)}\delta^{n-1} \LT(\Pi_{i=1}^{n-1}\tla_i(c_{\wk})\RT) u^{2(n-1)}e^{-2u^2+2wu}\nonumber\\ &&\sim2^{3(n-1)} \int_{L} \LT(\Pi_{i=1}^{n-1}a_i(\wx, h(\wx))\RT)d \wx u^{2(n-1)}e^{-2u^2+2uw}, \EQN as $\mu\rw\IF, \delta\rw 0.$ \eE{Since $a_i(\wx,h(\wx)),\ \wx\in L$ is positive and continuous and $L$ is Jordan measurable with positive Lebesgue measure, we have $\int_{L} \LT(\Pi_{i=1}^{n-1}a_i(\wx, h(\wx))\RT)d \wx \in(0,\IF)$.}\\ Similarly, \BQN\label{Thm31p4} \sum_{\wk\in\mathcal{L}_2}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu}\sim2^{3(n-1)} \int_{L} \LT(\Pi_{i=1}^{n-1}a_i(\wx, h(\wx))\RT)d \wx u^{2(n-1)}e^{-2u^2+2uw}, \EQN as $\mu\rw\IF, \delta\rw 0.$ Next we will show that $\Lambda_i(u), i=1,2$, as $u\rw\IF$, are both negligible compared with $$\sum_{\wk\in\mathcal{L}_2}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu}.$$ For any $(\wk,\wl)\in \mathcal{K}_1$,
without loss of generality, we assume that $k_1+1=l_1$. Let \BQNY D_{\wk}^1=\LT[k_1\delta,(k_1+1)\delta-\delta^2\RT] \times \Pi_{j=2}^{n-1}[k_j\delta,(k_j+1)\delta], \quad D_{\wk}^2=\LT[(k_1+1)\delta-\delta^2,(k_1+1)\delta\RT] \times \Pi_{j=2}^{n-1}[k_j\delta,(k_j+1)\delta]. \EQNY Then \BQNY &&\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu, \sup_{\vk{x}\in D_{\wl}}(X(\vk{x})+g(\vk{x}))>\mu}\\ &&\leq\pk{\sup_{\vk{x}\in D^1_{\wk}(u)}(X(\vk{x})+g(\vk{x}))>\mu, \sup_{\vk{x}\in D_{\wl}(u)}(X(\vk{x})+g(\vk{x}))>\mu}+\pk{\sup_{\vk{x}\in D^2_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu}. \EQNY Analogously to \eqref{Thm31p3}, we have \BQNY \Lambda_{11}(u)&:=&\sum_{\wk\in\mathcal{L}_2}\pk{\sup_{\vk{x}\in D^2_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu}\\ &\sim&\sum_{\wk\in\mathcal{L}_2}2^n\delta^{n} \LT(\Pi_{i=1}^{n-1}\tla_i(c_{\wk})\RT) \sqrt{\frac{\pi}{2}}e^{\frac{(2w)^2}{8}} \mu^{2n-1}\Psi(\mu)\nonumber\\ &\sim&\sum_{\wk\in\mathcal{L}_2}2^{3(n-1)}\delta^{n} \LT(\Pi_{i=1}^{n-1}\tla_i(c_{\wk})\RT)u^{2(n-1)}e^{-2u^2+2wu}\nonumber\\ &=&o\LT(u^{2(n-1)}e^{-2u^2+2uw}\RT), \EQNY as $u\rw\IF, \delta\rw 0.$\\ Moreover, since for $(\vk{x},\vk{y})\in D^1_{\wk}(u)\times D_{\wl}(u),\ (\wk,\wl)\in \mathcal{K}_1$, by \eqref{Thm31r2} we have \BQNY \Var(X(\vk{x})+X(\vk{y}))=2+2r(\vk{x},\vk{y})\leq 4-\mathbb{Q}\delta, \EQNY then by the Borell-TIS inequality, with $\mathbb{Q}=\sup_{\vk{x}\in [0,1]^n}g(\vk{x})$, \BQN\label{Thm31P5} \Lambda_{12}(u)&:=&\sum_{(\wk,\wl)\in\mathcal{K}_1}\pk{\sup_{\vk{x}\in D^1_{\wk}(u)}(X(\vk{x})+g(\vk{x}))>\mu, \sup_{\vk{x}\in D_{\wl}(u)}(X(\vk{x})+g(\vk{x}))>\mu}\nonumber\\ &\leq& \sum_{(\wk,\wl)\in\mathcal{K}_1}\pk{\sup_{(\vk{x},\vk{y})\in D^1_{\wk}(u)\times D_{\wl}(u) }(X(\vk{x})+X(\vk{y}) )>2(\mu-\mathbb{Q})}\nonumber\\ &\leq& \sum_{(\wk,\wl)\in\mathcal{K}_1} \exp\LT(-\frac{(2(\mu-\mathbb{Q})-\mathbb{Q})^2}{2(4-\mathbb{Q}\delta)}\RT)\nonumber\\ &=&o\LT(u^{2(n-1)}e^{-2u^2+2uw}\RT),\ u\rw\IF.
\EQN Since each $J_{\wk}$ has at most $3^{n-1}-1$ neighbors, we have \BQN\label{Thm31P6} \Lambda_1(u)\leq 2(3^{n-1}-1)(\Lambda_{11}(u)+\Lambda_{12}(u))=o\LT(u^{2(n-1)}e^{-2u^2+2uw}\RT),\ u\rw\IF,\ \delta\rw0. \EQN Similarly, since for $(\vk{x},\vk{y})\in D_{\wk}(u)\times D_{\wl}(u),\ (\wk,\wl)\in \mathcal{K}_2$, by \eqref{Thm31r2} we have \BQNY \Var(X(\vk{x})+X(\vk{y}))=2+2r(\vk{x},\vk{y})\leq 4-\mathbb{Q}\delta, \EQNY and then we have \BQN\label{Thm31P7} \Lambda_2(u)&=&\sum_{(\wk,\wl)\in\mathcal{K}_2}\pk{\sup_{\vk{x}\in D_{\wk}}(X(\vk{x})+g(\vk{x}))>\mu,\sup_{\vk{x}\in D_{\wl}}(X(\vk{x})+g(\vk{x}))>\mu}\nonumber\\ &\leq& \sum_{(\wk,\wl)\in\mathcal{K}_2}\pk{\sup_{(\vk{x},\vk{y})\in D_{\wk}(u)\times D_{\wl}(u) }(X(\vk{x})+X(\vk{y}) )>2(\mu-\mathbb{Q})}\nonumber\\ &\leq& \sum_{(\wk,\wl)\in\mathcal{K}_2} \exp\LT(-\frac{(2(\mu-\mathbb{Q})-\mathbb{Q})^2}{2(4-\mathbb{Q}\delta)}\RT)\nonumber\\ &=&o\LT(u^{2(n-1)}e^{-2u^2+2uw}\RT), \ u\rw\IF. \EQN Inserting \eqref{Thm31p3}, \eqref{Thm31p4}, \eqref{Thm31P6}, and \eqref{Thm31P7} into \eqref{Thm31upperbound1} and \eqref{Thm31lowerbound2} implies \BQNY \pk{\sup_{\vk{x}\in E_0(\delta)}(X(\vk{x})+g(\vk{x}))>\mu}\sim 2^{3(n-1)} \int_{L} \LT(\Pi_{i=1}^{n-1}a_i(\wx, h(\wx))\RT)d \wx u^{2(n-1)}e^{-2u^2+2uw},\ u\rw\IF, \EQNY which combined with \eqref{Thm31ub1} and \eqref{Thm31lb2} establishes the claim \eqref{Mulre1}. \QED \proofkorr{Corr3} Since $f$ is assumed to be positive on $[0,1]^n$, for any $\x \in (0,1)^{n}$ the $i$th partial derivative of $F$, denoted by $a_i(\x)$, is positive and continuous. Let $\Q=\sup_{\x\in [0,1]^n} f(\x)$, which is finite and positive by assumption, and let $\seE=\Bigl\{\vk{x}\in [0,1]^n:F(\vk{x})=\frac{1}{2}\Bigr\}$. Using a Taylor expansion, we have \BQNY \Abs{F(\x)-F(\z)-\sum_{i=1}^{n}a_i(\vk{z})(x_i-z_i)} \leq \Q\LT(\sum_{i=1}^{n}\abs{x_i-z_i}^2\RT) \EQNY for any $\x, \z \in [0,1]^{n}$.
Consequently, \BQNY \sup_{\z\in \seE}\sup_{0 < \abs{\x-\z}<\delta}\frac{\abs{F(\x)-F(\z)-\sum_{i=1}^na_i(\vk{z})(x_i-z_i)}} {\sum_{i=1}^n\abs{x_i-z_i}}\leq 2\Q\delta, \EQNY which combined with the continuity of $a_i$ implies \BQNY \lim_{\delta\rw 0}\sup_{\vk{z}\in \seE} \underset{\vk{x}\neq \vk{y}}{\sup_{\abs{\vk{x}-\vk{z}},\abs{\vk{y}-\vk{z}}\leq \delta}}\frac{\abs{ F(\vk{x})-F(\vk{y})-\sum_{i=1}^{n}a_i(\vk{z})(x_i-y_i)}} {\sum_{i=1}^{n}\abs{x_i-y_i}}=0. \EQNY In view of \netheo{ThDK} the set $\seE$ is not empty and moreover its projection on $[0,1]^{n-1}$, denoted by $L$, is Jordan measurable with positive Lebesgue measure (with respect to $\lambda_{n-1}$).\\ Let $L^o$ denote the interior of $L$. By the positivity of the partial derivatives on the interior of $[0,1]^n$, the function $F(\wx,\cdot)$ is strictly increasing, hence for any $\wx \in L^o$ there is only one $x_n$ such that $F(\wx,x_n)=1/2$. Consequently, $x_n= g(\wx)$ for some function $g$ defined on $L^o$. Since $F$ is continuously differentiable, for any $\x \in \seE$ with $\wx\in L^o$ by the implicit function theorem there exists $\vn>0$ such that for any $\wy \in \mathcal{O}_\vn(\wx)=\{\wz\in I^{n-1}:\abs{\wz-\wx}<\vn\}\subseteq L^o$ we have $ F(\wy, h_{\x}(\wy))=1/2$ for a continuously differentiable function $h_{\x}$. By the above, $\lambda_{n-1}(L^o)\geq \lambda_{n-1}(\mathcal{O}_\vn(\wx) )>0$, $h_{\x}$ does not depend on $\x$, and $h_{\x}(\wy)=g(\wy)$ for any $\wy\in \mathcal{O}_\vn(\wx)$. It thus follows that $g$ is continuously differentiable on $L^o$. Moreover, writing $h=g$, for any $\wx\in L^o$ \BQNY \frac{\partial h(\wx)}{\partial x_i }=-\frac{a_i(\wx,h(\wx))}{a_n(\wx,h(\wx))}<0,\ i=1\ldot n-1, \EQNY and thus $h$ is continuously differentiable in $L^o$. Hence the proof follows by \netheo{ThmM1}, since \eqref{FEXM}, the continuous differentiability of $h$ in $L^o$ and the Jordan measurability of $L$ are satisfied.
\QED \prooftheo{Thm2} The variance function of $W_F(\vk{x})$ is \BQNY \sigma^2_F(\vk{x})=\Var(W_F(\vk{x}))=F(\vk{x})(1-F(\vk{x})),\quad \vk{x}\in [0,1]^n, \EQNY which attains its maximal value $\frac{1}{4}$ over $[0,1]^n$ at any $\vk{z}$ satisfying $F(\vk{z})=\frac{1}{2}$.\\ Since $\sigma^2_F(\vk{x}), \vk x \in [0,1]^n$, is a continuous function, we have \BQNY \sup_{\vk{x}\in [0,1]^n\setminus \seEd}\sigma^2_F(\vk{x})=\frac{1}{4}-\delta^2=:\sigma^2_m. \EQNY By the Borell--TIS inequality \cite{AdlerTaylor}, as $u\rw\IF$ \BQN\label{boundth21} \pk{\sup_{\vk{x}\in [0,1]^n\setminus \seEd}W(\vk{x})>u\Big|W(\vk{1})=w}&=&\pk{\sup_{\vk{x}\in [0,1]^n\setminus \seEd}\LT(W_F(\vk{x})+F(\vk{x})w\RT)>u}\nonumber\\ &\leq&\pk{\sup_{\vk{x}\in [0,1]^n\setminus \seEd}W_F(\vk{x})>u-\mathbb{Q}_1}\nonumber\\ &\leq&\exp\LT(-\frac{(u-\mathbb{Q}_1-\mathbb{Q}_2)^2}{2\sigma^2_m}\RT)\nonumber\\ &=&o\LT(e^{-2u^2+2uw}\RT), \EQN where $\mathbb{Q}_1=\sup_{\vk{x}\in [0,1]^n}F(\vk{x})$ and $\mathbb{Q}_2=\E{\sup_{\vk{x}\in [0,1]^n}W_F(\vk{x})}<\IF$. By \eqref{condition1}, we know that $W_F(\vk{x}), \vk{x}\in \seEd$, is a Gaussian field with covariance function \BQN\label{covariance1} \E{W_F(\vk{x})W_F(\vk{y})}&=& F(\vk{x}\wedge \vk{y})-F(\vk{x})F(\vk{y})\nonumber\\ &=&F(\vk{x})\wedge F(\vk{y})-F(\vk{x})F(\vk{y})\nonumber\\ &=&\E{B_0(F(\vk{x}))B_0(F(\vk{y}))}, \EQN where $B_0(t)=B(t)-tB(1)$ is the standard Brownian bridge.
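Indeed, recalling that $B_0(t)=B(t)-tB(1)$, a direct calculation gives \BQNY \E{B_0(s)B_0(t)}=\E{B(s)B(t)}-t\E{B(s)B(1)}-s\E{B(t)B(1)}+st\Var(B(1))=s\wedge t-st,\quad s,t\in[0,1], \EQNY which is the right-hand side of \eqref{covariance1} with $s=F(\vk{x})$ and $t=F(\vk{y})$.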
Then by Example 3.12 in \cite{GauTrend16} \BQNY \pk{\sup_{\vk{x}\in\seEd}W(\vk{x})>u\Big|W(\vk{1})=w}&=&\pk{\sup_{\vk{x}\in \seEd}\LT(W_F(\vk{x})+F(\vk{x})w\RT)>u}\\ &=&\pk{\sup_{\vk{x}\in \seEd}\LT(B_0(F(\vk{x}))+F(\vk{x})w\RT)>u}\\ &=&\pk{\sup_{F(\vk{x})\in [\frac{1}{2}-\delta,\frac{1}{2}+\delta]}\LT(B_0(F(\vk{x}))+F(\vk{x})w\RT)>u}\\ &=&\pk{\sup_{x\in [\frac{1}{2}-\delta,\frac{1}{2}+\delta]}\LT(B_0(x)+wx\RT)>u}\\ &\sim& e^{-2u^2+2uw}, \ u\rw\IF, \EQNY which combined with \eqref{boundth21} and the fact that \begin{align*} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}&\geq \pk{\sup_{\vk{x}\in\seEd}W(\vk{x})>u\Big|W(\vk{1})=w}\\ \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}&\leq \pk{\sup_{\vk{x}\in\seEd}W(\vk{x})>u\Big|W(\vk{1})=w} +\pk{\sup_{\vk{x}\in [0,1]^n\setminus \seEd}W(\vk{x})>u\Big|W(\vk{1})=w} \end{align*} implies \BQN\label{re1} \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}\sim e^{-2u^2+2uw}, \ u\rw\IF. \EQN If \eqref{condition1} holds for all $\vk{x}, \vk{y}\in [0,1]^n$, then by \eqref{covariance1} for any $u>0$ \BQNY \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}&=&\pk{\sup_{\vk{x}\in [0,1]^n}\LT(W_F(\vk{x})+F(\vk{x})w\RT)>u}\\ &=&\pk{\sup_{F(\vk{x})\in [0,1]}\LT(B_0(F(\vk{x}))+F(\vk{x})w\RT)>u}\\ &=&\pk{\sup_{x\in [0,1]}\LT(B_0(x)+wx\RT)>u}\\ &=& e^{-2u^2+2uw}, \EQNY where the last equality is well known, see, e.g., Lemma 2.7 in \cite{AsymBB2003}. \QED \prooftheo{Corr4} For $u>0$ we have \begin{align*} \pk{\sup_{\vk{x}\in \mathcal{E}}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w} &=\pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u\Big|W(\vk{1})=w} +\pk{\inf_{\vk{x}\in \mathcal{E}}W(\vk{x})<-u\Big|W(\vk{1})=w}\nonumber\\ &\quad-\pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u,\ \inf_{\vk{x}\in \mathcal{E}}W(\vk{x})<-u\Big|W(\vk{1})=w}\nonumber\\ &=: J_1(u)+J_2(u)-J_3(u).
\end{align*} Since there exists $\x_0\in \mathcal{E}$ such that $F(\x_0)=\frac{1}{2}$ and \BQNY \Var(W(\x_0)-F(\x_0)W(1))=F(\x_0)-F^2(\x_0)=\frac{1}{4}, \EQNY we have \BQNY J_1(u)=\pk{\sup_{\vk{x}\in \mathcal{E}}\LT(W(\vk{x})-F(\x)W(\vk{1})+F(\x) w\RT)>u} \geq \pk{W(\x_0)-F(\x_0)W(\vk{1})+\frac{1}{2} w>u}=\Psi(2u-w), \EQNY and \BQNY J_2(u)=\pk{\inf_{\vk{x}\in \mathcal{E}}\LT(W(\vk{x})-F(\x)W(\vk{1})+F(\x) w\RT)<-u} \geq \pk{W(\x_0)-F(\x_0)W(\vk{1})+\frac{1}{2} w<-u}=\Psi(2u+w). \EQNY Thus we have for $u>0$ \BQNY J_1(u)+J_2(u)\geq \Psi(2u-w)+\Psi(2u+w)\geq \Psi(2u) . \EQNY Next, in order to obtain the final result, we need to show that \BQNY J_3(u)=o\LT(\Psi(2u)\RT), \ u\rw\IF. \EQNY We have for $u>0$ \BQNY J_3(u)&\leq&\pk{\sup_{\vk{x}\in \mathcal{E}\setminus \seEd}W(\vk{x})>u\Big|W(\vk{1})=w }+\pk{\inf_{\y\in \mathcal{E}\setminus \seEd }W(\vk{y})<-u\Big|W(\vk{1})=w}\\ &&+\pk{\sup_{\vk{x}\in \seEd }W(\vk{x})>u,\ \inf_{\y\in \seEd}W(\y)<-u\Big|W(\vk{1})=w}\\ &=:& J_{31}(u)+J_{32}(u)+J_{33}(u). \EQNY For $\x\in \mathcal{E}\setminus \seEd $ we have $\abs{F(\x)-\frac{1}{2}}>\delta$ and \BQNY \sigma^2_m:=\sup_{\vk{x}\in \mathcal{E}\setminus \seEd}\Var\LT(W(\vk{x})-F(\vk{x})W(\vk{1})\RT)=\sup_{\vk{x}\in \mathcal{E}\setminus \seEd}F(\x)(1-F(\x))\leq \frac{1}{4}-\delta^2.
\EQNY By the Borell--TIS inequality \cite{AdlerTaylor}, we have for all $u$ sufficiently large \BQNY J_{31}(u)&=&\pk{\sup_{\vk{x}\in \mathcal{E}\setminus \seEd}\LT(W(\vk{x})-F(\vk{x})W(\vk{1})+F(\vk{x})w\RT)>u}\\ &\leq&\pk{\sup_{\vk{x}\in \mathcal{E}\setminus \seEd}\LT(W(\vk{x})-F(\vk{x})W(\vk{1})\RT)>u-\abs{w}}\\ &\leq&\exp\LT(-\frac{(u-\abs{w}-\mathbb{Q}_1)^2}{2\sigma^2_m}\RT) =o\LT(\Psi(2u)\RT), \EQNY and \BQNY J_{32}(u)&=&\pk{\inf_{\y\in \mathcal{E}\setminus \seEd }\LT(W(\vk{y})-F(\vk{y})W(\vk{1})+F(\vk{y})w\RT)<-u}\\ &=&\pk{\sup_{\vk{x}\in \mathcal{E}\setminus \seEd }\LT(-W(\vk{x})+F(\vk{x})W(\vk{1})-F(\vk{x})w\RT)>u}\\ &\leq&\pk{\sup_{\vk{x}\in \mathcal{E}\setminus \seEd }\LT(W(\vk{x})-F(\vk{x})W(\vk{1})\RT)>u-\abs{w}}\\ &\leq& \exp\LT(-\frac{(u-\abs{w}-\mathbb{Q}_1)^2}{2\sigma^2_m}\RT) =o\LT(\Psi(2u)\RT), \EQNY where we use the symmetry of $(W(\vk{x})-F(\vk{x})W(\vk{1}))$ and $\mathbb{Q}_1:=\E{\sup_{\x\in [0,1]^n\setminus \seEd}\LT(W(\vk{x})-F(\vk{x})W(\vk{1})\RT)}\in(0,\IF)$. Further, by \eqref{Con2} \begin{align*} \varrho:&=\sup_{(\vk{x},\vk{y})\in \seEd\times \seEd}\Var\LT((W(\vk{x})-W(\vk{y}))-(F(\vk{x})-F(\vk{y}))W(\vk{1})\RT)\\ &=\sup_{(\vk{x},\vk{y})\in \seEd\times \seEd}\LT(F(\vk{x})+F(\vk{y})-2F(\vk{x}\wedge\vk{y})-(F(\vk{x})-F(\vk{y}))^2\RT)\\ &\leq \sup_{(\vk{x},\vk{y})\in \seEd\times \seEd}\LT(F(\x)+F(\y)-F^2(\x)-F^2(\y)+2F(\vk{x})F(\vk{y})\RT)- \inf_{(\vk{x},\vk{y})\in \seEd\times \seEd}2F(\vk{x}\wedge\vk{y})\\ &< 1+2\delta-2\delta=1, \end{align*} where we use the fact that $$\sup_{(x,y)\in[\frac{1}{2}-\delta,\frac{1}{2}+\delta] \times[\frac{1}{2}-\delta,\frac{1}{2}+\delta]}\LT(x+y-x^2-y^2+2xy\RT)=1+2\delta.$$ By the Borell--TIS inequality \cite{AdlerTaylor} again \BQN\label{Thm31pp} J_{33}(u)&=&\pk{\sup_{\vk{x}\in \seEd }\LT(W(\x)-F(\x)W(\vk{1})+F(\x) w\RT)>u ,\ \inf_{\y\in \seEd}\LT(W(\y)-F(\y)W(\vk{1})+F(\y) w\RT)<-u}\nonumber\\ &=&\pk{\sup_{\vk{x}\in \seEd }\LT(W(\x)-F(\x)W(\vk{1})+F(\x) w\RT)>u ,\ \sup_{\y\in \seEd}\LT(-W(\y)+F(\y)W(\vk{1})-F(\y)
w\RT)>u}\nonumber\\ &\leq&\pk{\sup_{(\vk{x},\vk{y})\in \seEd\times \seEd}\LT((W(\vk{x})-W(\vk{y}))-(F(\vk{x})-F(\vk{y}))W(\vk{1}) +(F(\vk{x})-F(\vk{y}))w\RT)>2u}\nonumber\\ &\leq& \pk{\sup_{(\vk{x},\vk{y})\in \seEd\times \seEd}\LT((W(\vk{x})-W(\vk{y}))-(F(\vk{x})-F(\vk{y}))W(\vk{1}) \RT)>2u-\abs{w}}\nonumber\\ &\leq& \exp\LT(-\frac{\LT(2u-\abs{w}-\mathbb{Q}_2\RT)^2}{2\varrho}\RT) = o\LT(\Psi(2u)\RT), \EQN where $\mathbb{Q}_2=\E{\sup_{(\vk{x},\vk{y})\in \seEd\times \seEd}\LT((W(\vk{x})-W(\vk{y}))-(F(\vk{x})-F(\vk{y}))W(\vk{1})\RT)}<\IF$. Hence, as $u\rw\IF$ \BQN\label{JJ3} J_3(u)\leq J_{31}(u)+J_{32}(u)+J_{33}(u)=o\LT(\Psi(2u)\RT), \EQN and the proof is complete. \QED \proofprop{PROP00} i) We have that \BQNY \seE=\LT\{\vk{x}\in [0,1]^2: x_1+x_2=\frac{3}{2}\RT\}, \quad L=\LT[\frac{1}{2},1\RT] \EQNY and further, for $\vk{x},\vk{y}\in \LT\{\z\in[0,1]^2:z_1+z_2\geq 1\RT\}\supset\seE$ \BQNY F(\vk{x})-F(\vk{y})=\sum_{i=1}^2(x_i-y_i), \EQNY which implies that \eqref{FEXM} is satisfied.\\ In view of \netheo{ThmM1}, we have $K=4$ by taking $a_1(x,h(x))=1, \ x \in L=\LT[\frac{1}{2},1\RT]$. For $\delta\in\LT(0,\frac{1}{4}\RT)$ small enough, set $\mathcal{E}=\LT[\frac{1}{2}+\delta, 1\RT]\times\LT[\frac{1}{2}+\delta, 1\RT]$ and $\seEd=\LT\{\x\in \mathcal{E}: \frac{1}{2}-\delta \leq F(\vk{x})\leq \frac{1}{2}+\delta \RT\}$; then we have \BQNY \inf_{\x,\y\in\seEd} F(\x\wedge\y)= \inf_{\x,\y\in\seEd} (x_1\wedge y_1+x_2\wedge y_2-1) \geq 2\delta>\delta, \EQNY which shows that \eqref{Con2} holds.
We have for $u>0$ \BQN &&\pk{\sup_{\vk{x}\in [0,1]^2}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\geq J_1(u),\label{lowbound1}\\ &&\pk{\sup_{\vk{x}\in [0,1]^2}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\leq\sum_{i=1}^3 J_i(u),\label{upbound1} \EQN where \BQNY &&J_1(u)=\pk{\sup_{\vk{x}\in \mathcal{E}}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}, \ \ J_2(u)=\pk{\sup_{\vk{x}\in \LT[0,\frac{1}{2}+\delta\RT]\times [0,1]}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w},\\ &&J_3(u)=\pk{\sup_{\vk{x}\in[0,1]\times\LT[0,\frac{1}{2}+\delta\RT]} \abs{W(\vk{x})}>u\Big|W(\vk{1})=w}. \EQNY For $J_1(u)$, by \netheo{ThmM1} with $a_1(x,h(x))=1, \ x \in L_1=\LT[\frac{1}{2}+\delta,1\RT]$ and \netheo{Corr4}, \BQNY J_1(u)&\sim& \pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u\Big|W(\vk{1})=w}+\pk{\sup_{\vk{x}\in \mathcal{E}}W(\vk{x})>u\Big|W(\vk{1})=-w} \\ &\sim& c(4-8\delta)u^{2}e^{-2u^2+2u\abs{w}} \\ &\sim& 4cu^{2}e^{-2u^2+2u\abs{w}},\ u\rw\IF,\ \delta\rw 0 \EQNY holds with $c=1$ for $w\neq 0$ and $c=2$ for $w=0$. By \netheo{ThmM1} with $a_1(x,h(x))=1, \ x \in L_2=\LT[\frac{1}{2},\frac{1}{2}+\delta\RT]$, \BQNY J_2(u)&\leq&\pk{\sup_{\vk{x}\in \LT[0,\frac{1}{2}+\delta\RT]\times [0,1]}W(\vk{x})>u\Big|W(\vk{1})=w}+\pk{\inf_{\vk{x}\in \LT[0,\frac{1}{2}+\delta\RT]\times [0,1]}W(\vk{x})<-u\Big|W(\vk{1})=w}\\ &\leq&\pk{\sup_{\vk{x}\in \LT[0,\frac{1}{2}+\delta\RT]\times [0,1]}W(\vk{x})>u\Big|W(\vk{1})=w}+\pk{\sup_{\vk{x}\in \LT[0,\frac{1}{2}+\delta\RT]\times [0,1]}W(\vk{x})>u\Big|W(\vk{1})=-w}\\ &\sim& 8\delta u^{2}e^{-2u^2+2uw}+8\delta u^{2}e^{-2u^2-2uw}\\ &=& o\LT(u^{2}e^{-2u^2+2u\abs{w}}\RT),\ u\rw\IF,\ \delta\rw 0. \EQNY Similarly, \BQNY J_3(u)=o\LT(u^{2}e^{-2u^2+2u\abs{w}}\RT),\ u\rw\IF,\ \delta\rw 0. \EQNY Thus by \eqref{lowbound1} and \eqref{upbound1} \BQNY \pk{\sup_{\vk{x}\in [0,1]^2}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\sim J_1(u)\sim 4cu^{2}e^{-2u^2+2u\abs{w}},\ u\rw\IF.
\EQNY ii) We have that \BQNY \seE=\LT\{\vk{x}\in [0,1]^2: x_1+x_2+x_1x_2=2\RT\}, \quad L=\LT[\frac{1}{2},1\RT], \ h(x_1)=\frac{2-x_1}{1+x_1}, \EQNY and in a small neighborhood of $\seE$ the density function of $F(x_1,x_2)$ satisfies \BQNY f(x_1,x_2)= \frac{4-2x_1-2x_2}{(1+(1-x_1)(1-x_2))^3}>0. \EQNY In view of \nekorr{Corr3}, we have $K=3\ln 3$ by taking $a_1(x,h(x))=\frac{3}{4-4(1-x)^2}, \ x \in L=\LT[\frac{1}{2},1\RT]$. For $\delta\in\LT(0,\frac{3-2\sqrt{2}}{2}\RT)$, set $\seEd=\LT\{\x\in [0,1]^2: \frac{1}{2}-\delta \leq F(\vk{x})\leq \frac{1}{2}+\delta \RT\}$; then we have \BQNY \min(x_1,x_2)\geq \frac{1}{2}-\delta,\ \x\in\seEd, \EQNY and \BQNY \inf_{\x,\y\in\seEd} F(\x\wedge\y)= \inf_{\x,\y\in\seEd} \frac{(x_1\wedge y_1)(x_2\wedge y_2)}{1+(1-x_1\wedge y_1)(1-x_2\wedge y_2)} \geq \frac{(\frac{1}{2}-\delta)^2}{1+(\frac{1}{2}+\delta)^2}> \frac{(\frac{1}{2}-\delta)^2}{2}>\delta, \EQNY which shows that \eqref{Con2} holds. Thus by \netheo{ThmM1} and \netheo{Corr4} \BQNY \pk{\sup_{\vk{x}\in [0,1]^2}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w} \sim c K u^{2}e^{-2u^2+2u\abs{w}} \EQNY holds with $c=1$ for $w\neq 0$ and $c=2$ for $w=0$. \QED \proofprop{PROP0} i) We have that \BQNY \seE=\LT\{\vk{x}\in [0,1]^n:\Pi_{i=1}^n x_i=\frac{1}{2}\RT\}, \quad L=\LT\{\wx\in [0,1]^{n-1}:\frac{1}{2}\leq \Pi_{i=1}^{n-1}x_i\leq 1\RT\}. \EQNY Further, $F(\x)$ has the density function $f(\x)\equiv1$. By \nekorr{Corr3}, we get the results with $$a_i(\wx,h(\wx))=\frac{1}{2x_i}, \quad \wx \in L,\ i=1\ldot n-1.$$ Further, if we take $\mathcal{E}=[0,1]^n$ and $\seEd:=\LT\{\x\in [0,1]^n: \frac{1}{2}-\delta \leq F(\vk{x})\leq \frac{1}{2}+\delta \RT\}$ with $\delta\in(0,\frac{1}{4})$ such that $\delta^{1/n}+\delta<\frac{1}{2}$, then we have $$\min_{1\leq i\leq n} x_i\geq \frac{1}{2}-\delta , \ \x \in \seEd,$$ and \BQNY \inf_{\x,\y\in\seEd} F(\x\wedge\y)= \inf_{\x,\y\in\seEd} \Pi_{i=1}^n(x_i\wedge y_i) \geq \LT(\frac{1}{2}-\delta\RT)^n>\delta, \EQNY which shows that \eqref{Con2} holds.
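To illustrate, for $n=2$ the constant $K=2^{3(n-1)}\int_{L}\LT(\Pi_{i=1}^{n-1}a_i(\wx,h(\wx))\RT)d \wx$ evaluates, with the above choice $a_1(x,h(x))=\frac{1}{2x}$ and $L=\LT[\frac{1}{2},1\RT]$, to \BQNY K=2^{3}\int_{1/2}^{1}\frac{1}{2x}\,dx=4\ln 2. \EQNY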
Thus by \nekorr{Corr3} and \netheo{Corr4} \BQNY &&\pk{\sup_{\vk{x}\in [0,1]^n}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\nonumber\\ &&\sim \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}+\pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=-w}\\ &&\sim c K u^{2(n-1)}e^{-2u^2+2u\abs{w}} \EQNY holds with $c=1$ for $w\neq 0$ and $c=2$ for $w=0$. \COM{ii) We have that \BQNY \seE=\LT\{\vk{x}\in [0,1]^n:\sum_{i=1}^n x_i=\frac{n}{2}\RT\}, \quad L=\LT\{\wx\in [0,1]^{n-1}:\frac{n-2}{2}\leq \sum_{i=1}^{n-1}x_i\leq \frac{n}{2}\RT\}. \EQNY Further, for $\vk{x},\vk{y}\in [0,1]^n$ \BQNY F(\vk{x})-F(\vk{y})=\frac{1}{n}\sum_{i=1}^n(x_i-y_i). \EQNY In the notation of \netheo{ThmM1}, we have $a_i(\wx,h(\wx))=\frac{1}{n}, \ \wx \in L$, hence the claim follows from the aforementioned theorem. \\} ii) For $\delta>0$ small enough, set \BQNY &&E_i(\delta)=\{\x\in [0,1]^n: x_i\leq \min(x_1\ldot x_{i-1},x_{i+1}\ldot x_n)-\delta\},\ i=1\ldot n,\\ &&E_{n+1}(\delta)=[0,1]^n\setminus \LT(\cup_{i=1}^n E_i(\delta)\RT). \EQNY We have \begin{align*} \pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}&\leq \sum_{i=1}^{n+1}\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u},\\ \pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}&\geq \sum_{i=1}^{n}\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ &-\sum_{1\leq i<j\leq n}\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u, \sup_{\vk{x}\in E_j(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}. \end{align*} For $\x\in E_n(\delta)$, we have $F(\x)=x_n\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)$, \BQNY &&\seE_n=\{\x\in E_n(\delta):F(\x)=\frac{1}{2}\}=\LT\{(\wx,h(\wx)):\wx\in L\RT\},\\ &&L(\delta)=\LT\{\wx\in[0,1]^{n-1}:h(\wx)\leq\min_{1\leq i\leq n-1}x_i-\delta\RT\},\ h(\wx)=\frac{1}{2d+2(1-d)\Pi_{i=1}^{n-1}x_i}, \EQNY and for $\x\in E_n(\delta)$, $F(\x)$ has density function $f(\x)\equiv1$. 
Thus by \nekorr{Corr3}, we have as $u\rw\IF$ \BQNY \pk{\sup_{\vk{x}\in E_n(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \sim (4d)^{(n-1)} \int_{L(\delta)} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx u^{2(n-1)}e^{-2u^2+2uw}. \EQNY Further, we have \BQNY \lim_{\delta\rw 0}\lim_{u\rw\IF}\frac{\pk{\sup_{\vk{x}\in E_n(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}}{u^{2(n-1)}e^{-2u^2+2uw}} = (4d)^{(n-1)} \int_{L} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx , \EQNY where \BQNY L=\LT\{\wx\in[0,1]^{n-1}:\frac{1}{2d+2(1-d)\Pi_{i=1}^{n-1}x_i}\leq\min_{1\leq i\leq n-1}x_i\RT\}. \EQNY By the symmetry of $F(\x)$ on $E_i(\delta), i=1\ldot n-1$, \BQNY \lim_{\delta\rw 0}\lim_{u\rw\IF}\frac{\pk{\sup_{\vk{x}\in E_{i}(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}}{u^{2(n-1)}e^{-2u^2+2uw}} = (4d)^{(n-1)} \int_{L} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx . \EQNY Assume that $W_i(\x)$ is a Brownian sheet based on $F_i(\x)=d x_i +(1-d)\Pi_{j=1}^{n}x_j$, $i=1\ldot n$. We have \BQNY \pk{\sup_{\vk{x}\in E_{n+1}(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \leq \sum_{i=1}^n\pk{\sup_{\vk{x}\in E_{n+1}(\delta)}\LT(W_i(\vk{x})\Big|W_i(\vk{1})=w\RT)>u}. \EQNY For $F_n(\x)=d x_n+(1-d)\Pi_{i=1}^{n}x_i$ and $\x\in E_{n+1}(\delta)$ we have \BQNY &&\seE_{n+1}=\LT\{\x\in E_{n+1}(\delta):F_n(\x)=\frac{1}{2}\RT\}=\LT\{(\wx,h(\wx)):\wx\in L_{n+1}(\delta)\RT\},\\ &&L_{n+1}(\delta)=\LT\{\wx\in[0,1]^{n-1}:(\wx,h(\wx))\in\seE_{n+1}\RT\},\ h(\wx)=\frac{1}{2d+2(1-d)\Pi_{i=1}^{n-1}x_i}. \EQNY Further by \nekorr{Corr3}, we have \BQNY \pk{\sup_{\vk{x}\in E_{n+1}(\delta)}\LT(W_n(\vk{x})\Big|W_n(\vk{1})=w\RT)>u} \sim (4d)^{(n-1)}\int_{L_{n+1}(\delta)} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx u^{2(n-1)}e^{-2u^2+2uw} , u\rw\IF.
\EQNY Similarly, for $i=1\ldot n-1$ \BQNY \pk{\sup_{\vk{x}\in E_{n+1}(\delta)}\LT(W_i(\vk{x})\Big|W_i(\vk{1})=w\RT)>u} \sim (4d)^{(n-1)}\int_{L_{n+1}(\delta)} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx u^{2(n-1)}e^{-2u^2+2uw} , u\rw\IF. \EQNY Since $\lim_{\delta\rw 0}\LT(\cup_{i=1}^{n}E_i(\delta)\RT)=[0,1]^{n}$ and $\lim_{\delta\rw 0} E_{n+1}(\delta)=\emptyset$, we have $\lim_{\delta\rw 0}\lambda_{n-1}(L_{n+1}(\delta))=0$. Thus we have \BQNY \pk{\sup_{\vk{x}\in E_{n+1}(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} &\leq& n(4d)^{(n-1)}\int_{L_{n+1}(\delta)} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx u^{2(n-1)}e^{-2u^2+2uw}\\ &=&o\LT(u^{2(n-1)}e^{-2u^2+2uw}\RT),\ u\rw\IF,\ \delta\rw 0. \EQNY We have for $1\leq i< j\leq n $ and $u>0$ \BQNY &&\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u, \sup_{\vk{x}\in E_j(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ &&\leq \pk{\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})+(F(\x)+F(\y))w>2u}\\ &&\leq \pk{\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})>2u-2\abs{w}}. \EQNY Note that \BQNY \sigma^2_m&:=&\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}\Var\LT(W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})\RT)\\ &=&\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}\E{\LT(W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})\RT)^2}\\ &=&\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}\LT(F(\x)+F(\y)\RT)\LT[1-\LT(F(\x)+F(\y)\RT)\RT]+2F(\x\wedge\y)\\ &<&\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}\LT(F(\x)+F(\y)\RT)\LT[2-\LT(F(\x)+F(\y)\RT)\RT]\\ &\leq& 1, \EQNY where we use the fact that for $(\x,\y)\in E_i(\delta)\times E_j(\delta)$ $$2F(\x\wedge\y)< F(\x)+F(\y).$$ By the Borell--TIS inequality \cite{AdlerTaylor} \BQNY \pk{\sup_{(\x,\y)\in E_i(\delta)\times E_j(\delta)}W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})>2u-2\abs{w}} &\leq& \exp\LT(-\frac{(2u-2\abs{w})^2}{2\sigma^2_m}\RT)\\ &=&o\LT( u^{2(n-1)}e^{-2u^2+2uw}\RT),\ u\rw\IF.
\EQNY Thus we have \BQNY &&\sum_{1\leq i<j\leq n}\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u, \sup_{\vk{x}\in E_j(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ &&\leq n^2\exp\LT(-\frac{(2u-2\abs{w})^2}{2\sigma^2_m}\RT)\\ &&=o\LT( u^{2(n-1)}e^{-2u^2+2uw}\RT),\ u\rw\IF. \EQNY Consequently, letting $u\rw\IF, \delta\rw 0$, we have \BQNY \pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}&\sim& \sum_{i=1}^n\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ &\sim& n(4d)^{(n-1)} \int_{L} \frac{\LT(\Pi_{i=1}^{n-1}x_i\RT)^{n-2}}{\LT(d+(1-d)\Pi_{i=1}^{n-1}x_i\RT)^{n-1}}d \wx u^{2(n-1)}e^{-2u^2+2uw}. \EQNY In particular, when $n=2$, we have $L=\LT[\frac{1}{\sqrt{(1-d)^2+1}+d},1\RT]$ and $$\pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\sim \frac{8d}{1-d}\ln \LT(\sqrt{1+\frac{1}{(1-d)^2}}-\frac{d}{1-d}\RT)u^{2}e^{-2u^2+2uw},\ u\rw\IF.$$ Further, if we take $\mathcal{E}=[0,1]^n$ and $\seEd:=\LT\{\x\in [0,1]^n: \frac{1}{2}-\delta \leq F(\vk{x})\leq \frac{1}{2}+\delta \RT\}$ with $\delta\in(0,\frac{1}{4})$ such that $d\LT(\frac{1}{2}-\delta\RT)+(1-d)\LT(\frac{1}{2}-\delta\RT)^n>\delta$, then we have $$\min_{1\leq i\leq n} x_i\geq \frac{1}{2}-\delta , \ \x \in \seEd,$$ and \BQNY \inf_{\x,\y\in\seEd} F(\x\wedge\y)= \inf_{\x,\y\in\seEd} \LT(d\min_{1\leq i\leq n}x_i\wedge y_i+(1-d)\Pi_{i=1}^n(x_i\wedge y_i)\RT) \geq d\LT(\frac{1}{2}-\delta\RT)+(1-d)\LT(\frac{1}{2}-\delta\RT)^n>\delta, \EQNY which shows that \eqref{Con2} holds. Thus by \nekorr{Corr3} and \netheo{Corr4} \BQNY &&\pk{\sup_{\vk{x}\in [0,1]^n}\abs{W(\vk{x})}>u\Big|W(\vk{1})=w}\nonumber\\ &&\sim \pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=w}+\pk{\sup_{\vk{x}\in [0,1]^n}W(\vk{x})>u\Big|W(\vk{1})=-w}\\ &&\sim c\pk{\sup_{\vk{x}\in [0,1]^n}\LT(W(\vk{x})\Big|W(\vk{1})=\abs{w}\RT)>u} \EQNY holds with $c=1$ for $w\neq 0$ and $c=2$ for $w=0$.
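For completeness, we verify the constant in the case $n=2$: writing $x_0=\frac{1}{\sqrt{(1-d)^2+1}+d}$ for the left endpoint of $L$, a direct calculation gives $d+(1-d)x_0=\frac{\sqrt{(1-d)^2+1}+d}{2}$ and hence \BQNY 2(4d)\int_{x_0}^{1}\frac{dx}{d+(1-d)x}=\frac{8d}{1-d}\ln\frac{1}{d+(1-d)x_0} =\frac{8d}{1-d}\ln\LT(\sqrt{1+\frac{1}{(1-d)^2}}-\frac{d}{1-d}\RT). \EQNY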
\QED \proofprop{PROP1} Clearly, for $F(\vk{x})$, \eqref{condition1} holds for $\vk{x}, \vk{y}\in [0,1]^2$, hence the claim follows by \netheo{Thm2}, \netheo{Corr4} and Remark \ref{Rem2}. \QED \COM{ \proofprop{PROP2} For $\delta\in\LT(0,\frac{\sqrt{5}-2}{2}\RT)$ small enough, set \BQNY E_1(\delta)=\{\x\in [0,1]^2: x_1\leq x_2-\delta\},\ E_2(\delta)=\{\x\in [0,1]^2: x_1\geq x_2+\delta\},\ E_3(\delta)=\{\x\in [0,1]^2: x_2+\delta \leq x_1\leq x_2+\delta\}. \EQNY Then we have \BQNY \pk{\sup_{\vk{x}\in [0,1]^2}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}&\leq& \sum_{i=1}^3\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ \pk{\sup_{\vk{x}\in [0,1]^2}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}&\geq& \sum_{i=1}^2\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ &&-\pk{\sup_{\vk{x}\in E_1(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u, \sup_{\vk{x}\in E_2(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}. \EQNY For $\x\in E_1(\delta)$, we have $F(\x)=\frac{1}{2}x_1\LT(1+x_2\RT)$, \BQNY \seE_1=\{\x\in E_1(\delta):F(\x)=\frac{1}{2}\}=\LT\{(x_1,h(x_1)):x_1\in \LT[\frac{1}{2},\frac{\sqrt{5}-1}{2}-\delta\RT]\RT\},\ h(x_1)=\frac{1-x_1}{x_1}, \EQNY and \BQNY \lim_{\vn\rw 0}\sup_{\vk{z}\in \seE_1 }\underset{\vk{x}\neq \vk{y}}{\sup_{\abs{\vk{x}-\vk{z}},\abs{\vk{y}-\vk{z}}\leq \vn}}\frac{\abs{F(\vk{x})-F(\vk{y})-\frac{1}{2}(1+z_2)(x_1-y_1)-\frac{1}{2}z_1(x_2-y_2)}} {\abs{x_1-y_1}+\abs{x_2-y_2}}=0, \EQNY Thus by \netheo{Thm1}, we have \BQNY \pk{\sup_{\vk{x}\in E_1(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \sim 8\int_{\frac{1}{2}}^{\frac{\sqrt{5}-1}{2}-\delta} \frac{1}{2x}d x u^2e^{-2u^2+2uw} =4\ln (\sqrt{5}-1-2\delta) u^2e^{-2u^2+2uw}, u\rw\IF. \EQNY By the symmetry of $F(\x)$ on $E_1(\delta)$ and $E_1(\delta)$, \BQNY \pk{\sup_{\vk{x}\in E_2(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \sim 4\ln (\sqrt{5}-1-2\delta) u^2e^{-2u^2+2uw}, u\rw\IF. 
\EQNY Assume that $W_1(\x)$ is a Brownian sheet based on $F_1(\x)=\frac{1}{2}x_1(1+x_2)$ and $W_2(\x)$ is a Brownian sheet based on $F_2(\x)=\frac{1}{2}x_2(1+x_1)$. Then \BQNY \pk{\sup_{\vk{x}\in E_3(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \leq \pk{\sup_{\vk{x}\in E_3(\delta)}\LT(W_1(\vk{x})\Big|W_1(\vk{1})=w\RT)>u} +\pk{\sup_{\vk{x}\in E_3(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \EQNY For $F_1(\x)=\frac{1}{2}x_1(1+x_2)$ and $\x\in E_3(\delta)$ we have \BQNY \seE_2=\{\x\in E_1(\delta):F_1(\x)=\frac{1}{2}\}=\LT\{(x_1,h(x_1)):x_1\in \LT[\frac{\sqrt{5}-1}{2}-\delta,\frac{\sqrt{5}-1}{2}+\delta \RT]\RT\},\ h(x_1)=\frac{1-x_1}{x_1}, \EQNY and \BQNY \lim_{\vn\rw 0}\sup_{\vk{z}\in \seE_2 }\underset{\vk{x}\neq \vk{y}}{\sup_{\abs{\vk{x}-\vk{z}},\abs{\vk{y}-\vk{z}}\leq \vn}}\frac{\abs{F_1(\vk{x})-F_1(\vk{y})-\frac{1}{2}(1+z_2)(x_1-y_1)-\frac{1}{2}z_1(x_2-y_2)}} {\abs{x_1-y_1}+\abs{x_2-y_2}}=0. \EQNY Further by \netheo{Thm1}, we have \BQNY \pk{\sup_{\vk{x}\in E_3(\delta)}\LT(W_1(\vk{x})\Big|W_1(\vk{1})=w\RT)>u} \sim 4\ln \frac{\sqrt{5}-1+2\delta}{\sqrt{5}-1-2\delta} u^2e^{-2u^2+2uw} , u\rw\IF. \EQNY Similarly, \BQNY \pk{\sup_{\vk{x}\in E_3(\delta)}\LT(W_2(\vk{x})\Big|W_2(\vk{1})=w\RT)>u} \sim 4\ln \frac{\sqrt{5}-1+2\delta}{\sqrt{5}-1-2\delta} u^2e^{-2u^2+2uw} , u\rw\IF. \EQNY We have for $u>0$ \BQNY &&\pk{\sup_{\vk{x}\in E_1(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u, \sup_{\vk{x}\in E_2(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\\ &&\leq \pk{\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})+(F(\x)+F(\y))w>2u}\\ &&\leq \pk{\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})>2u-2w}. 
\EQNY Since \BQNY \sigma^2_m&:=&\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}Var\LT(W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})\RT)\\ &=&\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}\E{\LT(W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})\RT)^2}\\ &=&\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}\LT(F(\x)+F(\y)\RT)\LT[1-\LT(F(\x)+F(\y)\RT)\RT]+2F(\x\wedge\y)\\ &<&\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}\LT(F(\x)+F(\y)\RT)\LT[2-\LT(F(\x)+F(\y)\RT)\RT]\\ &\leq& 1, \EQNY where we use the fact that for $(\x,\y)\in E_1(\delta)\times E_2(\delta)$ $$2F(\x\wedge\y)< F(\x)+F(\y).$$ Then by Borell inequality \BQNY \pk{\sup_{(\x,\y)\in E_1(\delta)\times E_2(\delta)}W(\x)+W(\y)-(F(\x)+F(\y))W(\vk{1})>2u-2w} &\leq& \exp\LT(-\frac{(2u-2w)^2}{2\sigma^2_m}\RT)\\ &=&o\LT( u^2e^{-2u^2+2uw}\RT),\ u\rw\IF. \EQNY Consequently, letting $u\rw\IF, \delta\rw 0$, we have \BQNY \pk{\sup_{\vk{x}\in [0,1]^2}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u}\sim \sum_{i=1}^2\pk{\sup_{\vk{x}\in E_i(\delta)}\LT(W(\vk{x})\Big|W(\vk{1})=w\RT)>u} \sim 8\ln (\sqrt{5}-1) u^2e^{-2u^2+2uw}. \EQNY \QED } \def\wk{\widetilde{\vk{k}}} \def\k{\vk{k}} \def\wl{\widetilde{\vk{l}}} \def\l{\vk{l}} \section{Appendix} Before stating the proofs of the next lemmas, we introduce some notation. Define the random field $Y(\x), \x \in \R^n$, by \BQN\label{Yc} Y(\vk{x})=\sum_{i=1}^{n}B^{(i)}(x_i) \EQN where the $B^{(i)}$'s are independent standard Brownian motions. We define \BQNY \mathcal{H}_{\vk{c}} [\vk{0},\vk{\lambda}]=\E{\sup_{\x\in[0,\vk{c}*\vk{\lambda}]}e^{\sqrt{2}Y(\x)-\Var(Y (\vk{x}))}} \EQNY and notice that \BQN\label{HH2} \lim_{\min_{1 \le i \le n} {\lambda_i}\rw\IF} \frac{1}{\Pi_{i=1}^n\lambda_i}\mathcal{H}_{\vk{c}} [\vk{0},\vk{\lambda}] = \E{ \frac{\sup_{\vk x\in \R^n }e^{ \sqrt{2}Y (\vk{x})-\Var(Y (\vk{x}))}}{\int_{\R^n }e^{ \sqrt{2}Y (\vk{x})-\Var(Y (\vk{x})) } d \vk x }} = \Pi_{i=1}^nc_i . \EQN See the recent contributions \cite{EHKD,SBK} for various results on Pickands constants.
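For orientation, in the one-dimensional case \eqref{HH2} reduces to the classical identity for the Pickands constant of standard Brownian motion, which equals $1$, namely \BQNY \lim_{\lambda\rw\IF}\frac{1}{\lambda}\E{\sup_{t\in[0,c_1\lambda]}e^{\sqrt{2}B^{(1)}(t)-t}}=c_1. \EQNY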
\BL\label{lem1} Let $h(\wx)$, $\wx=(x_1\ldot x_{n-1})\in\R^{n-1}$, be a continuously differentiable function. Assume that $X(\vk{x}),\ \vk{x}\in E,$ with $E=\{\vk{x}: x_i\in[S_i,T_i], i=1\ldot n-1,\abs{x_n-h(\wx)}\leq T\}, T>0, 0\leq S_i<T_i, i=1\ldot {n-1}$, is a Gaussian field with continuous sample paths, variance function $\sigma^2(\vk{x})$ and correlation function $r(\vk{x}, \vk{y})$, and let $g(\vk{x}), \vk{x}\in E$, be a continuous function. Further, assume that $\sigma^2(\vk{x})$ attains its maximum, equal to $1$, on $\widetilde{E}=\{\vk{x}: x_i\in[S_i,T_i], i=1\ldot n-1, x_n=h(\wx)\}$ and satisfies \BQN\label{var2} \lim_{\delta\rw 0}\underset{\x\in E\setminus \widetilde{E}}{\sup_{\abs{x_n-h(\wx)}\leq\delta}} \abs{\frac{1-\sigma(\vk{x})}{b\LT(x_n-h(\wx)\RT)^2}-1}=0, \EQN and \BQN\label{g2} \lim_{\delta\rw 0}\underset{\x\in E\setminus \widetilde{E}}{\sup_{\abs{x_n-h(\wx)}\leq\delta}} \abs{\frac{g(\x)}{c\LT(x_n-h(\wx)\RT)}-1}=0, \EQN for constants $b>0$ and $c\in\R$.\\ For any $\z\in\widetilde{E}$ and $\vk{x},\ \vk{y}\in E$, \BQN\label{r2} 1-r(\vk{x},\vk{y})\sim \sum_{i=1}^{n}c_i\abs{x_i-y_i}, \ \abs{\x-\z}, \ \abs{\y-\z} \rw0, \EQN where $ c_i>0$ are constants, and there exist positive constants $\mathcal{C}_1$ and $\mathcal{C}_2$ such that \BQN\label{r3} \mathcal{C}_1\sum_{i=1}^{n}\abs{x_i-y_i}<1-r(\vk{x},\vk{y})< \mathcal{C}_2\sum_{i=1}^{n}\abs{x_i-y_i} , \ \vk{x},\ \vk{y}\in E. \EQN If further there exists a positive constant $\mathcal{C}_3$ such that \BQN\label{boundh} \underset{\x\neq\y}{\sup_{\x,\y\in E}}\frac{\abs{(x_n-h(\wx))-(y_n-h(\wy))}}{\sum_{i=1}^{n} \abs{x_i-y_i}} \leq \mathcal{C}_3 .
\EQN \COM{\BQN\label{boundv} \underset{\x\neq\y}{\sup_{\x,\y\in E}}\frac{\abs{\frac{1}{\sigma(\x)}-\frac{1}{\sigma(\y)}}}{(\abs{x_n-h(\wx)}+\abs{y_n-h(\wy)})\sum_{i=1}^{n}\abs{x_i-y_i}}\leq \mathcal{C}_3, \EQN and \BQN\label{boundg} \underset{\x\neq\y}{\sup_{\x,\y\in E}}\frac{\abs{g(\x)-g(\y)}}{\sum_{i=1}^n\abs{x_i-y_i}}\leq \mathcal{C}_3, \EQN} then as $u\rw\IF$ \BQNY \pk{\sup_{\vk{x}\in E}(X(\vk{x})+g(\vk{x}))>u}&\sim& \LT(\Pi_{i=1}^{n-1}(T_i-S_i)\RT)\LT(\Pi_{i=1}^nc_i\RT) \int_{-\IF}^{\IF}e^{-bx^{2}+cx}dx u^{2n-1}\Psi(u)\\ &=&\LT(\Pi_{i=1}^{n-1}(T_i-S_i)\RT)\LT(\Pi_{i=1}^nc_i\RT) \sqrt{\frac{\pi}{b}}e^{\frac{c^2}{4b}}u^{2n-1}\Psi(u). \EQNY \EL \prooflem{lem1} In the following proof, without loss of generality, we assume that $c>0$.\\ First for $\vn_1,\vn\in(0,1)$ and $\lambda>0$ we introduce the following notation: \begin{align*} &E_0=\Pi_{i=1}^{n-1}[S_i,T_i],\ \wx=(x_1\ldot x_{n-1}),\ E(\vn_1)=\LT\{\vk{x}\in E: \abs{x_n-h(\wx)}\leq \vn_1,\ \wx\in E_0\RT\},\\ & E(u)=\LT\{\vk{x}: \abs{x_n-h(\wx)}\leq \frac{\ln u}{u},\ \wx\in E_0\RT\},\ J_{k}(u)=\LT[\frac{k\lambda}{u^{2}},\frac{(k+1)\lambda}{u^{2}}\RT], \ M_{n}(u)=\LT\lfloor\frac{u\ln u}{\lambda}\RT\rfloor,\\ &\wk=(k_1\ldot k_{n-1}),\ D_{\vk{k}}(u)=\Pi_{i=1}^nJ_{k_{i}}(u),\ \k\in\mathbb{N}^n,\ D_{\wk}(u)=\Pi_{i=1}^{n-1}J_{k_{i}}(u),\ \wk\in\mathbb{N}^{n-1},\\ & \mathcal{M}_1(u)=\LT\{\wk:D_{\wk}(u)\subset \prod_{i=1}^{n-1}\LT[S_i,T_i\RT] \RT\},\ \mathcal{M}_2(u)=\LT\{\wk:D_{\wk}(u)\cap \prod_{i=1}^{n-1}\LT[S_i,T_i\RT]\neq \emptyset \RT\},\\ &\mathcal{L}_1(u)=\{\vk{k}: D_{\vk{k}}(u)\subset E(u)\},\ \mathcal{L}_2(u)=\{\vk{k}: D_{\vk{k}}(u)\cap E(u)\neq \emptyset\}.\\ &\mathcal{K}_1(u)=\{(\vk{k},\vk{l}):\vk{k},\vk{l}\in\mathcal{L}_1(u),\vk{k}\neq \vk{l}, D_{\vk{k}}(u)\cap D_{\vk{l}}(u)\neq \emptyset\},\\ &\mathcal{K}_2(u)=\{(\vk{k},\vk{l}):\vk{k},\vk{l}\in\mathcal{L}_1(u),D_{\vk{k}}(u)\cap D_{\vk{l}}(u)=\emptyset, u^{-2}\abs{k_1-l_1}\lambda\leq \vn\},\\ 
&\mathcal{K}_3(u)=\{(\vk{k},\vk{l}):\vk{k},\vk{l}\in\mathcal{L}_1(u),D_{\vk{k}}(u)\cap D_{\vk{l}}(u)=\emptyset, u^{-2}\abs{k_1-l_1}\lambda\geq \vn\}. \end{align*} Note that $\vk{k}$ and $\vk{l}$ are $n$-dimensional, while $\wk$ and $\wl$ are $(n-1)$-dimensional.\\ It follows that for $u$ large enough \BQN\label{sli1} \Pi_0(u)\leq \pk{\sup_{\vk{x}\in E}(X(\vk{x})+g(\vk{x}))>u}\leq \Pi_0(u)+\Pi_1(u)+\Pi_2(u) \EQN where \BQNY &&\Pi_0(u):=\pk{\sup_{\vk{x}\in E(u)}(X(\vk{x})+g(\vk{x}))>u},\quad \Pi_1(u):=\pk{\sup_{\vk{x}\in E\setminus E(\vn_1)}(X(\vk{x})+g(\vk{x}))>u},\\ &&\Pi_2(u):=\pk{\sup_{\vk{x}\in E(\vn_1)\setminus E(u)}(X(\vk{x})+g(\vk{x}))>u}. \EQNY By \eqref{var2} and \eqref{g2}, for any small $\vn\in(0,1)$ there exists $\vn_1$ small enough such that \BQN\label{var3} 1-(1+\vn)b(x_n-h(\wx))^2\leq \sigma(\x)\leq 1-(1-\vn)b(x_n-h(\wx))^2 \EQN and \BQN\label{g3} c(x_n-h(\wx))-\vn\abs{x_n-h(\wx)}\leq g(\x)\leq c(x_n-h(\wx))+\vn\abs{x_n-h(\wx)} \EQN hold for $\x\in E(\vn_1)$.\\ By continuity of $\sigma(\x)$ and $ E(\vn_1)\supset \widetilde{E}$, we have $$\sup_{\vk{x}\in E\setminus E(\vn_1)}\sigma(\vk{x})<1-\delta_1$$ for some $\delta_1\in (0,1)$, which combined with the Borell--TIS inequality \cite{AdlerTaylor} leads to \BQN\label{boundofpi1} \Pi_1(u)\leq e^{-\frac{\LT(u-\mathbb{Q}_1-\mathbb{Q}_2\RT)^2}{2(1-\delta_1)^2}} =o\LT(\Psi(u)\RT),\ u\rw\IF, \EQN where $\mathbb{Q}_1=\sup_{\vk{x}\in E\setminus E(\vn_1)}g(\vk{x})<\IF$ and $\mathbb{Q}_2=\E{\sup_{\vk{x}\in E\setminus E(\vn_1)} X(\vk{x})}<\IF$.\\ In light of \eqref{var3} and \eqref{g3}, we have for $u$ large enough \BQNY \inf_{\vk{x}\in E(\vn_1)\setminus E(u)}\frac{1}{\sigma(\vk{x})}&\geq& 1+\mathbb{Q}_3\LT(\frac{\ln u}{u}\RT)^2, \EQNY and \BQNY \sup_{\vk{x}\in E(\vn_1)\setminus E(u)}g(\vk{x})&\leq&\mathbb{Q}_4\LT(\frac{\ln u}{u}\RT).
\EQNY By \eqref{r3}, we have \BQNY \E{\LT(\overline{X}(\vk{x})-\overline{X}(\vk{y})\RT)^2} =2(1-r(\vk{x},\vk{y}))\leq 2\mathcal{C}_2\sum_{i=1}^{n}\abs{x_i-y_i}, \EQNY which combined with Theorem 8.1 in \cite{Pit96} yields for $u$ large enough \BQN\label{err0} \Pi_2(u) &\leq& \pk{\sup_{\vk{x}\in E(\vn_1)\setminus E(u)}X(\vk{x})>u-\mathbb{Q}_4\LT(\frac{\ln u}{u}\RT)}\nonumber\\ &\leq& \pk{\sup_{\vk{x}\in E(\vn_1)\setminus E(u)}\overline{X}(\vk{x})>\LT(u-\mathbb{Q}_4\LT(\frac{\ln u}{u}\RT)\RT)\LT(1+\mathbb{Q}_3\LT(\frac{\ln u}{u}\RT)^2\RT)}\nonumber\\ &\leq&\mathbb{Q}_5u^{2n}\Psi\LT(\LT(u-\mathbb{Q}_4\LT(\frac{\ln u}{u}\RT)\RT)\LT(1+\mathbb{Q}_3\LT(\frac{\ln u}{u}\RT)^2\RT)\RT)\nonumber\\ &=&o\LT(\Psi(u)\RT),\ u\rw\IF. \EQN Combining \eqref{sli1}, \eqref{boundofpi1} and \eqref{err0} with the fact that $ \Pi_0(u)\geq \pk{X(\z)>u}=\Psi(u)$ for $\z\in\widetilde{E}$ leads to \BQN\label{asym1} \pk{\sup_{\vk{x}\in E}(X(\vk{x})+g(\vk{x}))>u}\sim \Pi_0(u),\ u\rw\IF. \EQN Next we focus on $\Pi_0(u)$. By \eqref{boundh}, we have \BQNY \sup_{\vk{k}\in\mathcal{L}_2(u)}\sup_{\x,\y\in D_{\vk{k}}(u)} \abs{(x_n-h(\wx))-(y_n-h(\wy))}\leq \mathbb{Q}_6 \frac{\lambda}{u^2} \EQNY and \BQNY &&\sup_{\vk{k}\in\mathcal{L}_2(u)}\sup_{\x,\y\in D_{\vk{k}}(u)}\abs{(x_n-h(\wx))^2-(y_n-h(\wy))^2}\\ &&\leq \sup_{\vk{k}\in\mathcal{L}_2(u)}\sup_{\x,\y\in D_{\vk{k}}(u)}(\abs{x_n-h(\wx)}+\abs{y_n-h(\wy)})\abs{(x_n-h(\wx))-(y_n-h(\wy))}\\ &&\leq\mathbb{Q}_7 \frac{\lambda\ln u }{u^3}.
\EQNY In view of \eqref{var3} and \eqref{g3}, we notice that for any $\vk{k}\in\mathcal{L}_2(u)$ \BQNY u_{\vk{k}}^{+}&:=&\sup_{\x\in D_{\vk{k}}(u)}(u-g(\x))\frac{1}{\sigma(\x)}\\ &\leq& \sup_{\x\in D_{\vk{k}}(u)}(u-c(x_n-h(\wx))+\vn \abs{x_n-h(\wx)}) (1+(1+\vn)b(x_n-h(\wx))^2)\\ &\leq&\LT(u-\inf_{\x\in D_{\vk{k}}(u)}(c(x_n-h(\wx)))+\vn\sup_{\x\in D_{\vk{k}}(u)}\abs{x_n-h(\wx)}\RT)\LT(1+(1+\vn)b\sup_{\x\in D_{\vk{k}}(u)}(x_n-h(\wx))^2\RT)\\ &\leq& \LT(u-\inf_{y\in J_{l}(u)}(cy)+\vn\sup_{y\in J_{l}(u) }\abs{y}+\mathbb{Q}_8 \frac{\lambda}{u^2}\RT)\LT(1+(1+\vn)b\sup_{y\in J_{l}(u)}(y)^2+\mathbb{Q}_9 \frac{\lambda\ln u }{u^3}\RT)\\ &=:& u^{+}_l, \EQNY where $l$ satisfies \BQN\label{conditionl} J_{l}(u)\cap \LT[\inf_{\x\in D_{\vk{k}}(u)}(x_n-h(\wx)),\sup_{\x\in D_{\vk{k}}(u)}(x_n-h(\wx))\RT]\neq \emptyset. \EQN Similarly, we define \BQNY u_{\vk{k}}^{-}:=\inf_{\x\in D_{\vk{k}}(u)}(u-g(\x))\frac{1}{\sigma(\x)} \EQNY and \BQNY u^{-}_l= \LT(u-\sup_{y\in J_{l}(u)}(cy)-\vn\sup_{y\in J_{l}(u) }\abs{y}-\mathbb{Q}_8 \frac{\lambda}{u^2}\RT)\LT(1+(1-\vn)b\inf_{y\in J_{l}(u)}(y)^2-\mathbb{Q}_9 \frac{\lambda\ln u }{u^3}\RT), \EQNY so that $u_{\vk{k}}^{-}\geq u^{-}_l$ with $l$ satisfying \eqref{conditionl}.
Considering $\k\in\mathcal{L}_2(u)$, if we fix $\wk$ first, then for all $k_n$ such that $\k\in\mathcal{L}_2(u)$ we can choose $l$ satisfying \eqref{conditionl} from $-M_n(u)-1$ to $M_n(u)+1$.\\ The Bonferroni inequality leads to \BQN &&\Pi_0(u)\leq\sum_{\vk{k}\in \mathcal{L}_2(u)} \pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}(X(\vk{x})+g(\vk{x}))>u},\label{up1}\\ &&\Pi_0(u)\geq\sum_{\vk{k}\in \mathcal{L}_1(u)} \pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}(X(\vk{x})+g(\vk{x}))>u}-\sum_{i=1}^{3}\mathcal{A}_i(u),\label{low1} \EQN where \BQNY \mathcal{A}_i(u)&=&\sum_{(\vk{k}, \vk{l})\in\mathcal{K}_i(u)}\pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}(X(\vk{x})+g(\vk{x}))>u, \sup_{\vk{x}\in D_{\vk{l}}(u)}(X(\vk{x})+g(\vk{x}))>u}\\ &\leq&\sum_{(\vk{k},\vk{l})\in\mathcal{K}_i(u)}\pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-}, \sup_{\vk{x}\in D_{\vk{l}}(u)}\overline{X}(\vk{x})>u_{\vk{l}}^{-}},\ i=1,2,3. \EQNY We set \BQNY X_{u,\vk{k}}(\vk{x})=\overline{X}(k_1u^{-2}\lambda+x_1\ldot k_nu^{-2}\lambda+x_n),\ \vk{x}\in D_{\vk{0}}(u),\ \vk{k}\in\mathcal{L}_2(u).
\EQNY Then by \eqref{r2} and \nelem{lem0}, we have \BQN\label{EQA11} \lim_{u\rw\IF}\sup_{\vk{k}\in\mathcal{L}_2(u)}\abs{ \frac{\pk{\sup_{\vk{x}\in D_{\vk{0}}(u)}X_{u,\vk{k}}(\vk{x})>u_{\vk{k}}^{-}}}{\Psi(u_{\vk{k}}^{-})} -\mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}] }=0, \EQN where $\vk{\lambda}=(\lambda\ldot \lambda).$ Further, by \eqref{HH2} \begin{align}\label{upper} &\sum_{\vk{k}\in \mathcal{L}_2(u)} \pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}(X(\vk{x})+g(\vk{x}))>u}\nonumber\\ &\leq \sum_{\vk{k}\in \mathcal{L}_2(u)}\pk{\sup_{\vk{x}\in D_{\vk{0}}(u)}X_{u,\vk{k}}(\vk{x})>u_{\vk{k}}^{-}}\nonumber\\ &\sim \mathcal{H}_{\vk{c}}[\vk{0},\vk{\lambda}] \sum_{\vk{k}\in \mathcal{L}_2(u)}\Psi(u_{\vk{k}}^{-})\nonumber\\ &\leq \mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}] \sum_{\wk\in\mathcal{M}_2(u)}\LT(\sum_{l=-M_{n}(u)-1}^{M_{n}(u)+1} \Psi(u_{l}^{-})\RT)\nonumber\\ &\sim \mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}]\Psi(u) \sum_{\wk\in\mathcal{M}_2(u)}\LT(\sum_{l=-M_{n}(u)-1}^{M_{n}(u)+1} e^{\sup_{y\in [l,l+1]}\LT(cy+\vn\abs{y}\RT)\frac{\lambda}{u}-(1-\vn)b\inf_{y\in [l,l+1]}\frac{(y\lambda)^2}{u^2}}\RT)\nonumber\\ &\sim \mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}]\Psi(u) \sum_{\wk\in\mathcal{M}_2(u)}\frac{u}{\lambda}\int_{-\IF}^{\IF} e^{-(1-\vn)bx^2+cx+\vn\abs{x}}dx\nonumber\\ &\sim \frac{\mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}]}{\lambda^n}\Psi(u) \LT(\Pi_{i=1}^{n-1}(T_i-S_i)\RT)u^{2n-1}\int_{-\IF}^{\IF}e^{-(1-\vn)bx^2+cx+\vn\abs{x}}dx \nonumber\\ &\sim \LT(\Pi_{i=1}^{n-1}(T_i-S_i)\RT)\LT(\Pi_{i=1}^nc_i\RT) \int_{-\IF}^{\IF}e^{-bx^{2}+cx}dx u^{2n-1}\Psi(u), \end{align} as $u\rw\IF, \lambda\rw\IF, \vn\rw0$. Similarly, \BQN &&\sum_{\vk{k}\in \mathcal{L}_1(u)} \pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}(X(\vk{x})+g(\vk{x}))>u}\geq \sum_{\vk{k}\in \mathcal{L}_1(u)}\pk{\sup_{\vk{x}\in D_{\vk{0}}(u)}X_{u,\vk{k}}(\vk{x})>u_{\vk{k}}^{+}}\nonumber\\ &&\quad\quad\geq \LT(\Pi_{i=1}^{n-1}(T_i-S_i)\RT)\LT(\Pi_{i=1}^nc_i\RT) \int_{-\IF}^{\IF}e^{-bx^{2}+cx}dx u^{2n-1}\Psi(u), \ u\rw\IF, \lambda\rw\IF, \vn\rw0.
\EQN Next we will show that $\mathcal{A}_i(u), i=1,2,3$ are all negligible compared with $$\sum_{\vk{k}\in \mathcal{L}_1(u)}\pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}(X(\vk{x})+g(\vk{x}))>u}.$$ For any $(\vk{k},\vk{l})\in \mathcal{K}_1(u)$, without loss of generality, we assume that $k_1+1=l_1$. Let $$D_{\vk{k}}^1(u)=\LT[k_1\frac{\lambda}{u^{2}},((k_1+1)\lambda-\sqrt{\lambda})\frac{1}{u^{2}}\RT] \times \Pi_{j=2}^{n}J_{k_j}(u), \ D_{\vk{k}}^2(u)=\LT[((k_1+1)\lambda-\sqrt{\lambda})\frac{1}{u^{2}},(k_1+1)\frac{\lambda}{u^{2}}\RT] \times \Pi_{j=2}^{n}J_{k_j}(u).$$ Then \BQNY &&\pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}, \sup_{\vk{x}\in D_{\vk{l}}(u)}\overline{X}(\vk{x})>u_{\vk{l}}^{-\vn}}\\ &&\leq\pk{\sup_{\vk{x}\in D^1_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}, \sup_{\vk{x}\in D_{\vk{l}}(u)}\overline{X}(\vk{x})>u_{\vk{l}}^{-\vn}}+\pk{\sup_{\vk{x}\in D^2_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}}. \EQNY Analogously to \eqref{EQA11}, we have \BQNY \lim_{u\rw\IF}\sup_{\vk{k}\in\mathcal{L}_1(u)}\abs{ \frac{\pk{\sup_{\vk{x}\in D^2_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}}}{\Psi(u_{\vk{k}}^{-\vn})} -\mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}_1] }=0, \EQNY where $\vk{\lambda}_1=(\sqrt{\lambda},\lambda\ldot \lambda).$ Moreover, in light of \eqref{r2} and \cite{EGRFRV2017}[Lemma 5.4] we have for $u$ large enough \BQNY \pk{\sup_{\vk{x}\in D^1_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}, \sup_{\vk{x}\in D_{\vk{l}}(u)}\overline{X}(\vk{x})>u_{\vk{l}}^{-\vn}}\leq \mathbb{Q}_7\lambda^4e^{-\mathbb{Q}_8\lambda^{1/4}} \Psi(\min(u_{\vk{k}}^{-\vn},u_{\vk{l}}^{-\vn})), \EQNY and for $(\vk{k},\vk{l})\in \mathcal{K}_2(u)$, \BQNY \pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}, \sup_{\vk{x}\in D_{\vk{l}}(u)}\overline{X}(\vk{x})>u_{\vk{l}}^{-\vn}} \leq\mathbb{Q}_9\lambda^4e^{-\mathbb{Q}_{10} \abs{\sum_{i=1}^n(k_i-l_i)^2}^{1/4}\lambda^{1/2}} \Psi(\min(u_{\vk{k}}^{-\vn},u_{\vk{l}}^{-\vn})), \EQNY where
$\mathbb{Q}_i, i=7,8,9,10$ are positive constants independent of $u$ and $\lambda$.\\ Since $D_{\vk{k}}(u)$ has at most $3^n-1$ neighbors, we have \BQN \mathcal{A}_1(u)&\leq& \sum_{(\vk{k},\vk{l})\in\mathcal{K}_1(u)}\pk{\sup_{\vk{x}\in D_{\vk{k}}(u)}\overline{X}(\vk{x})>u_{\vk{k}}^{-\vn}, \sup_{\vk{x}\in D_{\vk{l}}(u)}\overline{X}(\vk{x})>u_{\vk{l}}^{-\vn}}\nonumber\\ &\leq& 2\sum_{(\vk{k},\vk{l})\in\mathcal{K}_1(u)} \LT(\mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}_1] +\mathbb{Q}_7\lambda^4e^{-\mathbb{Q}_8\lambda^{1/4}}\RT) \Psi(\min(u_{\vk{k}}^{-\vn},u_{\vk{l}}^{-\vn}))\nonumber\\ &\leq& 2\times (3^n-1)\sum_{\vk{k}\in\mathcal{L}_1(u)} \LT(\mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}_1] +\mathbb{Q}_7\lambda^4e^{-\mathbb{Q}_8\lambda^{1/4}}\RT) \Psi(u_{\vk{k}}^{-\vn})\nonumber\\ &=&o\LT(u^{2n-1}\Psi(u)\RT),\ u\rw\IF,\ \lambda\rw\IF, \EQN and \BQN \mathcal{A}_2(u)&\leq& \sum_{(\vk{k},\vk{l})\in\mathcal{K}_2(u)}\mathbb{Q}_9\lambda^4e^{-\mathbb{Q}_{10} \abs{\vk{k}-\vk{l}}^{1/2}\lambda^{1/2}} \Psi(\min(u_{\vk{k}}^{-\vn},u_{\vk{l}}^{-\vn}))\nonumber\\ &\leq&\sum_{\vk{k}\in\mathcal{L}_1(u)}\mathbb{Q}_9\lambda^4\Psi\LT(u_{\vk{k}}^{-\vn}\RT) \sum_{\abs{\vk{k}}\geq 1 }e^{-\mathbb{Q}_{10} \abs{\vk{k}}^{1/2}\lambda^{1/2}}\nonumber\\ &\leq&\sum_{\vk{k}\in\mathcal{L}_1(u)}\mathbb{Q}_9\lambda^4\Psi\LT(u_{\vk{k}}^{-\vn}\RT) e^{-\mathbb{Q}_{10}\lambda^{1/2}}\nonumber\\ &=&o\LT(u^{2n-1}\Psi(u)\RT), \ u\rw\IF, \lambda\rw\IF. \EQN For $(\vk{k},\vk{l})\in \mathcal{K}_3(u)$, $\abs{x_{n}-y_{n}}\geq \vn/2$ holds with $\vk{x}\in D_{\vk{k}}(u), \ \vk{y}\in D_{\vk{l}}(u)$.
Then by \eqref{r3}, for $u$ large enough \BQNY \Var\LT(\overline{X}(\vk{x})+\overline{X}(\vk{y})\RT)=2(1+r(\vk{x},\vk{y})) \leq2+2\sup_{\abs{x_n-y_n}\geq \vn/2}r(\vk{x},\vk{y})\leq 4-\delta \EQNY holds with $\delta\in(0,1)$ for $(\vk{k},\vk{l})\in \mathcal{K}_3(u), \vk{x}\in D_{\vk{k}}(u), \vk{y}\in D_{\vk{l}}(u).$ Further, the Borell-TIS inequality leads to \BQN\label{err3} \mathcal{A}_3(u)&\leq& \sum_{(\vk{k},\vk{l})\in\mathcal{K}_3(u)} \pk{\sup_{(\vk{x},\vk{y})\in D_{\vk{k}}(u)\times D_{\vk{l}}(u) }\overline{X}(\vk{x})+\overline{X}(\vk{y})>2(u-\mathbb{Q}_{11})} \nonumber\\ &\leq& \sum_{(\vk{k},\vk{l})\in\mathcal{K}_3(u)} e^{-\frac{(2u-2\mathbb{Q}_{11}-\mathbb{Q}_{12})^2}{2(4-\delta)}}\nonumber\\ &\leq& \mathbb{Q}u^{2n} e^{-\frac{(2u-2\mathbb{Q}_{11}-\mathbb{Q}_{12})^2}{2(4-\delta)}}\nonumber\\ &=&o\LT(u^{2n-1}\Psi(u)\RT), \ u\rw\IF, \EQN where $\mathbb{Q}_{11}=\sup_{\vk{x}\in E}g(\vk{x})<\IF$ and $\mathbb{Q}_{12}=2\E{\sup_{\vk{x}\in E}\overline{X}(\vk{x})}<\IF$.\\ Inserting \eqref{upper}-\eqref{err3} into \eqref{up1} and \eqref{low1} yields that \BQNY \pk{\sup_{\vk{x}\in E(u)}(X(\vk{x})+g(\vk{x}))>u}\sim \LT(\Pi_{i=1}^{n-1}(T_i-S_i)\RT)\LT(\Pi_{i=1}^nc_i\RT) \int_{-\IF}^{\IF}e^{-bx^{2}+cx}dx u^{2n-1}\Psi(u), \ u\rw\IF, \EQNY which compared with \eqref{asym1} implies the final result. \QED \BL\label{lem0} Let $X_{u,k}(\vk{x}), \ k\in K_u, \vk{x} \in D(u)$ be a family of centered Gaussian fields \eE{with continuous sample paths} where $D(u)=\Pi_{i=1}^n[0, \lambda_i u^{-2}]$ for some $\vk{\lambda}>\vk{0}$. Let further $u_k, k\in K_u$ be given positive constants satisfying \BQN\label{uuk} \lim_{u\rw\IF}\sup_{k\in K_u}\LT|\frac{u_k}{u}-1\RT|=0. \EQN \eE{If} $X_{u,k}$ has unit variance and correlation function $r_k$ \eE{(not depending on $u$)} satisfying \eqref{r2} uniformly with respect to $k\in K_u$, then \BQNY \lim_{u\rw\IF}\sup_{k\in K_u}\LT| \frac{\pk{\sup_{\vk{x}\in D(u)}X_{u,k}(\vk{x})>u_{k}}}{\Psi(u_{k})} -\mathcal{H} _{\vk{c}}[\vk{0},\vk{\lambda}] \RT|=0.
\EQNY \EL \prooflem{lem0} The proof follows along the same lines as that of \cite{Uniform2016}[Theorem 2.1]. \COM{ Conditioning on $\mathcal{A}_{u}(w,k):=\LT\{X_{u,k}(\vk{0})= u_k-\frac{w}{u_k}\RT\}, w\in \R$, we have for all $u$ large enough, \begin{align*} &\frac{\pk{\sup_{\vk{x}\in D(u)}X_{u,k}(\vk{x})>u_{k}}}{\Psi(u_{k})}\\ &=\frac{1}{\sqrt{2\pi}u_k\Psi(u_{k})}\int_{\R}e^{-\frac{1}{2}\LT(u_k-\frac{w}{u_k}\RT)^2} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}X_{u,k}(u^{-2}\vk{x})>u_{k}\Bigl\lvert\mathcal{A}_{u}(w,k) }dw\\ &=\frac{e^{-\frac{u_k^2}{2}}}{\sqrt{2\pi}u_k\Psi(u_{k})}\int_{\R} e^{w-\frac{w^2}{2u^2_k}} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\mathcal{X}_u^w(\vk{x},k)>w}dw\\ &=:\frac{e^{-\frac{u_k^2}{2}}}{\sqrt{2\pi}u_k\Psi(u_{k})} I_{u,k}, \end{align*} where $$\mathcal{X}_u^w(\vk{x},k)=u_k\LT(X_{u,k}(u^{-2}\vk{x})-u_k\RT)+w \Bigl\lvert\mathcal{A}_{u}(w,k).$$ By \eqref{uuk}, we have \BQNY \lim_{u\rw\IF}\sup_{k\in K_u}\abs{\frac{e^{-\frac{u_k^2}{2}}}{\sqrt{2\pi}u_k\Psi(u_{k})}-1}=0. \EQNY Thus in order to establish the proof, it suffices to prove that \BQN\label{IR} \lim_{u\rw\IF}\sup_{k\in K_u}\LT|I_{u,k}- \Hn _{\vk{a}}[\vk{0},\vk{\lambda}]\RT|=0.
\EQN For $W$ some positive constant, it follows that \BQNY &&\sup_{k\in K_u}\LT|I_{u,k}- \Hn _{\vk{a}}[\vk{0},\vk{\lambda}]\RT|\\ &&\leq \sup_{k\in K_u}\abs{\int_{-W}^{W} \LT(e^{w-\frac{w^2}{2u^2_k}} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\mathcal{X}_u^w(\vk{x},k)>w}-e^{w} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\zeta(\vk{x})>w}\RT)dw}\\ &&\quad +\sup_{k\in K_u}\int_{(-\IF,-W]\cup[W,\IF)}e^{w-\frac{w^2}{2u^2_k}} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\mathcal{X}_u^w(\vk{x},k)>w}dw\\ &&\quad +\int_{(-\IF,-W]\cup[W,\IF)}e^{w} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\zeta(\vk{x})>w}dw\\ &&=:I_1+I_2+I_3, \EQNY where $\zeta(\vk{x})=Y(\vk{ax})-\Var(Y(\vk{ax}))$ with \begin{align*} Y(\vk{ax})=\sum_{i=1}^{n-1}B^i(a_ix_i)+B^{n}\LT(\sum_{i=2}^n a_ix_i\RT) \end{align*} where $B^i(x),\ x\in\mathbb{R}, i=1,2\ldot n$ are independent standard Brownian motions.\\ Next, we give upper bounds for $I_i(u), i=1,2,3$.\\ \underline{Upper bound for $I_1(u)$}. Direct calculations show that \BQNY \E{\mathcal{X}_u^w(\vk{x},k)}=-u_k^2(1-r_k(u^{-2}\vk{x}))+w(1-r_k(u^{-2}\vk{x})) \EQNY and \BQNY \Var\LT(\mathcal{X}_u^w(\vk{x},k)-\mathcal{X}_u^w(\vk{y},k)\RT)= u_k^2\LT(\Var(X_{u,k}(\vk{x})-X_{u,k}(\vk{y})) -\LT(r_k(u^{-2}\vk{x})-r_k(u^{-2}\vk{y})\RT)^2\RT). \EQNY By (\ref{r2}) and (\ref{uuk}), it follows uniformly with respect to $\vk{x}\in[\vk{0},\vk{\lambda}], k\in K_u, w\in[-W,W]$ that \BQN\label{XM12} \E{\mathcal{X}_u^w(\vk{x},k)}\rw -\sum_{i=1}^{n-1}\abs{a_i x_i}-\abs{\sum_{i=2}^{n}a_ix_i} \EQN as $u\rw\IF$ and also for any $t,t'\in [S_1,S_2]$ uniformly with respect to $k\in K_u$, any $w_i\in \R$, \BQN\label{XVar} \Var\LT(\mathcal{X}_u^w(\vk{x},k)-\mathcal{X}_u^w(\vk{y},k)\RT)\rw 2\sum_{i=1}^{n-1}\abs{a_i (x_i-y_i)}+2\abs{\sum_{i=2}^{n}a_i(x_i-y_i)}, \EQN as $u\rw\IF$. 
Combination of (\ref{XM12}) and (\ref{XVar}) shows that the convergence of finite-dimensional distributions of $$\mathcal{X}_u^w(\vk{x},k),\ \vk{x}\in[\vk{0},\vk{\lambda}]$$ is $\zeta(\vk{x}), \ \vk{x}\in[\vk{0},\vk{\lambda}]$. Moreover, by (\ref{r2}) we have that there exists a constant $C>0$ such that for all $\vk{x}, \vk{y}\in[\vk{0},\vk{\lambda}]$ and all large $u$ \BQN\label{HC} \sup_{k\in K_u}\Var\LT(\mathcal{X}_u^w(\vk{x},k)-\mathcal{X}_u^w(\vk{y},k)\RT) \leq C\sum_{i=1}^{n}\abs{x_i-y_i}, \EQN which combined with \eqref{XM12} implies that the family of distributions $$\pk{\mathcal{X}_u^w(\vk{x},k)\in(\cdot)}$$ is uniformly tight with respect to $k\in K_u$ and $w$ in a compact set of $\R$. Consequently, \BQNY \{\mathcal{X}_u^w(\vk{x},k),\ \vk{x}\in[\vk{0},\vk{\lambda}]\} \quad \text{weakly converges to} \quad \{\zeta(\vk{x}),\ \vk{x}\in[\vk{0},\vk{\lambda}]\}. \EQNY Let $$\mathbb{A}:=\left\{v: \pk{\sup_{\vk{x}\in[\vk{0},\vk{\lambda}]}(\zeta(\vk{x})-v)>0} \text{is continuous at } v\right\}.$$ Note that if $w\in \mathbb{A}$, then $$\pk{\sup_{\vk{x}\in[\vk{0},\vk{\lambda}]}(\zeta(\vk{x})-v)>x}$$ is continuous with respect to $x$ at \eE{$x\not=0$}. Hence by the continuity of supremum functional, we have that as \BQNY c_u(w):&=&\sup_{k\in K_u}\LT|\pk{\sup_{\vk{x}\in[\vk{0},\vk{\lambda}]}\mathcal{X}_u^w(\vk{x},k)>w} -\pk{\sup_{\vk{x}\in[\vk{0},\vk{\lambda}]}\zeta(\vk{x})>w}\RT|\rw 0,\ \ u\rw\IF, \EQNY for $w\in \mathbb{A}.$ Noting that $mes(\mathbb{A}^c)=0$ and by dominated convergence theorem, we have that \BQNY I_1(u)\leq e^{W}\int_{[-W,W] \cap \mathbb{A} }c_u(w)dw+2W e^{W}\sup_{\vk{w}\in [-W, W]}\left|1-e^{-\frac{w^2}{2u_k^2}}\right|\rw 0, \quad u\rw\IF. \EQNY \underline{Upper bound for $I_2(u)$}. 
Using (\ref{XM12}) and (\ref{XVar}), for some $\delta\in(0,1/2)$, $|w|>W$ with $W$ sufficiently large and all $u$ large we have \BQNY \sup_{k\in K_u,\vk{x}\in[0,\vk{\lambda}]}\E{\mathcal{X}_u^w(\vk{x},k)} \leq\mathbb{C}_1+\delta|w| \EQNY and \BQNY \sup_{k\in K_u,\vk{x}\in[0,\vk{\lambda}]}\Var\LT(\mathcal{X}_u^w(\vk{x},k)\RT) \leq \mathbb{C}_2. \EQNY by (\ref{HC}) and Theorem 8.1 of \cite{Pit96} \BQNY I_2(u)&=&\sup_{k\in K_u}\int_{(-\IF,-W]\cup[W,\IF)}e^{w-\frac{w^2}{2u^2_k}} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\mathcal{X}_u^w(\vk{x},k)>w}dw\\ &\leq&e^{-W}+\sup_{k\in K_u}\int_{W}^{\IF)}e^{w} \pk{\sup_{\vk{x}\in [\vk{0},\vk{\lambda}]}\LT(\mathcal{X}_u^w(\vk{x},k) -\E{\mathcal{X}_u^w(\vk{x},k)}\RT)>(1-\delta)w-\mathbb{C}_1}dw\\ &\leq& e^{-W}+\int_W^{\IF}e^{w}\mathbb{C}_3{w}^2 \Psi\LT(\frac{(1-\delta)w-\mathbb{C}_1}{\mathbb{C}_2}\RT)d w\\ &=:&A_1(W)\rw 0,\ W\rw\IF, \EQNY \underline{Upper bound for $I_3(u)$}. Borell-TIS inequality (see, e.g., \cite{AdlerTaylor}) implies that $$I_3(u)\rw 0,\ u, W\rw\IF,$$ hence (\ref{IR}) follows. \QED \begin{remark} Following \cite{EGRFRV2017} [Lemma 5.2], in \nelem{lem0} the results of the two dimensional cases holds. \end{remark} } \BT \label{ThDK} Assume that $f:[0,1]^n\to \mathbf{R}$ is a measurable bounded positive function. Let $$F(x_1\ldot x_n)=\int_0^{x_1}\dots \int_0^{x_n} f(t_1\ldot t_n)dt_1\dots dt_{n}$$ and assume that $F(1\ldot 1)=1$. Then the projection of the level set $\mathcal{D}=\{(x_1\ldot x_n): F(x_1\ldot x_n)=1/2\}$ onto $\mathbb{R}^{n-1}$ is a Jordan measurable set of nonzero measure. \ET \prooftheo{ThDK} The assumption on $f$ tells us that there is a positive constant $M$ so that \BQN \label{bern} 0< f(x) \le M \EQN for every point $x\in [0,1]^n$. Since $f$ is defined on $[0,1]^n$ and positive, we can extend it to $\mathbb{R}^n$ subject to the condition \eqref{bern}. This can be done in an explicit way by using symmetries w.r.t. all sides of the cube $[0,1]^n$.
Namely, we say that two points $x=(x_1\ldot x_n),y=(y_1\ldot y_n)\in\mathbb{R}^n$ are equivalent if there are integers $i,k,m$ so that $x_i-y_i = 2k$ or $x_i+y_i=2m+1$. Now every $y\in\mathbb{R}^n$ has its unique representative $x$ in $[0,1]^n$. Then we set $f(y)=f(x)$ for $y\not\in [0,1]^n$. By the definition of $F$ we can find the unique global solution $$x_n=h(x_1\ldot x_{n-1})$$ of the equation $$F(x_1\ldot x_{n-1},x_n)=\frac{1}{2}$$ for $$x'=(x_1\ldot x_{n-1})\in (0,\infty)^{n-1},$$ because $F$ is strictly increasing w.r.t. all variables. Here $h$ is strictly decreasing w.r.t. all variables $x_1\ldot x_{n-1}$. Moreover $h$ is continuous. Further we are interested in the set $$L=\{x'=(x_1\ldot x_{n-1}): x'\in [0,1]^{n-1}, F(x',h(x'))=1/2\}$$ and have to prove that it is Jordan measurable. We have that $(x_1\ldot x_{n-1})\in L$ if and only if $(x_1\ldot x_{n-1})\in [0,1]^{n-1}$ and $0\le h(x_1\ldot x_{n-1})\le 1$. The boundary of this set is contained in the union of the following two sets $$A:=\partial [0,1]^{n-1}, \quad \text{ and } B:=\{(x_1\ldot x_{n-1})\in [0,1]^{n-1}: h(x_1\ldot x_{n-1})=1\}.$$ By using again the previous procedure, we find that there is a mapping $g:[0,1]^{n-2}\to [0,1]$ so that $$B=\{(x_1\ldot x_{n-2},g(x_1\ldot x_{n-2})): (x_1\ldot x_{n-2})\in[0,1]^{n-2}\}.$$ Let $T=\{(x_1\ldot x_{n-2}, x_{n-1}): (x_1\ldot x_{n-2})\in[0,1]^{n-2}, 0\le x_{n-1}\le g(x_1\ldot x_{n-2})\}$. Since $g$ is continuous, it follows that $T$ is Jordan measurable and its boundary contains $B$. Namely $$\mu(T)=\int_{[0,1]^{n-2}} g(x_1\ldot x_{n-2})dx_1\dots dx_{n-2}.$$ Thus $B$ has Jordan $(n-1)$-measure equal to zero. This implies that $\partial L$ has Lebesgue measure 0 (with respect to Lebesgue measure on $\R^{n-1}$), and in particular $L$ is measurable. Further we show that its measure is not zero. Since $F(1\ldot 1,1)=1$ and $F(0\ldot 0,0)=0$, it follows that there is $t\in(0,1)$ so that $F(t\ldot t,t)=1/2$.
Then $h(t\ldot t)=t$, and thus by continuity it follows that $T=(t\ldot t)$ is an interior point of $L$; namely, for $\epsilon>0$ small enough so that $t+\epsilon<1$, there is a ball $B$ centered at $T$ with a positive radius $\delta$ such that $h(B)\subset (t-\epsilon,t+\epsilon)$. Consequently, $L$ has positive Lebesgue measure, which establishes the proof. \QED \section*{Acknowledgments} We thank Professor Enkelejd Hashorva, who gave many useful suggestions that greatly improved our manuscript. Support from Swiss National Science Foundation Grant no. 200021-175752 is kindly acknowledged. \bibliographystyle{ieeetr}
\section{Abstract} We build upon previous work out of UC Berkeley's energy, controls, and applications laboratory (eCal) that developed a model for price prediction of the energy day-ahead market (DAM) and a stochastic load schedule for distributed energy resources (DER) with a DAM-based objective\cite{Travacca}. In similar fashion to the work of Travacca et al., in this project we take the standpoint of a DER aggregator pooling a large number of electricity consumers - each of which has an electric vehicle and solar PV panels - to bid their pooled energy resources into the electricity markets. The primary contribution of this project is the optimization of an aggregated load schedule for participation in the California Independent System Operator (CAISO) real-time (15-minute) electricity market. The goal of the aggregator is to optimally manage its pool of resources, particularly the flexible resources, in order to minimize its cost in the real-time market. We achieve this through the use of a model predictive control scheme. A critical difference from the prior work in \cite{Travacca} is that the structure of the optimization problem is drastically different. Based upon our review of the current and public literature, no similar approaches exist. The main objective of this project was building methods. Nevertheless, to illustrate them, a simulation with 100 prosumers was realized. The results should therefore be taken with a grain of salt. We find that the real-time operation does not substantially decrease or increase the total cost the aggregator faces in the RT market, but this is probably due to parameters that need further tuning and data that need better processing. \section{Introduction} \subsection{Motivation and background} As a global leader in climate change policy, California has one of the highest renewable energy targets in the world; 50\% of electricity supply will come from renewable electricity sources (RES) by 2030.
\footnote{In early 2017, California Senate leader Kevin de Le\'{o}n put forth a bill that would mandate the State to use 100\% renewable power by 2045. So this issue of addressing intermittent resources is unlikely to be resolved through lower legislated goals, but rather through increased goals only tightening existing system constraints.} Most of the RES that California uses are variable and intermittent, such as wind and solar power. The need to ramp up or ramp down controllable, often non-renewable, generation sources due to the variability of the RES is a current and increasing challenge for power system operators and policy makers.\\ Historically, supply-side resources have followed the load. Load, in general, has been considered to be inelastic, as the prices that are observed by retail customers do not correspond to wholesale electricity prices. With the increase in variable supply resources, there is increasing interest in the use of demand-side resources to participate in the wholesale markets as a mechanism to resolve some of the supply-demand imbalances that could otherwise occur.\\ In this context, both demand response and energy storage are considered possible mitigating measures and technologies that can provide flexibility to the grid. This helps to address problems associated with the famous ``duck curve'' and the power ramping problem in the late afternoon. The potential net benefits of the use of demand response and energy storage have not been well studied based on real data.\\ The California Independent System Operator (CAISO) operates three distinct markets: a day-ahead market, a real-time market, and ancillary services. Ancillary services, such as congestion revenue rights and convergence bidding, are also a set of markets that the ISO operates, though they will not be addressed in this analysis.\\ The day-ahead market (DAM) has three separate processes.
The first is to assess whether the bidders may be able to exert market power; the second is to forecast the level of supply needed to meet the demand; finally, the additional plants that must be ready to generate are determined. Bids and schedules can be submitted up to seven days in advance, but the market closes at noon the day before and the results are distributed at 1pm. As a part of this market, bilateral trading occurs weeks in advance. These trades are then scheduled at noon, prior to the ISO publishing the results of the optimization being run. About 70\% of energy in CAISO goes through the market as self-schedules or as price takers.\\ The real-time market (RTM) offers an opportunity for the CAISO to procure supply closer to when the demand occurs, when the forecast is less uncertain. The market opens at 1pm the day prior to the trading day (i.e. after the DAM results are published) and closes 75 minutes prior to each trading hour, with the results being distributed 45 minutes prior to the hour. The dispatch occurs in 15 and 5 minute intervals, depending on the plant. \subsection{Relevant Literature} A growing body of literature addresses optimal plug-in electric vehicle (PEV) population charging and residential demand response \cite{RTM stochastic optimization 1}, \cite{RTM stochastic optimization 2}, \cite{centralized1}, \cite{centralized2}, \cite{centralized3}. Within the literature that we examined, we found studies that explored optimization in the DAM, in the RTM, or with TOU (Time-of-Use) scheduling.\\ \\ Market bidding strategies and market uncertainty for aggregated PEVs have been studied by several research groups, and results can be found in the following publications: \cite{marketObjective1}, \cite{marketObjective2},\cite{marketObjective3}. These studies analyzed bidding in the DAM and in the RTM separately.
\\ In this paper, we construct a tailored optimization method for real-time operation of electricity resources in the Real Time Market using our aggregated resources. It is important to note that this is no longer simple scheduling as in \cite{Travacca}. \subsection{Notation and Nomenclature} \label{notation} For $x,y \in \mathbb{R}^d$, $<x,y>=x^Ty$ refers to the Euclidean scalar product of $x$ and $y$, and $\|.\|_2^2$ refers to the corresponding squared Euclidean norm. $x \leq y$ refers to element-wise inequality. $\odot$ denotes the element-wise vector product (a.k.a. Hadamard product). For a vector $v\in \mathbb{R}^d$, $v(a:b)\in \mathbb{R}^{b-a+1}$, with $a$ and $b$ integers, denotes the vector consisting of the $v_j$, $j\in\{a,\dots,b\}$. \\ The following notation is used in this paper. Uppercase letters refer to variables with units of power ($kW$) while lowercase letters refer to variables with units of energy ($kWh$). Symbol $x^t$ refers to the value taken by variable $x$ at time $t$. In the absence of the exponent, we will consider the variable $x$ as a vector $\in \mathbb{R}^{24}$. Symbol $x_i$ refers to a local variable for prosumer $i\in \{1,\dots,N\}$. Finally, $\overline{x} $ (respectively $\underline{x} $) refers to an upper (lower) bound of the variable $x$. For clarity, current decision variables are highlighted in \textcolor{orange}{orange}.
\renewcommand{\arraystretch}{1.8} \begin{table}[H] \caption{Nomenclature} \label{nomenclature} \begin{center} \begin{tabular}{c l} \hline \hline $N$ & Number of prosumers \\ $\Delta t$ & Time-step for Real Time Market: 15 min\\ $T$ & Time-Horizon (hours)\\ $T_H$ & Model Predictive Control Time-Horizon (hours)\\ $\lambda_{DA},\lambda_{RT}$& DA and RT risk aversion parameters, respectively\\ & All of the following variables are $\in \mathbb{R}^{T}$\\ $p_{DA}$ & Day-Ahead Market price\\ $L_i$ & Uncontrollable residential load of prosumer $i$\\ $S_i$ & Solar PV production of prosumer $i$ \\ \textcolor{orange}{$EV_i$}& Day-Ahead charging rate of PEV $i$\\ \textcolor{orange}{$G_i$}& Day-Ahead power imported from the grid for prosumer $i$\\ & All of the following variables are $\in \mathbb{R}^{4T}$\\ $p_{RT}$ & Real-Time Market price\\ \textcolor{orange}{$\Delta EV_i$}& Real-Time charging rate of PEV $i$: deviation from the DA schedule\\ \textcolor{orange}{$\Delta G_i$}& Real-Time power imported from the grid for prosumer $i$: deviation from the DA schedule\\ $C_{DA}$ & DA covariance matrix for DAM price prediction error, $\in \mathbb{R}^{T \times T}$\\ $C_{RT}$ & RT covariance matrix for RTM price prediction error, $\in \mathbb{R}^{4T_H \times 4T_H}$\\ \hline \hline \end{tabular} \end{center} \end{table} We also introduce the following aggregated variables: $\textcolor{orange}{G}=\sum_{i=1}^N \textcolor{orange}{G_i}$, $\textcolor{orange}{EV}=\sum_{i=1}^N \textcolor{orange}{EV_i}$, $\textcolor{orange}{\Delta G}=\sum_{i=1}^N \textcolor{orange}{\Delta G_i}$, $\textcolor{orange}{\Delta EV}=\sum_{i=1}^N \textcolor{orange}{\Delta EV_i}$, keeping in mind that $G, EV \in \mathbb{R}^{T}$ and $\Delta G, \Delta EV \in \mathbb{R}^{4T}$. These equalities will be considered as constraints in the following optimization schemes.
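As a minimal sketch (pure Python; the function name and the numerical schedules are our own illustration, not taken from the simulation), the aggregated variables above are simply element-wise sums of the per-prosumer schedules:

```python
def aggregate(schedules):
    """Element-wise sum of per-prosumer schedules (each a list of length T),
    i.e. EV = sum_i EV_i as used in the aggregation constraints."""
    T = len(schedules[0])
    assert all(len(s) == T for s in schedules), "all schedules must share the horizon T"
    return [sum(s[t] for s in schedules) for t in range(T)]

# Hypothetical pool of N = 3 prosumers over a T = 4 horizon (kW).
EV_i = [[1.0, 0.0, 2.0, 1.0],
        [0.5, 1.5, 0.0, 0.0],
        [0.0, 0.5, 0.5, 1.0]]
EV = aggregate(EV_i)  # -> [1.5, 2.0, 2.5, 2.0]
```

The same summation applies to $G$, $\Delta G$ and $\Delta EV$ (over $4T$ entries for the real-time deviations).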
\section{Technical Description} We do not present results from the Day-Ahead Market in CAISO in this report; for more information on results from that market, refer to \cite{Travacca}. \subsection{Description of the Real Time Market in CAISO} The real-time market in California (CAISO) has multiple market instruments in which generators can participate. There are three types of generators: (1) internal generators, ones that operate inside of the CAISO balancing area (BA) authority; (2) import and export generators, those that produce power in the BA but for use outside of the BA or those that produce power outside the BA but for use inside the BA; and (3) dynamic resources, those generators located outside the BA that have telemetry and controls for power delivery inside the BA.\\ \\ The two time periods that are of most relevance are the 15-minute market and the 5-minute market. Supply-side bids are submitted, at the latest, 75 minutes prior to the start of the Trading Hour (T-75). For each hour there are four bids submitted, one for each 15-minute period. For generators that are able to participate in the 5-minute market (i.e. generators internal to the BA and dynamic resources)\footnote{With the introduction of the Energy Imbalance Market (EIM), there are now generators that participate in the real-time market that are located outside of CAISO, i.e. type 2 generators and type 3 generators. The majority of these generators can only submit bids for the 15-minute market (type 2 generators), but a subset also participates in the 5-minute market (type 3 generators).}, the bids that are submitted for the 15-minute period will apply to the 5-minute periods within that same time period. So a generator will provide four bids per hour, and there will be three prices for each bid, thus 12 prices per hour.\\ \\ The two real-time markets (i.e. the 15-minute and the 5-minute) operate with the use of separate optimizations.
The market prices are published 45 minutes and 22.5 minutes prior to close of the 15-minute and 5-minute markets, respectively. Those prices are published on a rolling basis, so for each 15 or 5 minute period it is 45 minutes or 22.5 minutes prior to the start of that trading period.\\ \\ It is important to note that real-time prices can be positive or negative, but on average we should expect $\mathbb{E}(p_{RT})=p_{DA}$. Nevertheless, in practice we observe $\mathbb{E}(p_{RT})\sim 0.93\, p_{DA}$. Based on conversations with CAISO management, this is explained by the fact that some renewable generators schedule their production in real time, which has a tendency to lower prices (because they have zero marginal cost). Since not all resources participate in both the DAM and RTM, a price difference is not surprising, especially considering that many of the low-cost renewable resources are self-scheduled and participate only in the RTM. Nevertheless, while empirically there exists a price difference between the two markets, we can discard this by considering the following conditions to be true for any given time period: \begin{itemize} \item $p_{RT}>p_{DA} \Rightarrow$ electricity \textbf{System is Short}: the day-ahead supplied schedule is lower than real-time demand. \item $p_{RT}<p_{DA} \Rightarrow$ electricity \textbf{System is Long}: too much supply was scheduled compared to real-time demand. \end{itemize} The prices are received through a program called the California Market Results Interface. Figure \ref{fig:RTM} displays the Real Time Market timeline. Two hours are depicted (hour h and hour h+1) for illustrative purposes to indicate that the bids are rolling. For both hours, the day-ahead market bids are shown with dotted lines and labeled as "DAM Bid". The real-time (15-minute) bids are submitted at T-75 and the results are published at T-45. Those bids that were accepted are then depicted with solid lines.
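These two conditions can be applied interval by interval; a small helper in pure Python (the function name and the \$/MWh prices are hypothetical, chosen only to illustrate the rule):

```python
def system_state(p_rt, p_da):
    """Label each interval: an RT price above the DA price means the system is
    short (DA schedule below real-time demand); below means it is long."""
    states = []
    for rt, da in zip(p_rt, p_da):
        if rt > da:
            states.append("short")
        elif rt < da:
            states.append("long")
        else:
            states.append("balanced")
    return states

# Hypothetical prices ($/MWh) for three 15-minute intervals.
print(system_state([45.0, 28.0, 30.0], [30.0, 30.0, 30.0]))
# -> ['short', 'long', 'balanced']
```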
The difference, $\Delta$, between the DAM and RTM is also shown. \begin{figure}[H] \centering \includegraphics[scale=0.37]{images/RTM_Bid_Delta} \caption{Graphical Depiction of the Real Time Market Process} \label{fig:RTM} \end{figure} \subsection{Market Hypothesis} Only the 15-min market interval will be considered in this project. Note that the Smart Meters in California provide information every 30 min. Therefore, it is already uncertain how the settlement will be made with CAISO (i.e., proving that what was declared to be consumed and produced was indeed consumed and produced).\\ \\ There is a good likelihood that CAISO requires its own metering system, separate from Smart Meters, to participate in the markets, in order to verify performance before any settlement. In any case, this would not fundamentally alter the methodology taken in this project, as an aggregator would want to optimize both the 15-minute and the 5-minute markets provided it can participate in both. \subsection{Relation between the DAM and the Real Time Market} At first, it seems to be a good idea to create an interaction between the Real Time Market and the DAM. Indeed, as an aggregator, one can increase one's revenue by predicting what is going to happen in the real-time market. This consists in betting on whether the CAISO forecast of demand is short or long for a given hour. Nevertheless, we argue that in the CAISO market, doing so is the role of Virtual Bidding. Virtual bids must be virtual, i.e. not associated with physical assets, and the practice of using physical assets to make virtual bids is forbidden by CAISO. Therefore, we argue that the DAM and the real-time market optimization should, or must, be independent: the real-time market must take the results of the DAM as an exogenous input. This does not mean that the real-time market does not give insight into the way to deal with the DAM.
On the contrary, it can help to tune parameters and understand which constraints are important and should be kept in the DAM scheduling optimization.\\ \\ Moreover, doing so with physical assets would amount to exerting Market Power. All bids are assessed for whether market power is being exerted, whether intentionally or not, through a Market Power Mitigation (MPM) process. The MPM occurs prior to the optimization. If bids are deemed to be an exertion of market power, those bids are rejected and the optimization is run without them. \subsection{Price prediction and estimation of covariance matrix} As shown in Figure 2, the price fluctuation of the RTM (right figure) is much larger than that of the DAM (left figure). The DAM price was always between 0 USD/MWh and 100 USD/MWh throughout the year. In contrast, the RTM price spikes relatively frequently and went up to 1,000 USD/MWh and down to -150 USD/MWh. \\ \\ \begin{figure}[H] \centering \includegraphics[scale=0.37]{images/DAM_RTM_price} \caption{Comparison of price fluctuations between the Day-ahead Market and the Real-time Market} \end{figure} In the RTM, generation capacity tends to be constrained and the supply curve (Short-term Marginal Cost: STMC) of electricity becomes more vertical (more inelastic). As a result, a small change in demand can cause price spikes in either a positive or a negative direction. Such situations are shown in Figure 3. The figure on the left shows a change where the supply is relatively elastic and thus the price difference between $p_{RT}$ and $p_{DA}$ is small. In the figure on the right, by contrast, the demand moves from the elastic portion of the supply curve to the inelastic portion, and the price difference between the DAM and RTM spikes. Nevertheless, not all questions are answered at this point: since RT prices are 7\% lower on average, it might seem to be a good idea for an aggregator to buy in the RTM only. 
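The spike mechanism of Figure 3 can be illustrated with a toy piecewise-linear supply curve (all numbers hypothetical): on the flat, elastic segment a demand shift barely moves the price, while past the capacity knee the same shift produces a spike.

```python
def clearing_price(demand_mw: float) -> float:
    """Toy short-term marginal cost curve: nearly flat (elastic) up to
    80 MW of capacity, then steeply rising (inelastic) beyond it."""
    if demand_mw <= 80.0:
        return 20.0 + 0.125 * demand_mw
    return 30.0 + 20.0 * (demand_mw - 80.0)

# The same +5 MW demand shift moves the price very differently:
print(clearing_price(55.0) - clearing_price(50.0))  # 0.625 (elastic)
print(clearing_price(90.0) - clearing_price(85.0))  # 100.0 (spike)
```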
\begin{figure}[H] \centering \includegraphics[scale=0.37]{images/Inelastic_supply} \caption{Mechanism of price fluctuation in the RTM} \label{fig:inelastic} \end{figure} This fluctuation of the RTM price poses a challenge for price prediction and also requires caution in dealing with the prediction uncertainty in the following step. In the stochastic optimization, we not only minimize the cost but also incorporate the uncertainty of the price prediction in the objective function. We estimate the covariance matrix that represents the uncertainty of the prediction.\\ \\ We used random forest regression to predict both the DAM and RTM prices.\\ \\ For the DAM price prediction, we used year, date, and hour as features (regressors). For the real-time forecast we use, in addition to these features, the DAM demand forecast, the DAM prices, the RTM demand forecast, and the operating intervals (0 min, 15 min, 30 min, and 45 min). \\ \\ We assumed a multivariate normal distribution around the expected price for both the RTM and DAM price predictions: $$p \sim \mathcal{N}(\hat{p}, C)$$ The zero-mean, normally distributed prediction error on day $d$ is denoted $\epsilon_d\in\mathbb{R}^{24}$. The covariance is then estimated as the mean of the squared errors over the $N_d$ days: $$ C = \frac{1}{N_d}\sum_{d=1}^{N_d}\epsilon_d \epsilon_d^T $$ It proved difficult to produce an online prediction model for RTM prices, so we kept a `static' one for the RTM prediction. For coherence we did the same for the DAM price prediction. We trained and predicted on the same data (which is not the best practice, as it can lead to overfitting; nevertheless, random forests are usually good at avoiding that). As a consequence, the prediction we get can be seen as close to the best prediction we could obtain in practice: for the DAM prediction we get an RMSE of \$2, whereas we get a \$15 RMSE for the RTM. 
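The uncertainty estimate above — averaging outer products of the daily prediction error vectors $\epsilon_d$ — can be sketched with NumPy. Synthetic errors stand in for the real residuals; dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_days = 24, 200  # 24 hourly periods, hypothetical number of days

# Synthetic zero-mean daily prediction errors eps_d in R^24, standing
# in for (actual - predicted) DAM prices; true variance 4 ($/MWh)^2.
eps = rng.multivariate_normal(np.zeros(T), 4.0 * np.eye(T), size=n_days)

# Average of the outer products of the daily error vectors:
# C = (1/N_d) * sum_d eps_d eps_d^T
C = eps.T @ eps / n_days

print(C.shape)  # (24, 24): one row/column per hourly period
```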
Given electricity price data, we can estimate the covariance matrix using classic maximum likelihood estimation. These covariance matrices represent the price prediction uncertainty and are used as an input to the following stochastic optimization model to incorporate risk-aversion characteristics. We incorporate risk management using the classic Markowitz portfolio optimization setting (cf. following section). \textit{Note: we used Python to create this prediction model} \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=1.3\textwidth]{images/cov_DAM} \caption{Covariance matrix heatmap for the DA market, $\in \mathbb{R}^{T \times T}$} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=1.3\textwidth]{images/cov_RTM} \caption{Covariance matrix heatmap for the RT market, $\in \mathbb{R}^{T_H \times T_H}$} \end{minipage} \end{figure} \subsection{Day-Ahead Market and Real-Time Market Optimization Objectives} $J_{DA}$ and $J_{RT}$ denote the DA and RT objectives, respectively. The choice of the quantity \textcolor{orange}{$G$} or \textcolor{orange}{$\Delta G$} can be considered as a portfolio problem where the asset is the flexibility, the returns are the DAM prices, and the budget constraints are the flexibility constraints \cite{bible}. The choice of the portfolio $G$ involves a trade-off between the expected price and the corresponding variance. 
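A minimal numerical sketch of this mean-variance trade-off, assuming the quadratic cost $p^T G + \frac{\lambda}{2} G^T C G$ with toy two-period data: the unconstrained minimizer is $G^* = -\frac{1}{\lambda} C^{-1} p$, so a larger risk aversion $\lambda$ shrinks the position taken.

```python
import numpy as np

# Toy two-period problem (hypothetical numbers): expected prices and
# the price covariance from the prediction step.
p = np.array([30.0, 40.0])                 # $/MWh
C = np.array([[4.0, 1.0], [1.0, 9.0]])     # positive definite

def optimal_g(lmbda: float) -> np.ndarray:
    """Unconstrained minimizer of p^T G + (lmbda/2) G^T C G,
    i.e. G* = -(1/lmbda) C^{-1} p."""
    return -np.linalg.solve(lmbda * C, p)

# Doubling the risk aversion halves the position taken.
print(np.allclose(optimal_g(2.0), optimal_g(1.0) / 2.0))  # True
```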
For the Day-Ahead: \begin{equation} J_{DAM} = \underbrace{p_{DA}^T \textcolor{orange}{G}}_\text{DA total cost} + \overbrace{\frac{\lambda_{DA}}{2} \textcolor{orange}{G}^TC_{DA}\textcolor{orange}{G}}^{\text{DA Risk}} \label{eq:DAM_objective} \end{equation} Now, for the RT at hour $h$, with no penalties for deviations increasing system shortness or longness, similarly: \begin{equation} J_{RT}^h = \underbrace{p_{RT}(h:h+4T_H-1)^T \textcolor{orange}{\Delta G}}_\text{RT total cost} + \overbrace{\frac{\lambda_{RT}}{2} \textcolor{orange}{\Delta G}^TC_{RT}\textcolor{orange}{\Delta G}}^{\text{RT Risk}} \label{eq:RTM_objective} \end{equation} While the objective function for the DAM is easy to understand, the Real-Time objective is a little more subtle (this corrects an erratum in the presentation given in class, a mistake that nevertheless inspired what follows). Let us take a concrete example to understand why (\ref{eq:RTM_objective}) is the right objective function. This example is worked out in table \ref{rtm_ex}. \begin{table}[H] \caption{Why this cost function for the RTM?} \label{rtm_ex} \begin{center} \begin{tabular}{|p{3cm}| p{6.5cm}|p{6.5cm}|} \hline \hline & $\Delta G=+1~MW \geq 0$ & $\Delta G=-1~MW \leq 0$\\ \hline $p_{RT}=40>p_{DA}=30 \$/MW$ (SYSTEM SHORT)& $\Delta G$ increases the strain on the system. The aggregator has to pay for this extra quantity at the RT market price: 40 \$/MW. Had this extra demand been scheduled in the DA, the cost would have been 10 \$/MW lower& $\Delta G$ decreases the strain on the system. The aggregator is paid as if it were providing 1MW of power supply: \$40. Had this demand reduction been included in the DA, the aggregator would have had a cost \$10 higher \\ \hline $p_{RT}=20<p_{DA}=30 \$/MW$ (SYSTEM LONG)& $\Delta G$ decreases the strain on the system. The aggregator has to pay for this extra quantity at the RT market price: 20 \$/MW. 
Had this extra demand been scheduled in the DA, the cost would have been 10 \$/MW higher& $\Delta G$ increases the strain on the system. The aggregator is paid as if it were providing 1MW of power supply: \$20. Had this demand reduction been included in the DA, the aggregator would have had a \$10 lower cost \\ \hline $p_{RT}=-10<p_{DA}=30 \$/MW$ (SYSTEM ULTRA LONG)& $\Delta G$ decreases the strain on the system. The aggregator pays for this extra quantity at the RT market price: -10 \$/MW, i.e. it is actually paid! Had this extra demand been scheduled in the DA, the cost would have been 40 \$/MW higher& $\Delta G$ increases the strain on the system. The aggregator is paid as if it were providing 1MW of power supply at -10 \$/MW, i.e. it actually pays \$10. Had this demand reduction been included in the DA, the aggregator would have had a cost \$40 lower! \\ \hline \hline \end{tabular} \end{center} \end{table} Let us stop being `California-centric' for a minute. In Germany and the UK, the aggregator would have to face constant imbalance prices. Let $\delta_+$ be the positive imbalance price and $\delta_-$ be the negative imbalance price: the aggregator is paid $\delta_+$ for beneficial deviations (e.g. system long and $\Delta G \geq 0$) and has to pay $\delta_-$ for bad deviations (e.g. system long and $\Delta G \leq 0$). In the UK, the imbalance market is symmetric (i.e. $\delta_-=\delta_+$), whereas in Germany $\delta_+<\delta_-$. 
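With these imbalance prices, the per-interval settlement of a deviation can be sketched as a single function (scalar toy values; sign conventions follow the worked example of table \ref{rtm_ex}):

```python
def imbalance_cost(p_rt, p_da, dg, delta_plus, delta_minus):
    """Imbalance charge for a deviation dg (MW) from the DA schedule:
    pay delta_minus when the deviation worsens the system imbalance,
    receive delta_plus (a negative cost) when it relieves it."""
    if dg == 0 or p_rt == p_da:
        return 0.0
    short = p_rt > p_da            # system short if RT clears above DA
    helps = (dg < 0) if short else (dg > 0)
    rate = -delta_plus if helps else delta_minus
    return rate * abs(dg)

# System short (40 > 30): +1 MW worsens it, -1 MW relieves it.
print(imbalance_cost(40.0, 30.0, +1.0, 2.0, 5.0))  # 5.0  (pays delta_minus)
print(imbalance_cost(40.0, 30.0, -1.0, 2.0, 5.0))  # -2.0 (paid delta_plus)
```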
Let us introduce these specificities in our Real-Time objective function, for which there are four cases to consider:\\ \textbf{1) System Short and $\Delta G \geq 0$:} $$\text{Imbalance Cost}_1=\delta_-\frac{sgn(p_{RT}-p_{DA})+1}{2} \odot \max\{\Delta G,0\}$$ \textbf{2) System Short and $\Delta G \leq 0$:} $$\text{Imbalance Cost}_2=-\delta_+\frac{sgn(p_{RT}-p_{DA})+1}{2} \odot \max\{-\Delta G,0\}$$ \textbf{3) System Long and $\Delta G \geq 0$:} $$\text{Imbalance Cost}_3=\delta_+\frac{sgn(p_{RT}-p_{DA})-1}{2} \odot \max\{\Delta G,0\}$$ \textbf{4) System Long and $\Delta G \leq 0$:} $$\text{Imbalance Cost}_4=-\delta_-\frac{sgn(p_{RT}-p_{DA})-1}{2} \odot \max\{-\Delta G,0\}$$ To wrap one's mind around these formulas, reading the worked-out example from table \ref{rtm_ex} is useful. We add all of these imbalance costs (1-4) to the RT objective function described by equation (\ref{eq:RTM_objective}). The new objective function is still convex (but is no longer a convex quadratic): indeed, $X\in \mathbb{R}^d \to \max(X,0)$ is a convex function, and convexity is preserved through affine composition, so $X\in \mathbb{R}^d \to \max(CX+D,0)$ is convex, with $C,D$ matrices of appropriate sizes. There is nevertheless a way to transform the objective to make it a QP, as described in the presentation, but we find this approach more elegant as it does not require adding new decision variables to the problem and dealing with a quadratic non-convex equality constraint (although we can show that it can be removed). \subsection{Local Model for Prosumers} Here we only display the local model for the real-time operation. The DAM model is thoroughly described in \cite{Travacca}, and what follows can be understood without it. Let $i \in \{1,...,N\}$. We denote by $EV_i^*$ and $G_i^*$ the optimal schedules from the DA. In the real-time optimization, these variables are considered as exogenous inputs. 
\subsubsection{Local Power Balance} The power balance (\ref{eq:powerbalance}) establishes the link between the control variables $\textcolor{orange}{\Delta EV_i}$ and $\textcolor{orange}{\Delta G_i}$. \begin{equation} L_i+EV_i^*+ \textcolor{orange}{\Delta EV_i}= S_i+G_i^* + \textcolor{orange}{\Delta G_i} \label{eq:powerbalance} \end{equation} \subsubsection{Local Grid Constraints} At any given node in the distribution network, there is a limit on power import or export (\ref{eq:localgrid}). Typically, for residential customers, $\overline{G}_i\simeq 10 \, kW$. \begin{equation} \underline{G}_i\leq G_i^*+ \textcolor{orange}{\Delta G_i}\leq \overline{G}_i \label{eq:localgrid} \end{equation} \subsubsection{Local PEV constraints} Here, a model equivalent to the one developed in \cite{EVmodel} is used. The PEV dynamics and state-of-energy constraints can be summarized as follows: \begin{equation} \underline{ev}_i \leq \Delta t A \cdot (EV_i^*+\textcolor{orange}{\Delta EV_i}) \leq \overline{ev}_i \label{eq:evdynamic} \end{equation} where $A$ is the lower triangular matrix of ones. For more information please refer to \cite{Travacca}. Finally, the PEV charging power constraint is given by (\ref{eq:evdynamic2}). \begin{equation} \underline{EV}_i\leq EV_i^* +\textcolor{orange}{\Delta EV_i} \leq \overline{EV}_i \label{eq:evdynamic2} \end{equation} Note that when the PEV is unplugged at a given time $t \in \{1,...,T\}$, then $\underline{EV}_i^t=\overline{EV}_i^t=0$ is required. \textbf{For the purpose of conciseness, the set of local constraints (\ref{eq:powerbalance}), (\ref{eq:localgrid}), (\ref{eq:evdynamic}) and (\ref{eq:evdynamic2}) is hereby referred to as $local_i$. 
} \subsection{Model Predictive Control Scheme for Real-Time Operation} \begin{algorithm} \caption{Model Predictive Control}\label{euclid} \begin{algorithmic}[1] \State \textbf{Initialization}: no warm start is possible with CVX \For {$h$ from $1$ to $T$} $$\Delta EV^*, \Delta G^*=\argmin{J_{RT}^h(\Delta EV, \Delta G,\Delta EV_i, \Delta G_i )} \text{ s.t. local}_i $$ Only implement the decision for the first hour: $\Delta EV^*=\Delta EV^*(1:4)$, $\Delta G^*=\Delta G^*(1:4)$, etc. \EndFor \end{algorithmic} \end{algorithm} \textit{Note: as stated by Michael C. Grant, one of the creators of CVX, it is not possible to impose an initial guess onto CVX. Therefore, creating our own solver in the future is of particular interest (or using quadprog in Matlab). We can see here the limitations of CVX.} \section{Data} For price prediction and covariance matrix estimation, we collected CAISO's DAM (hourly) and RTM (15-minute) price and demand forecast data in the PG\&E area from January 2013 to March 2017.\\ \\ For the stochastic optimization, we used high resolution data in San Francisco. We used mobility data from 2,000 full electric vehicles in the Bay Area. Single solar PV generation data was generated using PVsim (SunPower). The load was modeled by taking aggregate load data from CAISO in the PG\&E region and adding random noise to it to create heterogeneity between prosumers (idem for prosumer PV production). Most of the data was concatenated in the same CSV file using R. \section{Simulation and Results} \textit{Note: the code can be made available upon request (Python program, Matlab scripts and files, R program for data cleaning)}\\ We used Matlab to illustrate our method and theory. Given the time frame, it was difficult to run more than a one-day simulation: the main reason is that the way the constraint bounds (eq. \ref{eq:evdynamic2}) are generated from data is complex, and it was tailored for one-day simulations only. 
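For reference, the receding-horizon logic of the MPC algorithm above — optimize over the window, implement only the first interval, then shift — can be sketched with a toy unconstrained problem (hypothetical numbers; a closed-form stand-in replaces the CVX solve):

```python
import numpy as np

def first_decision(price_window: np.ndarray, lmbda: float) -> float:
    """Toy stand-in for the RT program: minimize
    price^T x + (lmbda/2)||x||^2 over the horizon (unconstrained, so
    x* = -price/lmbda componentwise) and keep only the first step."""
    return float(-price_window[0] / lmbda)

prices = np.array([30.0, 32.0, 28.0, 100.0, 31.0, 29.0])  # hypothetical
horizon = 3
applied = []
for h in range(len(prices) - horizon + 1):
    # Receding horizon: look at [h, h + horizon), implement step h only.
    applied.append(first_decision(prices[h:h + horizon], lmbda=1.0))

print(applied[0])  # -30.0
```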
Nevertheless, this one-day simulation allows us to see the behavior of our scheduling and real-time operation method. We ran a simulation for 100 prosumers; the parameters chosen for this simulation are provided in table \ref{parameters}. \renewcommand{\arraystretch}{1.8} \begin{table}[H] \caption{Parameter Values} \label{parameters} \begin{center} \begin{tabular}{c l} \hline \hline $N$ & 100 \\ $T_H$ & 3 hours\\ $\overline{G}$ & 10 kW\\ $\eta$ & 90\% (charging efficiency)\\ $\delta_-,\delta_+$ & 0\\ $\lambda_{DA},\lambda_{RT}$ & 1 \\ \hline \hline \end{tabular} \end{center} \end{table} \begin{figure}[h] \centering \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=1.4\textwidth]{images/fig1} \caption{Local uncontrollable load consumption (black curves), average (red dashed curve)} \label{local_load} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=1.4\textwidth]{images/fig2} \caption{Local solar PV production (black curves), average solar PV production (red dashed curve)} \label{local_PV} \end{minipage} \end{figure} \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=1.4\textwidth]{images/sol_dem} \caption{Aggregated uncontrollable load consumption and solar production: difference between RT and DA} \label{aggregate_load_PV} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=1.4\textwidth]{images/p_RTM} \caption{Real-Time prices for the chosen day of simulation} \label{RTprice} \end{minipage} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.18]{images/results} \caption{DA schedule and RT operation results: $G^*$, black dashed curve; $\Delta G^*$, solid black curve; $EV^*$, red dashed curve; and $\Delta EV^*$, solid red curve} \label{results} \end{figure} First, figures \ref{local_load} and \ref{local_PV} provide a visualization of the average uncontrollable load and PV production as well as the heterogeneity inside the pool for the DA 
(i.e. a 1-hour time step). Figure \ref{aggregate_load_PV} provides a visualization of the difference between the DA and RT aggregated load and PV production. This difference is only known in the RT operation process. Finally, Figure \ref{RTprice} shows the real-time price for the day. The results are provided in figure \ref{results}. This figure shows the difference between the schedules and the real-time operation. In the DA we predict that our cost at clearing will be \$68.7 for the next day. In reality, when the DA market clears, the aggregator realizes that its cost will be \$71.1. In the RTM, the total supplementary cost over the day is \$0.30. This means that the aggregator is not able to leverage any value in the RT market. Nevertheless, our scheme gives us a way to do real-time operation over our pool. It is difficult to determine what costs the aggregator would incur for its real-time operation without this scheme; moreover, the price spike only appears for 15 minutes. Based on our simulations, the aggregator does not take advantage of this price surge. This is due to the fact that our MPC horizon is too small and there is no flexibility available by the time we reach the spike (around operating interval 70). When the risk-aversion parameter for Real Time, $\lambda_{RT}$, is set to zero, the total cost is -\$0.90, which means the aggregator leveraged the RTM to reduce its cost. Nevertheless, based on the simulation, the aggregator is still not able to take advantage of the price surge. \section{Summary and Future Works} This project advanced existing research: we developed a forecast model for the day-ahead and real-time markets using a random forest regression technique (although this was not the primary objective), and we then proposed a two-step method for managing a pool of prosumers with local production and flexibility. 
In a first step, the aggregator schedules its total load with CAISO, and in a second step, the aggregator operates the pool in the RTM using a Model Predictive Control technique to decide how it should deviate from the DAM schedule.\\ \\ A lot of effort was put into producing a model and a method to tackle scheduling and operations for DERs and flexibility. Nevertheless, a large number of questions remain unanswered and a lot is yet to be done. First, an online prediction model should be built. While random forest regression performs well for DAM price prediction, this does not seem to be the case for the RTM. We should dive into the vast literature on surge pricing in real-time markets. Second, we need to produce simulations over several days to be able to tune parameters (e.g. risk aversion and time horizon). Third, we need to test the method we developed on European real-time imbalance markets. We concede that this list is far from exhaustive.
\section{Introduction} The International Linear Collider, ILC~\cite{Behnke:2013xla}, is an accelerator-based particle physics project which will provide collisions of polarized $e^{+}e^{-}$ beams with center-of-mass energies (c.m.e.) of 250 GeV - 1 TeV. These collisions will be studied by two multipurpose detectors: the International Large Detector (ILD) and the Silicon Detector (SiD)~\cite{Behnke:2013lya}. The accomplishment of the ambitious physics program~\cite{Baer:2013cma,Fujii:2017vwa} of the ILC requires unprecedented precision in the energy determination of final states. To meet these required precision levels, the detectors will be based on Particle Flow (PF) techniques~\cite{Brient:2002gh,Morgunov:2004ed}. These techniques rely on single-particle separation in the full detector volume to choose the best information available to measure the energy of the final-state objects (e.g. measuring the momentum of charged particles in the tracking devices, which is more precise than the calorimeter measurement). Therefore, PF algorithms require highly granular and compact calorimeter systems featuring minimal dead material (high hermeticity). The R\&D of highly granular calorimeters for future linear colliders is conducted within the CALICE collaboration. For further information about PF and the CALICE R\&D we refer the reader to reference~\cite{Sefkow:2015hna} and references therein. In this document we focus on the description of the silicon-tungsten electromagnetic calorimeter, SiW-ECAL, technological prototype and its performance in beam tests. The design and R\&D of this prototype are conducted by CALICE and oriented toward the baseline design of the ILD ECAL. The ILD ECAL is a sampling calorimeter of 24 $X_{0}$ thickness (in the barrel region) which uses silicon (Si) as active material and tungsten (W) as absorber material. 
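For scale, a back-of-the-envelope check of the absorber depth (assuming the standard radiation length of tungsten, $X_0 \approx 3.5$ mm, which is not a number quoted in this document):

```python
# Hypothetical sanity check: total tungsten thickness for a 24 X0 ECAL.
X0_W_MM = 3.5        # radiation length of tungsten, approx. (mm)
depth_in_x0 = 24     # ILD ECAL barrel depth in radiation lengths
total_w_mm = depth_in_x0 * X0_W_MM
print(total_w_mm)  # 84.0 -> roughly 8.4 cm of tungsten in total
```

This compactness of tungsten as an absorber is what makes a dense, hermetic ECAL feasible.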
The combination of Si and W allows the construction of a very compact calorimeter made up of compact active layers with small cell size (high granularity) in the transverse and longitudinal planes. It will consist of an alveolar structure of carbon fiber into which the slabs, made up of tungsten plates and active sensors, will be inserted. The very-front-end (VFE) electronics will be embedded in the slabs. The silicon sensors will be segmented into square cells of 5$\times$5 mm$^{2}$, featuring a total of $\sim 100$ million channels for the ECAL of the ILD. To reduce the overall power consumption, the ILD ECAL will exploit the special bunch structure foreseen for the ILC: the $e^{+}e^{-}$ bunch trains will arrive within acquisition windows of $\sim$ 1-2 ms width separated by $\sim$ 200 ms. During the idle time, $\sim99\%$ of the time, the bias currents of the electronics will be shut down. This technique is usually called power pulsing. In addition, as the PF techniques demand minimal dead material in the detector, the ILD design foresees the calorimeters (hadronic and electromagnetic) to be placed inside the magnetic coil, which provides a magnetic field of 3.5 T. \section{The SiW-ECAL engineering prototype.} The first SiW-ECAL prototype was the so-called SiW-ECAL physics prototype. It was successfully tested at DESY, FNAL and CERN running together with another CALICE prototype, the analogue hadronic calorimeter AHCAL, delivering the proof of concept of PF calorimetry. For the physics prototype, the VFE was placed outside the active area with no particular constraints on power consumption. Published results proving the good performance of the technology and of PF can be found in references~\cite{Adloff:2011ha,Anduze:2008hq,Adloff:2008aa,Adloff:2010xj,CALICE:2011aa,Bilki:2014uep}. 
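The power saving from the power pulsing described above follows directly from the quoted beam structure; an order-of-magnitude sketch (assumed numbers: a $\sim$2 ms acquisition window roughly every $\sim$200 ms):

```python
# Duty-cycle estimate for power pulsing (assumed numbers from the text).
acq_ms = 2.0       # acquisition window width
period_ms = 202.0  # window plus ~200 ms idle gap between bunch trains
duty_cycle = acq_ms / period_ms
print(round(100.0 * (1.0 - duty_cycle), 1))  # 99.0 -> ~99% of time idle
```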
The new-generation prototype is called the SiW-ECAL technological prototype, and it addresses the main technological challenges: compactness, power consumption reduction through power pulsing, and VFE inside the detector, close to real ILD conditions. It will also provide data allowing deep studies of the PF performance and input to tune the Monte Carlo programs. The base unit of this technological prototype is the Active Sensor Unit or ASU, which is the assembly of sensors, a thin PCB (printed circuit board) and ASICs (application-specific integrated circuits). An individual ASU has lateral dimensions of 18$\times$18 cm$^{2}$ and has 4 silicon wafers (currently with a thickness of 320 $\mu$m) glued onto it. The ASU is further equipped with 16 ASICs for the readout and features 1024 square pads, 64 per ASIC, of 5$\times$5 mm$^{2}$ (the physics prototype featured square pads of 10$\times$10 mm$^{2}$). The readout layers of the SiW-ECAL consist of a chain of ASUs and an interface card to a data acquisition system (DAQ) at the beginning of the layer. This interface card also carries services such as power connectors, test output pins, connectors for signal injection, etc. Currently, the technological prototype layers are built with a version of the PCB called FEV11 with 16 SKIROC~\cite{Callier:2011zz} (Silicon pin Kalorimeter Integrated ReadOut Chip) version 2 ASICs in BGA packages mounted on top. The SKIROC ASIC consists of 64 channels, each comprising a low-noise charge preamplifier of variable gain followed by two lines: a fast line for the trigger decision and a slow line for dual-gain charge measurement. Finally, a Wilkinson-type analogue-to-digital converter produces the digitized charge deposition that can be read out. Once one channel is triggered, the ASIC reads out all 64 channels, adding a bit of information to tag them as triggered or not triggered. The information is stored in a 15-cell-deep physical switched capacitor array (SCA). 
This autotrigger capability is mandatory for the ILC case since the accelerator will not provide a central global trigger. A key feature of the SKIROC ASICs is that they can be power pulsed to meet the ILC power consumption requirements. A new version, 2a, has been produced and will be used to equip new layers currently in production. The design of the subsequent data acquisition (DAQ) chain is described in~\cite{Gastaldi:2014vaa}. The whole system is controlled by the Calicoes and Pyrame DAQ software, version 3~\cite{pyrame1,pyrame2}. \begin{figure}[!t] \centering \includegraphics[width=5in]{techno_fev8.eps} \caption{Leftmost upper figure: open single slab with FEV11 ASU, 16 SKIROCs, interface card and DIF visible; the silicon sensors are glued to the PCB on the other side. Leftmost lower figure: picture of the technological prototype with 7 single short layers inside a mechanical aluminum structure designed for beam tests. The rightmost photographs show the FEV11\_COB: the upper figure corresponds to a picture taken from the top; the lower one corresponds to a lateral picture.} \label{proto} \end{figure} Figure \ref{proto}, leftmost upper plot, shows a picture of a fully equipped short slab with an FEV11 ASU. These PCBs do not yet meet the ILD requirements in terms of thickness (1.2 mm): the FEV11 thickness is 1.6 mm alone and 2.7 mm including the ASICs. There are ongoing R\&D activities on an alternative PCB design in which the ASICs will be placed directly on the PCB in dedicated cavities. The ASICs will be in semiconductor packaging and wire bonded to the PCB. These PCBs are denominated COB for ``chip on board''. A small sample of FEV11\_COB boards with a thickness of 1.2 mm has been produced (see figure \ref{proto}, rightmost photographs) and is planned to be added to the prototype and tested in beam test conditions. 
\section{Performance on positron beam test.} The prototype tested in beam in June 2017 consisted of 7 layers: see figure \ref{proto}, leftmost lower photograph. A dedicated and comprehensive commissioning process was followed before going to the beam test. The commissioning included the definition of the trigger threshold values and of the list of noisy channels to be masked. In the first layer, $\sim 40\%$ of the channels were masked due to a damaged Si wafer. In all the others, only 6-7\% of the channels were masked, except in the last one, where this number grew to 16\% due to a faulty ASIC. The number of masked channels may be drastically reduced by setting individual thresholds for each channel on an ASIC instead of global trigger threshold values. This possibility will become available with the next version of the SKIROC ASIC. The detector was exposed to a positron beam in the DESY test beam area (line 24). The beam test line at DESY provides continuous positron beams in the energy range of 1 to 6 GeV with rates of the order of the kHz (with a maximum of $\sim 3$ kHz). In addition, DESY gives access to a 1 T bore solenoid in the beam area. The detector was run in power pulsing mode without any extra active cooling system. By means of an external pulse generator we defined the length of the acquisition window to be 3.7 ms at a frequency of 5 Hz. The physics program of the beam test can be summarized in the following points: \begin{itemize} \item Commissioning and calibration without absorber using 3 GeV positrons acting as minimum ionizing particles (MIPs); \item magnetic field tests up to 1 T; \item response to electrons with the fully equipped detector, i.e. sensitive parts {\it and} W absorber. 
\end{itemize} \subsection{Calibration runs.} \begin{figure}[!t] \centering \begin{tabular}{ll} \includegraphics[width=2.6in]{MIPsummary_title.eps} & \includegraphics[width=2.6in]{SNsummary_title.eps} \end{tabular} \caption{Result of the MIP position calculation and signal-over-noise calculation for all calibrated cells.} \label{mipandSN} \end{figure} The main calibration was performed by directing the 3 GeV positron beam at 81 positions equally distributed over the surface of the detector. These data were used for pedestal estimation and energy calibration. The calibration and pedestal analysis was done for all single layers, without requiring track reconstruction. We calculated the pedestal position for every channel and SCA by fitting the distribution of non-triggered hits with a Gaussian function. Afterwards, we subtracted these values from the distribution of triggered hits and fitted the resulting distributions with a Landau function convolved with a Gaussian. The most probable value of the convolved function is taken as the MIP value. We obtained a raw energy calibration spread of 5\% among all cells, with 98\% of all available cells being successfully fitted. The results are summarized in figure \ref{mipandSN}, leftmost plot. The signal-over-noise ratio, defined as the ratio between the most probable value of the Landau-Gaussian fit to the data (pedestal subtracted) and the pedestal width (calculated as the standard deviation of the Gaussian distribution fitted to the data), was also estimated. The average value over all channels and slabs is 20.4. The results are summarized in figure \ref{mipandSN}, rightmost plot. \begin{figure}[!t] \centering \begin{tabular}{ll} \includegraphics[width=2.6in]{MIP3peaks.eps} & \includegraphics[width=2.6in]{MIPefficiency_hitsintrack5_hitouttrack0_bcidcut2850_chip.eps} \\ \end{tabular} \caption{Left: the single cell energy distribution (for all calibrated cells) for 3 GeV positron tracks acting as MIPs. 
Right: hit efficiency for all layers and ASICs in high-purity samples of tracks of MIP-like particles.} \label{miplog} \end{figure} After pedestal subtraction, calibration and track reconstruction, we could finalize the MIP calibration by selecting tracks that cross the detector parallel to its normal. The results are shown in figure \ref{miplog}, where, in the leftmost plot, the single cell energy distribution for MIPs is shown for all calibrated cells. The distribution reveals the presence of a second and a third peak due to events involving multiple particles crossing the detector. In the rightmost plot, we summarize the results of the hit detection efficiency in tracks made of MIP-like particles. To evaluate the efficiency we define a high-purity sample of events of positrons traversing the detector perpendicularly by selecting tracks with at least 5 layers (of 7 possible) with a hit in exactly the same cell. Afterwards we check whether or not the other layers have a hit in the same cell. Finally, we repeat this for all layers and cells and show the result for all ASICs and layers. With a few exceptions, the efficiency is compatible with $100\%$. The lower efficiencies may be related to some channels having effectively high trigger thresholds. This can be improved with the next generation of SKIROC (the 2a), which allows for threshold optimization of single channels. To avoid confusing inefficiency with blindness of the detector due to saturation of the DAQ (for example, if a noisy channel fills up the memory before the physical signal), we constrain the analysis to events that are stored before the last-but-one cell in the SCA. Finally, a calibration run with the beam hitting the slabs at an angle of $\sim 45^{\circ}$ was done. The purpose of this run was to prove that the MIP position scales with a factor $\sqrt{2}$ due to the larger path crossed by the positron in the Si wafer. 
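The expected scaling is simple geometry; a check assuming a wafer of thickness $d$ crossed at an angle $\theta$ from the normal, so the track length is $d/\cos\theta$:

```python
import math

def path_length_um(thickness_um: float, theta_deg: float) -> float:
    """Track length in a wafer of given thickness for a particle
    incident at theta degrees from the wafer normal."""
    return thickness_um / math.cos(math.radians(theta_deg))

# 320 um wafer: at 45 degrees the path is sqrt(2) times longer.
ratio = path_length_um(320.0, 45.0) / path_length_um(320.0, 0.0)
print(math.isclose(ratio, math.sqrt(2.0)))  # True
```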
Preliminary results show perfect agreement with this expectation. \subsection{Magnetic field tests.} For this test, a special PVC structure was designed and produced to support the slab. The purpose of the test was twofold: first, to prove that the DAQ, all electronic devices and the mechanical slab itself were able to handle strong magnetic fields; second, to prove the stability of the performance during these tests. We took several runs with 0, 0.5 and 1 T magnetic fields, with and without the 3 GeV positron beam. We observed that the pedestal positions are independent of the magnetic field (within one per mille). The MIP position increased, on average, by 3\% for 1 T and 1.5\% for 0.5 T with respect to the 0 T case. This level of increase is expected, since positrons traversing the magnetic field hit the slab at a deflection angle, thereby increasing the path of the particle through the wafer. More detailed studies and comparisons with simulations are to come. \subsection{Response to electromagnetic showers.} \begin{figure}[!t] \centering \begin{tabular}{ll} \includegraphics[width=2.6in]{energy_z_profile_grid20_conf2.eps} & \includegraphics[width=2.6in]{4GeV_energy_z_profile.eps} \end{tabular} \caption{Raw electromagnetic shower profiles for different tungsten configurations and several beam energies. The x-axis shows the layer number; the y-axis shows the average fraction of {\it energy} (sum of the ADC counts in all triggered cells in an event, considering only events where all layers had at least one hit) measured in every layer. } \label{showers} \end{figure} The purpose of the test was to study the interaction of positrons with the absorber material, resulting in electromagnetic showers. We inserted W plates of different thicknesses between the sensitive layers and performed a scan of the positron beam energy from 1 to 5.8 GeV. We tested the response of the detector with three different configurations of the tungsten distribution.
The accumulated amount of tungsten, in radiation length units, $X_{0}$, in front of each of the modules is: \begin{itemize} \item W-configuration 1: $0.6,1.2,1.8,2.4,3.6,4.8$ and $6.6~X_{0}$ \item W-configuration 2: $1.2,1.8,2.4,3.6,4.8,6.6$ and $8.4~X_{0}$ \item W-configuration 3: $1.8,2.4,3.6,4.8,6.6,8.4$ and $10.2~X_{0}$ \end{itemize} Preliminary results on the raw electromagnetic shower profiles are shown in figure \ref{showers} for several beam energies with W-configuration 2 (left) and for a 4 GeV beam energy with the three different W-configurations (right). This first approach to the data looks promising, but further studies and comparisons with simulations are needed. \acknowledgments This project has received funding from the European Union{\textquotesingle}s Horizon 2020 Research and Innovation program under Grant Agreement no. 654168. This work was supported by the P2IO LabEx (ANR-10-LABX-0038), excellence project HIGHTEC, in the framework {\textquotesingle}Investissements d{\textquotesingle}Avenir{\textquotesingle} (ANR-11-IDEX-0003-01) managed by the French National Research Agency (ANR). The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union{\textquotesingle}s Seventh Framework Programme (FP7/2007-2013) under REA grant agreement, PCOFUND-GA-2013-609102, through the PRESTIGE programme coordinated by Campus France. The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).
\section{A Block-wise, Asynchronous, and Distributed ADMM Algorithm} \label{sec:algorithm} In this section, we present our proposed \emph{block-wise}, \emph{asynchronous} and \emph{distributed} ADMM algorithm (AsyBADMM) for the general consensus problem. For ease of presentation, we first describe a synchronous version, motivated by the basic distributed ADMM for \emph{non-convex} optimization problems, as a starting point. \subsection{Block-wise Synchronous ADMM} The update rules presented in Sec.~\ref{sec:basicADMM} represent the basic synchronous distributed ADMM approach \cite{boyd2011distributed}. To solve the general form consensus problem, our block-wise version extends this synchronous algorithm mainly by 1) approximating the update rule of $\mathbf x_i$ with a simpler expression under non-convex objective functions, and 2) converting the full-vector updates of variables into block-wise updates only for $(i,j)\in \mathcal E$. Generally speaking, in each synchronized epoch $t$, each worker node $i$ updates all blocks of its local primal variables $x_{i,j}$ and dual variables $y_{i,j}$ for $j\in\mathcal N(i)$, and pushes these updates to the corresponding servers. Each server $j$, once it has received $x_{i,j}$ and $y_{i,j}$ from all $i\in\mathcal N(j)$, will update $z_j$ accordingly by aggregating these received blocks. Specifically, at epoch $t$, the basic synchronous distributed ADMM performs the following update for $\mathbf x_i$: {\small \begin{equation*} \begin{split} \mathbf{x}_{i}^{t+1} &= \mathop{\arg\min}_{\mathbf{x}_i} f_i(\mathbf{x}_i)+ \sum_{j\in\mathcal{N}(i)} \innerprod{y_{i,j}^{t}, x_{i,j} - {z_j}^{t}} \\ &\quad + \sum_{j\in\mathcal{N}(i)} \frac{\rho_i}{2} \norm{x_{i,j} - {z}_j^{t} }^2. \end{split} \end{equation*} } However, this subproblem is hard to solve exactly, especially when $f_i$ is non-convex.
To handle non-convex objectives, we adopt an alternative solution \cite{hong2017distributed,hong2016convergence} to this subproblem through the following first-order approximation of $f_i$ at $\mathbf z^t$: {\small \begin{align} \mathbf{x}_i^{t+1} &\approx \mathop{\arg\min}_{\mathbf{x}_i} f_i({\mathbf{z}}^{t}) + \innerprod{\nabla f_i(\mathbf{z}^{t}), \mathbf{x}_i - \mathbf{z}^{t}} \nonumber \\ &\quad + \sum_{j\in\mathcal{N}(i)}\left(\innerprod{y_{i,j}^{t}, x_{i,j} - {z}_j^{t}} + \frac{\rho_i}{2} \norm{x_{i,j} - {z}_j^{t} }^2 \right) \nonumber \\ &= {\mathbf{z}}^{t} - \frac{\nabla f_i({\mathbf{z}}^{t})+\mathbf{y}_{i}^t}{\rho_{i}}, \label{eq:sub_x} \end{align} } where \eqref{eq:sub_x} can be readily obtained by setting the partial derivative w.r.t. $\mathbf{x}_i$ to zero. The above full-vector update on $\mathbf x_i$ is equivalent to the following block-wise updates on each block $x_{i,j}$ by worker $i$: \begin{equation} x_{i,j}^{t+1} = {z}_{j}^{t} - \frac{\nabla_j f_i({\mathbf{z}}^{t})+y_{i,j}^t}{\rho_i}, \label{eq:sync_update_x} \end{equation} where $\nabla_j f_i(\mathbf{z}^{t})$ is the partial derivative of $f_i$ w.r.t. $z_j$, evaluated at $\mathbf{z}^{t}$. Furthermore, the dual variable blocks $y_{i,j}$ can also be updated in a block-wise fashion as follows: \begin{equation} y_{i,j}^{t+1} = y_{i,j}^t + \rho_i(x_{i,j}^{t+1} - {z}_{j}^{t}). \label{eq:sync_update_y} \end{equation} Note that each $f_i$ in fact depends only on part of ${\mathbf{z}}^{t}$, and thus each worker $i$ only needs to pull the relevant blocks $z_j$ for $j\in\mathcal N(i)$. Again, we put the full vector $\mathbf z$ in $f_i(\cdot)$ just to simplify notation. On the server side, server $j$ will update $z_j$ based on the newly updated $x^{t+1}_{i,j}$, $y^{t+1}_{i,j}$ received from all workers $i$ such that $i \in \mathcal{N}(j)$.
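As an illustration, the worker-side block updates \eqref{eq:sync_update_x} and \eqref{eq:sync_update_y} can be sketched in a few lines of NumPy (a minimal sketch with illustrative names, not part of any reference implementation):

```python
import numpy as np

def worker_block_update(z_j, grad_j, y_ij, rho_i):
    # Primal step: closed-form minimizer of the first-order
    # approximation for block j of worker i.
    x_new = z_j - (grad_j + y_ij) / rho_i
    # Dual ascent step on the consensus constraint x_{i,j} = z_j.
    y_new = y_ij + rho_i * (x_new - z_j)
    return x_new, y_new

rng = np.random.default_rng(1)
z_j, grad_j, y_ij = rng.normal(size=(3, 4))
x_new, y_new = worker_block_update(z_j, grad_j, y_ij, rho_i=5.0)
# After the pair of updates, grad_j + y_new = 0 holds, a fact
# used repeatedly in the convergence analysis.
```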
Again, the $\mathbf z$ update in the basic synchronous distributed ADMM can be rewritten into the following block-wise format (with a regularization term introduced): \begin{align} z_j^{t+1} &= \mathop{\arg\min}_{z_j \in \mathcal{X}_j} h_j(z_j) + \frac{\gamma}{2} \norm{z_j - z_j^t}^2 \nonumber \\ &\quad + \sum_{i \in \mathcal{N}(j)} \left( \innerprod{y_{i,j}^{t+1}, x_{i,j}^{t+1} - z_j} + \frac{\rho_i}{2} \norm{x_{i,j}^{t+1} - z_j }^2 \right) \nonumber \\ &= \prox[\mu]{h} \left( \frac{\gamma z_j^t + \sum_{i \in \mathcal{N}(j)}w_{i,j}^{t+1} }{\gamma+\sum_{i \in \mathcal{N}(j)} \rho_i}\right), \label{eq:sync_update_z} \end{align} where $w_{i,j}^{t+1}$ is defined as \begin{equation} w_{i,j}^{t+1}:=\rho_i x_{i, j}^{t+1} + y_{i,j}^{t+1}, \label{eq:push_w} \end{equation} and the proximal operator is defined as \begin{equation} \prox[\mu]{h}(x) := \mathop{\arg\min}_{u \in \mathcal{X}_j} h(u) + \frac{\mu}{2} \norm{x - u}^2. \end{equation} Furthermore, the regularization term $\frac{\gamma}{2} \norm{z_j - z_j^t}^2$ is introduced to stabilize the results, which will be helpful in the asynchronous case. In the update of $z_j$, the constant $\mu$ of the proximal operator is given by $\gamma + \sum_{i\in \mathcal{N}(j)} \rho_i$. It is now clear that it is sufficient for worker $i$ to send $w_{i,j}^{t+1}$ to server $j$ in epoch $t$. \subsection{Block-wise Asynchronous ADMM} We now take one step further to present a \emph{block-wise} \emph{asynchronous} \emph{distributed} ADMM algorithm, which is our main contribution in this paper. In the asynchronous algorithm, each worker $i$ uses a local epoch counter $t$ to keep track of how many times $\mathbf x_i$ has been updated, although different workers may be in different epochs due to random delays in computation and communication. Let us first focus on a particular worker $i$.
While worker $i$ is in epoch $t$, there is no guarantee that worker $i$ can download $\mathbf z^t$---different blocks $z_j$ in $\mathbf z$ may have been updated different numbers of times, and worker $i$ has no way of knowing this. Therefore, we use $\tilde{z}_j^t$ to denote the \emph{latest copy} of $z_j$ on server $j$ while worker $i$ is in epoch $t$, and ${\tilde{\mathbf{z}}}^{t} = (\tilde{z}_1^t,\ldots, \tilde{z}_M^t)$. Then, the original synchronous updating equations \eqref{eq:sync_update_x} and \eqref{eq:sync_update_y} for $x_{i,j}$ and $y_{i,j}$, respectively, are simply replaced by \begin{align} x_{i,j}^{t+1} &= \tilde{z}_{j}^{t} - \frac{\nabla_j f_i({\tilde{\mathbf{z}}}^{t})+y_{i,j}^t}{\rho_i}, \label{eq:update_x} \\ y_{i,j}^{t+1} &= y_{i,j}^t + \rho_i(x_{i,j}^{t+1} - \tilde{z}_{j}^{t}). \label{eq:update_y} \end{align} Now let us focus on the server side. In the asynchronous case, the variables $w^{t+1}_{i,j}$ from different workers $i$ do not generally arrive at server $j$ at the same time. Hence, we update $z^{t+1}_j$ \emph{incrementally}, as soon as a $w^{t}_{i,j}$ is received from some worker $i$, until $w^{t}_{i,j}$ has been received from every $i\in\mathcal N(j)$, at which point the update of $z^{t+1}_j$ is \emph{fully finished}. We use $\tilde z^{t+1}_j$ to denote the working (dirty) copy of $z^{t+1}_j$, for which the update may not yet be fully finished by all workers. Then, the update of $\tilde z^{t+1}_j$ is given by \begin{align} \tilde z_j^{t+1} &= \prox[\mu]{h} \left( \frac{\gamma \tilde z_j^t + \sum_{i \in \mathcal{N}(j)}\tilde w_{i,j}}{\gamma+\sum_{i \in \mathcal{N}(j)} \rho_i}\right), \label{eq:update_z} \end{align} where $\tilde w_{i,j} = w_{i,j}^t$ if $w_{i,j}^t$ has just been received from worker $i$, triggering the above update; for all other $i$, $\tilde w_{i,j}$ is the latest version of $w_{i,j}$ that server $j$ holds for worker $i$.
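The server-side bookkeeping just described can be sketched as follows (an illustrative Python sketch; the choice $h_j(u)=\lambda|u|$ with a box constraint $\mathcal X_j=[\mathrm{lo},\mathrm{hi}]$, and all names, are our own assumptions, not fixed by the algorithm):

```python
import numpy as np

def prox_l1_box(v, mu, lam=0.5, lo=-1.0, hi=1.0):
    # Proximal operator of h(u) = lam*|u| over the box [lo, hi]:
    # soft-threshold, then project (valid because the 1-D
    # objective is convex, so clipping the unconstrained
    # minimizer gives the constrained one).
    soft = np.sign(v) * np.maximum(np.abs(v) - lam / mu, 0.0)
    return np.clip(soft, lo, hi)

class BlockServer:
    """Server j: caches the latest w_{i,j} per worker and refreshes
    the dirty copy of z_j whenever any single w block arrives."""
    def __init__(self, z0, rhos, gamma):
        self.z = z0
        self.rhos = rhos                               # {worker i: rho_i}
        self.w = {i: np.zeros_like(z0) for i in rhos}  # cached \tilde w_{i,j}
        self.gamma = gamma
        self.mu = gamma + sum(rhos.values())

    def receive(self, i, w_ij):
        self.w[i] = w_ij                               # overwrite the stale copy
        num = self.gamma * self.z + sum(self.w.values())
        self.z = prox_l1_box(num / self.mu, self.mu)   # incremental update
        return self.z

srv = BlockServer(np.zeros(1), {0: 1.0, 1: 1.0}, gamma=1.0)
z1 = srv.receive(0, np.array([3.0]))
```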
The regularization coefficient $\gamma>0$ helps to stabilize convergence in the asynchronous execution with random delays. \begin{algorithm}[htbp] \caption{AsyBADMM: Block-wise Asynchronous ADMM} \label{alg:Asybadmm} \underline{Each \textbf{worker} $i$ asynchronously performs:} \begin{algorithmic}[1] \State pull $\mathbf{z}^0$ to initialize $\mathbf{x}^0=\mathbf{z}^0$ \State initialize $\mathbf{y}^0$ as the zero vector. \For {$t=0$ \textbf{to} $T-1$} \State select an index $j_t\in\mathcal{N}(i)$ uniformly at random \State compute gradient $\nabla_{j_t} f(\mathbf{\tilde z}^{t})$ \State update $x_{i, j_t}^{t+1}$ and $y_{i, j_t}^{t+1}$ by \eqref{eq:update_x} and \eqref{eq:update_y} \State push $w_{i,j_t}^{t+1}$ as defined by \eqref{eq:push_w} to server $j_t$ \State pull the current models $\tilde{\mathbf{z}}^{t+1}$ from servers \EndFor \end{algorithmic} \underline{Each \textbf{server} $j$ asynchronously performs:} \begin{algorithmic}[1] \State initialize $\tilde z^0_{j}$ and $\tilde w_{i,j}$ for all $i \in \mathcal{N}(j)$ \State Upon receiving ${w}_{i,j}^t$ from a worker $i$: \State \quad let $\tilde{w}_{i,j}\gets {w}_{i,j}^t$ \State \quad update $\tilde z_{j}^{t+1}$ by \eqref{eq:update_z}. \State \quad if $w_{i,j}^t$ has been received for all $i \in \mathcal{N}(j)$ then $z_{j}^{t+1}\gets\tilde z_{j}^{t+1}$. \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:Asybadmm} describes the entire block-wise asynchronous distributed ADMM. Note that in Algorithm~\ref{alg:Asybadmm}, the block $j$ is randomly and independently selected from $\mathcal{N}(i)$ according to a uniform distribution, which is common in practice. Due to the page limit, we only consider the random block selection scheme, and we refer readers to other options including Gauss-Seidel and Gauss-Southwell block selection in the literature, e.g., \cite{hong2016unified}. 
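To make the mechanics concrete, here is a toy synchronous run with scalar blocks, $f_i(z)=(z-a_i)^2/2$, $h=0$ and no constraint (so the proximal operator is the identity); all data and parameter values below are illustrative. The consensus iterate should approach the minimizer $\bar a$ of $\sum_i f_i$:

```python
import numpy as np

a = np.array([1.0, 4.0, -2.0, 5.0])  # worker-local data, f_i(z) = (z - a_i)^2 / 2
N, rho, gamma = len(a), 5.0, 1.0     # rho > 4 L_{i,j} = 4, as the analysis suggests
z, y = 0.0, np.zeros(N)

for _ in range(300):
    grad = z - a                     # each worker's gradient at the current z
    x = z - (grad + y) / rho         # primal block updates
    y = y + rho * (x - z)            # dual block updates
    y_new = y                        # dual blocks after the ascent step
    w = rho * x + y                  # messages w_{i,j} pushed to the server
    z = (gamma * z + w.sum()) / (gamma + N * rho)  # server update (prox = identity)
```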
We conclude this section with a few remarks on implementation issues that characterize the key features of our proposed block-wise asynchronous algorithm, which differs from the full-vector updates in the literature \cite{hong2017distributed}. \emph{Firstly}, model parameters are stored in blocks, so different workers can update different blocks asynchronously in parallel, which takes advantage of the popular Parameter Server architecture. \emph{Secondly}, workers can pull $\mathbf{z}$ while others are updating some blocks, enhancing concurrency. \emph{Thirdly}, in our implementation, workers compute both gradients and local variables. In contrast, in the full-vector ADMM \cite{hong2017distributed}, workers are only responsible for computing gradients, and therefore all previously computed and transmitted $\tilde{w}_{i,j}$ must be cached on servers, with non-negligible memory overhead. \section{Key Lemmas} \begin{lem} Suppose Assumptions~\ref{asmp:lipschitz}--\ref{asmp:delay} are satisfied. Then we have \begin{equation} \norm{y_{i,j}^{t+1} - y_{i,j}^t }^2 \leq L_{i,j}^2 (T_{i,j} + 1) \sum_{t'=0}^{T_{i,j}} \norm{z_j^{t+1-t'} - z_j^{t-t'}}^2. \label{eq:ybound} \end{equation} \label{lem:ybound} \end{lem} \begin{proof} For simplicity, we say that $(i,j)$ is performed at epoch $t$ when worker $i$ updates block $j$ at epoch $t$. If update $(i,j)$ is not performed at epoch $t$, the inequality holds trivially, as $y_{i,j}^{t+1} = y_{i,j}^t$. Hence, we only consider the case where $(i,j)$ is performed at epoch $t$. Note that in this case, we have \begin{equation} \nabla_j f_i(\tilde{\mathbf{z}}^{t+1}) + y_{i,j}^t + \rho_{i} (x_{i,j}^{t+1} - \tilde{z}_{j}^{t+1}) = 0. \end{equation} Since $y_{i,j}^{t+1} = y_{i,j}^t + \rho_{i}(x_{i,j}^{t+1} - \tilde{z}_{j}^{t+1})$, we have \begin{equation} \nabla_j f_i(\tilde{\mathbf{z}}^{t+1}) + y_{i,j}^{t+1} = 0.
\end{equation} Therefore, we have \begin{equation*} \begin{split} \norm{y_{i,j}^{t+1} - y_{i,j}^t } &= \norm{\nabla_j f_i(\tilde{\mathbf{z}}^{t+1}) - \nabla_j f_i(\tilde{\mathbf{z}}^{t})} \\ &\leq L_{i,j} \norm{\tilde{z}_j^{t+1} - \tilde{z}_j^{t}}. \end{split} \end{equation*} Since the actual update time for $\tilde{z}_j^{t+1}$ lies in $\{t+1, t, \ldots, t+1-T_{i,j}\}$, and that for $\tilde{z}_j^{t}$ in $\{t, \ldots, t-T_{i,j}\}$, we have \begin{equation} \norm{y_{i,j}^{t+1} - y_{i,j}^t }^2 \leq L_{i,j}^2 (T_{i,j} + 1) \sum_{t'=0}^{T_{i,j}} \norm{z_j^{t+1-t'} - z_j^{t-t'}}^2, \end{equation} which proves the lemma. \end{proof} \begin{lem} At epoch $t$, we have \begin{equation} \nabla_j f_{i}(\tilde{\mathbf{z}}^{t+1}) + y_{i,j}^t + \rho_{i}(x_{i,j}^{t+1} - {z}_{j}^{t+1}) = \rho_{i} (\tilde{z}_{j}^{t+1} - z_{j}^{t+1}). \end{equation} \label{lem:4} \end{lem} \begin{proof} The update of $x_{i,j}^{t+1}$ is performed as follows: \begin{equation*} x_{i,j}^{t+1} = \mathop{\arg\min}_{x_{i,j}} f_i(\tilde{\mathbf{z}}^{t+1}) + \innerprod{\nabla_j f_{i}(\tilde{\mathbf{z}}^{t+1}), x_{i,j} - \tilde{z}_{j}^{t+1}} + \innerprod{y_{i,j}^t, x_{i,j} - z_{j}^{t+1}} + \frac{\rho_{i}}{2}\norm{x_{i,j} - z_{j}^{t+1}}^2. \end{equation*} Thus, we have \begin{equation*} \nabla_j f_{i}(\tilde{\mathbf{z}}^{t+1}) + y_{i,j}^t + \rho_{i}(x_{i,j}^{t+1} - \tilde{z}_{j}^{t+1}) = 0, \end{equation*} and therefore \begin{equation*} \nabla_j f_{i}(\tilde{\mathbf{z}}^{t+1}) + y_{i,j}^t + \rho_{i}(x_{i,j}^{t+1} - {z}_{j}^{t+1}) = \rho_{i} (\tilde{z}_{j}^{t+1} - z_{j}^{t+1}). \end{equation*} \end{proof} \begin{lem} Suppose Assumptions~\ref{asmp:lipschitz}--\ref{asmp:delay} are satisfied.
Then we have \begin{eqnarray} &\quad& L(\mathbf{X}^{T}, \mathbf{Y}^{T}, \mathbf{z}^{T}) - L(\mathbf{X}^0, \mathbf{Y}^0, \mathbf{z}^0) \\ &\leq& - \sum_{t=0}^{T-1}\sum_{(i, j) \in \mathcal{E}}\beta_{i} (\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2) - \sum_{t=0}^{T-1}\sum_{(i,j)\in\mathcal{E}} \alpha_{j} \norm{z_{j}^{t+1} - z_{j}^{t}}^2, \label{eq:xzbound} \end{eqnarray} where \begin{align} \alpha_j &:= (\gamma + \rho_i) - \sum_{i\in\mathcal{N}(j)}\left(\frac{1}{\rho_i} + \frac{1}{2} \right) L_{i,j}^2 (T_{i,j}+1)^2 - \sum_{i \in \mathcal{N}(j)} \frac{(4L_{i,j}+\rho_i+1)T_{i,j}^2}{2}, \\ \beta_i &:= \frac{\rho_i - \max_{j \in \mathcal{N}(i)} 4L_{i,j}}{2|\mathcal{N}(i)|}. \end{align} \label{lem:iteration} \end{lem} \begin{lem} Suppose that Assumptions~\ref{asmp:lipschitz}--\ref{asmp:delay} hold. Then the sequence of solutions $\{\mathbf{X}^t, \mathbf{z}^t, \mathbf{Y}^t\}$ satisfies \begin{equation} \lim_{t \to \infty}L(\mathbf{X}^t, \mathbf{z}^t, \mathbf{Y}^t) \geq \underline{f} - \mathrm{diam}^2(\mathcal{X}) \sum_{(i,j)\in \mathcal{E}} \frac{L_{i,j}}{2} > -\infty. \end{equation} \label{lem:boundedbelow} \end{lem} \section{Proof of Lemma~\ref{lem:iteration}} Next, we bound the gap between two consecutive augmented Lagrangian values by breaking it down into three steps, corresponding to the updates of $\mathbf{X}$, $\mathbf{Y}$ and $\mathbf{z}$: \begin{equation} \begin{split} & L(\mathbf{X}^{t+1}, \mathbf{Y}^{t+1}, \mathbf{z}^{t+1}) - L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t) \\ &= L(\mathbf{X}^{t}, \mathbf{Y}^{t}, \mathbf{z}^{t+1}) - L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t) \\ &\quad + L(\mathbf{X}^{t+1}, \mathbf{Y}^{t}, \mathbf{z}^{t+1}) - L(\mathbf{X}^{t}, \mathbf{Y}^t, \mathbf{z}^{t+1}) \\ &\quad + L(\mathbf{X}^{t+1}, \mathbf{Y}^{t+1}, \mathbf{z}^{t+1}) - L(\mathbf{X}^{t+1}, \mathbf{Y}^{t}, \mathbf{z}^{t+1}). \end{split} \end{equation} To prove Lemma~\ref{lem:iteration}, we bound the above three gaps individually.
Firstly, we bound the gap due to the update of $\mathbf{X}$. For each worker $i$, at epoch $t$, we use the following auxiliary function for the convergence analysis: \begin{equation} l_i(\mathbf{x}_i, y_{i,j}, z_j) := f_i(\mathbf{x}_i) + \innerprod{y_{i,j}, x_{i,j} - z_{j}} + \frac{\rho_{i}}{2}\norm{x_{i,j} - z_{j}}^2. \label{eq:lem2-tmp-1} \end{equation} To simplify the proof in this section, we consider the case where only one block is updated. Therefore, only block $j$ in $\mathbf{x}_i^{t+1}$ differs from $\mathbf{x}_i^{t}$, and similarly for $\mathbf{y}_i^{t+1}$ and $\mathbf{z}^{t+1}$. We will use $\tilde{\mathbf{z}}^{t}:=\mathbf{z}^{t(i,j)}$ as the delayed version of $\mathbf{z}$ in this proof. \begin{lem} For node $i$, we have the following inequality to bound the gap after updating ${x}_{i,j}$: \begin{equation} \begin{split} l_i(\mathbf{x}_i^{t+1}, y_{i,j}^t, z_j^{t+1}) - l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) &\leq \frac{L_{i,j}}{2}\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \frac{L_{i,j}+\rho_{i}}{2} \norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2 \\ &\quad - \frac{\rho_{i}}{2}\left( \norm{x_{i,j}^{t} - z_j^{t+1} }^2 + \norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1} }^2 \right). \end{split} \end{equation} \label{lem:diff1-1} \end{lem} \begin{proof} From the block Lipschitz assumption and the updating rule, we have \begin{align} f_i(\mathbf{x}_i^{t+1}) \leq f_i(\mathbf{x}^{t}) + \innerprod{\nabla_j f_i(\mathbf{x}^{t}), {x}_{i,j}^{t+1} - {x}_{i,j}^{t}} + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2.
\end{align} By the definition of $l_i(\cdot)$, we have \begin{align} l_i(\mathbf{x}_i^{t+1}, y_{i,j}^t, z_j^{t+1}) &\leq l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) + \innerprod{\nabla_j f_i(\mathbf{x}^{t}), {x}_{i,j}^{t+1} - {x}_{i,j}^{t}} + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2 + \innerprod{y_j^t, x_{i,j}^{t+1} - x_{i,j}^t} \nonumber \\ &\quad + \frac{\rho_{i}}{2}\norm{x_{i,j}^{t+1} - z_j^{t+1} }^2 - \frac{\rho_{i}}{2}\norm{x_{i,j}^{t} - z_j^{t+1} }^2 \\ &= l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2 + \innerprod{\nabla_j f_i(\mathbf{x}^{t})+y_j^t, {x}_{i,j}^{t+1} - {x}_{i,j}^{t}} \nonumber \\ &\quad + \frac{\rho_{i}}{2}\left( \norm{x_{i,j}^{t+1} - z_j^{t+1} }^2 - \norm{x_{i,j}^{t} - z_j^{t+1} }^2 \right). \end{align} The right-hand side is a quadratic function w.r.t. $x_{i,j}^{t+1}$ and therefore strongly convex, so we have \begin{align} l_i(\mathbf{x}_i^{t+1}, y_{i,j}^t, z_j^{t+1}) &\leq l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2 + \frac{\rho_{i}}{2}\norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 - \frac{\rho_{i}}{2}\norm{x_{i,j}^{t} - z_j^{t+1} }^2 \nonumber \\ &\quad + \innerprod{\nabla_j f_i(\mathbf{x}_i^{t+1})+y_j^t + \rho_{i}(x_{i,j}^{t+1}-z_j^{t+1}), \tilde{z}_j^{t+1} - x_{i,j}^t} - \frac{\rho_{i}}{2} \norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1} }^2 \end{align} By Lemma~\ref{lem:4}, we have \begin{align*} \nabla_j f_i(\mathbf{x}_i^{t+1})+y_j^t + \rho_{i}(x_{i,j}^{t+1}-z_j^{t+1}) =& \nabla_j f_i(\tilde{\mathbf{z}}^{t+1})+y_j^t + \rho_{i}(x_{i,j}^{t+1}-z_j^{t+1}) + (\nabla_j f_i(\mathbf{x}_i^{t+1}) - \nabla_j f_i(\tilde{\mathbf{z}}^{t+1})) \\ =& \rho_{i}(\tilde{z}_j^{t+1} - z_j^{t+1}) + (\nabla_j f_i(\mathbf{x}_i^{t+1}) - \nabla_j f_i(\tilde{\mathbf{z}}^{t+1})).
\end{align*} Therefore, we have \begin{align*} l_i(\mathbf{x}_i^{t+1}, y_{i,j}^t, z_j^{t+1}) &\leq l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) + \innerprod{\nabla_j f_i(\mathbf{x}_i^{t+1}) - \nabla_j f_i(\tilde{\mathbf{z}}^{t+1}), \tilde{z}_j^{t+1} - z_j^{t+1}} \\ &\quad + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2 - \frac{\rho_{i}}{2}\norm{x_{i,j}^{t} - z_j^{t+1} }^2 - \frac{\rho_{i}}{2} \norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1} }^2 + \frac{\rho_{i}}{2}\norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 \\ &\leq l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) + \frac{L_{i,j}}{2}\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \frac{L_{i,j}+\rho_{i}}{2} \norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 + \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - {x}_{i,j}^{t}}^2 \nonumber \\ &\quad - \frac{\rho_{i}}{2}\left( \norm{x_{i,j}^{t} - z_j^{t+1} }^2 + \norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1} }^2 \right) \\ &\leq l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) + \frac{4L_{i,j}}{2}\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \frac{4L_{i,j}+\rho_{i}}{2} \norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 +\frac{3L_{i,j}}{2} \norm{x_{i,j}^t - z_j^{t+1}}^2 \nonumber \\ &\quad - \frac{\rho_{i}}{2}\left( \norm{x_{i,j}^{t} - z_j^{t+1} }^2 + \norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1} }^2 \right) \\ &\leq l_i(\mathbf{x}_i^t, y_{i,j}^t, z_j^{t+1}) - \left( \frac{\rho_{i}}{2}- \frac{4L_{i,j}}{2} \right)\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 - \left(\frac{\rho_{i}}{2} - \frac{3L_{i,j}}{2} \right) \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2 \\ &\quad + \frac{4L_{i,j}+\rho_{i}}{2} \norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2. \end{align*} \end{proof} \begin{cor} If block $j$ is randomly drawn from a uniform distribution, we have \begin{equation} \begin{split} \mathbb{E}_{j}[L_i(\mathbf{x}_i^{t+1}, \mathbf{y}_i^t, \mathbf{z}^{t+1})] &\leq \mathbb{E}_{j}[L_i(\mathbf{x}_i^{t}, \mathbf{y}_i^t, \mathbf{z}^{t+1})] - \frac{1}{|\mathcal{N}(i)|}\sum_{j\in \mathcal{N}(i)} \left( \frac{\rho_{i}}{2}- \frac{4L_{i,j}}{2} \right)\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2
\\ &\quad - \frac{1}{|\mathcal{N}(i)|}\sum_{j\in \mathcal{N}(i)} \left(\frac{\rho_{i}}{2} - \frac{3L_{i,j}}{2} \right) \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2 + \frac{1}{|\mathcal{N}(i)|}\sum_{j\in \mathcal{N}(i)} \frac{4L_{i,j}+\rho_{i}}{2} \norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 \end{split} \label{eq:diff_x_whole} \end{equation} \label{cor:diff1-1} \end{cor} \begin{lem} For node $i$, we have the following inequality to bound the gap after updating $\mathbf{y}_i$: \[l_i(\mathbf{x}_i^{t+1}, y_{i,j}^{t+1}, z_j^{t+1}) - l_i(\mathbf{x}_i^{t+1}, y_{i,j}^t, z_j^{t+1}) \leq \left(\frac{1}{\rho_{i}} + \frac{1}{2} \right) \norm{y_{i,j}^{t+1} - y_{i,j}^{t}}^2 + \frac{1}{2}\norm{\tilde{z}_{j}^{t+1} - z_{j}^{t+1}}^2.\] \label{lem:diff1-2} \end{lem} \begin{proof} From \eqref{eq:update_y} we have \begin{align*} l_i(\mathbf{x}_i^{t+1}, y_{i,j}^{t+1}, z_j^{t+1}) - l_i(\mathbf{x}_i^{t+1}, y_{i,j}^t, z_j^{t+1}) &= \innerprod{y_{i,j}^{t+1}-y_{i,j}^{t}, x_{i,j}^{t+1}-z_{j}^{t+1}} \\ &= \innerprod{y_{i,j}^{t+1}-y_{i,j}^{t}, x_{i,j}^{t+1}-\tilde{z}_{j}^{t+1}} + \innerprod{y_{i,j}^{t+1}-y_{i,j}^{t}, \tilde{z}_{j}^{t+1}-z_{j}^{t+1}} \\ &\leq \frac{1}{\rho_{i}}\norm{y_{i,j}^{t+1}-y_{i,j}^{t}}^2 + \frac{1}{2}\norm{y_{i,j}^{t+1} - y_{i,j}^t}^2 + \frac{1}{2}\norm{\tilde{z}_{j}^{t+1} - z_j^{t+1}}^2 \\ &= \left( \frac{1}{\rho_{i}} + \frac{1}{2} \right)\norm{y_{i,j}^{t+1}-y_{i,j}^{t}}^2 + \frac{1}{2}\norm{\tilde{z}_{j}^{t+1} - z_{j}^{t+1}}^2.
\end{align*} \end{proof} \begin{cor} If block $j$ is randomly drawn from a uniform distribution, we have \begin{equation} \begin{split} &\quad \mathbb{E}_{j}[L_i(\mathbf{x}_i^{t+1}, \mathbf{y}_i^{t+1}, \mathbf{z}^{t+1})] - \mathbb{E}_{j}[L_i(\mathbf{x}_i^{t+1}, \mathbf{y}_i^t, \mathbf{z}^{t+1})] \\ &\leq \frac{1}{|\mathcal{N}(i)|} \left( \frac{1}{\rho_{i}} + \frac{1}{2} \right) \sum_{j\in\mathcal{N}(i)}\norm{y_{i,j}^{t+1}-y_{i,j}^{t}}^2 + \frac{1}{2|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)} \norm{\tilde{z}_{j}^{t} - z_{j}^{t}}^2 \end{split} \label{eq:diff_y_whole} \end{equation} \label{cor:diff1-2} \end{cor} \begin{lem} After updating $z_j^t$ to $z_j^{t+1}$, we have \begin{equation} \mathbb{E}_j[l(\mathbf{X}^{t}, \mathbf{Y}^{t}, {z}_j^{t+1})] - \mathbb{E}_j[l(\mathbf{X}^{t}, \mathbf{Y}^{t}, {z}_j^t)] \leq - \sum_{i \in \mathcal{N}(j)}\frac{\gamma + \rho_{i}}{|\mathcal{N}(i)|} \cdot \norm{z_{j}^{t+1} - z_{j}^{t}}^2. \label{eq:diff_z_whole} \end{equation} \label{lem:diff1-3} \end{lem} \begin{proof} We begin the proof by analyzing block $j$. Let \[ l(\mathbf{X}, \mathbf{Y}, {z}_j):= h_j(z_j) + \sum_{i\in\mathcal{N}(j)} \innerprod{y_{i,j}, x_{i,j} - z_j} + \sum_{i\in\mathcal{N}(j)} \frac{\rho_{i}}{2}\norm{x_{i,j} - z_j}^2. \] Firstly, it is clear that $\innerprod{y_{i,j}, x_{i,j}-z_j} + \rho_{i}\norm{x_{i,j}-z_j}^2$ is a quadratic function and thus strongly convex. Then, we have: \begin{align*} &\quad \sum_{i\in \mathcal{N}(j)}\innerprod{y_{i,j}^{t}, x_{i,j}^{t}-z_j^{t+1}} + \frac{\rho_{i}}{2}\norm{x_{i,j}^{t}-z_j^{t+1}}^2 - \sum_{i\in \mathcal{N}(j)}\innerprod{y_{i,j}^{t}, x_{i,j}^{t}-z_j^{t}} - \frac{\rho_{i}}{2}\norm{x_{i,j}^{t}-z_j^{t}}^2 \\ &\leq \innerprod{-y_{i,j}^{t} - \rho_i(x_{i,j}^{t}-z_j^{t+1}), z_{j}^{t+1}-z_{j}^t} - \sum_{i\in\mathcal{N}(j)}\frac{\rho_i}{2}\norm{z_{j}^{t+1} - z_j^t}^2.
\end{align*} By the optimality in \eqref{eq:update_z}, we have \begin{align*} \innerprod{p_j^{t+1} - \sum_{i\in\mathcal{N}(j)} y_{i,j}^{t} + \rho_i(z_j^{t+1} - x_{i,j}^{t}) + \gamma(z_j^{t+1} - z_j^t), z_j^{t+1} - z_j^{t}} \leq 0, \end{align*} where $p_j^{t+1} \in \partial h_j(z_j^{t+1})$ is a subgradient. By convexity of $h_j$, we have \begin{align*} h_j(z_j^{t+1}) -h_j(z_j^{t}) &\leq \innerprod{p_j^{t+1}, z_j^{t+1} - z_j^{t}} \\ &\leq \innerprod{\sum_{i\in\mathcal{N}(j)} y_{i,j}^{t} - \rho_i(z_j^{t+1} - x_{i,j}^{t}) - \gamma(z_j^{t+1} - z_j^t), z_j^{t+1} - z_j^{t}} \end{align*} Therefore, by taking expectation on $j$, we have \begin{align*} &\quad \mathbb{E}_j \left[l(\mathbf{X}^{t}, \mathbf{Y}^{t}, {z}_j^{t+1})+\frac{\gamma}{2}\norm{z_{j}^{t+1}-z_j^t}^2 \right] - \mathbb{E}_j[l(\mathbf{X}^{t}, \mathbf{Y}^{t}, {z}_j^t)] \\ &\leq \innerprod{- \sum_{i\in\mathcal{N}(j)}\frac{1}{|\mathcal{N}(i)|}(y_{i,j}^{t} - \rho_i(x_{i,j}^{t}-z_j^{t+1})), z_{j}^{t+1}-z_{j}^t} - \sum_{i\in\mathcal{N}(j)}\frac{\rho_i}{2|\mathcal{N}(i)|}\norm{z_{j}^{t+1} - z_j^t}^2 \\ &\quad + \innerprod{\sum_{i\in\mathcal{N}(j)} \frac{1}{|\mathcal{N}(i)|}[y_{i,j}^{t} - \rho_i(z_j^{t+1} - x_{i,j}^{t}) - \gamma(z_j^{t+1} - z_j^t)], z_j^{t+1} - z_j^{t}} \\ &= -\sum_{i \in \mathcal{N}(j)}\frac{\gamma + 2\rho_{i}}{2 |\mathcal{N}(i)|} \cdot \norm{z_{j}^{t+1} - z_{j}^{t}}^2, \end{align*} which proves the lemma. \end{proof} We now proceed to prove Lemma~\ref{lem:iteration}. From Corollary~\ref{cor:diff1-1}--\ref{cor:diff1-2} and Lemma~\ref{lem:diff1-3}, we have three upper bounds when updating $x_{i,j}^t$, $y_{i,j}^t$ and $z_{j}^t$, respectively, and we observe that the coefficient of $\norm{x_{i,j}^{t}-z_j^{t+1}}^2$ can be made negative by assuming $\rho_i \geq 3L_{i,j}$, and similarly for $\norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1}}^2$ by assuming $\rho_i \geq 4L_{i,j} \geq 0$.
Therefore, let \[ \rho_i - 4 \max_{j \in \mathcal{N}(i)} L_{i,j} \geq 0,\] and then we can guarantee that, for all $(i,j) \in \mathcal{E}$, the coefficients of $\norm{x_{i,j}^{t}-z_j^{t+1}}^2$ and $\norm{x_{i,j}^{t+1} - \tilde{z}_j^{t+1}}^2$ are always negative. Then, the major challenge is to make the coefficient of $\norm{z_{j}^{t+1} - z_{j}^t}^2$ negative, which we address as follows: \begin{align*} \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} \left( \frac{1}{\rho_i} + \frac{1}{2} \right)\norm{y_{i,j}^{t+1} - y_{i,j}^{t}}^2 &\leq \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} \left( \frac{1}{\rho_i} + \frac{1}{2} \right)L_{i,j}^2(T_{i,j}+1)\sum_{t'=0}^{T_{i,j}}\norm{z_j^{t-t'} - z_j^{t-t'-1}}^2, \\ \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} \frac{4L_{i,j}+\rho_i+1}{2} \norm{\tilde{z}_{j}^{t+1} - z_{j}^{t+1}}^2 &\leq \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} \frac{4L_{i,j}+\rho_i+1}{2}\cdot T_{i,j}\sum_{t'= 0}^{T_{i,j}-1}\norm{z_j^{t+1-t'} - z_j^{t-t'} }^2.
\end{align*} We now combine equations \eqref{eq:diff_x_whole}, \eqref{eq:diff_y_whole} and \eqref{eq:diff_z_whole}, and sum over all workers $i$: \begin{align*} &\quad\ \ \mathbb{E}_j[L(\mathbf{X}^{t+1}, \mathbf{Y}^{t+1}, \mathbf{z}^{t+1})] - \mathbb{E}_j[L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t)] \\ &\leq - \sum_{(i, j) \in \mathcal{E}} \frac{1}{|\mathcal{N}(i)|} \left( \frac{\rho_{i}}{2}- \frac{4L_{i,j}}{2} \right)\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 - \sum_{(i, j) \in \mathcal{E}} \frac{1}{|\mathcal{N}(i)|} \left(\frac{\rho_{i}}{2} - \frac{3L_{i,j}}{2} \right) \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2 \\ &\quad + \sum_{(i, j) \in \mathcal{E}} \frac{4L_{i,j}+\rho_{i}+1}{2|\mathcal{N}(i)|}\cdot \norm{\tilde{z}_j^{t+1} - z_j^{t+1}}^2 + \sum_{(i,j)\in\mathcal{E}} \frac{1}{|\mathcal{N}(i)|} \left( \frac{1}{\rho_{i}} + \frac{1}{2} \right) \norm{y_{i,j}^{t+1}-y_{i,j}^{t}}^2 - \sum_{(i,j)\in\mathcal{E}}\frac{\gamma + \rho_{i}}{|\mathcal{N}(i)|} \cdot \norm{z_{j}^{t+1} - z_{j}^{t}}^2 \\ &\leq - \sum_{(i, j) \in \mathcal{E}} \frac{\rho_i - 4L_{i,j}}{2|\mathcal{N}(i)|} (\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2) - \sum_{(i,j)\in\mathcal{E}}\frac{\gamma + \rho_{i}}{|\mathcal{N}(i)|} \cdot \norm{z_{j}^{t+1} - z_{j}^{t}}^2 \\ &\quad + \sum_{(i,j)\in\mathcal{E}} \frac{1}{|\mathcal{N}(i)|} \left( \frac{1}{\rho_i} + \frac{1}{2} \right)L_{i,j}^2(T_{i,j}+1)\sum_{t'=0}^{T_{i,j}}\norm{z_j^{t+1-t'} - z_j^{t-t'}}^2 \\ &\quad + \sum_{(i,j)\in\mathcal{E}}\frac{4L_{i,j}+\rho_i+1}{2|\mathcal{N}(i)|}\cdot T_{i,j}\sum_{t'= 0}^{T_{i,j}-1}\norm{z_j^{t+1-t'} - z_j^{t-t'} }^2.
\end{align*} By taking the telescoping sum for $t=0, \ldots, T-1$, we have \begin{align*} &\quad\ \ \mathbb{E}_j[L(\mathbf{X}^{T}, \mathbf{Y}^{T}, \mathbf{z}^{T})] - \mathbb{E}_j[L(\mathbf{X}^0, \mathbf{Y}^0, \mathbf{z}^0)] \\ &\leq - \sum_{t=0}^{T-1}\sum_{(i, j) \in \mathcal{E}} \frac{\rho_i - 4L_{i,j}}{2|\mathcal{N}(i)|} (\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2) - \sum_{t=0}^{T-1}\sum_{(i,j)\in\mathcal{E}}\frac{\gamma + \rho_{i}}{|\mathcal{N}(i)|} \cdot \norm{z_{j}^{t+1} - z_{j}^{t}}^2 \\ &\quad + \sum_{t=0}^{T-1}\sum_{(i,j)\in\mathcal{E}} \left(\frac{L_{i,j}^2(T_{i,j}+1)^2}{|\mathcal{N}(i)|} \left( \frac{1}{\rho_i} + \frac{1}{2} \right) + \frac{(4L_{i,j}+\rho_i+1)T_{i,j}^2}{2|\mathcal{N}(i)|} \right) \norm{z_j^{t+1} - z_j^{t}}^2 \\ &\leq - \sum_{t=0}^{T-1}\sum_{(i, j) \in \mathcal{E}}\beta_{i} (\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2) - \sum_{t=0}^{T-1}\sum_{(i,j)\in\mathcal{E}} \alpha_{j} \norm{z_{j}^{t+1} - z_{j}^{t}}^2, \end{align*} where \begin{align*} \alpha_j &:= (\gamma + \rho_i) - \sum_{i\in\mathcal{N}(j)}\left(\frac{1}{\rho_i} + \frac{1}{2} \right) L_{i,j}^2 (T_{i,j}+1)^2 - \sum_{i \in \mathcal{N}(j)} \frac{(4L_{i,j}+\rho_i+1)T_{i,j}^2}{2}, \\ \beta_i &:= \frac{\rho_i - \max_{j \in \mathcal{N}(i)} 4L_{i,j}}{2|\mathcal{N}(i)|}. \end{align*} By making $\alpha_{j} > 0$ and $\beta_{i} > 0$ for all $(i,j) \in \mathcal{E}$, we prove the lemma. \section{Proof of Lemma~\ref{lem:boundedbelow}} \begin{proof} From the Lipschitz continuity assumption, we have
\begin{equation*} \begin{split} f_i(\mathbf{z}^{t+1}) &\leq f_i(\mathbf{x}_i^{t+1}) + \sum_{j \in \mathcal{N}(i)} \innerprod{\nabla_j f_i(\mathbf{x}_i^{t+1}), z_j^{t+1} - x_{i,j}^{t+1}} + \sum_{j \in \mathcal{N}(i)} \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - z_j^{t+1}}^2 \\ &= f_i(\mathbf{x}_i^{t+1}) + \sum_{j \in \mathcal{N}(i)} \innerprod{\nabla_j f_i(\mathbf{x}_i^{t+1}) - \nabla_j f_i(\mathbf{z}^{t+1}), z_j^{t+1} - x_{i,j}^{t+1}} \\ &\quad + \sum_{j \in \mathcal{N}(i)} \innerprod{\nabla_j f_i(\mathbf{z}^{t+1}), z_j^{t+1} - x_{i,j}^{t+1}} + \sum_{j \in \mathcal{N}(i)} \frac{L_{i,j}}{2} \norm{x_{i,j}^{t+1} - z_j^{t+1}}^2 \\ &\leq f_i(\mathbf{x}_i^{t+1}) + \sum_{j \in \mathcal{N}(i)} \innerprod{\nabla_j f_i(\mathbf{z}^{t+1}), z_j^{t+1} - x_{i,j}^{t+1}} + \sum_{j \in \mathcal{N}(i)} \frac{3L_{i,j}}{2} \norm{x_{i,j}^{t+1} - z_j^{t+1}}^2. \end{split} \end{equation*} Now we have \begin{eqnarray*} L(\mathbf{X}^{t+1}, \mathbf{z}^{t+1}, \mathbf{Y}^{t+1}) &=& h(\mathbf{z}^{t+1}) + \sum_{i=1}^N f_i(\mathbf{x}_i^{t+1}) + \sum_{(i,j) \in \mathcal{E}} \left( \innerprod{y_{i,j}^{t+1}, x_{i,j}^{t+1} - z_{j}^{t+1}} + \frac{\rho_{i}}{2} \norm{x_{i,j}^{t+1} - z_{j}^{t+1}}^2 \right) \\ &\geq&h(\mathbf{z}^{t+1})+ \sum_{i=1}^N f_i(\mathbf{z}^{t+1}) + \sum_{(i,j) \in \mathcal{E}} \innerprod{\nabla_j f_i(\tilde{\mathbf{z}}^{t+1}) - \nabla_j f_i(\mathbf{z}^{t+1}), z_j^{t+1} - x_{i,j}^{t+1}} \nonumber \\ && + \sum_{(i,j) \in \mathcal{E}} \frac{\rho_{i}-3L_{i,j}}{2} \norm{x_{i,j}^{t+1} - z_j^{t+1}}^2 \\ &\geq& h(\mathbf{z}^{t+1})+ \sum_{i=1}^N f_i(\mathbf{z}^{t+1}) + \sum_{(i,j) \in \mathcal{E}} \frac{\rho_{i}-3L_{i,j}}{2} \norm{x_{i,j}^{t+1} - z_j^{t+1}}^2 \nonumber \\ && -\sum_{(i,j) \in \mathcal{E}} L_{i,j}\norm{\tilde{\mathbf{z}}^{t+1} -\mathbf{z}^{t+1}} \norm{\mathbf{z}^{t+1} - \mathbf{x}_i^{t+1}} \\ &\geq& h(\mathbf{z}^{t+1})+ \sum_{i=1}^N f_i(\mathbf{z}^{t+1}) + \sum_{(i,j) \in \mathcal{E}} \left( \frac{\rho_{i}-4L_{i,j}}{2} \norm{x_{i,j}^{t+1} - z_j^{t+1}}^2 -
\frac{L_{i,j}}{2}\norm{\tilde{\mathbf{z}}^{t+1} -\mathbf{z}^{t+1}}^2 \right)\\ &\geq& h(\mathbf{z}^{t+1})+ \sum_{i=1}^N f_i(\mathbf{z}^{t+1}) - \sum_{(i,j) \in \mathcal{E}} \frac{L_{i,j}}{2}\norm{\tilde{\mathbf{z}}^{t+1} -\mathbf{z}^{t+1}}^2 \nonumber \\ &\geq& \underline{f} - \mathrm{diam}^2(\mathcal{X}) \sum_{(i,j) \in \mathcal{E}} \frac{L_{i,j}}{2} > -\infty. \end{eqnarray*} \end{proof} \subsection{Proof of Theorem~\ref{thm:convergence}} \begin{proof} From Lemma~\ref{lem:iteration}, we must have, as $t \to \infty$, \begin{equation} x_{i,j}^{t+1} - \tilde{z}_{j}^{t+1} \to 0, \quad z_{j}^{t+1} - z_{j}^{t} \to 0, \quad x_{i,j}^{t} - z_{j}^{t+1} \to 0,\quad \forall (i, j) \in \mathcal{E}. \end{equation} Given Lemma~\ref{lem:ybound}, we have $y_{i,j}^{t+1} - y_{i,j}^{t} \to 0$. Then \[ \norm{x_{i,j}^{t+1} - x_{i,j}^{t}} \leq \norm{x_{i,j}^{t+1} - \tilde{z}_{j}^{t+1}} + \norm{x_{i,j}^{t} - z_{j}^{t+1}} + \norm{\tilde{z}_{j}^{t+1} - z_{j}^{t+1}} \to 0, \] which proves \eqref{eq:kkt-3}, the first part. For the second part, we have the following inequality from the optimality condition of \eqref{eq:update_z}: \begin{align} 0 \in \partial h_j(z_j^{t+1}) - \sum_{i\in \mathcal{N}(j)} \left( y_{i,j}^{t+1} + \rho_{i}(x_{i,j}^{t+1} - z_j^{t+1}) + \gamma (z_j^t - z_j^{t+1}) \right). \end{align} From \eqref{eq:lim_z} and \eqref{eq:kkt-3}, we have \begin{equation} 0 \in \partial h_j(z_j^{*}) - \sum_{i\in \mathcal{N}(j)} y_{i,j}^{*}, \end{equation} which proves \eqref{eq:kkt-2}. Finally, from the optimality condition of \eqref{eq:update_x}, we obtain \eqref{eq:kkt-1}, the second part of the theorem. We now turn to prove the last part. Let $L'(\mathbf{X}, \mathbf{Y}, \mathbf{z}):= L(\mathbf{X}, \mathbf{Y}, \mathbf{z}) - h(\mathbf{z})$, which excludes $h(\mathbf{z})$ from $L(\mathbf{X}, \mathbf{Y}, \mathbf{z})$.
Then, we have \begin{align*} z_j - \nabla_{z_j} L'(\mathbf{X}, \mathbf{Y}, z_j) &= z_j + \sum_{i \in \mathcal{N}(j)} y_{i,j} + \sum_{i \in \mathcal{N}(j)} \rho_i (x_{i,j} - z_j)\\ &= z_j - \sum_{i \in \mathcal{N}(j)} \rho_i (z_j - x_{i,j} - \frac{y_{i,j}}{\rho_i}). \end{align*} Therefore, we have \begin{align} \norm{z_j^t - \prox{h}(z_j^t - \nabla_{z_j} L'(\mathbf{X}^t, \mathbf{Y}^t, z_j^t))} &\leq \norm{z_j^t - z_j^{t+1} + z_j^{t+1} - \prox{h}(z_j^t - \nabla_{z_j} L'(\mathbf{X}^t, \mathbf{Y}^t, z_j^t))} \nonumber \\ &\leq \norm{z_j^t - z_j^{t+1}} + \norm{z_j^{t+1} - \prox{h}(z_j^t - \sum_{i \in \mathcal{N}(j)} \rho_i (z_j^t - x_{i,j}^t - \frac{y_{i,j}^t}{\rho_i}))} \nonumber \\ &\leq \norm{z_j^t - z_j^{t+1}} + \lVert \prox{h}(z_j^{t+1} - \sum_{i\in\mathcal{N}(j)}\rho_i(z_j^{t+1} - x_{i,j}^{t} - \frac{y_{i,j}^{t}}{\rho_i}) + \gamma(z_j^{t+1}- z_j^{t})) \nonumber \\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad - \prox{h}(z_j^t - \sum_{i \in \mathcal{N}(j)} \rho_i (z_j^t - x_{i,j}^t - \frac{y_{i,j}^t}{\rho_i})) \rVert \label{eq:thm-3-1}\\ &\leq \left(2+\gamma+\sum_{i\in\mathcal{N}(j)}\rho_i \right)\norm{z_j^t - z_j^{t+1}}, \label{eq:thm-3-2} \end{align} where \eqref{eq:thm-3-1} is from the optimality in \eqref{eq:update_z} as \begin{align*} z_j^{t+1} = \prox{h}(z_j^{t+1} - \sum_{i\in\mathcal{N}(j)}\rho_i(z_j^{t+1} - x_{i,j}^{t} - \frac{y_{i,j}^{t}}{\rho_i}) + \gamma(z_j^{t+1}- z_j^{t})), \end{align*} and \eqref{eq:thm-3-2} is from the firm nonexpansiveness of the proximal operator.
Then, by the update rule of $x_{i,j}^{t+1}$, if $x_{i,j}$ is selected for update at epoch $t$, we have \begin{align*} \norm{\nabla_{x_{i,j}} L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t)}^2 &= \norm{\nabla_j f_i(\mathbf{x}_i^t) + \rho_{i}(x_{i,j}^t - z_j^t + \frac{y_{i,j}^t}{\rho_{i}})}^2 \\ &= \norm{\nabla_j f_i(\mathbf{x}_i^t) - \nabla_j f_i(\tilde{\mathbf{z}}^{t})+(y_{i,j}^t - y_{i,j}^{t-1}) + \rho_{i}(\tilde{z}_{j}^{t} - z_j^t)}^2 \\ &\leq 3\norm{\nabla_j f_i(\mathbf{x}_i^t) - \nabla_j f_i(\tilde{\mathbf{z}}^{t})}^2 + 3\norm{y_{i,j}^t - y_{i,j}^{t-1}}^2 + 3\norm{\rho_{i}(\tilde{z}_{j}^{t} - z_j^t)}^2 \\ &\leq 3L_{i,j}^2\norm{x_{i,j}^t- \tilde{z}_{j}^{t}}^2 + 3\norm{y_{i,j}^t - y_{i,j}^{t-1}}^2 + 3\rho_{i}^2\norm{\tilde{z}_{j}^{t} - z_j^t}^2 \\ &\leq 3(L_{i,j}^2 + \rho_i^2)\norm{x_{i,j}^t- \tilde{z}_{j}^{t}}^2 + 3\rho_{i}^2\norm{\tilde{z}_{j}^{t} - z_j^t}^2, \end{align*} which implies that there must exist two positive constants $\sigma_1 > 0$ and $\sigma_2 > 0$ such that \begin{equation} \sum_{(i,j)\in \mathcal{E}}\norm{\nabla_{x_{i,j}} L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t)}^2 \leq \sum_{(i,j)\in \mathcal{E}} \sigma_1 \norm{x_{i,j}^t- \tilde{z}_{j}^{t}}^2 + \sum_{(i,j)\in \mathcal{E}} \sigma_2 \sum_{t'=0}^{T_{i,j}-1} \norm{z_j^{t-t'}-z_j^{t-t'-1}}^2.
\label{eq:thm-3-3} \end{equation} The last step is to estimate $\norm{x_{i,j}^t - z_j^t}$, which can be done as follows: \begin{align} \norm{x_{i,j}^t - z_{j}^t}^2 &\leq 2\norm{x_{i,j}^t - \tilde{z}_{j}^t}^2 + 2\norm{\tilde{z}_{j}^t - z_{j}^t}^2, \\ \sum_{(i,j)\in \mathcal{E}} \norm{x_{i,j}^t - z_{j}^t}^2 &\leq \sum_{(i,j)\in \mathcal{E}} 2(\norm{x_{i,j}^t - \tilde{z}_{j}^t}^2 + \norm{\tilde{z}_{j}^t - z_{j}^t}^2 ). \label{eq:thm-3-4} \end{align} Combining \eqref{eq:thm-3-2}, \eqref{eq:thm-3-3} and \eqref{eq:thm-3-4}, and summing over $t=0, \ldots, T-1$, we have \begin{align} \sum_{t=0}^{T-1} P(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t) &\leq \sum_{t=0}^{T-1} \sum_{(i,j)\in\mathcal{E}} \left( \sigma_3 \norm{x_{i,j}^t - \tilde{z}_j^t}^2 + \sigma_4 T_{i,j}\norm{z_j^{t+1} - z_j^{t}}^2 \right). \end{align} From Lemma~\ref{lem:iteration}, we have \begin{align} &\quad\quad L(\mathbf{X}^{T}, \mathbf{Y}^{T}, \mathbf{z}^{T}) - L(\mathbf{X}^0, \mathbf{Y}^0, \mathbf{z}^0) \\ &\leq - \sum_{t=0}^{T-1}\sum_{(i, j) \in \mathcal{E}}\beta_{i} (\norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \norm{x_{i,j}^{t} - {z}_j^{t+1} }^2) - \sum_{t=0}^{T-1}\sum_{(i,j)\in\mathcal{E}} \alpha_{j} \norm{z_{j}^{t+1} - z_{j}^{t}}^2 \\ &\leq - \sum_{t=0}^{T-1}\sum_{(i, j) \in \mathcal{E}} \left( \delta_1 \norm{x_{i,j}^{t+1}-\tilde{z}_j^{t+1}}^2 + \delta_2 \norm{z_{j}^{t+1} - z_{j}^{t}}^2 \right), \end{align} where $\delta_1 := \min_{i} \beta_i$ and $\delta_2 := \min_{j} \alpha_j$. Now we can find some $C > 0$ such that the following inequality holds: \begin{align*} \sum_{t=0}^{T-1} P(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t) &\leq C(L(\mathbf{X}^{0}, \mathbf{Y}^{0}, \mathbf{z}^{0}) - L(\mathbf{X}^T, \mathbf{Y}^T, \mathbf{z}^T)) \\ &\leq C(L(\mathbf{X}^{0}, \mathbf{Y}^{0}, \mathbf{z}^{0}) - \underline{f}), \end{align*} where in the last inequality we used the fact that $L(\mathbf{X}^{t}, \mathbf{Y}^{t}, \mathbf{z}^{t})$ is lower bounded by $\underline{f}$ for all $t$ from Lemma~\ref{lem:boundedbelow}.
Letting $T = T(\epsilon)$, we have \begin{equation} T(\epsilon) \leq \frac{C(L(\mathbf{X}^{0}, \mathbf{Y}^{0}, \mathbf{z}^{0}) - \underline{f})}{\epsilon}, \end{equation} which proves the last part of Theorem~\ref{thm:convergence}. \end{proof} \section{Approximate Update for Communication Efficiency} \label{sec:approx} In this section, we discuss some practical techniques to lower the communication cost. As reported in the literature, e.g., \cite{li2014communication}, network bandwidth in datacenters is often 10--100 times smaller than memory bandwidth, which limits the ability to support a large-scale cluster with more than 1,000 workers and servers. \subsection{Pushing Changes, not Variables} We first reduce the amount of data transmitted from workers to servers. In SGD-based methods, workers transmit gradients, which decay towards zero as iterations proceed. Therefore, the communication cost can be reduced when the magnitudes of the gradients are small. Unlike SGD-based methods, however, each worker in Algorithm~\ref{alg:admm-worker} transmits an aggregation of its local variable $x_{i,j}^{t+1}$ and its dual variable $y_{i,j}^{t+1}$, which change only slightly across epochs but may still have large magnitudes. Sending the changes in the local variables rather than the variables themselves also ensures that, when updating the global variables, a fast worker node does not bias the global variable $z_j$ towards its own $x_{i,j}$ when performing multiple updates within a short period of epochs. Specifically, we introduce another variable $s_j^{t+1}$ to denote the sum of all received updates from workers: \begin{equation} s_j^{t+1} := \sum_{i \in \mathcal{N}(j)} \hat{w}_{i,j}^{t+1}. \end{equation} Then, we can rewrite the update step \eqref{eq:update_z} as \begin{equation} z_j^{t+1} = \prox[\mu]{h}\left( \frac{\gamma z_j^t + s_j^{t+1}}{\gamma + \sum_{i \in \mathcal{N}(j)}\rho_{i,j}} \right).
\label{eq:update_z2} \end{equation} Let $\Delta x_{i,j}^{t+1}$ denote $x_{i,j}^{t+1}-x_{i,j}^t$ and $\Delta y_{i,j}^{t+1}$ denote $y_{i,j}^{t+1}-y_{i,j}^t$. Simple algebra then yields \begin{equation} s_j^{t+1} = s_j^{t} + \rho_{i,j} \Delta x_{i,j}^{t+1} + \Delta y_{i,j}^{t+1}. \label{eq:update_s} \end{equation} To rewrite Algorithm~\ref{alg:admm-worker}, we replace Step 7 by pushing $\rho_{i,j}\Delta x_{i,j} + \Delta y_{i,j}$ to server $j$. To rewrite Algorithm~\ref{alg:admm-server}, server $j$ receives $\rho_{i,j} \Delta x_{i,j}^{t+1} + \Delta y_{i,j}^{t+1}$, updates $s_j^{t+1}$ according to \eqref{eq:update_s}, and then updates $z_j^{t+1}$ according to \eqref{eq:update_z2}. However, we should note that this scheme may lead to divergence without proper initialization. Note that $s_j^{t+1}$ can be correctly calculated using \eqref{eq:update_s}, provided that $s_j^{0}$ equals the sum of $\rho_{i,j}x_{i,j}^0 + y_{i,j}^0$ over all $i \in \mathcal{N}(j)$. One way is to initialize all variables, including $x_{i,j}$, $y_{i,j}$, $s_j$ and $z_j$, as $0$. Another way is to set up a synchronized update during initialization. Since this is the only synchronization step in the algorithm, its time cost is tolerable, and it can also be used to test the communication links. \subsection{Significance Filters} We can further reduce the communication overhead by ensuring that the local copy $x_{i,j}$ is \emph{approximately correct}, which eliminates insignificant and unnecessary communication between workers and servers. Such a scheme is widely adopted in SGD-based distributed machine learning systems, e.g., Parameter Server \cite{li2014scaling} and Gaia \cite{hsieh17gaia}. As reported in \cite{hsieh17gaia}, nearly 95\% of updates cause less than a 1\% change in the parameter values, so the vast majority of changes are quite insignificant. \begin{thm} Suppose that Assumptions~\ref{asmp:lipschitz}--\ref{asmp:delay} hold.
Assume that we apply a significance filter on pushed values with threshold $\delta_{i,j}^{t} = O(t^{-1})$, and that the penalty parameters $\rho_{i,j}$ and $\gamma$ are chosen such that {\small \begin{align} \infty & > L(\mathbf{X}^0, \mathbf{z}^0, \mathbf{Y}^0) - \underline{f} \geq 0 \\ \alpha'_{i,j} &:= \frac{\rho_{i,j}+\gamma-\delta_{i,j}^t}{2} - \left(\frac{7L_{i,j}}{2\rho_{i,j}^2} + \frac{1}{\rho_{i,j}}\right) L_{i,j}^2(T_{i,j}+1)^2 - \frac{T_{i,j}^2}{2} \nonumber \\ & > 0, \forall t>0, \\ \beta'_{i,j} &:= \frac{\rho_{i,j}}{4} - 3L_{i,j} > 0, \quad (i,j) \in \mathcal{E}. \end{align} } Then Algorithms~\ref{alg:admm-worker}-\ref{alg:admm-server} can converge in expectation to stationary points satisfying the KKT conditions. \label{thm:approx} \end{thm} The proof of this theorem is similar to that of Theorem~\ref{thm:convergence}. The high-level idea is that the update for $\mathbf{z}^{t+1}$ is \emph{``contaminated'' by quantization noise}. In other words, if we apply a significance filter with threshold $\delta_{i,j}^t$ on $w_{i,j}^t:=\rho_{i,j}\Delta x_{i,j}^t + \Delta y_{i,j}^t$, we can regard the newly pushed value as $\tilde{w}_{i,j}^{t} = w_{i,j}^t + \epsilon_{i,j}^t$, where $\epsilon_{i,j}^t$ is a random noise with magnitude at most $\delta_{i,j}^t$. Similarly, from the strong convexity of $L(\mathbf{X}, \mathbf{z}, \mathbf{Y})$ w.r.t. $\mathbf{z}$, we have \begin{align} & L(\mathbf{X}^t, \mathbf{z}^{t+1}, \mathbf{Y}^t) - L(\mathbf{X}^t, \mathbf{z}^{t}, \mathbf{Y}^t) \\ &\leq \innerprod{\mathbf{s}+\epsilon^{t+1}, \mathbf{z}^t - \mathbf{z}^{t+1}} \nonumber \\ & - \sum_{i=1}^N\sum_{j \in \mathcal{N}(i)} \left( \frac{\rho_{i,j}}{2} \norm{z_j^{t+1} - z_j^{t}}^2 + \frac{\gamma}{2} \norm{z_j^{t+1} - z_j^{t}}^2\right)\\ &\leq \sum_{(i,j) \in \mathcal{E}} \innerprod{\epsilon_{i,j}^{t+1}, {z}_j^t - {z}_j^{t+1}}-\frac{\rho_{i,j}+\gamma}{2} \norm{z_j^{t+1} - z_j^{t}}^2, \label{eq:noisy-1} \end{align} where $\mathbf{s} \in \partial h(\mathbf{z}^{t+1})$.
Taking the expectation of the above inequality, we have \begin{eqnarray} && \mathbf{E}[L(\mathbf{X}^t, \mathbf{z}^{t+1}, \mathbf{Y}^t) - L(\mathbf{X}^t, \mathbf{z}^{t}, \mathbf{Y}^t)] \nonumber \\ &\leq& \sum_{(i,j) \in \mathcal{E}} \delta_{i,j}^{t+1}\norm{{z}_j^t - {z}_j^{t+1}}-\frac{\rho_{i,j}+\gamma}{2} \norm{z_j^{t+1} - z_j^{t}}^2 \\ &\leq& -\sum_{(i,j) \in \mathcal{E}}\frac{(\rho_{i,j}+\gamma)-\delta_{i,j}^{t+1}}{2} \norm{z_j^{t+1} - z_j^{t}}^2. \label{eq:noisy-2} \end{eqnarray} Inequalities \eqref{eq:diff1-2} and \eqref{eq:diff1-3} can be adapted similarly. Combining \eqref{eq:diff1-2}, \eqref{eq:diff1-3} and \eqref{eq:noisy-2}, we obtain a result similar to \eqref{eq:iteration_L}, which finally proves Theorem~\ref{thm:approx}. When approaching a stationary point, the changes $\Delta x_{i,j}^{t}$ will vanish. Therefore, as in SGD- and SPGD-based methods, inexact changes obtained from a delayed and inexact global model can be a good approximation of the true one. One difference here is that the threshold $\delta_{i,j}$ can be set as a constant, while in SGD- or SPGD-based methods, the threshold should decrease to ensure convergence. For example, in Gaia, the threshold is decreased as $O(1/\sqrt{t})$ \cite{hsieh17gaia}. \section{Concluding Remarks} \label{sec:conclude} In this paper, we propose a block-wise, asynchronous and distributed ADMM algorithm to solve general non-convex and non-smooth optimization problems in machine learning. Under the bounded delay assumption, we have shown that our proposed algorithm can converge to stationary points satisfying the KKT conditions. The block-wise updating nature of our algorithm makes it amenable to implementation on the Parameter Server architecture, taking advantage of the ability to update different blocks of all model parameters in parallel on distributed servers.
Experimental results based on a real-world dataset have demonstrated the convergence and near-linear speedup of the proposed ADMM algorithm, for training large-scale sparse logistic regression models in Amazon EC2 clusters. \section{Preliminaries} \label{sec:prelim} \subsection{Consensus Optimization and ADMM} \label{sec:basicADMM} The minimization in \eqref{eq:original} can be reformulated into a \emph{global variable consensus optimization} problem \cite{boyd2011distributed}: \begin{subequations} \begin{align} \mathop{\min}_{\mathbf{z}, \{\mathbf{x}_i\} \in \mathcal{X}} &\quad \sum_{i=1}^{N} f_i(\mathbf{x}_i) + h(\mathbf{z}), \label{eq:consensus-1}\\ \mathrm{s.t.} &\quad \mathbf{x}_i = \mathbf{z}, \quad \forall i = 1,\ldots,N, \label{eq:consensus-2} \end{align} \label{eq:consensus} \end{subequations} where $\mathbf{z}$ is often called the \emph{global consensus variable}, traditionally stored on a master node, and $\mathbf{x}_i$ is its local copy updated and stored on one of $N$ worker nodes. The function $h$ is decomposable. It has been shown \cite{boyd2011distributed} that such a problem can be efficiently solved using distributed (synchronous) ADMM. In particular, let $\mathbf{y}_i$ denote the Lagrange dual variable associated with each constraint in \eqref{eq:consensus-2} and define the Augmented Lagrangian as \begin{equation} \begin{split} L(\mathbf{X}, \mathbf{Y}, \mathbf{z}) &= \sum_{i=1}^{N} f_i(\mathbf{x}_i) + h(\mathbf{z}) + \sum_{i=1}^{N} \innerprod{\mathbf{y}_{i}, \mathbf{x}_{i} - \mathbf{z}} \\ &\quad + \sum_{i=1}^{N} \frac{\rho_{i}}{2} \norm{\mathbf{x}_{i} - \mathbf{z} }^2, \end{split} \label{eq:lagrangian_1} \end{equation} where $\mathbf{X}:=(\mathbf{x}_1, \ldots, \mathbf{x}_N)$ represents a juxtaposed matrix of all $\mathbf x_i$ , and $\mathbf{Y}$ represents the juxtaposed matrix of all $\mathbf{y}_i$. 
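For concreteness, the augmented Lagrangian \eqref{eq:lagrangian_1} can be evaluated numerically as in the following minimal NumPy sketch. The quadratic local losses $f_i$ and the $\ell_1$ regularizer $h$ used here are hypothetical stand-ins chosen purely for illustration.

```python
import numpy as np

def augmented_lagrangian(X, Y, z, rho, f_list, h):
    """L(X, Y, z) = sum_i f_i(x_i) + h(z)
    + sum_i <y_i, x_i - z> + sum_i (rho_i / 2) ||x_i - z||^2."""
    total = h(z)
    for x_i, y_i, rho_i, f_i in zip(X, Y, rho, f_list):
        r = x_i - z  # consensus residual of worker i
        total += f_i(x_i) + y_i @ r + 0.5 * rho_i * (r @ r)
    return total

# Hypothetical ingredients: quadratic local losses and an l1 regularizer.
f_list = [lambda x, a=a: 0.5 * np.sum((x - a) ** 2) for a in (1.0, -1.0)]
h = lambda z: 0.1 * np.sum(np.abs(z))
X = [np.zeros(3), np.zeros(3)]     # local copies x_i
Y = [np.zeros(3), np.zeros(3)]     # dual variables y_i
z = np.zeros(3)                    # global consensus variable
print(augmented_lagrangian(X, Y, z, [1.0, 1.0], f_list, h))  # 3.0
```

At a consensus point ($x_i = z$ for all $i$), the inner-product and penalty terms vanish and the value reduces to the original objective, which is exactly the role these terms play in the updates that follow.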
We have, for (synchronized) rounds $t=0,1,\ldots$, the following variable updating equations: {\small \begin{align*} \mathbf{x}_i^{t+1} &= \mathop{\arg\min}_{\mathbf{x}_i \in \mathcal{X}} f_i(\mathbf{x}_i) + \innerprod{\mathbf{y}_{i}^t, \mathbf{x}_{i} - \mathbf{z}^{t}} + \frac{\rho_{i}}{2} \norm{\mathbf{x}_{i} - \mathbf{z}^{t} }^2,\\ \mathbf{y}_i^{t+1} &= \mathbf{y}_i^{t} + \rho_i (\mathbf{x}_i^{t+1} - \mathbf{z}^{t}), \\ \mathbf{z}^{t+1} &= \mathop{\arg\min}_{\mathbf{z} \in \mathcal{X}} h(\mathbf{z}) + \sum_{i=1}^{N} \innerprod{\mathbf{y}_{i}^{t+1}, \mathbf{x}_{i}^{t+1} - \mathbf{z}} + \sum_{i=1}^{N} \frac{\rho_{i}}{2} \norm{\mathbf{x}^{t+1}_{i} - \mathbf{z} }^2. \end{align*} } \subsection{General Form Consensus Optimization} Many machine learning problems involve highly sparse models, in the sense that each local dataset on a worker is only associated with a few model parameters, i.e., each $f_i$ only depends on a subset of the elements in $\mathbf{x}$. The global consensus optimization problem in \eqref{eq:consensus}, however, ignores such sparsity, since in each round each worker $i$ must push the entire vectors $\mathbf{x}_i$ and $\mathbf{y}_i$ to the master node to update $\mathbf z$. \red{In fact, this is the setting of all recent work on asynchronous distributed ADMM, e.g., \cite{zhang2014asynchronous}}. In this case, when multiple workers attempt to update the global consensus variable $\mathbf z$ at the same time, $\mathbf z$ must be locked to ensure atomic updates, which leads to diminishing efficiency as the number of workers $N$ increases. To better exploit model sparsity in practice for further parallelization opportunities between workers, we consider the \emph{general form consensus optimization} problem \cite{boyd2011distributed}. Specifically, with $N$ worker nodes and $M$ server nodes, the vectors $\mathbf{x}_i$, $\mathbf{y}_i$ and $\mathbf{z}$ can all be decomposed into $M$ blocks.
Let $z_j$ denote the $j$-th block of the global consensus variable $\mathbf{z}$, located on server $j$, for $j=1,\ldots,M$. Similarly, let $x_{i,j}$ ($y_{i,j}$) denote the corresponding $j$-th block of the local variable $\mathbf{x}_i$ ($\mathbf{y}_i$) on worker $i$. Let $\mathcal E$ be all the $(i,j)$ pairs such that $f_i$ depends on the block $x_{i,j}$ (and correspondingly depends on $z_j$). Furthermore, let $\mathcal N(j) = \{i|(i,j)\in \mathcal E\}$ denote the set of all the neighboring workers of server $j$. Similarly, let $\mathcal N(i) = \{j|(i,j)\in \mathcal E\}$. Then, the \emph{general form consensus problem} \cite{boyd2011distributed} is described as follows: \begin{equation} \begin{split} \mathop{\min}_{z_j, \{x_{i,j}\}} & \quad \sum_{i=1}^{N} f_i(\{x_{i,j}\}_{j=1}^M) + h(\mathbf{z}), \\ \mathrm{s.t.} &\quad x_{i,j} = z_j, \quad \forall (i,j) \in \mathcal{E}, \\ &\quad x_{i,j}, z_j \in \mathcal{X}_j. \end{split} \label{eq:general-consensus} \end{equation} In fact, in $f_i(\{x_{i,j}\}_{j=1}^M)$, a block $x_{i,j}$ will only be relevant if $(i,j)\in \mathcal E$, and will be a dummy variable otherwise, whose value does not matter. Yet, since the sparse dependencies of $f_i$ on the blocks $j$ can be captured through the specific form of $f_i$, here we have included all $M$ blocks in each $f_i$'s arguments just to simplify the notation. The structure of problem \eqref{eq:general-consensus} can effectively capture the sparsity inherent to many practical machine learning problems. Since each $f_i$ only depends on a few blocks, the formulation in \eqref{eq:general-consensus} essentially reduces the number of decision variables---it does not matter what value $x_{i,j}$ will take for any $(i,j)\notin\mathcal E$. \red{For example, when training a topic model for documents, the feature of each document is represented as a bag of words, and hence only a subset of all words in the vocabulary will be active in each document's feature. 
In this case, the constraint $x_{i,j}=z_j$ only accounts for those words $j$ that appear in the document $i$, and therefore only those words $j$ that appeared in document $i$ should be optimized.} Like \eqref{eq:lagrangian_1}, we also define the Augmented Lagrangian $L(\mathbf{X}, \mathbf{Y}, \mathbf{z})$\footnote{To simplify notations, we still use $\mathbf{X}$ and $\mathbf{Y}$ as previously defined, but entries $(i,j) \notin \mathcal{E}$ are not taken into account.} as follows: {\small \begin{align*} L(\mathbf{X}, \mathbf{Y}, \mathbf{z}) &= \sum_{i=1}^{N} f_i(\{x_{i,j}\}_{j=1}^M) + h(\mathbf{z}) + \sum_{(i,j) \in \mathcal{E}} \innerprod{{y}_{i,j}, {x}_{i,j} - {z}_j} \nonumber \\ &\quad + \sum_{(i,j) \in \mathcal{E}} \frac{\rho_{i}}{2} \norm{{x}_{i,j} - {z}_j }^2. \label{eq:lagrangian_2} \end{align*} } The formulation in \eqref{eq:general-consensus} perfectly aligns with the latest \emph{Parameter Server} architecture as shown in Fig.~\ref{fig:ps}. Here we can let each server node maintain one model block $z_j$, such that worker $i$ updates $z_j$ if and only if $(i, j)\in \mathcal{E}$. Since all three vectors $\mathbf x_i$, $\mathbf y_i$ and $\mathbf z$ in \eqref{eq:general-consensus} are decomposable into blocks, to achieve a higher efficiency, we will investigate block-wise algorithms which not only enable different workers to send their updates asynchronously to the server (like prior work on asynchronous ADMM does), but also enable different model blocks $\mathbf z_j$ to be updated in parallel and asynchronously on different servers, removing the locking or atomicity assumption required for updating the entire $\mathbf z$. \section{Introduction} \label{sec:intro} The need to scale up machine learning in the presence of sheer volume of data has spurred recent interest in developing efficient distributed optimization algorithms. 
Distributed machine learning jobs often involve solving a non-convex, decomposable, and regularized optimization problem of the following form: \begin{equation} \begin{split} \mathop{\min}_{\mathbf{x}} &\quad \sum_{i=1}^N f_i(x_1,\ldots,x_{M}) + \sum_{j=1}^M h_j(x_j), \\ \mathrm{s.t.}&\quad x_j \in \mathcal{X}_j, j=1,\ldots,M \end{split} \label{eq:original} \end{equation} where each $f_i:\mathcal{X} \to \real{}$ is a smooth but possibly \emph{non-convex} function, fitting the model $\mathbf{x} :=(x_1,\ldots,x_M)$ to local training data available on node $i$; each $\mathcal{X}_j$ is a closed, convex, and compact set; and the regularizer $h(\mathbf{x}):=\sum_{j=1}^M h_j(x_j)$ is a separable, convex but possibly \emph{non-smooth} regularization term to prevent overfitting. Example problems of this type can be found in deep learning with regularization \cite{dean2012large,chen2015mxnet}, robust matrix completion \cite{recht2011hogwild}, LASSO \cite{tibshirani2005sparsity}, sparse logistic regression \cite{liu2009large}, and sparse support vector machine (SVM) \cite{friedman2001elements}. To date, a number of efficient asynchronous and distributed stochastic gradient descent (SGD) algorithms, e.g., \cite{recht2011hogwild,lian2015asynchronous,li2014scaling}, have been proposed, in which each worker node asynchronously updates its local model or gradients based on its local dataset, and sends them to the server(s) for model updates or aggregation. Yet, SGD is not particularly suitable for solving optimization problems with non-smooth objectives or with constraints, which are prevalent in practical machine learning adopting regularization, e.g., \cite{liu2009large}. 
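To see why proximal-style methods such as ADMM handle the non-smooth terms above gracefully, note that the proximal operator of the $\ell_1$ norm has a simple closed form (soft-thresholding), and a box constraint reduces to clipping. The following is a minimal sketch; the regularization weight and test vectors are illustrative.

```python
import numpy as np

def prox_l1(v, lam):
    """prox_{lam ||.||_1}(v) = argmin_x lam*||x||_1 + 0.5*||x - v||^2,
    i.e., elementwise soft-thresholding -- a closed form despite the
    non-smoothness of the l1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def project_box(v, C):
    """Euclidean projection onto the constraint set {x : ||x||_inf <= C}."""
    return np.clip(v, -C, C)

v = np.array([1.5, -0.2, 0.7])
print(prox_l1(v, 0.5))       # small entries are zeroed, large ones shrunk
print(project_box(v, 1.0))   # entries clipped into [-1, 1]
```

This is the reason a subgradient-free closed-form $z$-update is possible for separable regularizers $h$, whereas plain SGD would have to approximate the kink at zero.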
Distributed (synchronous) ADMM \cite{boyd2011distributed,zhang2014asynchronous,chang2016asynchronous1,chang2016asynchronous2,hong2017distributed,wei20131,mota2013d,taylor2016training} has been widely studied as an alternative method, which avoids the common pitfalls of SGD for highly non-convex problems, such as saturation effects, poor conditioning, and saddle points \cite{taylor2016training}. The original idea on distributed ADMM can be found in \cite{boyd2011distributed}, which is essentially a synchronous algorithm. In this work, we focus on studying the asynchronous distributed alternating direction method of multipliers (ADMM) for non-convex non-smooth optimization. Asynchronous distributed ADMM has been actively discussed in recent literature. Zhang and Kwok \shortcite{zhang2014asynchronous} consider an asynchronous ADMM assuming bounded delay, which enables each worker node to update a local copy of the model parameters asynchronously without waiting for other workers to complete their work, while a single server is responsible for driving the local copies of model parameters to approach the global \emph{consensus variables}. They provide proof of convergence for convex objective functions only. Wei and Ozdaglar \shortcite{wei20131} assume that communication links between nodes can fail randomly, and propose an ADMM scheme that converges almost surely to a saddle point. Chang \emph{et al.} \shortcite{chang2016asynchronous1,chang2016asynchronous2} propose an asynchronous ADMM algorithm with analysis for non-convex objective functions. However, their work requires each worker to solve a subproblem \emph{exactly}, which is often costly in practice. Hong \shortcite{hong2017distributed} proposes another asynchronous ADMM algorithm, where each worker only computes the gradients based on local data, while all model parameter updates happen at a single server, a possible bottleneck in large clusters. 
To our knowledge, all existing work on asynchronous distributed ADMM requires locking the global consensus variables at the (single) server for each model update, although asynchrony is allowed among workers, i.e., workers are allowed to be at different iterations of model updating. Such atomic or memory-locking operations essentially serialize model updates contributed by different workers, which may seriously limit the algorithm's scalability. In many practical problems, not all workers need to access all model parameters. For example, in recommender systems, a local dataset of user-item interactions is only associated with a specific set of users (and items), and therefore does not need to access the latent variables of other users (or items). In text categorization, each document usually consists of a subset of words or terms in the corpus, and each worker only needs to deal with the words in its own local corpus. \begin{figure} \centering \includegraphics[width=3in]{figures/parameter_server.pdf} \caption{Data flow of Parameter Servers. ``PS'' represents a parameter server task and ``Worker'' represents a worker task.} \label{fig:ps} \end{figure} A distributed machine learning architecture is illustrated in Fig.~\ref{fig:ps}. There are multiple server nodes, each known as a ``PS,'' which stores a subset (block) of the model parameters (consensus variables) $\mathbf{z}$. There are also multiple worker nodes; each worker owns a local dataset, and has a loss function $f_i$ depending on one or several blocks of model parameters, but not necessarily all of them. If there is only one server node, the architecture in Fig.~\ref{fig:ps} degenerates to a ``star'' topology with a single master, which has been adopted by Spark \cite{zaharia2010spark}.
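To make this sparsity pattern concrete, each worker can derive the set of parameter blocks it depends on directly from the feature indices present in its local data. The following is a minimal sketch; the toy feature sets and the fixed-size block partition are illustrative assumptions, not part of the system design.

```python
from collections import defaultdict

# Feature indices present in each worker's local sparse dataset
# (toy data; in practice these come from the worker's samples).
local_features = {0: {0, 1, 7}, 1: {6, 7}, 2: {2, 3}}
BLOCK = 4  # features per block: server j hosts features [4j, 4j+4)

def blocks_of(features, block=BLOCK):
    """The parameter blocks worker i must pull and push."""
    return sorted({f // block for f in features})

# Worker -> blocks it touches, and the reverse map block -> workers.
worker_blocks = {i: blocks_of(fs) for i, fs in local_features.items()}
block_workers = defaultdict(set)
for i, js in worker_blocks.items():
    for j in js:
        block_workers[j].add(i)

print(worker_blocks)        # {0: [0, 1], 1: [1], 2: [0]}
print(dict(block_workers))  # {0: {0, 2}, 1: {0, 1}}
```

The two maps computed here correspond to the neighbor sets used later in the paper: the blocks a worker must communicate about, and the workers a server must hear from before its block is fully informed.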
With multiple servers, the system is also called a \emph{Parameter Server} architecture \cite{dean2012large,li2014scaling} and has been adopted by many large-scale machine learning systems, including TensorFlow \cite{abadi2016tensorflow} and MXNet \cite{chen2015mxnet}. It is worth noting that enabling \emph{block-wise} updates in ADMM is critical for training large models, such as sparse logistic regression, robust matrix completion, etc., since not all worker nodes will need to work on all model parameters --- each worker only needs to work on the blocks of parameters pertaining to its local dataset. For these reasons, block-wise updates have been extensively studied for a number of gradient-type distributed optimization algorithms, including SGD \cite{lian2015asynchronous}, proximal gradient descent \cite{li2014communication}, block or stochastic coordinate descent (BCD or SCD) \cite{liu2015asynchronous}, as well as for a recently proposed block successive upper bound minimization method (BSUM) \cite{hong2016unified}. In this work, we propose the first \emph{block-wise} asynchronous distributed ADMM algorithm that can increase efficiency over existing single-server ADMM algorithms, by better exploiting the parallelization opportunity in model parameter updates. Specifically, we introduce the \emph{general form consensus optimization} problem \cite{boyd2011distributed}, and solve it in a \emph{block-wise} asynchronous fashion, thus making ADMM amenable to implementation on Parameter Server, with multiple servers hosting model parameters. In our algorithm, each worker only needs to work on one or multiple blocks of parameters that are relevant to its local data, while different blocks of model parameters can be updated in parallel asynchronously subject to a bounded delay. Since this scheme does not require locking all the decision variables together, it belongs to the set of \emph{lock-free} optimization algorithms (e.g., HOGWILD!
\cite{recht2011hogwild} as a lock-free version of SGD) in the literature. Our scheme is also useful on shared-memory systems, such as a single machine with multiple cores or GPUs, where enforcing atomicity on all the consensus variables is inefficient. Theoretically, we prove that, for general \emph{non-convex} objective functions, our scheme can converge to stationary points. Experimental results on a cluster of 36 CPU cores have demonstrated the convergence and near-linear speedup of the proposed ADMM algorithm, for training sparse logistic regression models based on a large real-world dataset. \section{Related Work} \label{sec:related} Distributed ADMM has been widely discussed in the literature. The original distributed ADMM can be found in \cite{boyd2011distributed}, which is a synchronous algorithm with full updates. In this algorithm, Eq.~\eqref{eq:admm-basic-1} is performed on the master node and Eqs.~\eqref{eq:admm-basic-2}--\eqref{eq:admm-basic-3} are performed on each worker node $i$. It can be shown to converge to a Karush-Kuhn-Tucker (KKT) point for both convex and non-convex problems \cite{hong2016convergence}. Extending consensus ADMM to the asynchronous setting has been discussed in recent years. In \cite{zhang2014asynchronous}, the authors consider an asynchronous consensus ADMM under a bounded delay assumption, but only convex cases are analyzed. References \cite{chang2016asynchronous1,chang2016asynchronous2} propose an asynchronous ADMM algorithm with analysis of both convex and non-convex cases. However, they require each worker to solve a subproblem exactly, which might be time-consuming in some cases. Reference \cite{hong2017distributed} proposes another asynchronous algorithm in which each worker only calculates gradients, while the updates of all $\mathbf{x}_i$, $\mathbf{y}_i$ and $\mathbf{z}$ are done on the server side, which can consume a lot of memory in a large cluster.
References \cite{wei20131,mota2013d} consider asynchronous ADMM for decentralized networks. In particular, reference \cite{wei20131} assumes that communication links between nodes can fail randomly and provides convergence analysis in a probability-one sense. To the best of our knowledge, existing asynchronous ADMM algorithms do not support block-wise updates, and this paper makes the first attempt to study block-wise updates in asynchronous settings. Block-wise updates have been considered for stochastic gradient descent \cite{lian2015asynchronous}, proximal gradient descent \cite{li2014communication}, block or stochastic coordinate descent (BCD or SCD) \cite{liu2015asynchronous}, and the recently proposed block successive upper bound minimization method (BSUM) \cite{razaviyayn2013unified}. SGD-type algorithms can suffer from slow convergence, while BCD-type algorithms prefer datasets partitioned by features, which limits their applicability. \section{Experiments} \label{sec:simu} We now show how our algorithm can be used to solve challenging non-convex non-smooth problems in machine learning, and how AsyBADMM exhibits a near-linear speedup as the number of workers increases. We use a cluster of 18 instances of type \texttt{c4.large} on Amazon EC2. Each instance of this type has 2 CPU cores and at least 3.75 GB RAM, running 64-bit Ubuntu 16.04 LTS (HVM). Each server and worker process uses up to 2 cores. In total, our deployment uses 36 CPU cores and 67.5 GB RAM. Two machines serve as server nodes, while the other 16 machines serve as worker nodes. Note that we treat one core as a computational node (either a worker or server node).
\textbf{Setup:} In this experiment, we consider the sparse logistic regression problem: \begin{equation} \begin{split} \min_{\mathbf{x}} &\quad \frac{1}{m}\sum_{l=1}^{m} \log(1+\exp(-\tilde{y}_l \innerprod{\tilde{\mathbf{x}}_l, \mathbf{x}})) + \lambda \norm{\mathbf{x}}_1\\ \mathrm{s.t.} &\quad \norm{\mathbf{x}}_\infty \leq C, \end{split} \end{equation} where the constant $C$ is used to clip out extremely large values for robustness. The $\ell_1$-regularized logistic regression is one of the most popular models used for large-scale risk minimization. We consider a public sparse text dataset, \texttt{KDDa}\footnote{\url{http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}}. This dataset has more than 8 million samples, 20 million features, and 305 million nonzero entries. To show the advantage of parallelism, we set up five experiments with 1, 4, 8, 16 and 32 nodes, respectively. In each experiment, the whole dataset is evenly split into smaller parts, and each node only has access to its local dataset. We implement our algorithm on the \texttt{ps-lite} framework \cite{li2014scaling}, a lightweight implementation of the Parameter Server architecture. It supports Parameter Server for multiple devices in a single machine and for multiple machines in a cluster, and is the back end of the \texttt{kvstore} API of the deep learning framework MXNet \cite{chen2015mxnet}. Each worker updates the blocks by cycling through the coordinates of $\mathbf{x}$ and updating each in turn, restarting at a random coordinate after each cycle. \textbf{Results:} Empirically, Assumption~\ref{asmp:delay} is observed to hold for this cluster. We set the hyper-parameter $\gamma=0.01$, the clip threshold constant $C=10^4$, and the penalty parameter $\rho_{i,j}=100$ for all $(i,j)$. Fig.~\ref{fig:obj1} and Fig.~\ref{fig:obj2} show the convergence behavior of our proposed algorithm in terms of objective function values.
From the figures, we can clearly observe the convergence of our proposed algorithm. This observation confirms that asynchrony with tolerable delay can still lead to convergence. To further analyze the parallelism in AsyBADMM, we measure the speedup by the relative time for $p$ workers to perform $k$ iterations, i.e., Speedup of $p$ workers = $\frac{T_k(1)}{T_k(p)}$, where $T_k(p)$ is the time it takes for $p$ workers to perform $k$ iterations of optimization. Fig.~\ref{fig:obj2} illustrates the running time comparison and Table~\ref{table:time_vs_iter} shows that AsyBADMM actually achieves near-linear speedup. \begin{table}[] \centering \caption{Running time (in seconds) for iterations $k$ and worker count.} \label{table:time_vs_iter} \begin{tabular}{ccccc} \hline Workers $p$ & $k=20$ & $k=50$ & $k=100$ & Speedup \\ \hline 1 & 1404 & 3688 & 6802 & 1.0 \\ 4 & 363 & 952 & 1758 & 3.87 \\ 8 & 177 & 466 & 859 & 7.92 \\ 16 & 86 & 226 & 417 & 16.31 \\ 32 & 47 & 124 & 228 & 29.83 \\ \hline \end{tabular} \end{table} \section{Convergence Analysis} \label{sec:theory} \begin{figure*}[t] \centering \subfigure[iteration vs. objective]{ \includegraphics[width=3.0in]{figures/obj_vs_iter.pdf} \label{fig:obj1} } \vspace{1mm} \subfigure[time vs. objective]{ \includegraphics[width=3.0in]{figures/obj_vs_time.pdf} \label{fig:obj2} } \vspace{-3mm} \caption{Convergence of AsyBADMM on the sparse logistic regression problem.} \vspace{-2mm} \label{fig:simu_obj} \end{figure*} In this section, we provide convergence analysis of our algorithm under certain standard assumptions: \begin{asmp}[Block Lipschitz Continuity] For all $(i, j)\in\mathcal{E}$, there exists a positive constant $L_{i,j} > 0$ such that \begin{displaymath} \norm{\nabla_j f_i(\mathbf{x}) - \nabla_j f_i(\mathbf{z})} \leq L_{i,j} \norm{x_j - z_j}, \forall \mathbf{x}, \mathbf{z} \in\real{d}. 
\end{displaymath} \vspace{-3mm} \label{asmp:lipschitz} \end{asmp} \begin{asmp}[Bounded from Below] Each function $f_i(\mathbf{x})$ is bounded below, i.e., there exists a finite number $\underline{f} > -\infty$, where $\underline{f}$ denotes the optimal objective value of problem \eqref{eq:general-consensus}. \label{asmp:below} \end{asmp} \begin{asmp}[Bounded Delay] The total delay of each link $(i,j)\in \mathcal{E}$ is bounded by a constant $T_{i,j}$ for each pair of worker $i$ and server $j$. Formally, there is an integer $0 \leq \tau \leq T_{i,j}$ such that $\tilde{z}_j^t = z_j^{t-\tau}$ for all $t>0$; the same holds for $\tilde{w}_{i,j}$. \label{asmp:delay} \end{asmp} To characterize the convergence behavior, a commonly used metric is the squared norm of the gradient. Due to the potential nonsmoothness of $h(\cdot)$, Hong \emph{et al.} propose a metric \shortcite{hong2016convergence} that combines the gradient mapping with the vanilla gradient, as follows: \begin{align} P(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t) :=& \norm{\mathbf{z}^t - \hat{\mathbf{z}}^{t}}^2 + \sum_{(i,j)\in\mathcal{E}} \norm{\nabla_{x_{i,j}} L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t)}^2 \nonumber \\ & + \sum_{(i,j)\in\mathcal{E}} \norm{x_{i,j} - z_j}^2, \label{eq:grad_norm} \end{align} where $\hat{\mathbf{z}}^t=(\hat{z}_1^t, \ldots, \hat{z}_M^t)$ is defined as \begin{equation} \hat{z}_j^t := \prox{h}(z_j^t - \nabla_{z_j}(L(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t)-h(\mathbf{z}^t))). \end{equation} It is clear that if $P(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t)\to 0$, then we obtain a stationary solution of \eqref{eq:original}. The following Theorem~\ref{thm:convergence} indicates that Algorithm~\ref{alg:Asybadmm} converges to a stationary point satisfying the KKT conditions under suitable choices of hyper-parameters. \begin{thm} Suppose that Assumptions~\ref{asmp:lipschitz}-\ref{asmp:delay} hold.
Moreover, for all $i$ and $j$, the penalty parameter $\rho_{i}$ and $\gamma$ are chosen to be sufficiently large such that: {\footnotesize \begin{align} \infty & > L(\mathbf{X}^0, \mathbf{Y}^0, \mathbf{z}^0) - \underline{f} \geq 0 \\ & \alpha_{j} := \gamma + \rho_i -\sum_{i\in \mathcal{N}(j)}\left(\frac{1}{2} + \frac{1}{\rho_{i}}\right) L_{i,j}^2(T_{i,j}+1)^2 \nonumber\\ &\quad\quad\quad - \sum_{i\in \mathcal{N}(j)}\frac{(4L_{i,j}+\rho_{i}+1)T_{i,j}^2}{2} >0, \\ & \beta_{i} := \frac{\rho_{i}- 4\max_{j \in \mathcal{N}(i)} L_{i,j}}{2|\mathcal{N}(i)|} > 0. \end{align} } Then the following is true for Algorithm~\ref{alg:Asybadmm}: \begin{enumerate} \item Algorithm~\ref{alg:Asybadmm} converges in the following sense: \begin{subequations} \begin{align} \lim_{t\to \infty} \norm{z_j^{t+1} - z_j^{t}} = 0, &\quad \forall j=1,\ldots,M, \label{eq:lim_z}\\ \lim_{t\to \infty} \norm{x_{i,j}^{t+1} - x_{i,j}^{t}} = 0, &\quad \forall (i, j) \in \mathcal{E}, \label{eq:lim_x} \\ \lim_{t\to \infty} \norm{y_{i,j}^{t+1} - y_{i,j}^{t}} = 0, &\quad \forall (i, j) \in \mathcal{E}. \label{eq:lim_y} \end{align} \end{subequations} \item For each worker $i$ and server $j$, denote the limit points of $\{x_{i,j}^t\}, \{y_{i,j}^t\}$, and $\{z_{j}^t\}$ by $x_{i,j}^*, y_{i,j}^*$ and $z_{j}^*$, respectively. Then these limit points satisfy KKT conditions, i.e., we have \begin{subequations} \begin{align} \nabla_j f_i(\mathbf{x}_i^*) + {y}_{i,j}^* = 0, &\quad \forall (i,j) \in \mathcal{E}, \label{eq:kkt-1}\\ \sum_{j \in \mathcal{N}(i)} y_{i,j}^* \in \partial h_j(z_j^*) , &\quad \forall j=1,\ldots,M, \label{eq:kkt-2}\\ x_{i,j}^{*} = z_j^* \in \mathcal{X}_j, &\quad \forall (i, j)\in \mathcal{E}. \label{eq:kkt-3} \end{align} \end{subequations} When sets $\mathcal{X}_j$ are compact, the sequence of iterates generated by Algorithm~\ref{alg:Asybadmm} converges to stationary points. 
\item For some $\epsilon > 0$, let $T(\epsilon)$ denote the epoch that achieves the following: \[ T(\epsilon) = \min \{ t | P(\mathbf{X}^t, \mathbf{Y}^t, \mathbf{z}^t) \leq \epsilon, t \geq 0 \}. \] Then there exists some constant $C > 0$ such that \begin{equation} T(\epsilon) \leq \frac{C(L(\mathbf{X}^0, \mathbf{Y}^0, \mathbf{z}^0) - \underline{f})}{\epsilon}, \end{equation} where $\underline{f}$ is defined in Assumption~\ref{asmp:below}. \end{enumerate} \label{thm:convergence} \end{thm} Due to the non-convex objective function $f_i(\mathbf{x}_i)$, no guarantee of global optimality is possible in general. The parameter $\rho_{i}$ acts like the learning rate hyper-parameter in gradient descent: a large $\rho_{i}$ slows down the convergence and a smaller one can speed it up. The term $\gamma$ is associated with the delay bound $T_{i,j}$. In the synchronous case, we can set $\gamma=0$; otherwise, to guarantee convergence, $\gamma$ should be increased as the maximum allowable delay $T_{i,j}$ increases.
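To make the consensus updates analyzed above concrete, the following is a minimal synchronous NumPy sketch of gradient-type consensus ADMM for the $\ell_1$-regularized, box-constrained logistic regression of our experiments. It is an illustrative sketch only, not the asynchronous Algorithm~\ref{alg:Asybadmm}: the single-gradient-step $x$-update, the step size \texttt{eta}, and all function names are assumptions of the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (coordinate-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_grad(A, b, x):
    # Gradient of (1/m) * sum_l log(1 + exp(-b_l <a_l, x>)).
    m = A.shape[0]
    p = 1.0 / (1.0 + np.exp(b * (A @ x)))   # sigmoid(-b_l <a_l, x>)
    return -(A * (b * p)[:, None]).sum(axis=0) / m

def consensus_admm(parts, lam, rho, C, iters=500, eta=0.1):
    # parts: list of (A_i, b_i) local datasets; z is the shared consensus copy.
    d = parts[0][0].shape[1]
    N = len(parts)
    X = [np.zeros(d) for _ in parts]        # local primal copies x_i
    Y = [np.zeros(d) for _ in parts]        # dual variables y_i
    z = np.zeros(d)
    for _ in range(iters):
        for i, (A, b) in enumerate(parts):
            # Gradient-type x-update on the augmented Lagrangian.
            g = logistic_grad(A, b, X[i]) + Y[i] + rho * (X[i] - z)
            X[i] = X[i] - eta * g
            Y[i] = Y[i] + rho * (X[i] - z)  # dual ascent step
        # z-update: prox of the l1 term, then projection onto |z_j| <= C.
        v = sum(X[i] + Y[i] / rho for i in range(N)) / N
        z = np.clip(soft_threshold(v, lam / (N * rho)), -C, C)
    return z
```

The $z$-update follows from minimizing $\lambda\norm{z}_1 + \sum_i \big(y_i^\top(x_i - z) + \tfrac{\rho}{2}\norm{x_i - z}^2\big)$ in closed form, which gives a soft-threshold of the average of $x_i + y_i/\rho$.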
\subsubsection*{Acknowledgments} JRG, GP, and KQW are supported in part by grants from the National Science Foundation (III-1525919, IIS-1550179, IIS-1618134, S\&AS 1724282, and CCF-1740822), the Office of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation. AGW is supported by NSF IIS-1563887. \bibliographystyle{apalike} \section{Method} In this paper, we will use the running example of personalized forecasting predictions when formulating the problem and our methods. Assume that we would like to make forecasting predictions for ${\ensuremath{s}}$ people. In healthcare applications, we might have access to previous patient measurements, and we would like to predict these measurements at future times. Each person $p \in 1, \ldots, {\ensuremath{s}}$ is represented by a collection of $T_p$ data points $\{ \ensuremath{\mathbf{x}}_p, \ensuremath{\mathbf{y}}_p \}$, where $\ensuremath{\mathbf{y}}_p = y^{(1)}_p, \ldots, y^{(T_p)}_p$ corresponds to previously-recorded measurements and $\ensuremath{\mathbf{x}}_p = x^{(1)}_p, \ldots, x^{(T_p)}_p$ corresponds to the times measurements were collected. \gp{Discuss other time-dependent features?} We do not assume that measurements were collected at evenly-spaced intervals, nor do we assume that measurements were collected at the same time for different patients, i.e. for people $p$ and $p'$, $\ensuremath{\mathbf{x}}_p \neq \ensuremath{\mathbf{x}}_{p'}$ and $T_p \neq T_{p'}$. Given $\ensuremath{\mathcal D} = \{ (\ensuremath{\mathbf{x}}_p, \ensuremath{\mathbf{y}}_p) \}_{p=1}^{\ensuremath{s}}$, our goal is to predict future measurement values $\ensuremath{y^*}_p(\ensuremath{x^*})$ for person $p$ at a future time $\ensuremath{x^*} > \ensuremath{\mathbf{x}}_p$. In other words, we would like to learn the predictive distribution $\Pr{ \ensuremath{y^*}_p(\ensuremath{x^*}) \mid \ensuremath{x^*}, p, \ensuremath{\mathcal D} }$ \agw{$p(\cdot)$ rather than Pr. It's a density not a probability. 
Possibly you don't want to overlap with $p$ for person, but perhaps use a different letter for that... such as `s' for subject}. We can formulate this task \agw{formulate this objective... to avoid overloading the word task} as a multi-task regression problem. At each future time $\ensuremath{x^*}$, we can consider the {\ensuremath{s}}{} predictive tasks $\ensuremath{y^*}_1(\ensuremath{x^*}), \ldots, \ensuremath{y^*}_{\ensuremath{s}}(\ensuremath{x^*})$ -- one predictive task per person. \subsection{Multi-task Gaussian processes} Multi-task Gaussian process models are a natural choice for this type of forecasting problem. A standard Gaussian process (GP) model would assume a Gaussian process prior over the measurements for a person $p$. In other words, $y_p(t) \sim \GP{\mu}{k_\text{feat}}$, where $\mu$ is a mean function (typically zero) and $k_\text{feat}$ is a function that models the covariance between two measurements based on the inputs: $$k_\text{feat} ( x^{(i)}, x^{(i')} ) = \covariance ( y_p(x^{(i)}), y_p(x^{(i')}) ).$$ Multi-task Gaussian processes extend this model by assuming a GP prior on measurements across \emph{all} individuals. We use a covariance function $k_\text{multi}$ that augments $k_\text{feat}$ with a task-specific covariance $k_\text{task}$. Thus $k_\text{multi}$ measures covariance across individuals $p$ and $p'$ as well as across inputs $x^{(i)}$ and $x^{(i')}$: \begin{align*} k_\text{multi} ( [ x^{(i)}; p ], [ x^{(i')}; p'] ) &= \covariance ( y_p(x^{(i)}), y_{p'}(x^{(i')}) ) \\ &= k_\text{feat} ( x^{(i)} , x^{(i')} ) \, k_\text{task} ( p, p' ). \end{align*} $k_\text{feat}$ takes the form of a kernel function, such as the RBF kernel or the spectral mixture kernel \cite{wilson2013gaussian}. $k_\text{task}$ is parameterized by a person-correlation matrix $\ensuremath{\M}^\text{ppl} \in \ensuremath{\mathbb{R}}^{{\ensuremath{s}} \times {\ensuremath{s}}}$ -- i.e. $k_\text{task}(p, p') = \ensuremath{\M}^\text{ppl}_{p, p'}$.
We learn the hyperparameters of $k_\text{feat}$ and the entries of $\ensuremath{\M}^\text{ppl}$ by optimizing the marginal log likelihood of the data. It is worth noting that our formulation of multi-task GPs differs slightly from the original. \citet{bonilla2008multi} assume that inputs map to outputs for \emph{each} task. Since we assume that inputs $x_p^i$ are unique to each person, we assume that inputs map to outputs for a \emph{single} task. Thus we cannot formulate the covariance matrix in terms of Kronecker products, as in \cite{bonilla2008multi}. Under our formulation, the covariance matrix is expressed as an element-wise multiplication: $$K^\text{multi} = K^\text{feat} \odot K^\text{task}.$$ $K^\text{feat}$ is constructed by calculating $k_\text{feat}$ on all pairs of points. $K^\text{task}$, on the other hand, is constructed using $\ensuremath{\M}^\text{ppl}$ as a lookup table -- i.e., given tasks $p_i$ and $p_j$, $K^\text{task}_{i, j} = \ensuremath{\M}^\text{ppl}_{p_i, p_j}$. We can efficiently implement this \agw{this what?} using matrix multiplies: $$K^\text{task} = \ensuremath{\V} \ensuremath{\M}^\text{ppl} \ensuremath{\V}^T$$ where $\ensuremath{\V} \in \{0, 1\}^{n \times {\ensuremath{s}}}$ is a sparse index matrix in which each row $v_i$ is a one-hot encoding of task $p_i$. To ensure that $K^\text{task}$ is positive semi-definite, we constrain $\ensuremath{\M}^\text{ppl}$ to take the form \begin{equation} \ensuremath{\M}^\text{ppl} = \mathbf B \mathbf B^T + \text{diag} ( \mathbf \kappa ), \label{eqn:taskmat_form} \end{equation} where $\mathbf B$ is a low-rank matrix (the rank $r$ is a hyperparameter) and $\mathbf \kappa \in \ensuremath{\mathbb{R}}^{\ensuremath{s}}$ is a vector with nonnegative entries. \paragraph{Scaling with tasks.} As ${\ensuremath{s}}$, the number of tasks, increases, the number of model parameters in $\ensuremath{\M}^\text{ppl}$ increases quadratically. We find empirically that learning is significantly impeded as the number of parameters increases.
When formulating $\ensuremath{\M}^\text{ppl}$ as in \eqref{eqn:taskmat_form}, the model is very sensitive to the rank of the matrix $\mathbf B$. If the rank is too small, it becomes difficult to construct an $\ensuremath{\M}^\text{ppl}$ which fully captures correlations between people. If the rank is too large, it becomes increasingly likely that the model overfits. Empirically, we find that when ${\ensuremath{s}}$ exceeds 20, even changing the rank by 1 significantly alters results. While it would be possible to sweep over all possible values of the rank, or to marginalize out the parameters in $\ensuremath{\M}^\text{ppl}$, the computational cost of doing so may make training this model infeasible.\agw{Actually, a prior over M seems quite reasonable here} \subsection{Learning clusters} To avoid parameter explosion, our goal is to reduce the dimensionality of the task-correlation matrix. We accomplish this by discovering \emph{latent subpopulation clusters} in the data, and parameterizing $k_\text{task}$ in terms of correlations between clusters rather than people. Given ${\ensuremath{c}}$ clusters, we now parameterize $k_\text{task}$ by a matrix $\ensuremath{\M}^\text{c} \in \ensuremath{\mathbb{R}}^{{\ensuremath{c}} \times {\ensuremath{c}}}$. If ${\ensuremath{c}} \ll {\ensuremath{s}}$, then the number of parameters to learn will be significantly reduced. We can now rewrite $K^\text{task}$ as $K^\text{task} = \ensuremath{\V} \ensuremath{\M}^\text{c} \ensuremath{\V}^T$, where $\ensuremath{\V} \in \ensuremath{\mathbb{R}}^{n \times {\ensuremath{c}}}$ is a matrix where each row $\mathbf v_i$ represents a categorical distribution over person $p_i$'s cluster assignment.
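The cluster-level task covariance above is a few lines of NumPy (an illustrative sketch; $\mathbf{V}$ holds soft cluster assignments and the cluster matrix uses the same low-rank-plus-diagonal form as \eqref{eqn:taskmat_form}):

```python
import numpy as np

def cluster_matrix(B, kappa):
    # Low-rank-plus-diagonal form M_c = B B^T + diag(kappa): positive
    # semi-definite whenever kappa has nonnegative entries.
    return B @ B.T + np.diag(kappa)

def task_covariance(V, M_c):
    # V: (n, c), each row a categorical distribution over cluster assignment.
    # Returns the (n, n) task covariance K_task = V M_c V^T.
    return V @ M_c @ V.T
```

With hard (one-hot) assignments, two people in the same cluster share the corresponding entry of $\ensuremath{\M}^\text{c}$ as their task covariance.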
\agw{How do you learn the clusters?} \gp{Draw connection to parameter sharing in $\ensuremath{\M}^\text{ppl}$.} \subsection{Scaling to large datasets} \section{BACKGROUND} In this section, we provide a brief review of Gaussian process regression and an overview of iterative inference techniques for Gaussian processes based on matrix-vector multiplies. \subsection{Gaussian Processes} A Gaussian process generalizes multivariate normal distributions to distributions over functions that are specified by a prior \emph{mean function} and a prior \emph{covariance function} $f(\ensuremath{\mathbf{x}})\sim\mathcal{GP} \left(\mu(\ensuremath{\mathbf{x}}),k(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{x}}')\right)$. By definition, the function values of a GP at any finite set of inputs $[\ensuremath{\mathbf{x}}_{1},...,\ensuremath{\mathbf{x}}_{n}]$ are jointly Gaussian distributed: \begin{equation*} \ensuremath{\mathbf{f}} = [f(\ensuremath{\mathbf{x}}_{1}),...,f(\ensuremath{\mathbf{x}}_{n})] \sim\mathcal{N} \left( \mu_{\ensuremath{\boldmatrix{X}}}, \ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X} \right) \end{equation*} where $\mu_{\ensuremath{\boldmatrix{X}}} = [\mu(\ensuremath{\mathbf{x}}_{1}),...,\mu(\ensuremath{\mathbf{x}}_{n})]^{\top}$ and $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X}=[k(\ensuremath{\mathbf{x}}_{i},\ensuremath{\mathbf{x}}_{j})]_{i,j=1}^{n}$. Generally, $K_{AB}$ denotes a matrix of cross-covariances between the sets $A$ and $B$. 
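The finite-dimensional consistency just stated is all that is needed to draw prior samples at a finite set of inputs; a minimal NumPy sketch (the RBF kernel and the jitter value are illustrative choices, not part of the definition):

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sample_prior(X, n_samples=3, jitter=1e-8, seed=0):
    # f = [f(x_1), ..., f(x_n)] ~ N(0, K_XX): sample via a Cholesky factor.
    K = rbf(X, X) + jitter * np.eye(len(X))  # jitter for numerical stability
    L = np.linalg.cholesky(K)
    rng = np.random.default_rng(seed)
    return (L @ rng.standard_normal((len(X), n_samples))).T
```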
Under a Gaussian noise observation model, $p(y(\ensuremath{\mathbf{x}}) \mid f(\ensuremath{\mathbf{x}})) \sim \mathcal{N}(y(\ensuremath{\mathbf{x}}); f(\ensuremath{\mathbf{x}}),\sigma^2)$, the predictive distribution at $\ensuremath{\mathbf{x}}^{*}$ given data $\mathcal{D} = \{(\ensuremath{\mathbf{x}}_i, y_i)\}_{i=1}^{n}$ is \begin{align} p(f(\ensuremath{\mathbf{x}}^{*})\mid\ensuremath{\mathcal D}) &\sim \mathcal{GP} \left(\mu_{f\mid\ensuremath{\mathcal D}}(\ensuremath{\mathbf{x}}^{*}), k_{f\mid\ensuremath{\mathcal D}}(\ensuremath{\mathbf{x}}^{*}, \ensuremath{\mathbf{x}}^{*'})\right), \notag \\ \mu_{f\mid\ensuremath{\mathcal D}}(\ensuremath{\mathbf{x}}^{*}) &= \mu(\ensuremath{\mathbf{x}}^{*}) + \ensuremath{\boldmatrix{K}}_{\ensuremath{\mathbf{x}}^{*}\ensuremath{\boldmatrix{X}}}\hat{K}_{\ensuremath{\boldmatrix{X}}\X}^{-1} (\ensuremath{\mathbf{y}} - \mu_{\ensuremath{\boldmatrix{X}}}) \label{eq:pred_mean}, \\ k_{f\mid\ensuremath{\mathcal D}}(\ensuremath{\mathbf{x}}^*, \ensuremath{\mathbf{x}}^*) &= \ensuremath{\boldmatrix{K}}_{\ensuremath{\mathbf{x}}^{*}\ensuremath{\mathbf{x}}^{*}} - \ensuremath{\boldmatrix{K}}_{\ensuremath{\mathbf{x}}^{*}\ensuremath{\boldmatrix{X}}}\hat{K}_{\ensuremath{\boldmatrix{X}}\X}^{-1}\ensuremath{\boldmatrix{K}}_{\ensuremath{\mathbf{x}}^{*}\ensuremath{\boldmatrix{X}}}^{\top}, \label{eq:pred_covar} \end{align} where $\hat{K}_{\ensuremath{\boldmatrix{X}}\X} = K_{\ensuremath{\boldmatrix{X}}\X} + \sigma^2 I$ and $\ensuremath{\mathbf{y}} = (y(\ensuremath{\mathbf{x}}_1),\dots,y(\ensuremath{\mathbf{x}}_n))^{\top}$. All kernel matrices implicitly depend on hyperparameters $\theta$.
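The predictive equations above translate directly into code. A dense NumPy sketch with a zero prior mean (the RBF kernel and all helper names are illustrative; a practical implementation would reuse a single factorization of $\hat{K}_{\ensuremath{\boldmatrix{X}}\X}$):

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.2):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xstar, noise=1e-4):
    # mu_*  = K_{*X} (K_XX + s^2 I)^{-1} y            (zero prior mean)
    # k_**  = K_{**} - K_{*X} (K_XX + s^2 I)^{-1} K_{*X}^T
    Khat = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xstar, X)
    mean = Ks @ np.linalg.solve(Khat, y)
    cov = rbf(Xstar, Xstar) - Ks @ np.linalg.solve(Khat, Ks.T)
    return mean, cov
```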
The log \emph{marginal likelihood} of the data, conditioned only on these hyperparameters, is given by \begin{equation} \label{eq:marginalloglik} \log p(\ensuremath{\mathbf{y}}\mid\theta) = -\frac{1}{2}\ensuremath{\mathbf{y}}^{\top}\hat{K}_{\ensuremath{\boldmatrix{X}}\X}^{-1}\ensuremath{\mathbf{y}} - \frac{1}{2} \log \vert \hat{K}_{\ensuremath{\boldmatrix{X}}\X} \vert + \text{c} \,, \end{equation} which provides a utility function for kernel learning. \subsection{Inference with matrix-vector multiplies} \label{sec:mvm} In order to compute the predictive mean in \eqref{eq:pred_mean}, the predictive covariance in \eqref{eq:pred_covar}, and the marginal log likelihood in \eqref{eq:marginalloglik}, we need to perform linear solves (i.e. $[\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X} + \sigma^{2} I]^{-1}\ensuremath{\mathbf{v}}$) and log determinants (i.e. $\log | \ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X} + \sigma^{2} I |$). Traditionally, these operations are achieved using the Cholesky decomposition of $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X}$ \citep{rasmussen2006gaussian}. Computing this decomposition requires $\bigo{n^3}$ operations and storing the result requires $\bigo{n^2}$ space. Given the Cholesky decomposition, linear solves can be computed in $\bigo{n^{2}}$ time and log determinants in $\bigo{n}$ time. There exist alternative approaches \citep[e.g.][]{wilson2015kernel} that require only matrix-vector multiplies (MVMs) with $[\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X} + \sigma^{2} I]$. To compute linear solves, we use the method of \emph{conjugate gradients} (CG). 
This technique exploits the fact that the solution to $\ensuremath{\boldmatrix{A}}\ensuremath{\mathbf{x}} = \ensuremath{\mathbf{b}}$ is the unique minimizer of the quadratic function $\frac{1}{2} \ensuremath{\mathbf{x}}^{\top}\ensuremath{\boldmatrix{A}}\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{x}}^{\top}\ensuremath{\mathbf{b}}$, which can be found by iterating a simple three-term recurrence. Each iteration requires a single MVM with the matrix $\ensuremath{\boldmatrix{A}}$ \citep{shewchuk1994introduction}. Letting $\mvm{\ensuremath{\boldmatrix{A}}}$ denote the time complexity of an MVM with $\ensuremath{\boldmatrix{A}}$, $p$ iterations of CG require $O(p\mvm{A})$ time. If $\ensuremath{\boldmatrix{A}}$ is $n \times n$, then CG is exact when $p=n$. However, the linear solve can often be approximated by $p<n$ iterations, since the magnitude of the residual $\mathbf{r} = \ensuremath{\boldmatrix{A}}\ensuremath{\mathbf{x}} - \ensuremath{\mathbf{b}}$ often decays exponentially. In practice, the number of iterations $p$ required for convergence to high precision is a small constant that depends on the conditioning of $\ensuremath{\boldmatrix{A}}$ rather than on $n$ \citep{golub2012matrix}. A similar technique known as \emph{stochastic Lanczos quadrature} exists for approximating log determinants in $O(p\mvm{A})$ time \citep{dong2017scalable,ubaru2017fast}. In short, inference and learning for GP regression can be done in $\bigo{p\mvm{\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X}}}$ time using these iterative approaches. Critically, if the kernel matrices admit fast MVMs -- either through the structure of the data \citep{saatcci2012scalable,cunningham2008fast} or the structure of a general purpose kernel approximation \citep{wilson2015kernel} -- this iterative approach offers massive scalability gains over conventional Cholesky-based methods.
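The CG recurrence described above is short enough to state in full; a minimal NumPy sketch that touches $\ensuremath{\boldmatrix{A}}$ only through a user-supplied MVM closure (the stochastic Lanczos quadrature part for log determinants is not shown):

```python
import numpy as np

def conjugate_gradients(mvm, b, max_iter=None, tol=1e-10):
    # Solves A x = b for symmetric positive definite A, using only
    # matrix-vector products mvm(v) = A v.
    x = np.zeros_like(b)
    r = b - mvm(x)            # residual
    p = r.copy()              # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = mvm(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```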
\subsection{Structured kernel interpolation} \label{sec: ski} Structured kernel interpolation (SKI) \citep{wilson2015kernel} replaces a user-specified kernel $k(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{x}}')$ with an approximate kernel that affords very fast matrix-vector multiplies. Assume we are given a set of $m$ \emph{inducing points} $\ensuremath{\boldmatrix{U}}$ that we will use to approximate kernel values. Instead of computing kernel values between data points directly, SKI computes kernel values between inducing points and \emph{interpolates} these kernel values to approximate the true data kernel values. This leads to the approximate SKI kernel: \begin{equation}\label{eq:interp_single} k(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}) \approx \ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{x}}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{z}}}^{\top}, \end{equation} where $\ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{x}}}$ is a sparse vector that contains interpolation weights. For example, when using local cubic interpolation \citep{keys1981cubic}, $\ensuremath{\mathbf{w}}_{x}$ contains four nonzero elements. Applying this approximation for all data points in the training set, we see that: \begin{equation} \ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X} \approx \ensuremath{\boldmatrix{W}}_{\ensuremath{\boldmatrix{X}}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}_{\ensuremath{\boldmatrix{X}}}^{\top} \end{equation} With arbitrary inducing points $\ensuremath{\boldmatrix{U}}$, matrix-vector multiplies with $[\ensuremath{\boldmatrix{W}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}^{\top}]\ensuremath{\mathbf{v}}$ require $\bigo{n+m^{2}}$ time. 
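A one-dimensional sketch makes the $\bigo{n+m^{2}}$ MVM cost concrete. For brevity this uses linear rather than the local cubic interpolation cited above, and stores $\ensuremath{\boldmatrix{W}}$ densely; in practice $\ensuremath{\boldmatrix{W}}$ is kept sparse with a constant number of nonzeros per row:

```python
import numpy as np

def interp_weights(x, grid):
    # Linear interpolation weights of points x onto a sorted 1-D grid:
    # each row of W has two nonzeros (cubic interpolation would give four).
    n, m = len(x), len(grid)
    W = np.zeros((n, m))
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, m - 2)
    t = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
    W[np.arange(n), idx] = 1.0 - t
    W[np.arange(n), idx + 1] = t
    return W

def ski_mvm(W, K_uu, v):
    # (W K_UU W^T) v, evaluated right-to-left so no n x n matrix is formed.
    return W @ (K_uu @ (W.T @ v))
```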
In one dimension, we can reduce this running time by instead choosing $\ensuremath{\boldmatrix{U}}$ to be a regular grid, which results in $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$ being \emph{Toeplitz}. In higher dimensions, a multi-dimensional grid results in $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$ being the Kronecker product of Toeplitz matrices. This decomposition enables matrix-vector multiplies in at most $\bigo{n + m \log m}$ time, and $\bigo{n + m}$ storage. However, a Kronecker decomposition of $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$ leads to a time complexity that is exponential in $d$, the dimensionality of the inputs $\mathbf{x}$ \citep{wilson2015kernel}. \section{DISCUSSION} \vspace{-1ex} \label{sec:discussion} It is our hope that this work highlights a question of foundational importance for scalable GP inference: \emph{given the ability to compute $\ensuremath{\boldmatrix{A}}\ensuremath{\mathbf{v}}$ and $\ensuremath{\boldmatrix{B}}\ensuremath{\mathbf{v}}$ quickly for matrices $\ensuremath{\boldmatrix{A}}$ and $\ensuremath{\boldmatrix{B}}$, how do we compute $(A \circ B)\ensuremath{\mathbf{v}}$ efficiently?} We have shown that an answer to this question can \emph{exponentially} improve the scalability and general applicability of MVM-based methods for fast Gaussian processes. \paragraph{Stochastic diagonal estimation.} Our method relies primarily on quickly computing the diagonal in Equation \eqref{eq:two_mat_hpvm}. Techniques exist for stochastic diagonal estimation \citep{fitzsimons2016improved,hutchinson1990stochastic,selig2012improving,bekas2007estimator}. We found that these techniques converged more slowly than our method in practice, but they may be more appropriate for kernels with high-rank structure.
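For reference, the classical Hutchinson-style diagonal estimator mentioned above needs only MVMs; a minimal sketch (the probe count trades accuracy for time, and all names are illustrative):

```python
import numpy as np

def hutchinson_diag(mvm, n, n_probes=200, seed=0):
    # diag(A) ~ E[z * (A z)] with Rademacher probes z in {-1, +1}^n;
    # off-diagonal terms cancel in expectation.
    rng = np.random.default_rng(seed)
    est = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z * mvm(z)
    return est / n_probes
```

The per-entry variance scales with the squared off-diagonal mass of the corresponding row divided by the probe count, which is why such estimators can converge slowly for matrices with heavy off-diagonal structure.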
\paragraph{Higher-order product kernels.} A fundamental property of the Hadamard product is that $\textrm{rank}(A \circ B) \leq \textrm{rank}(A)\textrm{rank}(B)$, suggesting that we may need higher-rank approximations with increasing dimension. In the limit, the SKI approximation $\ensuremath{\boldmatrix{W}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}^{\top}$ can be used in place of the Lanczos decomposition in equation \eqref{eq:two_mat_hpvm}, resulting in an exact algorithm with $\bigo{dnm + dm^{2}\log m}$ runtime: simply set $Q_{k}=\ensuremath{\boldmatrix{W}}$, so that MVMs require $\bigo{n}$ time instead of $\bigo{nk}$, and set $T_{k}=\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$, so that MVMs require $\bigo{m \log m}$ time instead of $\bigo{k^{2}}$. This adaptation is rarely necessary, as the accuracy of MVMs with SKIP increases exponentially in $k$ in practice. \paragraph{Space complexity.} To perform the matrix-vector multiplication algorithm described above, we must store the Lanczos decomposition of each component kernel matrix and the intermediate matrices in the merge step, for $\bigo{dkn}$ storage. This is better than the $\bigo{n^{2}}$ storage required for full GP regression, or the $\bigo{nm}$ storage of standard inducing point methods, but worse than the linear storage requirement of SKI. In practice, we note that GPU memory is indeed often the major limitation of our method, as storing even $k=20$ or $k=30$ copies of a dataset in GPU memory can be expensive. \section{APPLICATION 1: AN EXPONENTIAL IMPROVEMENT TO SKI} \begin{table} \caption{Asymptotic complexities of a single calculation of \autoref{eq:marginalloglik} with $n$ data points, $m$ inducing points, $r$ Lanczos iterations and $p$ CG iterations. The first two rows correspond to an exact GP with Cholesky and CG.
\label{tab:time}} \vspace{0.5ex} \centering \resizebox{\columnwidth}{!}{% \input results/time_complexities.tex } \vspace{-2ex} \end{table} \citet{wilson2015kernel} use a Kronecker decomposition of $K_{UU}$ to apply SKI for $d > 1$ dimensions, which requires a fully connected multi-dimensional grid of inducing points $U$. Thus if we wish to have $m$ distinct inducing point values for each dimension, the grid requires $m^{d}$ inducing points -- i.e. MVMs with the SKI approximate $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X}$ require $\bigo{n + dm^{d}\log m}$ time. It is therefore computationally infeasible to apply SKI with a Kronecker factorization, referred to in \citet{wilson2015kernel} as KISS-GP, to more than about five dimensions. However, using the proposed SKIP method of Section~\ref{sec: skip}, we can reduce the running time complexity of SKI in $d$ dimensions from exponential $\bigo{n + dm^{d}\log m}$ to linear $\bigo{dn + dm\log m}$! If we express a $d$-dimensional kernel as the product of $d$ one-dimensional kernels, then each component kernel requires only $m$ grid points, rather than $m^{d}$. For the RBF and ARD kernels, decomposing the kernel in this way yields the same kernel function. \paragraph{Datasets.} We evaluate SKIP on six benchmark datasets. The precipitation dataset contains hourly rainfall measurements from hundreds of stations around the country. The remaining datasets are taken from the UCI machine learning dataset repository. KISS-GP (SKI with a Kronecker factorization) is not applicable when $d>5$, and the full GP is not applicable on the four largest datasets. \paragraph{Methods.} We compare against the popular sparse variational Gaussian processes (SGPR) \citep{titsias2009variational, hensman2013gaussian} implemented in GPflow \citep{matthews2017gpflow}. We also compare to our GPU implementation of KISS-GP where possible, as well as our GPU implementation of the full GP on the two smallest datasets. 
All experiments were run on an NVIDIA Titan Xp. We evaluate SGPR using 200, 400 and 800 inducing points. All models use the RBF kernel and a constant prior mean function. We optimize hyperparameters with ADAM using default optimization parameters. \paragraph{Discussion.} The results of our experiments are shown in \autoref{tab:highd_results}. On the two smallest datasets, the full GP model outperforms all other methods in terms of speed, because with so little data the overhead added by inducing point methods significantly outweighs the cost of simple solves with conjugate gradients. SKIP is able to match the error of the full GP model on elevators, and all methods have comparable error on the Pumadyn dataset. On the precipitation dataset, inference with standard KISS-GP is still tractable due to the low dimensionality, and KISS-GP is both fast and accurate. Using SKIP results in higher error than KISS-GP, because space constraints forced us to use significantly fewer Lanczos iterations for our approximate MVMs than on the other datasets. We discuss this space complexity limitation further in the discussion section. Nevertheless, SKIP still performs better than SGPR. SGPR results with 400 and 800 inducing points are unavailable due to GPU memory constraints. On the remaining datasets, SKIP is able to achieve comparable or better overall error than SGPR, but with a significantly lower runtime. \section{INTRODUCTION} Gaussian processes (GPs) provide a powerful approach to regression and extrapolation, with applications as varied as time series analysis \citep{wilson2013gaussian,DuvLloGroetal13}, blackbox optimization \citep{jones1998efficient,snoek2012practical}, and personalized medicine and counterfactual prediction \citep{durichen2015multitask, schulam2015framework, herlands2016scalable,gardner2015psychophysical}.
Historically, one of the key limitations of Gaussian process regression has been the computational intractability of inference when dealing with more than a few thousand data points. This complexity stems from the need to solve linear systems and compute log determinants involving an $n \times n$ symmetric positive definite \emph{covariance matrix} $\ensuremath{\boldmatrix{K}}$. This task is commonly performed by computing the Cholesky decomposition of $K$ \citep{rasmussen2006gaussian}, incurring $\bigo{n^{3}}$ complexity. To reduce this complexity, \emph{inducing point methods} make use of a small set of $m < n$ points to form a rank-$m$ approximation of $\ensuremath{\boldmatrix{K}}$ \citep{quinonero2005unifying, snelson2006sparse,hensman2013gaussian,titsias2009variational}. Using the matrix inversion and determinant lemmas, inference can be performed in $\bigo{nm^{2}}$ time \citep{snelson2006sparse}. Recently, however, an alternative class of inference techniques for Gaussian processes has emerged based on iterative numerical linear algebra techniques \citep{wilson2015kernel,dong2017scalable}. Rather than explicitly decomposing the full covariance matrix, these methods leverage Krylov subspace methods \citep{golub2012matrix} to perform linear solves and compute log determinants using only matrix-vector multiplications (MVMs) with the covariance matrix.
Letting $\mvm{\ensuremath{\boldmatrix{K}}}$ denote the time complexity of computing $\ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}}$ given a vector $\ensuremath{\mathbf{v}}$, these methods provide excellent approximations to linear solves and log determinants in $\bigo{r\mvm{\ensuremath{\boldmatrix{K}}}}$ time, where $r$ is typically some small constant \cite{golub2012matrix}.\footnote{ In practice, $r$ depends on the conditioning of $K$, but is independent of $n$.} This approach has led to scalable GP methods that differ radically from previous approaches -- the goal shifts from computing efficient Cholesky decompositions to computing efficient MVMs. Structured kernel interpolation (SKI) \citep{wilson2015kernel} is a recently proposed inducing point method that, given a regular grid of $m$ inducing points, allows for MVMs to be performed in an impressive $\bigo{n + m \log m}$ time. These MVM approaches have two fundamental drawbacks. First, \citet{wilson2015kernel} use Kronecker factorizations for SKI to take advantage of fast MVMs, constraining the number of inducing points $m$ to grow exponentially with the dimensionality of the inputs, limiting the applicability of SKI to problems with fewer than about $5$ input dimensions. Second, the computational benefits of iterative MVM inference methods come at the cost of reduced modularity. If all we know about a kernel is that it decomposes as $\ensuremath{\boldmatrix{K}} = \ensuremath{\boldmatrix{K}}_{1} \circ \ensuremath{\boldmatrix{K}}_{2}$, it is not obvious how to efficiently perform MVMs with $\ensuremath{\boldmatrix{K}}$, even if we have access to fast MVMs with both $\ensuremath{\boldmatrix{K}}_{1}$ and $\ensuremath{\boldmatrix{K}}_{2}$. In order for MVM inference to be truly modular, we should be able to perform inference equipped with nothing but the ability to perform MVMs with $K$. 
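To make the MVM-only interface concrete, the following is a minimal conjugate gradients sketch (not the implementation used in this paper) that solves $Kv = y$ while touching $K$ only through a black-box matrix-vector multiply; the RBF test matrix and its noise level are hypothetical:

```python
import numpy as np

def cg_solve(matvec, b, max_iter=500, tol=1e-10):
    """Solve K x = b by conjugate gradients, touching K only through `matvec`."""
    x = np.zeros_like(b)
    r = b - matvec(x)              # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Kp = matvec(p)
        alpha = rs / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hypothetical covariance: a 1-D RBF kernel matrix plus observation noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1))
K = np.exp(-0.5 * (X - X.T) ** 2) + 0.1 * np.eye(200)
y = rng.standard_normal(200)
x = cg_solve(lambda v: K @ v, y)   # the solver never sees K itself
```

Any routine with the same signature -- for instance an implicit structured MVM -- could be passed in place of the dense `lambda`, which is precisely the modularity discussed above.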
One of the primary advantages of GPs is the ability to construct very expressive kernels by composing simpler ones \citep{rasmussen2006gaussian, gonen2011multiple, durrande2011additive, DuvLloGroetal13, wilson2014covariance}. One of the most common kernel compositions is the element-wise product of kernels. This composition can encode different functional properties for each input dimension \citep[e.g.,][]{rasmussen2006gaussian, gonen2011multiple, DuvLloGroetal13, wilson2014covariance}, or express correlations between outputs in multi-task settings \citep{mackay98, bonilla2008multi, alvarez2011computationally}. Moreover, the RBF and ARD kernels -- arguably the most popular kernels in use -- decompose into product kernels. In this paper, we propose a single solution which addresses both of these limitations of iterative methods -- improving modularity while simultaneously alleviating the curse of dimensionality. In particular: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item We demonstrate that MVMs with product kernels can be approximated efficiently by computing the Lanczos decomposition of each component kernel. If MVMs with a kernel $\ensuremath{\boldmatrix{K}}$ can be performed in $\bigo{\mvm{\ensuremath{\boldmatrix{K}}}}$ time, then MVMs with the element-wise product of $d$ kernels can be approximated in $\bigo{dr\mvm{\ensuremath{\boldmatrix{K}}} + r^{3}n\log d}$ time, where $r$ is typically a very small constant. \item Our fast product-kernel MVM algorithm, entitled \emph{SKIP}, enables the use of structured kernel interpolation with product kernels without resorting to the exponential complexity of Kronecker products. SKIP can be applied even when the product kernels use different interpolation grids, and enables GP inference and learning in $\bigo{dn + d m \log m}$ for products of $d$ kernels. \item We apply SKIP to high-dimensional regression problems by expressing $d$-dimensional kernels as the product of $d$ one-dimensional kernels. 
This formulation affords an \emph{exponential improvement} over the standard SKI complexity of $\bigo{n + dm^{d} \log m}$, and achieves state-of-the-art performance compared to popular inducing point methods \citep{hensman2013gaussian, titsias2009variational}. \item We demonstrate that SKIP can reduce the complexity of multi-task GPs (MTGPs) to $\bigo{n + m \log m + s}$ for a problem with $s$ tasks. We exploit this fast inference, developing a model that discovers clusters of tasks using Gibbs sampling. \item We make our GPU implementations available as easy-to-use code as part of a new package for Gaussian processes, GPyTorch, available at \url{https://github.com/cornellius-gp/gpytorch}. \end{enumerate} \section{MVMs WITH PRODUCT KERNELS} \label{sec: skip} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figures/product_kernel.pdf} \caption{ Computing fast matrix-vector multiplies (MVMs) with the product kernel $K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}.$ {\bf 1:} Rewrite the element-wise product as the diagonal $\Delta(\cdot)$ of a product of matrices. {\bf 2:} Compute the rank-$r$ Lanczos decomposition of $K^{(1)}_{\ensuremath{\boldmatrix{X}}\X}$ and $K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$. } \label{fig:product_kernel} \raggedbottom \end{figure} In this section we derive an approach to exploit product kernel structure for fast MVMs, towards alleviating the curse of dimensionality in SKI. Suppose a kernel separates as a product as follows: \begin{equation} \label{eq:prod_kernel} k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') = \prod_{i=1}^{d}k^{(i)}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}').
\end{equation} Given a training data set $\ensuremath{\boldmatrix{X}} = [ \ensuremath{\mathbf{x}}_1, \ldots, \ensuremath{\mathbf{x}}_n ]$, the kernel matrix $\ensuremath{\boldmatrix{K}}$ resulting from the product of kernels in \eqref{eq:prod_kernel} can be expressed as $\ensuremath{\boldmatrix{K}}=\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(d)}_{\ensuremath{\boldmatrix{X}}\X}$, where $\circ$ represents element-wise multiplication. In other words: \begin{equation} \left[K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}\right]_{ij} = \left[K^{(1)}_{\ensuremath{\boldmatrix{X}}\X}\right]_{ij}\left[K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}\right]_{ij}. \end{equation} The key limitation we must deal with is that, unlike a sum of matrices, vector multiplication does not distribute over the elementwise product: \begin{equation} \left( K^{(1)} \circ K^{(2)} \right) \ensuremath{\mathbf{v}} \ne \left( K^{(1)} \ensuremath{\mathbf{v}} \right) \circ \left( K^{(2)} \ensuremath{\mathbf{v}} \right). \end{equation} We will assume we have access to fast MVMs for each component kernel matrix $K^{(i)}$. Without fast MVMs, there is a trivial solution to computing the elementwise matrix-vector product: explicitly compute the kernel matrix $\ensuremath{\boldmatrix{K}}$ in $\bigo{dn^{2}}$ time and then compute $\ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}}$. We further assume that $K^{(i)}$ admits a low-rank approximation, following prior work on inducing point methods \citep{snelson2006sparse,titsias2009variational,wilson2015kernel,hensman2013gaussian}. \paragraph{A naive algorithm for a two-kernel product.} We initially assume for simplicity that there are only $d=2$ component kernels in the product. We will then show how to extend the two-kernel case to arbitrarily sized product kernels.
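As a quick numerical illustration of the two points above -- the RBF kernel separating across dimensions, and the failure of MVMs to distribute over $\circ$ -- consider the following NumPy sketch with hypothetical toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))    # hypothetical toy inputs
v = rng.standard_normal(5)

def rbf(x):
    """1-D RBF kernel matrix with unit lengthscale."""
    return np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

K1, K2 = rbf(X[:, 0]), rbf(X[:, 1])
K = K1 * K2                        # [K1 o K2]_ij = [K1]_ij [K2]_ij

# The product of per-dimension RBF kernels is exactly the 2-D RBF kernel...
K_2d = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.allclose(K, K_2d))                     # True

# ...but an MVM does not distribute over the elementwise product:
print(np.allclose(K @ v, (K1 @ v) * (K2 @ v)))  # False
```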
We seek to perform matrix vector multiplies: \begin{equation} \label{eq:mvm_2_terms} (K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ K^{(2)}_{\ensuremath{\boldmatrix{X}}\X})\ensuremath{\mathbf{v}} \end{equation} Eq.~\eqref{eq:mvm_2_terms} may be expressed in terms of matrix-matrix multiplication using the following identity: \begin{equation} \label{eq:two_mat_hpvm} \ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}} = (K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ K^{(2)}_{\ensuremath{\boldmatrix{X}}\X})\ensuremath{\mathbf{v}} = \Delta(K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \: D_{\ensuremath{\mathbf{v}}} \: K^{(2)\top}_{\ensuremath{\boldmatrix{X}}\X}), \end{equation} where $D_{\ensuremath{\mathbf{v}}}$ is a diagonal matrix whose elements are $\ensuremath{\mathbf{v}}$ (\autoref{fig:product_kernel}), and $\Delta(M)$ denotes the diagonal of $M$. Because $D_{\ensuremath{\mathbf{v}}}$ is an $n \times n$ matrix, computing the entries of $K \ensuremath{\mathbf{v}}$ naively requires $n$ matrix-vector multiplies with $K^{(1)}_{\ensuremath{\boldmatrix{X}}\X}$ and $K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$. The time complexity to compute \eqref{eq:two_mat_hpvm} is therefore $\bigo{n\mvm{K^{(1)}_{\ensuremath{\boldmatrix{X}}\X}} + n\mvm{K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}}}$. 
On its own, this reformulation therefore offers no time savings. \paragraph{Exploiting low-rank structure.} Suppose, however, that we have access to rank-$r$ approximations of $\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X}$ and $\ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$: \begin{equation}\nonumber K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \approx \ensuremath{\boldmatrix{Q}}^{(1)}\ensuremath{\boldmatrix{T}}^{(1)}\ensuremath{\boldmatrix{Q}}^{(1)\top}, \:\:\:\:\: K^{(2)}_{\ensuremath{\boldmatrix{X}}\X} \approx \ensuremath{\boldmatrix{Q}}^{(2)}\ensuremath{\boldmatrix{T}}^{(2)}\ensuremath{\boldmatrix{Q}}^{(2)\top}, \end{equation} where $\ensuremath{\boldmatrix{Q}}^{(1)}$, $\ensuremath{\boldmatrix{Q}}^{(2)}$ are $n \times r$ and $\ensuremath{\boldmatrix{T}}^{(1)}$, $\ensuremath{\boldmatrix{T}}^{(2)}$ are $r \times r$ (\autoref{fig:product_kernel}). This low-rank decomposition makes the MVM significantly cheaper to compute. Plugging these decompositions into \eqref{eq:two_mat_hpvm}, we derive: \begin{equation} \label{eq:two_mat_hpvm_low_rank} \ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}} = \Delta\left(\ensuremath{\boldmatrix{Q}}^{(1)}\ensuremath{\boldmatrix{T}}^{(1)}\ensuremath{\boldmatrix{Q}}^{(1)\top} \: D_{\ensuremath{\mathbf{v}}} \: \ensuremath{\boldmatrix{Q}}^{(2)}\ensuremath{\boldmatrix{T}}^{(2)}\ensuremath{\boldmatrix{Q}}^{(2)\top}\right).
\end{equation} We prove the following key lemma about \eqref{eq:two_mat_hpvm_low_rank} in the supplementary materials: \begin{lemma} \label{lemma:low_rank_mvm} Suppose that $\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X}=\ensuremath{\boldmatrix{Q}}^{(1)}\ensuremath{\boldmatrix{T}}^{(1)}\ensuremath{\boldmatrix{Q}}^{(1)\top}$ and $\ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}=\ensuremath{\boldmatrix{Q}}^{(2)}\ensuremath{\boldmatrix{T}}^{(2)}\ensuremath{\boldmatrix{Q}}^{(2)\top}$, where $\ensuremath{\boldmatrix{Q}}^{(1)}$ and $\ensuremath{\boldmatrix{Q}}^{(2)}$ are $n \times r$ matrices and $\ensuremath{\boldmatrix{T}}^{(1)}$ and $\ensuremath{\boldmatrix{T}}^{(2)}$ are $r \times r$. Then $(\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X})\ensuremath{\mathbf{v}}$ can be computed with \eqref{eq:two_mat_hpvm_low_rank} in $\bigo{r^{2}n}$ time. \end{lemma} Therefore, if we can efficiently compute low-rank decompositions of $\ensuremath{\boldmatrix{K}}^{(1)}$ and $\ensuremath{\boldmatrix{K}}^{(2)}$, then we can immediately apply \autoref{lemma:low_rank_mvm} to perform fast MVMs. \paragraph{Computing low-rank structure.} With \autoref{lemma:low_rank_mvm}, we have reduced the problem of computing MVMs with $\ensuremath{\boldmatrix{K}}$ to that of constructing low-rank decompositions of $K^{(1)}_{\ensuremath{\boldmatrix{X}}\X}$ and $K^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$. Since we are assuming we can take fast MVMs with these kernel matrices, we now turn to the \emph{Lanczos decomposition} \citep{lanczos1950iteration,paige1972computational}. The Lanczos decomposition is an iterative algorithm that takes a symmetric matrix $A$ and a probe vector $b$ and returns $Q$ and $T$ such that $A \approx QTQ^{\top}$, with $\ensuremath{\boldmatrix{Q}}$ orthogonal. This decomposition is exact after $n$ iterations.
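The $\bigo{r^{2}n}$ claim of \autoref{lemma:low_rank_mvm} comes from regrouping \eqref{eq:two_mat_hpvm_low_rank} so that only $n \times r$ and $r \times r$ matrices are ever formed; a minimal NumPy sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 5                                  # hypothetical sizes, r << n
Q1, Q2 = rng.standard_normal((n, r)), rng.standard_normal((n, r))
A1, A2 = rng.standard_normal((r, r)), rng.standard_normal((r, r))
T1, T2 = A1 + A1.T, A2 + A2.T                  # symmetric, like Lanczos tridiagonals
v = rng.standard_normal(n)

def hadamard_lowrank_mvm(Q1, T1, Q2, T2, v):
    """Evaluate Delta(Q1 T1 Q1^T D_v Q2 T2 Q2^T) without forming any n x n matrix."""
    M = Q1.T @ (v[:, None] * Q2)               # r x r inner matrix, O(n r^2)
    C = T1 @ M @ T2                            # r x r, O(r^3)
    return ((Q1 @ C) * Q2).sum(axis=1)         # diagonal extraction, O(n r^2)

# O(n^2) reference computation for comparison.
K = (Q1 @ T1 @ Q1.T) * (Q2 @ T2 @ Q2.T)
```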
However, if we only compute $r < n$ columns of $Q$, then $Q_{r}T_{r}Q_{r}^{\top}$ is an effective low-rank approximation of $A$ \citep{nickisch2009bayesian,simon2000low}. Unlike standard low-rank approximations (such as the singular value decomposition), the algorithm for computing the Lanczos decomposition $\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X} = Q^{(i)}T^{(i)}Q^{(i)\top}$ requires only $r$ MVMs, leading to the following lemma: \begin{lemma} \label{lemma:lanczos_time} Suppose that MVMs with $\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}$ can be computed in $\bigo{\mvm{\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}}}$ time. Then the rank-$r$ Lanczos decomposition $\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X} \approx \ensuremath{\boldmatrix{Q}}_r^{(i)}\ensuremath{\boldmatrix{T}}_r^{(i)}\ensuremath{\boldmatrix{Q}}_r^{(i)\top}$ can be computed in $\bigo{r \mvm{\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}}}$ time. \end{lemma} The above discussion motivates the following algorithm for computing $(K^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ K^{(2)}_{\ensuremath{\boldmatrix{X}}\X})\ensuremath{\mathbf{v}}$, which is summarized by \autoref{fig:product_kernel}: First, compute the rank-$r$ Lanczos decomposition of each matrix; then, apply \eqref{eq:two_mat_hpvm_low_rank}. Lemmas \ref{lemma:lanczos_time} and \ref{lemma:low_rank_mvm} together imply that this takes $\bigo{r\mvm{\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X}} + r\mvm{\ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}} + r^{2}n}$ time. \begin{figure*}[t!] \centering \includegraphics[width=0.8\columnwidth]{figures/mvm_error.pdf} \hspace{3ex} \includegraphics[width=0.8\columnwidth]{figures/inducing_scaling.pdf} \caption{{\bf Left:} Relative error of MVMs computed using SKIP compared to the exact value $\ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}}$.
{\bf Right:} Training time as a function of the number of inducing points \emph{per dimension}. KISS-GP (SKI with a Kronecker factorization) scales well with the \emph{total} number of inducing points, but badly with the number of inducing points \emph{per dimension}, because the required total number of inducing points scales exponentially with the number of dimensions. \label{fig:mvm_evaluation}} \vspace{-2ex} \end{figure*} \paragraph{Extending to product kernels with three components.} Now consider a kernel that decomposes as the product of three components, $k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') = k^{(1)}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') k^{(2)}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}')k^{(3)}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}')$. An MVM with this kernel is given by $\ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}} = (\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X} \circ \ensuremath{\boldmatrix{K}}^{(3)}_{\ensuremath{\boldmatrix{X}}\X})\ensuremath{\mathbf{v}}.$ Define $\tilde{K}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} = \ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$ and $\tilde{K}^{(2)}_{\ensuremath{\boldmatrix{X}}\X} = \ensuremath{\boldmatrix{K}}^{(3)}_{\ensuremath{\boldmatrix{X}}\X}$. Then \begin{equation} \label{eq:three_mat_case} \ensuremath{\boldmatrix{K}}\ensuremath{\mathbf{v}} = (\tilde{K}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \tilde{K}^{(2)}_{\ensuremath{\boldmatrix{X}}\X})\ensuremath{\mathbf{v}} \,, \end{equation} reducing the three-component problem back to two components.
To compute the Lanczos decomposition of $\tilde{K}^{(1)}_{\ensuremath{\boldmatrix{X}}\X}$, we use the method described above for computing MVMs with $\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \ensuremath{\boldmatrix{K}}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$. \paragraph{Extending to product kernels with many components.} The approach for the three-component setting leads naturally to a divide-and-conquer strategy. Given a kernel matrix $\ensuremath{\boldmatrix{K}} = \ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(d)}_{\ensuremath{\boldmatrix{X}}\X}$ we define \begin{align} \tilde{K}^{(1)}_{\ensuremath{\boldmatrix{X}}\X}&=\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(\frac{d}{2})}_{\ensuremath{\boldmatrix{X}}\X} \\ \tilde{K}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}&=\ensuremath{\boldmatrix{K}}^{(\frac{d}{2}+1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(d)}_{\ensuremath{\boldmatrix{X}}\X}, \end{align} which lets us rewrite $\ensuremath{\boldmatrix{K}}=\tilde{K}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \tilde{K}^{(2)}_{\ensuremath{\boldmatrix{X}}\X}$. By applying this splitting recursively, we can compute matrix-vector multiplies with $\ensuremath{\boldmatrix{K}}$, leading to the following running time complexity: \begin{theorem} \label{theorem:main_running_time} Suppose that $\ensuremath{\boldmatrix{K}}=\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(d)}_{\ensuremath{\boldmatrix{X}}\X}$, and that computing a matrix-vector multiply with any $\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}$ requires $\bigo{\mvm{\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}}}$ operations.
Computing an MVM with $\ensuremath{\boldmatrix{K}}$ requires $\bigo{dr\mvm{\ensuremath{\boldmatrix{K}}^{(i)}} + r^{3}n\log d + r^{2}n}$ time, where $r$ is the rank of the Lanczos decomposition used. \end{theorem} \paragraph{Sequential MVMs.} If we are computing many MVMs with the same matrix, then we can further reduce this complexity by caching the Lanczos decomposition. The terms $\bigo{dr\mvm{\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}} + r^{3}n \log d}$ represent the time to construct the Lanczos decomposition. However, this decomposition does not depend on the vector that we wish to multiply with. Therefore, if we save the decomposition for future computation, we have the following corollary: \begin{corollary} Any subsequent MVMs with $\ensuremath{\boldmatrix{K}}$ require $\bigo{r^{2}n}$ time. \end{corollary} If matrix-vector multiplications with $\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}$ can be performed with far fewer than $n^{2}$ operations, this results in a significant complexity improvement over explicitly computing the full kernel matrix $\ensuremath{\boldmatrix{K}}$. \subsection{Structured kernel interpolation for products (SKIP)} So far we have assumed access to fast MVMs with each constituent kernel matrix of an elementwise (Hadamard) product: $\ensuremath{\boldmatrix{K}}=\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(d)}_{\ensuremath{\boldmatrix{X}}\X}$. To achieve this, we apply the SKI approximation (Section~\ref{sec:ski}) to each component: \begin{equation} \ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}=\ensuremath{\boldmatrix{W}}^{(i)}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}^{(i)\top}.
\end{equation} When using SKI approximations, the running time of our product kernel inference technique with $p$ iterations of CG becomes $\bigo{dr(n + m \log m) + r^{3}n\log d + pr^{2}n}$. The running time of SKIP is compared to that of other inference techniques in \autoref{tab:time}. \section{APPLICATION 2: MULTI-TASK LEARNING} \label{sec:multitask} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figures/health_ltgp_adapt.pdf} \caption{Applying the cluster-based MTGP model to new tasks. \label{fig:health_adaptation}} \vspace{-2ex} \end{figure*} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/health_comparison.pdf} \caption{Predictive performance on the childhood development dataset as a function of the number of tasks. \label{fig:health_comparison}} \vspace{-5ex} \end{figure} We demonstrate how the fast elementwise matrix-vector products of SKIP can also be applied to accelerate multi-task Gaussian processes (MTGPs). Additionally, because SKIP provides cheap marginal likelihood computations, we extend standard MTGPs to construct an interpretable and robust multi-task GP model which discovers latent clusters among tasks using Gibbs sampling. We apply this model to a particularly consequential child development dataset from the Gates Foundation. \paragraph{Motivating problem.} The Gates Foundation has collected an aggregate longitudinal dataset of child development, from studies performed around the world. We are interested in predicting the future development of a given child (as measured by weight) using a limited number of existing measurements. Children in the dataset have a varying number of measurements (ranging from 5 to 30), taken at irregular times throughout their development. We therefore model this problem with a multitask approach, where we treat each child's development as a task.
This approach is the basis of several medical forecasting models \citep{alaa2017personalized,cheng2017sparse,xu2016bayesian}. \paragraph{Multi-task learning with GPs.} The common multi-task setup involves ${\ensuremath{s}}$ datasets corresponding to a set of different tasks, $\ensuremath{\mathcal D}_{i}\!:\!\set{(\ensuremath{\mathbf{x}}^{(i)}_{1},y^{(i)}_{1}),...,(\ensuremath{\mathbf{x}}^{(i)}_{n_{i}}, y^{(i)}_{n_{i}})}_{i=1}^{{\ensuremath{s}}}$. The multi-task Gaussian process (MTGP) of \citet{bonilla2008multi} extends standard GP regression to share information between several related tasks. MTGPs assume that the covariance between data points factors as the product of kernels over (e.g. spatial or temporal) inputs and tasks. Specifically, given data points $\ensuremath{\mathbf{x}}$ and $\ensuremath{\mathbf{x}}'$ from tasks $i$ and $j$, the MTGP kernel is given by \begin{equation} k((\ensuremath{\mathbf{x}}, i),(\ensuremath{\mathbf{x}}', j)) = k_\text{input}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}')k_\text{task}(i, j), \label{eq:multitask_kernel} \end{equation} where $k_\text{input}$ is a kernel over inputs, and $k_\text{task}(i, j)$ -- the \emph{coregionalization kernel} -- is commonly parameterized by a low-rank covariance matrix $M = BB^\top \in \ensuremath{\mathbb{R}}^{{\ensuremath{s}} \times {\ensuremath{s}}}$ that encodes pairwise correlations between all pairs of tasks. The entries of $B$ are learned by maximizing \eqref{eq:marginalloglik}. We can express the covariance matrix $K_\text{multi}$ for all $n$ measurements as $$ K_\text{multi} = \ensuremath{\boldmatrix{K}}^{(\text{data})}_{\ensuremath{\boldmatrix{X}}\X} \circ \left( V BB^\top V^\top \right), $$ where $V$ is an $n \times s$ matrix with one-hot rows: $V_{ij}=1$ if the $i^{th}$ observation belongs to task $j$. We can apply SKIP to multi-task problems by using a SKI approximation of $K^{(\text{data})}$ and computing its Lanczos decomposition.
If $B$ is rank-$q$, with $q < n$, then we do not need to decompose $VBB^\top V^\top$ since the matrix affords $\bigo{n + sq}$ MVMs.\footnote{ MVMs are $\bigo{n + sq}$ because $V$ has $\bigo{n}$ nonzero elements and $B$ is an $s \times q$ matrix. } For one-dimensional inputs, the time complexity of an MVM with $K_\text{multi}$ is $\bigo{n + m \log m + sq}$ -- a substantial improvement over standard inducing-point methods with MTGPs, which typically require at least $\bigo{nm^{2}q}$ time \citep{bonilla2008multi,alvarez2011computationally}. For $n=4000$, SKIP speeds up marginal likelihood computations by a factor of $20$. \paragraph{Learning clusters of tasks.} Motivated by the work of \citet{rasmussen2002infinite}, \citet{shi2005hierarchical}, \citet{schulam2015framework}, \citet{hensman2015fast}, and \cite{xu2016bayesian}, we propose a modification to the standard MTGP framework. We hypothesize that similarities between tasks can be better expressed through $c$ latent subpopulations, or clusters, rather than through pairwise associations. We place an independent uniform categorical prior over $\lambda_i \in [1, \ldots, c]$, the cluster assignment for task $i$. Given measurements $\ensuremath{\mathbf{x}}_i, \ensuremath{\mathbf{x}}'_j$ for tasks $i$ and $j$, we propose a kernel consisting of product and sum structure that captures cluster-level trends and individual-level trends: $$k(\ensuremath{\mathbf{x}}_i, \ensuremath{\mathbf{x}}'_j) = k_\text{cluster}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') \delta_{\lambda_i = \lambda_j} + k_\text{indiv}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') \delta_{i = j}.$$ Here, $k_\text{cluster}$ and $k_\text{indiv}$ are both Mat\'ern kernels ($\nu=\frac{5}{2}$) operating on $\ensuremath{\mathbf{x}}$, and the $\delta$ terms represent indicator functions. Both terms can be easily expressed as product kernels. We infer the posterior distribution of cluster assignments through Gibbs sampling. 
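The $\bigo{n + sq}$ MVM with $VBB^\top V^\top$ noted above never needs to build $V$ explicitly, since $V^\top \ensuremath{\mathbf{v}}$ is a per-task sum and $V\ensuremath{\mathbf{u}}$ is an indexed gather; a NumPy sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, q = 1000, 50, 2             # observations, tasks, coregionalization rank (hypothetical)
task = rng.integers(0, s, n)      # task index of each observation (the one-hot rows of V)
B = rng.standard_normal((s, q))
v = rng.standard_normal(n)

# MVM with V B B^T V^T in O(n + sq) time, using V only implicitly.
w = np.zeros(s)
np.add.at(w, task, v)             # w = V^T v, per-task accumulation, O(n)
u = B @ (B.T @ w)                 # u = B B^T w, O(sq)
out = u[task]                     # out = V u, gather, O(n)

# Explicit one-hot reference for comparison.
V = np.eye(s)[task]
ref = V @ B @ B.T @ V.T @ v
```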
Given $\lambda_{-i}$, the cluster assignments for all tasks except the $i^{th}$, we sample an assignment for the $i^{th}$ task from the marginal posterior distribution \begin{equation*} p(\lambda_i \mid \ensuremath{\mathbf{y}}, \lambda_{-i}) \propto p(\ensuremath{\mathbf{y}} \mid \lambda_{-i}, \lambda_{i}=a, \theta)\,p(\lambda_{-i},\lambda_{i}=a). \end{equation*} Drawing a sample for the full vector $\lambda$ requires $\bigo{cs}$ calculations of \eqref{eq:marginalloglik}, an operation made relatively inexpensive by applying SKIP to the underlying model. \paragraph{Results.} We compare the cluster-based MTGP against two baselines: 1) a single-task GP baseline, which treats all available data as a single task, and 2) the standard MTGP. In \autoref{fig:health_comparison}, we measure the extrapolation accuracy for 25 children as additional children (tasks) are added to the model. As the models are supplied with data from additional children, they are able to refine the extrapolations for all children. The predictions of the cluster model slightly outperform the standard MTGP, and significantly outperform the single-task model. Perhaps the key advantage of the clustering approach is interpretability: in \autoref{fig:health_adaptation} (left), we see three distinct development types: above-average, average, and below-average. When we apply the model to a new child with limited measurements, the model becomes increasingly certain, as more data is observed, that the child belongs to the above-average subpopulation. \section{MVM ACCURACY AND SCALABILITY} \vspace{-2ex} We first evaluate the accuracy of our proposed approach with product kernels in a controlled synthetic setting. We draw $2500$ data points in $d$ dimensions from $\mathcal{N}(0, I)$ and compute an RBF kernel matrix with lengthscale $1$ over these data points. We evaluate the relative error of SKIP compared to exact MVMs as a function of $r$ -- the number of Lanczos iterations.
We perform this test for 4, 8, and 12 dimensional data, resulting in a product kernel with 4, 8, and 12 components respectively. The results, averaged over 100 trials, are shown in \autoref{fig:mvm_evaluation} (left). Even in the 12 dimensional setting, an extremely small value of $r$ is sufficient to get very accurate MVMs: less than 1\% error is achieved when $r=30$. For a discussion of increasing error with dimensionality, see \autoref{sec:discussion}. In future experiments, we set the maximum number of Lanczos iterations to $100$, but note that the convergence criterion is typically met far sooner. In the right side of \autoref{fig:mvm_evaluation}, we demonstrate the improved scaling of our method with the number of inducing points per dimension over KISS-GP. To do this, we use the $d=4$ dimensional Power dataset from the UCI repository, and plot inference step time as a function of $m$. While our method clearly scales better with $m$ than both KISS-GP and SGPR, we also note that because SKIP only applies the inducing point approximation to one-dimensional kernels, we anticipate ultimately needing significantly fewer inducing points than either SGPR or KISS-GP, which need to cover the full $d$-dimensional space with inducing points. \section{STRUCTURED KERNEL INTERPOLATION FOR PRODUCT KERNELS} \label{sec:ski} In this section, we briefly review structured kernel interpolation \citep{wilson2015kernel}, a method that dramatically reduces the matrix-vector multiplication time with non-product kernels. We then integrate our fast product-kernel MVM method with SKI -- a combination we call \emph{SKIP} -- thus allowing SKI to be applied to product kernels. We then provide two applications that demonstrate the greatly increased applicability of SKI.
\subsection{Structured kernel interpolation} Structured kernel interpolation is a technique for scalable Gaussian process inference that seeks to replace a user-specified kernel $k(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{x}}')$ with an approximate kernel that leads to very fast matrix-vector multiplies with the resulting kernel matrix $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X}$. Assume we are given a set of $m$ \emph{inducing points} $\ensuremath{\boldmatrix{U}}$ that we will use to approximate kernel values. The underlying assumption is essentially that, if $\ensuremath{\mathbf{u}}_{a}$ is very close to $\ensuremath{\mathbf{x}}$ and $\ensuremath{\mathbf{u}}_{b}$ is very close to $\ensuremath{\mathbf{x}}'$, then $k(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}')$ is well approximated by $k(\ensuremath{\mathbf{u}}_{a}, \ensuremath{\mathbf{u}}_{b})$. Thus, instead of computing kernel values between data points directly, SKI computes kernel values between inducing points and interpolates these kernel values to approximate the true data kernel values. This leads to the approximate SKI kernel: \begin{equation}\label{eq:interp_single} k(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{z}}) = \ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{x}}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{z}}}^{\top}, \end{equation} where $\ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{x}}}$ is a sparse vector that contains interpolation weights. For example, when using local cubic interpolation \citep{keys1981cubic}, $\ensuremath{\mathbf{w}}_{\ensuremath{\mathbf{x}}}$ contains four nonzero elements.
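To make \eqref{eq:interp_single} concrete, the following NumPy sketch approximates a 1-D RBF kernel using a regular grid of inducing points. For brevity it uses linear interpolation (two nonzero weights per row of $\ensuremath{\boldmatrix{W}}$) rather than the local cubic interpolation used in SKI, and all names are our own illustration, not the authors' code:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Exact RBF kernel between 1-D point sets a (n,) and b (m,).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def interp_weights(x, grid):
    # Interpolation weights w_x: each row has two nonzero entries
    # (linear interpolation); local cubic interpolation would have four.
    h = grid[1] - grid[0]
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    frac = (x - grid[idx]) / h
    W = np.zeros((len(x), len(grid)))
    rows = np.arange(len(x))
    W[rows, idx] = 1.0 - frac
    W[rows, idx + 1] = frac
    return W

rng = np.random.default_rng(0)
x = rng.uniform(0.05, 0.95, size=50)       # training inputs
grid = np.linspace(0.0, 1.0, 40)           # inducing points U on a regular grid
W = interp_weights(x, grid)
K_uu = rbf(grid, grid)
K_ski = W @ K_uu @ W.T                     # k(x, x') ~= w_x K_UU w_z^T
err = np.abs(K_ski - rbf(x, x)).max()      # small interpolation error
```

Note that each row of $\ensuremath{\boldmatrix{W}}$ sums to one, so the approximation is exact whenever a data point coincides with an inducing point.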
Applying this approximation for all data points in the training set, we see that: \begin{equation} \ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{X}}\X} = \ensuremath{\boldmatrix{W}}_{\ensuremath{\boldmatrix{X}}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}_{\ensuremath{\boldmatrix{X}}}^{\top}. \end{equation} With arbitrary inducing points $\ensuremath{\boldmatrix{U}}$, matrix-vector multiplies with $[\ensuremath{\boldmatrix{W}}\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}^{\top}]\ensuremath{\mathbf{v}}$ require $\bigo{n+m^{2}}$ time \citep{wilson2015kernel}. In one dimension, choosing $\ensuremath{\boldmatrix{U}}$ to be a regular grid instead results in $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$ being \emph{Toeplitz}. In higher dimensions, using a multi-dimensional grid results in $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$ being the Kronecker product of Toeplitz matrices. These structures make it possible to perform matrix-vector multiplies in $\bigo{n + m \log m}$ time. Furthermore, storing $\ensuremath{\boldmatrix{W}}$ and $\ensuremath{\boldmatrix{K}}_{\ensuremath{\boldmatrix{U}}\U}$ requires $\bigo{n + m}$ space. \subsection{Structured kernel interpolation for products} Given a product kernel $\ensuremath{\boldmatrix{K}}=\ensuremath{\boldmatrix{K}}^{(1)}_{\ensuremath{\boldmatrix{X}}\X} \circ \cdots \circ \ensuremath{\boldmatrix{K}}^{(d)}_{\ensuremath{\boldmatrix{X}}\X}$, we apply the SKI approximation to each component as: \begin{equation} \ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}=\ensuremath{\boldmatrix{W}}^{(i)}\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{U}}\U}\ensuremath{\boldmatrix{W}}^{(i)\top}. \end{equation} This immediately provides us with fast matrix-vector multiplies $\mvm{\ensuremath{\boldmatrix{K}}^{(i)}_{\ensuremath{\boldmatrix{X}}\X}}$.
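The $\bigo{m \log m}$ Toeplitz multiply underlying these complexity claims can be sketched via the standard circulant-embedding trick; this is a textbook construction, not code from the paper:

```python
import numpy as np

def toeplitz_mvm(col, row, v):
    # Multiply the Toeplitz matrix with first column `col` and first row
    # `row` by v in O(m log m), by embedding it in a 2m x 2m circulant
    # matrix and multiplying with the FFT.
    m = len(col)
    c = np.concatenate([col, [0.0], row[1:][::-1]])   # circulant's first column
    fv = np.fft.fft(np.concatenate([v, np.zeros(m)]))
    return np.fft.ifft(np.fft.fft(c) * fv).real[:m]

# K_UU for a stationary kernel on a regular 1-D grid is symmetric Toeplitz,
# so it is fully described by its first column.
grid = np.linspace(0.0, 1.0, 64)
col = np.exp(-0.5 * (grid - grid[0]) ** 2)            # first column of K_UU
v = np.random.default_rng(1).normal(size=64)
fast = toeplitz_mvm(col, col, v)
dense = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2) @ v
```

The result `fast` agrees with the dense $\bigo{m^{2}}$ multiply `dense` up to floating-point error, while touching only $\bigo{m}$ storage for the kernel matrix.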
When using SKI approximations, the running time of our product kernel inference technique with $p$ iterations of CG becomes $\bigo{dk(n + m \log m) + k^{3}n\log d + pk^{2}n}$. The running time of SKIP is compared to that of other inference techniques in \autoref{tab:time}.
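The key primitive behind this complexity, an MVM with an elementwise (Hadamard) product of kernels given a low-rank factor of one of them, can be sketched as follows. Here the rank-$k$ factor is random purely for illustration, standing in for the Lanczos factors $\boldmatrix{Q}\boldmatrix{T}\boldmatrix{Q}^{\top}$ used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 5

# Stand-in rank-k factor of the first kernel component, K1 = A A^T.
# In SKIP this factor would come from Lanczos; A is random here.
A = rng.normal(size=(n, k))
K1 = A @ A.T
x = rng.normal(size=(n, 1))
K2 = np.exp(-0.5 * (x - x.T) ** 2)   # second kernel component (dense here)
v = rng.normal(size=n)

# MVM with the Hadamard product using only k MVMs with K2:
#   (A A^T o K2) v = sum_j a_j o (K2 (a_j o v))
fast = sum(A[:, j] * (K2 @ (A[:, j] * v)) for j in range(k))
exact = (K1 * K2) @ v                # O(n^2) reference computation
```

When the MVMs with `K2` themselves cost $\bigo{n + m \log m}$ via SKI, each product-kernel MVM costs $k$ such multiplies rather than forming any $n \times n$ matrix.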
\section{Introduction} We consider the problem of constrained image generation of a porous medium with given properties. Porous media occur, e.g., in lithium-ion batteries and composite materials~\cite{7962936,Ilenia11}; the problem of generating porous media with a given set of properties is relevant in practical applications of material design~\cite{Hermann,Pyrcz,Hornung:1996}. Artificial porous media are useful during the manufacturing process as they allow the designer to synthesize new materials with predefined properties. For example, generated images can be used in designing a new porous medium for an electrode of lithium-ion batteries. It is well known that macro-scale ion transport and reaction rates are sensitive to the topological properties of the porous medium of the electrode. Therefore, manufacturing the porous electrode with given properties makes it possible to improve the battery performance~\cite{7962936}. Images of porous media\footnote{Specifically, we are looking at a translationally periodic ``unit cell'' of a porous medium, assuming that the porous medium has a periodic structure~\cite{Hornung:1996}.} are black and white images that represent an abstraction of the physical structure. Solid parts (or so-called grains) are encoded as a set of connected black pixels; a void area is encoded as a set of connected white pixels. There are two important groups of restrictions that images of a porous medium have to satisfy. The first group constitutes a set of ``geometric'' constraints that come from the problem domain and control the total surface area of grains. For example, an image must contain two isolated solid parts. Figure~\ref{fig:images}(a) shows examples of 16x16 images from our datasets with two (the top row) and three (the bottom row) grains.
\vspace{-10pt} \begin{figure} \centering \includegraphics[width=1\linewidth]{td_gan} \vspace{-7pt} \caption{{\small (a) Examples of images from train sets with two and three grains; (b) Examples of images generated by a GAN on the dataset with two grains. Examples of generated images with (c) $d \in [40,50)$, (d) $d \in [60,70)$, and (e) $d \in [90,100]$.}} \label{fig:images} \end{figure} \vspace{-20pt} The second set of restrictions comes from the physical process that is defined for the corresponding porous medium. In this paper, we consider the macro-scale transportation process that can be described by a set of dispersion coefficients depending on the transportation direction. For example, we might want to generate images that have two grains such that the dispersion coefficient along the $x$-axis is between 0.5 and 0.6. The dispersion coefficient is defined for the given geometry of a porous medium. It can be obtained as a numerical solution of the diffusion Partial Differential Equation (PDE). We refer to these restrictions on the parameters of the physical process as process constraints. The state-of-the-art approach to generating synthetic images is to use generative adversarial networks (GANs)~\cite{GoodfellowPMXWOCB14}. However, GANs are not able to learn geometric, three-dimensional perspective, and counting constraints, which is a known issue with this approach~\cite{Goodfellow17,Osokin}. Our experiments with GAN-generated images also reveal this problem. At the moment, there are no methods that allow embedding declarative constraints in the image generation procedure. In this work we show that, for porous media, the image generation problem can be solved using decision procedures. We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints.
To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network (NN)~\cite{Korneev1,Korneev2}. We use a special class of NNs, called $\BNN$s, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula. The contributions of this paper can be summarized as follows: (i)~We show that constrained image generation can be encoded as a logical formula and tackled using decision procedures. (ii)~We experimentally investigate a GAN-based approach to constrained image generation and analyse its advantages and disadvantages compared to the constraint-based approach. (iii)~We demonstrate that our constraint-based approach is capable of generating random images that have given properties, i.e., satisfy process constraints. \vspace{-10pt} \section{Problem description} \vspace{-10pt} We describe a constrained image generation problem. We denote by $I \in \{0,1\}^{t \times t}$ an image that encodes a porous medium and by $d \in \mathbb{Z}^m$ a vector of parameters of the physical process defined for this porous material. We use the terms image and porous medium interchangeably when referring to $I$. We assume that there is a mapping function $\Map$ that maps an image $I$ to the corresponding parameter vector $d$, $\Map: I \rightarrow \mathbb{Z}^m$. We denote by $C_g(I)$ the geometric constraints on the structure of the image $I$ and by $C_p(d)$ the process constraints on the vector of parameters $d$. Given a set of geometric and process constraints and a mapping function $\Map$, we need to generate a random image $I$ that satisfies $C_g$ and $C_p$. Next we overview geometric and process constraints and discuss the mapping function. The geometric constraints $C_g$ define a topological structure of the image.
For example, they can ensure that a given number of grains is present on an image and that these grains do not overlap. Another type of constraints focuses on a single grain. They can restrict the shape of a grain, e.g., a convex grain, or its size or position on the image. The third type of constraints comprises boundary constraints, which ensure that the boundary of the image lies in a void area. Process constraints define restrictions on the vector of parameters. For example, we might want to generate images with $d_j \in [a_j,b_j]$, $j = 1,\ldots,m$. Next we consider a mapping function $\Map$. A standard way to define $\Map$ is by solving a system of partial differential equations. However, solving these PDEs is a computationally demanding task and, more importantly, it is not clear how to `reverse' them to generate images with given properties. Hence, we take an alternative approach of approximating a PDE solver using a neural network~\cite{Korneev1,Korneev2}. To train such an approximation, we build a training set of pairs $(I_i,d_i)$, $i=1,\ldots,n$, where $I_i$ is an input of the network and $d_i$, obtained by solving the PDE given $I_i$, is its label. In this work, we use a special class of deep neural networks --- binarized neural networks ($\BNN$) --- that admit an exact encoding into a logical formula. We assume that $\Map$ is represented as a $\BNN$ and is given as part of the input. We will elaborate on the training procedure in Section~\ref{sec:exps}. \section{The generative neural network approach} One approach to tackling the constrained image generation problem is to use generative adversarial networks (GANs)~\cite{GoodfellowPMXWOCB14,RadfordMC15}. GANs are successfully used to produce samples of realistic images for commonly used datasets, e.g., interior design, clothes, animals, etc. A GAN can be described as a game between the image generator that produces synthetic (fake) images and a discriminator that distinguishes between fake and real images.
The cost function is defined in such a way that the generator and the discriminator aim to maximize and minimize this cost function, respectively, turning the learning process into a minimax game between these two players. Each player is usually represented as a neural network. To apply GANs to our problem, we take a set of images $\{I_1,\ldots,I_n\}$ and pass them to the GAN. These images are samples of real images for the GAN. After the training procedure is completed, the generator network produces artificial images that look like real images. The main advantage of GANs is that they offer a generic approach that can be applied to any type of images and can handle complex concepts, like animals, scenes, etc.\footnote{GANs exhibit well-known issues with poor convergence that we did not observe as our dataset is quite simple~\cite{Chintala}.} However, the main issue with this approach is that there is no way to explicitly pass declarative constraints into the training procedure. One might expect that GANs are able to learn these constraints from the set of examples. However, this is not the case at the moment; e.g., GANs cannot capture counting constraints, like four legs, two eyes, etc.~\cite{Goodfellow17}. Figure~\ref{fig:images} shows examples of images that a GAN produces on a dataset with two grains per image. As can be seen from these examples, the GAN produces images with an arbitrary number of grains, between 1 and 5 per image. In some simple cases, it is easy to filter out invalid images. If we have more sophisticated constraints, like convexity or size of grains, then most images will be invalid. On top of this, to take into account process constraints, we need additional restrictions on the training procedure. Overall, it is an interesting research question how to extend the GAN training procedure with physical constraints, which is beyond the scope of this paper~\cite{Oliveira}. Next we consider our approach to the image generation problem.
\section{The constraint-based approach} The main idea behind our approach is to encode the image generation problem as a logical formula. To do so, we need to encode all problem constraints and the mapping between an image and its label as a set of constraints. We start with constraints that encode an approximate PDE solver. We denote by $[N]$ the range of numbers from $1$ to $N$. \subsection{Approximation of a PDE solver.} One way to approximate a diffusion PDE solver is to use a neural network~\cite{Korneev1,Korneev2}. A neural network is trained on a set of binary images $I_i$ and their labels $d_i$, $i=1,\ldots,n$. During the training procedure, the network takes an image $I_i$ as an input and outputs its estimate of the parameter vector $\hat{d}_i$. As we have ground truth parameters $d_i$ for each image, we can use the mean squared error or the mean absolute error as a cost function to perform optimization~\cite{Narodytska}. In this work, we take the same approach. However, we use a special type of network: Binarized Neural Networks (\BNN). A $\BNN$ is a feedforward network where weights and activations are binary~\cite{BNNNIPS2016}. It was shown in~\cite{Narodytska,Cheng2017} that $\BNN$s allow exact encoding as logical formulas; namely, they can be encoded as a set of reified linear constraints over binary variables. We use $\BNN$s as they have a relatively simple structure and decision procedures scale to reason about small and medium-sized networks of this type. In theory, we can use any exact encoding to represent a more general network, e.g., MILP encodings that are used to check robustness properties of neural networks~\cite{Katz2017,Cheng2017a}. However, the scalability of decision procedures is the main limitation in the use of more general networks. We use the ILP encoding as in~\cite{Narodytska} with a minor modification of the last layer, as we have numeric outputs instead of categorical outputs.
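As a small sanity check on the idea of reified linear encodings, the following pure-Python sketch encodes a single binarized neuron (weights and inputs in $\{-1,+1\}$, sign activation) as a linear constraint over 0/1 variables and verifies the equivalence by enumeration. The toy weights are hypothetical, and this is an illustration of the encoding style, not the full ILP encoding of~\cite{Narodytska}:

```python
from itertools import product

# Toy binarized neuron: weights and inputs in {-1, +1}, sign activation.
# The weights below are hypothetical; a real encoding covers entire BNNs,
# layer by layer, in the same reified style.
w = [1, -1, 1, 1]
bias = 0

def bnn_neuron(x):
    # Forward pass of the neuron: y = sign(w . x + bias), with sign(0) = +1.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias >= 0 else -1

def reified(b):
    # Reified linear constraint over 0/1 variables b_i, with x_i = 2 b_i - 1:
    #   y = +1  <=>  2 * sum_i w_i b_i >= sum_i w_i - bias
    return 1 if 2 * sum(wi * bi for wi, bi in zip(w, b)) >= sum(w) - bias else -1

# Brute-force check that the linear constraint reproduces the activation
# on all 2^4 possible inputs.
ok = all(reified(b) == bnn_neuron([2 * bi - 1 for bi in b])
         for b in product([0, 1], repeat=len(w)))
```

The same rewriting, one reified linear constraint per neuron, is what makes the network amenable to ILP and SMT solvers.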
We denote by $\BBNN(I,d)$ a logical formula that encodes $\BNN$ using reified linear constraints over Boolean variables (Section 4, ILP encoding~\cite{Narodytska}). \subsection{Geometric and process constraints.} Geometric constraints can be roughly divided into three types. The first type of constraints defines the high-level structure of the image. The high-level structure of our images is defined by the number of grains present in the image. Let $w$ be the number of grains per image. We define a grid of size $t\times t$. Figure~\ref{fig:examples}(a) shows an example of a grid of size $4 \times 4$. We refer to a cell $(i,j)$ on the grid as a pixel, as this grid encodes an image of size $t\times t$. Next we define the neighbor relation on the grid. We say that a cell $(h,g)$ is a neighbour of $(i,j)$ if these cells share a side. For example, $(2,3)$ is a neighbour of $(2,4)$ as the right side of $(2,3)$ is shared with $(2,4)$. Let $\neub(i,j)$ be the set of neighbors of $(i,j)$ on the grid. For example, $\neub(2,3) = \{(1,3),(2,2),(2,4), (3,3)\}$. \begin{figure} \centering \includegraphics[width=1\linewidth]{examples} \caption{Illustrative examples of additional structures used by the constraint-based model.} \label{fig:examples} \end{figure} \vspace{-20pt} \paragraph{Variables.} For each cell we introduce a Boolean variable $c_{i,j,r}$, $i,j \in [t]$, $r \in [w+1]$. $c_{i,j,r} = 1$ iff the cell $(i,j)$ belongs to the $r$th grain, $r =1,\ldots, w$. Similarly, $c_{i,j,w+1} = 1$ iff the cell $(i,j)$ represents a void area. \paragraph{Each cell is either a black or white pixel.} We enforce that each cell contains either a grain or a void area. \begin{equation}\label{eq:onegrain} \begin{array}{lcr} \sum_{r=1}^{w+1} c_{i,j,r} = 1 & \quad\quad & j, i \in [t]\\ \end{array} \end{equation} \paragraph{Grains do not overlap.} Two cells that belong to different grains cannot be neighbours.
\begin{equation}\label{eq:overlap} \begin{array}{lcr} c_{i,j,r} \rightarrow \neg c_{h,g,r'} & \quad\quad & (h,g) \in \neub(i,j), r' \in [w]\setminus\{r\} \end{array} \end{equation} \paragraph{Grains are connected areas.} We enforce connectivity constraints for each grain. By connectivity we mean that there is a path between any two cells of the same grain using only cells that belong to this grain. Unfortunately, enforcing connectivity constraints is very expensive. Encoding the path constraint results in a prohibitively large encoding. To deal with this explosion, we restrict the space of possible grain shapes. First, we assume that we know the position of one pixel of the grain, which we pick randomly. Let $s_r = (i,j)$ be a random cell, $r \in [w]$. Then we implicitly build a directed acyclic graph (DAG) $G$ starting from this cell $s_r$ that covers the entire grid. Each cell of the grid is a node in this graph. The node that corresponds to the cell $s_r$ does not have incoming arcs. There are multiple ways to build $G$ from $s_r$. Figures~\ref{fig:examples}(a) and (d) show two possible ways to build a DAG that covers a grid starting from cell $(3,3)$. Next we define a parent relation in $G$. Let $\prt_G(i,j)$ be the set of parents of cell $(i,j)$ in $G$. For example, $\prt_G(2,2) = \{(2,3), (3,2)\}$ in our example in Figure~\ref{fig:examples}(a). Given a DAG $G$, we can easily enforce the connectivity relation w.r.t. $G$. The following constraint ensures that a cell $(i,j)$ belongs to the $r$th grain only if one of its parents in $G$ belongs to the same grain. Moreover, by enforcing connectivity constraints on the void area, we make sure that grains do not contain isolated void areas inside them.
\begin{equation}\label{eq:connectedgrain} \begin{array}{lll} (c_{i,j,r}), & \quad\quad & s_r = (i,j), r \in [w+1],\\ \left(\wedge_{(h,g)\in \prt_G(i,j)}\neg c_{h,g,r}\right)\rightarrow \neg c_{i,j,r}, & \quad\quad & j, i \in [t], r \in [w+1]\\ \end{array} \end{equation} Given a DAG $G$, we can generate grains of multiple shapes. For example, Figure~\ref{fig:examples}(b) shows one possible grain. However, we also lose some valid shapes that are ruled out by the choice of graph $G$. For example, Figure~\ref{fig:examples}(c) gives an example of a shape that is not possible to build using $G$ in Figure~\ref{fig:examples}(a). However, if we select a different random DAG $G'$, e.g., as in Figure~\ref{fig:examples}(d), then this shape is one of the possible shapes for $G'$. In general, since we can pick $s_r$ and the DAG randomly, it is possible to generate a variety of shapes. \paragraph{Compactness of a grain.} The second set of constraints imposes restrictions on a single grain. The compactness constraint is a form of convexity constraint. We want to ensure that any two boundary points of a grain are close to each other. The reason for this constraint is that grains are unlikely to have a long snake-like appearance, as solid particles tend to group together. Sometimes, we need to enforce the convexity constraint, which is an extreme case of compactness. To enforce this constraint, we again trade off the variety of shapes against the size of the encoding. Now we assume that $s_r$ is the center of the grain. Then we build virtual circles around this center that cover the grid. Figure~\ref{fig:examples}(e) shows examples of such circles. Let $C_r(i,j) = \{C_r^1,\ldots, C_r^q\}$ be a set of circles that are built with the cell $s_r$ as a center. The following constraint enforces that a cell that belongs to the circle $C_r^v$ can be in the $r$th grain only if all cells from the inner circle $C_r^{v-s}$ belong to the $r$th grain, where $s$ is a parameter.
\begin{equation}\label{eq:convexgrain} \begin{array}{lcr} \vee_{c_{h,g,r} \in C_r^{v-s}} \neg c_{h,g,r} \rightarrow \neg c_{i,j,r} & \quad\quad & c_{i,j,r} \in C_r^v, v \in [q], r \in [w]\\ \end{array} \end{equation} Note that if $s=1$ then we generate convex grains. In this case, every pixel from $C_r^v$ has to belong to the $r$th grain before we can add a pixel from the circle $C_r^{v+1}$ to this grain. \paragraph{Boundary constraints.} We also have a technical constraint that all cells on the boundary of the grid must be void pixels. They are required to define boundary conditions for PDEs on generated images. \begin{equation}\label{eq:boundary} \begin{array}{lcr} (c_{i,j,w+1}) & \quad\quad & j = t \vee i = t\\ \end{array} \end{equation} \paragraph{Connecting with $\BNN$.} We need to connect the variables $c_{i,j,r}$ with the inputs of the network. \begin{equation}\label{eq:boundary2} \begin{array}{lcr} c_{i,j,r} \rightarrow I_{i,j} =1 & \quad\quad & j, i \in [t], r \in [w],\\ c_{i,j,w+1} \rightarrow I_{i,j} =0 & \quad\quad & j, i \in [t]. \\ \end{array} \end{equation} \paragraph{Process constraints.} Process constraints are enforced on the output of the network. Given ranges $[a_i,b_i]$, $i\in [m]$, we have: \begin{equation}\label{eq:process} \begin{array}{lcr} a_i \leq d_{i} \leq b_i & \quad\quad & i \in [m] \end{array} \end{equation} \paragraph{Summary.} To solve the constrained random image generation problem, we solve the conjunction of constraints~\eqref{eq:onegrain}--\eqref{eq:process} together with our ILP encoding $\BBNN(I,d)$. Randomness comes from the random seed that is passed to the solver and from the random choice of $s_r$ and $G$. \section{Experiments}\label{sec:exps} We conduct a set of experiments with our constraint-based approach. We ran our experiments on an Intel(R) Xeon(R) 3.30GHz machine. We use a timeout of 600 seconds in all runs. \paragraph{Training procedure.} We use two datasets, $D_2$ with 10K images and $D_3$ with 5K images.
Each image in $D_2$ contains two grains and each image in $D_3$ contains three grains. These images were labeled with dispersion coefficients along the $x$-axis, which is a number between 0.4 and 1. We performed quantization on the dispersion coefficient value to map $d$ into an interval of integers between $40$ and $100$. We use the mean absolute error ($MAE$) to train $\BNN$. $\BNN$ consists of three blocks with 100 neurons per layer and one output. The $MAE$ is 4.2 for $D_2$ and 5.1 for $D_3$. We lose accuracy compared to non-binarized networks, e.g., the $MAE$ for the same non-binarized network is 2.5 for $D_2$. However, $\BNN$s are much easier to reason about, so we work with this subclass of networks. \paragraph{Image generation.} We use CPLEX and the SMT solver Z3 to solve instances produced by constraints~\eqref{eq:onegrain}--\eqref{eq:process} together with $\BBNN(I,d)$. In principle, other solvers could be evaluated on these instances. The best mode for Z3 was to use an SMT core based on CDCL and a theory solver for \emph{nested} Pseudo-Boolean and cardinality constraints. We noted that bit-blasting into sorting circuits did not scale, and Z3's theory of linear integer arithmetic was also inadequate. We considered six process constraints for $d$, namely, $d \in [a,b]$, $[a,b] \in \{[40,50),\ldots, [90,100]\}$. For each interval $[a,b]$, we generate 100 random constrained problems. The randomization comes from a random seed that is passed to the solver, the positions of the centers of each grain, and the parameter $s$ in the constraint~\eqref{eq:convexgrain}. We used the same DAG $G$ construction as in Figure~\ref{fig:examples}(a) in all problems. Table~\ref{table:solved} shows a summary of our results for the CPLEX and Z3 solvers. As can be seen from this table, these instances are relatively easy for the CPLEX solver. It can solve most of them within the given timeout. The average time for $D_2$ is 25s and for $D_3$ is 12s with CPLEX.
Z3 handles most benchmarks, but we observed that it gets stuck on examples that are very easy for CPLEX, e.g., the interval $[80,90)$ for $D_2$. We hypothesize that this is due to how watch literals are tracked in a very general way on nested cardinality constraints (Z3 maintains a predicate for each nested PB constraint and refreshes the watch list whenever the predicate changes assignment), when one could instead exploit the limited way in which CPLEX allows conditional constraints. The average time for $D_2$ is 94s and for $D_3$ is 64s with Z3. \begin{wrapfigure}{r}{0.35\textwidth} \vspace{-20pt} \begin{center} \includegraphics[width=0.35\textwidth]{results} \end{center} \caption{The absolute error between $d$ and its true value. \label{fig:mae}} \vspace{-20pt} \end{wrapfigure} Figures~\ref{fig:images}(c)--(e) show examples of generated images for the ranges $[40,50)$, $[60,70)$ and $[90,100]$ for $D_2$ (the top row) and $D_3$ (the bottom row). For the process we consider, as the value of the dispersion coefficient grows, the black area should decrease, as there should be fewer grain obstacles for a flow to go through the porous medium. Indeed, the images in Figures~\ref{fig:images}(c)--(e) follow this pattern, i.e., the black area on images with $d \in [40,50)$ is significantly larger than on images with $d \in [90,100]$. Moreover, by construction, they satisfy geometric constraints that GANs cannot handle. For each image we generated, we run a PDE solver to compute the true value of the dispersion coefficient on this image. Then we compute the absolute error between the value of $d$ that our model computes and the true value of the coefficient. Figure~\ref{fig:mae} shows the absolute errors for all benchmarks that were solved by CPLEX. First, this figure shows that our model generates images with the given properties. The mean absolute error is about 10 on these instances.
Taking into account that $\BNN$ has an $MAE$ of 4.2 on $D_2$, an $MAE$ of 10 on newly generated instances is a reasonable result. Ideally, we would like the $MAE$ to be zero. However, this error depends purely on the $\BNN$ we used. To reduce this error, we need to improve the accuracy of $\BNN$, as it serves as an approximator of a PDE solver. For example, we can use more binarized layers or use additional non-binarized layers. Of course, increasing the power of the network leads to computational challenges in solving the corresponding logical formulas. \vspace{-10pt} \begin{table} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{Solver} & \multicolumn{6}{c||}{$D_2$} & \multicolumn{6}{c|}{$D_3$} \\ \cline{2-13} & [40,50)& [50,60)& [60,70)& [70,80) & [80,90)& [90,100]& [40,50)& [50,60)& [60,70)& [70,80) & [80,90)& [90,100]\\ \hline CPLEX & 100 & 99 & 99 & 98 & 100 & 41 & 100 & 100 & 96 & 99 & 100 & 84 \\ Z3 & 98 & 89 & 81 & 74 & 56 & 12 & 100 & 97 & 97 & 97 & 96 & 54\\ \hline \end{tabular} \caption{The number of solved instances in each interval $[a,b]$.}\label{table:solved} \end{table} \vspace{-35pt} \section{Related work} There are two lines of work related to our paper. The first one enhances machine learning techniques with declarative constraints, e.g., in solving constrained clustering problems and in data mining techniques that handle domain-specific constraints~\cite{DaoDV17,GanjiBS17,GunsDNTR17}. One recent example is the work of Ganji \emph{et al.}~\cite{GanjiBS17}, who proposed a logical model for constrained community detection. The second line of research explores embedding domain-specific constraints in the GAN training procedure~\cite{Oliveira,Oliveira1,HuGLXBVN17,Osokin,RavanbakhshLMSP17}. Work in this area targets various applications in physics and medicine that impose constraints, like sparsity constraints, high dynamic range requirements (e.g.
when pixel intensity in an image varies by orders of magnitude), location specificity constraints (e.g., shifting pixel locations can change important image properties), etc. However, this research area is emerging and the results are still preliminary. \section{Conclusion} In this paper we considered the constrained image generation problem for a physical process. We showed that this problem can be encoded as a logical formula over Boolean variables. For small porous media, we showed that the generation process is computationally feasible for modern decision procedures. There are many interesting future research directions. First, the main limitation of our approach is scalability, as we cannot use large networks with a number of weights on the order of hundreds of thousands, as required by industrial applications. However, the constraints that are used to encode, for example, binarized neural networks are mostly pseudo-Boolean constraints with unary coefficients. Hence, it would be interesting to design specialized procedures to deal with this fragment of constraints. Second, we need to investigate different types of neural networks that admit encoding into SMT or ILP. For instance, there is a lot of work on quantized networks that use a small number of bits to encode each weight, e.g.~\cite{DengJPWL17}. Finally, can we use similar techniques to reveal vulnerabilities in neural networks? For example, we might be able to generate constrained adversarial examples or other special types of images that expose undesired network behaviour.
Artificial porous media are useful during the manufacturing process as they allow the designer to synthesize new materials with predefined properties. For example, generated images can be used in designing a new porous medium for an electrode of lithium-ion batteries. It is well-known that ions macro-scale transport and reactions rates are sensitive to the topological properties of the porous medium of the electrode. Therefore, manufacturing the porous electrode with given properties allows improving the battery performance~\cite{7962936}. Images of porous media\footnote{Specifically, we are looking at a transitionally periodic ``unit cell'' of porous medium assuming that porous medium has a periodic structure~\cite{Hornung:1996}.} are black and white images that represent an abstraction of the physical structure. Solid parts (or so called grains) are encoded as a set of connected black pixels; a void area is encoded a set of connected white pixels. There are two important groups of restrictions that images of a porous medium have to satisfy. The first group constitutes a set of ``geometric'' constraints that come from the problem domain and control the total surface area of grains. For example, an image contains two isolated solid parts. Figure~\ref{fig:images}(a) shows examples of 16x16 images from our datasets with two (the top row) and three (the bottom row) grains. \vspace{-10pt} \begin{figure} \centering \includegraphics[width=1\linewidth]{td_gan \vspace{-7pt} \caption{{\small (a) Examples of images from train sets with two and three grains; (b) Examples of images generated by a GAN on the dataset with two grains. Examples of generated images with (c) $d \in [40,50)$, (d) $d \in [60,70)$, and (e) $d \in [90,100]$.}} \label{fig:images} \end{figure} \vspace{-20pt} The second set of restrictions comes from the physical process that is defined for the corresponding porous medium. 
In this paper, we consider the macro-scale transportation process that can be described by a set of dispersion coefficients depending on the transportation direction. For example, we might want to generate images that have two grains such that the dispersion coefficient along the $x$-axis is between 0.5 and 0.6. The dispersion coefficient is defined for the given geometry of a porous medium. It can be obtained as a numerical solution of the diffusion Partial Differential Equation (PDE). We refer to these restrictions on the parameters of the physical process as process constraints. The state-of-the-art approach to generating synthetic images is to use generative adversarial networks (GANs)~\cite{GoodfellowPMXWOCB14}. However, GANs are not able to learn geometric, three-dimensional perspective, and counting constraints, which is a known issue with this approach~\cite{Goodfellow17,Osokin}. Our experiments with GAN-generated images also reveal this problem. At the moment, there are no methods that allow embedding declarative constraints into the image generation procedure. In this work we show that the image generation problem for porous media can be solved using decision procedures. We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints. To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network (NN)~\cite{Korneev1,Korneev2}. We use a special class of NNs, called $\BNN$s, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on the outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula. The contributions of this paper can be summarized as follows: (i)~We show that constrained image generation can be encoded as a logical formula and tackled using decision procedures.
(ii)~We experimentally investigate a GAN-based approach to constrained image generation and analyse its advantages and disadvantages compared to the constraint-based approach. (iii)~We demonstrate that our constraint-based approach is capable of generating random images that have given properties, i.e., satisfy process constraints. \vspace{-10pt} \section{Problem description} \vspace{-10pt} We describe the constrained image generation problem. We denote by $I \in \{0,1\}^{t \times t}$ an image that encodes a porous medium and by $d \in \mathbb{Z}^m$ a vector of parameters of the physical process defined for this porous material. We use the terms image and porous medium interchangeably when referring to $I$. We assume that there is a mapping function $\Map$ that maps an image $I$ to the corresponding parameter vector $d$, $\Map: I \rightarrow \mathbb{Z}^m$. We denote as $C_g(I)$ the geometric constraints on the structure of the image $I$ and as $C_p(d)$ the process constraints on the vector of parameters $d$. Given a set of geometric and process constraints and a mapping function $\Map$, we need to generate a random image $I$ that satisfies $C_g$ and $C_p$. Next we overview geometric and process constraints and discuss the mapping function. The geometric constraints $C_g$ define the topological structure of the image. For example, they can ensure that a given number of grains is present in an image and that these grains do not overlap. Another type of constraints focuses on a single grain. They can restrict the shape of a grain, e.g., a convex grain, its size or its position in the image. The third type of constraints are boundary constraints, which ensure that the boundary of the image lies in a void area. Process constraints define restrictions on the vector of parameters. For example, we might want to generate images with $d_j \in [a_j,b_j]$, $j = 1,\ldots,m$. Next we consider a mapping function $\Map$.
A standard way to define $\Map$ is by solving a system of partial differential equations. However, solving these PDEs is a computationally demanding task and, more importantly, it is not clear how to `reverse' them to generate images with given properties. Hence, we take an alternative approach of approximating a PDE solver using a neural network~\cite{Korneev1,Korneev2}. To train such an approximation, we build a training set of pairs $(I_i,d_i)$, $i=1,\ldots,n$, where $I_i$ is an input of the network and $d_i$, obtained by solving the PDE given $I_i$, is its label. In this work, we use a special class of deep neural networks --- binarized neural networks ($\BNN$) that admit an exact encoding into a logical formula. We assume that $\Map$ is represented as a $\BNN$ and is given as part of the input. We will elaborate on the training procedure in Section~\ref{sec:exps}. \section{The generative neural network approach} One approach to tackle the constrained image generation problem is to use generative adversarial networks (GANs)~\cite{GoodfellowPMXWOCB14,RadfordMC15}. GANs are successfully used to produce samples of realistic images for commonly used datasets, e.g., interior design, clothes, animals, etc. A GAN can be described as a game between the image generator that produces synthetic (fake) images and a discriminator that distinguishes between fake and real images. The cost function is defined in such a way that the generator and the discriminator aim to maximize and minimize this cost function, respectively, turning the learning process into a minimax game between these two players. Each player is usually represented as a neural network. To apply GANs to our problem, we take a set of images $\{I_1,\ldots,I_n\}$ and pass them to the GAN. These images are samples of real images for the GAN. After the training procedure is completed, the generator network produces artificial images that look like real images.
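The minimax game described above is commonly written as the following value function (the standard GAN objective from~\cite{GoodfellowPMXWOCB14}; here $p_{\mathrm{data}}$ denotes the distribution of real images and $p_z$ the noise prior fed to the generator):
\begin{equation*}
\min_G \max_D \; E_{I \sim p_{\mathrm{data}}}\left[ \log D(I) \right] + E_{z \sim p_z}\left[ \log \left( 1 - D(G(z)) \right) \right].
\end{equation*}
The generator $G$ maps noise samples to images, while the discriminator $D$ outputs the probability that an image is real.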
The main advantage of GANs is that it is a generic approach that can be applied to any type of images and can handle complex concepts, like animals, scenes, etc.\footnote{GANs exhibit well-known issues with poor convergence that we did not observe as our dataset is quite simple~\cite{Chintala}.} However, the main issue with this approach is that there is no way to explicitly pass declarative constraints into the training procedure. One might expect that GANs are able to learn these constraints from the set of examples. However, this is not the case at the moment, e.g., GANs cannot capture counting constraints, like four legs, two eyes, etc.~\cite{Goodfellow17}. Figure~\ref{fig:images} shows examples of images that the GAN produces on a dataset with two grains per image. As can be seen from these examples, the GAN produces images with an arbitrary number of grains, between 1 and 5 per image. In some simple cases, it is easy to filter out invalid images. However, if we have more sophisticated constraints, like convexity or the size of grains, then most images will be invalid. On top of this, to take into account process constraints, we need additional restrictions on the training procedure. Overall, it is an interesting research question how to extend the GAN training procedure with physical constraints, which is beyond the scope of this paper~\cite{Oliveira}. Next we consider our approach to the image generation problem. \section{The constraint-based approach} The main idea behind our approach is to encode the image generation problem as a logical formula. To do so, we need to encode all problem constraints and the mapping between an image and its label as a set of constraints. We start with constraints that encode an approximate PDE solver. We denote by $[N]$ the range of numbers from $1$ to $N$. \subsection{Approximation of a PDE solver.} One way to approximate a diffusion PDE solver is to use a neural network~\cite{Korneev1,Korneev2}.
A neural network is trained on a set of binary images $I_i$ and their labels $d_i$, $i=1,\ldots,n$. During the training procedure, the network takes an image $I_i$ as an input and outputs its estimate of the parameter vector $\hat{d}_i$. As we have ground truth parameters $d_i$ for each image, we can use the mean squared error or the absolute error as a cost function to perform optimization~\cite{Narodytska}. In this work, we take the same approach. However, we use a special type of networks: Binarized Neural Networks (\BNN). A $\BNN$ is a feedforward network where weights and activations are binary~\cite{BNNNIPS2016}. It was shown in~\cite{Narodytska,Cheng2017} that $\BNN$s allow exact encoding as logical formulas, namely, they can be encoded as a set of reified linear constraints over binary variables. We use $\BNN$s as they have a relatively simple structure and decision procedures scale to reason about small and medium-size networks of this type. In theory, we can use any exact encoding to represent a more general network, e.g., MILP encodings that are used to check robustness properties of neural networks~\cite{Katz2017,Cheng2017a}. However, the scalability of decision procedures is the main limitation in the use of more general networks. We use the ILP encoding as in~\cite{Narodytska} with a minor modification of the last layer, as we have numeric outputs instead of categorical outputs. We denote by $\BBNN(I,d)$ the logical formula that encodes $\BNN$ using reified linear constraints over Boolean variables (Section 4, ILP encoding~\cite{Narodytska}). \subsection{Geometric and process constraints.} Geometric constraints can be roughly divided into three types. The first type of constraints defines the high-level structure of the image. The high-level structure of our images is defined by the number of grains present in the image. Let $w$ be the number of grains per image. We define a grid of size $t\times t$.
Figure~\ref{fig:examples}(a) shows an example of a grid of size $4 \times 4$. We refer to a cell $(i,j)$ on the grid as a pixel, as this grid encodes an image of size $t\times t$. Next we define the neighbour relation on the grid. We say that a cell $(h,g)$ is a neighbour of $(i,j)$ if these cells share a side. For example, $(2,3)$ is a neighbour of $(2,4)$ as the right side of $(2,3)$ is shared with $(2,4)$. Let $\neub(i,j)$ be the set of neighbours of $(i,j)$ on the grid. For example, $\neub(2,3) = \{(1,3),(2,2),(2,4), (3,3)\}$. \begin{figure} \centering \includegraphics[width=1\linewidth]{examples} \caption{Illustrative examples of additional structures used by the constraint-based model.} \label{fig:examples} \end{figure} \vspace{-20pt} \paragraph{Variables.} For each cell we introduce a Boolean variable $c_{i,j,r}$, $i,j \in [t]$, $r \in [w+1]$. $c_{i,j,r} = 1$ iff the cell $(i,j)$ belongs to the $r$th grain, $r =1,\ldots, w$. Similarly, $c_{i,j,w+1} = 1$ iff the cell $(i,j)$ represents a void area. \paragraph{Each cell is either a black or white pixel.} We enforce that each cell contains either a grain or a void area. \begin{equation}\label{eq:onegrain} \begin{array}{lcr} \sum_{r=1}^{w+1} c_{i,j,r} = 1 & \quad\quad & j, i \in [t]\\ \end{array} \end{equation} \paragraph{Grains do not overlap.} Two cells that belong to different grains cannot be neighbours. \begin{equation}\label{eq:overlap} \begin{array}{lcr} c_{i,j,r} \rightarrow \neg c_{h,g,r'} & \quad\quad & j, i \in [t], r \in [w], (h,g) \in \neub(i,j), r' \in [w]\setminus\{r\} \end{array} \end{equation} \paragraph{Grains are connected areas.} We enforce connectivity constraints for each grain. By connectivity we mean that there is a path between any two cells of the same grain using only cells that belong to this grain. Unfortunately, enforcing connectivity constraints is very expensive. Encoding the path constraint results in a prohibitively large encoding. To deal with this explosion, we restrict the space of possible grain shapes.
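As an illustration, the neighbour relation $\neub$ defined above can be sketched in a few lines of Python (a sketch only, using 1-indexed cells as in the text):

```python
def neighbours(i, j, t):
    """4-connected neighbours of cell (i, j) on a 1-indexed t x t grid."""
    candidates = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(h, g) for (h, g) in candidates if 1 <= h <= t and 1 <= g <= t]
```

For example, `sorted(neighbours(2, 3, 4))` returns `[(1, 3), (2, 2), (2, 4), (3, 3)]`, matching $\neub(2,3)$ in the text.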
First, we assume that we know the position of one pixel of this grain, which we pick randomly. Let $s_r = (i,j)$ be a random cell, $r \in [w]$. Then we implicitly build a directed acyclic graph (DAG) $G$ starting from this cell $s_r$ that covers the entire grid. Each cell of the grid is a node in this graph. The node that corresponds to the cell $s_r$ does not have incoming arcs. There are multiple ways to build $G$ from $s_r$. Figures~\ref{fig:examples}(a) and (d) show two possible ways to build a DAG that covers the grid starting from cell $(3,3)$. Next we define a parent relation in $G$. Let $\prt_G(i,j)$ be the set of parents of cell $(i,j)$ in $G$. For example, $\prt_G(2,2) = \{(2,3), (3,2)\}$ in our example in Figure~\ref{fig:examples}(a). Given a DAG $G$, we can easily enforce the connectivity relation w.r.t. $G$. The following constraint ensures that a cell $(i,j)$ belongs to the $r$th grain only if one of its parents in $G$ belongs to the same grain. Moreover, by enforcing connectivity constraints on the void area, we make sure that grains do not contain isolated void areas inside them. \begin{equation}\label{eq:connectedgrain} \begin{array}{lll} (c_{i,j,r}), & \quad\quad & s_r = (i,j), r \in [w+1],\\ \left(\wedge_{(h,g)\in \prt_G(i,j)}\neg c_{h,g,r}\right)\rightarrow \neg c_{i,j,r}, & \quad\quad & j, i \in [t], r \in [w+1]\\ \end{array} \end{equation} Given a DAG $G$, we can generate grains of multiple shapes. For example, Figure~\ref{fig:examples}(b) shows one possible grain. However, we also lose some valid shapes that are ruled out by the choice of graph $G$. For example, Figure~\ref{fig:examples}(c) gives an example of a shape that is not possible to build using $G$ in Figure~\ref{fig:examples}(a). However, if we select a different random DAG $G'$, e.g., Figure~\ref{fig:examples}(d), then this shape is one of the possible shapes for $G'$. In general, since we can pick $s_r$ and the DAG $G$ randomly, it is possible to generate a variety of shapes.
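One concrete way to build such a DAG $G$ (an assumption on our part; the paper only requires some acyclic graph rooted at $s_r$ that covers the grid) is a breadth-first search from the seed cell, with arcs pointing away from the seed:

```python
from collections import deque

def grid_neighbours(i, j, t):
    """4-connected neighbours of (i, j) on a 1-indexed t x t grid."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(h, g) for (h, g) in cand if 1 <= h <= t and 1 <= g <= t]

def build_dag_parents(seed, t):
    """BFS from the seed cell; a cell's parents are its neighbours one BFS
    level closer to the seed, so all arcs point away from the seed (a DAG)."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        cell = queue.popleft()
        for nxt in grid_neighbours(*cell, t):
            if nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    # parents of a cell = neighbours at the previous BFS level
    return {c: [n for n in grid_neighbours(*c, t) if dist[n] == dist[c] - 1]
            for c in dist}
```

With the seed $(3,3)$ on a $4 \times 4$ grid this reproduces, e.g., $\prt_G(2,2) = \{(2,3), (3,2)\}$ from the example above.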
\paragraph{Compactness of a grain.} The second set of constraints imposes restrictions on a single grain. The compactness constraint is a form of convexity constraint. We want to ensure that any two boundary points of a grain are close to each other. The reason for this constraint is that grains are unlikely to have a long snake-like appearance, as solid particles tend to group together. Sometimes, we need to enforce the convexity constraint, which is an extreme case of compactness. To enforce this constraint, we again trade off the variety of shapes against the size of the encoding. Now we assume that $s_r$ is the center of the grain. Then we build virtual circles around this center that cover the grid. Figure~\ref{fig:examples}(e) shows examples of such circles. Let $C_r(i,j) = \{C_r^1,\ldots, C_r^q\}$ be a set of circles that are built with the cell $s_r$ as a center. The following constraint enforces that a cell that belongs to the circle $C_r^v$ can be in the $r$th grain only if all cells from the inner circle $C_r^{v-s}$ belong to the $r$th grain, where $s$ is a parameter. \begin{equation}\label{eq:convexgrain} \begin{array}{lcr} \vee_{c_{h,g,r} \in C_r^{v-s}} \neg c_{h,g,r} \rightarrow \neg c_{i,j,r} & \quad\quad & c_{i,j,r} \in C_r^v, v \in [q], r \in [w]\\ \end{array} \end{equation} Note that if $s=1$ then we generate convex grains. In this case, every pixel from $C_r^v$ has to belong to the $r$th grain before we can add a pixel from the circle $C_r^{v+1}$ to this grain. \paragraph{Boundary constraints.} We also have a technical constraint that all cells on the boundary of the grid must be void pixels. They are required to define boundary conditions for PDEs on the generated images. \begin{equation}\label{eq:boundary} \begin{array}{lcr} (c_{i,j,w+1}) & \quad\quad & i \in \{1, t\} \vee j \in \{1, t\}\\ \end{array} \end{equation} \paragraph{Connecting with $\BNN$.} We need to connect the variables $c_{i,j,r}$ with the inputs of the network.
\begin{equation}\label{eq:boundary2} \begin{array}{lcr} c_{i,j,r} \rightarrow I_{i,j} =1 & \quad\quad & j, i \in [t], r \in [w],\\ c_{i,j,w+1} \rightarrow I_{i,j} =0 & \quad\quad & j, i \in [t]. \\ \end{array} \end{equation} \paragraph{Process constraints.} Process constraints are enforced on the output of the network. Given ranges $[a_i,b_i]$, $i\in [m]$, we have: \begin{equation}\label{eq:process} \begin{array}{lcr} a_i \leq d_{i} \leq b_i & \quad\quad & i \in [m] \end{array} \end{equation} \paragraph{Summary.} To solve the constrained random image generation problem, we solve the conjunction of constraints~\eqref{eq:onegrain}--\eqref{eq:process} together with our ILP encoding $\BBNN(I,d)$. Randomness comes from the random seed that is passed to the solver and the random choice of $s_r$ and $G$. \section{Experiments}\label{sec:exps} We conduct a set of experiments with our constraint-based approach. We ran our experiments on an Intel(R) Xeon(R) 3.30GHz machine. We use a timeout of 600 seconds in all runs. \paragraph{Training procedure.} We use two datasets, $D_2$ with 10K images and $D_3$ with 5K images. Each image in $D_2$ contains two grains and each image in $D_3$ contains three grains. These images were labeled with the dispersion coefficient along the $x$-axis, which is a number between 0.4 and 1. We performed quantization on the dispersion coefficient value to map $d$ into an interval of integers between $40$ and $100$. We use the mean absolute error ($MAE$) to train $\BNN$. $\BNN$ consists of three blocks with 100 neurons per layer and one output. The $MAE$ is 4.2 for $D_2$ and 5.1 for $D_3$. We lose accuracy compared to non-binarized networks, e.g., the $MAE$ for the same non-binarized network is 2.5 for $D_2$. However, $\BNN$s are much easier to reason about, so we work with this subclass of networks. \paragraph{Image generation.} We use CPLEX and the SMT solver Z3 to solve instances produced by constraints~\eqref{eq:onegrain}--\eqref{eq:process} together with $\BBNN(I,d)$.
In principle, other solvers could be evaluated on these instances. The best mode for Z3 was to use an SMT core based on CDCL and a theory solver for \emph{nested} Pseudo-Boolean and cardinality constraints. We noted that bit-blasting into sorting circuits did not scale, and Z3's theory of linear integer arithmetic was also inadequate. We considered six process constraints for $d$, namely, $d \in [a,b]$, $ [a,b] \in \{[40,50),\ldots, [90,100]\}$. For each interval $[a,b]$, we generated 100 random constrained problems. The randomization comes from a random seed that is passed to the solver, the positions of the grain centers and the parameter $s$ in the constraint~\eqref{eq:convexgrain}. We used the same DAG $G$ construction as in Figure~\ref{fig:examples}(a) in all problems. Table~\ref{table:solved} shows a summary of our results for the CPLEX and Z3 solvers. As can be seen from this table, these instances are relatively easy for the CPLEX solver. It can solve most of them within the given timeout. The average time for $D_2$ is 25s and for $D_3$ is 12s with CPLEX. Z3 handles most benchmarks, but we observed that it gets stuck on examples that are very easy for CPLEX, e.g., the interval $[80,90)$ for $D_2$. We hypothesize that this is due to how watch literals are tracked in a very general way on nested cardinality constraints (Z3 maintains a predicate for each nested PB constraint and refreshes the watch list whenever the predicate changes assignment), when one could instead exploit the limited way that CPLEX allows conditional constraints. The average time for $D_2$ is 94s and for $D_3$ is 64s with Z3. \begin{wrapfigure}{r}{0.35\textwidth} \vspace{-20pt} \begin{center} \includegraphics[width=0.35\textwidth]{results} \end{center} \caption{The absolute error between $d$ and its true value.
\label{fig:mae}} \vspace{-20pt} \end{wrapfigure} Figures~\ref{fig:images}(c)--(e) show examples of generated images for the ranges $[40,50)$, $[60,70)$ and $ [90,100]$ for $D_2$ (the top row) and $D_3$ (the bottom row). For the process we consider, as the value of the dispersion coefficient grows, the black area should decrease, as there should be fewer grain obstacles for a flow to go through the porous medium. Indeed, the images in Figures~\ref{fig:images}(c)--(e) follow this pattern, i.e., the black area on images with $d \in [40,50)$ is significantly larger than on images with $d \in [90,100]$. Moreover, by construction, they satisfy geometric constraints that GANs cannot handle. For each image we generated, we ran a PDE solver to compute the true value of the dispersion coefficient on this image. Then we computed the absolute error between the value of $d$ that our model computes and the true value of the coefficient. Figure~\ref{fig:mae} shows absolute errors for all benchmarks that were solved by CPLEX. First, this figure shows that our model generates images with the given properties. The mean absolute error is about 10 on these instances. Taking into account that $\BNN$ has an $MAE$ of 4.2 on $D_2$, an $MAE$ of 10 on newly generated instances is a reasonable result. Ideally, we would like the $MAE$ to be zero. However, this error depends purely on the $\BNN$ we used. To reduce this error, we need to improve the accuracy of $\BNN$, as it serves as an approximator of a PDE solver. For example, we can use more binarized layers or additional non-binarized layers. Of course, increasing the power of the network leads to computational challenges in solving the corresponding logical formulas.
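The quantization of the dispersion coefficient and the error computation used above can be sketched as follows (a sketch; the exact rounding rule is our assumption, as the paper does not specify it):

```python
def quantize(d):
    """Map a dispersion coefficient d in [0.4, 1.0] to an integer in [40, 100]."""
    return int(round(d * 100))

def absolute_error(d_model, d_true):
    """Absolute error between the model's (already quantized) output and the
    quantized true coefficient computed by the PDE solver."""
    return abs(d_model - quantize(d_true))
```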
\vspace{-10pt} \begin{table} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{Solver} & \multicolumn{6}{c||}{$D_2$} & \multicolumn{6}{c|}{$D_3$} \\ \cline{2-13} & [40,50)& [50,60)& [60,70)& [70,80) & [80,90)& [90,100]& [40,50)& [50,60)& [60,70)& [70,80) & [80,90)& [90,100]\\ \hline CPLEX & 100 & 99 & 99 & 98 & 100 & 41 & 100 & 100 & 96 & 99 & 100 & 84 \\ Z3 & 98 & 89 & 81 & 74 & 56 & 12 & 100 & 97 & 97 & 97 & 96 & 54\\ \hline \end{tabular} \caption{The number of solved instances in each interval $[a,b]$.}\label{table:solved} \end{table} \vspace{-35pt} \section{Related work} There are two lines of work related to our paper. The first one uses constraints to enhance machine learning techniques with declarative constraints, e.g., in solving constrained clustering problems and in data mining techniques that handle domain-specific constraints~\cite{DaoDV17,GanjiBS17,GunsDNTR17}. One recent example is the work of Ganji \emph{et al.}~\cite{GanjiBS17}, who proposed a logical model for constrained community detection. The second line of research explores the embedding of domain-specific constraints in the GAN training procedure~\cite{Oliveira,Oliveira1,HuGLXBVN17,Osokin,RavanbakhshLMSP17}. Work in this area targets various applications in physics and medicine that impose constraints, like sparsity constraints, high dynamic range requirements (e.g., when pixel intensity in an image varies by orders of magnitude), location specificity constraints (e.g., shifting pixel locations can change important image properties), etc. However, this research area is emerging and the results are still preliminary. \section{Conclusion} In this paper we considered the constrained image generation problem for a physical process. We showed that this problem can be encoded as a logical formula over Boolean variables.
For small porous media, we showed that the generation process is computationally feasible for modern decision procedures. There are a lot of interesting future research directions. First, the main limitation of our approach is scalability, as we cannot use large networks with a number of weights in the order of hundreds of thousands, as required by industrial applications. However, the constraints that are used to encode, for example, binarized neural networks are mostly pseudo-Boolean constraints with unary coefficients. Hence, it would be interesting to design specialized procedures to deal with this fragment of constraints. Second, we need to investigate different types of neural networks that admit an encoding into SMT or ILP. For instance, there is a lot of work on quantized networks that use a small number of bits to encode each weight, e.g.~\cite{DengJPWL17}. Finally, can we use similar techniques to reveal vulnerabilities in neural networks? For example, we might be able to generate constrained adversarial examples or other special types of images that expose undesired network behaviour.
\section{Introduction}\label{sec:Introduction} Android is one of the most popular operating systems for smart devices, which are connected through the internet and access billions of online websites. The exponential increase in Android apps is basically due to their open-source nature, third-party distribution, free rich SDK and the well-suited Java language. In this growing Android app market, it is very hard to know which apps contain spam or malware. As per Statista \cite{statista}, $\sim 2 \times 10^6$ Android apps are available at the Google Play store. Also, there are many third-party Android apps available to users \cite{9apps}, which may be malicious. Hence, the potential for malicious apps or malware to enter these systems is now at unprecedented levels. \par Due to their ease of use, these devices hold sensitive information such as personal data, browsing history, shopping history, financial details, etc. \cite{qr2015}; as users access the internet ever more frequently, these devices become increasingly vulnerable to cyber threats/attacks. In this regard, Quick Heal Threat Research Labs reported in the third quarter of 2015 that they received file samples at the rate of $\sim 4.2 \times 10^5$ samples per day for the Android and Windows platforms, and the G Data security experts expect a rapid increase in the number of new malware samples in 2016 compared to previous years \cite{gdata2015}. The traditional approach, i.e., signature-based techniques, is no longer effective for detecting advanced malicious Android apps, as such malware uses code obfuscation techniques. However, a number of methods based on static and dynamic analysis have been proposed for analyzing and detecting Android malware prior to its installation \cite{enck2014taintdroid} \cite{felt2011android} \cite{grace2012riskranker} \cite{reina2013system} \cite{yan2012droidscope}.
It appears that the approaches proposed so far do not suffice to detect advanced malware and to limit/prevent the damage \cite{sharma2014evolution}. Therefore, we investigated five classifiers (FT, Random forest, J48, LMT and NBT) and present a novel approach to combat malware threats/attacks by analysing the opcode occurrence in the apps. The remainder of the paper is organised as follows. In the next section, we discuss the related work. Section 3 describes our approach to detecting malicious apps based on static analysis. The results of our approach are discussed in section 4. Finally, section 5 contains the conclusion and directions for future work. \section{Related work}\label{sec:Related work} Static and dynamic analysis are the two main approaches applied for the detection of Android malware \cite{sharma2014evolution}. In static analysis, without executing the apps, the code is analysed to find malicious patterns by extracting features such as permissions, APIs used, control flow, data flow, broadcast receivers, intents, hardware components, etc. In dynamic analysis, the apps are examined in a run-time environment by monitoring the dynamic behaviour (network connections, system calls, resource usage, etc.) of the apps and the system response. In both approaches, selected classifiers are trained with a known dataset to differentiate benign and malicious apps. In this regard, Seo et al., by analysing the permissions, dangerous APIs and keywords associated with malicious behaviours, detected potential malicious scripts in Android apps \cite{seo2014detecting}. A lightweight framework was discussed by Arp et al., which uses the AndroidManifest.xml file and disassembled code to generate a joint vector space \cite{arp2014drebin}. The approach of Wu et al. detects malware by analyzing AndroidManifest.xml and tracing the system calls \cite{wu2012droidmat}. Sanz et al. analysed five machine learning classifiers (DT, KNN, BN, RF \& SVM) for automatic malware detection by analysing different sets of Android market permissions, ratings and the number of ratings. They found that among the five classifiers BN performs best, RF second, and DT worst \cite{sanz2012automatic}. Vidas et al. developed a tool which automatically analyzes the apps to find the least permissions/privileges that are required to run them \cite{vidas2011curbing}. The method of Fuchs et al. analyses the data flow across the components of Android apps \cite{fuchs2009scandroid}. Arp et al. performed a broad static analysis by embedding the features in a joint vector space, such that the typical patterns of malware can be automatically identified \cite{arp2014drebin}. In the DREBIN project, a study has been done with 123,453 benign and 5,560 malware apps. Based on a set of characteristics derived from binaries and metadata, Gonzalez et al. proposed a method named DroidKin, which can detect the similarity among apps under various levels of obfuscation (code reordering, register reassignment, etc. \cite{sharma2014evolution} \cite{sharma2016improving}) \cite{gonzalez2014droidkin}. The SVM-based malware detection scheme given by Gugian et al. integrates both risky permission combinations and vulnerable API calls and uses them as features for the classification \cite{scholkopf2001estimating}. Saracino et al. \cite{saracino2016madam} proposed a novel host-based malware detection system called MADAM, which simultaneously analyzes and correlates features at four levels (kernel, application, user and package) to detect and stop malicious behaviours. Jerome et al. use opcode sequences to detect malicious apps; however, the approach will not detect completely different malware \cite{jerome2014using}. Later on, using N-opcodes, Kang et al. classified the malware and reported an F-measure of 98\% \cite{kang2016n}.
\section{Our approach}\label{sec:Our approach} Our novel approach to classifying unknown Android malware is shown in figure~\ref{fig:fc}; it involves finding the prominent features (algorithm~\ref{algo:FS}), training the classifiers, and detection. \begin{figure}[!htb] \centering \includegraphics[width=0.75\linewidth, height=0.20\textheight]{Drawing1.eps} \caption{\small \sl Flow chart of the proposed approach for the detection of Android malicious apps.} \label{fig:fc} \end{figure} \vspace{-0.3in} \subsection{Data Preprocessing and Feature Selection} For the classification of unknown Android malware apps, we downloaded 5531 Android malware apps from DREBIN \cite{arp2014drebin} and 2691 benign apps from the Google Play store. The benign apps were cross-verified with virustotal.com \cite{wp1}. \par To understand the logic of Android malware apps, we use the freely available \textit{apktool} \cite{winsniewski2012android} to decompress the Android $.apk$ files. After decompressing, we kept the $.smali$ files and discarded the other created files/folders. Each $.smali$ file contains the information of exactly one class and is equivalent to a $.class$ file. To find the prominent features for the classification of Android malware and benign apps, we extracted the opcodes (a list of the Android opcodes is available at http://pallergabor.uw.hu/androidblog/dalvik\_opcodes.html) of the apps from the obtained $.smali$ files. We analysed the opcode occurrence of all the Android apps and found that the occurrence of many opcodes in malware and benign apps differs significantly. The normalized opcode occurrences of both types of apps are shown in figure~\ref{fig:f1}. The mapping of the opcodes to their hexadecimal representation has been kept the same as given by the Android developers \cite{opcodeList}. The prominent opcodes (features), which are supposed to distinguish malicious and benign Android apps, are obtained as described in algorithm~\ref{algo:FS}.
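The selection of prominent opcodes described in algorithm~\ref{algo:FS} can be sketched in Python as follows (a sketch only; the opcode names used in the example are hypothetical):

```python
from collections import Counter

def select_features(benign_counts, malware_counts, n):
    """Rank opcodes by the gap between their mean frequency in benign apps
    and in malware apps, and return the n opcodes with the largest gap.
    Each input is a list of Counters, one per app (opcode -> frequency)."""
    f_b, f_m = Counter(), Counter()
    for c in benign_counts:
        f_b.update(c)
    for c in malware_counts:
        f_m.update(c)
    n_b, n_m = len(benign_counts), len(malware_counts)
    opcodes = set(f_b) | set(f_m)
    # D(Op) = |F_B(Op) - F_M(Op)| with per-class normalization
    gap = {op: abs(f_b[op] / n_b - f_m[op] / n_m) for op in opcodes}
    return sorted(opcodes, key=lambda op: gap[op], reverse=True)[:n]
```

An opcode whose normalized frequency differs strongly between the two classes is ranked first, which matches the $\mathbf{D(Op)}$ criterion of the algorithm.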
For the classification, we have used the Waikato Environment for Knowledge Analysis (WEKA) tool, a collection of visualisation tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality \cite{holmes1994weka}, in which many inbuilt classifiers are available. On the basis of studies done by Sharma and Sahay \cite{sharma2016effective} \cite{sahay2016grouping}, we selected the best classifiers (Random forest \cite{rodriguez2006rotation}, LMT (Logistic model trees) \cite{landwehr2005logistic}, NBT (Naive-Bayes tree) \cite{kohavi1996scaling}, J48 \cite{bhargava2013decision} and FT (Functional Tree) \cite{gama2004functional}) for in-depth analysis using the K-fold cross-validation technique. \vspace{-0.2in} \begin{figure}[!htb] \centering \includegraphics[width=0.95\linewidth, height=0.20\textheight]{sel_prfl.eps} \caption{\small \sl Dominant opcodes of malicious and benign Android apps.} \label{fig:f1} \end{figure} \vspace{-0.4in} \begin{algorithm}[!htb] \textbf{INPUT:} Pre-processed data\\ $\mathbf{N_B}$: Number of benign Android apps, $\mathbf{N_M}$: Number of malware Android apps, \\$\mathbf{n}$: Total number of prominent features required. \\ \textbf{OUTPUT:} List of prominent features \begin{algorithmic} \STATE \textbf{BEGIN} \FORALL{benign apps } \STATE Compute the sum of the frequencies $\mathbf{f_i}$ of each opcode $\mathbf{Op}$ and normalize it. \STATE \begin{equation*} F_B ( Op_j ) = ( \sum f_i ( Op_j ) ) / N_B \end{equation*} \ENDFOR \FORALL{malware apps } \STATE Compute the sum of the frequencies $\mathbf{f_i}$ of each opcode $\mathbf{Op}$ and normalize it. \STATE \begin{equation*} F_M ( Op_j ) = ( \sum f_i ( Op_j ) ) / N_M \end{equation*} \ENDFOR \FORALL{opcode $\mathbf{Op_j}$} \STATE Find the difference of the normalized frequencies for each opcode $\mathbf{D(Op_j)}$.
\STATE \begin{equation*} D(Op_j)= | F_B ( Op_j ) - F_M ( Op_j ) | \end{equation*} \ENDFOR \RETURN the $\mathbf{n}$ opcodes with the highest $\mathbf{D(Op)}$ as the prominent features. \end{algorithmic} \caption{\textbf{:} Feature Selection} \label{algo:FS} \end{algorithm} \begin{figure}[!htb] \centering \begin{minipage}[t]{0.49\textwidth} \includegraphics[width=1\linewidth, height=0.3\textheight]{accuracy5.eps} \caption{\small \sl Detection accuracy obtained by the selected five classifiers with different number of prominent features.} \label{fig:accuracy} \end{minipage} \begin{minipage}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth, height=0.3\textheight]{best_accuracy.eps} \caption{\small \sl Best accuracy obtained by the selected five classifiers.} \label{ba} \end{minipage} \end{figure} \begin{figure}[!htb] \centering \begin{minipage}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth, height=0.3\textheight]{tp.png} \caption{\small \sl True positives obtained by selected five classifiers with different number of prominent features.} \label{tp} \end{minipage} \begin{minipage}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth, height=0.3\textheight]{tn.eps} \caption{\small \sl True negatives obtained by selected five classifiers with different number of prominent features.} \label{tn} \end{minipage} \end{figure} \begin{figure}[!htb] \centering \begin{minipage}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth, height=0.3\textheight]{fn.png} \caption{\small \sl False negatives obtained by selected five classifiers with different number of prominent features.} \label{fn} \end{minipage} \begin{minipage}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth, height=0.3\textheight]{fp.png} \caption{\small \sl False positives obtained by selected five classifiers with different number of prominent features.} \label{fp} \end{minipage} \end{figure} \section{Result analysis}\label{sec:Result analysis} The
five selected classifiers are analysed by applying supervised machine learning with K-fold cross-validation for k = 10. For the analysis, we first obtained the top 200 prominent features (algorithm \ref{algo:FS}). The accuracy of the classifiers is obtained by varying the number of prominent features and is measured by the equation \begin{equation} \text{Accuracy} = \frac{TP+TN} {TP + FN + TN + FP} \times 100 \end{equation} \noindent where, \\ \noindent $TP \longrightarrow $ True positive, the number of malware apps correctly classified. \\ $FN \longrightarrow $ False negative, the number of malware apps incorrectly classified. \\ $TN \longrightarrow $ True negative, the number of benign apps correctly classified.\\ $FP \longrightarrow $ False positive, the number of benign apps incorrectly classified.\\ The performance of the classifiers has been studied by taking 20\% of the available data (not used for training) with the top 20-200 features, incremented by 20 features at each step, and the results obtained are shown in figure~\ref{fig:accuracy}. From the analysis, the best accuracies obtained by FT, Random forest, J48, LMT and NBT are approximately $79.27$, $74.95$, $71.73$, $70.51$ and $68.87$ percent, respectively (figure \ref{ba}). Among these classifiers, the least fluctuation in accuracy as the features vary is observed in Random forest. Figure \ref{tp} shows the TPR (malware detection rate) of all five classifiers with different numbers of features. We found that RF gives the maximum TPR with the least fluctuation compared to the other classifiers. Figure~\ref{tn} shows the TNR (benign detection rate) for all five classifiers with different numbers of features. Here, with some exceptions, we observed that FT detected the benign apps better than the other classifiers across different numbers of features. Figure \ref{fn} shows the false negatives of all the selected classifiers; compared to the other classifiers, RF performs well here, and its fluctuation with the number of features is the least.
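The metrics above can be computed directly from the confusion counts; a minimal helper is sketched below. The confusion counts in the example are hypothetical, not measurements from this paper:

```python
def accuracy(tp, fn, tn, fp):
    # Accuracy = (TP + TN) / (TP + FN + TN + FP) * 100
    return (tp + tn) / (tp + fn + tn + fp) * 100

def tpr(tp, fn):
    # True positive rate: fraction of malware apps correctly classified.
    return tp / (tp + fn)

def tnr(tn, fp):
    # True negative rate: fraction of benign apps correctly classified.
    return tn / (tn + fp)

# Hypothetical confusion counts for one classifier on a 20% hold-out set:
tp, fn, tn, fp = 1050, 56, 430, 108
print(round(accuracy(tp, fn, tn, fp), 2))  # overall accuracy in percent
```

A high TPR with a low TNR (many false positives) still drags the overall accuracy down, which is the effect observed for the classifiers above.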
Figure \ref{fp} shows the false positives of the analysed classifiers, and here we observed that none of the five classifiers gives a good result, which strongly affects the final accuracy. However, although the false negatives of RF are not on par, their fluctuation with the number of features is the least compared to the other classifiers. \section{Conclusion}\label{sec:Conclusion} The threat from malicious apps on android devices is now at never-before-seen levels, as millions of android apps are available officially (google play store) and unofficially. Some of these available apps may be malicious, hence these devices are highly vulnerable to cyber attacks. The consequences will be devastating if counter-measures are not developed in time. Therefore, in this paper, we investigated five classifiers, FT, Random forest, J48, LMT and NBT, for the detection of malicious apps. We found that among the studied classifiers, FT is the best classifier, detecting the malware with $\sim79.27\%$ accuracy. However, the true positive rate, i.e., the malware detection rate, is highest ($\sim99.91\%$) for RF and fluctuates least with the number of prominent features compared to the other studied classifiers, which is better than the F-measure (98\%) reported by BooJoong et al. \cite{kang2016n}. The analysis shows that the overall accuracy is majorly affected by the false positives of the classifiers. Hence, in the future, a more detailed study is required to decrease the false positive and false negative ratios for good overall accuracy; work in this direction is in progress and is showing impressive results. \section*{Acknowledgments}\label{sec:Acknowledgments} Mr. Ashu Sharma is thankful to BITS, Pilani, K.K. Birla Goa Campus for the support to carry out his work through Ph.D. scholarship No. Ph603226/Jul. 2012/01.
\section{Introduction} Given an edge-colored graph $G$, a \textit{rainbow spanning tree} of $G$ is a spanning tree in which each edge receives a different color. One question that has attracted a lot of attention in recent years is: how many edge-disjoint rainbow spanning trees can we find in any properly edge-colored $K_n$? Brualdi and Hollingsworth \cite{Brualdi-Hollingsworth} conjectured that every properly $(n-1)$-edge-colored $K_n$ contains $\frac{n}{2}$ edge-disjoint rainbow spanning trees (see also Constantine \cite{Constantine}). They showed that in any properly $(n-1)$-edge-colored $K_n$, there exist at least two edge-disjoint rainbow spanning trees. Krussel, Marshall, and Verrall \cite{KMV} showed that under the same assumption, we can find at least three edge-disjoint rainbow spanning trees. Horn \cite{Horn1} proved that for some $\epsilon > 0$, every properly $(n-1)$-edge-colored $K_n$ contains at least $\epsilon n$ edge-disjoint rainbow spanning trees. Subsequently, Fu, Lo, Perry and Rodger \cite{FLPR} showed that every properly $(2n-1)$-colored $K_{2n}$ has at least $\floor{\frac{\sqrt{6n+9}}{3}}$ edge-disjoint rainbow spanning trees. The best bound is due to Pokrovskiy and Sudakov \cite{Sudakov}, who showed that every properly $(n-1)$-edge-colored $K_n$ has $n/9$ edge-disjoint rainbow spanning trees. Strengthening the conjecture of Brualdi and Hollingsworth, Kaneko, Kano and Suzuki \cite{KKS} conjectured that any properly edge-colored $K_n$ (using an arbitrary number of colors) contains $\frac{n}{2}$ edge-disjoint rainbow spanning trees. Balogh, Liu and Montgomery \cite{Balogh} proved that every properly edge-colored $K_n$ contains at least $n/10^{12}$ edge-disjoint rainbow spanning trees. Independently, using a different method, Pokrovskiy and Sudakov \cite{Sudakov} proved a stronger result, showing that in every proper edge-coloring of $K_n$, there are $n/10^6$ edge-disjoint rainbow copies of a certain spanning tree with radius $2$.
Akbari and Alipour \cite{AA} studied edge-disjoint rainbow spanning trees under weaker conditions, showing that any edge-colored $K_n$, in which each color appears at most $\frac{n}{2}$ times, contains at least two edge-disjoint rainbow spanning trees. Carraher, Hartke and Horn \cite{CHH} showed that every edge-colored $K_n$, under the same assumption, contains at least $\floor{n/(1000 \log n)}$ edge-disjoint rainbow spanning trees. Many of the results mentioned above used a criterion for the existence of a rainbow spanning tree, which was first established by Schrijver \cite{Schrijver} using matroid methods, and later given graph theoretical proofs by Suzuki \cite{Suzuki} and also by Carraher and Hartke \cite{Carraher-Hartke}. \begin{theorem}{(\cite{Schrijver, Suzuki, Carraher-Hartke})}\label{Suzuki} An edge-colored connected graph $G$ has a rainbow spanning tree if and only if for every $2\leq k\leq n$ and every partition of $G$ with $k$ parts, at least $k-1$ different colors are represented in edges between partition classes. \end{theorem} In this paper, we develop a similar tool for a variant of the problem: under what conditions can we find $t$ pairwise \textit{color-disjoint} rainbow spanning trees? We say two edge-colored multigraphs $G_1$ and $G_2$ are \textit{color-disjoint} if the sets of colors of the edge sets of $G_1$ and $G_2$ are disjoint. Clearly if two subgraphs of a graph are color-disjoint, they are also edge-disjoint. We will extend Suzuki's proof of Theorem \ref{Suzuki} and give a necessary and sufficient condition for the existence of $t$ pairwise color-disjoint rainbow spanning trees in an arbitrary edge-colored graph. In particular, we show the following: \begin{theorem}\label{partition} An edge-colored multigraph $G$ has $t$ pairwise color-disjoint rainbow spanning trees if and only if for every partition $P$ of $V(G)$ into $|P|$ parts, at least $t(|P|-1)$ distinct colors are represented in edges between partition classes.
\end{theorem} \begin{remark} Theorem \ref{partition} implies Theorem \ref{Suzuki} when $t = 1$. \end{remark} \begin{remark} Recall the famous Nash-Williams-Tutte Theorem (\cite{Nash, Tutte}): A multigraph contains $t$ edge-disjoint spanning trees if and only if for every partition $P$ of its vertex set, it has at least $t(|P|-1)$ cross-edges. Theorem \ref{partition} implies the Nash-Williams-Tutte Theorem by assigning every edge of the multigraph a distinct color. \end{remark} In many situations, we care about edge-disjoint rainbow spanning trees. Let $G$ be an edge-colored multigraph. Let $F_1, \ldots, F_t$ be $t$ edge-disjoint rainbow spanning forests. We are interested in whether $F_1,\ldots, F_t$ can be extended to $t$ edge-disjoint rainbow spanning trees $T_1,\ldots, T_t$ in $G$, i.e., $E(F_i)\subset E(T_i)$ for each $i$. We say the extension is {\em color-disjoint} if all edges in $\cup_i\left ( E(T_i)\setminus E(F_i)\right )$ have distinct colors and these colors are different from the colors appearing in the edges of $\cup_i E(F_i)$. We show the following theorem on the existence of a color-disjoint extension of edge-disjoint rainbow spanning forests. \begin{theorem}\label{extension} A family of $t$ edge-disjoint rainbow spanning forests $F_1, \ldots, F_t$ has a color-disjoint extension to $t$ edge-disjoint rainbow spanning trees in $G$ if and only if for every partition $P$ of $G$ into $|P|$ parts, \begin{equation} \label{eq:ext} |c({\rm cr}(P,G'))|+\sum_{i=1}^t|{\rm cr}(P, F_i)|\geq t(|P|-1). \end{equation} Here $G'$ is the spanning subgraph obtained from $G$ by removing all edges with colors appearing in some $F_i$, and $c({\rm cr}(P,G'))$ is the set of colors appearing in the edges of $G'$ crossing the partition $P$.
\end{theorem} It would be interesting to find a similar criterion for the existence of $t$ edge-disjoint rainbow trees in a general graph, since applications of Theorem \ref{partition} and Theorem \ref{extension} usually require a large number of colors in the host graph. In this paper, we apply the two theorems above to solve an anti-Ramsey problem on rainbow spanning trees. The general \textit{anti-Ramsey problem} asks for the maximum number of colors in an edge-coloring of $K_n$ having no rainbow copy of some graph in a class $\mathcal{G}$. Haas and Young \cite{HY} found the anti-Ramsey number for perfect matchings (when $n$ is even). Bialostocki and Voxman \cite{BV} showed that the maximum number of colors in an edge-coloring of $K_n$ with no rainbow spanning tree is $\binom{n-2}{2}+1$. Jahanbekam and West \cite{West} generalized their results in the direction of avoiding $t$ edge-disjoint rainbow spanning trees. Let $r(n,t)$ be the maximum number of colors in an edge-coloring of $K_n$ not having $t$ edge-disjoint rainbow spanning trees. Akbari and Alipour \cite{AA} showed that $r(n,2) = \binom{n-2}{2}+2$ for $n\geq 6$. Jahanbekam and West \cite{West} showed that $$r(n,t) = \begin{cases} {n-2\choose 2}+t & \textrm{ for } n > 2t+\sqrt{6t-\frac{23}{4}}+\frac{5}{2}\\ {n\choose 2}-t & \textrm{ for } n = 2t, \end{cases}$$ and they made the following conjecture: \begin{conjecture}{\cite{West}} $r(n,t) = \binom{n-2}{2} + t$ whenever $n\geq 2t+2 \geq 6.$ \end{conjecture} Using Theorem \ref{partition} and Theorem \ref{extension}, we show that the conjecture holds and we also determine the value of $r(n,t)$ when $n = 2t+1$. Together with previous results (\cite{BV},\cite{AA},\cite{West}), this completely resolves the anti-Ramsey problem for $t$ rainbow spanning trees. In particular, we have the following theorem.
\begin{theorem}\label{anti-Ramsey} For all positive integers $t$, $$r(n,t) = \begin{cases} {n-2\choose 2}+t & \textrm{ for } n \geq 2t+2\\ {n-1\choose 2} & \textrm{ for } n = 2t+1\\ {n\choose 2}-t & \textrm{ for } n = 2t. \end{cases}$$ \end{theorem} \begin{remark} Note that if $n < 2t$, then $K_n$ does not have enough edges for $t$ edge-disjoint rainbow spanning trees. \end{remark} {\noindent}{\bf Organization:} The rest of the paper is organized as follows. In Section 2, we present the proof of Theorem \ref{partition}. In Section 3, we present the proof of Theorem \ref{extension}. In Section 4, we show the applications of Theorem \ref{partition} and Theorem \ref{extension} to the anti-Ramsey problems for rainbow spanning trees.\\ \section{Proof of Theorem \ref{partition}} Given a graph $G$, we use $V(G),E(G)$ to denote its vertex set and edge set, respectively. We use $\|G\|$ to denote the number of edges in $G$. Given a set of edges $E$, we use $c(E)$ to denote the set of colors that appear in $E$. For clarity, we abuse notation and write $c(e)$ for the color of an edge $e$. We say a color $c$ has \textit{multiplicity} $k$ in $G$ if the number of edges with color $c$ in $G$ is $k$. The \textit{color multiplicity} of an edge in $G$ is the multiplicity of the color of the edge in $G$. For any partition $P$ of the vertex set $V(G)$ and a subgraph $H$ of $G$, let $|P|$ denote the number of parts in the partition $P$ and let ${\rm cr}(P,H)$ denote the set of crossing edges in $H$ whose end vertices belong to different parts in the partition $P$. When $H=G$, we also write ${\rm cr}(P,G)$ as ${\rm cr}(P)$. Given two partitions $P_1\colon V =\cup_i V_i$ and $P_2\colon V=\cup_j V'_j$, let the intersection $P_1\cap P_2$ denote the partition given by $V=\bigcup\limits_{i,j} V_i\cap V'_j$. Given a disconnected spanning subgraph $H$, there is a natural partition $P_H$ associated to $H$, which partitions $V$ according to the connected components of $H$.
Slightly abusing notation, we write ${\rm cr}(H)$ for the crossing edges of $G$ corresponding to this partition $P_H$. Recall we want to show that an edge-colored multigraph $G$ has $t$ color-disjoint rainbow spanning trees if and only if for any partition $P$ of $V(G)$ (with $|P| \geq 2$), \begin{equation}\label{eq:partition} |c({\rm cr}(P))| \geq t(|P|-1). \end{equation} \begin{proof} One direction is easy. Suppose that $G$ contains $t$ pairwise color-disjoint rainbow spanning trees $T_1, T_2, \ldots, T_t$. Then all edges in these trees have distinct colors. For any partition $P$ of the vertex set $V$, each tree contributes at least $|P|-1$ crossing edges, thus $t$ trees contribute at least $t(|P|-1)$ crossing edges and the colors of these edges are all distinct. Now we prove the other direction. Assume that $G$ satisfies inequality \eqref{eq:partition}. We would like to prove $G$ contains $t$ pairwise color-disjoint rainbow spanning trees. We prove this by contradiction. Assume that $G$ does not contain $t$ pairwise color-disjoint rainbow spanning trees. Let ${\cal F}$ be the collection of all families of $t$ color-disjoint rainbow spanning forests $\{F_1, \cdots, F_t\}$. Consider the following deterministic process: \begin{center} \begin{tabbing} mmmm\=mmmm\=mmmm\=mmmm\=mmmm\= \kill Initially, set $C':=\bigcup\limits_{j=1}^t c({\rm cr}(F_j))$\\ {\bf while } $C'\not=\emptyset$ {\bf do}\\ \>{\bf for } each color $x$ in $C' $, {\bf do}\\ \>\> {\bf for } $j$ from 1 to $t$, {\bf do}\\ \>\>\>{\bf if } color $x$ appears in $F_j$, {\bf then }\\ \>\>\>\> delete the edge in color $x$ from $F_j$\\ \>\>\>{\bf endif}\\ \>\>{\bf endfor}\\ \>{\bf endfor}\\ \>set $C':=\bigcup\limits_{j=1}^t c({\rm cr}(F_j))-C'$\\ {\bf endwhile}\\ \end{tabbing} \end{center} For $i \geq 0$, let $F_j^{(i)}$ denote the rainbow spanning forest $F_j$ after $i$ iterations of the while loop.
In particular, $F_j^{(0)} = F_j$ for all $j\in [t]$ and $F_j^{(\infty)}$ is the resulting rainbow spanning forest of $F_j$ after the process. Similarly, let $C_i$ denote the set $C'$ after the $i$-th iteration of the while loop. Observe that since the procedure is deterministic, $\{ F_j^{(i)}: j\in [t], i>0\}$ is unique for a fixed family $\{F_1, \cdots, F_t\}$. We define a {\em preorder} on ${\cal F}$. We say a family $\{F_j\}_{j=1}^t$ is less than or equal to another family $\{F'_j\}_{j=1}^t$ if there is a positive integer $l$ such that \begin{enumerate} \item For $1\leq i<l$, $\displaystyle\sum_{j=1}^t \| F_j^{(i)}\|=\displaystyle\sum_{j=1}^t \| {F'}_j^{(i)}\|$. \item $\displaystyle\sum_{j=1}^t \| F_j^{(l)}\|\leq \displaystyle\sum_{j=1}^t \| {F'}_j^{(l)}\|$. \end{enumerate} Since $G$ is finite, so is $\cal F$. There exists a maximal element $\{F_1, F_2, \cdots, F_t\} \in {\cal F}$. Run the deterministic process on $\{F_1, F_2, \cdots, F_t\}$. The goal is to construct a common partition $P$ by refining ${\rm cr}(F_j)$ so that $|c({\rm cr}(P))| < t(|P|-1)$. In particular, we will show that all forests in $\{F^{(\infty)}_j: j\in [t]\}$ admit the same partition $P$. {\bf Claim (a):} $\bigcup\limits_{j=1}^t c \left ({\rm cr}(F^{(i)}_j)\right ) \subseteq \left ( \bigcup\limits_{j=1}^t c \left ({\rm cr}(F^{(i-1)}_j)\right ) \right )\cup \left (\bigcup\limits_{j=1}^t c(F^{(i)}_j)\right )$. We will prove this claim by contradiction. Assume that there is a color $x\in \bigcup\limits_{j=1}^t c \left ({\rm cr}(F^{(i)}_j)\right )$ but $x\not\in \bigcup\limits_{j =1}^t c({\rm cr}(F^{(i-1)}_j))$, and there is no edge of color $x$ in any of the forests $F^{(i)}_1,\ldots, F^{(i)}_t$. Let $e$ be an edge such that $c(e)=x$ and $e\in {\rm cr}(F_s^{(i)})$ for some $s \in [t]$.
Observe that since $c(e) \notin \bigcup\limits_{j =1}^t c({\rm cr}(F^{(i-1)}_j))$, it follows that $F_s^{(i-1)}+ e$ contains a rainbow cycle, which passes through $e$ and another edge $e' \in F_s^{(i-1)}$ joining two distinct components of $F_s^{(i)}$. Now let us consider a new family of rainbow spanning forests $\{F_1', \cdots, F_t'\}$ where $F_j' = F_j$ for $j \neq s$ and $F_s' = F_s - e' + e$. The color-disjoint property is maintained since the color of edge $e$ is not in any $F_j$. Observe that since $c(e) \notin \bigcup\limits_{j =1}^t c({\rm cr}(F^{(i-1)}_j))$, $F_s'^{(i)}$ will have one fewer component than $F_s^{(i)}$. Thus we have \[\displaystyle\sum_{j=1}^t \| F_j^{(k)}\| = \displaystyle\sum_{j=1}^t \| F_j'^{(k)}\| \text{ for } k < i,\] \[\displaystyle\sum_{j=1}^t \| F_j'^{(i)}\| > \displaystyle\sum_{j=1}^t \| F_j^{(i)}\|,\] which contradicts our maximality assumption on $\{F_j: j\in [t]\}$. That finishes the proof of Claim $(a)$. Claim (a) implies that for each $x \in C_i$, there is an edge $e$ of color $x$ in exactly one of the forests in $\{F_j^{(i)}: j\in [t]\}$. Thus removing that edge in the next iteration will increase the total number of components by exactly $1$. Thus we have that $$\displaystyle\sum_{j=1}^t|P_{F^{(i+1)}_j}|= \displaystyle\sum_{j=1}^t |P_{F^{(i)}_j}|+ |C_{i}|.$$ It then follows that \begin{align*} \sum_{j=1}^t|P_{F_j^{(\infty)}}| &= \sum_{j=1}^t |P_{F_j}|+ \sum_{i}|C_i|\\ &= \sum_{j=1}^t |P_{F_j}|+ |\bigcup\limits_{j=1}^t c({\rm cr}(F_j^{(\infty)}))|. \end{align*} Finally, set the partition $P=\bigcap\limits_{j=1}^t P_{F^{(\infty)}_j}$. We claim $P_{F_j^{(\infty)}}=P$ for all $j$. This is because all edges in ${\rm cr}(P_{F_j^{(\infty)}}) \cap \bigcup\limits_{k=1}^t E(F_k^{(\infty)})$ have already been removed. We have \begin{align*} t|P| &= \sum_{j=1}^t|P_{F^{(\infty)}_j}| \\ &= \sum_{j=1}^t |P_{F_j}|+ |\bigcup\limits_{j=1}^t c({\rm cr}(F^{(\infty)}_j))| \\ &= \sum_{j=1}^t |P_{F_j}|+ |c({\rm cr}(P))|\\ &\geq t+1+ |c({\rm cr}(P))|.
\end{align*} We obtain $$ |c({\rm cr}(P))| \leq t(|P|-1)-1.$$ Contradiction. \end{proof} \begin{corollary} The edge-colored complete graph $K_n$ has $t$ color-disjoint rainbow spanning trees if the number of edges colored with any fixed color is at most $n/(2t)$. \end{corollary} \begin{proof} Suppose $K_n$ does not have $t$ color-disjoint rainbow spanning trees. Then there exists a partition $P$ of $V(K_n)$ into $r$ parts ($2\leq r \leq n$) such that the number of distinct colors in the crossing edges of $P$ is at most $t(r-1)-1$. Let $m$ be the number of edges crossing the partition $P$. It follows that $$m \leq \left ( t(r-1)-1 \right ) \cdot \frac{n}{2t} = \frac{n}{2}(r-1) -\frac{n}{2t}. $$ On the other hand, $$m \geq \binom{n}{2} - \binom{n-(r-1)}{2}.$$ Hence we have $$\binom{n}{2} - \binom{n-(r-1)}{2} \leq \frac{n}{2}(r-1) -\frac{n}{2t},$$ which implies $$(n-r)(r-1) \leq -\frac{n}{t},$$ which contradicts that $2\leq r\leq n$. \end{proof} {\bf Remark:} This result is tight since the total number of colors used in $K_n$ could be as small as ${n\choose 2}/(n/(2t))= t(n-1),$ but any $t$ color-disjoint rainbow spanning trees need $t(n-1)$ colors. In contrast, Carraher, Hartke and Horn's result \cite{CHH} implies there are $\Omega(n/\log n)$ edge-disjoint rainbow spanning trees. \section{Proof of Theorem \ref{extension}} Recall we want to show that any $t$ edge-disjoint rainbow spanning forests $F_1, \ldots, F_t$ have a color-disjoint extension to edge-disjoint rainbow spanning trees in $G$ if and only if, for every partition $P$, $$ |c({\rm cr}(P,G'))|+\sum_{j=1}^t|{\rm cr}(P, F_j)|\geq t(|P|-1),$$ where $G'$ is the spanning subgraph obtained from $G$ by removing all edges with colors appearing in some $F_j$. \begin{proof} Again, the forward direction is trivial. We only need to show that condition \eqref{eq:ext} implies there exists a color-disjoint extension to edge-disjoint rainbow spanning trees. The proof is similar to the proof of Theorem \ref{partition}. We will prove it by contradiction.
Assume that $\{F_1, \ldots, F_t\}$ has no color-disjoint extension to $t$ edge-disjoint rainbow spanning trees. Consider a set of edge-maximal forests $F^{(0)}_1,\ldots, F^{(0)}_t$ which is a color-disjoint extension of $F_1, \ldots, F_t$. From $\{F^{(0)}_j\}$ we delete all edges (in $\{F^{(0)}_j\}$) of some color $c$ appearing in $\bigcup_{j=1}^tc({\rm cr}(F^{(0)}_j,G'))$ to get a new set $\{F^{(1)}_j\}$. Repeat this process until we reach a stable set $\{F^{(\infty)}_j\}$. Since we only delete edges in $G'$, we have $E(F_j)\subseteq E(F^{(\infty)}_j)$ for each $1\leq j \leq t$. The edges and colors in $\cup_{j=1}^t E(F_j)$ will not affect the process. A similar claim still holds: $$\bigcup\limits_{j=1}^t c({\rm cr}(F^{(i)}_j, G')) \subseteq \left ( \bigcup\limits_{j=1}^t c({\rm cr}(F^{(i-1)}_j,G')) \right ) \cup \left (\bigcup\limits_{j=1}^t c \left ( E(F^{(i)}_j)\cap E(G')\right )\right ).$$ In particular, let $C_i = \left (\bigcup_{j=1}^t c({\rm cr}(F^{(i)}_j,G')) \right ) \backslash \left ( \bigcup_{j=1}^t c({\rm cr}(F^{(i-1)}_j,G')) \right ) $. Then we have $$\displaystyle\sum_{j=1}^t|P_{F^{(i+1)}_j}| = \displaystyle\sum_{j=1}^t |P_{F^{(i)}_j}|+ |C_{i}|.$$ It then follows that \begin{align*} \sum_{j=1}^t|P_{F_j^{(\infty)}}| &= \sum_{j=1}^t |P_{F_j^{(0)}}|+ \sum_{i}|C_i|\\ &= \sum_{j=1}^t |P_{F_j^{(0)}}|+ |\bigcup\limits_{j=1}^t c({\rm cr}(F_j^{(\infty)},G'))|. \end{align*} Finally, set the partition $P=\bigcap\limits_{j=1}^t P_{F^{(\infty)}_j \backslash E(F_j)}$. Clearly all edges in ${\rm cr}(P, G')$ are removed. All possible edges remaining in $G$ that cross the partition $P$ are exactly the edges in $\bigcup\limits_{j=1}^t{\rm cr}(P, F_j)$.
We have \begin{align*} t|P| &= \sum_{j=1}^t|P_{F^{(\infty)}_j}| + \sum_{j=1}^t|{\rm cr}(P, F_j)| \\ &= \sum_{j=1}^t |P_{F_j^{(0)}}|+ |\bigcup\limits_{j=1}^t c({\rm cr}(F^{(\infty)}_j,G'))| + \sum_{j=1}^t|{\rm cr}(P, F_j)|\\ &= \sum_{j=1}^t |P_{F_j^{(0)}}|+ |c({\rm cr}(P,G'))|+ \sum_{j=1}^t|{\rm cr}(P, F_j)|\\ &\geq t+1+ |c({\rm cr}(P,G'))|+ \sum_{j=1}^t|{\rm cr}(P, F_j)|. \end{align*} We obtain $$|c({\rm cr}(P,G'))|+ \sum_{j=1}^t|{\rm cr}(P, F_j)|\leq t(|P|-1)-1.$$ Contradiction. \end{proof} \section{Applications to the anti-Ramsey problems} Recall that $r(n,t)$ is the maximum number of colors in an edge-coloring of the complete graph $K_n$ not having $t$ edge-disjoint rainbow spanning trees.\\ \noindent{\bf Lower Bound:} Jahanbekam and West (see Lemma 5.1 in \cite{West}) showed the following lower bound for $r(n,t)$. \begin{proposition}{\cite{West}}\label{lower-bound} For positive integers $n$ and $t$ such that $t\leq 2n-3$, there is an edge-coloring of $K_n$ using $\binom{n-2}{2} + t$ colors that does not have $t$ edge-disjoint rainbow spanning trees. When $n=2t+1$, the construction improves to $\binom{n-1}{2}$ colors. When $n=2t$, it improves to $\binom{n}{2}-t$. \end{proposition} This matches the upper bound that we will show in this section. Hence we will skip the proof of lower bounds in the subsequent theorems. We only need to consider the case $t\geq 3$ (and the case $t=2$, $n=5$), since the case $t=1$ is implied by the results of Bialostocki and Voxman \cite{BV} and the case $t=2$ and $n\geq 6$ is implied by the results of Akbari and Alipour \cite{AA}. \noindent\textbf{Organization:} In Section 4.1, we prove a technical lemma that will be used later. In Section 4.2, we prove a base case when $n=2t+2$. In Section 4.3, we use induction on $n$ to prove the remaining cases of $n\geq 2t+2$. The case $n=2t+1$ is finished in Section 4.4. Putting these together, we finish the proof of Theorem \ref{anti-Ramsey}.
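For a quick reference, the closed-form values of $r(n,t)$ given by Theorem \ref{anti-Ramsey} can be tabulated directly; a short Python sketch (the function is an illustration of the theorem's case split, defined only for $n \geq 2t$):

```python
from math import comb

def r(n, t):
    """Maximum number of colors in an edge-coloring of K_n with no t
    edge-disjoint rainbow spanning trees, per the case split above."""
    assert n >= 2 * t, "K_n lacks enough edges for t spanning trees"
    if n >= 2 * t + 2:
        return comb(n - 2, 2) + t
    if n == 2 * t + 1:
        return comb(n - 1, 2)
    return comb(n, 2) - t  # remaining case: n == 2t

# Sanity checks against earlier results quoted in the introduction:
print(r(10, 1))  # Bialostocki-Voxman: C(8,2) + 1 = 29
print(r(10, 2))  # Akbari-Alipour: C(8,2) + 2 = 30
```

The $t=1$ and $t=2$ values agree with the previously known bounds, as the introduction notes.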
\subsection{Technical lemma} \begin{lemma}\label{lem:1} Let $G$ be an edge-colored graph with $s$ colors $c_1, \cdots, c_s$ and $|V(G)| =n = 2t+2$ where $t\geq 3$. For color $c_i$, let $m_i$ be the number of edges of color $c_i$. Suppose $\displaystyle\sum_{i=1}^s (m_i-1) = 3t$ and $m_i \geq 2$ for all $i \in [s]$. Then we can construct $t$ edge-disjoint rainbow forests $F_1,\ldots, F_t$ in $G$ such that if we define $G_0 = G -\bigcup\limits_{i=1}^t E(F_i)$, then \begin{equation} \label{eq:5} |E(G_0)| \leq 2t+1 \end{equation} and \begin{equation} \label{eq:6} \Delta(G_0) \leq t+1. \end{equation} \end{lemma} \begin{proof} We consider two cases: \begin{description} \item Case 1: $m_1\geq 2t+2$. Note that $$\sum_{i=2}^s(m_i-1)=3t-(m_1-1)\leq t-1.$$ Thus, $s\leq t$. Let $d_i(v)$ be the number of edges in color $c_i$ incident to $v$ in the current graph $G$. We construct the edge-disjoint rainbow forests $F_1,F_2,\ldots, F_t$ in two rounds: In the first round, we greedily extract edges only in color $c_1$. For $i=1,\ldots, t$, at step $i$, pick a vertex $v$ with maximum $d_1(v)$ (breaking ties arbitrarily). Pick an edge in color $c_1$ incident to $v$, assign it to $F_i$, and delete it from $G$. We claim that after the first round $d_1(v)\leq t+1$ for any vertex $v$. Suppose not, i.e., $d_1(v)\geq t+2$ for some $v$. Since $n-1-(t+2) < t$, it follows that there exists another vertex $u$ with $d_1(u)\geq d_1(v)-1\geq t+1$. This implies $$m_1\geq t+d_1(v)+d_1(u)-1\geq 3t+2.$$ However, $$m_1-1\leq \sum_{i=1}^s(m_i-1)=3t,$$ which gives a contradiction. In the second round, we greedily extract edges not in color $c_1$. For $i=1,\ldots, t$, at step $i$, among all vertices $v$ with at least one neighboring edge not in color $c_1$, pick a vertex $v$ with maximum vertex degree $d(v)$ (breaking ties arbitrarily). Pick an edge incident to $v$ and not in color $c_1$, assign it to $F_i$, and delete it from $G$.
If we succeed in selecting $t$ edges not in color $c_1$ in the second round, we claim $d(v)\leq t+1$ for any vertex $v$. Suppose not, i.e., $d(v)\geq t+2$ for some $v$. Then there is another vertex $u$ with $d(u)\geq d(v)-1\geq t+1$. It implies $$\sum_{i=1}^sm_i\geq 2t+d(u)+d(v)-1\geq 4t+2.$$ However, since $s\leq t$, we have $$\sum_{i=1}^s m_i\leq 3t+s\leq 4t.$$ Contradiction. Therefore it follows that $d(v) \leq t+1$. Moreover, $|E(G_0)| \leq 4t - 2t \leq 2t$. If the process stops at step $i=l<t$, then all remaining edges in $G_0$ must be in color $c_1$. Thus, by the previous claim, $\Delta(G_0) \leq t+1$. Moreover, $$|E(G_0)| \leq m_1 -t \leq (3t+1) -t = 2t+1.$$ In both cases above, $F_1, \cdots, F_t$ are edge-disjoint rainbow forests that satisfy inequalities (\ref{eq:5}) and (\ref{eq:6}). \item Case 2: $m_1\leq 2t+1$. {\bf Claim:} There exist $t$ edge-disjoint rainbow forests $F_1, F_2, \cdots, F_t$ such that $\Delta(G_0) \leq t+1$. For $j=1,2,\ldots, t$, we will construct a rainbow forest $F_j$ by selecting a rainbow set of edges such that after deleting these edges from $G$, $\Delta(G_0) \leq 2t+1-j$. Notice that when $j=t$, we will have $\Delta(G_0) \leq t+1$. Our procedure is as follows: For step $j$, without loss of generality, let $v_1, v_2, \cdots, v_l$ be the vertices with degree $2t+2 -j$ and let $c_1, c_2, \cdots, c_m$ be the set of colors of edges incident to $v_1, v_2, \cdots, v_l$ in $G$. If there is no such vertex, simply pick an edge incident to the max-degree vertex and assign it to $F_j$. Otherwise, we will construct an auxiliary bipartite graph $H = A\cup B$ where $A = \{v_1, \cdots, v_l\}$, $B = \{c_1, c_2, \dots, c_m\}$, and $v_x c_y \in E(H)$ if and only if there is an edge of color $c_y$ incident to $v_x$. We claim that there exists a matching of $A$ in $H$. Suppose not; then by Hall's theorem, there exists a set of vertices $A' = \{u_1, u_2, \cdots, u_k\} \subseteq A$ such that $|N(A')|< |A'|=k$ where $k\geq 2$.
Without loss of generality, suppose $N(A') = \{c_1', c_2', \cdots, c_q'\}$ where $q\leq k-1$. Let $m_i'$ be the number of edges of color $c_i'$ remaining in $G$. Note that $k\neq 2$ since otherwise we will have one color with at least $2\cdot (2t+2-j) -1 \geq 2t+3$ edges, which contradicts our assumption in this case. Notice that for every $i\in [k]$, $u_i$ has at least $(2t+2-j)$ edges incident to it. Moreover, at least $j-1$ edges are already deleted from $G$ in previous steps. Therefore, we have \begin{align*} \frac{k(2t+2-j)}{2} & \leq \displaystyle\sum_{i=1}^{q} m_i' \leq \left ( \displaystyle\sum_{i=1}^{q} (m_i'-1)\right ) + (k-1) \leq 3t - (j-1) + (k-1). \end{align*} It follows that $$k \leq 2+ \frac{2t}{2t-j} \leq 4.$$ Similarly, using another way of counting the edges incident to some $u_i$ ($i\in [k]$), we have $$ k(2t+2-j) - \binom{k}{2} \leq 3t-(j-1)+(k-1),$$ which implies that $$t(2k-3) \leq \frac{k(k-3)}{2} + j(k-1) \leq \frac{k(k-3)}{2} + t(k-1).$$ It follows that $t\leq \frac{k(k-3)}{2(k-2)}.$ Since $k \leq 4$ and $k> 2$, we obtain that $t\leq 1$, which contradicts our assumption that $t\geq 3$. Thus by contradiction, there exists a matching of $A$ in $H$. This implies that there exists a rainbow set of edges $E_j$ that covers all vertices with degree $2t+2 -j$ in step $j$. We can then find a maximal acyclic subset $F_j$ of $E_j$ such that $F_j$ is a rainbow forest and every vertex of degree $2t+2-j$ is incident to some edge in $F_j$. Delete edges of $F_j$ from $G$ and we have $\Delta(G_0) \leq 2t+1 -j$. As a result, after $t$ steps, we obtain $t$ edge-disjoint rainbow forests $F_1, \cdots, F_t$ and $\Delta(G_0) \leq t+1$. This finishes the proof of the claim.\\ Now let $\{F_1, F_2, \cdots, F_t\}$ be an edge-maximal set of $t$ edge-disjoint rainbow forests that satisfies $\Delta(G_0) \leq t+1$. We claim that $|E(G_0)| \leq 2t+1$. Suppose not, i.e. $|E(G_0)| \geq 2t+2$. It follows that $\displaystyle\sum_{i=1}^t |E(F_i)| \leq 6t-(2t+2) < 4t$, i.e.
there exists a $j \in [t]$ such that $F_j$ has at most $3$ edges. Since $F_j$ is edge maximal, none of the edges in $G_0$ can be added to $F_j$. We have three cases: \begin{description} \item Case 2a: $|E(F_j)| = 1$. It then follows that all edges in $G_0$ have the same color (call it $c_1'$) as the single edge in $F_j$. Thus we have a color with multiplicity at least $2t+3$, which contradicts the assumption that $m_1 \leq 2t+1$. \item Case 2b: $|E(F_j)| = 2$. Similarly, at least $2t+1$ edges in $G_0$ share the same colors (call them $c_1', c_2'$) as the edges in $F_j$. It follows that $m_1 + m_2 \geq 2t+3$. Similar to Case 1, in this case, we have that $s \leq t+1$ and $|E(G_1)| = 3t+s \leq 4t+1$. Since $|E(G_0)| \geq 2t+2$, this implies that $\displaystyle\sum_{i=1}^t |E(F_i)| \leq (4t+1)-(2t+2) = 2t-1.$ Hence there exists some $F_k$ such that $|E(F_k)| \leq 1$ and we are done by Case 2a. \item Case 2c: $|E(F_j)| = 3$. Similarly, at least $2t-1$ edges in $G_0$ share the same colors (call them $c_1', c_2', c_3'$) as the edges in $F_j$. It follows that $m_1 + m_2 + m_3 \geq 2t+2$. By inequality \eqref{edge-bound}, we have that $s \leq t+4$ and $|E(G_1)| \leq 4t+4$. Since $|E(G_0)| \geq 2t+2$, this implies that $\displaystyle\sum_{i=1}^t |E(F_i)| \leq 2t+2$. Since $t\geq 3$ by our assumption, there exists a $k \in [t]$ such that $|E(F_k)| \leq 2$ and we are done by Case 2a or Case 2b. \end{description} Therefore, by contradiction, we have that $|E(G_0)| \leq 2t+1$ and we are done. \end{description} \end{proof} \subsection{Proof of Theorem \ref{anti-Ramsey} where $n =2t+2$} \begin{proposition}\label{base-case} For any $t\geq 3$ and $n = 2t+2$, we have $r(n,t) = \binom{n-2}{2} +t = 2t^2$. \end{proposition} \begin{proof} Note that the lower bound is shown by Jahanbekam and West in Proposition \ref{lower-bound}. For the upper bound, we will show that any coloring of $K_{2t+2}$ with $2t^2+1$ distinct colors contains $t$ edge-disjoint rainbow spanning trees.
Call this edge-colored graph $G$. Let $m_i$ be the multiplicity of the color $c_i$ in $G$. Without loss of generality, say the first $s$ colors have multiplicity at least $2$, i.e. \[m_1 \geq m_2 \geq \cdots \geq m_s \geq 2.\] Let $G_1$ be the spanning subgraph of $G$ consisting of all edges with color multiplicity greater than 1 in $G$. Let $G_2$ be the spanning subgraph consisting of the remaining edges. We have \begin{equation}\label{edge-bound} \sum_{i=1}^s (m_i-1)={n\choose 2}-(2t^2 + 1)=3t. \end{equation} In particular, we have $$|E(G_1)|=\sum_{i=1}^s m_i=3t+s\leq 6t.$$ By Lemma \ref{lem:1}, it follows that we can construct $t$ edge-disjoint rainbow spanning forests $F_1,\ldots, F_t$ in $G$ such that if we let $G_0$ be the spanning subgraph with edge set $E(G) \setminus \bigcup\limits_{i=1}^t E(F_i)$, then $$ |E(G_0)| \leq 2t+1$$ and $$\Delta(G_0) \leq t+1.$$ Now we show that $F_1,\ldots, F_t$ have a color-disjoint extension to $t$ edge-disjoint rainbow spanning trees. Consider any partition $P$ of $V(G)$. We will verify \begin{equation} \label{eq:extend} |c({\rm cr}(P),G_2)|+ \sum_{i=1}^t|{\rm cr}(P, F_i)|\geq t(|P|-1). \end{equation} We will first verify the case when $3\leq |P| \leq n$. Note that by equation (\ref{eq:5}), we have $$|c({\rm cr}(P),G_2)|+ \sum_{i=1}^t|{\rm cr}(P, F_i)| -t(|P|-1) \geq {n\choose 2}-(2t+1) -{n-|P|+1 \choose 2}-t(|P|-1).$$ We want to show that the right-hand side of the above inequality is nonnegative. Note that the function on the right-hand side is concave downward with respect to $|P|$. Thus it is sufficient to verify it at $|P|=3$ and $|P|=n$. When $|P|=3$, we have $${n\choose 2}-(2t+1) -{n-2\choose 2}-2t = 0.$$ When $|P|=n$, we have $${n\choose 2}-(2t+1)-t(n-1)=0.$$ It remains to verify inequality (\ref{eq:extend}) for $|P| = 2$. By Lemma \ref{lem:1}, we have $|E(G_0)|\leq 2t+1$.
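The two boundary evaluations above, combined with concavity, settle the case $3\leq |P|\leq n$; as a purely illustrative numerical sanity check (not part of the proof), the bound can be confirmed directly for small $t$:

```python
from math import comb

# Illustrative check (not part of the proof): for n = 2t+2, the bound
#   C(n,2) - (2t+1) - C(n-|P|+1,2) - t(|P|-1)
# is nonnegative for all 3 <= |P| <= n and vanishes at |P| = 3 and |P| = n.
for t in range(3, 50):
    n = 2 * t + 2
    vals = [comb(n, 2) - (2 * t + 1) - comb(n - p + 1, 2) - t * (p - 1)
            for p in range(3, n + 1)]
    assert min(vals) >= 0
    assert vals[0] == 0 and vals[-1] == 0
```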
If each part of $P$ contains at least $2$ vertices, then we have \begin{align*} &\hspace*{-1cm} |c({\rm cr}(P),G_2)|+ \sum_{i=1}^t|{\rm cr}(P, F_i)| -t(|P|-1)\\ &\geq {n\choose 2}-|E(G_0)|-\left ({n-2 \choose 2}+1 \right )-t\\ &\geq {n\choose 2}-(2t+1) -\left ({n-2 \choose 2}+1 \right )-t\\ &=t-1\geq 0. \end{align*} Otherwise, $P$ is of the form $V(G) = \{v\} \cup B$ for some $v\in V(G)$ and $B = V(G)\backslash \{v\}$. By Lemma \ref{lem:1}, we have $d_{G_0}(v)\leq t+1$. Thus, $$|c({\rm cr}(P),G_2)|+ \sum_{i=1}^t|{\rm cr}(P, F_i)| - t(|P|-1)\geq (n-1)-d_{G_0}(v)-t \geq 2t+1-(t+1)-t= 0.$$ Therefore, by Theorem \ref{extension}, $F_1,\ldots, F_t$ have a color-disjoint extension to $t$ edge-disjoint rainbow spanning trees. \end{proof} \subsection{Proof of Theorem \ref{anti-Ramsey} where $n\geq 2t+3$} \begin{proposition} For any $t\geq 3$ and $n\geq 2t+2$, we have $r(n,t)={n-2\choose 2}+t$. \end{proposition} \begin{proof} Again, the lower bound is due to Proposition \ref{lower-bound}. For the upper bound, we will show that every edge-coloring of $K_n$ with exactly ${n-2\choose 2} +t+1$ distinct colors has $t$ edge-disjoint rainbow spanning trees. Call this edge-colored graph $G$. Given a vertex $v$, we define $D(v)$ to be the set of colors that appear only on edges incident to $v$. Given a vertex $v$ and a set of colors $C$, define $\Gamma(v, C)$ as the set of edges incident to $v$ with colors in $C$. For ease of notation, we let $\Gamma(v) = \Gamma(v,D(v))$. For fixed $t$, we will prove this proposition by induction on $n$. The base case is when $n = 2t+2$, which is proven in Proposition \ref{base-case}. We now consider the case $n\geq 2t+3$. \begin{description} \item Case 1: there exists a vertex $v \in V(G)$ with $|\Gamma(v)| \geq t$ and $|D(v)| \leq n-3$. In this case, we set $G' = G-\{v\}$. Note that $G'$ is an edge-colored complete graph with at least ${n-2\choose 2} +t+1 - (n-3) = {n-3 \choose 2} + t+1$ distinct colors.
Moreover $|G'| \geq 2t+2$. Hence by induction, there exist $t$ edge-disjoint rainbow spanning trees in $G'$. Note that by our definition of $D(v)$, none of the colors in $D(v)$ appear in $E(G')$. Moreover, since $|\Gamma(v)| \geq t$, we can extend the $t$ edge-disjoint rainbow spanning trees in $G'$ to $G$ by adding one edge in $\Gamma(v)$ to each of the rainbow spanning trees in $G'$. \item Case 2: Suppose we are not in Case 1. We first claim that there exist two vertices $v_1, v_2 \in V(G)$ such that $|\Gamma(v_1)| \leq t-1$ and $|\Gamma(v_2)| \leq t-1$. Otherwise, there are at least $n-1$ vertices $u$ with $|\Gamma(u)| \geq t$. Since we are not in Case $1$, it follows that all these vertices $u$ also satisfy $|D(u)| \geq n-2$. Hence by counting the number of distinct colors in $G$, we have that \[ \frac{(n-1)(n-2)}{2} \leq {n-2 \choose 2} + t + 1,\] which implies that $n \leq t+3$, giving us a contradiction.\\ Now suppose $|\Gamma(v_1)| \leq t-1$ and $|\Gamma(v_2)| \leq t-1$. Let $D = D(v_1) \cup D(v_2)$. Add new colors to $D$ until $|\Gamma(v_1, D)| \geq t$, $|\Gamma(v_2, D)| \geq t+1$ and $|D| \geq t+1$. Call the resulting color set $S$. Note that \[ t+1 \leq |S| \leq 2t+1 \leq n-2.\] Now let $G' = G-\{v_1, v_2\}$ and delete all edges of colors in $S$ from $G'$. We claim that $G'$ has $t$ color-disjoint rainbow spanning trees. By Theorem \ref{partition}, it is sufficient to verify the condition that for any partition $P$ of $V(G')$, $$|c({\rm cr}(P, G'))|\geq t(|P|-1).$$ Observe \begin{align*} & |c({\rm cr}(P,G'))| -t(|P|-1) \hspace*{-2cm} \\ & \geq |c(E(G'))|- {n-1-|P|\choose 2}-t(|P|-1)\\ &\geq {n-2\choose 2}+t+1 -|S| - {n-1-|P|\choose 2}-t(|P|-1)\\ &\geq {n-2\choose 2}+t+1 - (n-2) - {n-1-|P|\choose 2}-t(|P|-1). \end{align*} Note the expression above is concave downward as a function of $|P|$. It is sufficient to check the value at $|P|=2$ and $|P|=n-2$.
When $|P|=2$, we have \begin{align*} |c({\rm cr}(P,G'))| -t(|P|-1) \geq {n-2\choose 2}+t+1 - (n-2) - {n-3\choose 2}-t= 0. \end{align*} When $|P|=n-2$, we have \begin{align*} |c({\rm cr}(P,G'))| -t(|P|-1)&\geq {n-2\choose 2}+t+1 - (n-2) -t(n-3)\\ &= \frac{(n-4)(n-2t-3)}{2}\\ &\geq 0. \end{align*} Here we use the assumption $n \geq 2t+3$ in the last step. Now it remains to extend the $t$ color-disjoint rainbow spanning trees we found to $G$ by using only the colors in $S$. Let $e_1, \cdots, e_k$ be the edges in $G$ incident to $v_1$ with colors in $S$. Let $e_1', \cdots, e_l'$ be the edges in $G\backslash \{v_1\}$ incident to $v_2$ with colors in $S$. With our selection of $S$, it follows that $k, l\geq t$. Now construct an auxiliary bipartite graph $H$ with partite sets $A = \{e_1, \cdots, e_k\}$ and $B = \{e_1', \cdots, e_l'\}$ such that $e_i e_j' \in E(H)$ if and only if $e_i, e_j'$ have different colors in $G$. We claim that there is a matching of size $t$ in $H$. Let $M$ be a maximum matching in $H$. Without loss of generality, suppose $e_1 e_1', \cdots, e_m e_m' \in M$ where $m < t$. It follows that the edges in $\{e_j: m < j \leq k\} \cup \{e_j': m < j \leq l\}$ all have the same color (otherwise we can extend the matching). Without loss of generality, they all have color $x$. Now observe that for every matched pair $e_i e_i'$, exactly one of $e_i$ and $e_i'$ must have color $x$. Otherwise, we can extend the matching by pairing $e_i$ with an unmatched $e_j'$ and an unmatched $e_j$ with $e_i'$. This implies that the edges in $A \cup B$ carry at most $m+1 \leq t$ distinct colors, which contradicts that $|S| \geq t+1$. Hence there is a matching of size $t$ in $H$. Since none of the edges in $G'$ have colors in $S$, it follows that we can extend the $t$ color-disjoint rainbow spanning trees in $G'$ to $t$ edge-disjoint rainbow spanning trees in $G$. \end{description} Hence in both cases, we obtain that $G$ has $t$ edge-disjoint rainbow spanning trees.
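The boundary evaluations at $|P|=2$ and $|P|=n-2$ above can likewise be spot-checked numerically for small parameter values (an illustration only, not part of the proof):

```python
from math import comb

# Illustrative check: for n >= 2t+3, the quantity
#   C(n-2,2) + t + 1 - (n-2) - C(n-1-|P|,2) - t(|P|-1)
# is nonnegative for 2 <= |P| <= n-2, vanishing at |P| = 2 and
# equaling (n-4)(n-2t-3)/2 at |P| = n-2.
for t in range(3, 20):
    for n in range(2 * t + 3, 2 * t + 30):
        vals = [comb(n - 2, 2) + t + 1 - (n - 2)
                - comb(n - 1 - p, 2) - t * (p - 1)
                for p in range(2, n - 1)]
        assert min(vals) >= 0
        assert vals[0] == 0
        assert vals[-1] == (n - 4) * (n - 2 * t - 3) // 2
```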
\end{proof} \subsection{Proof of Theorem \ref{anti-Ramsey} where $n=2t+1$} \begin{proposition} For positive integers $t \geq 1$ and $n=2t+1$, we have $r(n,t)={n-1\choose 2}=2t^2-t$. \end{proposition} \begin{proof} Again, the lower bound is due to Proposition \ref{lower-bound}. Now we prove that any edge-coloring of $K_{2t+1}$ with $2t^2-t+1$ distinct colors contains $t$ edge-disjoint rainbow spanning trees. Call this edge-colored graph $G$. The proof approach is similar to the case when $n=2t+2$. Let $m_i$ be the multiplicity of the color $c_i$ in $G$. Without loss of generality, say the first $s$ colors have multiplicity greater than or equal to $2$: $$m_1\geq m_2\geq \cdots \geq m_s\geq 2.$$ Let $G_1$ be the spanning subgraph consisting of all edges whose color multiplicity is greater than 1 in $G$. Let $G_2$ be the spanning subgraph consisting of the remaining edges. We have \begin{equation}\label{eq:m2} \sum_{i=1}^s (m_i-1)={n\choose 2}-(2t^2-t+1)=2t-1. \end{equation} In particular, we have $$|E(G_1)|=\sum_{i=1}^s m_i=2t-1+s\leq 4t-2.$$ \textbf{Claim:} We can construct $t$ edge-disjoint rainbow forests $F_1,\ldots, F_t$ in $G_1$ such that if we let $G_0 = G_1 \backslash \bigcup\limits_{i=1}^t E(F_i)$, then $|E(G_0)| \leq t$. Again, for the proof of the claim, we consider two cases: \begin{description} \item Case 1: $m_1 \geq t+2$. By equation (\ref{eq:m2}), we have that $s \leq (2t-1)-(t+1)+1 = t-1.$ We construct $t$ edge-disjoint rainbow forests $F_1, \cdots, F_t$ as follows: First take $t$ edges of color $c_1$ and add one edge to each of $F_1,\cdots, F_t$. Next, pick one edge from each of the remaining $s-1$ colors and add each of them to a distinct $F_i$. Clearly, we can obtain $t$ edge-disjoint rainbow forests in this way. Furthermore, $$|E(G_0)| \leq 2t-1 + s - (t+s-1) = t,$$ which proves the claim. \item Case 2: $m_1 < t+2$. Let $F_1,\ldots, F_t$ be an edge-maximal family of rainbow forests in $G_1$.
Let $G_0 = G_1 \backslash \bigcup\limits_{i=1}^t E(F_i)$. Suppose that $|E(G_0)| > t$. Then $$\sum_{i=1}^t |E(F_i)| \leq 2t-1+s-(t+1) = t+s-2.$$ Since $s\leq 2t-1$, it follows that there exists some $j$ such that $|E(F_j)| \leq 2$. \begin{description} \item Case 2a: $|E(F_j)| = 1$. Since $\{F_1,\ldots, F_t\}$ is edge-maximal and $|E(G_0)|\geq t+1$, it follows that all edges in $G_0$ share the same color (call it $c_1'$) as the single edge in $F_j$. Thus $m_1 \geq t+2$, which contradicts the Case 2 assumption that $m_1 < t+2$. \item Case 2b: $|E(F_j)| = 2$. Similarly, at least $t$ edges in $G_0$ share the same colors (call them $c_1'$, $c_2'$) as the two edges in $F_j$. It follows that $m_1 + m_2 \geq t+2$. Hence $s\leq t+1$. Now since $|E(G_0)|\geq t+1$, it follows that $$\sum_{i=1}^t |E(F_i)| \leq 2t-1+s -(t+1) = t+s-2 \leq 2t-1.$$ Hence there exists some forest with at most one edge, in which case we are done by Case 2a. \end{description} Hence by contradiction, we obtain that $|E(G_0)| \leq t$, which completes the proof of the claim.\\ \end{description} Now we show that $F_1,\ldots, F_t$ have a color-disjoint extension to $t$ edge-disjoint rainbow spanning trees. Consider any partition $P$. We will verify $$|c({\rm cr}(P),G_2)|+ \sum_{i=1}^t|{\rm cr}(P, F_i)|\geq t(|P|-1).$$ We have $$|c({\rm cr}(P),G_2)|+ \sum_{i=1}^t|{\rm cr}(P, F_i)| -t(|P|-1) \geq {n\choose 2}-t -{n-|P|+1 \choose 2}-t(|P|-1).$$ Note that the function on the right is concave downward in $|P|$. It is enough to verify it at $|P|=2$ and $|P|=n$. When $|P|=2$, we have $${n\choose 2}-t -{n-1\choose 2}-t =n-1-2t\geq 0.$$ When $|P|=n$, we have $${n\choose 2}-t-t(n-1)=0.$$ By Theorem \ref{extension}, $F_1,\ldots, F_t$ have a color-disjoint extension to $t$ edge-disjoint rainbow spanning trees. \end{proof}
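The endpoint computations in this last proof can also be verified numerically (an illustrative check only, not part of the argument):

```python
from math import comb

# Illustrative check: for n = 2t+1, the bound
#   C(n,2) - t - C(n-|P|+1,2) - t(|P|-1)
# is nonnegative for all 2 <= |P| <= n and vanishes at both endpoints.
for t in range(1, 50):
    n = 2 * t + 1
    vals = [comb(n, 2) - t - comb(n - p + 1, 2) - t * (p - 1)
            for p in range(2, n + 1)]
    assert min(vals) >= 0
    assert vals[0] == 0 and vals[-1] == 0
```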
\section{Introduction} Within this paper we look to explain an English Premier League team's style of attacking play; determining the number of chances a team creates, along with identifying the players involved and from where on the pitch each chance took place. The Premier League is an annual soccer league established in 1992 and is the most watched soccer league in the world \citep{yueh_2014, curley_2016}. It consists of 20 teams, who, over the course of a season, play every other team twice (both home and away), giving a total of 380 fixtures. It is the top division of English soccer, and every year the bottom 3 teams are relegated to be replaced by 3 teams from the next division down (the Championship). In recent times the Premier League has also become known as the richest league in the world \citep{deloitte_2016}, through both foreign investment and a lucrative deal for television rights \citep{rumsby_2016, bbc_2015}. To compete in the Premier League, teams employ different styles of play, often determined by the manager's personal preferences and the players who make up the team. Examples of attacking styles of play include counter-attacking (quickly moving the ball into scoring range) and passing build-up (many short passes to find a weakness in the opposition's defense). For further discussion of styles of play, we direct the reader to \citep{wendichansky_2016, huddleston_2018}. Methods to model a soccer team's style of play/behavior have been explored previously by a number of authors. \cite{lucey_2013} use occupancy maps defined using a given metric, for example, the mean or an entropy measure, to determine a team's style of play, showing that a team will aim to ``win home games and draw away ones.'' Occupancy maps are also used by \cite{bialkowski_2014}, who take spatio-temporal player tracking data and develop a method to automatically detect formations and player roles.
\cite{bojinov_2016} utilize Gaussian processes to form a spatial map to capture each team's defensive strengths and weaknesses. \cite{pena_2012, pena_2014, pena_2015} employ methods from network analysis to draw conclusions about a team's or player's use of possession. How the players on a team interact is discussed in \cite{grund_2012}, and \cite{kim_2010} estimate the global movements of all players to predict short-term evolutions of play. Outside of soccer, \cite{miller_2014} investigate shot selection amongst basketball players in the NBA, combining matrix factorization techniques with an intensity surface, modeled using a log-Gaussian Cox process. Defensive play in basketball is captured by \cite{franks_2015}, who take player tracking data and apply spatio-temporal processes, matrix factorization techniques and hierarchical regression models. More generally, the statistical modeling of sports has become a topic of increasing interest in recent times, as more data are collected on the sports we love, coupled with a heightened interest in the outcomes of these sports, driven by the continuous rise of online betting. Soccer is providing an area of rich research, with the ability to capture the goals scored in a match being of particular interest, see \citep{dixon_1997, karlis_2003, baio_2010}. A player performance rating system (the EA Sports Player Performance Index) was developed by \cite{mchale_2012}, which aims to represent a player's worth in a single number, whilst \cite{mchale_2014} identify the goal scoring ability of players. \cite{whitaker_2017} rate players for a number of abilities, before using them to aid the prediction of goals scored. Finally, \cite{kharrat_2017} develop a plus-minus rating system for soccer.
In this paper we propose a method to capture the number of chances a team creates during a given section of a match, along with determining the players involved in a chance, where on the pitch the chance was created and where it was taken from. Our work differs from previous studies in this area in a number of ways. Firstly, previous work has used complete touch data (where every location that a player touches the ball in a game is recorded) to model a team's attacking play. Here, we use only the location of the assist and the chance. Thus, our proposed method is less computationally intensive and allows inferences from coarser and significantly cheaper data. Previous work has also focused on modeling the spatial dynamics of a team as a whole, whereas our method identifies the individual spatial contributions of players. Where specific players have been modeled in the past, this is often not accompanied by spatial analysis; instead player-to-player relationships are considered. We note that the model proposed within this paper has a wide variety of applications, of which we illustrate a few. The remainder of this article is organized as follows. The data is presented in Section~\ref{data}. In Section~\ref{model} we outline our model to capture a team's chances, before discussing an approach to identify the players involved with each chance and from which spatial locations. Applications are considered in Section~\ref{app} and a discussion is provided in Section~\ref{disc}. \section{The data} \label{data} The data available to us is Stratagem Technologies' Analyst data. This is a collection of data which marks the significant events during a soccer match; including goals, cards (both yellow and red) and chances created. For each of these events a time is recorded (in minutes), the team and player involved with the event, and for the goals/chances the location on the pitch is marked.
If the event is a goal/chance, both the player taking the chance and the player assisting the chance are recorded (along with the spatial location of the chance and the assist). From here on, we consider goals and chances to be the same for our purposes (a goal being, after all, a chance which is scored)---we refer to them collectively as chances. A section of the data is shown in Table~\ref{tab-data}. The data covers the 2016/2017 English Premier League season and consists of roughly 32K events in total, which equates to approximately 85 events for each fixture in the dataset. We also have the date of each fixture. { \begin{table} \centering \footnotesize \begin{tabular}{ccccccccccc} \hline \multirow{2}{*}{fixture} & \multirow{2}{*}{date} & \multirow{2}{*}{team} & \multirow{2}{*}{time} & \multirow{2}{*}{type} & event & assist & assist & assist & chance & chance \\ &&&&& player & player & x & y & x & y \\ \hline 2241765 & 2016-08-13 & 725 & 82.35 & Yellow card & 94174 & --- & --- & --- & --- & --- \\ 2241765 & 2016-08-13 & 725 & 81.38 & Chance & 38569 & 38569 & -108 & 21 & -98 & 34 \\ 2241765 & 2016-08-13 & 682 & 75.65 & Chance & 5724 & 11180 & 136 & 41 & 26 & 45 \\ 2241765 & 2016-08-13 & 682 & 72.48 & Chance & 156662 & 159732 & 47 & 76 & 48 & 39 \\ \hline \end{tabular} \caption{A section of Stratagem Technologies' analyst data} \label{tab-data} \end{table} } \begin{figure}[t!] \centering \includegraphics[scale=0.48]{pitchmap.pdf} \caption{Map of the pitch; the point $(0,0)$ represents the center of the defended goal (shaded box). Further key reference points are detailed in Table~\ref{tab-points}} \label{pitchfig} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{lcc} \hline Point & $x$ & $y$ \\ \hline Center of defended goal & 0 & 0 \\ Right goalpost & 15 & 0 \\ Left goalpost & -15 & 0 \\ 6-yard box, right corner & 37 & 22 \\ 6-yard box, left corner & -37 & 22 \\ Penalty spot & 0 & 44 \\ 18-yard box, right corner & 81 & 66 \\ 18-yard box, left corner & -81 & 66 \\ Center spot & 0 & 210 \\ \hline \end{tabular} \caption{Key reference points} \label{tab-points} \end{table} Locations on the pitch are represented by $(x,y)$-coordinates with the $x$-axis running between the two touch-lines (width of the pitch) and the $y$-axis representing the length of the pitch between the goalposts. The spatial location is always recorded from the perspective of the attacking team, meaning the coordinate system does not need to be rotated to account for the second team, or to accommodate the fact that teams switch ends at half-time. The point $(0,0)$ marks the center of the defended goal, with the width of the pitch going from -136 to 136 (left to right), and the pitch length running from 0 to 420. Explicitly, $x\in[-136,136]$ and $y\in[0,420]$. A map of the pitch is shown in Figure~\ref{pitchfig}, with some key reference points given in Table~\ref{tab-points}. Further to the above, it is possible to extract additional statistics from the dataset. These include the game state and the red card state for a team at a given time point. The game state is the number of goals a team is winning or losing by at that point in time, for example, a team winning 1-0 would have a game state of +1, a team losing 1-3 would be -2, and, if the game is currently a draw, both teams would have a game state of 0. The red card state is defined similarly, and is the difference in the number of players on each team. To elucidate, if a team has a player sent off, their red card state would be -1, whilst the opposition's would be +1.
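The game state and red-card state described above can be computed directly from an event stream; the following is a minimal sketch using a hypothetical event-dict format (not the vendor's exact schema):

```python
# Minimal sketch of the game state and red-card state defined above.
# The event-dict format here is hypothetical, not Stratagem's schema.
def game_state(events, team, t):
    """Goal difference for `team` over goals scored strictly before time t."""
    own = sum(e["team"] == team for e in events
              if e["type"] == "Goal" and e["time"] < t)
    opp = sum(e["team"] != team for e in events
              if e["type"] == "Goal" and e["time"] < t)
    return own - opp

def red_card_state(events, team, t):
    """Player-count difference for `team` from red cards before time t."""
    own = sum(e["team"] == team for e in events
              if e["type"] == "Red card" and e["time"] < t)
    opp = sum(e["team"] != team for e in events
              if e["type"] == "Red card" and e["time"] < t)
    return opp - own

# Tiny worked example (invented events).
events = [
    {"type": "Goal", "team": "A", "time": 10.0},
    {"type": "Goal", "team": "B", "time": 30.0},
    {"type": "Red card", "team": "A", "time": 50.0},
]
print(game_state(events, "A", 20.0))      # 1: team A leads 1-0 at t = 20
print(red_card_state(events, "A", 60.0))  # -1: team A is a player down
```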
\section{The model} \label{model} In this section we define our model to capture a team's chances, before discussing an approach to determine the composite nature of each individual chance. Each chance consists of an assist player, a player taking the chance (chance player), the spatial location from which the assist was made and the location of the chance. First, the number of chances a team has in a given period~($N$) is sampled using a Poisson model. Then for each chance ($E$), we draw an assist player ($A$) and a chance player ($C$) from discrete distributions, with an assist location $(x^a,y^a)$ and the difference between the assist and chance locations $(\Delta^{x},\Delta^{y})$ being captured through Gaussian mixture models. A diagram of the model is given in Figure~\ref{pic-modelrep}. We begin by looking at the number of chances each team generates. \begin{figure} \centering \begin{tikzpicture}[scale=7,>=latex] \node[draw,circle] (N) at (0.9,1.2) {\large $N$}; \node[draw,circle] (e1) at (0.4,1) {\large $E_1$}; \node[draw,circle] (a1) at (0.1,0.85) {\large $A_1$}; \node[draw,circle] (c1) at (0.1,0.65) {\large $C_1$}; \node[draw,circle] (xa1) at (0.35,0.65) {\large $\left(x_1^a,y_1^a\right)$}; \node[draw,circle] (xd1) at (0.65,0.65) {\large $\left(\Delta_1^x,\Delta_1^y\right)$}; \node at (0.55, 1) {{$\ldots$}}; \node at (0.9, 1) {{$\ldots$}}; \node at (1.25, 1) {{$\ldots$}}; \node (e2) at (0.7, 1.03) {{}}; \node (e3) at (1.1, 1.03) {{}}; \node[draw,circle] (eN) at (1.4,1) {\large $E_N$}; \node[draw,circle] (aN) at (1.75,0.85) {\large $A_N$}; \node[draw,circle] (cN) at (1.75,0.65) {\large $C_N$}; \node[draw,circle] (xaN) at (1.48,0.65) {\large $\left(x_N^a,y_N^a\right)$}; \node[draw,circle] (xdN) at (1.15,0.65) {\large $\left(\Delta_N^x,\Delta_N^y\right)$}; \draw (e1) edge[->] (N) ; \draw (e2) edge[->, black!50!] (N) ; \draw (e3) edge[->, black!50!]
(N) ; \draw (eN) edge[->] (N) ; \draw (a1) edge[->] (e1) ; \draw (c1) edge[->] (e1) ; \draw (xa1) edge[->] (e1) ; \draw (xd1) edge[->] (e1) ; \draw (aN) edge[->] (eN) ; \draw (cN) edge[->] (eN) ; \draw (xaN) edge[->] (eN) ; \draw (xdN) edge[->] (eN) ; \end{tikzpicture} \caption{Visual representation of the model for a single team in a given fixture} \label{pic-modelrep} \end{figure} \subsection{A team's number of chances} \label{chances} Consider the case where we have $K$ matches, numbered $k=1,\ldots,K$. We denote the set of teams in fixture $k$ as $T_k$, with $T_k^H$ and $T_k^A$ representing the home and away teams respectively. Explicitly, $T_k = \{T_k^H, T_k^A\}$. We take $P$ to be the set of all players who feature in the dataset, and $P^j\subseteq P$ to be the subset of players who play for team~$j$. For simplicity we outline the model for a single fixture first. We split a fixture into blocks---one possibility being to split a fixture into 15 minute blocks, giving 6 blocks in total (see Figure~\ref{pic-matchblocks}). Of course the widths of these blocks are arbitrary, and could equally be set to be either a half of soccer (45 minutes) or indeed every minute. After discussion with expert soccer analysts the authors feel that a block of 15 minutes provides sufficient granularity without introducing large levels of redundancy. Typically, a soccer match will have a small amount of extra time at the end of each half; throughout this paper, any chances which occur within these periods of extra time are included in either $t_3$ or $t_6$ (using the block structure illustrated in Figure~\ref{pic-matchblocks}).
\begin{figure} \vspace{0.5cm} \begin{center} \begin{tikzpicture}[scale=7,>=latex] \draw[black] (0,0.0) rectangle (1.5,0.2); \draw[black] (0,-0.05) -- (0,0.2); \draw[black] (0.25,-0.05) -- (0.25,0.2); \draw[black] (0.5,-0.05) -- (0.5,0.2); \draw[black] (0.75,-0.05) -- (0.75,0.2); \draw[black] (1.0,-0.05) -- (1.0,0.2); \draw[black] (1.25,-0.05) -- (1.25,0.2); \draw[black] (1.5,-0.05) -- (1.5,0.2); \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{0}] (xo0) at (0,-0.05) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{15}] (xo1) at (0.25,-0.05) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{30}] (xo2) at (0.5,-0.05) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{45}] (xo3) at (0.75,-0.05) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{60}] (xo4) at (1.0,-0.05) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{75}] (xo5) at (1.25,-0.05) {}; \node[draw,circle,fill=black,inner sep=0.5mm, label=below:{90}] (xo6) at (1.5,-0.05) {}; \node at (0.125, 0.1) {{$t_1$}}; \node at (0.375, 0.1) {{$t_2$}}; \node at (0.625, 0.1) {{$t_3$}}; \node at (0.875, 0.1) {{$t_4$}}; \node at (1.125, 0.1) {{$t_5$}}; \node at (1.375, 0.1) {{$t_6$}}; \node at (0.75, -0.25) {{Time (minutes)}}; \end{tikzpicture} \end{center} \vspace{-0.5cm} \caption{One possible way to split a fixture into blocks} \label{pic-matchblocks} \end{figure} Taking $N_{{t_r},k}^j$ to be the number of chances for team $j$, in match $k$, and block~$t_r$,~$r=~1,\ldots,6$, we have \begin{equation} \label{posmod} N_{{t_r},k}^j\sim Pois\left(\lambda_{{t_r},k}^j\right), \end{equation} where \begin{equation} \label{lambda} \lambda_{{t_r},k}^j = \exp\left\{\theta_{t_r}^j - \theta_{t_r}^{T_k\setminus j} + \left(\delta_{T_k^H,j}\right)\gamma_{t_r} + \alpha G^j_{{t_r,k}} + \beta R^j_{{t_r,k}} \right\}.
\end{equation} A team's propensity to create chances is represented by $\theta_{t_r}^j$, $\theta_{t_r}^{T_k\setminus j}$ is the opposition's ability to create chances, $\gamma_{t_r}$ is a home effect for the corresponding block and $\delta_{a,b}$ is the Kronecker delta. The home effect reflects the (supposed) advantage the home team has over the away team. A home effect for the number of goals a team scores has been discussed by numerous authors, see for example \citep{dixon_1997, karlis_2003}. The current game state at the start of a block for a team is $G^j_{{t_r,k}}$, with $R^j_{{t_r,k}}$ being the red card state. For identifiability purposes, we follow \cite{karlis_2003} (amongst others) and impose the constraint that the $\theta_{t_r}^j$ must sum to zero over teams $j$, specifically \[ \sum_{j} \theta_{t_r}^{j} = 0. \] The thinking behind this model construction is that if a team is creating chances, the other team cannot. Whilst this assumption is limiting by construction, given defensive tactics and other tangential aspects of play, it is the easiest (and possibly most meaningful) set-up derived from the data, which consists of attacking instances only. From \eqref{posmod} and \eqref{lambda}, the likelihood is given by \begin{equation} L_N = \prod_{r=1}^6\prod_{k=1}^K\prod_{j\in T_k} \dfrac{\left(\lambda_{{t_r},k}^j \right)^{N_{{t_r},k}^j}\exp{\left(-\lambda_{{t_r},k}^j \right)}}{N_{{t_r},k}^j\,!}. \label{llike-pois} \end{equation} We note that it is possible to model the number of chances a team creates using an approach similar to the one implemented by \citep{dixon_1997, karlis_2003, baio_2010, whitaker_2017} (albeit for the goals a team scores). However, we find little or no difference in the sum-of-squares, bias or empirical predictive distributions under the two set-ups. Thus, we proceed with the simpler model (in terms of the number of parameters) given by \eqref{posmod}--\eqref{llike-pois}.
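To make the rate specification concrete, a minimal simulation sketch of \eqref{posmod}--\eqref{lambda} is given below; all numeric parameter values are invented for illustration, not fitted estimates:

```python
import numpy as np

# Sketch of the chance-count model: N ~ Poisson(lambda), with
# log(lambda) = theta_team - theta_opp + home*gamma + alpha*G + beta*R.
# Every numeric value below is an invented illustration.
rng = np.random.default_rng(1)

def chance_rate(theta_team, theta_opp, home, gamma, G, R, alpha, beta):
    return np.exp(theta_team - theta_opp + home * gamma + alpha * G + beta * R)

lam = chance_rate(theta_team=0.3, theta_opp=-0.1, home=1,
                  gamma=0.2, G=1, R=0, alpha=-0.05, beta=0.4)
n_chances = rng.poisson(lam)  # simulated chances in one 15-minute block
```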
\subsection{Chance composition} \label{chancecomposition} Once the number of chances created by a team is determined by the above, we break $N_{{t_r},k}^j$ into separate events, $E_s$, where $s~=~1,\ldots,N_{{t_r},k}^j$ and \[ E=\left(E_1,\ldots,E_{N_{{t_r},k}^j}\right). \] Each $E_s$ is a composition of the assist player ($A$), the chance player ($C$), the $(x,y)$-coordinates for the assist location $(x^a,y^a)$ and the difference between the assist and chance locations $(\Delta^{x},\Delta^{y})$, where \begin{align*} \Delta^{x} & = x^c - x^a \\ \Delta^{y} & = y^c - y^a, \end{align*} with $(x^c,y^c)$ being the $(x,y)$-coordinates of the chance. By using the difference between the assist and chance locations we aim to model any dependence we may observe between the assist and chance locations. Explicitly $E_s=\left[A,C,x^a,y^a,\Delta^{x},\Delta^{y}\right]$. First, let us consider the task of determining the assist and chance player involved with each event. We make the assumption that a player cannot assist a player on an opposing team (such as assisting an own goal, by forcing the error), and neither can they take a chance created by a player from the opposition (for example, running onto a bad back pass). In the context of soccer these events are reasonably rare, and by implementing this assumption we can consider the players of one team to be independent from the players of another team. A player can switch teams part way through a season (in January) or at the end of a season by means of a transfer; however, we consider them to be a new player to be learned, as they may have different dynamics with their new team mates and possibly play in a different system, for example, playing in a new position to the one at their previous team. We model the probability of each assist player (and chance player) using a Multinoulli (or categorical) distribution. 
Let $Z_{s,i,t_r}^a$ be a one-hot vector, with a 1 in position $i$, representing the assist player for event $s$, in a given block $t_r$, with $i\in P^j$. Denote the probability of each player making an assist for a given event by $\phi_{i,t_r}^a$, where \[ \sum_{i\in P^j}\phi_{i,t_r}^a = 1. \] Setting $\phi^a_{t_r}$ to be the vector of $\phi_{i,t_r}^a$s, $\phi^a$ to be the vector of $\phi^a_{t_r}$s, $Z^a_{t_r}$ as the vector of $Z_{s,i,t_r}^a$s and $Z^a$, the vector of $Z^a_{t_r}$s, then \begin{equation}\label{multassist_a} Z_{s,i,t_r}^a\sim \textrm{Multinoulli}\left(\phi^a_{t_r}\right), \end{equation} with \begin{equation}\label{zassist} \pi\left(Z^a\big\vert\phi^a\right) = \prod_{r=1}^6\prod_{s=1}^{N_{{t_r},k}^j} \pi\left(Z_{s,i,t_r}^a\big\vert\phi^a_{t_r}\right). \end{equation} Similarly, for the chance player \begin{equation}\label{multassist_c} Z_{s,i,t_r}^c\sim \textrm{Multinoulli}\left(\phi^c_{t_r}\right), \end{equation} where \begin{equation}\label{zdelta} \pi\left(Z^c\big\vert\phi^c\right) = \prod_{r=1}^6\prod_{s=1}^{N_{{t_r},k}^j} \pi\left(Z_{s,i,t_r}^c\big\vert\phi^c_{t_r}\right). \end{equation} Next, we consider the spatial locations, which we model using a mixture model. For a general discussion of mixture models we refer the reader to \cite{mclachlan_2004}. Given the nature of the spatial locations we implement a Gaussian mixture model, with $M$ components. Denote the weighting of the mixture components (for a given player $i$, in a given block $t_r$) by $\kappa_{i,t_r}^a$ and $\kappa_{i,t_r}^{\Delta}$ for the assist and $\Delta$ locations respectively, with $\kappa_{i,t_r}^*~=~(\kappa_{i,t_r,1}^*,\ldots,\kappa_{i,t_r,M}^*)$ and \[ \sum_{m=1}^M \kappa_{i,t_r,m}^*=1. \] Furthermore, let the observations for a given player, in a specific block, be $X_{i,t_r}^a$ and $X_{i,t_r}^{\Delta}$, with $X_{i,t_r}^*=(X_{i,t_r,1}^*,\ldots,X_{i,t_r,L_{i,t_r}^*}^*)$.
Hence, the likelihood for the assist locations is \begin{equation}\label{llike-space_a} L_a = \prod_{r=1}^6\prod_{i\in P} \prod_{l=1}^{L_{i,t_r}^a} \sum_{m=1}^M \kappa_{i, t_r, m}^a\times N\left\{ \begin{pmatrix} x_{i,t_r,l}^a \\ y_{i,t_r,l}^a \end{pmatrix} ; \begin{pmatrix} \mu^a_{x,m} \\ \mu^a_{y,m} \end{pmatrix}, \Sigma^a_m \right\}, \end{equation} where $N(\cdot\,;\,m,V)$ denotes the multivariate Gaussian density with mean $m$ and variance~$V$. Similarly \begin{equation}\label{llike-space_d} L_{\Delta} = \prod_{r=1}^6\prod_{i\in P} \prod_{l=1}^{L_{i,t_r}^{\Delta}} \sum_{m=1}^M \kappa_{i, t_r, m}^{\Delta}\times N\left\{ \begin{pmatrix} x_{i,t_r,l}^{\Delta} \\ y_{i,t_r,l}^{\Delta} \end{pmatrix} ; \begin{pmatrix} \mu^{\Delta}_{x,m} \\ \mu^{\Delta}_{y,m} \end{pmatrix}, \Sigma^{\Delta}_m \right\}. \end{equation} To simplify our approach we choose to predetermine the number of components which make up our mixture model. After discussion with expert soccer analysts we decided upon $M=8$ components, whose locations we determine through k-means clustering. Thus, we set $\mu_m^a$, $m~=~1,\ldots,M$, to be the cluster centroids defined using all the observed assist locations (by all players), and $\mu_m^{\Delta}$, $m=1,\ldots,M$, using the $\Delta$ locations (deterministically constructed using the chance and assist locations). We leave $\Sigma^a_m$, $\Sigma^{\Delta}_m$, $m=1,\ldots,M$, as parameters to infer, rather than taking the variances of the clusters per se.
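To make the mixture terms in \eqref{llike-space_a} concrete, the following self-contained Python sketch evaluates a two-component bivariate Gaussian mixture density. The component weights, centroids and covariances are hypothetical stand-ins, not our fitted values.

```python
import math

def bvn_pdf(x, y, mx, my, sxx, syy, sxy=0.0):
    # Bivariate Gaussian density with mean (mx, my) and
    # covariance matrix [[sxx, sxy], [sxy, syy]].
    det = sxx * syy - sxy * sxy
    dx, dy = x - mx, y - my
    q = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

def mixture_density(x, y, kappa, mus, covs):
    # sum_m kappa_m * N((x, y); mu_m, Sigma_m), i.e. one factor of L_a.
    return sum(k * bvn_pdf(x, y, mx, my, sxx, syy, sxy)
               for k, (mx, my), (sxx, syy, sxy) in zip(kappa, mus, covs))

# Two hypothetical components with equal weight.
kappa = [0.5, 0.5]
mus = [(-115.0, 115.0), (0.0, 240.0)]          # made-up centroids
covs = [(400.0, 400.0, 0.0), (900.0, 900.0, 0.0)]  # (sxx, syy, sxy)
d = mixture_density(-110.0, 110.0, kappa, mus, covs)
```

The log-likelihood in \eqref{llike-space_a} is then the sum of `math.log(mixture_density(...))` over all observed locations.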
The locations of the cluster centroids are shown in Figure~\ref{clustfig} (indicated by a cross), where we also plot the data, colored according to cluster assignment (under k-means). To add some context to the cluster centroids, for the assist locations, the furthest right centroid $(0, 240)$ (own half, OH) is likely to represent a long ball forward for a player to run on to. For the leftmost column, the widest centroids $(x=-115, 115)$ (left corner [LC], right corner [RC]) are assists from corners or crosses into the box, whilst the middle two (left box [LB], right box [RB]) show cutbacks across goal and knock-downs. The middle column is slightly more ambiguous, although the center cross (center opposition half, CH) is most likely short through-ball assists, with the wider centroids being free-kicks and further crosses into the box (left opposition half [LH], right opposition half [RH]). The $\Delta$ centroids are the inverse of the assist centroids (in shape) and are simply the distance the ball traveled for the assist, for example, a larger magnitude of $x$ and a smaller magnitude of $y$ represents a cross into the box. Having outlined the two components of our model, namely, the number of chances a team generates and the composition of these chances, we must consider the best way to fit the model, which is the subject of the next section. \subsection{Bayesian inference}\label{bayes} To estimate the parameters in the model we use a Bayesian inference approach. 
The joint posterior is given by \begin{align} & \mkern-40mu\pi\left(\theta, \alpha, \beta, \gamma, \tau, \phi^a, \phi^c, \kappa^a, \kappa^{\Delta}, \Sigma^a, \Sigma^{\Delta}\big\vert N, Z^a, Z^c, x^a, y^a, \Delta^x, \Delta^y\right) \nonumber \\ &\propto \pi\left(\alpha\right)\pi\left(\beta \right)\pi\left(\gamma \right)\pi\left(\tau \right)\pi\left(\theta\vert\tau \right)\pi\left(N\big\vert\theta,\alpha,\beta,\gamma \right) \nonumber \\ &\quad\times \pi\left(\phi^a\right)\pi\left(Z^a\big\vert\phi^a\right) \pi\left(\phi^c\right)\pi\left(Z^c\big\vert\phi^c\right) \nonumber \\ &\qquad\times \pi\left(\kappa^a\right)\pi\left(\Sigma^{a}\right)\pi\left(x^a, y^a\big\vert \kappa^a, \mu^a, \Sigma^a, \phi^a\right) \nonumber \\ &\qquad\quad\times \pi\left(\kappa^{\Delta}\right)\pi\left(\Sigma^{\Delta}\right)\pi\left(\Delta^x, \Delta^y\big\vert \kappa^{\Delta}, \mu^{\Delta}, \Sigma^{\Delta}, \phi^c\right), \label{joint} \end{align} where $\pi(N\vert\theta,\alpha,\beta,\gamma)$ follows \eqref{llike-pois}, $\pi(Z^a\vert\phi^a)$ is given by \eqref{zassist} and $\pi(Z^c\vert\phi^c)$ by \eqref{zdelta}, $\pi(x^a, y^a\vert \kappa^a, \mu^a,\Sigma^a, \phi^a)$ is governed by \eqref{llike-space_a} and $\pi(\Delta^x, \Delta^y\vert \kappa^{\Delta}, \mu^{\Delta}, \Sigma^{\Delta}, \phi^c)$ follows \eqref{llike-space_d}. Furthermore, $\pi(\theta\vert\tau)$ is the prior density ascribed to $\theta$, dependent upon $\tau$, which we take to follow a $N(0,\tau)$ distribution.
To fully specify the model, we implement the following priors \\ \begin{minipage}[b]{0.47\linewidth} \begin{align} \pi\left(\alpha\right)&\sim N\left(0,10^2\right), \nonumber\\ \pi\left(\gamma\right)&\sim N\left(0,10^2\right),\nonumber\\ \pi\left(\phi^a\right)&\sim\textrm{Dirichlet}\left(1_P\right), \nonumber\\ \pi\left(\kappa^a\right)&\sim\textrm{Dirichlet}\left(1_M\right), \nonumber\\ \pi\left(\Sigma^{a}\right)&\sim \mathcal{W}^{-1}\left(I_2, 2\right), \nonumber \end{align} \end{minipage} \begin{minipage}[b]{0.47\linewidth} \begin{align} \pi\left(\beta\right)&\sim N\left(0,10^2\right), \nonumber\\ \pi\left(\tau\right)&\sim \textrm{Gamma}(1, 0.01),\nonumber\\ \pi\left(\phi^c\right)&\sim\textrm{Dirichlet}\left(1_P\right), \nonumber\\ \pi\left(\kappa^{\Delta}\right)&\sim\textrm{Dirichlet}\left(1_M\right), \nonumber\\ \pi\left(\Sigma^{\Delta}\right)&\sim \mathcal{W}^{-1}\left(I_2, 2\right), \label{priors} \end{align} \end{minipage} \vspace{0.3cm}\\ where $1_q$ is a vector of 1s with length $q$, $I_q$ is the identity matrix with dimension $q$ and $\mathcal{W}^{-1}$ is the inverse Wishart distribution. By assuming $\phi^*$ follows a Dirichlet distribution \textit{a priori}, we are modeling the assist and chance players as a mixture of Multinomials, which is in line with techniques used in topic modeling, as part of a hierarchical Bayesian model. Where topic models (usually) capture the words for a particular topic, here, we determine the players for an assist or chance. The form of \eqref{joint} admits a Gibbs sampling strategy with blocking, which we can extend to form five independent full conditionals for the number of chances, the assist player, the chance player, the location of the assist and the $\Delta$ location. Further blocking strategies that exploit the conditional dependencies between the model parameters and the data can also be used. To elucidate, the assist player, $\phi^a$, can be updated separately for each team. 
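For intuition, the flat Dirichlet priors in \eqref{priors} can be sampled with only the Python standard library, using the standard construction of normalising independent Gamma variates; the dimension below corresponds to a hypothetical five-player squad.

```python
import random

def sample_dirichlet(alpha, rng=None):
    # Draw from Dirichlet(alpha) by normalising independent
    # Gamma(alpha_q, 1) variates.
    rng = rng or random.Random(1)
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

# A draw from Dirichlet(1_5), i.e. a flat prior over 5 players,
# analogous to the prior on phi^a in (priors).
phi_a = sample_dirichlet([1.0] * 5)
```

With $\alpha = 1_q$ every probability vector on the simplex is a priori equally likely, so the data drive the posterior weights on players and mixture components.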
On top of this, all parameters can be updated separately for each block, $t_r$. We fit the model in \verb+Python+ using the package \verb+PyMC3+. \begin{figure}[t!] \centering \includegraphics[scale=0.48]{kmeansclustassist.pdf}\vspace{0.05cm} \includegraphics[scale=0.48]{kmeansclustdelta_swapx.pdf} \caption{Cluster centroids (cross) under k-means with all data points classified by cluster assignment. \emph{Top} assist, \hbox{\emph{bottom}}~$\Delta$.} \label{clustfig} \end{figure} \section{Applications} \label{app} Having outlined our approach to determine the number of chances a team will generate in a given fixture---accounting for the opposition's ability to create chances, the game and red card states and a home effect---plus a model for the composition of these chances, we wish to test the proposed methods in real world scenarios. Given the independence between the components which constitute the model we consider two applications. In the first we learn a team's ability to create chances and in the second we examine which players are involved, and where on the pitch these events occur. For both applications we use the data described in Section~\ref{data}, namely the 2016/2017 English Premier League. Throughout this section, to aid table/figure aesthetics, we refer to teams by the abbreviations given in Table~\ref{tab-teamabb}. We note that CHE won the league, with TOT, MCI and LIV getting UEFA Champions League places, therefore, we may expect these 4 teams to be the best. On the other hand, SUN, MID and HUL were relegated at the end of the season, meaning these 3 teams were perhaps the worst. 
{ \begin{table} \footnotesize \centering \hspace{-1.1cm} \begin{tabular}{ll|ll|ll|ll} \hline \multicolumn{8}{c}{Abbreviation\,/\,Team} \\ \hline BOU & AFC Bournemouth & EVE & Everton & MUN & Manchester United & SWA & Swansea City \\ ARS & Arsenal & HUL & Hull City & MID & Middlesbrough & TOT & Tottenham Hotspur \\ BUR & Burnley & LEI & Leicester City & SOU & Southampton & WAT & Watford \\ CHE & Chelsea & LIV & Liverpool & STK & Stoke City & WBA & West Bromwich Albion \\ CRY & Crystal Palace & MCI & Manchester City & SUN & Sunderland & WHU & West Ham United \\ \hline \end{tabular} \caption{2016/2017 English Premier League teams and abbreviations} \label{tab-teamabb} \end{table} } \subsection{Determining a team's chance ability} \label{teamchance-sec} We fit the model defined by \eqref{posmod}--\eqref{llike-pois}, using the priors specified in \eqref{priors}. We found little difference in results for alternative priors. We ran the model for 2000 iterations, after an initial burn-in of 100 iterations. A trace plot for $\gamma_{t_r}$ is given in Figure~\ref{gammafig}, where we see reasonable mixing (this trace plot is typical for all parameters in the model). 
\begin{figure} \centering \includegraphics[scale=0.48]{gammatrace.pdf} \caption{Trace plot for $\gamma_{t_1}$} \label{gammafig} \end{figure} \begin{table} \centering \begin{tabular}{l|cccccc} \hline & \multicolumn{6}{c}{Block} \\ Team & $t_1$ & $t_2$ & $t_3$ & $t_4$ & $t_5$ & $t_6$ \\ \hline BOU & -0.043 & -0.004 & -0.211 & -0.231 & -0.003 & -0.075 \\ ARS & 0.043 & 0.122 & 0.238 & 0.201 & 0.086 & 0.150\\ BUR & -0.098 & -0.174 & -0.280 & -0.166 & -0.178 & -0.291\\ CHE & 0.040 & 0.036 & 0.310 & 0.307 & 0.183 & 0.384\\ CRY & -0.0231 & -0.079 & -0.200 & -0.106 & -0.020 & -0.126\\ EVE & -0.024 & 0.030 & 0.233 & 0.015 & 0.069 & 0.284\\ HUL & -0.143 & -0.057 & -0.304 & -0.147 & -0.183 & -0.125\\ LEI & -0.058 & -0.080 & -0.303 & -0.121 & -0.139 & -0.174\\ LIV & 0.118 & 0.207 & 0.414 & 0.390 & 0.130 & 0.333\\ MCI & 0.201 & 0.401 & 0.375 & 0.268 & 0.249 & 0.465\\ MUN & 0.111 & 0.234 & 0.341 & 0.033 & 0.112 & 0.253\\ MID & -0.065 & -0.263 & -0.255 & -0.162 & -0.208 & -0.198\\ SOU & 0.092 & 0.075 & 0.090 & 0.132 & 0.020 & 0.091\\ STK & -0.053 & -0.182 & -0.162 & 0.028 & -0.081 & -0.113\\ SUN & -0.296 & -0.080 & -0.200 & -0.156 & -0.194 & -0.531\\ SWA & 0.047 & -0.139 & -0.042 & -0.093 & -0.106 & -0.236\\ TOT & 0.169 & 0.220 & 0.360 & 0.254 & 0.208 & 0.332\\ WAT & -0.095 & -0.153 & -0.189 & -0.197 & 0.020 & -0.211\\ WBA & -0.001 & -0.183 & -0.160 & -0.171 & -0.015 & -0.138\\ WHU & 0.077 & 0.071 & -0.056 & -0.076 & 0.051 & -0.075\\ \hline \end{tabular} \caption{A team's mean ability to create chances, $\theta^j_{t_r}$, in the 2016/2017 English Premier League for each block} \label{tab-chances} \end{table} The posterior means for a team's ability to create chances ($\theta^j_{t_r}$) over the entire 2016/2017 season are presented in Table~\ref{tab-chances}, for each of the 6 blocks. Those teams which we identified as possibly being ``better'' at creating chances, namely CHE, TOT, MCI and LIV, all have higher values in the table.
Noticeably, they have higher values for blocks $t_5$ and $t_6$, when compared to other teams. This suggests they are able to find a way to win (by creating more chances) in the closing moments of a game (or a way to recover if they are losing), which is perhaps why they had a successful season. MCI have the highest value in $t_1$, $t_2$ and $t_6$, meaning they started and finished games well. These values highlight Pep Guardiola's playing style, along with the quality of MCI's substitutes (they can replace good players with equally good players). CHE do not have as high values as some of the other top teams (even though they won the league), suggesting they did not create as many chances as other teams but they were more clinical with the ones they did create. Unsurprisingly, the teams who were relegated at the end of the season (SUN, MID, HUL) have some of the lowest values in the table. SUN have the worst ability to create chances in $t_1$ and $t_6$, with MID having a similar ability across all blocks, leading to them being the 2 lowest scoring teams in the league. \begin{figure} \centering \includegraphics[scale=0.48]{gamma-end-with95int.pdf} \caption{Mean home effect (solid line) and 95\% credible intervals (dotted line) in each block in the 2016/2017 English Premier League} \label{gammatimefig} \end{figure} Figure~\ref{gammatimefig} shows the posterior mean for the home effect in each block over the entire 2016/2017 season, along with 95\% credible intervals. The credible intervals in each block are of near identical size, meaning we have similar levels of uncertainty surrounding all $\gamma_{t_r}$s. For all blocks we see a positive home effect, showing a team tends to create more chances at home than when playing away. This is in line with other findings concerning home effects. There is a rise in the home effect in $t_3$ (the end of the first half); this is possibly due to fan pressure to perform well.
If a team is losing going into half time, fans want to see their team trying to get back into the game (by creating more chances); if they are drawing, they want them to try to gain an advantage; and if they are winning, they want to see them press home their advantage. This level of home effect carries into the second half $(t_4,t_5)$ before a similar rise is observed in $t_6$. The rise at the end of the game corresponds to a home team's desperation to achieve a positive result (and please their fans). It is also possible that the home team is able to draw more energy from the crowd, and therefore outperform the away team. The trend seen in Figure~\ref{gammatimefig} complements the findings of \cite{lucey_2013}, that a team will play more defensively away from home, with the suggestion that if an away team is winning or drawing in the final 15 minutes of a game ($t_6$), they will attempt to hold onto what they have (by defending more and creating fewer chances). \begin{figure} \centering \includegraphics[scale=0.6]{Eriksen_assists_vor_seasonv2_gray.pdf} \caption{Eriksen assist locations for each block in the 2016/2017 English Premier League, colored according to the weighting of each mixture component} \label{eriksenfig} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{Mahrez_assists_t5_vor_gray.pdf} \caption{Mahrez assist locations in $t_5$ after different periods of time, colored according to the weighting of each mixture component} \label{mahrezfig} \end{figure} \subsection{Determining locations} \label{location-sec} Having determined the number of chances a team will create, we now fit the model defined through \eqref{zassist} and \eqref{zdelta}--\eqref{llike-space_d} to capture the composition of these chances. Initially, we focus our attention on the assist and $\Delta$ locations. Christian Eriksen created the most chances in the 2016/2017 English Premier League.
Figure~\ref{eriksenfig} illustrates the locations of these assists in each block through a Voronoi diagram, colored according to the weighting of each mixture component ($\kappa^a_{i,{t_r}}$). It clearly shows that Eriksen changes his style of play (or at least the location of his play, and possibly effectiveness) during different periods of the game, for example, the plots for $t_1$ and $t_2$. As we are implementing the model within the Bayesian paradigm, we can fit the model up to a certain point in the season, before updating our beliefs once more data becomes available (more matches are played). To this end, we learn the model parameters using data up until 1/1/2017 (roughly half the season), and then proceed to update our beliefs after each subsequent month. Voronoi diagrams for Riyad Mahrez's assists in $t_5$ after each of these months (along with the season as a whole) are shown in Figure~\ref{mahrezfig}. Mahrez was one of the stars for Leicester City when they won the league in 2015/2016; however, he was not playing as well under manager Claudio Ranieri in 2016/2017 (our dataset). This is evidenced by the top row of plots, where high weights are only assigned to the left corner. Ranieri was sacked in February and Craig Shakespeare became manager, who was seen to get Mahrez back playing somewhere near his best. The figure supports this, with the bottom 4 plots showing assists coming from more areas of the pitch, namely the left-hand side, drifting to more central positions. This approach (through Figures~\ref{eriksenfig} and \ref{mahrezfig}) illustrates that we can model how a player plays throughout a game and over a season. Given we can update as more data becomes available, this allows us to capture when a player changes their style of play or when they start to become more/less important to a team.
\begin{figure} \centering \includegraphics[scale=0.6]{KaneAguero_delta_season.pdf} \caption{Radar plots of the mean $\kappa_{i,t_r}^{\Delta}$ for Kane (solid) and Ag\"{u}ero (dashed) for each block in the 2016/2017 English Premier League} \label{kaneaguerofig} \end{figure} Integrating over the posterior uncertainty of the spatial locations gives the marginal posterior densities for $\kappa^{\Delta}$, from which we can ascertain differences in how certain players take chances. Radar plots of the mean $\kappa_{i,t_r}^{\Delta}$ (at each centroid) for Harry Kane (who scored the most goals) and Sergio Ag\"{u}ero (who had the most chances) are shown in Figure~\ref{kaneaguerofig}. For simplicity we number the centroids 1--8. The meaning of each centroid is subtle, and explanation is beyond the scope of this paper. However, Figure~\ref{kaneaguerofig} makes it easy to visualize (and distinguish) how certain players take chances, for instance, through the differences in the shape of each player's radar plot. By marginalizing over the mixture weights ($\kappa^*$), along with the uncertainty within the mixture components, we can construct a surface under the Gaussian mixture model for each player. One such surface is presented in Figure~\ref{eriksenGMMfig}. This is the surface for Christian Eriksen's assists in $t_1$ over the entire 2016/2017 English Premier League. Note that these surfaces can be constructed at any point in the season and updated once more data becomes available. From the figure it is easy to see where this player had most influence, and we observe a similar pattern to the one seen in the top left plot of Figure~\ref{eriksenfig}. Such plots are a useful way to convey information to a team, an application of which we consider below.
\begin{figure} \centering \includegraphics[scale=0.48]{Eriksen_assists_t1_GMMv3.pdf} \caption{Eriksen assist locations under the Gaussian mixture model for $t_1$ in the 2016/2017 English Premier League} \label{eriksenGMMfig} \end{figure} \subsubsection{Identifying a team's strengths and weaknesses} During the 2016/2017 English Premier League many pundits questioned the ability of LIV's defense, highlighting a weakness on the left-hand side. Looking at the data, this criticism appears fair. Of the goals LIV concede, the assist leading to the goal is most likely to come from the left-hand side of the box (LB), with a $\Delta$ $(x,y)$-location of approximately (50,0); see Figure~\ref{clustfig} for cluster locations. Moreover, they are most likely to concede from these positions in blocks $t_3$ and $t_5$. Therefore, when approaching a game, LIV may want to know which of the opposition players are most likely to be involved in chances at these locations for each block, so that they can attempt to reduce their impact. Let us consider the match LIV vs CRY (23/4/17)---CRY are a team who in recent years have caused LIV problems. We fit our model using all data available before the match is played. From the model, in both $t_3$ and $t_5$ we expect CRY to have 1 chance against LIV (in the match they had 2 chances in both $t_3$ and $t_5$). By integrating over $\phi^*$, $\kappa^*$, $\Sigma^*$ and by applying Bayes' theorem we can calculate the probability of each player being involved in a chance, for each block, at LIV's weak locations. Christian Benteke is the most likely CRY player in $t_3$ to have a chance at the $\Delta$ location (with probability 0.166). Andros Townsend is the most likely in $t_5$, although there is little difference between the probabilities for Townsend and Benteke. Assists are likely to come from James McArthur in $t_3$, or Yohan Cabaye or Jason Puncheon in $t_5$ (with probabilities 0.134 and 0.121 respectively).
The $\Delta$ surface for Benteke in $t_3$ is shown in Figure~\ref{GMMdeltafig}, with the assist surfaces for Cabaye and Puncheon in $t_5$ given in Figure~\ref{GMMassistfig}. In both figures we see the highlighted ability of these players at the locations where LIV are most susceptible. During the game LIV did not stop these players adequately, with Benteke scoring in both $t_3$ and $t_5$, Cabaye assisting in $t_3$ and Puncheon assisting in $t_5$. \begin{figure} \centering \includegraphics[scale=0.48]{Benteke_delta_t3_GMM_toApr.pdf} \caption{Benteke $\Delta$ locations under the Gaussian mixture model in $t_3$ using data to $22^{\textrm{nd}}$ April 2017} \label{GMMdeltafig} \end{figure} \section{Discussion} \label{disc} Within this paper we have provided a framework to determine the number of chances a team creates, along with the players and locations which make up a chance, in a Bayesian inference setting. Our approach is computationally efficient and utilizes the combination of a Poisson and Gaussian mixture model. We have shown in Section~\ref{app} that inferences under the model are reasonably accurate and have close ties to reality, along with implementable applications (of which we only illustrate a few). In contrast to previous work, we exploit coarser data to identify individual player contributions, rather than modeling the spatial dynamics of a team as a whole. There are a number of ways in which the current work can be extended. Firstly, smoothing techniques can be applied to $\phi^*$ and $\kappa^*$ so that the probabilities of players and mixture components vary smoothly over time (this was not implemented here for computational simplicity). Also, there is some dependence between the player assisting the chance and the player taking the chance. To elucidate, some players link up better with certain teammates than others (often determined by the areas on the pitch in which they play).
This dependence between $A$ and $C$ needs incorporating into the model, which could also allow some network analysis techniques to be implemented. Finally, as an extension to the applications for the proposed methods, an interesting area of future work is anomaly detection. This would allow us to detect a change in a player's level; for instance, becoming a starting player rather than a substitute could increase a player's contribution in the earlier blocks of a game ($t_1$--$t_4$). Techniques discussed in \cite{heard_2010} could be used as inspiration for methods to detect these changes. \begin{figure}[h!] \centering \includegraphics[scale=0.48]{Cabaye_assists_t5_GMM_toApr.pdf}\vspace{0.1cm} \includegraphics[scale=0.48]{Puncheon_assists_t5_GMM_toApr.pdf} \caption{Assist locations under the Gaussian mixture model in $t_5$ using data to $22^{\textrm{nd}}$ April 2017. \emph{Top}~Cabaye, \hbox{\emph{bottom}}~Puncheon} \label{GMMassistfig} \end{figure} \newpage \bibliographystyle{apalike}
\section{Introduction}\label{sec:intro} Co-training/multiview learning is a problem that asks how to aggregate two views of data into a prediction for the latent label, and was first proposed by \citet{blum1998combining}. Although co-training is an important learning problem, it lacks a unified and rigorous approach to the general setting. The current paper will make an innovative connection between the co-training problem and a peer prediction style mechanism design problem: forecast elicitation without verification, and develop a unified theory for both of them via the same information theoretic approach. We use ``forecasting whether a startup company will succeed'' as our running example. We have two possible sources of information for each startup: the features $X_A$ (e.g. products, business idea, target customer) of the startup; and the survey feedback $X_B$, collected from the crowd (e.g.\ a survey of amateur investors). Sometimes we have access to both the sources, and sometimes we have access to only one of the sources. We want to learn how to forecast the result $Y$ (succeed/fail) of a startup company, using both or one of the sources. We are given a set of predictor candidates $\{P_A\}$ (e.g. a set of hypotheses) such that each predictor candidate $P_A$ maps the features $X_A$ to a forecast for the result $Y$ of the startup (e.g. succeed with 73\% probability, fail with 27\% probability). We are also given a set of predictor candidates $\{P_B\}$ (e.g. a set of aggregation algorithms like majority vote/weighted average) such that each predictor candidate $P_B$ maps the survey feedback $X_B$ to a forecast for the result $Y$. Our goal is to evaluate the performance of a specific pair $P_A,P_B$. The learning problem, learning how to forecast, can be reduced to this goal since if we know how to evaluate the two candidates $P_A,P_B$'s performance, we can select the two candidates $P_A^*,P_B^*$ which have the highest performance and use them to forecast.
Given a batch of past startup data each with the features $X_A$, the crowdsourced feedback $X_B$, and the result $Y$, we can evaluate the performance of the predictors through many existing measurements (e.g. proper scoring rules, loss functions). This evaluation method is related to the supervised learning setting. However, there may be only very few data points about the startups with results $Y$.\footnote{For example, if we focus on startups in emerging areas such as cryptocurrencies or self-driving technology, there are very few startups labeled with results.} When we only use a few labeled data points to train the predictor, the predictor will likely overfit. Thus, we can boldly ask: (*Learning) \emph{Can we evaluate the performance of the predictor candidates, as well as learn how to forecast the ground truth $Y$, without access to any data labeled with $Y$?} (See Figure~\ref{fig:peer1}) {\setlength\intextsep{0pt} \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{./ppml.png} \caption{Problem (*): Finding the common ground truth} \label{fig:peer1} \end{figure}} It is impossible to solve this problem without making an additional assumption on the relationship between $X_A,X_B$ and $Y$. However, it turns out we can solve this problem with a natural assumption: conditioning on $Y$, $X_A$ and $X_B$ are independent. This assumption states that $Y$ contains all common information between $X_A$ and $X_B$ (see Section~\ref{sec:model} for more discussion). With this assumption, a naive approach is to learn the joint distribution of $X_A$ and $X_B$ using the past data, and then solve the relationship between $Y$ and $X_A,X_B$ by some calculations, using the fact that $X_A$ and $X_B$ are independent conditioning on $Y$. However, this naive approach will not work if either $X_A$ or $X_B$ has very high dimension. We will address this issue using learning methods. Before we go further on the learning problem, let's consider a corresponding mechanism design problem.
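The conditional independence assumption underlies the factorisation $\Pr(Y\mid X_A,X_B)\propto \Pr(Y)\Pr(X_A\mid Y)\Pr(X_B\mid Y)$, which is what the naive approach exploits. A minimal Python sketch, with made-up likelihood values for the startup example, is:

```python
def posterior(prior, like_a, like_b):
    """Pr(Y = y | X_A, X_B) for each y, valid when X_A and X_B are
    independent conditioned on Y (the key assumption of the paper)."""
    unnorm = [p * la * lb for p, la, lb in zip(prior, like_a, like_b)]
    s = sum(unnorm)
    return [u / s for u in unnorm]

# Hypothetical startup example: Y in {succeed, fail}.
prior = [0.5, 0.5]    # Pr(Y = y)
like_a = [0.8, 0.2]   # Pr(X_A = x_a | Y = y), made-up
like_b = [0.9, 0.1]   # Pr(X_B = x_b | Y = y), made-up
post = posterior(prior, like_a, like_b)
```

When $X_A$ or $X_B$ is high dimensional these conditional likelihoods cannot be tabulated directly, which is exactly the difficulty the learning methods in this paper are designed to circumvent.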
In the scenario where the forecasts are provided by human beings, we want to ask a mechanism design problem: (**Mechanism design) \emph{Can we design proper \emph{instant} reward schemes to incentivize high quality forecast for $Y$ without instant access to $Y$?} (See Figure~\ref{fig:peer}) {\setlength\intextsep{0pt} \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{./ppml2.png} \caption{Problem (**): Forecast elicitation} \label{fig:peer} \end{figure} } People will obtain instant payments from \emph{instant} reward schemes. If we do not require the reward schemes to be instant, proper scoring rules will work by rewarding people in the future after $Y$ is revealed. It turns out the above learning problem (*) and mechanism design problem (**) are essentially the same, since there is a natural correspondence between an evaluation of their performance and their rewards. The mechanism design applications still require the conditional independent assumption. To address the two problems, a first try would be rewarding the predictors according to their ``agreement'', since high quality predictors should have a lot of agreement with each other. However, if we train the predictors based on this criterion, then the output of the training process will be two meaningless constant predictors which perfectly agree with each other (e.g. always forecast 100\% success). We call this problem the ``naive agreement'' issue. Note that the mechanism design problem (**) is closely related to the peer prediction literature, incentivizing high quality information reports without verification. It is natural to leverage the techniques and insights from peer prediction to address problems (*) and (**). In fact, the peer prediction literature provides an information theoretic idea to address the ``naive agreement'' issue, that is, replacing ``agreement'' by mutual information. 
In the current paper, we will show that with a natural assumption, conditioning on $Y$, $X_A$, and $X_B$ are independent, we can address problem (*) and (**) simultaneously via rewarding the predictors the mutual information between them and using the predictors' reward as the evaluation of their performance. \paragraph{Our contribution} We build a natural connection between mechanism design and machine learning by simultaneously addressing a learning problem and a mechanism design problem in the context where ground truth is unknown, via the same information theoretic approach. \begin{description} \item [Learning] We focus on the co-training problem~\cite{blum1998combining}: learning how to forecast $Y$ using two sources of information $X_A$ and $X_B$, without access to any data labeled with ground truth $Y$ (Section~\ref{sec:model}). By making a typical assumption in the co-training literature, conditioning on $Y$, $X_A$ and $X_B$ are independent, we reduce the learning problem to an optimization problem $\max_{P_A,P_B}MIG^f(P_A,P_B)$ such that solving the learning problem is equivalent to picking the $P_A^*,P_B^*$ that maximize $MIG^f(P_A,P_B)$, i.e., the $f$-mutual information gain between $P_A$ and $P_B$ (Section~\ref{sec:commontruth}). Formally, we define \emph{the Bayesian posterior predictor} as the predictor that maps any input information $X=x$ to its Bayesian posterior forecast for $Y=y$, i.e., $Pr(Y=y|X=x)$. Then when both $P_A,P_B$ are Bayesian posterior predictors, $MIG^f(P_A,P_B)$ is maximized and the maximal value is the $f$-mutual information between $X_A$ and $X_B$. With an additional mild restriction on the prior, $MIG^f(P_A,P_B)$ is maximized if and only if both $P_A,P_B$ are permuted versions of the Bayesian posterior predictor. We also design another family of optimization goals, \emph{$PS$-gain}\footnote{$PS$ is a proper scoring rule.}, based on the family of proper scoring rules (Section~\ref{sec:psgain}). 
We can also reduce the learning problem to the $PS$-gain optimization problem. We will show that a special case of the $PS$-gain, obtained by picking $PS$ to be the logarithmic scoring rule $LSR$, corresponds to the maximum likelihood estimator method. The range of applications of $PS$-gain is more limited when compared with the range of applications of the $f$-mutual information gain, since the application of $PS$-gain requires either one of the information sources to be low dimensional or that we have a simple generative model for the distribution over one of the information sources and ground truth labels, while the $f$-mutual information gain does not have these restrictions. As is typical in related literature, we do not investigate the computational complexity or data requirements of the learning problem. \emph{To the best of our knowledge, this is the first optimization goal in the co-training literature that guarantees that the maximizer corresponds to the Bayesian posterior predictor, without any additional assumption}. Thus, our method optimally aggregates the two sources of information. \vspace{5pt} \item [Mechanism design] Consider the scenario where we elicit forecasts for ground truth $Y$ from agents and pay agents immediately.
Without access to $Y$, given the prior distribution of $Y$, i.e., $\Pr[Y]$,\footnote{This is not a very strong assumption since we do not need the knowledge of the joint distribution over the event and agents' private information.} by assuming that agents' private information is independent conditioning on $Y$ and that the prior satisfies some mild conditions, in the single-task setting (there is only a single forecasting task), we design a \emph{strictly truthful} mechanism, the \emph{common ground mechanism}, where truth-telling is a strict equilibrium (Section~\ref{sec:single}); in the multi-task setting (there are at least two a priori similar forecasting tasks), we design a family of \emph{focal} mechanisms, the \emph{multi-task common ground mechanism $MCG(f)$s}, where the truth-telling equilibrium pays at least as much as any other strategy profile and \emph{strictly} more than any non-permutation strategy profile (Section~\ref{sec:multi}). \end{description} \paragraph{Technical contribution} Our main technical ingredient is a novel performance measurement, the \emph{$f$-mutual information gain}, which is an unbiased estimator of the $f$-mutual information. To give a flavor of this measurement, we give an informal presentation here: both $P_A$ and $P_B$ are assigned a batch of forecasting tasks, and the $f$-mutual information gain between $P_A$ and $P_B$ is \begin{align*} &\text{The agreements between $P_A$'s forecast and $P_B$'s forecast for the same task} \\ &- f^{\star}(\text{The agreements between $P_A$'s forecast and $P_B$'s forecast for different tasks}) \end{align*} {\setlength\intextsep{5pt} \begin{figure}[htp] \centering \includegraphics[width=0.7\linewidth]{./fgain.png} \caption{An unbiased estimator of $f$-mutual information: $f$-mutual information gain. $P_A$ and $P_B$ are assigned three forecasting tasks. $P_A$'s outputs are $(0.7,0.3),(0.1,0.9),(0.5,0.5)$ and $P_B$'s outputs are $(0.6,0.4),(0.2,0.8),(0.4,0.6)$.
To calculate the $f$-mutual information gain between them, we pick a task (e.g. Task no.\ 2) uniformly at random and calculate the ``agreement'' $a_s$ between $P_A$'s and $P_B$'s forecasts for this task; we also pick a pair of distinct tasks $(i,j)$ uniformly at random (e.g. (Task no.\ 1, Task no.\ 2)) and calculate the ``agreement'' $a_d$ between $P_A$'s forecast for task $i$ and $P_B$'s forecast for task $j$. The $f$-mutual information gain is then $a_s-f^{\star}(a_d)$. The formal definition (Section~\ref{sec:fgain}) actually uses the empirical expectations of $a_s$ and $f^{\star}(a_d)$.} \label{fig:fgain} \end{figure} } where $f^{\star}$ is the conjugate of the convex function $f$. With this measurement, two agreeing constant predictors have small gain since their outputs have large agreements both for the same task and for different tasks. The formal definition will be introduced in Section~\ref{sec:fgain} and the agreement measure is introduced in Definition~\ref{def:agree}. The $f$-mutual information gain is conceptually similar to the correlation payment scheme proposed by \citet{dasgupta2013crowdsourced} (in the binary choice setting), and \citet{2016arXiv160303151S} (in the multiple choice setting), which pays agents ``the agreement for the same task \emph{minus} the agreement for distinct tasks''. In \citet{dasgupta2013crowdsourced} and \citet{2016arXiv160303151S}, the payment scheme is designed for discrete signals and the measure of agreements is a simple indicator function. \citet{2016arXiv160501021K} show that this correlation payment is related to a special $f$-mutual information. Thus, the $f$-mutual information gain can be seen as an extension of the correlation payment scheme that works for forecast reports. \subsection{Applications}\label{sec:application} In our startup running example, we consider the situation where one source of information is the features and another source of information is the crowdsourced feedback.
In fact, our results apply to all kinds of information sources. For example, we can make both sources features or crowdsourced feedback. Different setups for the information sources and predictor candidates can bring different applications of our results. Let's consider the ``learning with noisy labels'' problem where the labels in the training data are a noisy version of the ground truth labels $Y$ and the noise is independent. We can map this problem into our framework by letting $X_B$ be the noisy label of features $X_A$. That is, $X_B$ is a noisy version of $Y$. Our framework guarantees that the Bayesian posterior predictor that forecasts $Y$ using $X_A$ must be part of a maximizer of the optimization problem. However, there are many other maximizers. For example, since $X_A$ and $X_B$ are independent conditioning on $X_B$, the Bayesian posterior predictor that forecasts $X_B$ using $X_A$ is also part of a maximizer, since the scenario $Y=X_B$ also satisfies the conditional independence assumption. If $X_B$ had much higher dimension than $Y$, we would not have this issue. But $X_B$ has the same signal space as $Y$ in the learning with noisy labels problem. Thus, it's impossible to eliminate other maximizers without any side information here. With some side information (e.g. a candidate set $\mathcal{F}$, like linear regressions, that only contains our desired maximizer), it's possible to obtain the Bayesian posterior predictor that forecasts $Y$ using $X_A$. Note that our framework does not require a pre-estimation of the transition probability that transits the ground truth label $Y$ to the noisy ground truth label $X_B$, since our framework treats this transition probability, which corresponds to the predictor $P_B$, as a parameter as well and learns the correct forecaster $P_A$ and the transition probability $P_B$ simultaneously. \citet{ratner2016data} propose a method to collect massive labels by asking the crowds to write heuristics to label the instances.
Each instance is associated with many noisy labels outputted by the heuristics. In their setting, the crowds use a different source of information from the learning algorithm (e.g. the learning algorithm uses the biological description of the genes and the crowds use the scientific papers about the genes). Thus, the conditional independence assumption is natural here and we can map this setting's training problem into our framework. \citet{ratner2016data} preprocess the collected labels to approximate ground truth by assuming a particular information structure model on the crowds. Our framework is model-free and does not need to preprocess the collected labels since we can learn the best forecaster (predictor $P_A$) and the best processing/aggregation algorithm (predictor $P_B$) simultaneously. Moreover, since the highest evaluation value of the predictors $P_A,P_B$ is the $f$-mutual information between $X_A$ and $X_B$, our results provide a method to calculate the $f$-mutual information between any two sources of information $X_A,X_B$ of any format. \citet{2016arXiv160501021K} propose a framework for designing information elicitation mechanisms that reward truth-telling by paying each agent the $f$-mutual information between her report and her peers' reports. Thus, the $f$-mutual information gain method can be combined with this framework to design information elicitation mechanisms when the information has a complicated format. \subsection{Related work} \paragraph{Learning} Co-training/multiview learning was first proposed by \citet{blum1998combining} and explored by many works (e.g. \citet{dasgupta2002pac,collins1999unsupervised}). \citet{xu2013survey, li2016multi} give surveys on this literature. Although co-training is an important learning problem, it lacks a unified theory and a solid theoretical guarantee for the general model. Most traditional co-training methods require additional restrictions on the hypothesis space (e.g.
weakly good hypotheses) to address the ``naive agreement'' issue and fail to deal with soft hypotheses. Soft hypotheses output a continuous signal (as opposed to hard hypotheses, which output a discrete signal) and are typically required to fully aggregate the information from two sources. \citet{becker1996mutual} deals with a feature learning problem which is very similar to the co-training problem, seeking to maximize the Shannon mutual information between the outputs of two functions. However, their work only considers hard (not soft) hypotheses and lacks a solid theoretical analysis for the maximizer. \citet{kakade2007multi} consider multi-view regression and maximize the correlation between the two hypotheses. Their method captures the ``mutual information'' idea (in fact, correlation is a special $f$-mutual information \cite{2016arXiv160501021K}) but their model has a very specific setup and the analysis cannot be extended to other co-training problems. In contrast, we propose a simple, powerful and general information theoretic framework, $f$-mutual information gain, that has a solid theoretical guarantee, works for soft hypotheses and addresses the ``naive agreement'' issue without any additional assumption. \citet{natarajan2013learning}, \citet{sukhbaatar2014learning} and many other works (e.g. \citet{angluin1988learning,khardon2007noise,scott2013classification}) consider the learning with noisy labels problem. \citet{natarajan2013learning} consider binary labels and calibrate the original loss function such that the Bayesian posterior predictor that forecasts ground truth $Y$ is an optimizer of the calibrated loss. \citet{sukhbaatar2014learning} extend this work to the multiclass setting. These works require additional estimation steps to learn the transition probability that transits the ground truth labels to the noisy labels and fix this transition probability in their calibration step.
In contrast, by mapping this problem into our framework (Section~\ref{sec:application}), we do not need the additional estimation steps to make the calibrated forecaster part of a maximizer of our optimization problem, and can incorporate any kind of side information to learn the calibrated forecaster and the true transition probability simultaneously. Moreover, our results can handle a more complicated setting where each instance is labeled with multiple labels. Rather than preprocessing the labels by a particular algorithm (e.g. majority vote, weighted average, spectral method) and assuming some information structure model among the crowds \cite{ratner2016data}, our framework is model-free and can learn the best calibrated forecaster (predictor $P_A$) and the best processing algorithm (predictor $P_B$) simultaneously. \citet{raykar2010learning} also \emph{jointly} learn the calibrated forecaster and the distribution over the crowdsourced feedback and ground truth labels. \citet{raykar2010learning} use the maximum likelihood estimator and assume a simple generative model for the distribution over the crowdsourced feedback and the ground truth labels, namely that, conditioning on the ground truth label, the crowdsourced feedback is drawn from a binomial distribution, while our framework is model-free. We also extend the maximum likelihood estimator method in \citet{raykar2010learning} to a general family of estimators, $PS$-gain estimators, based on the family of proper scoring rules, which also \emph{jointly} learn the calibrated forecaster and the distribution. We will show that the range of applications of the $PS$-gain is more limited than that of the $f$-mutual information gain (see Section~\ref{sec:comparison} for more details). \citet{cid2012proper} also uses proper scoring rules to design loss functions that address the learning with noisy labels problem.
However, \citet{cid2012proper} designs a different family of loss functions from the $PS$-gain and cannot jointly learn the calibrated forecaster and the distribution. Generative Adversarial Networks (GAN) \cite{goodfellow2014generative} combine game theory and learning theory to make innovative progress. We also combine game theory and learning theory by proposing a peer prediction game between two predictors. The game in GAN is a zero-sum competitive game while the game in the current paper is collaborative. Several learning problems (e.g. finding the pose of an object in an image \cite{bell1995information}, blind source separation \cite{cardoso1997infomax}, feature selection \cite{peng2005feature}) use mutual information maximization (infomax) as their optimization goal. Some of these problems require data labeled with ground truth and some of them have a very different problem setup from ours. We borrow the techniques about the duality of $f$-divergence from \citet{nguyen2009surrogate,nguyen2010estimating}. \citet{nguyen2009surrogate} show a correspondence between the $f$-divergence and the surrogate loss in the \emph{binary supervised learning} setting and \citet{nguyen2010estimating} propose a way to estimate the $f$-divergence between two high dimensional random variables. We apply the duality of $f$-divergence to an unsupervised learning problem and are not restricted to the binary setting. We also differ from the crowdsourcing literature that infers ground truth answers from agents' reports (e.g. \cite{zhou2012learning,karger2014budget,zhang2014spectral,dalvi2013aggregating}) in the sense that their agents' reports are a simple choice (e.g. A, B, C, D) while in our setting, the report can come from a space larger than the space of ground truth answers, perhaps even a very high dimensional vector.
\paragraph{Mechanism design} Our mechanism design setting differs from the traditional peer prediction literature (e.g. \cite{MRZ05,prelec2004bayesian,dasgupta2013crowdsourced,2016arXiv160501021K,2016arXiv160303151S}) since we are eliciting forecasts rather than a simple signal. We can discretize the forecast report and apply the results of the traditional peer prediction literature. However, since the forecast is discretized, this will only provide approximate truthfulness and fail to yield focal mechanisms which pay truth-telling \emph{strictly} better than any other non-permutation equilibrium, while our mechanisms are focal in the setting with at least two tasks. \citet{witkowski2017proper} consider the forecast elicitation situation and assume that they have an unbiased estimator of the optimal forecast, while we make an additional conditional independence assumption but do not need the unbiased estimator. \citet{Liu:2017:MAP:3033274.3085126,liuchen} connect mechanism design with learning by using learning methods to design peer prediction mechanisms. In the setting where several agents are asked to label a batch of instances, \citet{Liu:2017:MAP:3033274.3085126} design a peer prediction mechanism where each agent is paid according to her answer and a reference answer generated by a classification algorithm using other agents' reports. \citet{liuchen} also use surrogate loss functions as tools to develop a multi-task mechanism that achieves truthful elicitation in dominant strategy when the mechanism designer only has access to agents' reports. Instead of using learning methods to design peer prediction mechanisms, our work uses peer prediction mechanism design techniques to address a learning problem. Moreover, our mechanism design problem has a very different setup from \citet{Liu:2017:MAP:3033274.3085126, liuchen}.
\citet{agarwal2015consistent} connect learning theory with information elicitation by showing the equivalence between the calibrated surrogate losses in \emph{supervised} learning and the elicitation of certain properties of the underlying conditional label distribution. Both our learning problem and our mechanism design problem have a very different setup from theirs. \paragraph{Independent work} Like the current paper, \citet{DBLP:journals/corr/abs-1802-07572} also uses Shannon mutual information to propose an information theoretic training objective that can deal with soft hypotheses/classifiers. However, the optimization functions from these two works are different. We also use a more general information measure, $f$-mutual information, which has Shannon mutual information as a special case, and provide a formal analysis for this general framework. Additionally, we propose an innovative connection between co-training and peer prediction. \section{Preliminaries}\label{sec:prelim} For a positive integer $N$, let $[N]:=\{1,2,...,N\}$; for any function $\phi:[N]\mapsto \mathbb{R}$, we use $(\phi(y))_{y\in[N]}$ to represent the vector $(\phi(1),\phi(2),...,\phi(N))\in \mathbb{R}^N$. Given a finite set $\Sigma$, $\Delta_{\Sigma}$ is the set of all distributions over $\Sigma$. \subsection{$f$-divergence and Fenchel's duality} \paragraph{$f$-divergence~\cite{ali1966general,csiszar2004information}} $f$-divergence $D_f:\Delta_{\Sigma}\times \Delta_{\Sigma}\mapsto \mathbb{R}$ is a non-symmetric measure of the difference between distribution $\mathbf{p}\in \Delta_{\Sigma} $ and distribution $\mathbf{q}\in \Delta_{\Sigma} $ and is defined to be $$D_f(\mathbf{p},\mathbf{q})=\sum_{\sigma\in \Sigma} \mathbf{q}(\sigma)f\bigg( \frac{\mathbf{p}(\sigma)}{\mathbf{q}(\sigma)}\bigg)$$ where $f:\mathbb{R}\mapsto\mathbb{R}$ is a convex function and $f(1)=0$. Here we introduce two $f$-divergences in common use: KL divergence and Total Variation Distance.
\begin{example}[KL divergence] Choosing $x\log(x)$ as the convex function $f(x)$, $f$-divergence becomes KL divergence $D_{KL}(\mathbf{p},\mathbf{q})=\sum_{\sigma}\mathbf{p}(\sigma)\log\frac{\mathbf{p}(\sigma)}{\mathbf{q}(\sigma)}$. \end{example} \begin{example}[Total Variation Distance] Choosing $|x-1|$ as the convex function $f(x)$, $f$-divergence becomes Total Variation Distance $D_{tvd}(\mathbf{p},\mathbf{q})=\sum_{\sigma}|\mathbf{p}(\sigma)-\mathbf{q}(\sigma)|$. \end{example} \begin{definition}[Fenchel Duality \cite{rockafellar1966extension}] Given any function $f:\mathbb{R}\mapsto \mathbb{R}$, we define its convex conjugate $f^{\star}$ as a function that also maps $\mathbb{R}$ to $\mathbb{R}$ such that $$f^{\star}(x)=\sup_{t} tx-f(t).$$ \end{definition} \begin{lemma}[Dual version of $f$-divergence~\cite{nguyen2009surrogate,nguyen2010estimating}]\label{lemma:dualdivergence} $$ D_f(\mathbf{p},\mathbf{q}) \geq \sup_{u\in \mathcal{G}} \mathbb{E}_{\mathbf{p}} u- \mathbb{E}_{\mathbf{q}}f^{\star}(u)=\sup_{u\in \mathcal{G}} \sum_{\sigma}u(\sigma) \mathbf{p}(\sigma)- \sum_{\sigma}f^{\star}(u(\sigma))\mathbf{q}(\sigma) $$ where $\mathcal{G}$ is a set of functions that map $\Sigma$ to $\mathbb{R}$. The equality holds if and only if $u(\sigma)=u^*(\sigma)\in \partial{f}(\frac{\mathbf{p}(\sigma)}{\mathbf{q}(\sigma)})$, i.e., the subdifferential of $f$ on value $\frac{\mathbf{p}(\sigma)}{\mathbf{q}(\sigma)}$. \end{lemma} We call $(u^*,f^{\star}(u^*))$ \emph{a pair of best distinguishers}. This dual version of $f$-divergence is introduced by \citet{nguyen2009surrogate} and also plays a key role in the design of a type of generative adversarial networks, $f$-GANs~\cite{nowozin2016f}.
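The definitions above are easy to check numerically. Below is a minimal Python sketch (the distributions $\mathbf{p},\mathbf{q}$ are hypothetical), assuming the convention $D_f(\mathbf{p},\mathbf{q})=\sum_\sigma \mathbf{q}(\sigma)f(\mathbf{p}(\sigma)/\mathbf{q}(\sigma))$, which matches the KL row ($f(t)=t\log t$) and Total Variation row ($f(t)=|t-1|$) of the table of distinguishers; it also verifies that the dual lower bound of Lemma~\ref{lemma:dualdivergence} is tight at $u^*=f'(\mathbf{p}/\mathbf{q})$:

```python
import math

def f_divergence(p, q, f):
    """D_f(p, q) = sum_sigma q(sigma) * f(p(sigma) / q(sigma))."""
    return sum(qs * f(ps / qs) for ps, qs in zip(p, q))

f_kl = lambda t: t * math.log(t)   # KL divergence
f_tvd = lambda t: abs(t - 1)       # Total Variation Distance

# Hypothetical distributions over a three-element signal space.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

kl_direct = sum(ps * math.log(ps / qs) for ps, qs in zip(p, q))
tvd_direct = sum(abs(ps - qs) for ps, qs in zip(p, q))
assert abs(f_divergence(p, q, f_kl) - kl_direct) < 1e-9
assert abs(f_divergence(p, q, f_tvd) - tvd_direct) < 1e-9

# Dual bound is tight at u*(sigma) = f'(p/q) = 1 + log(p/q),
# with conjugate f*(u) = exp(u - 1) for f(t) = t log t.
u_star = [1 + math.log(ps / qs) for ps, qs in zip(p, q)]
dual = (sum(ps * u for ps, u in zip(p, u_star))
        - sum(qs * math.exp(u - 1) for qs, u in zip(q, u_star)))
assert abs(dual - kl_direct) < 1e-9
```

Any suboptimal $u$ in the last check would give a strictly smaller value, which is exactly what the $f$-mutual information gain exploits.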
\subsection{$f$-mutual information} Given two random variables $X,Y$ whose realization spaces are $\Sigma_X$ and $\Sigma_Y$, let $\mathbf{U}_{X,Y}$ and $\mathbf{V}_{X,Y}$ be two probability measures where $\mathbf{U}_{X,Y}$ is the joint distribution of $(X,Y)$ and $\mathbf{V}_{X,Y}$ is the product of the marginal distributions of $X$ and $Y$. Formally, for every pair of $(x,y)\in\Sigma_X\times\Sigma_Y$, $$\mathbf{U}_{X,Y}(X=x,Y=y)=\Pr[X=x,Y=y]\qquad \mathbf{V}_{X,Y}(X=x,Y=y)=\Pr[X=x]\Pr[Y=y].$$ If $\mathbf{U}_{X,Y}$ is very different from $\mathbf{V}_{X,Y}$, the mutual information between $X$ and $Y$ should be high since knowing $X$ changes the belief for $Y$ a lot. If $\mathbf{U}_{X,Y}$ equals $\mathbf{V}_{X,Y}$, the mutual information between $X$ and $Y$ should be zero since $X$ is independent of $Y$. Intuitively, the ``distance'' between $\mathbf{U}_{X,Y}$ and $\mathbf{V}_{X,Y}$ represents the mutual information between $X$ and $Y$. \begin{definition}[$f$-mutual information \cite{2016arXiv160501021K}] The $f$-mutual information between $X$ and $Y$ is defined as $$MI^f(X;Y)=D_f(\mathbf{U}_{X,Y},\mathbf{V}_{X,Y})$$ where $D_f$ is $f$-divergence. $f$-mutual information is always non-negative \cite{2016arXiv160501021K}. \end{definition} $f$-mutual information is used in the peer prediction literature since, when information is measured by $f$-mutual information, any ``data processing'' on either of the random variables cannot increase the amount of information between them. Thus, in peer prediction, if we pay agents according to the $f$-mutual information between their information and their peers' information, agents will be incentivized to report all information to maximize their payments\footnote{In the current paper, we do not directly use the data processing inequality of $f$-mutual information. Thus, we omit the formal introduction here. The interested reader is referred to \citet{2016arXiv160501021K}. }.
Two examples of $f$-mutual information are Shannon mutual information~\cite{cover2006elements} (choosing the $f$-divergence to be KL divergence) and $MI^{tvd}(X;Y):=\sum_{x,y}|\Pr[X=x,Y=y]-\Pr[X=x]\Pr[Y=y]|$ (choosing the $f$-divergence to be Total Variation Distance). We define $K(X=x,Y=y)$ as the ratio between $U_{X,Y}(x,y)$ and $V_{X,Y}(x,y)$, i.e., $$K(X=x,Y=y):=\frac{\Pr[X=x,Y=y]}{\Pr[X=x]\Pr[Y=y]}=\frac{\Pr[Y=y|X=x]}{\Pr[Y=y]}=\frac{\Pr[X=x|Y=y]}{\Pr[X=x]}.$$ $K(X=x,Y=y)$ represents the ``\textbf{pointwise mutual information} (PMI)'' between $X=x$ and $Y=y$. Lemma~\ref{lemma:dualdivergence} directly implies: \begin{lemma}[Dual version of $f$-mutual information]\label{lemma:duality} $$ MI^f(X;Y) \geq \sup_{u\in \mathcal{G}} \mathbb{E}_{U_{X,Y}} u- \mathbb{E}_{V_{X,Y}}f^{\star}(u)$$ where $\mathcal{G}$ is a set of functions that map $\Sigma_X\times \Sigma_Y$ to $\mathbb{R}$. The equality holds if and only if $u(x,y)=u^*(x,y)\in \partial{f}(K(X=x,Y=y))$. \end{lemma} \setlength{\textfloatsep}{0pt} \begin{table}\label{table:distinguishers} \centering \begin{tabular}{llll} \toprule {$f$-divergence} & {$f(t)$} & {$u^*(x,y)\in\partial{f}(K(x,y))$} & {$f^{\star}(u^*(x,y))$} \\ \midrule Total Variation Distance & $|t-1|$ & sign($\log K(x,y)$) & sign($\log K(x,y)$) \\ \midrule KL divergence & $t\log t$ & $1+\log K(x,y)$ & $K(x,y)$ \\ \midrule Reverse KL & $-\log t$ & $-\frac{1}{K(x,y)}$ & $-1+\log K(x,y)$ \\ \midrule Pearson $\chi^2$ & $(t-1)^2$ & $2(K(x,y)-1)$ & $(K(x,y))^2-1$ \\ \midrule Squared Hellinger & $(\sqrt{t}-1)^2$ & $1-\sqrt {\frac{1}{K(x,y)}}$ & $\sqrt {K(x,y)}-1$\\ \bottomrule \end{tabular} \caption{Reference for common $f$-divergences and corresponding pairs of best distinguishers $(u^*(x,y),f^{\star}(u^*(x,y)))$ of $f$-mutual information.
$K(x,y)=K(X=x,Y=y)$ (PMI).} \end{table} \subsection{Proper scoring rules} A scoring rule $PS: \Sigma \times \Delta_{\Sigma} \mapsto \mathbb{R}$ \cite{winkler1969scoring,gneiting2007strictly} takes in a signal $\sigma \in \Sigma$ and a distribution over signals $\mathbf{p} \in \Delta_{\Sigma}$ and outputs a real number. A scoring rule is \emph{proper} if, whenever the first input is drawn from a distribution $\mathbf{p}$, then $\mathbf{p}$ will maximize the expectation of $PS$ over all possible inputs in $\Delta_{\Sigma}$ to the second coordinate. A scoring rule is called \emph{strictly proper} if this maximum is unique. We will assume throughout that the scoring rules we use are strictly proper. Slightly abusing notation, we can extend a scoring rule to be $PS: \Delta_{\Sigma} \times \Delta_{\Sigma} \mapsto \mathbb{R}$ by simply taking $PS(\mathbf{p}, \mathbf{q}) = \mathbb{E}_{\sigma \leftarrow \mathbf{p}}PS(\sigma, \mathbf{q})$. We note that this means that any proper scoring rule is linear in the first argument. \begin{example}[Log Scoring Rule~\cite{winkler1969scoring,gneiting2007strictly}]\label{eg:lsr} Fix an outcome space $\Sigma$ for a signal $\sigma$. Let $\mathbf{q} \in \Delta_{\Sigma}$ be a reported distribution. The Logarithmic Scoring Rule maps a signal and reported distribution to a payoff as follows: $$LSR(\sigma,\mathbf{q})=\log (\mathbf{q}(\sigma)).$$ Let the signal $\sigma$ be drawn from some random process with distribution $\mathbf{p} \in \Delta_\Sigma$. Then the expected payoff of the Logarithmic Scoring Rule is $$ \mathbb{E}_{\sigma \leftarrow \mathbf{p}}[LSR(\sigma,\mathbf{q})]=\sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{q}(\sigma)=LSR(\mathbf{p},\mathbf{q}).$$ This value will be maximized if and only if $\mathbf{q}=\mathbf{p}$. \end{example} \subsection{Property of the pointwise mutual information} We will introduce a simple property of the pointwise mutual information that we will use multiple times in the future.
In addition to the several different forms of the pointwise mutual information (e.g. joint distribution over product of the marginal distributions, posterior over prior), if there exists a latent random variable $Y$ such that random variables $X_A$ and $X_B$ are independent conditioning on $Y$, we can also represent the pointwise mutual information between $X_A$ and $X_B$ by the ``agreement'' between the ``relationship'' between $X_A$ and $Y$ and the ``relationship'' between $X_B$ and $Y$. \begin{claim}\label{claim:ci} When random variables $X_A$, $X_B$ are independent conditioning on $Y$, \begin{align*} K(X_A=x_A,X_B=x_B) =&\sum_y {\Pr[Y=y]}K(X_A=x_A,Y=y) K(X_B=x_B,Y=y)\\ =&\sum_y \Pr[Y=y|X_A=x_A] K(X_B=x_B,Y=y)\\ =&\sum_y \frac{\Pr[Y=y|X_A=x_A]\Pr[Y=y|X_B=x_B]}{\Pr[Y=y]}. \end{align*} \end{claim} We defer the proof to the appendix. \section{General Model and Assumptions}\label{sec:model} Let $X_A,X_B,Y$ be three random variables and define the prior $Q$ as the joint distribution over $X_A,X_B,Y$. We want to forecast the ground truth $Y$ whose realization is a signal in a finite set $\Sigma$. $X_A, X_B$ are two sources of information that are related to $Y$. $X_A$'s realization is a signal in a finite set $\Sigma_A$. $X_B$'s realization is a signal in a finite set $\Sigma_B$. We may have access to both of the realizations of $X_A$ and $X_B$ or only one of them. Thus, we need to learn the relationship between $X_A, X_B$ and $Y$ to forecast $Y$. It's impossible to learn by only accessing the samples of $X_A, X_B$ without additional assumptions. We make the following conditional independence assumption: \begin{assumption}[Conditional independence]\label{assume:coni} We assume that conditioning on $Y$, $X_A$ and $X_B$ are independent. \end{assumption} Intuitively, $Y$ can be seen as the ``intersection'' between $X_A$ and $X_B$.
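Claim~\ref{claim:ci} is easy to sanity-check numerically: build a small joint distribution that satisfies Assumption~\ref{assume:coni} and compare both sides of the identity. A minimal Python sketch (all numbers are hypothetical):

```python
# Binary Y, X_A, X_B with hypothetical conditionals; the joint is built so
# that X_A and X_B are independent conditioning on Y.
pY = [0.6, 0.4]
pA_given_Y = [[0.8, 0.2], [0.3, 0.7]]  # pA_given_Y[y][xa] = Pr[X_A=xa | Y=y]
pB_given_Y = [[0.9, 0.1], [0.4, 0.6]]  # pB_given_Y[y][xb] = Pr[X_B=xb | Y=y]

def joint(xa, xb):  # Pr[X_A=xa, X_B=xb]
    return sum(pY[y] * pA_given_Y[y][xa] * pB_given_Y[y][xb] for y in range(2))

pA = [sum(joint(xa, xb) for xb in range(2)) for xa in range(2)]  # Pr[X_A=xa]
pB = [sum(joint(xa, xb) for xa in range(2)) for xb in range(2)]  # Pr[X_B=xb]

for xa in range(2):
    for xb in range(2):
        # Pointwise mutual information K(xa, xb), computed directly ...
        K = joint(xa, xb) / (pA[xa] * pB[xb])
        # ... and via the claim: sum_y Pr[Y=y|xa] Pr[Y=y|xb] / Pr[Y=y].
        post_a = [pY[y] * pA_given_Y[y][xa] / pA[xa] for y in range(2)]
        post_b = [pY[y] * pB_given_Y[y][xb] / pB[xb] for y in range(2)]
        K_claim = sum(post_a[y] * post_b[y] / pY[y] for y in range(2))
        assert abs(K - K_claim) < 1e-9
```

The identity fails in general when the conditional independence assumption is dropped, which is why the assumption is needed.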
To better understand this assumption and its limitations we return to our running example where the variable $Y$ is the success of a start-up. In this case, if both $X_A$ and $X_B$ contain the sex of the CEO (which we assume is independent of $Y$), then this assumption will not hold. To make it hold, either $Y$ would need to be redefined to contain the sex of the CEO, or this information would need to be removed from either $X_A$ or $X_B$. For the mechanism design application, if the assumption is violated, for example both agents are sexists and forecast using the sex of the CEO, then it is impossible to avoid paying them for this useless/harmful information. \subsection{Well-defined and stable prior} We call $Z$ a \emph{solution} if conditioning on $Z$, $X_A$ and $X_B$ are independent. $Y$ is a solution. However, there are a lot of solutions. For example, conditioning on $X_A$ or $X_B$, $X_A$ and $X_B$ are independent, which means $X_A$ and $X_B$ are both solutions. Thus, we introduce additional restrictions on the prior: the well-defined prior and the stable prior. We will need these restrictions on the prior when we analyze the strictness of our learning algorithm/mechanism. Readers can skip this section without losing the core idea of our results. To infer the relationship between $Y$ and $X_A,X_B$ with only samples of $X_A,X_B$, we cannot do better than to just solve the system of equations (\ref{soe}), given the joint distribution over $X_A,X_B$ induced by $Q$. Our goal is to obtain the Bayesian posterior predictor. Thus, we list a system of equations that the Bayesian posterior predictor satisfies. The system below involves variables $\{\mathbf{a}^{x_A},\mathbf{b}^{x_B}\in\Delta_{\Sigma}\}_{x_A\in \Sigma_A,x_B\in \Sigma_B}$ and $\mathbf{r}\in\Delta_{\Sigma}$. Note that $a^{x_A}_y=\Pr[Y=y|X_A=x_A]$, $b^{x_B}_y=\Pr[Y=y|X_B=x_B]$, $r_y = \Pr[Y=y]$ is a solution; we call it the \emph{desired} solution.
\begin{align}\label{soe} \mathcal{S}(\{\mathbf{a}^{x_A},&\mathbf{b}^{x_B}\}_{x_A\in \Sigma_A,x_B\in \Sigma_B},\mathbf{r})\\ \nonumber :=&\bigg\{\sum_{y\in \Sigma} \frac{a^{x_A}_y b^{x_B}_y}{r_y}-K(X_A=x_A,X_B=x_B)\bigg\}_{x_A\in \Sigma_A,x_B\in \Sigma_B}=0 \end{align} Claim~\ref{claim:ci} shows the above system has the desired solution. Note that any permutation of a solution is still a valid solution\footnote{We may be able to distinguish a solution from its permuted version if we have some side information (e.g. the prior of $Y$/a few $(x_A,x_B,y)$ samples).}. Since we cannot do better than to solve the above system, if the above system only has one ``unique'' solution, in the sense that any two solutions are permuted versions of each other, we call the prior $Q$ a well-defined prior. Formally, \begin{definition}[Well-defined] A prior $Q$ is well-defined if for any two solutions $\{\mathbf{a}^{x_A},\mathbf{b}^{x_B}\}_{x_A\in \Sigma_A,x_B\in \Sigma_B}$, $\mathbf{r}$ and $\{\mathbf{c}^{x_A},\mathbf{d}^{x_B}\}_{x_A\in \Sigma_A,x_B\in \Sigma_B}$, $\mathbf{r}'$ of the system of equations (\ref{soe}), there exists a permutation $\pi: \Sigma\mapsto\Sigma$ such that $\mathbf{r}=\pi \mathbf{r}'$ and, for any $x_A,x_B$, $\mathbf{a}^{x_A}=\pi \mathbf{c}^{x_A}$, $\mathbf{b}^{x_B}=\pi \mathbf{d}^{x_B}$. \end{definition} Well-defined priors exist since, intuitively, if $|\Sigma_A|$ and $|\Sigma_B|$ are large and $|\Sigma|$ is small, it is likely that $Y$ is the ``unique intersection'', as the number of constraints of the system will be much greater than the number of variables. We say a prior is stable if, after fixing part of the desired solution of the system (\ref{soe}), the remaining part is uniquely determined, i.e., it must equal the corresponding part of the desired solution.
\begin{definition}[Stable] A prior $Q$ is stable if, fixing $a^{x_A}_y=\Pr[Y=y|X_A=x_A]$ and $r_y = \Pr[Y=y]$, the system (\ref{soe}) $\mathcal{S}(\{\mathbf{a}^{x_A},\mathbf{b}^{x_B}\}_{x_A\in \Sigma_A,x_B\in \Sigma_B},\mathbf{r})=0$ has a unique solution $\{\mathbf{b}^{x_B}\}_{x_B\in \Sigma_B}$ such that $b^{x_B}_y=\Pr[Y=y|X_B=x_B]$; and, fixing $b^{x_B}_y=\Pr[Y=y|X_B=x_B]$ and $r_y = \Pr[Y=y]$, the system (\ref{soe}) $\mathcal{S}(\{\mathbf{a}^{x_A},\mathbf{b}^{x_B}\}_{x_A\in \Sigma_A,x_B\in \Sigma_B},\mathbf{r})=0$ has a unique solution $\{\mathbf{a}^{x_A}\}_{x_A\in \Sigma_A}$ such that $a^{x_A}_y=\Pr[Y=y|X_A=x_A]$. \end{definition} We require stable priors when we design \emph{strictly} truthful mechanisms. \subsection{Predictors} This section gives the definition of predictors. We have two sets of samples $S_A:=\{x_A^{\ell}\}_{\ell\in {\mathcal{L}_A}}$ and $S_B:=\{x_B^{\ell}\}_{\ell\in {\mathcal{L}_B}}$ which are i.i.d.\ samples of $X_A$ and $X_B$ respectively. For $\ell\in \mathcal{L}_A\cap\mathcal{L}_B$, the $(x_A^{\ell},x_B^{\ell})$s are i.i.d.\ samples of the joint random variable $(X_A,X_B)$. A predictor $P_A:\Sigma_A\mapsto\Delta_{\Sigma}$ for $X_A$ maps $x_A\in\Sigma_A$ to a forecast $P_A(x_A)$ for ground truth $Y$. We similarly define the predictors for $X_B$. We define \emph{the Bayesian posterior predictor} as the predictor that maps any input information $X=x$ to its Bayesian posterior forecast for $Y=y$, i.e., $\Pr(Y=y|X=x)$.
With the conditional independence assumption, we have \begin{align*} \Pr[Y|X_A,X_B]=&\frac{\Pr[Y,X_A,X_B]}{\Pr[X_A,X_B]}\\ \tag{conditional independence} =& \frac{\Pr[Y]\Pr[X_A|Y]\Pr[X_B|Y]}{\Pr[X_A,X_B]}\\ \tag{$K(X_A,X_B)$ is the pointwise mutual information.} =& \frac{\Pr[Y|X_A]\Pr[Y|X_B]}{K(X_A,X_B)\Pr[Y]} \end{align*} When we have access to both of the sources, with $X_A=x_A$ and $X_B=x_B$, given the prior of the ground truth $Y$, we can construct an aggregated forecast for $Y=y$ using $P_A,P_B$: $$\frac{P_A(x_A)(y)P_B(x_B)(y)}{\Pr[Y=y]}\cdot\text{normalization}$$ where $P_A(x_A)(y)$ denotes the probability that the forecast $P_A(x_A)$ assigns to $y$. In this case, if both $P_A$ and $P_B$ are the Bayesian posterior predictor, the aggregated forecast is the Bayesian posterior forecast as well. Thus, it's sufficient to only train $P_A$ and $P_B$. In the following sections, we will show how to train $P_A$ and $P_B$ (Section~\ref{sec:commontruth}), given the two sets of samples $S_A$ and $S_B$, as well as how to incentivize high-quality predictors from the crowds (Section~\ref{sec:forecastelicitation}). \section{Co-training: finding the common ground truth}\label{sec:commontruth} We have a set of candidates $\mathcal{H}_A$ for the predictor for $X_A$ and a set of candidates $\mathcal{H}_B$ for the predictor for $X_B$. We sometimes call each predictor candidate \emph{a hypothesis}. Given the two sets of samples $S_A=\{x_A^{\ell}\}_{\ell\in {\mathcal{L}_A}}$ and $S_B=\{x_B^{\ell}\}_{\ell\in {\mathcal{L}_B}}$, our goal is to figure out the best hypothesis in $\mathcal{H}_A$ and the best hypothesis in $\mathcal{H}_B$ simultaneously. Thus, we need to design a proper ``loss function'' such that the best hypotheses minimize the loss. In fact, we will show how to design a proper ``reward function'' such that the best hypotheses maximize the reward.
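The aggregation formula above can also be verified on a toy prior: under conditional independence, normalizing the product of the two Bayesian posteriors divided by the prior recovers the full posterior $\Pr[Y|X_A,X_B]$. A minimal Python sketch (all numbers are hypothetical):

```python
# Binary Y, X_A, X_B; the joint is built to satisfy conditional independence.
pY = [0.5, 0.5]
pA_given_Y = [[0.7, 0.3], [0.2, 0.8]]  # Pr[X_A=xa | Y=y]
pB_given_Y = [[0.6, 0.4], [0.1, 0.9]]  # Pr[X_B=xb | Y=y]

def joint3(y, xa, xb):  # Pr[Y=y, X_A=xa, X_B=xb]
    return pY[y] * pA_given_Y[y][xa] * pB_given_Y[y][xb]

for xa in range(2):
    for xb in range(2):
        # Exact posterior Pr[Y=y | xa, xb].
        z = sum(joint3(y, xa, xb) for y in range(2))
        exact = [joint3(y, xa, xb) / z for y in range(2)]
        # Aggregated forecast: P_A(xa)(y) * P_B(xb)(y) / Pr[Y=y], normalized.
        pA_x = sum(joint3(y, xa, b) for y in range(2) for b in range(2))
        pB_x = sum(joint3(y, a, xb) for y in range(2) for a in range(2))
        post_a = [sum(joint3(y, xa, b) for b in range(2)) / pA_x for y in range(2)]
        post_b = [sum(joint3(y, a, xb) for a in range(2)) / pB_x for y in range(2)]
        unnorm = [post_a[y] * post_b[y] / pY[y] for y in range(2)]
        agg = [u / sum(unnorm) for u in unnorm]
        assert all(abs(e - a) < 1e-9 for e, a in zip(exact, agg))
```

The normalization constant absorbed here is exactly the pointwise mutual information $K(x_A,x_B)$ from the derivation above, which does not depend on $y$.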
\subsection{$f$-mutual information gain}\label{sec:fgain} \paragraph{$f$-mutual information gain $MIG^f(R)$ (Figure~\ref{fig:fgain})} \begin{description} \item[Hypothesis] We are given $\mathcal{H}_A=\{h_A:\Sigma_A\mapsto \Delta_{\Sigma}\}$, $\mathcal{H}_B=\{h_B:\Sigma_B\mapsto \Delta_{\Sigma}\}$: the sets of hypotheses/predictor candidates for $X_A$ and $X_B$, respectively. \item[Gain] Given a reward function $R:\Delta_{\Sigma}\times\Delta_{\Sigma}\mapsto \mathbb{R}$, \\ for each $\ell\in \mathcal{L}_A\cap \mathcal{L}_B$, reward ``the amount of agreement'' between the two predictor candidates' predictions for task $\ell$, i.e., $$R(h_A(x_A^{\ell}),h_B(x_B^{\ell}));$$ for each pair of distinct tasks $(\ell_A,\ell_B), \ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B$, punish both predictor candidates ``the amount of agreement'' between their predictions for the pair of distinct tasks $(\ell_A,\ell_B)$, i.e., $$f^{\star}(R(h_A(x_A^{\ell_A}),h_B(x_B^{\ell_B}))).$$ The $f$-mutual information gain $MIG^f(R)$ corresponding to the reward function $R$ is \begin{align*} MIG^f(R(h_A,h_B))_{|S_A,S_B}=&\frac{1}{|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell \in \mathcal{L}_A\cap \mathcal{L}_B} R(h_A(x_A^{\ell}),h_B(x_B^{\ell}))\\ -\frac{1}{|\mathcal{L}_A||\mathcal{L}_B|-|\mathcal{L}_A\cap \mathcal{L}_B|}&\sum_{\ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B}f^{\star}(R(h_A(x_A^{\ell_A}),h_B(x_B^{\ell_B}))) \end{align*} \end{description} \begin{lemma}\label{lem:pplearn} The expected total $f$-mutual information gain is maximized over all possible $R$, $h_A$, and $h_B$ if and only if for any $(x_A,x_B)\in\Sigma_A\times\Sigma_B$, $$R(h_A(x_A),h_B(x_B))\in \partial{f}(K(x_A,x_B)).$$ The maximum is $MI^f(X_A;X_B).$ \end{lemma} \begin{proof} $(x_A^{\ell},x_B^{\ell})_{\ell}$ are i.i.d. realizations of $(X_A,X_B)$.
Therefore, the expected $f$-mutual information gain is $\mathbb{E}_{U_{X_A,X_B}} R - \mathbb{E}_{V_{X_A,X_B}} f^{\star}(R).$ The results follow from Lemma~\ref{lemma:duality}. \end{proof} Although any reward function corresponds to an $f$-mutual information gain function, we need to properly design the reward function $R$ such that, fixing $R$, there exist hypotheses that raise the corresponding $f$-mutual information gain $MIG^f(R)$ to the $f$-mutual information between the two sources. We will use the intuition from Lemma~\ref{lem:pplearn} to design such reward functions $R$ in the next section. \subsection{Maximizing the $f$-mutual information gain} In this section, we will construct a special reward function $R^f$ and then show that the maximizers of the corresponding $f$-mutual information gain $MIG^f(R^f)$ are the Bayesian posterior predictors. \begin{definition}[$R^f$]\label{def:agree} We define the reward function $R^f$ as a function that maps the two hypotheses' outputs $\mathbf{p}_1,\mathbf{p}_2\in \Delta_{\Sigma}$ and the vector $\mathbf{p}\in \Delta_{\Sigma}$ to $$R^f(\mathbf{p}_1,\mathbf{p}_2,\mathbf{p}):=g\bigg(\sum_{y}{\frac{\mathbf{p}_1(y) \mathbf{p}_2(y)}{\mathbf{p}(y)} }\bigg)$$ where $g(t)\in \partial{f}(t),\forall t$. When $f$ is differentiable, $$R^f(\mathbf{p}_1,\mathbf{p}_2,\mathbf{p}):=f'\bigg(\sum_{y}{\frac{\mathbf{p}_1(y) \mathbf{p}_2(y)}{\mathbf{p}(y)} }\bigg).$$\end{definition} With this definition of the reward function, fixing $\mathbf{p}\in \Delta_{\Sigma}$, which can be seen as the prior over $Y$, the ``amount of agreement'' between two predictions $\mathbf{p}_1,\mathbf{p}_2$ is an increasing function $g$ of $$\sum_{y}{\frac{\mathbf{p}_1(y) \mathbf{p}_2(y)}{\mathbf{p}(y)} }, $$ which is intuitive and reasonable. The increasing function $g$ is a subgradient (the derivative, when $f$ is differentiable) of the convex function $f$. By carefully choosing the convex function $f$, we can use any increasing function $g$ here.
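A minimal sketch of $R^f$ for two concrete choices of $f$ (Python; function names are ours and the numbers illustrative):

```python
import math

# Sketch of R^f for two f-divergences. The agreement score is
# sum_y p1(y) * p2(y) / p(y), where p plays the role of the prior over Y.
def agreement(p1, p2, p):
    return sum(a * b / r for a, b, r in zip(p1, p2, p))

def reward_kl(p1, p2, p):       # f(t) = t log t,   g(t) = f'(t) = 1 + log t
    return 1.0 + math.log(agreement(p1, p2, p))

def reward_pearson(p1, p2, p):  # f(t) = (t - 1)^2, g(t) = f'(t) = 2(t - 1)
    return 2.0 * (agreement(p1, p2, p) - 1.0)

prior = [0.5, 0.5]
agreeing = reward_kl([0.9, 0.1], [0.9, 0.1], prior)  # predictions agree
opposing = reward_kl([0.9, 0.1], [0.1, 0.9], prior)  # predictions disagree
```

As expected, `agreeing > opposing`: the reward grows with the agreement score, and reporting the prior itself yields agreement $1$ (reward $0$ in the Pearson case).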
\begin{example} Here we present some examples of the $f$-mutual information gain $MIG^f(R^f)$ with reward function $R^f$, associated with different $f$-divergences. We use Table 1 as a reference for $\partial{f}(\cdot)$ and $f^{\star}(\partial{f}(\cdot))$. \vspace{5pt} Total variation distance: \begin{align*} &\frac{1}{|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell \in \mathcal{L}_A\cap \mathcal{L}_B} \mathrm{sign}\bigg(\log\Big[\sum_{y}{\frac{h_A(x_A^{\ell})(y) h_B(x_B^{\ell})(y)}{\mathbf{p}(y)} }\Big]\bigg)\\ &-\frac{1}{|\mathcal{L}_A||\mathcal{L}_B|-|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B} \mathrm{sign}\bigg(\log\Big[\sum_{y}{\frac{h_A(x_A^{\ell_A})(y) h_B(x_B^{\ell_B})(y)}{\mathbf{p}(y)} }\Big]\bigg) \end{align*} KL divergence: \begin{align*} &\frac{1}{|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell \in \mathcal{L}_A\cap \mathcal{L}_B} \bigg(1+\log\Big[\sum_{y}{\frac{h_A(x_A^{\ell})(y) h_B(x_B^{\ell})(y)}{\mathbf{p}(y)} }\Big]\bigg)\\ &-\frac{1}{|\mathcal{L}_A||\mathcal{L}_B|-|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B} \bigg(\sum_{y}{\frac{h_A(x_A^{\ell_A})(y) h_B(x_B^{\ell_B})(y)}{\mathbf{p}(y)} }\bigg) \end{align*} Pearson: \begin{align*} &\frac{1}{|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell \in \mathcal{L}_A\cap \mathcal{L}_B} 2\bigg(\sum_{y}{\frac{h_A(x_A^{\ell})(y) h_B(x_B^{\ell})(y)}{\mathbf{p}(y)} }-1\bigg)\\ &-\frac{1}{|\mathcal{L}_A||\mathcal{L}_B|-|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B} \bigg(\Big(\sum_{y}{\frac{h_A(x_A^{\ell_A})(y) h_B(x_B^{\ell_B})(y)}{\mathbf{p}(y)} }\Big)^2-1\bigg) \end{align*} \end{example} \begin{theorem}\label{thm:ppl} With the conditional independence assumption on $X_A,X_B,Y$, given the samples $S_A,S_B$ and a convex function $f$, we define the optimization goal as the expected $f$-mutual information gain with reward function $R^f$, i.e., \begin{align*}
MIG^f(h_A,h_B,\mathbf{p}):=\mathbb{E}_{X_A,X_B}MIG^f(R^f(h_A,h_B,\mathbf{p}))_{|S_A,S_B}\\ \end{align*} and optimize over all possible hypotheses $h_A:\Sigma_A\mapsto \Delta_{\Sigma}$, $h_B:\Sigma_B\mapsto \Delta_{\Sigma}$ and distribution vectors $\mathbf{p}\in \Delta_{\Sigma}$. We have \begin{description} \item [Solution$\rightarrow$Maximizer:] any solution $Z$ corresponds to a maximizer of $MIG^f(h_A,h_B,\mathbf{p})$\footnote{Given the prior over $Y$, we can fix $\mathbf{p}$ as the prior over $Y$. Without knowing the prior over $Y$, $\mathbf{p}$ becomes a variable of the optimization goal and helps us learn the prior over $Y$. }: for any solution $Z$, $$h_A^*(x_A):=(\Pr[Z=y|X_A=x_A])_y\qquad h_B^*(x_B):=(\Pr[Z=y|X_B=x_B])_y\footnote{Recall that we use $(\phi(y))_{y\in[N]}$ to represent the vector $(\phi(1),\phi(2),...,\phi(N))\in \mathbb{R}^N$. }$$ and the prior over $Z$, $(\Pr[Z=y])_y$, is a maximizer of $MIG^f(h_A,h_B,\mathbf{p})$ and the maximum is $MI^f(X_A;X_B)$; \item [Maximizer$\rightarrow$(Permuted) Ground truth] when the prior is well-defined, $f$ is differentiable, and $f'$ is invertible, any maximizer of $MIG^f(h_A,h_B,\mathbf{p})$ corresponds to the (possibly permuted) ground truth $Y$: for any maximizer $(h_A^*(\cdot),h_B^*(\cdot),\mathbf{p}^*)$ of $MIG^f(h_A,h_B,\mathbf{p})$, there exists a permutation $\pi$ such that $$h_A^*(x_A):=(\Pr[\pi(Y)=y|X_A=x_A])_y\qquad h_B^*(x_B):=(\Pr[\pi(Y)=y|X_B=x_B])_y$$ and $\mathbf{p}^*=(\Pr[\pi(Y)=y])_y$. \end{description} \end{theorem} The above theorem investigates neither computational complexity (which may be affected by the choice of $f$), nor data requirements, nor the choice of the hypothesis class for practical implementation (see Section~\ref{sec:discussions} for more discussion).
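The Solution$\rightarrow$Maximizer direction can be checked numerically on a toy model. The sketch below (Python; the binary model with symmetric $0.8$-accurate signals is an illustrative choice of ours) verifies that the Bayesian posterior predictors reproduce the pointwise mutual information $K$, so that for the KL case the expected gain equals $MI^{KL}(X_A;X_B)$:

```python
import math
from itertools import product

# Toy check of Solution -> Maximizer for the KL case (illustrative numbers):
# binary ground truth Y, and each signal equals Y with probability 0.8.
p_y = [0.5, 0.5]
lik = lambda x, y: 0.8 if x == y else 0.2          # Pr[signal = x | Y = y]

def joint(a, b):       # Pr[X_A = a, X_B = b] under conditional independence
    return sum(p_y[y] * lik(a, y) * lik(b, y) for y in range(2))

def marg(a):           # Pr[X_A = a] (= Pr[X_B = a] by symmetry)
    return sum(joint(a, b) for b in range(2))

def post(x):           # Bayesian posterior (Pr[Y = y | x])_y
    z = sum(p_y[y] * lik(x, y) for y in range(2))
    return [p_y[y] * lik(x, y) / z for y in range(2)]

# The posteriors' agreement score recovers the pointwise mutual information K:
for a, b in product(range(2), range(2)):
    k = joint(a, b) / (marg(a) * marg(b))
    agree = sum(post(a)[y] * post(b)[y] / p_y[y] for y in range(2))
    assert abs(agree - k) < 1e-9

# Hence the expected MIG^f (f(t) = t log t; E f*(R) = E_{P_A x P_B} K = 1)
# equals the mutual information MI^KL(X_A; X_B):
mi = sum(joint(a, b) * math.log(joint(a, b) / (marg(a) * marg(b)))
         for a, b in product(range(2), range(2)))
gain = sum(joint(a, b) * (1 + math.log(joint(a, b) / (marg(a) * marg(b))))
           for a, b in product(range(2), range(2))) - 1.0
```

Here `gain` and `mi` coincide, as the theorem predicts for the Bayesian posterior predictors with the true prior.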
\begin{proof}[Proof for Theorem~\ref{thm:ppl}] Lemma~\ref{lem:pplearn} shows that the expected $f$-mutual information gain is maximized if and only if for any $(x_A,x_B)$, $$R^f(h_A^*(x_A),h_B^*(x_B),\mathbf{p}^*)\in \partial{f}(K(x_A,x_B)).$$ (1)\emph{ Solution$\rightarrow$Maximizer:} For any solution $Z$, we can construct $$h_A^*(x_A):=(\Pr[Z=y|X_A=x_A])_y\qquad h_B^*(x_B):=(\Pr[Z=y|X_B=x_B])_y$$ and $\mathbf{p}^*=(\Pr[Z=y])_y$. Then \begin{align*} R^f(h_A^*(x_A),h_B^*(x_B),\mathbf{p}^*)&\in \partial{f}\bigg(\sum_{y}{\frac{\Pr[Z=y|X_A=x_A]\Pr[Z=y|X_B=x_B]}{\Pr[Z=y]}}\bigg)\\ \tag{Claim~\ref{claim:ci}} &= \partial{f}(K(x_A,x_B)) \end{align*} Thus, based on Lemma~\ref{lem:pplearn}, any solution $Z$ corresponds to a maximizer of the optimization goal. (2)\emph{Maximizer$\rightarrow$(Permuted) Ground truth:} For any maximizer $(h_A^*(\cdot),h_B^*(\cdot),\mathbf{p}^*)$ of the optimization goal, when $f$ is differentiable, Lemma~\ref{lem:pplearn} shows that $$R^f(h_A^*(x_A),h_B^*(x_B),\mathbf{p}^*)=f'(K(x_A,x_B)).$$ When $f'$ is invertible, we have \begin{align*} \sum_{y}{\frac{h_A^*(x_A)(y) h_B^*(x_B)(y)}{\mathbf{p}^*(y)}}=K(x_A,x_B) \end{align*} for all $x_A,x_B$. Thus, $\{(h_A^*(x_A),h_B^*(x_B),\mathbf{p}^*)\}_{x_A,x_B}$ is in fact a solution of the system (\ref{soe}). When the prior is well-defined, there exists a permutation $\pi$ such that $$h_A^*(x_A):=(\Pr[\pi(Y)=y|X_A=x_A])_y\qquad h_B^*(x_B):=(\Pr[\pi(Y)=y|X_B=x_B])_y$$ and $\mathbf{p}^*=(\Pr[\pi(Y)=y])_y$ where $Y$ is the ground truth. \end{proof} \section{Forecast elicitation without verification}\label{sec:forecastelicitation} This section considers the setting where the forecasts are provided by the crowds and we want to incentivize high quality forecasts by providing an instant reward without instant access to the ground truth. There is a forecasting task. Alice and Bob have private information $X_A=x_A\in\Sigma_A$ and $X_B=x_B\in\Sigma_B$ respectively and are asked to forecast the ground truth $Y=y$.
We denote $(\Pr[Y=y|X_A=x_A])_y$, $(\Pr[Y=y|X_B=x_B])_y$ by $\mathbf{p}_{x_A}$, $\mathbf{p}_{x_B}$ respectively. Alice and Bob are asked to report their Bayesian forecasts $\mathbf{p}_{x_A}$, $\mathbf{p}_{x_B}$. We denote their actual reports by $\hat{\mathbf{p}}_{x_A}$ and $\hat{\mathbf{p}}_{x_B}$. Without access to the realization of $Y$, we want to incentivize both Alice and Bob to play \emph{truth-telling} strategies, i.e., to honestly report their forecasts $\mathbf{p}_{x_A}$, $\mathbf{p}_{x_B}$ for $Y$. We define the \emph{strategy} of Alice as a mapping $s_A$ from $x_A$ (her private signal) to a probability distribution over the space of all possible forecasts for the random variable $Y$. Analogously, we define Bob's strategy $s_B$. Note that essentially each (possibly mixed) strategy $s_A$ can be seen as a (possibly random) predictor $P_A$ where $P_A(x_A)$ is a random forecast drawn from the distribution $s_A(x_A)$. In particular, the truthful strategy corresponds to the Bayesian posterior predictor. We say agents play a \emph{permutation strategy profile} if there exists a permutation $\pi:\Sigma\mapsto \Sigma$ such that each agent always reports $\pi \mathbf{p}$ given that her truthful report is $\mathbf{p}$. Note that without any side information about $Y$, we cannot distinguish the scenario where agents are honest from the scenario where agents play a permutation strategy profile. Thus, it is too much to ask truth-telling to be strictly better than any other strategy profile. The focal property defined in the following paragraph is the best property we can hope for. \paragraph{Mechanism Design Goals} \begin{description} \item[(Strictly) Truthful] Mechanism $\mathcal{M}$ is (strictly) truthful if truth-telling is a (strict) equilibrium.
\item[Focal] Mechanism $\mathcal{M}$ is focal if it is strictly truthful and each agent's expected payment is maximized if agents tell the truth; moreover, when agents play a non-permutation strategy profile, each agent's expected payment is \emph{strictly} less. \end{description} We consider two settings: \begin{description} \item[Multi-task] Each agent is assigned several independent, a priori similar forecasting tasks in a random order and is asked to report her forecast for each task. \item[Single-task] All agents are asked to report their forecasts for the same single task. \end{description} In the single-task setting, it is impossible to design focal mechanisms since agents can collaborate to pick an arbitrary $y^*\in\Sigma$ and pretend that they know $Y=y^*$. However, we will show that we can design a strictly truthful mechanism in the single-task setting. In the multi-task setting, since agents may be assigned different tasks and the tasks appear in a random order, they cannot collaborate to pick an arbitrary $y^*\in\Sigma$ for each task. In fact, we will show that if the number of tasks is greater than or equal to 2, we can design a family of focal mechanisms. Achieving the focal goal in the multi-task setting is very similar to what we did in finding the common ground truth. Note that in the forecast elicitation problem, incentivizing a truthful strategy is equivalent to incentivizing the Bayesian posterior predictor. Thus, we can directly use the $f$-mutual information gain as the reward in the multi-task setting. Achieving the strictly truthful goal in the single-task setting is trickier and we will return to it later. \subsection{Multi-task: focal forecast elicitation without verification}\label{sec:multi} We assume Alice is assigned the task set $\mathcal{L}_A$ and Bob is assigned the task set $\mathcal{L}_B$. For each task $\ell$, Alice's private information is $x_A^{\ell}$ and Bob's private information is $x_B^{\ell}$. The ground truth of this task is $y^{\ell}$.
\paragraph{Multi-task common ground mechanism $MCG(f)$} Given the prior distribution over $Y$ and a convex and \emph{differentiable} function $f$ whose convex conjugate is $f^{\star}$, \begin{description} \item[Report] for each task $\ell\in\mathcal{L}_A$, Alice is asked to report $\mathbf{p}_{{x_A}^{\ell}}:=(\Pr[Y=y|x_A^{\ell}])_y$; for each task $\ell\in\mathcal{L}_B$, Bob is asked to report $\mathbf{p}_{{x_B}^{\ell}}:=(\Pr[Y=y|x_B^{\ell}])_y$. We denote their actual reports by $\hat{\mathbf{p}}_{{x_A}^{\ell}}^{\ell}$ and $\hat{\mathbf{p}}_{{x_B}^{\ell}}^{\ell}$. \item[Payment] For each $\ell\in \mathcal{L}_A\cap \mathcal{L}_B$, reward both Alice and Bob ``the amount of agreement'' between their forecasts for task $\ell$, i.e., $$R(\hat{\mathbf{p}}_{{x_A}^{\ell}}^{\ell},\hat{\mathbf{p}}_{{x_B}^{\ell}}^{\ell});$$ for each pair of distinct tasks $(\ell_A,\ell_B), \ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B$, punish both Alice and Bob ``the amount of agreement'' between their forecasts for the distinct tasks $(\ell_A,\ell_B)$, i.e., $$f^{\star}(R(\hat{\mathbf{p}}_{{x_A}^{\ell_A}}^{\ell_A},\hat{\mathbf{p}}_{{x_B}^{\ell_B}}^{\ell_B})).$$ In total, both Alice and Bob are paid \begin{align*} &\frac{1}{|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell \in \mathcal{L}_A\cap \mathcal{L}_B} R(\hat{\mathbf{p}}_{{x_A}^{\ell}}^{\ell},\hat{\mathbf{p}}_{{x_B}^{\ell}}^{\ell})\\ &-\frac{1}{|\mathcal{L}_A||\mathcal{L}_B|-|\mathcal{L}_A\cap \mathcal{L}_B|}\sum_{\ell_A\in \mathcal{L}_A,\ell_B\in \mathcal{L}_B,\ell_A\neq \ell_B}f^{\star}(R(\hat{\mathbf{p}}_{{x_A}^{\ell_A}}^{\ell_A},\hat{\mathbf{p}}_{{x_B}^{\ell_B}}^{\ell_B})) \end{align*} where $$R(\mathbf{p}_1,\mathbf{p}_2):=f'\bigg(\sum_{y}\frac{\mathbf{p}_1(y) \mathbf{p}_2(y)}{\Pr[Y=y]}\bigg).$$ \end{description} We do not want agents to collaborate with each other based on the index of the task or other information in addition to the private information.
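A minimal sketch of the $MCG(f)$ payment (Python; helper names are ours, and for simplicity we assume both agents did every task, i.e., $\mathcal{L}_A=\mathcal{L}_B$, averaging the punishment over the cross-task pairs). It is parameterized by $f'$ and $f^{\star}$, instantiated here for the KL case $f(t)=t\log t$:

```python
import math

# Sketch of the MCG(f) payment for one pair of agents; assumes L_A = L_B.
def mcg_payment(reports_a, reports_b, prior, fprime, fstar):
    R = lambda p1, p2: fprime(sum(a * b / r
                                  for a, b, r in zip(p1, p2, prior)))
    n = len(reports_a)
    bonus = sum(R(reports_a[i], reports_b[i]) for i in range(n)) / n
    cross = [fstar(R(reports_a[i], reports_b[j]))
             for i in range(n) for j in range(n) if i != j]
    return bonus - sum(cross) / len(cross)

# KL instance: f(t) = t log t, so f'(t) = 1 + log t and f*(y) = exp(y - 1).
pay = mcg_payment([[0.9, 0.1], [0.1, 0.9]],   # Alice's reports on two tasks
                  [[0.9, 0.1], [0.1, 0.9]],   # Bob's reports on the same tasks
                  [0.5, 0.5],
                  lambda t: 1.0 + math.log(t),
                  lambda y: math.exp(y - 1.0))
```

Informative, agreeing reports earn a positive payment here, while reporting the prior on every task earns exactly zero: the cross-task punishment cancels the within-task reward.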
Thus, we make the following assumption to guarantee that the index of the task is meaningless for all agents. \begin{assumption}[A priori similar and random order] For each task $\ell$, fresh i.i.d. realizations of $(X_A,X_B,Y)=(x_A^{\ell},x_B^{\ell},y^{\ell})$ are generated. All tasks appear in a random order, independently drawn for each agent. \end{assumption} \begin{theorem}\label{thm:focal} With the conditional independence assumption and the a priori similar and random order assumption, when the prior $Q$ is stable and well-defined, given the prior distribution over $Y$ and a differentiable convex function $f$ whose derivative $f'$ is invertible, if $\max\{|\mathcal{L}_A|,|\mathcal{L}_B|\}\geq 2$, then $MCG(f)$ is focal. When both Alice and Bob are honest, each agent's expected payment in $MCG(f)$ is $$ MI^f(X_A;X_B). $$ \end{theorem} The non-negativity of $MI^f$ implies that agents are willing to participate in the mechanism. Like Theorem~\ref{thm:ppl}, in order to show Theorem~\ref{thm:focal}, we need to first introduce a lemma which is very similar to Lemma~\ref{lem:pplearn}. \begin{lemma}\label{lem:focal} With the conditional independence assumption, the expected total payment is maximized over Alice and Bob's strategies if and only if $\forall \ell_1 \in \mathcal{L}_A, \ell_2 \in \mathcal{L}_B$, for any $(x_A^{\ell_1},x_B^{\ell_2})\in\Sigma_A\times\Sigma_B$, $$R(\hat{\mathbf{p}}_{{x_A}^{\ell_1}}^{\ell_1},\hat{\mathbf{p}}_{{x_B}^{\ell_2}}^{\ell_2})=f'(K(x_A^{\ell_1},x_B^{\ell_2})).$$ The maximum is $MI^f(X_A;X_B).$ \end{lemma} The proofs of Lemma~\ref{lem:focal} and Theorem~\ref{thm:focal} are very similar to those of Lemma~\ref{lem:pplearn} and Theorem~\ref{thm:ppl}. We defer the formal proofs to the appendix. \subsection{Single-task: strictly truthful forecast elicitation without verification}\label{sec:single} This section introduces the strictly truthful mechanism in the single-task setting.
If we knew the realization $y$ of $Y$, we could simply apply a proper scoring rule and pay Alice and Bob $PS(y,\hat{\mathbf{p}}_{x_A})$ and $PS(y,\hat{\mathbf{p}}_{x_B})$ respectively. Then, according to the property of the proper scoring rule, Alice and Bob would honestly report their truthful forecasts to maximize their expected payments. However, we do not know the realization of $Y$. In the information elicitation without verification setting where Alice and Bob are required to report their information, \citet{MRZ05} propose the ``peer prediction'' idea, that is, to pay Alice the accuracy of the forecast that predicts Bob's information conditioned on Alice's information, i.e., $$PS\big(\hat{x}_B,(\Pr[{X}_B=x_B|\hat{x}_A])_{x_B}\big)$$ where $\hat{x}_A$ and $\hat{x}_B$ are Alice and Bob's reported information. We note that the peer prediction mechanism in \citet{MRZ05} is truthful. With a similar ``peer prediction'' idea, we propose a strictly truthful mechanism for forecast elicitation. \paragraph{Common ground mechanism} Given the prior distribution over $Y$, \begin{description} \item[Report] Alice and Bob are required to report $\mathbf{p}_{x_A}$, $\mathbf{p}_{x_B}$. We denote their actual reports by $\hat{\mathbf{p}}_{x_A}$ and $\hat{\mathbf{p}}_{x_B}$. \item[Payment] Both Alice and Bob are paid $$ \log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) \hat{\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} .$$ \end{description} \begin{theorem} With the conditional independence assumption (and when the prior is stable), given the prior distribution over $Y$, the common ground mechanism is (strictly) truthful; moreover, when both Alice and Bob are honest, each agent's expected payment in the common ground mechanism is the Shannon mutual information between their private information $$I(X_A;X_B)=MI^{KL}(X_A;X_B).$$ \end{theorem} The non-negativity of the Shannon mutual information implies that agents are willing to participate in the mechanism.
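On a toy model one can check numerically that truth-telling maximizes an agent's expected payment in the common ground mechanism. The sketch below (Python; the binary model with symmetric $0.8$-accurate signals is an illustrative choice of ours) compares Alice's expected payment when she reports her Bayesian posterior against an overconfident deviation, assuming Bob is honest:

```python
import math
from itertools import product

# Toy model (illustrative): binary Y, each signal equals Y with prob. 0.8.
p_y = [0.5, 0.5]
lik = lambda x, y: 0.8 if x == y else 0.2      # Pr[signal = x | Y = y]

def post(x):                                   # Bayesian posterior for Y
    z = sum(p_y[y] * lik(x, y) for y in range(2))
    return [p_y[y] * lik(x, y) / z for y in range(2)]

def joint(a, b):                               # Pr[X_A = a, X_B = b]
    return sum(p_y[y] * lik(a, y) * lik(b, y) for y in range(2))

def expected_payment(report_a):
    """Alice's expected payment when Bob reports his Bayesian posterior."""
    return sum(joint(a, b) *
               math.log(sum(report_a(a)[y] * post(b)[y] / p_y[y]
                            for y in range(2)))
               for a, b in product(range(2), range(2)))

truthful = expected_payment(post)
deviating = expected_payment(
    lambda a: [0.99, 0.01] if a == 0 else [0.01, 0.99])  # overconfident
```

Here `truthful` exceeds `deviating`, and `truthful` equals $I(X_A;X_B)$ of the toy model, consistent with the theorem.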
The (strict) truthfulness of the common ground mechanism follows from the fact that the log scoring rule $LSR$ is strictly proper. \begin{proof} When both Alice and Bob are honest, their payment is $\log K(x_A,x_B)$ according to Claim~\ref{claim:ci}. Their expected payment is \begin{align*} \sum_{x_A,x_B}\Pr[x_A,x_B] \log K(x_A,x_B) = \sum_{x_A,x_B}\Pr[x_A,x_B] \log \frac{\Pr[x_A,x_B]}{\Pr[x_A]\Pr[x_B]}=MI^{KL}(X_A;X_B) \end{align*} Given that Bob honestly reports $\hat{\mathbf{p}}_{x_B}={\mathbf{p}}_{x_B}$, we would like to show that the expected payment of Alice is at most $MI^{KL}(X_A; X_B)$ regardless of the strategy Alice plays. The expected payment of Alice is \begin{align*} &\sum_{x_A,x_B} \Pr[X_A=x_A,X_B=x_B]\log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]}\\ =&\sum_{x_A,x_B} \Pr[X_A=x_A,X_B=x_B]\log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]\\ &-\sum_{x_A,x_B} \Pr[X_A=x_A,X_B=x_B]\log \Pr[X_B=x_B]\\ \tag{$C$ is a constant that does not depend on Alice's strategy} =&\sum_{x_A,x_B} \Pr[X_A=x_A,X_B=x_B]\log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]-C\\ =& \sum_{x_A,x_B} \Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A]\log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]-C \end{align*} Moreover, fixing $X_A=x_A$, \begin{align*} &\sum_{x_B}\sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]\\ =& \sum_{x_B}\sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) \Pr[X_B=x_B,Y=y]}{\Pr[Y=y]} \\ =& \sum_{x_B}\sum_{y}{\hat{\mathbf{p}}_{x_A}(y) \Pr[X_B=x_B|Y=y]} \\ =& \sum_{y}\hat{\mathbf{p}}_{x_A}(y)=1 \end{align*} Thus, $\sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]$ can be seen as a forecast for $X_B=x_B$.
Since $LSR(\mathbf{p},\mathbf{q})=\sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{q}(\sigma)\leq \sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{p}(\sigma)=LSR(\mathbf{p},\mathbf{p})$ for any $\mathbf{p},\mathbf{q}\in\Delta_{\Sigma}$, we have \begin{align*} \numberthis \label{eq:truthful} & \sum_{x_A,x_B} \Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A]\log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]-C\\ \leq & \sum_{x_A,x_B} \Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A]\log \Pr[X_B=x_B|X_A=x_A]-C\\ =& \sum_{x_A,x_B} \Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A]\log \Pr[X_B=x_B|X_A=x_A]\\ &-\sum_{x_A,x_B} \Pr[X_A=x_A,X_B=x_B]\log \Pr[X_B=x_B]\\ &= \sum_{x_A,x_B} \Pr[X_A=x_A,X_B=x_B]\log \frac{\Pr[X_B=x_B|X_A=x_A]}{\Pr[X_B=x_B]}\\ =&I(X_A;X_B) \end{align*} It remains to analyze the strictness of the truthfulness. We need to show that for any $x_A$, given that Alice receives $X_A=x_A$, she will obtain strictly less payment by reporting $\hat{\mathbf{p}}_{x_A}\neq \mathbf{p}_{x_A}$. Given that Alice receives $X_A=x_A$, her expected payment is \begin{align*} \tag{see equation (\ref{eq:truthful})} & \sum_{x_B} \Pr[X_B=x_B|X_A=x_A]\log \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B]-C\\ \numberthis \label{eq:strict} \leq & \sum_{x_B}\Pr[X_B=x_B|X_A=x_A]\log \Pr[X_B=x_B|X_A=x_A]-C \end{align*} Note that $\sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{q}(\sigma)< \sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{p}(\sigma)$ when $\mathbf{q}\neq \mathbf{p}$. When the prior is stable, since $\hat{\mathbf{p}}_{x_A}\neq \mathbf{p}_{x_A}$, the triple $\mathbf{p}_{x_B},\hat{\mathbf{p}}_{x_A},(\Pr[Y=y])_y$ is not a solution of the system (\ref{soe}). This implies that there exists $x_B$ such that $$\Pr[X_B=x_B|X_A=x_A]\neq \sum_{y}\frac{\hat{\mathbf{p}}_{x_A}(y) {\mathbf{p}}_{x_B}(y)}{\Pr[Y=y]} \Pr[X_B=x_B].
$$ Thus, the inequality (\ref{eq:strict}) must be strict. Therefore, when the prior is stable, the common ground mechanism is strictly truthful. \end{proof} \section{$PS$-gain}\label{sec:psgain} In this section, we extend the maximum likelihood estimator method of \citet{raykar2010learning} to a general family of optimization goals---$PS$-gain---and compare this general family with our $f$-mutual information gain. We will see that applying $PS$-gain requires either that one of the information sources be low dimensional or that we have a simple generative model for the distribution over one of the information sources and the ground truth label. Thus, the range of applications of $PS$-gain is more limited than that of $f$-mutual information gain. In \citet{raykar2010learning}, $X_A$ is a feature vector which has multiple crowdsourced labels $X_B$. We have access to $(x_A^{\ell},x_B^{\ell})_{\ell\in\mathcal{L}}$ which are i.i.d. samples of $(X_A,X_B)$. \citet{raykar2010learning} also make the conditional independence assumption. \subsection{Maximum likelihood estimator (MLE)} Let $\Theta_A,\Theta_B$ be two parameters that control the distribution over $X_A$ and $Y$ and the distribution over $X_B$ and $Y$ respectively.
With the conditional independence assumption, we have \begin{align*} \log \Pr[(x_A^{\ell},x_B^{\ell})_{\ell\in\mathcal{L}}|\Theta_A,\Theta_B]=&\log \prod_{\ell\in\mathcal{L}} \Pr[X_B=x_B^{\ell}|X_A=x_A^{\ell},\Theta_A,\Theta_B]\\ =&\log \prod_{\ell\in\mathcal{L}} \sum_y \Pr[X_B=x_B^{\ell}|Y=y,\Theta_B]\Pr[Y=y|X_A=x_A^{\ell},\Theta_A]\\ =& \sum_{\ell\in\mathcal{L}}\log\bigg(\sum_y \Pr[X_B=x_B^{\ell}|Y=y,\Theta_B]\Pr[Y=y|X_A=x_A^{\ell},\Theta_A]\bigg)\\ \end{align*} The MLE is a pair of parameters $\Theta_A^*,\Theta_B^*$ that maximizes $$\log \Pr[(x_A^{\ell},x_B^{\ell})_{\ell\in\mathcal{L}}|\Theta_A,\Theta_B]=\sum_{\ell\in\mathcal{L}}\log\bigg(\sum_y \Pr[X_B=x_B^{\ell}|Y=y,\Theta_B]\Pr[Y=y|X_A=x_A^{\ell},\Theta_A]\bigg).$$ \citet{raykar2010learning} use the MLE to estimate the parameters. In order to compare this MLE method with our $f$-mutual information gain framework, we map it into our language and provide a theoretical analysis of the conditions under which the MLE is meaningful. \paragraph{$LSR$-gain/MLE} \begin{description} \item[Hypothesis] We are given $\mathcal{H}_A=\{h_A:\Sigma_A\mapsto \Delta_{\Sigma}\}$, $\mathcal{V}_B=\{v_B:\Sigma_B\mapsto [0,1]^{|\Sigma|}\}$: the sets of hypothesis candidates for $X_A$ and $X_B$, respectively. Note that $v_B$ maps $x_B\in\Sigma_B$ into a vector in $[0,1]^{|\Sigma|}$ rather than a distribution vector. \item[Gain] We see $$(v_B(x_B) \cdot h_A(x_A^{\ell}))_{x_B}$$ as a forecast for the random variable $X_B$ conditioning on $X_A=x_A^{\ell}$ and we reward the hypotheses with the $LSR$-gain---the accuracy of this forecast under the log scoring rule ($LSR$): \begin{align*} \sum_{\ell\in\mathcal{L}}LSR\bigg(x_B^{\ell}, ( v_B(x_B) \cdot h_A(x_A^{\ell}))_{x_B}\bigg) =\sum_{\ell\in\mathcal{L}}\log\bigg( v_B(x_B^{\ell}) \cdot h_A(x_A^{\ell})\bigg) \end{align*} \end{description} We use $\mathbf{v}\cdot \mathbf{v}'$ to represent the dot product between two vectors.
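A minimal sketch of the $LSR$-gain objective on a toy instance (Python; the binary model and all names are illustrative choices of ours). With $v_B(x_B)=(\Pr[X_B=x_B|Y=y])_y$ and $h_A$ the Bayesian posterior predictor, the dot product $v_B(x_B)\cdot h_A(x_A)$ recovers $\Pr[X_B=x_B|X_A=x_A]$:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lsr_gain(samples, h_a, v_b):
    """sum_l log( v_B(x_B^l) . h_A(x_A^l) ) over paired samples (x_A, x_B)."""
    return sum(math.log(dot(v_b(xb), h_a(xa))) for xa, xb in samples)

# Toy instance: binary Y, uniform prior, symmetric 0.8-accurate signals.
lik = lambda x, y: 0.8 if x == y else 0.2          # Pr[signal = x | Y = y]
v_b = lambda xb: [lik(xb, y) for y in range(2)]    # (Pr[X_B = x_B | Y = y])_y
h_a = lambda xa: [lik(xa, y) / (lik(xa, 0) + lik(xa, 1))
                  for y in range(2)]               # posterior, uniform prior

gain = lsr_gain([(0, 0), (1, 1)], h_a, v_b)        # 2 * log Pr[X_B=x | X_A=x]
```

Each dot product here equals $0.8\cdot 0.8+0.2\cdot 0.2=0.68=\Pr[X_B=x|X_A=x]$, so the gain is the log-likelihood of the paired samples.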
Note that by picking $\mathcal{H}_A$ as the set of mappings---associated with a set of parameters $\{\Theta_A\}$---that map $X_A=x_A$ to $(\Pr[Y=y|X_A=x_A,\Theta_A])_y$ and picking $\mathcal{V}_B$ as the set of mappings---associated with a set of parameters $\{\Theta_B\}$---that map $X_B=x_B$ to $(\Pr[X_B=x_B|Y=y,\Theta_B])_y$, maximizing the $LSR$-gain is equivalent to obtaining the MLE. The idea of $LSR$-gain is very similar to the original peer prediction idea introduced in Section~\ref{sec:single} as well as our common ground mechanism. \begin{theorem}\label{thm:mle} When $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$, the ground truth $Y$ corresponds to a maximizer of the $LSR$-gain: $$v_B^*(x_B)=(\Pr[X_B=x_B|Y=y])_y\qquad h_A^*(x_A)=(\Pr[Y=y|X_A=x_A])_y.$$ The maximum is the conditional Shannon entropy $H(X_B|X_A)$. \end{theorem} \begin{remark} Note that without the restriction $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$, $$v_B^*(x_B)=(\Pr[X_B=x_B|Y=y])_y\qquad h_A^*(x_A)=(\Pr[Y=y|X_A=x_A])_y$$ is not a maximizer and we will have a meaningless maximizer $v_B(x_B)=(1,1,...,1),\forall x_B$ and $h_A(x_A)=(1,0,...,0),\forall x_A$. \end{remark} By picking $\mathcal{V}_B$ as the set of mappings---associated with a set of parameters $\{\Theta_B\}$---that map $X_B=x_B$ to $(\Pr[X_B=x_B|Y=y,\Theta_B])_y$, the restriction $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$ is satisfied naturally. However, this requires knowledge of the generative distribution model over $X_B$ and $Y$ with parameter $\Theta_B$. \citet{raykar2010learning} assume a simple distribution model between $X_B$ and $Y$ with parameter $\Theta_B$---conditioned on the ground truth label, the crowdsourced feedback $X_B$ is drawn from a binomial distribution---such that $\Pr[X_B=x_B|Y=y,\Theta_B]$ has a simple explicit form.
\begin{proof}[Proof of Theorem~\ref{thm:mle}] \begin{align*} &\mathbb{E}\sum_{\ell\in\mathcal{L}}\log\bigg(v_B(x_B^{\ell}) \cdot h_A(x_A^{\ell})\bigg)\\ =&\sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A,X_B=x_B] \log \bigg(v_B(x_B) \cdot h_A(x_A)\bigg)\\ =&\sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A] \log \bigg(v_B(x_B) \cdot h_A(x_A)\bigg)\\ =& \sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A]LSR\bigg(( \Pr[X_B=x_B|X_A=x_A])_{x_B},(v_B(x_B) \cdot h_A(x_A))_{x_B}\bigg) \end{align*} Fixing $X_A=x_A$, since $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$, we have \begin{align*} \sum_{x_B} \bigg(v_B(x_B) \cdot h_A(x_A)\bigg)=\sum_y h_A(x_A)(y)=1 \end{align*} Since $LSR(\mathbf{p},\mathbf{q})\leq LSR(\mathbf{p},\mathbf{p})$ for any $\mathbf{p},\mathbf{q}\in\Delta_{\Sigma}$, we have \begin{align*} &\mathbb{E}\sum_{\ell\in\mathcal{L}}\log\bigg(v_B(x_B^{\ell}) \cdot h_A(x_A^{\ell})\bigg)\\ =&\sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A]LSR\bigg(( \Pr[X_B=x_B|X_A=x_A])_{x_B},(v_B(x_B) \cdot h_A(x_A))_{x_B}\bigg)\\ \leq&\sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A]LSR\bigg(( \Pr[X_B=x_B|X_A=x_A])_{x_B},( \Pr[X_B=x_B|X_A=x_A])_{x_B}\bigg)\\ =& \sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A] \log \Pr[X_B=x_B|X_A=x_A]\\ =& H(X_B|X_A)\\ \tag{conditional independence} =&\sum_{x_A\in\Sigma_A,x_B\in\Sigma_B}\Pr[X_A=x_A]\Pr[X_B=x_B|X_A=x_A] \log\bigg(\sum_y \Pr[X_B=x_B|Y=y]\Pr[Y=y|X_A=x_A]\bigg) \end{align*} Thus, $$v_B^*(x_B)=(\Pr[X_B=x_B|Y=y])_y\qquad h_A^*(x_A)=(\Pr[Y=y|X_A=x_A])_y$$ is a maximizer and the maximum is the conditional Shannon entropy $H(X_B|X_A)$. \end{proof} \subsection{Extending $LSR$-gain to $PS$-gain} The property $LSR(\mathbf{p},\mathbf{q})=\sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{q}(\sigma)\leq \sum_{\sigma}\mathbf{p}(\sigma)\log \mathbf{p}(\sigma)=LSR(\mathbf{p},\mathbf{p})$ for any $\mathbf{p},\mathbf{q}\in\Delta_{\Sigma}$ is also valid for all proper scoring rules.
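This inequality is easy to check numerically; the sketch below (Python, illustrative numbers) does so for the quadratic (Brier) scoring rule $PS(\sigma,\mathbf{q})=2\mathbf{q}(\sigma)-\sum_y \mathbf{q}(y)^2$, a standard strictly proper rule:

```python
# Sanity check of the proper-scoring-rule property E_p PS(., q) <= E_p PS(., p)
# using the quadratic (Brier) score PS(sigma, q) = 2 q(sigma) - sum_y q(y)^2.
def brier(sigma, q):
    return 2.0 * q[sigma] - sum(x * x for x in q)

def expected_score(p, q):  # E_{sigma ~ p} brier(sigma, q)
    return sum(p[s] * brier(s, q) for s in range(len(p)))

p = [0.7, 0.3]
honest = expected_score(p, p)
hedged = expected_score(p, [0.5, 0.5])  # deviating toward the uniform forecast
```

Here `honest` ($0.58$) strictly exceeds `hedged` ($0.5$), as strict properness requires.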
Thus, we can naturally extend the MLE to $PS$-gain by replacing the $LSR$ with any given proper scoring rule $PS$. \paragraph{$PS$-gain} \begin{description} \item[Hypothesis] We are given $\mathcal{H}_A=\{h_A:\Sigma_A\mapsto \Delta_{\Sigma}\}$, $\mathcal{V}_B=\{v_B:\Sigma_B\mapsto [0,1]^{|\Sigma|}\}$: the sets of hypothesis candidates for $X_A$ and $X_B$, respectively. \item[Gain] We see $$(v_B(x_B) \cdot h_A(x_A^{\ell}))_{x_B}$$ as a forecast for the random variable $X_B$ conditioning on $X_A=x_A^{\ell}$ and we reward the hypotheses with the $PS$-gain---the accuracy of this forecast under a given proper scoring rule $PS$: \begin{align*} \sum_{\ell\in\mathcal{L}}PS\bigg(x_B^{\ell}, ( v_B(x_B) \cdot h_A(x_A^{\ell}))_{x_B}\bigg) \end{align*} \end{description} Note that the general $PS$-gain may involve the calculation of $( v_B(x_B) \cdot h_A(x_A^{\ell}))_{x_B}$ while the $LSR$-gain only requires the value of $v_B(x_B^{\ell}) \cdot h_A(x_A^{\ell})$. Thus, unlike the $LSR$-gain, the general $PS$-gain may only be applicable to low-dimensional $X_B$, even if we assume a simple generative distribution model over $X_B$ and $Y$. \begin{theorem} Given a proper scoring rule $PS$, when $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$, the ground truth $Y$ corresponds to a $PS$-gain maximizer: $$v_B^*(x_B)=(\Pr[X_B=x_B|Y=y])_y\qquad h_A^*(x_A)=(\Pr[Y=y|X_A=x_A])_y.$$ \end{theorem} The proof is the same as that of Theorem~\ref{thm:mle} except that we replace $LSR(\mathbf{p},\mathbf{q})\leq LSR(\mathbf{p},\mathbf{p})$ by $PS(\mathbf{p},\mathbf{q})\leq PS(\mathbf{p},\mathbf{p})$ for any $\mathbf{p},\mathbf{q}\in\Delta_{\Sigma}$. \subsection{Comparing $PS$-gain with $f$-mutual information gain}\label{sec:comparison} Overall, $f$-mutual information gain can be applied in a more general setting. $PS$-gain requires the restriction $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$.
Thus, $PS$-gain requires full knowledge of $v_B$ for all $v_B\in\mathcal{V}_B$ to check whether it satisfies the restriction, while for the $f$-mutual information gain, it is sufficient to have access to the outputs of the hypothesis: $\{h_B(x_B^{\ell})\}_{\ell\in\mathcal{L}_B}$. Therefore, in the mechanism design part, we can only use $f$-mutual information gain to design focal mechanisms since we only have the outputs from agents. Moreover, $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ is also hard to check when $|\Sigma_B|$ is very large. For example, when $x_B$ is a $100\times 100$ black-and-white image, $|\Sigma_B|=2^{10000}$ and checking $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ requires $2^{10000}$ time. Normalizing $v_B$ so that it satisfies the condition also requires $2^{10000}$ time. Thus, when $|\Sigma_B|$ is very large, we need a simple generative distribution model between $X_B$ and $Y$ with parameter $\Theta_B$ such that we can pick $\mathcal{V}_B$ as the set of mappings---associated with a set of parameters $\{\Theta_B\}$---that map $X_B=x_B$ to $(\Pr[X_B=x_B|Y=y,\Theta_B])_y$, so that the restriction $\sum_{x_B\in\Sigma_B} v_B(x_B)=(1,1,...,1)$ for all $v_B\in\mathcal{V}_B$ holds naturally. When we have such a simple generative distribution model, we can use $LSR$-gain. The general $PS$-gain involves the calculation of the $|\Sigma_B|$-dimensional vector $(v_B(x_B) \cdot h_A(x_A^{\ell}))_{x_B}$ for each $x_A^{\ell}$. Thus, the general $PS$-gain is only applicable to low-dimensional $X_B$. In the learning with noisy labels problem, the distribution between $X_B$ and $Y$ can be represented by a simple transition matrix and $X_B$ is low dimensional. Therefore, both $PS$-gain and $f$-mutual information gain can be applied to the learning with noisy labels problem.
Therefore, the application of $PS$-gain requires either that one of the information sources be low-dimensional or that we have a simple generative model for the distribution over one of the information sources and the ground truth label, while $f$-mutual information gain has neither restriction. \section{Conclusion and discussion}\label{sec:discussions} We build a natural connection between mechanism design and machine learning by addressing two related problems: (1) co-training: learning to forecast the ground truth using two conditionally independent sources, without access to labeled data; (2) forecast elicitation: eliciting high-quality forecasts from the crowd without verification, by the same information-theoretic approach. For the co-training problem, as is usual in the related literature, we reduce the problem to an optimization problem and do not investigate the computational complexity or the data requirements. To implement our $f$-mutual information gain framework in practice, we implicitly assume that for high-dimensional $X_A,X_B$, there exists a trainable set of hypotheses (e.g., neural networks) that is sufficiently rich to contain the Bayesian posterior predictor but not so rich as to cause over-fitting. The most apparent empirical direction is to run experiments on real data by training two neural networks to test our algorithms. Interesting theoretical directions include the analysis of the Bayesian risk and of the influence of the choice of the convex function $f$ on the convergence rate. For forecast elicitation, the most apparent direction is to perform real-world experiments. To apply our mechanisms, we do not need every two agents' information to be conditionally independent. In fact, for each agent, we only need to find a single reference agent whose information is conditionally independent of hers. Then we can run our mechanisms on the agent and her reference agent.
In practice, we can pair the agents using some side information and make sure that each pair of agents' information is conditionally independent. Another interesting direction is to ensure fairness, in particular, that agents are not incentivized to coordinate on stereotypes. One solution is to suppress information from some of the agents and use our framework. However, when this is not possible, the prior peer prediction work on cheap signals~\cite{2016arXiv160607042G,2018arXiv180208312K} may be helpful in addressing this issue. \section*{Acknowledgement} We thank Clayton Scott for useful conversations. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{section_1} \IEEEPARstart{M}{any-objective} optimization problems (MaOPs) concern solving $M$ conflicting objectives simultaneously, where $M$ is greater than three~\cite{farina2002optimal}. Generally, an MaOP has the formulation described by Equation~(\ref{equation_mops}) \label{section_introduction} \begin{equation} \label{equation_mops} \setlength{\arraycolsep}{1pt} \renewcommand{\arraystretch}{1.5} \left\{ \begin{array}{c} f(x)=\left[f_1(x),\cdots,f_M(x)\right] \\ s.t.~~~x\in \Lambda \end{array} \right. \end{equation} where $\Lambda\subseteq \mathbb{R}^n$ is the decision space and $f:\Lambda \rightarrow \Omega \subseteq \mathbb{R}^M$ maps it to the objective space $\Omega$. Without loss of generality, it is assumed that $f(x)$ is a minimization problem in which $f_1(x),\cdots, f_M(x)$ are to be minimized. Because MaOPs widely exist in real-world applications, such as a $14$-objective land-exploitation management problem~\cite{chikumbo2012approximating} and a $10$-objective automotive engine calibration problem~\cite{lygoe2013real}, to name a few, there is a strong incentive to solve MaOPs efficiently and effectively. In an MaOP, there is no single perfect solution that optimizes all of the objectives at the same time, but rather a set of Pareto-optimal solutions in which each individual is non-dominated with respect to the others. All the Pareto-optimal solutions constitute the Pareto-optimal set~(PS) in the decision space, while the image of the PS forms the Pareto-optimal front~(PF) in the objective space. Commonly, the goal in solving MaOPs is to obtain a limited number of Pareto-optimal solutions that are uniformly distributed on the PF, from which a decision-maker can select a solution based on his or her preference. Among all the approaches for handling MaOPs, evolutionary algorithms are considered preferable because of the search power exerted by these population-based meta-heuristics.
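The Pareto dominance relation underlying the definitions above can be sketched as follows (an illustrative snippet for the minimization convention assumed here; the function names are ours):

```python
import numpy as np

def dominates(f1, f2):
    """For minimization: f1 dominates f2 iff f1 is no worse in every
    objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def non_dominated(F):
    """Indices of the non-dominated (Pareto-optimal) rows of F."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

For instance, with $M=2$ the objective vectors $(1,2)$ and $(2,1)$ are mutually non-dominated, while $(2,2)$ is dominated by $(1,2)$.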
During the past several decades, numerous multi-objective evolutionary algorithms (MOEAs), such as the elitist non-dominated sorting genetic algorithm (NSGA-II)~\cite{deb2002fast} and the advanced version of the strength Pareto evolutionary algorithm (SPEA2)~\cite{zitzler2001spea2}, have been developed for dealing with multi-objective optimization problems (MOPs), in which at most three objectives are to be optimized simultaneously. However, their performance degrades drastically on MaOPs~\cite{knowles2007quantifying}. The main reason is the loss of selection pressure, which is caused by the dominance resistance~(DR)~\cite{fonseca1998multiobjective} and the curse of dimensionality~\cite{purshouse2007evolutionary} phenomena. To be specific, DR refers to a large proportion of solutions that are the best in one or very few objectives but far worse in the others; these solutions cannot be discriminated by the original Pareto domination principle. A density-based secondary measurement is then activated to decide which solutions are allowed to survive into the next generation~\cite{li2015many}. Because of the behavior influenced by DR pointed out in~\cite{wagner2007pareto}, the selected solutions do not necessarily converge to the PF~\cite{purshouse2007evolutionary}.
To this end, various many-objective evolutionary algorithms (MaOEAs) for tackling MaOPs have been developed\footnote{Algorithms that were originally designed for MOPs but have been extended to MaOPs are also categorized as MaOEAs in this paper.}, such as the multi-objective evolutionary algorithm based on decomposition (MOEA/D)~\cite{zhang2007moea}, the hypervolume-based many-objective optimization algorithm (HypE)~\cite{bader2011hype}, the grid-based evolutionary algorithm for many-objective optimization (GrEA)~\cite{yang2013grid}, the many-objective optimization algorithm using reference point based non-dominated sorting (NSGA-III)~\cite{deb2014evolutionary}, etc.~\cite{li2015many,trivedi2017survey,sun2017global,sun2017reference}\footnote{Typically, these MaOEAs can be classified into three basic categories: dominance-based, decomposition-based and hypervolume-based. MOEA/D and HypE are from the second and third categories, respectively. NSGA-III and GrEA are hybridizations of the first and the second categories.}. More precisely, MOEA/D employs the decomposition-based approach to construct a set of single-objective problems by aggregating the objectives of the original MaOP with different predefined weight vectors. New solutions are generated within a sub-region, and diversity is maintained by the uniformly distributed weight vectors. In HypE, promising solutions are selected based on a fitness assigned according to their contribution to the hypervolume measure. As the computation of the exact hypervolume is prohibitive, Monte Carlo simulation is employed to address this limitation. GrEA utilizes a grid-based approach, showing better performance in solving MaOPs by introducing grid-based fitness comparison to relax the Pareto-based dominance relationship and grid metrics to improve the diversity. Compared to NSGA-II, the improvement of NSGA-III lies in the diversity mechanism, which assigns the solutions to a set of uniformly distributed reference vectors.
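The core geometric operation behind such reference-vector mechanisms is associating each (normalized) objective vector with the reference vector to which its perpendicular distance is smallest; a minimal sketch (the helper names are ours, not from any cited algorithm):

```python
import numpy as np

def perpendicular_distance(f, r):
    """Distance from objective vector f to the ray spanned by
    reference vector r."""
    r = r / np.linalg.norm(r)
    proj = (f @ r) * r          # orthogonal projection of f onto r
    return float(np.linalg.norm(f - proj))

def associate(F, R):
    """Assign each objective vector in F to its nearest reference vector,
    returning the index of the chosen reference vector for each."""
    return [int(np.argmin([perpendicular_distance(f, r) for r in R]))
            for f in F]
```

Because the reference vectors are spread uniformly, solutions associated with distinct vectors tend to be spread uniformly in the objective space as well, which is the diversity effect exploited by NSGA-III and related MaOEAs.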
In summary, the state-of-the-art MaOEAs mentioned above mainly concentrate on two distinct issues: 1) reforming the comparison manner of the traditional dominance relationship, as in HypE and GrEA, and 2) applying new designs to reinforce the diversity, as in MOEA/D and NSGA-III. \begin{algorithm} \caption{Framework of An EDA} \label{alg_framework_eda} $t\leftarrow 0$\; $P_t\leftarrow$ Randomly initialize the population\; \While{termination is not satisfied} { $M\leftarrow$ Build probabilistic models from $P_t$\; $t\leftarrow t+1$\; $U_t\leftarrow$ Generate offspring from $M$\; $P_t\leftarrow$ Select promising solutions from $U_t\cup P_{t-1}$\; } \textbf{Return} $P_t$. \end{algorithm} It is highly expected that individuals generated by selected parents with the crossover and mutation operators will march towards the PF in an MOP. However, this is not the case in MaOPs due to the DR phenomenon. Specifically, if the parents are neighbors of DR solutions, their offspring are also DR solutions. Otherwise, the newly generated solutions are not necessarily better than their parents, which are far apart from each other in the large many-objective search space. This can be seen as the inefficiency of existing genetic operators for MaOPs~\cite{purshouse2007evolutionary}. Moreover, Deb \emph{et al.}~\cite{deb2006multi} concluded that the performance of MOEAs is significantly influenced by the genetic operators, which cannot ensure the generation of promising offspring. Furthermore, the parameters in genetic operators need to be empirically configured. For example, the distribution index of SBX in NSGA-III needs to be set to a large value.
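The build-sample-select loop of Algorithm~\ref{alg_framework_eda} can be sketched with the simplest possible model, a univariate Gaussian fitted to the selected solutions, on a toy single-objective problem (an illustration of the framework only, not the model used by any algorithm cited here):

```python
import numpy as np

def simple_eda(fitness, dim=5, pop=100, elite=30, gens=50, seed=0):
    """Minimal univariate-Gaussian EDA following the framework:
    select promising solutions, fit a probabilistic model to them,
    sample the next population from the model."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.0, 5.0, size=(pop, dim))
    for _ in range(gens):
        # select promising solutions (truncation selection)
        best = P[np.argsort([fitness(x) for x in P])[:elite]]
        # build the probabilistic model from the selected solutions
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-12
        # generate offspring by sampling the model
        P = rng.normal(mu, sigma, size=(pop, dim))
    return min(P, key=fitness)

sphere = lambda x: float(np.sum(x ** 2))   # toy minimization problem
x_best = simple_eda(sphere)
```

No crossover or mutation operator appears: offspring are drawn directly from the fitted distribution, which is exactly the property that motivates EDAs as an alternative to genetic operators in the many-objective setting.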
For this purpose, researchers have developed estimation of distribution algorithms (EDAs) to tackle optimization problems~\cite{pelikan2006multiobjective,zhang2008rm,karshenas2014multiobjective} by generating new solutions without the traditional genetic operators, but instead with probabilistic models built from the statistics of the visited solutions. A general framework of EDAs is illustrated in Algorithm~\ref{alg_framework_eda}. Typically, EDA-based MOEAs are broadly classified into two categories based on their estimation models. The first category covers the Bayesian network-based EDAs. For example, the multi-objective Bayesian optimization algorithm~\cite{khan2002multi} utilized the Bayesian optimization algorithm (BOA) to build a Bayesian network as its model for generating offspring. A related work was investigated in~\cite{schwarz2001multiobjective} to build the model by the strength Pareto ranking approach~\cite{zitzler1999multiobjective} and BOA. Furthermore, Laumanns \emph{et al.}~\cite{laumanns2002bayesian} proposed a Bayesian multi-objective optimization algorithm whose model is built over the solutions selected by the $\epsilon$-Pareto ranking method~\cite{laumanns2002combining}. In addition, an improved non-dominated sorting approach was employed by the decision tree-based multi-objective EDA~\cite{zhong2007decision} to select a subset of solutions for a regression decision tree to learn the model. Recently, the multi-dimensional Bayesian network EDA (MBN-EDA) was proposed in~\cite{karshenas2014multiobjective} specifically for addressing MaOPs. The other category is often known as the mixture probability model-based EDAs.
Examples include the multi-objective mixture-based iterated density estimation evolutionary algorithm~\cite{bosman2002multi}, which employs mixed probability distributions to sample well-distributed solutions, and the multi-objective Parzen-based EDA~\cite{costa2003moped}, which learns from Gaussian and Cauchy kernels to build its models. In~\cite{pelikan2005multiobjective}, the multi-objective hierarchical Bayesian optimization algorithm was designed with a mixture Bayesian network-based probabilistic model for discrete multi-objective optimization problems. In addition, the multi-objective extended compact genetic algorithm~\cite{sastry2005limits} took a marginal product model as the mixture probability model. Furthermore, a regularity-based model EDA (RM-MEDA) was proposed in~\cite{zhang2008rm}, in which the model is built on a mixture normal distribution over the regularity. Zhou \emph{et al.}~\cite{zhou2009approximating} proposed a regularity-based method for solving MOPs that requires the objective space to be $(m-1)$-dimensional. It is believed that EDAs are capable of solving MaOPs without suffering the disadvantages of MOEAs equipped with traditional genetic operators. Although MBN-EDA has shown promise in solving MaOPs, the development of many-objective optimization EDAs (MaOEDAs) is still in its infancy. Notably, probability models based on regularity have been extensively investigated in the discipline of statistical learning~\cite{cherkassky2007learning,hastie2005elements}, and regularity-based models are easier to build yet fairly effective. Based on our recent research achievements on this topic~\cite{he2016visualization,cheng2015many} and motivated by the success of regularity-based EDAs for MOPs~\cite{zhou2005model,zhou2006combining,zhang2008rm}, an improved regularity-based EDA for MaOPs, named MaOEDA-IR for short, is proposed in this paper.
To be specific, the models employed to generate new solutions in the proposed algorithm are built on groups of neighbors selected by uniformly distributed \textit{reference vectors}. In order to improve the selection pressure, a \textit{diversity repairing} mechanism is developed to prevent the adverse DR phenomenon in each generation and push the new solutions toward a closer proximity to the PF. Furthermore, a \textit{dimension reduction} technique is employed prior to the evolution to reduce the cost of the exploration search. Specifically, convergence in the proposed algorithm is guaranteed by repairing diversity and sampling solutions based on the reference vectors, while diversity is facilitated by selecting solutions with the nearest perpendicular distances to the reference vectors. Compared to traditional MaOEAs and EDAs, the contributions of the proposed algorithm are summarized as follows: \begin{enumerate} \item Extend the use of regularity model-based EDAs to MaOPs. In addition, a reference vectors-based diversity mechanism is incorporated into the proposed algorithm to enhance the selection pressure. \item A large search space poses a challenge for regularity model-based EDAs, as it does for all MaOEAs. To this end, a dimension reduction technique is utilized in the decision space to speed up the exploration search for sampling promising solutions. \item Convergence and diversity are considered equally important in the design of a quality MOEA or MaOEA. In the proposed algorithm, convergence is mainly treated in the first stage (the phase of dimension reduction), while diversity is the focus of the second stage. \end{enumerate} The remainder of this paper is organized as follows. The related reference vectors-based MaOEA, evolutionary algorithms based on dimension reduction, and the seminal work on regularity-based EDAs are reviewed in Section~\ref{section_2}.
In Section~\ref{section_3}, the framework of the proposed algorithm is outlined and the respective steps are detailed. In addition, the complexity of the proposed algorithm is analyzed, and two crucial sub-components of the proposed algorithm, as well as the principles for selecting neighbor solutions for building the model, are discussed. In Section~\ref{section_4}, a series of experiments is performed over widely used test suites against chosen peer competitors, and the results, measured by the selected performance metrics, are statistically compared, in addition to experiments investigating the effects of neighbor size, diversity repairing, and dimension reduction. Finally, the conclusion and future work are given in Section~\ref{section_5}. \section{Related Works} \label{section_2} Because the proposed algorithm is mainly concerned with reference vectors, dimension reduction, and the regularity-based EDA model, works related to these aspects are discussed. To be specific, the state-of-the-art reference vectors-based MaOEA, NSGA-III, is introduced first. Then the evolutionary algorithms employing dimension reduction are reviewed. Next, RM-MEDA, a regularity-based EDA, is analyzed. Finally, the disadvantages of RM-MEDA for solving MaOPs, and its differences from the proposed algorithm, are highlighted. \subsection{NSGA-III} The major difference of NSGA-III compared to its predecessor is the diversity-improving mechanism, which is performed via reference vectors; it takes effect when $j$ solutions need to be chosen from the non-dominated sorting front $F_i$, where $i>0$, $0<j<|F_i|$, and $|\cdot|$ is the cardinality operator. To be specific, each solution in $F_k$, where $0<k<i-1$, is assigned to the reference vector to which it has the nearest perpendicular distance.
Then the reference vector $v$ that has been assigned the smallest number of solutions is marked, and the solution $p$ in $F_i$ that has the smallest perpendicular distance to $v$ is selected. Next, solution $p$ is removed from $F_i$ and the assignment count of $v$ is increased by $1$. These steps are iterated until $j$ solutions have been selected from $F_i$. Because the reference vectors are uniformly distributed, the selected solutions are expected to be evenly distributed in the objective space. Specifically, the reference vectors are uniformly generated in the $R^M_+$ space, and the sum of the elements of each reference vector is equal to $1$. However, the problem to be optimized does not necessarily lie in this unit hyperplane. For this reason, all the objectives are normalized prior to calculating the perpendicular distances. This mechanism of uniformly distributed reference vectors assisting the selection of solutions is increasingly preferred by MaOEAs due to its explicit diversity-preserving nature. \subsection{Evolutionary Algorithms Based on Dimension Reduction} The notion that state-of-the-art MOEAs are capable of solving MOPs naturally leads to the intent of reducing the number of objectives in an MaOP (i.e., dimension reduction in the objective space) and then applying these powerful MOEAs to solve it. Specifically, dimension reduction in the objective space refers to removing the redundant objectives while obtaining the same solutions as with all objectives involved~\cite{Gal1999Consequences}. With the utilization of dimension reduction, the computational complexity is reduced due to the smaller number of objectives. In summary, the dimension reduction schemes considered in the literature can be sorted into two categories. The first category is often known as the correlation-based methods, such as the works in~\cite{brockhoff2010automated,brockhoff2009objective,lopez2008objective}.
In addition, the correntropy principal component analysis (C-PCA), the maximum variance unfolding PCA (MVU-PCA)~\cite{saxena2007non}, and a PCA-based algorithm~\cite{brockhoff2008handling} were proposed to analyze the correlation between the solutions generated in each generation, while Saxena \textit{et al.}~\cite{saxena2013objective} proposed linear PCA and nonlinear MVU over a set of Pareto-optimal solutions to check the correlation. In addition, Guo \textit{et al.}~\cite{guo2012new} employed interdependence factors to identify the correlation for dimension reduction. Recently, Wang and Yao proposed a novel approach to reduce the objective dimension by measuring the linear and nonlinear correlation between objectives using nonlinear correlation information entropy~\cite{Wang2016Objective}. The second category covers the algorithms that employ dimension reduction based on the dominance structure, such as the work~\cite{brockhoff2009objective} in which $\epsilon$-dominance was employed to identify the redundant objectives. In addition, the Pareto corner search evolutionary algorithm~\cite{singh2011pareto} utilized a corner sorting technique to find the corner solutions, on which the dimension reduction was performed. In summary, the dimension reduction techniques in these algorithms are performed in the objective space, while the proposed improved regularity-based MaOEDA builds its model in the decision space. Moreover, the number of decision variables is commonly much greater than that of the objectives. As a consequence, it is reasonable to reduce the dimension of the decision space for the purpose of computational efficiency and effectiveness. \subsection{RM-MEDA} The probability model of RM-MEDA is built based on the regularity of the decision space.
To be specific, all the solutions are first clustered into multiple disjoint groups by local PCA~\cite{kambhatla1997dimension}, and then models are constructed to generate new solutions for each group. Specifically, local PCA is employed for manifold dimension reduction by performing multiple PCA operations, one in each piecewise linear segment of the given data. Compared to PCA, local PCA is considered better at collectively capturing the global structure. To intuitively compare the effects of local PCA and PCA, an example is plotted in Fig.~\ref{fig_local_pca_pca}, which clearly shows that local PCA works better in estimating the entire structure of the data. Suppose that $\lambda_1,\cdots,\lambda_N$ are the eigenvalues of the covariance matrix of one group, $\lambda_1\geq\cdots\geq \lambda_N$, and the corresponding eigenvectors are $v_1, \cdots, v_N$. Then the model is formulated under the assumption that the PS is an $(M-1)$-dimensional piecewise manifold in a continuous problem. Furthermore, solutions are sampled from each model in numbers proportional to the volume of that model relative to the total volume of all the models. Combined with non-dominated sorting, a set of solutions with diversity approximating the PF is generated. \begin{figure*}[htp] \begin{center} \subfloat[]{\includegraphics[width=0.6\columnwidth]{local_pca}\label{fig_local_pca}} \hfil \subfloat[]{\includegraphics[width=0.6\columnwidth]{pca}\label{fig_pca}} \caption{An example comparison of local PCA and PCA on the same data. Specifically, Fig.~\ref{fig_local_pca} shows the utilization of local PCA, while Fig.~\ref{fig_pca} shows that of PCA. The solid lines in both figures denote the main directions of the principal components. Clearly, local PCA better captures the global structure of the given data.} \label{fig_local_pca_pca} \end{center} \end{figure*} EDAs generate new solutions with probabilistic models to avoid the detrimental consequences caused by genetic operators.
In particular, regularity-based models are preferable to most Bayesian network-based models in EDAs because of their simplicity yet effectiveness~\cite{wang2015regularity}. For example, EDAs based on Bayesian networks need a training procedure, while regularity-based methods are analytic. However, because RM-MEDA was originally developed for solving MOPs with variable linkage~\cite{zhang2008rm}, it may not be suitable for solving MaOPs. For example, the diversity in RM-MEDA is maintained by uniformly sampling new solutions in the decision space, and this does not give rise to corresponding diversity in the objective space, especially in MaOPs such as all the test problems in DTLZ~\cite{deb2005scalable}. Furthermore, local PCA makes sense when the PS is of full rank in the decision space, which is not necessarily the case in MaOPs. To this end, an improved regularity-based EDA for effectively addressing MaOPs is proposed in this paper. In addition, a new diversity-facilitating mechanism employing uniformly distributed reference vectors is incorporated to improve the selection pressure. Furthermore, a dimension reduction technique based on the correlation scheme is utilized in the decision space over a set of Pareto-optimal solutions to save computational cost. Compared to RM-MEDA, the main contributions of the proposed algorithm are listed as follows. \begin{enumerate} \item The diversity-improving mechanism of the proposed algorithm is compatible with most problems, while RM-MEDA is only suitable for problems in which the diversity of the PS carries over to its image, the PF. \item The PS in RM-MEDA is not allowed to lie in any subspace of the decision space, while the proposed algorithm has no such requirement. \item RM-MEDA is implemented with the manifold assumption that the PS is a piecewise $(M-1)$-dimensional manifold. In this paper, the proposed algorithm is developed without such an assumption.
\end{enumerate} \section{Proposed Algorithm} \label{section_3} In this section, the framework of the proposed algorithm, i.e., the improved regularity model-based EDA for many-objective optimization~(MaOEDA-IR), is given first. Then the details of each step in the framework are presented, followed by an analysis of the computational complexity. Finally, significant sub-components of the proposed algorithm and the principles of selecting neighbor solutions are discussed, respectively. It is noted here that the proposed algorithm is presented in the context of the problem described by Equation~(\ref{equation_mops}). \subsection{Framework of the Proposed Algorithm} \label{section3-1} The proposed algorithm begins by reducing the dimension of the decision space (Subsection~\ref{sec_dimension_reduction}). Then new promising solutions are generated in the reduced decision space, to speed up the evolutionary progress and lower the computational cost of the exploitation search, and their fitness is evaluated (Subsection~\ref{sec_create_evaluate}). Next, a set of reference vectors generated in $R^M_+$ (the sub-part of $R^M$ in which all points have element values no less than $0$) is mapped (Subsection~\ref{sec_map_reference_vectors}), and the regularity-based model is built (Subsection~\ref{sec_model_build}) for repairing the diversity of the proposed algorithm (Subsection~\ref{sec_diversity_repair}), in which a set of solutions $R_t$ is generated. Based on $R_t$, new offspring are generated (Subsection~\ref{sec_generate_offspring}). With the help of the environmental selection operator (Subsection~\ref{sec_environmental_selection}), a set of solutions with better quality in convergence and diversity is obtained. Repairing the diversity, model updating, sampling new solutions, and environmental selection are performed one by one over a limited number of generations.
At last, a final selection is utilized to choose the representative solutions for the available slots (Subsection~\ref{sec_environmental_selection}). In addition, the maximum generation number, a set of uniformly distributed reference vectors, the neighbor size, and the respective thresholds for dimension reduction and model building need to be made available prior to running the proposed algorithm. In summary, the framework of the proposed algorithm is listed in Algorithm~\ref{alg_ireda}. \begin{algorithm} \caption{Main Framework of MaOEDA-IR} \label{alg_ireda} \KwIn{the maximum number of generations $t_{max}$, a set of unit reference vectors $\textbf{r}_0=\{r_{0,1},\cdots,r_{0,N}\}$, neighbor size $T$, dimension reduction threshold $\alpha$, regularity-based model threshold $\beta$; } \KwOut{final population $P$;} Reduce the dimension of decision space from $R^n$ to $R^k$\; \label{alg_framework_1} Create population $P_0$ with $TN$ individuals from $R^k$\; Evaluate the fitness of $P_0$\; Map reference vectors $\textbf{r}_0$ to $\textbf{v}_0=\{v_{0,1},\cdots,v_{0,N}\}$\; Build the regularity-based model $\Phi$\; \label{alg_framework_2} $t\leftarrow 0$\; \While{$t<t_{max}$} { $R_t\leftarrow$ Repair the diversity\; \label{alg_framework_3} $S\leftarrow$ Non-dominated selection from $P_t\cup R_t$ \label{alg_add_1}\; Map reference vectors $\textbf{v}_{t-1}$ to $\textbf{v}_t=\{v_{t,1},\cdots,v_{t,N}\}$ \label{alg_add_2}\; $Q_t\leftarrow$ Generate offspring\; $S\leftarrow$ Non-dominated selection from $P_t\cup R_t \cup Q_t$\label{alg_add_3}\; Update reference vectors $\textbf{v}_t=\{v_{t,1},\cdots,v_{t,N}\}$ \label{alg_add_4}\; $P_{t+1}\leftarrow$ Environmental selection from $Q_t\cup R_t \cup P_{t}$\; \label{alg_framework_4} $t\leftarrow$ $t+1$\; } $P\leftarrow$ Final selection from $P_{t_{max}}$\; \textbf{Return} $P$.
\end{algorithm} \subsection{Reducing the Dimension of Decision Space} \label{sec_dimension_reduction} The dimension reduction technique is used to reduce the volume of the exploration space so as to speed up the search when sampling new solutions. Ideally, the subspace of the PS is desirable. To this end, a set of Pareto-optimal solutions is suitable as the training data, and exploitation is then performed in the subspace. Note that the training data only require solutions with convergence; diversity is not necessary. Consequently, algorithms such as the conventional weighted aggregation method~\cite{fleming85computer} for problems with convex PFs, and the evolutionary dynamic weighted aggregation method~\cite{jin2001adapting} for problems with non-convex PFs, are suitable for this task. In this paper, the Pareto corner search evolutionary algorithm (PCSEA)~\cite{singh2011pareto} is employed because 1) a set of corner Pareto-optimal solutions, which can be used as the training data as well as the extreme points (the extreme points are employed for building the regularity-based model in Subsection~\ref{sec_model_build}), is obtained when PCSEA completes, 2) its source code is available, and 3) its computational cost is favorable compared to the other candidate algorithms for generating the extreme points and the training data. Moreover, PCSEA has been successfully employed to generate solutions for dimension reduction in the objective space in its seminal paper. In each generation of PCSEA, $2M$ lists are sorted in increasing order, where $M$ is the number of objectives. Specifically, the first $M$ lists concern the $M$ objectives, while the last $M$ lists concern the exclusive $L_2$ norm square. In particular, the exclusive $L_2$ norm square of the $i$-th objective has the form $\sum_{j=1,j\neq i}^{M}f_j^2$.
By assigning the solutions at the top of the lists a smaller rank value, the corner solutions are highlighted (more details can be found in~\cite{singh2011pareto}). When the evolution process of PCSEA is completed, the Pareto-optimal solutions, denoted by $S_p$, are selected from the population. For convenience of the development, a matrix $X$ is used to represent $S_p$. Specifically, each row of $X$ denotes one solution, while the columns refer to the different dimensions of the decision variables. The process of dimension reduction is then illustrated in Algorithm~\ref{alg_dimension_reduction}, which can be divided into two parts. The first part employs principal component analysis (PCA) to find the projected space in which $X$ retains $(\alpha\times 100)\%$ of its information, where $\alpha$ is a predefined threshold (lines~\ref{alg_dimension_reduction_begin_pca}-\ref{alg_dimension_reduction_end_pca}). After the transformed data are projected back to the original space (line~\ref{alg_dimension_reduction_restore}), the values in the reduced dimensions become $0$. Thereafter, the indexes $I$ of the reduced dimensions are selected, which is implemented in lines~\ref{alg_dimension_reduction_saved_begin}-\ref{alg_dimension_reduction_saved_end}. Then, $I$ is saved together with the mean value $\mu$ of $X$ for evaluating the fitness of solutions generated in the reduced decision variable space.
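Under the stated assumptions (each row of $X$ is one solution, and PCA retains an $\alpha$ fraction of the variance), the PCA-based reduction just described can be sketched as follows; the variable names are illustrative:

```python
import numpy as np

def reduce_dimensions(X, alpha=0.95):
    """Find decision-space dimensions whose values vanish after
    projecting the centered data onto the top principal components
    that retain an alpha fraction of the variance."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # principal components via SVD of the centered data
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), alpha)) + 1
    U = Vt[:k]                    # top-k principal directions
    X_hat = Xc @ U.T @ U          # project back to the original space
    # dimensions whose reconstructed columns are all zero are removable
    I = [j for j in range(X.shape[1]) if np.allclose(X_hat[:, j], 0)]
    return I, mu
```

The returned index set and mean play the roles of $I$ and $\mu$: solutions are later sampled without the removed dimensions and padded with the stored means before fitness evaluation.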
\begin{algorithm} \caption{Dimension Reduction} \label{alg_dimension_reduction} \KwIn{the matrix $X$ representing $S_p$, threshold $\alpha$;} \KwOut{the indexes of the removed dimensions, the mean value of $X$;} $\mu \leftarrow$ Compute the column mean value of $X$\; \label{alg_dimension_reduction_begin_pca} $X'\leftarrow $ Subtract $\mu$ from $X$\; $U\leftarrow $ Select principal components with threshold $\alpha$\; \label{alg_dimension_reduction_end_pca} $\hat{X}\leftarrow U'UX'$\; \label{alg_dimension_reduction_restore} $I\leftarrow\emptyset$\; \label{alg_dimension_reduction_saved_begin} \For{$j\leftarrow 1$ \rm{\textbf{to}} $n$} { $s\leftarrow$ compute the $j$-th column sum of $\hat{X}$\; \If{ $s == 0$} { $I\leftarrow I\cup j$\; } } \label{alg_dimension_reduction_saved_end} \textbf{Return} $I$, $\mu$. \end{algorithm} In particular, the main reason for using the original space rather than $U$ is to reduce the computational complexity. Specifically, $U$ is not the original decision space, and the solutions sampled from $U$ cannot be directly used for fitness evaluation. There are two ways to solve this problem. The first is to sample solutions from $U$ and then transform them into the original space whenever their fitness is evaluated. The other is to transform $U$ into the original space in advance and then sample solutions from the original space. With the first method, the transformation operation needs to be performed for each solution in each generation. With the second method, the transformation only needs to be done once. Obviously, the second method has the lower computational complexity. Next, we explain the mechanism of lines~\ref{alg_dimension_reduction_saved_begin}-\ref{alg_dimension_reduction_saved_end} in Algorithm~\ref{alg_dimension_reduction}. By using PCA, the generated principal space $U$ has fewer dimensions than the original space.
When $U$ is transformed to the original space, the values in the dimensions reduced by PCA are all zeros. Therefore, lines~\ref{alg_dimension_reduction_saved_begin}-\ref{alg_dimension_reduction_saved_end} in Algorithm~\ref{alg_dimension_reduction} find these dimensions by checking whether their column sums are zero. Once these dimensions are found, they are removed and their indices are stored. In Algorithm~\ref{alg_dimension_reduction}, $I$ denotes the indices of the reduced dimensions, and solutions are sampled from the space without these dimensions. When these solutions are used for fitness evaluation, the corresponding mean values (stored in $\mu$) are simply padded back into the reduced dimensions according to the stored indices. In most PCA-based methods, none of the solutions sampled from the reduced space needs to be handled in the original space. In the proposed design, however, the solutions must be transformed back to the original space for fitness evaluation. Consequently, the dimension reduction technique used here is quite different from a general PCA-based method. \subsection{Creating Population and Evaluating Fitness} \label{sec_create_evaluate} Assume the dimension of the reduced decision variable space is $k$ (obviously $k=n-|I|$). Based on the design principles of the proposed algorithm, each final solution is desired to be associated with one reference vector, and the model is built on the neighbor solutions of one particular reference vector. Therefore, the population is randomly initialized in $R^k$ with size $TN$. In order to evaluate the fitness, the created population needs to be translated back to $R^n$, which is demonstrated by Algorithm~\ref{alg_translate_pop}. Specifically, the translated population $P_0'$ has the same size as the initialized population $P_0$, and each individual in $P_0'$ is $n$-dimensional.
Furthermore, the values in the reduced dimensions are equal to the corresponding elements of the mean value vector $\mu$. To this end, $P_0'$ is first initialized in $R^{TN\times n}$ with zeros (line~\ref{alg_translate_pop_1}). Then, each row of $P_0'$ is set to $\mu$ (line~\ref{alg_translate_pop_2}). Finally, each column of $P_0$ is added to the corresponding column of $P_0'$, which is implemented by lines~\ref{alg_translate_pop_3}-\ref{alg_translate_pop_4}. Once the translation is performed, the fitness is evaluated by applying the translated population to the problem to be optimized. \begin{algorithm} \caption{Translate the Population} \label{alg_translate_pop} \KwIn{population $P_0\in R^k$, index of reduced dimension $I$, mean value $\mu$;} \KwOut{population $P_0'\in R^n$;} Initialize a matrix $P_0'\in R^{TN\times n}$ with zeros\; \label{alg_translate_pop_1} Copy $\mu$ to each row of $P_0'$\; \label{alg_translate_pop_2} $l\leftarrow$ $1$\; \For{$i\leftarrow$ $1$ \rm{\textbf{to}} $n$ and $i$ \rm{\textbf{not in}} $I$} { \label{alg_translate_pop_3} Add the $l$-th column of $P_0$ to the $i$-th column of $P_0'$\; $l\leftarrow l+1$\; } \label{alg_translate_pop_4} \textbf{Return} $P_0'$.
\end{algorithm} \subsection{Mapping Reference Vectors} \label{sec_map_reference_vectors} \begin{algorithm} \caption{Mapping Reference Vectors} \label{alg_mapping_reference_vectors} \KwIn{reference vectors $\textbf{r}_0$, data $S_p$ from Algorithm~\ref{alg_dimension_reduction};} \KwOut{mapped reference vectors $\textbf{v}_0$;} $F\leftarrow$ Calculate the objective values of $S_p$\; \label{alg_begin_extreme_points} $\textbf{z}^\textbf{u}\leftarrow$ $\emptyset$ \; \For{$i\leftarrow 1$ \rm{\textbf{to}} $M$} { $\textbf{v}\leftarrow[0,\cdots,0]$\; \For{\rm{\textbf{each}} $f$ \rm{\textbf{in}} $F$} { \If{$f_i>\textbf{v}_i$} { $\textbf{v}\leftarrow f$\; } } $\textbf{z}^\textbf{u}\leftarrow \textbf{z}^\textbf{u} \cup \textbf{v}$\; } \label{alg_end_extreme_points} $\textbf{z}^*\leftarrow\emptyset$\; \label{alg_begin_ideal_points} \For{$i\leftarrow 1$ \rm{\textbf{to}} $M$} { $v\leftarrow +\infty$\; \For{\rm{\textbf{each}} $f$ \rm{\textbf{in}} $F$} { \If{$f_i<v$} { $v\leftarrow f_i$\; } } $\textbf{z}^*\leftarrow \textbf{z}^*\cup v$\; } \label{alg_end_ideal_points} Update each $z$ in $\textbf{z}^\textbf{u}$ by $z\leftarrow z - \textbf{z}^*$\; \label{alg_begin_intercepts} $\textbf{a}=[a_1,\cdots,a_M]\leftarrow$ Find the intercepts of the hyperplane constructed by $\textbf{z}^\textbf{u}$\; \label{alg_end_intercepts} \For{$i\leftarrow 1$ \rm{\textbf{to}} $N$} { \label{alg_begin_mapping_rv} $v_{0,i}\leftarrow r_{0,i}\times \textbf{a} + \textbf{z}^*$\; } \label{alg_end_mapping_rv} \textbf{Return} $\textbf{v}_0=\{v_{0,1},\cdots,v_{0,N}\}$. \end{algorithm} Conventionally, Das and Dennis's method~\cite{das1998normal} is employed to generate the uniformly distributed reference vectors $\textbf{r}_0$, which are constructed in $R^M_+$. However, the PF of the problem to be optimized does not necessarily span the entire $R^M_+$ space. In order to keep the same number of reference vectors intersecting the PF, $\textbf{r}_0$ needs to be mapped.
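The mapping performed by Algorithm~\ref{alg_mapping_reference_vectors} can be sketched as follows. This is an illustrative NumPy version under our own naming, assuming the shifted extreme points form a non-degenerate hyperplane; the helper name and the toy front are assumptions:

```python
import numpy as np

def map_reference_vectors(R0, F):
    """Sketch of the mapping: rescale unit-simplex reference vectors R0 so
    they span the region covered by the objective matrix F (rows = points)."""
    M = F.shape[1]
    z_star = F.min(axis=0)                          # ideal point
    # extreme point for objective i: the point with the largest f_i
    Zu = np.array([F[np.argmax(F[:, i])] for i in range(M)])
    Zs = Zu - z_star                                # shift by the ideal point
    # intercepts a solve Zs @ (1/a) = 1 (hyperplane through extreme points)
    a = 1.0 / np.linalg.solve(Zs, np.ones(M))
    return R0 * a + z_star                          # map each reference vector

# toy front on the hyperplane f1 + 2*f2 = 9
F = np.array([[1.0, 4.0], [2.0, 3.5], [3.0, 3.0]])
R0 = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
V = map_reference_vectors(R0, F)
print(V)  # the mapped vectors land exactly on the front points
```

On this toy front the three mapped vectors coincide with the three front points, which is precisely the behavior the mapping is designed for.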
To illustrate this motivation, an example in the $2$-dimensional space is plotted in Fig.~\ref{fig_motivation_map_rf}, in which nine blue lines denote the reference vectors and line $AB$ denotes the PF. Specifically, Fig.~\ref{fig_mapping_before} shows the reference vectors generated by Das and Dennis's method, of which only four intersect $AB$, while Fig.~\ref{fig_mapping_after} shows that all nine reference vectors intersect $AB$ after they are mapped. Because the final solutions in the proposed algorithm are selected by their corresponding reference vectors, the design in Fig.~\ref{fig_mapping_after} is clearly preferred. Algorithm~\ref{alg_mapping_reference_vectors} presents the details of mapping the reference vectors, which can be divided into four steps. First, the objective values of the training data for dimension reduction in Algorithm~\ref{alg_dimension_reduction} and the extreme points (denoted by $\textbf{z}^\textbf{u}$) are calculated (lines~\ref{alg_begin_extreme_points}-\ref{alg_end_extreme_points}). Second, the ideal point is derived by selecting the minimum value in each objective, which is implemented in lines~\ref{alg_begin_ideal_points}-\ref{alg_end_ideal_points}. Note that this approach to calculating the ideal point is also utilized in~\cite{yuan2015balancing,cheng2016reference,deb2009hybrid,deb2006towards,wang2015nadir}. Third, the extreme points are updated by subtracting the ideal point, and the intercepts are calculated (lines~\ref{alg_begin_intercepts}-\ref{alg_end_intercepts}) by solving the equation $\textbf{z}^\textbf{u}\times (1/a)=I$, where $I$ is the all-one vector\footnote{Repetitive points existing in $\textbf{z}^\textbf{u}$ will lead to multiple solutions of this equation. To avoid this, the trick introduced in~\cite{yuan2014evolutionary} is employed.}. Fourth, the reference vectors uniformly generated in $R^M_+$ are mapped by lines~\ref{alg_begin_mapping_rv}-\ref{alg_end_mapping_rv} into the mapped reference vectors $\textbf{v}_0$. Note that the last two steps are necessary; otherwise, only the origin of the coordinate system would be moved to the ideal point, and there would still be an insufficient number of reference vectors intersecting line $AB$ (see Fig.~\ref{fig_no_mapping}). \begin{figure*}[htp] \begin{center} \subfloat[]{\includegraphics[width=0.5\columnwidth]{fig1}\label{fig_mapping_before}} \hfil \subfloat[]{\includegraphics[width=0.5\columnwidth]{fig2}\label{fig_mapping_after}} \hfil \subfloat[]{\includegraphics[width=0.5\columnwidth]{fig1_5}\label{fig_no_mapping}} \caption{A $2$-dimensional example to illustrate the motivation of mapping the reference vectors. Specifically, there are nine reference vectors (blue lines) generated by Das and Dennis's method, and line $AB$ denotes the Pareto front. The reference vectors without mapping are plotted in Fig.~\ref{fig_mapping_before}, in which only four reference vectors intersect $AB$, while the mapped reference vectors are plotted in Fig.~\ref{fig_mapping_after}, in which all the reference vectors intersect $AB$ (i.e., the situation after Algorithm~\ref{alg_mapping_reference_vectors} has been performed). Fig.~\ref{fig_no_mapping} shows the situation where only lines~\ref{alg_begin_extreme_points}-\ref{alg_begin_intercepts} in Algorithm~\ref{alg_mapping_reference_vectors} have been performed.} \label{fig_motivation_map_rf} \end{center} \end{figure*} \subsection{Building the Regularity-based Model} \label{sec_model_build} Generally, the regularity-based model is composed of multiple sub-models due to the complexity of the regularity, for which a single unified model can hardly capture the global intrinsic relation exactly~\cite{zhang2008rm,zhang2006modelling}.
In this proposed algorithm, each sub-model is built based on one reference vector with its neighbor solutions, and Algorithm~\ref{alg_model_build} presents the details. \begin{algorithm} \caption{Build the $j$-th Regularity-based Sub-model} \label{alg_model_build} \KwIn{neighbor solutions $S_n$ of the $j$-th reference vector, threshold $\beta$, enlargement factor $\gamma$;} \KwOut{model $\Phi$;} Let matrix $X$ denote $S_n$\; \label{alg_model_build_1_begin} $\mu \leftarrow$ Compute the column mean value of $X$\; $X'\leftarrow$ Subtract $\mu$ from each row in $X$\; \label{alg_model_build_1_end} $[\lambda_1,\cdots,\lambda_M],[v_1,\cdots,v_M]\leftarrow$ Eigen-factorize the covariance matrix of $X$ and sort the eigenvalues and eigenvectors in descending order\; \label{alg_model_build_2} $i\leftarrow 0$\; \label{alg_model_build_3_begin} \While{$(\lambda_1+\cdots+\lambda_i)/(\lambda_1+\cdots+\lambda_M) < \beta$} { $i\leftarrow i + 1$\; } \label{alg_model_build_3_end} $y\leftarrow X'\times [v_1,\cdots,v_i]$\; \label{alg_model_build_4} $l,u\leftarrow$ Find the minimum and the maximum values in each column of $y$\; \label{alg_model_build_5} $\Omega\leftarrow \left\{\mu+\sum_{j=1}^i\tau_j v_j, \; l_j-\gamma(u_j-l_j)\leq \tau_j \leq u_j+\gamma(u_j-l_j)\right\}$\; \label{alg_model_build_6} $\epsilon \leftarrow \frac{1}{M-i}\sum_{j=i+1}^M\lambda_j$\; \label{alg_model_build_7} \textbf{Return} $\Phi=\Omega+\epsilon$. \end{algorithm} To be specific, given the neighbor solutions $S_n$ of the $j$-th reference vector, the threshold $\beta$, and the enlargement factor $\gamma$, the steps of building the $j$-th sub-model are as follows. First, $S_n$ is represented by the matrix $X$ and centered (lines~\ref{alg_model_build_1_begin}-\ref{alg_model_build_1_end}). Then the eigenvalues as well as the eigenvectors of $X$ are obtained and sorted in descending order of the eigenvalues (line~\ref{alg_model_build_2}).
Lines~\ref{alg_model_build_3_begin}-\ref{alg_model_build_3_end} demonstrate the search for the principal components, onto which the centered data is projected (line~\ref{alg_model_build_4}). Next, the projected space is bounded by finding the minimum and maximum values of the projections (line~\ref{alg_model_build_5}), and the latent space for generating new offspring is obtained with the enlargement factor and the principal components (line~\ref{alg_model_build_6}). Finally, the noise for the latent space is computed from the mean of the eigenvalues of the non-principal components (line~\ref{alg_model_build_7}), and the regularity-based model is obtained. \subsection{Repairing the Diversity} \label{sec_diversity_repair} Diversity repairing samples new solutions to mitigate the adverse effect of the phenomenon that some reference vectors lack associated solutions. To this end, all the non-dominated individuals in the current population are first enumerated to find their respective nearest reference vectors, which is implemented by lines~\ref{alg_repair_diversity_begin_1}-\ref{alg_repair_diversity_end_1}. Then lines~\ref{alg_repair_diversity_begin_2}-\ref{alg_repair_diversity_end_2} demonstrate the selection of neighbor solutions. In addition, the model building and new solution sampling are presented in line~\ref{alg_repair_diversity_3} and lines~\ref{alg_repair_diversity_begin_4}-\ref{alg_repair_diversity_end_4}, respectively. Note that in the phase of selecting neighbor solutions for reference vector $v_{t,i}$, its neighbors are drawn from both the current population $S_t$ and the non-dominated solutions in $S_t$; the motivation for this is discussed in Subsection~\ref{sec_discussion}. In addition, the size of the neighbor set is not necessarily $T$; i.e., it becomes $T+1$ when the selected non-dominated neighbor solution is not already among the $T$ solutions chosen in line~\ref{alg_repair_diversity_begin_2}.
The reasons that the neighbor set size is not strictly kept at $T$ are that 1) it does not affect the built model, and 2) removing the extra solution would incur extra computational cost. \begin{algorithm} \caption{Repair the Diversity} \label{alg_repair_diversity} \KwIn{current population $P_t$, reference vectors $\textbf{v}_t$, neighbour size $T$;} \KwOut{new solutions $R_t$;} $S=\{s_1,\cdots,s_k\}\leftarrow$ Non-dominated selection from $P_t$\; $I\leftarrow \emptyset$\; \label{alg_repair_diversity_begin_1} \For{\rm\textbf{each} $s$ \rm\textbf{in} $S$} { $I\leftarrow I\cup\argmin_{i} ||s-v_{t,i}sv_{t,i}^T/(v_{t,i}v_{t,i}^T)||$\; } \label{alg_repair_diversity_end_1} $R_t\leftarrow \emptyset$\; \For{$i\leftarrow 1$ \rm\textbf{to} $N$ \rm\textbf{and} $i$ \rm\textbf{not in} $I$} { $\text{neighbour}(i)\leftarrow$ Select $T$ solutions from the current population that have the smallest perpendicular distances to $v_{t,i}$\; \label{alg_repair_diversity_begin_2} $j\leftarrow \argmin_{j} ||s_j-v_{t,i}s_jv_{t,i}^T/(v_{t,i}v_{t,i}^T)||$\; $\text{neighbour}(i)\leftarrow \text{neighbour}(i)\cup s_j$\; \label{alg_repair_diversity_end_2} Build model $\Phi_i=\Omega_i+\epsilon_i$ with $\text{neighbour}(i)$\; \label{alg_repair_diversity_3} $A\leftarrow$ Uniformly sample $T$ points in $[l-\gamma(u-l), u+\gamma(u-l)]$ from $\Omega_i$\; \label{alg_repair_diversity_begin_4} $B\leftarrow$ Sample points from the normal distribution with mean $0$ and standard deviation $\sqrt{\epsilon_i}$\; $R\leftarrow (A+B)$\; \label{alg_repair_diversity_end_4} $R_t\leftarrow R_t\cup R$\; } \textbf{Return} $R_t$. \end{algorithm} \subsection{Generating Offspring} \label{sec_generate_offspring} After the diversity is repaired, the solution set $R_t$ is generated. Then, the non-dominated solutions $S$ are selected from $R_t$ and the current population $P_t$ (line~\ref{alg_add_1} of Algorithm~\ref{alg_ireda}).
Next, reference vectors $\textbf{v}_t=\{v_{t,1}, \cdots, v_{t,N}\}$ are mapped (line~\ref{alg_add_2} of Algorithm~\ref{alg_ireda}), which is performed by Algorithm~\ref{alg_mapping_reference_vectors} with the input parameters $\textbf{v}_{t-1}$ and $S$. Finally, offspring $Q_t$ are generated by Algorithm~\ref{alg_offspring_generate}. In summary, generating offspring can be viewed as diversity repairing applied to each reference vector, which can be seen from the analogy between lines~\ref{alg8_begin}-\ref{alg8_end} of Algorithm~\ref{alg_offspring_generate} and lines~\ref{alg_repair_diversity_begin_2}-\ref{alg_repair_diversity_end_4} of Algorithm~\ref{alg_repair_diversity}. However, the motivation of generating offspring is different from that of diversity repairing, which will be discussed in Subsection~\ref{sec_discussion}. In addition, updating the reference vectors is motivated by achieving better performance of the proposed algorithm, although it has been reported that PCSEA is capable of finding the approximated corner solutions from which the extreme points are derived~\cite{singh2011pareto}.
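The shared core of Algorithms~\ref{alg_model_build}, \ref{alg_repair_diversity}, and~\ref{alg_offspring_generate} (fit a sub-model to a neighbor set, then sample from the enlarged box plus Gaussian noise) can be sketched as follows. This is an illustrative NumPy version under our own naming; the noise variance is taken as the mean of the discarded eigenvalues, and the toy neighbor set is an assumption:

```python
import numpy as np

def sample_from_submodel(Xn, beta=0.95, gamma=0.1, n_samples=5, rng=None):
    """Fit a regularity-based sub-model to neighbour solutions Xn (T x n)
    and sample new offspring from it (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    mu = Xn.mean(axis=0)
    Xc = Xn - mu
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    # smallest i whose leading eigenvalues cover a beta fraction of variance
    i = int(np.searchsorted(np.cumsum(vals) / vals.sum(), beta)) + 1
    Y = Xc @ vecs[:, :i]                       # projections onto the subspace
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    lo, hi = lo - gamma * (hi - lo), hi + gamma * (hi - lo)  # enlarged box
    eps = vals[i:].mean() if i < len(vals) else 0.0          # noise variance
    tau = rng.uniform(lo, hi, size=(n_samples, i))
    noise = rng.normal(0.0, np.sqrt(max(eps, 0.0)),
                       size=(n_samples, Xn.shape[1]))
    return mu + tau @ vecs[:, :i].T + noise    # offspring in decision space

# neighbours lying on a 1-D manifold in R^3: (t, 2t, 0.3)
t = np.linspace(0.0, 1.0, 8)
Xn = np.stack([t, 2.0 * t, np.full_like(t, 0.3)], axis=1)
offspring = sample_from_submodel(Xn)
print(offspring.shape)  # (5, 3)
```

Because the toy neighbors vary only along one direction, the sampled offspring stay close to that line, with the third coordinate near its constant value.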
\begin{algorithm} \caption{Generate Offspring} \label{alg_offspring_generate} \KwIn{non-dominated solutions $S=\{s_1,\cdots,s_k\}$, neighbour size $T$, reference vectors $\textbf{v}_{t}$;} \KwOut{offspring $Q_t$;} $Q_t\leftarrow \emptyset$\; \For{$i\leftarrow 1$ \rm\textbf{to} $N$} { $\text{neighbour}(i)\leftarrow$ Select $T$ solutions from $S$ that have the smallest perpendicular distances to $v_{t,i}$\; \label{alg8_begin} $j\leftarrow \argmin_{j} ||s_j-v_{t,i}s_jv_{t,i}^T/(v_{t,i}v_{t,i}^T)||$\; $\text{neighbour}(i)\leftarrow \text{neighbour}(i)\cup s_j$\; Build model $\Phi_i=\Omega_i+\epsilon_i$ with $\text{neighbour}(i)$\; $A\leftarrow$ Uniformly sample $T$ points in $[l-\gamma(u-l), u+\gamma(u-l)]$ from $\Omega_i$\; $B\leftarrow$ Sample points from the normal distribution with mean $0$ and standard deviation $\sqrt{\epsilon_i}$\; $Q\leftarrow (A+B)$\; \label{alg8_end} $Q_t\leftarrow Q_t\cup Q$\; } \textbf{Return} $Q_t$. \end{algorithm} \subsection{Environmental Selection and Final Selection} \label{sec_environmental_selection} After offspring $Q_t$ are generated, non-dominated solutions $S$ are selected from the current population, i.e., $P_t\cup R_t\cup Q_t$ (line~\ref{alg_add_3} of Algorithm~\ref{alg_ireda}). Then, reference vectors $\textbf{v}_t=\{v_{t,1}, \cdots, v_{t,N}\}$ are updated (line~\ref{alg_add_4} of Algorithm~\ref{alg_ireda}) by performing Algorithm~\ref{alg_mapping_reference_vectors} with $S$ and the current reference vectors as inputs. Next, the environmental selection is performed. Specifically, the environmental selection aims at removing ill-fit solutions from the current population and maintaining a limited number of representative individuals to reduce the computational cost in each generation. The final selection is to choose the best-fit solutions. Moreover, the two selections are closely related in the proposed algorithm.
In the framework of the proposed algorithm, it is assumed that $N$ solutions are needed by the decision-maker when the algorithm finishes. Because the proposed algorithm is based on the regularity statistics of the $T$ neighbors of each of these $N$ solutions, the number of solutions for building the model should be $TN$. In addition, extra solutions are incorporated in each generation from the phases of diversity repairing and offspring generation. To this end, the purpose of the environmental selection is to maintain a population of the same size as the initialized population, while the final selection is to select the final $N$ solutions. It is expected that each reference vector has $T$ neighbor solutions after the environmental selection, which is described by Algorithm~\ref{alg_environment_selection}. Specifically, the final selection is implemented by replacing $T$ with $1$ in the environmental selection. \begin{algorithm} \caption{Environmental Selection} \label{alg_environment_selection} \KwIn{non-dominated solutions $S=\{s_1,\cdots,s_k\}$, neighbour size $T$, reference vectors $\textbf{v}_{t}$;} \KwOut{$P_{t+1}$;} $L_1,\cdots,L_N\leftarrow\emptyset$\; \label{alg_es_begin_1} \For{\rm\textbf{each} $s$ \rm\textbf{in} $S$} { $i\leftarrow\argmin_{i} ||s-v_{t,i}sv_{t,i}^T/(v_{t,i}v_{t,i}^T)||$\; $L_i\leftarrow L_i\cup s$\; } \label{alg_es_end_1} \For{$i\leftarrow 1$ \rm\textbf{to} $N$} { \label{alg_es_begin_2} \If{$|L_i| > T$} { $\{F_1,\cdots, F_l\}\leftarrow$ Non-dominated sorting of the solutions in $L_i$\; $k\leftarrow 1$, $L_i\leftarrow \emptyset$\; \While{$|L_i|+|F_k| \leq T$} { $L_i\leftarrow L_i\cup F_k$\; $k\leftarrow k + 1$\; } \If{$|L_i|<T$} { $D\leftarrow$ Select $T-|L_i|$ solutions from $F_k$ that have the smallest perpendicular distances to $v_{t,i}$\; $L_i\leftarrow L_i\cup D$\; } } } \label{alg_es_end_2} Remove the solutions in $L_1\cup\cdots\cup L_N$ from the current population\; \label{alg_es_4} \For{$i\leftarrow 1$
\rm\textbf{to} $N$} { \label{alg_es_begin_3} \uIf{$|L_i| < T$} { $D\leftarrow$ Select $T-|L_i|$ solutions from the current population that have the smallest perpendicular distances to $v_{t,i}$\; $L_i\leftarrow L_i\cup D$\; }\Else{ $D\leftarrow$ Select $T$ solutions from $L_i$ that have the smallest perpendicular distances to $v_{t,i}$\; } Remove the solutions in $D$ from the current population\; \label{alg_es_5} } \label{alg_es_end_3} \textbf{Return} $P_{t+1}=L_1\cup\cdots\cup L_N$. \end{algorithm} In summary, the environmental selection covers two steps. The first step assigns the non-dominated solutions to the reference vectors to which they have the smallest perpendicular distances (lines~\ref{alg_es_begin_1}-\ref{alg_es_end_1} of Algorithm~\ref{alg_environment_selection}). The other sets $T$ solutions for each reference vector. Specifically, non-dominated sorting and truncated selection are employed when more than $T$ solutions are assigned to one reference vector (lines~\ref{alg_es_begin_2}-\ref{alg_es_end_2} of Algorithm~\ref{alg_environment_selection}); otherwise, the necessary solutions are selected from the current population based on the smallest perpendicular distances (lines~\ref{alg_es_begin_3}-\ref{alg_es_end_3} of Algorithm~\ref{alg_environment_selection}). Note that selected solutions are removed from the current population to avoid being re-selected (lines~\ref{alg_es_4} and \ref{alg_es_5} of Algorithm~\ref{alg_environment_selection}). \subsection{Computational Complexity} \label{sec_computational_complexity} The complexity of the proposed algorithm is analyzed in the context of the problem defined by Equation~(\ref{equation_mops}). For convenience, the number of solutions for PCSEA to generate the training data for dimension reduction is set to be the same as that in the final selection of the proposed algorithm. Consequently, the computational complexity of PCSEA is $O(MN\log N)$.
For the dimension reduction, the dominant cost is computing the eigenvalues and eigenvectors, which takes $O((TN)^3)$ operations; hence the computational complexity of the dimension reduction is $O((TN)^3)$. In addition, creating the population and evaluating the fitness take $O(TNk)$ and $O(TNM)$ time, respectively. The computational complexity of mapping the reference vectors is $O(TNM)$. Moreover, the complexity of building the model is mainly contributed by the matrix factorization, whose complexity is $O(T^3)$. Briefly, lines~\ref{alg_framework_1}-\ref{alg_framework_2} in Algorithm~\ref{alg_ireda} take $O((TN)^3)$ time. Furthermore, the model building, the non-dominated selection from $P_t\cup R_t$, and the non-dominated selection from $P_t\cup R_t \cup Q_t$ dominate the computational complexity of the diversity repairing, offspring generation, and environmental selection. As a consequence, the computational complexity of lines~\ref{alg_framework_3}-\ref{alg_framework_4} is $O(MN^2)$ or $O(T^3)$, whichever is greater. In addition, the final selection takes $O(MN^2)$ time. Generally, both the neighbor size $T$ and the maximum number of generations are set to be of order of magnitude $1$. In summary, the computational complexity of the proposed algorithm is $O((TN)^3)$, where $T$ is the neighbor size and $N$ is the number of solutions the decision-makers require. \subsection{Discussion} \label{sec_discussion} In the proposed algorithm, two sub-components, dimension reduction and diversity repairing, are carefully designed to guarantee the performance. In this section, their design motivations are discussed first, and then the experimental verification is presented in Sections~\ref{sec_investigation_diversity_raparation} and~\ref{sec_investigation_dimension_reduction}.
First, redundancy exists in high dimensional data, and the data from a small subset of the dimensions is often sufficient to represent it, as exploited in the feature selection discipline. In order to obtain such low dimensional data, a dimension reduction technique is utilized. Employing the low dimensional data instead of the original high dimensional data can significantly benefit its utilization, for example by lowering the computational cost and improving the precision through removing the interference from the elements in the redundant dimensions. Furthermore, a series of approaches~\cite{brockhoff2010automated,lopez2008objective,brockhoff2008handling,saxena2007non,saxena2013objective,guo2012new,brockhoff2009objective,singh2011pareto} have been proposed to reduce the number of objectives in MaOPs, so that state-of-the-art MOEAs can then be utilized to solve them. Generally, the number of decision variables is greater than the number of objectives in MaOPs, such as the $M$-objective DTLZ7~\cite{deb2005scalable} with $19+M$ decision variables, and also in MOPs, for example, the $2$-objective ZDT1 problem~\cite{zitzler2000comparison} with $30$ decision variables. Actually, it is no surprise that the number of decision variables exceeds the number of objectives, because it is difficult to determine which particular factors affect the response, and a safer modeling practice is to include all the observed factors, which usually results in a larger number of variables. As a result, dimension reduction in the decision space is appropriate and well justified, especially for the proposed algorithm, which is based on the regularity of the decision space. Moreover, the Pareto-optimal solutions can be viewed as the features of the decision space in solving MaOPs, and if the subspace of the PS is obtained, subsequent operations can be constrained to this subspace to reduce the cost of exploration.
Basically, this subspace is obtained by reducing the dimension of the Pareto-optimal solutions. Ideally, only the diversity would need to be addressed if the exact subspace of the PS were obtained, which is not the case in practice. To this end, extra components are incorporated in the proposed algorithm for improving the convergence, such as the reference vector updating and the non-dominated selection used in model building, diversity repairing, offspring generation, environmental selection, and final selection. To intuitively understand the diversity repairing mechanism in the proposed algorithm, an example on the $3$-objective DTLZ1 problem~\cite{deb2005scalable} is illustrated in Fig.~\ref{fig_diversity_repairing}. To be specific, twelve different markers in red are shown in Fig.~\ref{fig_alg_1}, identifying a set of uniformly distributed reference vectors in the objective space. The corresponding solutions (given the same markers) in the decision space with the reduced dimensions are shown in Fig.~\ref{fig_alg_2}; these are not necessarily uniformly distributed. For each given solution in the decision space, nine different neighbors are chosen and displayed with the same marker, as shown in Fig.~\ref{fig_alg_3}. However, not all the reference vectors are necessarily assigned solutions in one generation, due to the heuristic nature of evolution. This can be seen from Fig.~\ref{fig_alg_4}, in which blue markers denote all the solutions in this generation and the area enclosed by the ellipse indicates that there is no solution assigned to the corresponding reference vector. To this end, the diversity repairing mechanism is activated by using the neighbor solutions. The ``cross'' markers in Fig.~\ref{fig_alg_5} are the solutions selected for generating new solutions. These solutions are then used for diversity repairing, and the generated solutions are shown as ``hollow circle'' markers in Fig.~\ref{fig_alg_6}.
Intuitively, diversity repairing improves the diversity by supplying solutions to the unassigned reference vectors. In fact, the convergence is also strengthened in this phase, because among the newly generated solutions a well-converged one is assigned to the reference vector that previously lacked an associated solution. In summary, both convergence and diversity are promoted by the diversity repairing mechanism. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=1.5in]{ALG_1}\label{fig_alg_1}} \hfil \subfloat[]{\includegraphics[width=1.5in]{ALG_2}\label{fig_alg_2}} \hfil \subfloat[]{\includegraphics[width=1.5in]{ALG_3}\label{fig_alg_3}} \hfil \subfloat[]{\includegraphics[width=1.5in]{ALG_4}\label{fig_alg_4}} \hfil \subfloat[]{\includegraphics[width=1.5in]{ALG_5}\label{fig_alg_5}} \hfil \subfloat[]{\includegraphics[width=1.5in]{ALG_6}\label{fig_alg_6}} \caption{A schematic diagram of the diversity repairing mechanism. In Figs.~\ref{fig_alg_2}-\ref{fig_alg_3}, the blue markers denote the solutions in the decision variable space with the reduced dimensions, and their corresponding objectives are plotted in the same color in Fig.~\ref{fig_alg_1}. A set of uniformly distributed reference vectors is plotted in Fig.~\ref{fig_alg_1} in red, and their corresponding solutions and neighbors are plotted with the same shapes in Figs.~\ref{fig_alg_2} and \ref{fig_alg_3}, respectively. The blue markers in Figs.~\ref{fig_alg_4}-\ref{fig_alg_6} denote all the solutions in the reduced decision variable space in one generation. Especially, the area enclosed by the ellipse indicates that there is no solution assigned to the corresponding reference vector.
The cross markers in Fig.~\ref{fig_alg_5} are the solutions selected for generating new solutions, which are plotted in Fig.~\ref{fig_alg_6} with solid markers to repair the diversity of the corresponding reference vector.} \label{fig_diversity_repairing} \end{center} \end{figure} Normally, the neighbor solutions for one particular reference vector consist of one non-dominated solution and the nearest solutions to this reference vector from the current population. With this neighbor solution assignment, it is expected that solutions with both convergence and diversity are sampled from the built model. Moreover, Fig.~\ref{fig_neighbour_assigment} highlights our motivation for this design. Specifically, the blue line denotes the reference vector, the black circles $A$, $B$, $C$, $D$, and $E$ denote the current population, the rectangular area denotes the built model, and the red circles denote the sampled offspring. Fig.~\ref{fig_neighbour_assigment_1} plots the case where only the solutions with the smallest perpendicular distances to the reference vector are used, while Fig.~\ref{fig_neighbour_assigment_2} depicts the case where the non-dominated solution is included in the neighbor solutions. It is obvious that solutions with good diversity and convergence are generated in Fig.~\ref{fig_neighbour_assigment_2}. \begin{figure} \begin{center} \subfloat[]{\includegraphics[width=0.6\columnwidth]{diversity_neighbor_1}\label{fig_neighbour_assigment_1}} \hfil \subfloat[]{\includegraphics[width=0.6\columnwidth]{diversity_neighbor_2}\label{fig_neighbour_assigment_2}} \caption{An example highlighting the quality of the generated solutions when the non-dominated solution $A$ is included in the neighbor solutions (see Fig.~\ref{fig_neighbour_assigment_2}) or not (see Fig.~\ref{fig_neighbour_assigment_1}).
Specifically, the blue line denotes the reference vector, $A$, $B$, $C$, $D$, and $E$ denote the current population, the rectangular area denotes the built model based on the selected neighbor solutions, and the red circles are the generated solutions.} \label{fig_neighbour_assigment} \end{center} \end{figure} \section{Experiments} \label{section_4} To demonstrate the quality of the proposed algorithm, a series of experiments are designed and performed on $8$ test problems, which are from two benchmark test suites, DTLZ~\cite{deb2005scalable} and DTLZ$^{-1}$~\cite{ishibuchiperformance}, with $3$, $5$, $8$, $10$, and $15$ objectives. Since the proposed MaOEDA-IR is an EDA-based algorithm for solving MaOPs, state-of-the-art algorithms covering two categories, 1) traditional MaOEAs (NSGA-III~\cite{deb2014evolutionary}, MOEA/D~\cite{zhang2007moea}, GrEA~\cite{yang2013grid}, and HypE~\cite{bader2011hype}) and 2) EDA-based evolutionary algorithms (MBN-EDA~\cite{karshenas2014multiobjective} and RM-MEDA~\cite{zhang2008rm}), are chosen as peer competitors for comparison with the proposed MaOEDA-IR. In the following subsections, the selected benchmark test problems are introduced first. Then, the performance indicators chosen to measure the results generated by the compared algorithms are documented. Next, the parameter settings utilized by the compared algorithms are specified. Finally, experiments on the compared algorithms are performed, and their results, measured by the selected performance indicators, are analyzed. In addition, empirical experiments investigating the diversity repairing, the dimension reduction, and the neighbor size are performed to highlight their benefits and to promote efficacy in addressing real-world problems. \subsection{Benchmark Test Problems} \label{sec_benchmark_test_problems} DTLZ1-DTLZ4, which are from the scalable benchmark test suite DTLZ, are considered as the test instances in these experiments.
Specifically, each $M$-objective test problem has $n=M+k-1$ decision variables, where $k$ is specified as $5$ for DTLZ1 and $10$ for DTLZ2-DTLZ4. Furthermore, the Pareto-optimal solutions of DTLZ1-DTLZ4 in the normalized $M$-dimensional objective space have the form given by Equation~\ref{equ_pareto_optimal_solution_dtlz1-4}. \begin{equation} \label{equ_pareto_optimal_solution_dtlz1-4} \sum_{i=1}^M f_i(x)^p = 1 \end{equation} where $p=1$ for DTLZ1 and $p=2$ for DTLZ2-DTLZ4. Because the employed reference vectors $r_{0,i}=[r_{0,i}^1,\cdots, r_{0,i}^M]$, generated by the systematic method of Das and Dennis in the proposed algorithm, satisfy $\sum_{j=1}^Mr_{0,i}^j=1$, which is similar to Equation~\ref{equ_pareto_optimal_solution_dtlz1-4}, the DTLZ test problems are considered less challenging. To this end, the DTLZ1$^{-1}$-DTLZ4$^{-1}$ problems from the DTLZ$^{-1}$ test suite, a variant of DTLZ obtained by negating each objective of the corresponding DTLZ problem, are included in the considered benchmark test problems for their more complicated PF shapes, which especially challenge reference-vector-based algorithms. \subsection{Performance Metrics} \label{sec_performance_metrics} Two widely used performance metrics, the Inverted Generational Distance (IGD)~\cite{bosman2003balance} and the Hypervolume (HV)~\cite{zitzler1999multiobjective}, which simultaneously quantify the convergence and diversity of the algorithms, are adopted in these experiments. The results generated by the compared algorithms are normalized to $[0,1]$ prior to applying the performance indicators, in the same manner as~\cite{yuan2016new}. In addition, $100,000$ reference points are uniformly sampled from Equation~\ref{equ_pareto_optimal_solution_dtlz1-4} for the calculation of IGD, and $[1.1,\cdots,1.1]$ is specified as the reference point for the calculation of HV.
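For concreteness, the IGD indicator used here can be sketched in a few lines: it averages, over the reference points sampled from the true Pareto front, the distance to the nearest obtained solution (smaller is better). The coarse front sampling below is illustrative only, not the $100{,}000$-point sampling used in the experiments, and the function names are ours.

```python
import math

def igd(reference_points, solutions):
    """Inverted Generational Distance: average distance from each
    point sampled on the true Pareto front to its nearest solution."""
    total = 0.0
    for r in reference_points:
        total += min(math.dist(r, s) for s in solutions)
    return total / len(reference_points)

# Reference points sampled on a linear DTLZ1-style front f1 + f2 = 1
# (p = 1 in the front equation); a coarse grid for illustration only.
front = [(i / 10, 1 - i / 10) for i in range(11)]

# A perfectly converged, well-spread solution set has IGD = 0.
print(igd(front, front))  # -> 0.0
```

A sparse or poorly converged solution set yields a strictly positive IGD, which is why the indicator captures convergence and diversity simultaneously.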
Furthermore, because the computational cost of the exact HV calculation increases dramatically as the number of objectives grows, Monte Carlo simulation~\cite{bader2011hype} is applied for the calculation of HV when $M\geq 10$; otherwise, the exact approach proposed in~\cite{while2012fast} is utilized. \subsection{Parameter Settings} \label{sec_parameter_settings} In this subsection, the parameter settings are presented. First, the general settings shared by most compared algorithms are listed. Thereafter, special settings for particular algorithms are specified. \subsubsection{Crossover and Mutation} SBX~\cite{deb1995simulated} and polynomial mutation~\cite{deb2001multi} are employed as the crossover and mutation operators, respectively. Furthermore, the SBX probability, the polynomial mutation probability, and the crossover distribution index are set to $1.0$, $1/n$ ($n$ is the number of decision variables), and $20$, respectively. In addition, the distribution index of NSGA-III is set to $30$ according to the suggestions in~\cite{deb2014evolutionary}, while the others are set to $20$. \subsubsection{Population Size} In principle, the population size can be set arbitrarily. However, reference-vector-assisted algorithms, such as NSGA-III, require the population size to equal the number of reference vectors, so the other peer algorithms adopt the same population size for a fair comparison. Furthermore, only boundary reference vectors are generated when the number of divisions is less than $M$ in the reference vector sampling phase. To this end, the two-layer approach~\cite{deb2014evolutionary} is employed for generating the reference vectors. In addition, the implementations of GrEA and NSGA-III require the population size to be a multiple of $4$. In summary, the settings for the reference vectors and the population size are listed in Table~\ref{table_population_size}.
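The reference-vector counts in Table~\ref{table_population_size} follow from the simplex-lattice combinatorics of Das and Dennis's method: $H$ divisions in $M$ objectives yield $\binom{H+M-1}{M-1}$ points, and the two-layer approach sums an outer and an inner layer. A small sketch (the function names are ours) reproduces those counts:

```python
from math import comb

def das_dennis_count(M, H):
    # Number of weight vectors with M nonnegative integer parts
    # summing to H divisions: C(H + M - 1, M - 1).
    return comb(H + M - 1, M - 1)

def two_layer_count(M, H1, H2=None):
    # Two-layer scheme: an outer (boundary) layer with H1 divisions
    # plus an optional inner layer with H2 divisions.
    n = das_dennis_count(M, H1)
    if H2 is not None:
        n += das_dennis_count(M, H2)
    return n

print(two_layer_count(3, 14))    # -> 120
print(two_layer_count(8, 3, 2))  # -> 156
print(two_layer_count(15, 2, 1)) # -> 135
```

These values match the third column of Table~\ref{table_population_size}; GrEA and NSGA-III then round the population size up to the nearest multiple of $4$.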
\begin{table}[ht] \caption{Settings for reference vectors and population size.} \label{table_population_size} \begin{center} \begin{tabular}{p{0.05\columnwidth}<{\centering}|p{0.06\columnwidth}<{\centering}|p{0.06\columnwidth}<{\centering}|p{0.2\columnwidth}<{\centering}|p{0.35\columnwidth}<{\centering}} \hline \multirow{2}{*}{$M$}& \multicolumn{2}{c|}{\# of division} & \# of reference &population size of\\ \cline{2-3} & $H_1$ & $H_2$ & vectors& GrEA and NSGA-III\\ \hline 3 & 14 & - & 120 & 120 \\ 5 & 5 & - & 126 & 128 \\ 8 & 3 & 2 & 156 & 156\\ 10 & 2 & 2 & 110 & 112 \\ 15 & 2 & 1 & 135 & 136 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Special Settings} The grid sizes of GrEA varying in $\{6,7,8,9,10,11,12\}$ are tested individually, and the best scores based on the corresponding performance indicators are picked for comparison. Because RM-MEDA is originally designed for MOPs, its default configuration is not suitable for solving the MaOPs in these experiments. Consequently, the parameter settings of RM-MEDA are slightly modified to maximize its performance on MaOPs. Specifically, the number of clusters in local PCA varies in $\{10, 20, 30, 50\}$, and the maximum iterations of local PCA in $\{50,100\}$ are tested individually with the maximum number of generations set to $500$, selecting the best mean value indicated by the performance metrics; the parameter settings of MBN-EDA follow the developers' suggestions in~\cite{karshenas2014multiobjective}. In addition, both the thresholds for dimension reduction and model building are set to $0.96$ according to the convention of the community. Furthermore, the neighbor size is specified as $25$ following the investigation in Subsection~\ref{sec_investigation_neighbor_size}, and the enlargement factor is set to $0.5$. In the settings of the proposed algorithm, the number of generations of PCSEA for obtaining the training data is specified as $50$, and its population size is set to $100$.
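For reference, the SBX operator adopted in the general settings above can be sketched with the textbook spread-factor formulation of Deb and Agrawal, using distribution index $\eta=20$; this single-variable version omits boundary handling and the per-variable swap used in full implementations.

```python
import random

def sbx(p1, p2, eta=20.0, rng=random):
    """Simulated binary crossover for one real-valued variable
    (Deb & Agrawal, 1995); eta is the distribution index."""
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

# The children are always symmetric about the parents' midpoint.
c1, c2 = sbx(0.2, 0.8)
print(abs((c1 + c2) / 2 - 0.5) < 1e-12)  # -> True
```

A larger $\eta$ concentrates the spread factor near $1$, so children stay closer to their parents, which is why NSGA-III's suggested value of $30$ yields a more exploitative crossover than the default $20$.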
The proposed MaOEDA-IR is based on the training data obtained by PCSEA. For a fair comparison, the termination criterion of MaOEDA-IR is therefore set to the total function evaluation number of the peer competitors minus that of PCSEA. Table~\ref{tab_function_evaluation}\footnote{These settings apply only to the experimental results in Subsection~\ref{sub_results}. For other experiments, the termination criteria for MaOEDA-IR and PCSEA are specified as $200$ generations, and the population size for PCSEA is set to $100$ for $M\leq 10$ and $200$ for $M >10$.} shows these settings for each considered number of objectives. \begin{table}[ht] \caption{The settings for the maximal function evaluation numbers.} \label{tab_function_evaluation} \begin{center} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{$M$}& total evaluation & evaluation numbers &evaluation numbers\\ & numbers & for PCSEA & for MaOEDA-IR\\ \hline 3 & 7.2E+4 & 1.5E+4 & 5.7E+4 \\ 5 & 1.3E+5 & 2.5E+4& 1.0E+5\\ 8 & 2.5E+5 & 4.0E+4 & 2.1E+5 \\ 10 & 2.2E+5 & 5.0E+4 & 1.7E+5 \\ 15 & 4.1E+5 & 7.5E+4 & 3.3E+5\\ \hline \end{tabular} \end{center} \end{table} \subsection{Performance on DTLZ and DTLZ$^{-1}$} \label{sub_results} The HV results of the proposed MaOEDA-IR against its peer competitors (NSGA-III, MOEA/D, GrEA, HypE, MBN-EDA, and RM-MEDA) on the DTLZ1-DTLZ4 and DTLZ1$^-$-DTLZ4$^-$ test problems with $3$-, $5$-, $8$-, $10$-, and $15$-objective are presented in Table~\ref{hv_results_on_dtlz1_7}. Furthermore, each compared algorithm is independently run $30$ times, and the best median HV results are highlighted in bold face.
Moreover, the Mann-Whitney-Wilcoxon rank-sum test~\cite{steel1997principles} with a $5\%$ significance level is conducted on the HV results owing to the stochastic nature of the peer algorithms, and the symbols ``+,'' ``=,'' and ``-'' denote whether the HV results of the proposed MaOEDA-IR are statistically better than, equal to, or worse than those of the corresponding peer competitors, respectively. In addition, the last row in Table~\ref{hv_results_on_dtlz1_7} summarizes how many times the proposed MaOEDA-IR is better than, equal to, or worse than each peer competitor. The results in Table~\ref{hv_results_on_dtlz1_7} indicate that the proposed MaOEDA-IR obtains the best performance on the DTLZ4, DTLZ1$^-$, and DTLZ3$^-$ test problems with all considered objective numbers, except on $5$-objective DTLZ1$^-$ and $8$-objective DTLZ3$^-$, where it is worse than GrEA. Moreover, MaOEDA-IR is superior to the others on DTLZ3 with $8$ and $10$ objectives, while it is inferior to GrEA with $3$ objectives, RM-MEDA with $5$ objectives, and NSGA-III with $15$ objectives. Although the performance of MaOEDA-IR on DTLZ2 and DTLZ4$^-$ is worse than that of NSGA-III and GrEA on the $3$- and $5$-objective instances, respectively, the proposed MaOEDA-IR performs better than the others on the $10$-objective instances. In addition, the proposed MaOEDA-IR outperforms the peer competitors on the DTLZ1 and DTLZ2$^-$ test problems with $8$ and $15$ objectives. In summary, the HV results of the proposed MaOEDA-IR against the selected algorithms over eight test problems with $3$, $5$, $8$, $10$, and $15$ objectives indicate that MaOEDA-IR has competitive performance, winning $181$ of the $240$ comparisons and performing equally well in $10$ of them. \begin{table*}[ht] \caption{HV results of MaOEDA-IR against NSGA-III, MOEA/D, GrEA, HypE, MBN-EDA, and RM-MEDA over the DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ1$^-$, DTLZ2$^-$, DTLZ3$^-$, and DTLZ4$^-$ test problems with $3$-, $5$-, $8$-, $10$-, and $15$-objective.
Each compared algorithm is independently performed $30$ runs, and the best median HV results are highlighted in bold face. The symbols ``+,'' ``=,'' and ``-'' denote whether the HV results of the proposed MaOEDA-IR are statistically better than, equal to, or worse than that of the corresponding peer competitors with a significant level $5\%$, respectively.} \label{hv_results_on_dtlz1_7} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Problem&$M$&MaOEDA-IR&NSGA-III&MOEA/D&GrEA&HypE&MBN-EDA&RM-MEDA\\ \hline \multirow{5}{*}{DTLZ1} &3&0.838(2.1E-2)&0.915(5.6E-2)(-)&0.816(1.6E-3)(+)&0.925(8.2E-2)(-)&0.906(6.3E-2)(-)&\textbf{0.999(9.8E-4)(-)}&0.835(2.5E-2)(=)\\ \cline{2-9} &5&0.959(3.3E-3)&0.986(8.9E-3)(-)&0.936(2.6E-3)(+)&\textbf{0.989(8.0E-3)(-)}&0.973(3.0E-2)(-)&0.970(6.9E-3)(-)&0.980(9.5E-5)(-)\\ \cline{2-9} &8&\textbf{0.997(6.1E-4)}&0.996(3.1E-3)(=)&0.202(1.1E-1)(+)&0.988(5.0E-2)(+)&0.982(1.8E-2)(+)&0.984(3.5E-3)(+)&0.997(2.4E-3)(=)\\ \cline{2-9} &10&0.991(4.2E-4)&0.990(4.0E-3)(+)&0.047(6.7E-2)(+)&0.971(2.4E-2)(+)&0.937(5.1E-2)(+)&0.984(4.1E-3)(+)&\textbf{0.992(5.2E-3)(-)}\\ \cline{2-9} &15&\textbf{0.997(2.2E-4)}&0.994(6.1E-3)(+)&0.153(1.5E-1)(+)&0.968(3.0E-2)(+)&0.930(4.3E-2)(+)&0.992(2.3E-3)(+)&0.993(4.2E-3)(+)\\ \hline \multirow{5}{*}{DTLZ2} &3&0.533(1.2E-3)&\textbf{0.645(4.7E-2)(-)}&0.540(2.1E-3)(-)&0.543(1.9E-3)(-)&0.361(2.9E-2)(+)&0.563(4.5E-2)(-)&0.566(1.6E-3)(-)\\ \cline{2-9} &5&0.780(5.5E-3)&\textbf{0.956(1.1E-2)(-)}&0.710(1.8E-3)(+)&0.795(1.2E-3)(-)&0.494(6.4E-2)(+)&0.739(4.0E-2)(+)&0.795(1.0E-2)(-)\\ \cline{2-9} &8&\textbf{0.925(6.7E-4)}&0.924(4.3E-3)(+)&0.925(8.7E-3)(=)&0.904(2.8E-3)(+)&0.543(8.6E-2)(+)&0.842(3.9E-2)(+)&0.020(1.1E-2)(+)\\ \cline{2-9} &10&\textbf{0.932(9.3E-3)}&0.931(1.0E-2)(+)&0.002(4.1E-3)(+)&0.922(3.7E-3)(+)&0.443(7.2E-2)(+)&0.879(3.1E-2)(+)&0.880(1.8E-2)(+)\\ \cline{2-9} &15&0.979(4.2E-3)&\textbf{0.982(4.4E-4)(-)}&0.066(1.1E-1)(+)&0.888(1.8E-2)(+)&0.404(7.2E-2)(+)&0.922(1.9E-2)(+)&0.876(1.6E-2)(+)\\ \hline 
\multirow{5}{*}{DTLZ3} &3&0.513(2.7E-3)&0.986(3.0E-2)(-)&0.574(7.3E-2)(-)&\textbf{0.979(2.2E-4)(-)}&0.816(1.7E-1)(-)&0.680(4.6E-2)(-)&0.977(7.1E-3)(-)\\ \cline{2-9} &5&0.989(3.6E-4)&0.987(1.8E-2)(+)&0.704(2.8E-2)(+)&0.794(1.7E-3)(+)&0.995(3.7E-3)(-)&0.862(3.1E-2)(+)&\textbf{0.996(1.4E-3)(-)}\\ \cline{2-9} &8&\textbf{0.997(2.6E-5)}&0.994(6.5E-3)(+)&0.313(1.9E-1)(+)&0.923(5.6E-4)(+)&0.995(6.2E-3)(+)&0.921(1.5E-2)(+)&0.949(8.9E-3)(+)\\ \cline{2-9} &10&\textbf{0.989(2.0E-5)}&0.984(1.4E-2)(+)&0.096(1.4E-1)(+)&0.943(8.2E-4)(+)&0.988(1.0E-2)(+)&0.929(1.4E-2)(+)&0.898(1.9E-2)(+)\\ \cline{2-9} &15&0.975(4.9E-4)&\textbf{0.997(3.0E-3)(-)}&0.079(9.3E-2)(+)&0.902(2.2E-2)(+)&0.985(1.2E-2)(-)&0.963(4.1E-3)(+)&0.887(1.6E-2)(+)\\ \hline \multirow{5}{*}{DTLZ4} &3&\textbf{0.733(2.9E-4)}&0.521(9.2E-2)(+)&0.445(1.0E-1)(+)&0.476(1.7E-1)(+)&0.471(1.0E-1)(+)&0.565(2.1E-3)(+)&0.538(3.2E-3)(+)\\ \cline{2-9} &5&\textbf{0.920(6.9E-3)}&0.798(9.0E-3)(+)&0.614(5.9E-2)(+)&0.785(3.0E-2)(+)&0.585(6.6E-2)(+)&0.795(1.1E-3)(+)&0.726(2.8E-2)(+)\\ \cline{2-9} &8&\textbf{0.938(5.2E-3)}&0.923(2.3E-3)(+)&0.065(5.7E-2)(+)&0.916(2.1E-3)(+)&0.521(9.2E-2)(+)&0.928(9.5E-4)(+)&0.859(1.4E-2)(+)\\ \cline{2-9} &10&\textbf{0.953(6.2E-5)}&0.943(2.3E-3)(+)&0.011(2.1E-2)(+)&0.936(1.3E-3)(+)&0.449(9.4E-2)(+)&0.949(9.7E-4)(+)&0.897(2.1E-2)(+)\\ \cline{2-9} &15&\textbf{0.997(3.2E-5)}&0.991(5.5E-4)(+)&0.009(1.9E-2)(+)&0.936(9.6E-3)(+)&0.546(9.1E-2)(+)&0.991(2.3E-4)(+)&0.985(9.4E-3)(+)\\ \hline \multirow{5}{*}{DTLZ1$^-$} &3&\textbf{0.289(3.1E-3)}&0.217(3.2E-3)(+)&0.208(3.2E-3)(+)&0.227(1.3E-3)(+)&0.121(1.0E-2)(+)&0.118(1.6E-2)(+)&0.255(1.6E-2)(+)\\ \cline{2-9} &5&0.010(2.7E-3)&0.019(3.8E-3)(-)&0.007(4.1E-4)(+)&\textbf{0.199(9.0E-3)(-)}&0.001(1.0E-4)(+)&0.019(3.7E-3)(-)&0.011(1.0E-3)(=)\\ \cline{2-9} &8&\textbf{0.112(2.9E-2)}&0.001(5.2E-4)(+)&0.000(1.2E-5)(+)&0.000(1.4E-5)(+)&0.000(3.1E-6)(+)&0.001(0.0E+0)(+)&0.000(2.6E-5)(+)\\ \cline{2-9} 
&10&\textbf{0.099(3.6E-4)}&0.000(1.7E-4)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(6.6E-6)(+)\\ \cline{2-9} &15&\textbf{0.102(6.6E-3)}&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)\\ \hline \multirow{5}{*}{DTLZ2$^-$} &3&0.533(7.2E-2)&0.541(1.0E-2)(-)&0.527(1.8E-3)(+)&\textbf{0.587(2.3E-2)(-)}&0.465(2.5E-2)(+)&0.340(2.3E-2)(+)&0.531(6.1E-3)(+)\\ \cline{2-9} &5&0.067(3.3E-5)&0.140(2.0E-2)(-)&0.078(2.0E-3)(-)&\textbf{0.258(3.1E-2)(-)}&0.008(1.3E-3)(+)&0.144(2.2E-2)(-)&0.067(9.7E-3)(=)\\ \cline{2-9} &8&\textbf{0.102(5.2E-3)}&0.012(4.5E-3)(+)&0.000(4.5E-5)(+)&0.001(8.4E-5)(+)&0.000(1.6E-5)(+)&0.017(3.4E-3)(+)&0.001(2.1E-4)(+)\\ \cline{2-9} &10&\textbf{0.033(3.6E-2)}&0.004(2.4E-3)(+)&0.000(0.0E+0)(+)&0.000(1.5E-5)(+)&0.000(0.0E+0)(+)&0.002(5.7E-5)(+)&0.000(2.4E-5)(+)\\ \cline{2-9} &15&\textbf{0.003(3.2E-3)}&0.000(4.2E-5)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)\\ \hline \multirow{5}{*}{DTLZ3$^-$} &3&\textbf{0.553(1.9E-2)}&0.539(1.1E-2)(+)&0.535(1.7E-2)(+)&0.540(1.7E-3)(+)&0.483(1.5E-2)(+)&0.242(2.9E-2)(+)&0.547(2.4E-2)(+)\\ \cline{2-9} &5&\textbf{0.230(3.3E-2)}&0.131(1.9E-2)(+)&0.119(1.1E-2)(+)&0.071(9.0E-4)(+)&0.074(3.6E-3)(+)&0.105(1.2E-2)(+)&0.087(8.6E-3)(+)\\ \cline{2-9} &8&0.001(3.2E-3)&0.012(3.0E-3)(-)&0.001(1.8E-4)(=)&\textbf{0.102(1.2E-2)(-)}&0.001(2.3E-4)(=)&0.014(4.0E-3)(-)&0.001(2.2E-4)(=)\\ \cline{2-9} &10&\textbf{0.030(3.0E-3)}&0.003(2.0E-3)(+)&0.000(6.9E-6)(+)&0.000(1.4E-5)(+)&0.000(2.4E-5)(+)&0.003(1.1E-4)(+)&0.000(2.8E-5)(+)\\ \cline{2-9} &15&\textbf{0.003(2.1E-5)}&0.000(1.6E-4)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)\\ \hline \multirow{5}{*}{DTLZ4$^-$} &3&0.531(2.8E-3)&0.533(6.4E-3)(-)&0.528(2.0E-3)(+)&\textbf{0.621(3.2E-2)(-)}&0.492(1.4E-2)(+)&0.170(1.2E-2)(+)&0.511(1.2E-2)(+)\\ \cline{2-9} 
&5&0.060(3.9E-3)&0.076(1.5E-2)(-)&0.071(2.1E-3)(-)&\textbf{0.329(3.4E-2)(-)}&0.007(1.4E-3)(+)&0.005(1.1E-3)(+)&0.014(2.7E-3)(+)\\ \cline{2-9} &8&0.001(6.3E-3)&0.002(1.0E-3)(-)&0.000(2.4E-5)(+)&\textbf{0.124(1.1E-2)(-)}&0.002(7.9E-6)(-)&0.000(0.0E+0)(+)&0.001(1.5E-5)(=)\\ \cline{2-9} &10&\textbf{0.033(3.2E-3)}&0.000(2.9E-4)(+)&0.000(0.0E+0)(+)&0.000(3.1E-6)(+)&0.000(2.2E-6)(+)&0.000(0.0E+0)(+)&0.001(2.2E-6)(+)\\ \cline{2-9} &15&\textbf{0.005(6.2E-2)}&0.003(3.8E-5)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)&0.000(0.0E+0)(+)\\ \hline \multicolumn{2}{c|}{+/=/-}&&25/1/14 & 34/2/4& 28/0/12& 33/1/6& 33/0/7& 28/6/6\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Investigation on Neighbor Size} \label{sec_investigation_neighbor_size} \begin{figure} \centering \includegraphics[width=\columnwidth]{t_size_results}\\ \caption{IGD values of the results generated by $3$-, $5$-, $8$-, $10$-, and $15$-objective DTLZ1 test problems with different neighbor sizes varying in $\{5,10,15,20,25,30,35,40,45,50\}$.}\label{fig_varying_t_size} \end{figure} To investigate how the neighbor size $T$ affects the performance of the proposed MaOEDA-IR, a series of experiments is performed by varying $T$ in $[5,50]$ with an interval of $5$. Specifically, the IGD results on the DTLZ1 test problems with the considered objective numbers are plotted in Fig.~\ref{fig_varying_t_size}, which clearly shows that the IGD values change appreciably for smaller $T$ and gradually level off as $T$ increases. This is interpreted as follows: when $T$ is small, the neighbor solutions used to build the model for one particular reference vector come from other reference vectors, which makes the built model inaccurate, so the new solutions generated from it lead to deteriorated IGD values.
Specifically, most IGD values remain level when $T>25$ in Fig.~\ref{fig_varying_t_size} (this is also applicable to the other tested benchmark problems based on our investigations), and a larger $T$ will increase the computational cost by introducing more initialized solutions. As a consequence, $T$ is specified as $25$ in our experiments. \subsection{Investigation on Diversity Repairing Mechanism} \label{sec_investigation_diversity_raparation} \begin{figure}[htp] \begin{center} \subfloat[]{\includegraphics[width=0.8\columnwidth]{DTLZ2-10-IGD-diversity}\label{fig_diversity_10}} \hfil \subfloat[]{\includegraphics[width=0.8\columnwidth]{DTLZ2-15-IGD-diversity}\label{fig_diversity_15}} \caption{IGD values of the results generated by $10$- (Fig.~\ref{fig_diversity_10}) and $15$-objective (Fig.~\ref{fig_diversity_15}) DTLZ2 with and without diversity repairing over $200$ generations.} \label{fig_diversity_comparison} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \subfloat[]{\includegraphics[width=0.8\columnwidth]{DTLZ2-10-HV-diversity}\label{fig_diversity_10_hv}} \hfil \subfloat[]{\includegraphics[width=0.8\columnwidth]{DTLZ2-15-HV-diversity}\label{fig_diversity_15_hv}} \caption{HV values of the results generated by $10$- (Fig.~\ref{fig_diversity_10_hv}) and $15$-objective (Fig.~\ref{fig_diversity_15_hv}) DTLZ2 with and without diversity repairing over $200$ generations.} \label{fig_diversity_comparison_hv} \end{center} \end{figure} As discussed in Subsection~\ref{sec_discussion}, both the diversity and the convergence are improved by the diversity repairing mechanism. To this end, experimental comparisons on the test problems with and without the diversity repairing mechanism are performed.
Specifically, the evolution trajectories of the IGD and HV values for the $10$- and $15$-objective DTLZ2 test problems over $200$ generations are illustrated in Figs.~\ref{fig_diversity_10},~\ref{fig_diversity_15},~\ref{fig_diversity_10_hv}, and~\ref{fig_diversity_15_hv}, respectively. In these figures, the red lines denote the results without the diversity repairing mechanism, while the blue lines refer to those with the diversity repairing. To be specific, the IGD values of $10$-objective DTLZ2 both with and without the diversity repairing mechanism decrease sharply during the first $20$ generations and then gradually stabilize, while those of $15$-objective DTLZ2 decline smoothly throughout the entire evolution. The HV values of $10$-objective DTLZ2 with and without the diversity repairing mechanism both grow substantially before the $40$-th generation and then rise moderately as the evolution continues, while the values produced by the proposed algorithm without the diversity repairing mechanism stay lower than those with it during the entire evolution. In summary, both the IGD results in Figs.~\ref{fig_diversity_10} and~\ref{fig_diversity_15} and the HV results in Figs.~\ref{fig_diversity_10_hv} and~\ref{fig_diversity_15_hv} demonstrate the promising performance of the proposed MaOEDA-IR when the diversity repairing mechanism is employed. \begin{table} \caption{HV results of MaOEDA-IR with and without the diversity repairing mechanism over the DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ1$^-$, DTLZ2$^-$, DTLZ3$^-$, and DTLZ4$^-$ test problems with $3$-, $5$-, $8$-, $10$-, and $15$-objective. Each compared algorithm is independently run $30$ times, and the best median HV results are highlighted in bold face.
The symbols ``+,'' ``=,'' and ``-'' denote whether the HV results of the MaOEDA-IR with DR mechanism are statistically better than, equal to, or worse than that of the corresponding MaOEDA-IR without the diversity repairing mechanism with a significant level $5\%$, respectively.} \label{hv_results_on_dtlz1_7_diversity} \begin{center} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Problem}&\multirow{2}{*}{$M$}&\multicolumn{2}{c}{Diversity Repairing}\\ \cline{3-4} &&With&Without\\ \hline \multirow{5}{*}{DTLZ1} &3&\textbf{0.847(1.2E-3)}&0.843(1.2E-3)(=)\\ \cline{2-4} &5&\textbf{0.975(4.5E-4)}&0.933(6.6E-2)(+)\\ \cline{2-4} &8&\textbf{0.997(1.7E-4)}&0.844(5.6E-2)(+)\\ \cline{2-4} &10&\textbf{0.997(1.5E-4)}&0.590(7.5E-2)(+)\\ \cline{2-4} &15&\textbf{0.999(1.2E-4)}&0.727(9.7E-3)(+)\\ \hline \multirow{5}{*}{DTLZ2} &3&\textbf{0.567(2.8E-3)}&0.503(4.3E-2)(+)\\ \cline{2-4} &5&\textbf{0.796(4.0E-3)}&0.709(5.0E-2)(+)\\ \cline{2-4} &8&\textbf{0.923(9.6E-4)}&0.811(9.2E-3)(+)\\ \cline{2-4} &10&\textbf{0.943(7.2E-4)}&0.924(1.5E-2)(+)\\ \cline{2-4} &15&\textbf{0.987(1.0E-3)}&0.933(1.2E-3)(+)\\ \hline \multirow{5}{*}{DTLZ3} &3&\textbf{0.566(1.3E-3)}&0.558(1.0E-3)(=)\\ \cline{2-4} &5&\textbf{0.999(4.1E-6)}&0.777(3.1E-2)(+)\\ \cline{2-4} &8&\textbf{0.997(5.5E-6)}&0.785(2.0E-2)(+)\\ \cline{2-4} &10&\textbf{0.998(2.8E-3)}&0.554(6.3E-2)(+)\\ \cline{2-4} &15&\textbf{0.982(6.1E-4)}&0.652(1.7E-4)(+)\\ \hline \multirow{5}{*}{DTLZ4} &3&\textbf{0.750(8.3E-2)}&0.745(3.9E-3)(=)\\ \cline{2-4} &5&\textbf{0.937(1.7E-2)}&0.925(3.4E-2)(+)\\ \cline{2-4} &8&\textbf{0.991(2.9E-3)}&0.881(5.5E-2)(+)\\ \cline{2-4} &10&\textbf{0.992(3.8E-3)}&0.813(4.9E-2)(+)\\ \cline{2-4} &15&\textbf{0.999(6.1E-4)}&0.900(3.7E-5)(+)\\ \hline \multirow{5}{*}{DTLZ1$^-$} &3&\textbf{0.338(3.9E-3)}&0.285(3.1E-2)(+)\\ \cline{2-4} &5&\textbf{0.011(3.3E-4)}&0.002(5.3E-2)(+)\\ \cline{2-4} &8&\textbf{0.142(5.1E-3)}&0.129(3.0E-3)(+)\\ \cline{2-4} &10&\textbf{0.124(7.0E-3)}&0.042(5.1E-2)(+)\\ \cline{2-4} 
&15&\textbf{0.102(6.7E-3)}&0.010(2.8E-3)(+)\\ \hline \multirow{5}{*}{DTLZ2$^-$} &3&\textbf{0.541(1.4E-3)}&0.443(9.0E-2)(+)\\ \cline{2-4} &5&\textbf{0.071(9.6E-4)}&0.063(3.0E-2)(+)\\ \cline{2-4} &8&\textbf{0.103(1.5E-2)}&0.014(9.6E-2)(+)\\ \cline{2-4} &10&\textbf{0.038(7.7E-3)}&0.006(2.9E-2)(+)\\ \cline{2-4} &15&\textbf{0.003(8.9E-4)}&0.000(0.0E-0)(=)\\ \hline \multirow{5}{*}{DTLZ3$^-$} &3&\textbf{0.595(2.1E-2)}&0.518(9.1E-2)(+)\\ \cline{2-4} &5&\textbf{0.255(3.0E-2)}&0.240(6.1E-2)(+)\\ \cline{2-4} &8&\textbf{0.001(6.5E-5)}&0.000(0.0E-0)(=)\\ \cline{2-4} &10&\textbf{0.036(7.2E-3)}&0.013(1.1E-4)(+)\\ \cline{2-4} &15&\textbf{0.003(9.0E-4)}&0.000(0.0E-0)(=)\\ \hline \multirow{5}{*}{DTLZ4$^-$} &3&\textbf{0.540(1.2E-3)}&0.472(9.7E-4)(+)\\ \cline{2-4} &5&\textbf{0.069(7.7E-4)}&0.031(9.7E-2)(+)\\ \cline{2-4} &8&\textbf{0.001(5.9E-5)}&0.000(0.0E-0)(=)\\ \cline{2-4} &10&\textbf{0.043(8.0E-3)}&0.010(5.5E-2)(+)\\ \cline{2-4} &15&\textbf{0.005(1.5E-3)}&0.000(0.0E-0)(=)\\ \hline \multicolumn{2}{c|}{+/=/-}&&33/7/0 \\ \hline \end{tabular} \end{center} \end{table} Furthermore, Table~\ref{hv_results_on_dtlz1_7_diversity} shows the extensive experimental comparisons between the proposed algorithm with and without the diversity repairing mechanism. Specifically, each configuration is independently run $30$ times over each test problem, and the results are measured by HV. Moreover, the best median HV results are highlighted in bold face, and the symbols ``+,'' ``=,'' and ``-'' denote whether the HV results of the proposed algorithm with the diversity repairing mechanism are statistically better than, equal to, or worse than those of the proposed algorithm without it at a significance level of $5\%$, respectively. In addition, the last row in Table~\ref{hv_results_on_dtlz1_7_diversity} summarizes how many times the proposed algorithm with the diversity repairing mechanism is better than, equal to, or worse than itself without this mechanism.
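The rank-sum comparisons reported in these tables can be illustrated by the U statistic itself; the sketch below uses the pairwise-counting definition (equivalent to summing ranks) and omits the tie-corrected normal approximation that a full Mann-Whitney-Wilcoxon test would apply to obtain a p-value.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistics for two independent samples.
    U1 counts, over all pairs, how often a value from x beats one
    from y (ties count one half); U2 is the complement."""
    u1 = 0.0
    for xv in x:
        for yv in y:
            if xv > yv:
                u1 += 1.0
            elif xv == yv:
                u1 += 0.5
    return u1, len(x) * len(y) - u1

# Fully separated samples give the extreme statistics (0, n1 * n2).
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # -> (0.0, 9.0)
```

In practice, a library routine such as SciPy's `mannwhitneyu` would be used on the $30$ HV values per configuration; this sketch only shows what the statistic measures.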
It is clearly shown in Table~\ref{hv_results_on_dtlz1_7_diversity} that, when the diversity repairing mechanism is employed, the proposed algorithm obtains all the best median HV results and most of the best statistical results over DTLZ1-DTLZ4 and DTLZ1$^-$-DTLZ4$^-$ with $3$, $5$, $8$, $10$, and $15$ objectives, while the proposed algorithm without the diversity repairing mechanism cannot even obtain effective solutions over DTLZ2$^-$ with $15$ objectives, DTLZ3$^-$ with $8$ and $15$ objectives, and DTLZ4$^-$ with $8$ objectives (i.e., the HV results of these generated solutions are approximately zero because they are dominated by the reference point employed for the calculation of HV). In addition, it can also be observed that the diversity repairing mechanism may not significantly improve the performance in solving MOPs where the phenomenon of diversity loss is not severe. For example, the proposed algorithm obtains the same statistical results over the $3$-objective DTLZ1, DTLZ3, and DTLZ4 test problems regardless of whether the diversity repairing mechanism is employed. In summary, the diversity repairing mechanism can significantly improve the performance of the proposed algorithm, especially in solving MaOPs. \subsection{Investigation on Dimension Reduction} \label{sec_investigation_dimension_reduction} It is expected that dimension reduction in the decision variable space is capable of reducing the computational complexity of the proposed MaOEDA-IR. To verify this, two types of experiments are performed in order to draw meaningful conclusions. The first is to measure the performance of the solutions generated by the proposed algorithm with and without the dimension reduction within the same number of generations. The other is to compare the generation numbers at which the same performance is achieved by the proposed algorithm with and without the dimension reduction.
In the following, the results of the first experiment are shown, while the second set of comparisons is presented in the Supplementary Materials. Specifically, the generation number of the first experiment is adopted from the parameter settings in Subsection~\ref{sec_parameter_settings} (i.e., $200$). The experimental comparisons between the proposed algorithm with and without the dimension reduction are independently run $30$ times over DTLZ1-DTLZ4 and DTLZ1$^-$-DTLZ4$^-$ with $3$, $5$, $8$, $10$, and $15$ objectives. The results are measured by HV and shown in Table~\ref{hv_results_on_dtlz1_7_dimension}, where the best median HV results are highlighted in bold face, and the symbols ``+,'' ``=,'' and ``-'' denote whether the HV results of the proposed algorithm with the dimension reduction are statistically better than, equal to, or worse than those of the proposed algorithm without it at a significance level of $5\%$, respectively. Furthermore, the last row in Table~\ref{hv_results_on_dtlz1_7_dimension} summarizes how many times the proposed algorithm with the dimension reduction is better than, equal to, or worse than itself without this technique. It is obvious from Table~\ref{hv_results_on_dtlz1_7_dimension} that the proposed algorithm obtains a significant performance improvement when the dimension reduction is employed. Furthermore, without the dimension reduction, the proposed algorithm cannot perform well over several test problems, such as DTLZ1$^-$ with $5$ objectives, DTLZ2$^-$ with $10$ and $15$ objectives, and DTLZ3$^-$ as well as DTLZ4$^-$ with $8$, $10$, and $15$ objectives (their HV results are zero). In summary, the proposed algorithm shows its superiority when the dimension reduction is utilized.
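The variance-threshold rule behind the dimension reduction can be sketched as follows: keep the smallest number of leading principal components whose cumulative explained-variance ratio reaches the threshold ($0.96$ in the settings above). The eigenvalue spectrum below is illustrative, and the eigendecomposition of the covariance matrix itself is omitted.

```python
def components_to_keep(eigenvalues, threshold=0.96):
    """Smallest number of leading principal components whose
    cumulative explained-variance ratio reaches the threshold."""
    total = sum(eigenvalues)
    cumulative = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cumulative += ev
        if cumulative / total >= threshold:
            return k
    return len(eigenvalues)

# Example spectrum: the first two components explain 97.5% of the
# variance, so only two of the four dimensions are retained.
print(components_to_keep([6.0, 1.8, 0.15, 0.05]))  # -> 2
```

Building and sampling the probabilistic model in this reduced subspace is what lowers the cost of exploitation and exploration in the proposed algorithm.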
\begin{table}[htp] \caption{HV results of MaOEDA-IR with and without the dimension reduction over the DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ1$^-$, DTLZ2$^-$, DTLZ3$^-$, and DTLZ4$^-$ test problems with $3$-, $5$-, $8$-, $10$-, and $15$-objective. Each compared algorithm is independently performed $30$ runs, and the best median HV results are highlighted in bold face. The symbols ``+,'' ``=,'' and ``-'' denote whether the HV results of the MaOEDA-IR with the dimension reduction are statistically better than, equal to, or worse than that of the corresponding MaOEDA-IR without the dimension reduction with a significant level $5\%$, respectively.} \label{hv_results_on_dtlz1_7_dimension} \begin{center} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Problem}&\multirow{2}{*}{$M$}&\multicolumn{2}{c}{ Dimension Reduction}\\ \cline{3-4} &&With&Without\\ \hline \multirow{5}{*}{DTLZ1} &3&\textbf{0.847(1.2E-3)}&0.776(3.2E-3)(+)\\ \cline{2-4} &5&\textbf{0.975(4.5E-4)}&0.969(3.4E-4)(+)\\ \cline{2-4} &8&\textbf{0.997(1.7E-4)}&0.902(4.4E-3)(+)\\ \cline{2-4} &10&\textbf{0.997(1.5E-4)}&0.948(6.6E-2)(+)\\ \cline{2-4} &15&\textbf{0.999(1.2E-4)}&0.954(1.6E-2)(+)\\ \hline \multirow{5}{*}{DTLZ2} &3&\textbf{0.567(2.8E-3)}&0.555(2.6E-2)(+)\\ \cline{2-4} &5&\textbf{0.796(4.0E-3)}&0.746(5.1E-2)(+)\\ \cline{2-4} &8&\textbf{0.923(9.6E-4)}&0.907(7.0E-3)(+)\\ \cline{2-4} &10&\textbf{0.943(7.2E-4)}&0.909(8.9E-2)(+)\\ \cline{2-4} &15&\textbf{0.987(1.0E-3)}&0.828(9.6E-3)(+)\\ \hline \multirow{5}{*}{DTLZ3} &3&\textbf{0.566(1.3E-3)}&0.511(2.4E-5)(+)\\ \cline{2-4} &5&\textbf{0.999(4.1E-6)}&0.985(9.3E-3)(+)\\ \cline{2-4} &8&\textbf{0.997(5.5E-6)}&0.912(3.5E-4)(+)\\ \cline{2-4} &10&\textbf{0.998(2.8E-3)}&0.328(2.0E-2)(+)\\ \cline{2-4} &15&\textbf{0.982(6.1E-4)}&0.928(2.5E-4)(+)\\ \hline \multirow{5}{*}{DTLZ4} &3&\textbf{0.750(8.3E-2)}&0.688(2.9E-4)(+)\\ \cline{2-4} &5&\textbf{0.937(1.7E-2)}&0.890(7.6E-2)(+)\\ \cline{2-4} &8&\textbf{0.991(2.9E-3)}&0.956(7.5E-3)(+)\\ \cline{2-4} 
&10&\textbf{0.992(3.8E-3)}&0.109(3.8E-4)(+)\\ \cline{2-4} &15&\textbf{0.999(6.1E-4)}&0.940(5.7E-4)(+)\\ \hline \multirow{5}{*}{DTLZ1$^-$} &3&\textbf{0.338(3.9E-3)}&0.330(4.7E-2)(+)\\ \cline{2-4} &5&\textbf{0.011(3.3E-4)}&0.000(0.0E-0)(=)\\ \cline{2-4} &8&\textbf{0.142(5.1E-3)}&0.089(3.4E-2)(+)\\ \cline{2-4} &10&\textbf{0.124(7.0E-3)}&0.046(1.6E-4)(+)\\ \cline{2-4} &15&\textbf{0.102(6.7E-3)}&0.009(7.9E-3)(+)\\ \hline \multirow{5}{*}{DTLZ2$^-$} &3&\textbf{0.541(1.4E-3)}&0.510(7.5E-4)(+)\\ \cline{2-4} &5&\textbf{0.071(9.6E-4)}&0.018(4.5E-2)(+)\\ \cline{2-4} &8&\textbf{0.103(1.5E-2)}&0.086(8.4E-3)(+)\\ \cline{2-4} &10&\textbf{0.038(7.7E-3)}&0.000(0.0E-0)(=)\\ \cline{2-4} &15&\textbf{0.003(8.9E-4)}&0.000(0.0E-0)(=)\\ \hline \multirow{5}{*}{DTLZ3$^-$} &3&\textbf{0.595(2.1E-2)}&0.580(9.6E-3)(+)\\ \cline{2-4} &5&\textbf{0.255(3.0E-2)}&0.172(4.6E-5)(+)\\ \cline{2-4} &8&\textbf{0.001(6.5E-5)}&0.000(0.0E-0)(=)\\ \cline{2-4} &10&\textbf{0.036(7.2E-3)}&0.000(0.0E-0)(=)\\ \cline{2-4} &15&\textbf{0.003(9.0E-4)}&0.000(0.0E-0)(=)\\ \hline \multirow{5}{*}{DTLZ4$^-$} &3&\textbf{0.540(1.2E-3)}&0.532(2.6E-3)(+)\\ \cline{2-4} &5&\textbf{0.069(7.7E-4)}&0.029(1.5E-3)(+)\\ \cline{2-4} &8&\textbf{0.001(5.9E-5)}&0.000(0.0E-0)(=)\\ \cline{2-4} &10&\textbf{0.043(8.0E-3)}&0.000(0.0E-0)(=)\\ \cline{2-4} &15&\textbf{0.005(1.5E-3)}&0.000(0.0E-0)(=)\\ \hline \multicolumn{2}{c|}{+/=/-}&&31/9/0 \\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} \label{section_5} In solving many-objective optimization problems, the performance of most multi-objective evolutionary algorithms often deteriorates appreciably because of the loss of selection pressure during the evolution process. This is largely because the selected parent solutions, with the conventional genetic operators, fail to generate promising individuals that direct the search towards the Pareto-optimal front.
An improved regularity-based estimation of distribution algorithm, which generates new solutions with a probabilistic model built from the solutions the algorithm has visited, is proposed in this paper. Specifically, the proposed algorithm innovates in the following aspects: 1) devising a diversity repairing mechanism to reduce the risk of dominance resistant solutions and 2) generating promising solutions with the statistics of regularity learnt from the neighboring solutions with respect to the representatives, which are uniformly distributed in the objective space. These two steps work in conjunction with each other to direct the search towards the Pareto-optimal front. In addition, a dimension reduction technique is utilized to reduce the cost of exploitation and exploration. Furthermore, in addition to the investigations of the diversity repairing and dimension reduction, the effect of the neighborhood size on the performance of the proposed algorithm is also investigated to provide a guideline for decision-makers. Extensive experiments are performed, and the results measured by the chosen performance metrics indicate that the proposed algorithm is superior in tackling many-objective optimization problems. In our future research, we will extend the proposed algorithm to deal with highly constrained many-objective optimization problems, in which complicated regularity of Pareto fronts often exists. \IEEEpeerreviewmaketitle \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{INTRODUCTION} The ARCADE 2 (Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission) observations of the radio sky show an excess in addition to the Cosmic Microwave Background (CMB) temperature of $T_\mathrm{CMB}=2.725\pm0.001\,\mathrm{K}$ (Fixsen et al. 2011). The existence of this excess radio emission (Cosmic Radio Background - CRB) is supported by observations at lower frequencies (Haslam et al. 1981; Reich \& Reich 1986; Roger et al. 1999; Maeda et al. 1999). The observed excess extends from $22\,\mathrm{MHz}$ to $10\,\mathrm{GHz}$, and is well fitted by a power law $T_\mathrm{CRB} = T_\mathrm{R}\left(\frac{\nu}{310\,\mathrm{MHz}}\right)^\beta$, where $T_\mathrm{CRB}$ is the brightness temperature of the CRB, $T_\mathrm{R}=(24.1 \pm 2.1)\,\mathrm{K}$ is the normalization temperature of the CRB, $\nu$ is the frequency, and $\beta = -2.599 \pm 0.036$ is the power-law index (Fixsen et al. 2011). The measured CRB is several times higher than the contribution from currently observed radio sources like galaxy clusters and the intergalactic medium, radio supernovae, radio-quiet quasars, and star-forming galaxies (Singal et al. 2010; Vernstrom et al. 2011). This leaves room for a possible dark matter contribution (Fornengo et al. 2011; Hooper et al. 2012; Cline \& Vincent 2013) or some other unresolved radio sources. Here, we consider cosmic-ray acceleration in large-scale accretion shocks (Miniati et al. 2000; Furlanetto \& Loeb 2004), present around galaxy clusters (Pinzke \& Pfrommer 2010). The recent detection of an X-ray and gamma-ray signal around the Coma cluster could be potential evidence for the presence of accretion shocks (Keshet et al. 2017; Keshet \& Reiss 2017). Constraints on their contribution to the gamma-ray and neutrino backgrounds are still weak, so they cannot yet be ruled out (Dobard\v zi\'c \& Prodanovi\'c 2014; 2015).
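As a quick numerical check (not part of the original analysis), the power-law fit of Fixsen et al. (2011) quoted above can be evaluated directly; the constants below are the best-fit values from the text.

```python
# Hedged sketch: evaluate the CRB power-law fit
# T_CRB = T_R * (nu / 310 MHz)^beta, with the best-fit values quoted above.
T_R = 24.1      # K, normalization temperature at 310 MHz
BETA = -2.599   # power-law index

def t_crb(nu_mhz):
    """CRB brightness temperature in K at frequency nu_mhz (MHz)."""
    return T_R * (nu_mhz / 310.0) ** BETA

# The fit spans roughly 22 MHz to 10 GHz.
for nu in (22.0, 310.0, 1420.0, 10000.0):
    print(f"{nu:8.1f} MHz : {t_crb(nu):12.4g} K")
```

The steep negative index means the excess is strongly dominated by the lowest frequencies, consistent with the discussion below.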
Synchrotron emission from electrons accelerated in large-scale accretion shocks should produce a radio signal (Ensslin et al. 1998; Kushnir et al. 2009), but also contribute to the CRB (Keshet \& Waxman 2004). \section{FORMALISM AND RESULTS} We follow models from Dobard\v zi\'c \& Prodanovi\'c (2014), who have calculated the contribution of unresolved galaxy clusters to the \emph{Fermi}-LAT isotropic gamma-ray background (Ackermann et al. 2015). The observable quantity that can be compared to the CRB is the differential radio intensity $\mathrm{d}I_\mathrm{r}/\mathrm{d}\Omega\,[\mathrm{Jy}\,\mathrm{sr}^{-1}]$ coming from all unresolved galaxy clusters: \begin{eqnarray} \frac{\mathrm{d}I_{\mathrm{r}}}{\mathrm{d}\Omega} &=& \frac{c}{4\pi H_0 J_0(z_0)} \int_0^{z_\mathrm{vir}} \mathrm{d}z\frac{\dot{\rho}_\mathrm{sf}(z)L_{\mathrm{r}}(\nu)}{\sqrt{\Omega_\Lambda +\Omega_\mathrm{m}(1+z)^3}}\, \nonumber\\ &&\times \left[ \frac{\epsilon}{\epsilon+1}+ (\epsilon+1)^{-1}\frac{\int_{z_\mathrm{vir}}^{z}\mathrm{d}z\left(\mathrm{d}t /\mathrm{d}z \right) \dot{\rho}_\mathrm{sf}(z)}{\int_{z_\mathrm{vir}}^{z_0}\mathrm{d}z\left(\mathrm{d}t/\mathrm{d}z\right)\dot{\rho}_\mathrm{sf}(z)} \right]\,, \end{eqnarray} where $H_0$ is the present value of the Hubble parameter, $c$ is the speed of light, $z$ is the redshift, $z_\mathrm{vir}$ is the virialization redshift of the source, and $\Omega_\mathrm{m}$ and $\Omega_\Lambda$ are the matter and vacuum energy density parameters. The evolution of cosmic accretion shocks is described by the cosmic accretion rate $\dot{\rho}_\mathrm{sf}(z)\,\left[M_\odot\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}\right]$ (Pavlidou \& Fields 2006). The accretion rate of a single object at redshift $z_0$ to which we normalize our models is $J_0\,[\mathrm{M_\odot\mathrm{yr}^{-1}}]$, and $\epsilon$ is the initial gas fraction of the object accreting gas. The detailed derivation of this equation and parameter values are given in Dobard\v zi\'c \& Prodanovi\'c (2014).
Finally, $L_{\mathrm{r}}(\nu)\,[\mathrm{erg}\,\mathrm{s}^{-1}\mathrm{Hz}^{-1}]$ is the radio spectrum of a typical galaxy cluster. For this we use the Coma cluster, since it is a well-studied galaxy cluster with observed diffuse radio emission (Large et al. 1959; Schlickeiser et al. 1987). We use the Coma radio spectrum fitted by Brunetti et al. (2012), which was derived using hadronic models with a power law in momentum $\propto p^{-2.6}$. A compilation of the observed Coma radio data used for the fitting can be found in Pizzo (2010). In Figure 1 we present the resulting differential radio intensity of unresolved galaxy clusters, derived using Equation (1) and integrated over the whole solid angle, $I_\mathrm{r}\,\left[\mathrm{Jy}\right]$. The dashed curve was derived using the simplest Model 1 (a model that depends only on the distribution of accreting objects by mass) for the evolution of accretion shocks (Pavlidou \& Fields 2006), while the dotted and dash-dotted curves use the more realistic Models 2 and 3 (models that depend on the distribution of accreting objects by mass and on the properties of the surrounding medium), respectively. The data points represent the CRB derived by subtracting the $T_\mathrm{CMB}$ (Fixsen et al. 2011) from the observed radio emission (Fixsen et al. 2011; Roger et al. 1999; Maeda et al. 1999; Reich \& Reich 1986; Haslam et al. 1981). The solid black line is the best-fit CRB spectrum that corresponds to $T_\mathrm{CRB} \propto \nu^{-2.599}$ from Fixsen et al. (2011). Both the $T_\mathrm{CRB}$ data points and the best-fit spectrum were converted using $I_\mathrm{CRB}\left[\mathrm{Jy}\right] = 10^{26}\times 4\pi\times\frac{2\nu^2k T_\mathrm{CRB}}{c^2}$, where $k$ is the Boltzmann constant.
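The temperature-to-intensity conversion above (a Rayleigh-Jeans form, multiplied by $4\pi$ for the full sky and by $10^{26}$ for Jy) can be sketched as follows; the physical constants are standard values, and the 310 MHz evaluation point is simply the normalization frequency of the CRB fit, used here for illustration.

```python
import math

# Hedged sketch of the conversion quoted above:
# I_CRB [Jy] = 1e26 * 4*pi * 2 nu^2 k T_CRB / c^2
# (1 Jy = 1e-26 W m^-2 Hz^-1; the 4*pi integrates over the full sky).
K_B = 1.380649e-23       # Boltzmann constant, J/K
C_LIGHT = 2.99792458e8   # speed of light, m/s

def crb_intensity_jy(nu_hz, t_crb_k):
    """Full-sky CRB intensity in Jy from a brightness temperature in K."""
    return 1e26 * 4.0 * math.pi * 2.0 * nu_hz**2 * K_B * t_crb_k / C_LIGHT**2

# e.g. at the 310 MHz normalization point with T_CRB = 24.1 K:
print(crb_intensity_jy(310e6, 24.1))
```

Note the $\nu^2$ scaling: for a fixed brightness temperature, the intensity in Jy quadruples when the frequency doubles.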
Our models predict that the contribution of unresolved galaxy clusters can be high at low frequencies, although one has to keep in mind that synchrotron self-absorption will reduce the possible contribution at even lower frequencies (not included here, since these losses are not visible in the observed Coma radio spectrum). At $5\,\mathrm{GHz}$ the contribution should be $\lesssim 2-35\%$ (the upper limit range corresponds to the use of different accretion shock models). Finally, around $10\,\mathrm{GHz}$ the possible contribution sharply drops. \begin{figure} \includegraphics[width=7cm]{Graph1.eps}\centering \caption{Radio intensity of unresolved galaxy clusters derived using Model 1 for the evolution of cosmic accretion shocks (dashed line), Model 2 (dotted line), and Model 3 (dash-dotted line). Data points correspond to the measured CRB at various frequencies, from Fixsen et al. (2011), and the solid black line corresponds to their best-fit CRB spectrum. Both the data points and the best-fit CRB spectrum were converted to $I_\mathrm{CRB}\left[\mathrm{Jy}\right]$.} \label{fig:f1} \end{figure} \section{DISCUSSION AND CONCLUSION} The model presented in this paper shows that large-scale accretion shocks can potentially be an important contributor to the CRB. We assume that the observed radio spectrum of the Coma cluster is entirely produced by accretion shock cosmic rays, which makes our predictions an upper limit. We also assume that the Coma cluster is a typical cluster, and that clusters similar to Coma produce the bulk of the radio waves coming from large-scale accretion shocks. Our upper limits show that clusters can contribute $\lesssim 2-35\%$ at $5\,\mathrm{GHz}$, but above $10\,\mathrm{GHz}$ their contribution sharply drops. The highest curve in Figure 1 overshoots the observed CRB at lower frequencies, which suggests a break in the CRB spectrum at lower frequencies that is not observed.
This indicates that Coma might not be a typical cluster, or that its entire radio emission cannot be coming from accretion shock cosmic rays. Brunetti et al. (2012) have tried to explain the Coma radio halo by synchrotron emission of secondary electrons produced via proton-proton collisions in the intra-cluster medium, or of secondary electrons reaccelerated by MHD turbulence during cluster mergers. Also, not all clusters have associated diffuse radio emission (Giovannini \& Feretti 2000; Rudnick \& Lemmerman 2009). The presence of the radio emission is often related to merging clusters and merger shocks (Fang \& Linden 2016), which are not included in our model. The contribution can also be lower if most of the radio halos have much steeper spectral indices than Coma (Liang et al. 2000). A better understanding of accretion shocks can come from linking cluster observations at different wavelengths. Of course, one has to keep in mind that the same processes inside galaxy clusters might not be responsible for the bulk of their emitted radiation at different wavelengths. After the recent possible detection of the Coma cluster in gamma rays (Xi et al. 2017; Keshet et al. 2017) and hopefully forthcoming detections of other galaxy clusters, it will be easier to distinguish between different cosmic-ray populations inside these objects, and also to better understand their possible role in the production of the measured background radiation at different wavelengths. \\ \\ \textbf{Acknowledgements} The work of A.C. and M.Z.P. is supported by the Ministry of Science of the Republic of Serbia under project number 176005, and the work of T.P. is supported in part by the Ministry of Science of the Republic of Serbia under project numbers 171002 and 176005. \references Ackermann, M. et al.: 2015, \journal{Astrophys. J.}, \vol{799}, 86. Brunetti, G. et al.: 2012, \journal{Mon. Not. R. Astron. Soc.}, \vol{426}, 956. Cline, J.~M., Vincent, A.~C.: 2013, \journal{J. Cosmol. Astropart.
Phys.}, \vol{02}, 011. Dobard\v zi\'c, A., Prodanovi\'c, T.: 2014, \journal{Astrophys. J.}, \vol{782}, 109 [Erratum: 2014, \journal{Astrophys. J.}, \vol{787}, 95]. Dobard\v zi\'c, A., Prodanovi\'c, T.: 2015, \journal{Astrophys. J.}, \vol{806}, 184. Ensslin, T.~A. et al.: 1998, \journal{Astron. Astrophys.}, \vol{332}, 395. Fang, K., Linden, T.: 2016, \journal{J. Cosmol. Astropart. Phys.}, \vol{10}, 004. Fixsen, D.~J. et al.: 2011, \journal{Astrophys. J.}, \vol{734}, 5. Fornengo, N. et al.: 2011, \journal{Phys. Rev. Lett.}, \vol{107}, 271302. Furlanetto, S.~R., Loeb, A.: 2004, \journal{Astrophys. J.}, \vol{611}, 642. Giovannini G., Feretti L.: 2000, \journal{New Astron.}, \vol{5}, 335. Haslam, C.~G.~T. et al.: 1981, \journal{Astron. Astrophys.}, \vol{100}, 209. Hooper, D. et al.: 2012, \journal{Phys. Rev. D}, \vol{86}, 103003. Keshet, U. et al: 2004, \journal{Astrophys. J.}, \vol{617}, 281. Keshet, U. et al.: 2017, \journal{Astrophys. J.}, \vol{845}, 24. Keshet, U., Reiss, I.: 2017, arXiv:1709.07442. Kushnir, D. et al.: 2009, \journal{J. Cosmol. Astropart. Phys.}, \vol{09}, 024. Large M. et al.: 1959, \journal{Nature}, \vol{183}, 1663L. Liang H. et al.: 2000, \journal{Astrophys. J.}, \vol{544}, 686. Maeda, K. et al.: 1999, \journal{Astron. Astrophys. Supp.}, \vol{140}, 145. Miniati, F. et al.: 2000, \journal{Astrophys. J.}, \vol{542}, 608. Pavlidou, V., Fields, B.~D.: 2006, \journal{Astrophys. J.}, \vol{642}, 734. Pinzke, A., Pfrommer, C.: 2010, \journal{Mon. Not. R. Astron. Soc.}, \vol{409}, 449. Pizzo, R.: 2010, \journal{Tomography of galaxy clusters through low-frequency radio polarimetry}, PhD Thesis, Groningen University. Reich, P., Reich, W.: 1986, \journal{Astron. Astrophys. Supp.}, \vol{63}, 205. Roger, R.~S. et al.: 1999, \journal{Astron. Astrophys. Supp.}, \vol{137}, 7. Rudnick L., Lemmerman J.~A.: 2009, \journal{Astrophys. J.}, \vol{697}, 1341. Schlickeiser R. et al.: 1987, \journal{Astron. Astrophys.}, \vol{182}, 21. Singal, J. et al.: 2010, \journal{Mon. 
Not. R. Astron. Soc.}, \vol{409}, 1172. Vernstrom, T. et al.: 2011, \journal{Mon. Not. R. Astron. Soc.}, \vol{415}, 3641. Xi, S.-Q. et al.: 2017, arXiv:1709.08319. \endreferences \end{document}
\section{Introduction} The distribution of the interstellar gas is a key ingredient to any self-consistent model describing propagation of cosmic rays (CRs) and generation of non-thermal interstellar emissions. Propagation of CR species in the Galaxy and their interactions with the interstellar gas and radiation field produce changes in their composition and spectra due to fragmentation, secondary particle production, and energy losses \citep[see][for a review]{StrongEtAl:2007}. Measurements of the spectra and composition of CR species are used to constrain the most important parameters of CR propagation models \citep[e.g.,][]{JohannessonEtAl:2016}. Observations of the interstellar emissions generated through the production and decay of neutral pions and inverse Compton scattering of CR electrons off the interstellar radiation field (ISRF) provide a direct probe of the spatial densities and spectra of CR protons, helium, and electrons in distant locations \citep[e.g.,][]{AbdoEtAl:2009,AbdoEtAl:2010,AckermannEtAl:2011,AckermannEtAl:2012,AdeEtAl:2015,AjelloEtAl:2016}, far beyond the reach of direct measurements. The interpretation of these data requires well developed propagation models and a detailed knowledge of the spatial distribution of the interstellar gas in the Milky Way. The interstellar gas consists mostly of hydrogen and helium with a number density ratio of approximately 10 to 1 \citep{Ferriere:2001}, while heavier elements represent a negligible fraction of the total gas mass. Depending on its temperature, three forms of the hydrogen gas are distinguished: atomic (H\,{\sc i}), molecular (H$_2$), and ionized (H\,{\sc ii}) hydrogen, while helium remains mostly neutral due to its much higher first ionization potential. The H\,{\sc i}{} component is the most massive, containing about 60\% of the mass while H$_2$ and H\,{\sc ii}{} contain 25\% and 15\%, respectively \citep{Ferriere:2001}. 
The spatial density distribution of the three forms is also widely different. The H\,{\sc ii}{} component is the most widespread with a large scale height perpendicular to the Galactic plane of a few hundred pc near the solar system and a relatively low number density. The H$_2$ component, on the other hand, has a scale height of a few tens of pc near the solar system, is very clumpy, and is contained mostly in high density molecular clouds. The distribution of the neutral H\,{\sc i}{} gas is somewhat intermediate between those of the ionized and molecular components with a scale height of about a hundred pc near the solar system and a large filling factor. Little is known about the distribution of the helium component because it is only observable in one of its ionized states. It is assumed that its distribution closely follows that of hydrogen. The correlation between the high-energy $\gamma$-ray{} intensity and the column density of interstellar gas was well-established using the first $\gamma$-ray{} sky surveys by the SAS-2 and COS-B satellites \citep{LebrunPaul:1979,LebrunEtAl:1983}, and later confirmed by the EGRET telescope \citep{Hunter1997}. The intrinsic connection between CRs and energetic $\gamma$-ray{s} inspired the development of the first self-consistent model for CR propagation and diffuse $\gamma$-ray{} emission and led to the establishment of the open-source {\it GALPROP}\footnote{Available from \href{http://galprop.stanford.edu}{http://galprop.stanford.edu}\label{fn:link}} project in the mid-1990s \citep{MoskalenkoStrong:1998,StrongMoskalenko:1998,2000ApJ...528..357M,MoskalenkoStrong:2000,StrongEtAl:2000,VladimirovEtAl:2011}. Solving a system of $\sim90$ coupled transport equations and calculating the resulting high-energy emissions within a single framework enables the self-consistent treatment of all CR-related data.
From the very beginning of the project, {\it GALPROP}{} has been capable of full 3D spatial and time-dependent propagation calculations, but the data quality and computational requirements limited usage of these capabilities. Thus the CR propagation and production of secondary particles due to the interactions in the ISM relied on 2D cylindrically symmetric models for the respective spatial densities. The data from the {\em Fermi}--LAT{}, with its major improvements in sensitivity and statistics compared to earlier experiments, heralded a new era for studies of the high-energy interstellar emissions from the Milky Way. \citet{AckermannEtAl:2012} considered a grid of 128 CR propagation models and compared them against $\sim2$ years of {\em Fermi}--LAT{} data to test the effects of variations of important model parameters, such as the radial distribution of the CR sources, the size of the propagation volume (the halo), and the spin temperature of the H\,{\sc i}\ gas. The models were all constructed using a 2D (cylindrical symmetry) approximation for the CR propagation. While the models provide reasonable agreement with the data, residuals of the order of a few tens of percent are visible on scales ranging from a few to tens of degrees over the sky. Some of these residuals are likely related to large-scale structure in the CR and ISM distributions that is not described by the 2D models, such as higher gas densities near the spiral arms and/or the presence of freshly accelerated CRs in the vicinity of their sources. Consequently, analyses of the {\em Fermi}--LAT{} data have inspired progress towards more detailed 3D models for the high-energy interstellar emissions: \citet{JohannessonEtAl:2013}, \citet{JohannessonEtAl:2015}, and \citet{PorterEtAl:2017} using {\it GALPROP}, \citet{KissmannEtAl:2017} and \citet{NiederwangerEtAl:2017} with the {\it PICARD} code, and \citet{NavaEtAl:2017} with a Monte Carlo code.
Only for the {\it GALPROP}{}-based modeling have both the 3D structure of the ISM and the CR spatial density distributions been taken into account; other works employ 2D ISM models. Most of the knowledge of the gas distribution in the Milky Way has been acquired from line emission and absorption data. For the H\,{\sc i}{} component the 21-cm hyperfine line is employed, which is observed both in emission and absorption for a wide range of conditions in the ISM \citep{DickeyLockman:1990, KalberlaKerp:2009}. Using simplifying but realistic assumptions, the radiative transport equation for the line emission can be solved to directly relate the observations of the emission line to the column density of the H\,{\sc i}{} gas \citep{KulkarniHeiles:1988}. However, this requires information about the excitation temperature (hereafter, the so-called ``spin temperature'' $T_S$) of the emitting gas. The distribution of $T_S$ has been studied using observations of the 21-cm line in absorption and found to range from a few tens of K to a few thousands of K, and to be strongly correlated with the kinetic temperature of the gas \citep{HeilesTroland:2003, StrasserTaylor:2004, DickeyEtAl:2009}. This agrees with the idea that the H\,{\sc i}{} gas exists as two separate and stable phases in the ISM: the warm neutral medium (WNM, $T$$\sim$\,few thousand K) and the cold neutral medium (CNM, $T$$\sim$\,few tens of K). The WNM has a larger filling factor and is generally more widely spread than the CNM, which is more clumpy and has a smaller scale height, at least in the inner Galaxy. Recent 21-cm absorption studies toward the outer Galaxy indicate that the warm and cold components are well mixed in that region, most likely because of the smaller amounts of molecular gas \citep{DickeyEtAl:2009}. The distribution of H$_2$ is less well known because typical conditions in the cold ISM do not produce detectable line emissions. Other tracers must therefore be used to estimate its column density.
The most common is the rotational transition line from the $^{12}$C$^{16}$O (hereafter CO) molecule, which is the second most abundant molecule in the ISM after H$_2$. The formation conditions of CO are similar to those of H$_2$, and the line emissions are mostly excited through collisions between CO and H$_2$ molecules. It has been observationally shown that the integrated line intensity of the CO lines is almost linearly related to the column density of H$_2$. The linear conversion factor, $X_{\text{CO}}$ has been found to depend somewhat on both column density and temperature of the ISM \citep{BolattoEtAl:2013}. Other molecular tracers, such as OH or $^{13}$CO in dense clouds, can also be used, but they are generally less abundant and their observations are more difficult. The Doppler shift of the line emission caused by differential movement of the interstellar gas in the Galaxy can be modeled to extract distance information. The most common method assumes that the gas is in cylindrical rotation around the GC, which is a technique that has been applied since the beginning of systematic line-emission surveys \citep{Burton:1988, KalberlaKerp:2009}. Even though this approach incorporates the main features of the gas motions, \citet{Burton:1988} pointed out that non-cylindrical streaming motions can cause significant perturbations to modeled line emission profiles. These streaming motions have been shown to be up to 30~km~s$^{-1}$ using numerical simulations \citep{CheminEtAl:2015} and comparisons with other distance estimators \citep{TchernyshyovPeek:2017}. Streaming motions dominate the line profiles in the directions toward the GC and anti-center where cylindrical rotation causes negligible motion along the line of sight (LOS). In addition, thermal and turbulent motions cause line broadening at the level of a few km~s$^{-1}$, up to more than 10~km~s$^{-1}$ \citep{StrasserTaylor:2004}. 
The line broadening seriously affects the distance resolution available using the Doppler shift velocity, effectively smearing the gas along the LOS. The resulting elongated features visible in many derivations of the Galactic distribution of interstellar gas \citep[e.g.,][]{NakanishiSofue:2003} are sometimes referred to as ``fingers of God'' because they all point towards the location of the Sun. Some efforts have been made to correct for these inadequacies: \citet{LevineEtAl:2006} added elliptical rotation in the outer Galaxy to account for inconsistencies observed around the Galactic anti-center and \citet{KalberlaEtAl:2007} included decreased rotation for gas that is further above/below the Galactic disk, while \citet{PohlEtAl:2008} used hydrodynamical simulations to estimate the gas velocity fields to obtain distance estimates in the direction of the GC and used Gaussian profile fitting to account for the line broadening. Even for the latter work noticeable artifacts are evident from the deconvolution procedure that smears out features in the actual spatial distribution. Studies indicate that dust and gas in the ISM are well mixed \citep[e.g.,][]{1978ApJ...224..132B}, and under certain assumptions, the dust column density can also be used as a tracer of the gas column density \citep[e.g.,][]{SchlegelEtAl:1998}. More sensitive surveys of the stars in the Galaxy over large areas of the sky can allow the 3D structure of the gas to be probed through the observation of their light absorption by the dust, a so-called dust reddening effect \citep[e.g.,][]{SchlaflyEtAl:2014}. This method can be more reliable than the kinematic distance estimators for the emission lines and also works for directions toward the GC and anti-center. However, its application is currently limited because it depends upon observations of a large number of stars, and the light of more distant stars is absorbed by the total column density along the LOS and, therefore, is very faint. 
Estimates of the gas distribution with this method require an assumption about a conversion factor between the dust and gas column densities that has been shown to be dependent on the physical properties of the ISM \citep[e.g.,][]{AdeEtAl:2015}. The techniques described above for deriving the distribution of the interstellar gas in the Galaxy have been extensively applied \citep[e.g.,][]{LevineEtAl:2006,PohlEtAl:2008,SchlaflyEtAl:2014,NakanishiSofue:2016,MarascoEtAl:2017,SchlaflyEtAl:2017}. However, their results are not well suited for use as the gas distribution in CR propagation codes. The major issue is that the distributions are usually incomplete with gaps along sight lines toward the GC; this is particularly prevalent for models derived via deconvolution of the line-emission survey data. The stellar absorption method is limited by the distance from the Sun that can be probed (typically extending only out to $\sim10$ kpc), and by the sky coverage. Even with the full-sky coverage of the \citet{PohlEtAl:2008} work, there are still issues with artifacts caused by broadening of the line emissions. In this paper a forward folding model fitting technique is employed to estimate the 3D structure of the gas. Continuity is enforced over the directions with limited distance information by using a parameterized model of the gas distribution. This approach resolves the issue of artifacts caused by smearing and allows for complex gas rotation models that can also be parametrized and tuned to the data. It also allows the complexity of the spatial structure to be easily controlled, as well as the effects that each individual modification has on propagation of CRs to be studied separately. The goal of this paper is to determine the distributions of H\,{\sc i}{} and H$_2$ in the Milky Way, the most important gas components for modeling CR propagation, production of secondaries, and high-energy interstellar emissions.
The H\,{\sc ii}{} gas, which has lower number density and larger scale height, is significantly less important when modeling CR propagation and is not considered in this paper. The effects of 3D structure for the ISRF in combination with CRs and non-thermal interstellar emissions have been explored by \citet{PorterEtAl:2017}. \section{3D modeling of the interstellar gas} \label{sec:gasModeling} \subsection{Analysis method} A forward folding technique is used to derive the 3D structure of the atomic and molecular hydrogen. Parameterized models for both the gas densities and the velocity fields are tuned with a maximum-likelihood fit to H\,{\sc i}{} and CO line-emission data (described below). The evaluations of the likelihood function are made with the {\it GALGAS}{} code, which is described in detail in Appendix~\ref{app:GALGAS}. A brief overview of the code is given here. The {\it GALGAS}{} code is designed to take arbitrary 3D models for the gas density and its velocity field and integrate them along the LOS from a user-specified location (the Solar system in this paper) to create line-emission profiles that can be compared to data. The coordinate system is right-handed with the Sun positioned on the positive $X$-axis at a distance $R_\odot=8.5$~kpc from the Galactic center, while the $Z$-axis is pointing towards the north Galactic pole. The $Z=0$ plane coincides with Galactic latitude $b=0^\circ$ and the $Y$-axis is parallel to the Galactic longitude $l=-90^\circ$. In this coordinate system the conversion from $(l,b,s)$ to $(X,Y,Z)$ is given by \begin{align*} X &= R_\odot - s \cos(b)\cos(l),\\ Y &= - s \cos(b)\sin(l),\quad \text{and}\\ Z &= s \sin(b), \end{align*} where $s$ is the distance along the LOS $(l,b)$. For each LOS, the code uses the velocity field projected onto the line of sight to create a projection from $s$ to Doppler-shifted velocity $v$, which is calculated as the difference between the velocities at the origin and at a point along the LOS.
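A minimal sketch of the coordinate conversion above, assuming only the $R_\odot=8.5$~kpc value given in the text:

```python
import math

# Sketch of the (l, b, s) -> (X, Y, Z) conversion given above,
# with R_sun = 8.5 kpc as in the text.
R_SUN = 8.5  # kpc

def lbs_to_xyz(l_deg, b_deg, s_kpc):
    """Galactic longitude/latitude (deg) and LOS distance (kpc) to (X, Y, Z) in kpc."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    x = R_SUN - s_kpc * math.cos(b) * math.cos(l)
    y = -s_kpc * math.cos(b) * math.sin(l)
    z = s_kpc * math.sin(b)
    return x, y, z

# Looking towards l = b = 0 out to s = R_sun lands on the Galactic center,
# i.e. X = Y = Z = 0 (up to rounding).
print(lbs_to_xyz(0.0, 0.0, R_SUN))
```

The sign conventions match the text: a source at $l=90^\circ$ sits at negative $Y$, since the $Y$-axis is parallel to $l=-90^\circ$.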
Using this projection from $s$ to $v$ the code integrates the gas density along the line of sight to calculate the column density of gas associated with each velocity bin and converts it to line emission as described below. The code accounts for turbulent and thermal motions by smoothing the resulting emission profiles with a Gaussian kernel. Conversion of gas column densities for each velocity bin $v$ to an observed line intensity is necessary for comparison with data. For H\,{\sc i}{}, it is assumed that within each distance bin $s$ the gas is homogeneous with a spin temperature $\bar{T}_S$ that is specified with a parametrized distribution: \begin{equation} \bar{T}_S(v) = \frac{\int_{v} T_S(X,Y,Z) ds}{\int_{v} ds} \end{equation} where the integration is performed over distance bin $s$ associated with velocity bin $v$. The column density is turned into observed brightness using a formula by \citet{KulkarniHeiles:1988}: \begin{equation} T_b(v) = \left[ \bar{T}_S(v) - T_{0}(v) \right] \left[ 1 - e^{-\tau(v)} \right], \label{eq:HItransport} \end{equation} where the optical depth is given by $\tau(v) = N_{\text{H\,{\sc i}}}(v)/C \bar{T}_S(v)$, $N_{\text{H\,{\sc i}}}(v) = \int_v n_{\text{H\,{\sc i}}}(X,Y,Z) ds$ is the column density of hydrogen, and $C = 1.83\times10^{18}$~cm$^{-2}$~K$^{-1}$~(km~s$^{-1}$)$^{-1}$ is a constant. If multiple distance bins align within the same velocity bin along a LOS, the optical depth from distance bins between the observer and the current bin is also included. In this work the only background considered is the cosmic microwave background and the background temperature $T_{0}=2.66$~K is constant over the sky. The Galactic synchrotron continuum emission is non-negligible at 1420~MHz. Its 3D distribution is not well known and is difficult to explicitly account for in Eq.~(\ref{eq:HItransport}), and is therefore not included. 
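The H\,{\sc i}{} radiative-transfer relation above can be sketched as follows for a single homogeneous slab in one velocity bin; the constant $C$ and the background temperature are the values given in the text, while the input column density and spin temperature are illustrative.

```python
import math

# Hedged sketch of the H I brightness-temperature relation quoted above:
# tau = N_HI / (C * T_s),  T_b = (T_s - T_0) * (1 - exp(-tau)),
# for a single homogeneous slab in one velocity bin.
C_HI = 1.83e18  # cm^-2 K^-1 (km/s)^-1, constant from the text
T_BG = 2.66     # K, CMB background temperature at 1420 MHz

def t_brightness(n_hi, t_spin):
    """Observed 21-cm brightness temperature (K) for column density n_hi
    (cm^-2 per km/s channel) and spin temperature t_spin (K)."""
    tau = n_hi / (C_HI * t_spin)
    return (t_spin - T_BG) * (1.0 - math.exp(-tau))

# In the optically thin limit (large T_s), T_b approaches N_HI / C_HI;
# in the optically thick limit, T_b saturates at T_s - T_0.
print(t_brightness(1.0e18, 1.0e4))
```

This makes the text's point concrete: assuming a very large constant $T_S$ (the optically thin limit) minimizes the column density inferred from a given $T_b$, hence the robust lower limit on the H\,{\sc i}{} density.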
Typical estimates \citep{Sofue:2017} imply that the emission from H\,{\sc i}{} in the Galactic disk is underestimated by $\sim 10$~K for the model presented in this paper. Meanwhile, the optically thin assumption is used when modeling the H\,{\sc i}{} gas, which is implemented using a large constant value for $T_S(X,Y,Z)$. This assumption has a much larger effect on the estimated emission than neglecting the synchrotron continuum emission. Hence, the analysis in this paper provides a robust lower limit for the density distribution of the H\,{\sc i}{} gas. For the molecular gas, the standard assumption that the column density of H$_2$ is linearly related to the integrated line emission of CO is used, $N_{\text{H}_2}(v) = X_{\text{CO}}(v) W_{\text{CO}}(v)$. Here $W_{\text{CO}}(v)$ is the integrated CO line emission over the velocity bin $v$ and $X_{\text{CO}}(v)$ is the linear conversion factor. This assumption of a linear relation is entirely phenomenological and does not have a strong physical motivation. The CO emission is generally optically thick, and the linear relation between $N_{\text{H}_2}$ and $W_{\text{CO}}$ may be caused by the prevalent conditions in the ISM \citep{GloverMacLow:2011}. This assumption leads to a simple relation for the number density of H$_2$: \begin{equation} n_{\text{H}_2}(X,Y,Z) = X_{\text{CO}}(X,Y,Z) \epsilon_{\text{CO}}(X,Y,Z) \label{eq:H_2COrelation} \end{equation} where the linear conversion factor $X_{\text{CO}}(X,Y,Z)$ can depend on the position in the Galaxy and $\epsilon_{\text{CO}}$ is the CO volume emissivity.
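The bookkeeping implied by the relations above can be sketched with a toy emissivity profile; the constant $X_{\text{CO}}$ value below is a commonly assumed one, used here only for illustration, not a result of this paper.

```python
# Hedged sketch of the W_CO / N_H2 bookkeeping described above: integrate
# a toy CO volume emissivity along a LOS for one velocity bin, then scale
# by an assumed constant X_CO to get the H2 column density.
X_CO = 2.0e20  # cm^-2 (K km/s)^-1, a commonly assumed conversion factor

def w_co(emissivities, ds_kpc):
    """Integrated CO emission (K km/s) from per-cell emissivities
    (K km/s kpc^-1) on a uniform grid with step ds_kpc."""
    return sum(emissivities) * ds_kpc

def n_h2(w_co_val, x_co=X_CO):
    """H2 column density (cm^-2) from the integrated CO emission."""
    return x_co * w_co_val

eps = [0.0, 0.5, 2.0, 0.5, 0.0]  # illustrative emissivity profile along the LOS
w = w_co(eps, ds_kpc=0.1)
print(w, n_h2(w))
```

Because $W_{\text{CO}}$ depends only on $\epsilon_{\text{CO}}$, the emissivity can be fit to the line data first, and the choice of $X_{\text{CO}}(X,Y,Z)$ deferred to the propagation stage.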
This greatly simplifies the modeling of the CO line emission because the quantity of interest becomes $\epsilon_{\text{CO}}(X,Y,Z)$, which is independent of $X_{\text{CO}}(X,Y,Z)$ and linearly related to the CO line emission, \begin{equation} W_{\text{CO}}(v) = \int_v \epsilon_{\text{CO}}(X,Y,Z) ds. \end{equation} However, propagation codes require the number density of the hydrogen gas, so specification of $X_{\text{CO}}(X,Y,Z)$ is necessary. Common assumptions include a constant $X_{\text{CO}}$ throughout the Galaxy or a radially increasing $X_{\text{CO}}(R)$ that seems to be more consistent with $\gamma$-ray{} data \citep{AckermannEtAl:2012}. The model parameters are tuned by maximizing the likelihood of the model given the data. Even though the data is assumed to be normally distributed with a specified uncertainty, a Student-t likelihood is used. The Student-t distribution has more weight in the tails compared to the normal distribution, so the likelihood is less affected by strong outliers in the data. This property of the likelihood is desirable because the models used in the analysis are by design not able to recover the fine structure of the gas distribution. Generally, there are more data points with low or little emission than bright ones, so using the Student-t likelihood de-weights the emission peaks, allowing a simplified model to capture the basic features of the data without being biased by strong emission peaks that it cannot reproduce. Also, the emission peaks not accounted for by the model will be evident in the residuals for the longitude and latitude profiles, making it easier to identify where additional model refinement is needed. Using the Student-t likelihood requires, in addition to the data uncertainty, specifying the number of degrees of freedom, which is set to $\nu=100$ for this analysis.
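The difference between the two likelihoods can be sketched as follows (an illustrative comparison using \texttt{scipy}, not the fitting code itself):

```python
import numpy as np
from scipy import stats

def log_likelihood_t(residuals, sigma, nu=100):
    """Student-t log-likelihood of scaled residuals; the -log(sigma) term
    accounts for the change of variables. nu -> infinity recovers the
    normal likelihood."""
    return stats.t.logpdf(residuals / sigma, df=nu).sum() \
        - len(residuals) * np.log(sigma)

def log_likelihood_norm(residuals, sigma):
    """Normal log-likelihood for comparison."""
    return stats.norm.logpdf(residuals, scale=sigma).sum()
```

A $10\sigma$ outlier is penalized far less by the Student-t likelihood than by the normal one, which is precisely the de-weighting of unmodeled emission peaks described above.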
The exact value does not change the overall conclusions of this work, but smaller values generally result in a smaller estimated total gas mass with comparatively smaller negative residuals, while larger values give higher gas masses with larger negative residuals. Comparison with likelihoods using normally distributed errors shows that $\nu\gtrsim 10^4$ is needed before the Student-t likelihood gives similar results. \subsection{Data} This paper employs the H\,{\sc i}{} LAB survey \citep{KalberlaEtAl:2005} and the composite CO survey of \citet{DameEtAl:2001} for the model tuning. The data is re-binned to a HEALPix grid \citep{GorskiEtAl:2005} using HEALPix order 7 for H\,{\sc i}{} while order 8 is used for the CO data. The selected resolution on the sky is such that the spatial resolution at the GC is about 80~pc and 40~pc for H\,{\sc i}{} and CO, respectively. This is sufficient to resolve the gradient of the gas distributions near the GC. The velocity resolution of both surveys is degraded to 2~km~s$^{-1}$ velocity bins to reduce the needed computational resources. The lower velocity resolution does not strongly affect the results because it is still smaller than the characteristic line spread for CO and H\,{\sc i}{}, which is assumed to be 6~km~s$^{-1}$ and 10~km~s$^{-1}$, respectively. These values were determined by examining the tail of the line emission close to the tangent point velocities in the inner Galaxy and are in reasonable agreement with the analysis of \citet{MarascoEtAl:2017}. Non-cylindrical motions of the gas dominate the Doppler shifts of the line emission in directions toward the GC and anti-center, so the velocity information is ignored for longitudes $|l| < 10^\circ$ and $170^\circ < l < 190^\circ$ and only the total velocity-integrated emission is compared. In those regions the uncertainty for the bins along each LOS is summed in quadrature assuming that the uncertainties are independent.
The statistical uncertainty on the data is assumed to be constant over the entire sky. Values of 0.05~K for CO and 0.1~K for H\,{\sc i}{} are used, which are consistent with the noise estimates for the original surveys taking into account the re-binning. Because the models are too simple to properly account for the fine structure in the data, the statistical uncertainty of the model parameters is not of high importance and therefore neither is the exact value of the uncertainty on the data. However, some of the model parameters described below are common to both the CO and H\,{\sc i}{} models and constrained by both data sets, and it is thus important to have the relative uncertainties of the two data sets correct to avoid biasing the likelihood either way. The statistics of the two data sets are such that the much brighter H\,{\sc i}{} emission dominates the likelihood. The H\,{\sc i}{} data is filtered to exclude high-velocity emission, and also emission from the Local Group galaxies (Large and Small Magellanic Clouds, M31, and M33). No attempt is made to correct for bright radio background sources because the resolution of the LAB survey is not sufficient to do that accurately. Only data with $|b|\leq 40^\circ$ is selected for the analysis because higher-latitude emission is predominantly due to local clouds that are not elements of the models considered in this paper. Including high-latitude data would considerably increase the number of pixels used for the likelihood evaluation, and hence the required computation, without providing additional constraints for the parameters of the global model. The CO survey is filtered with the moment-masking method of \citet{Dame:2011}, significantly reducing noise in the data. No other filtering is performed on the CO data. \subsection{Model Components} The model components used in this work are: a warped disk with a scale height that varies with Galactocentric radius, a central bulge/bar, and 4 logarithmic spiral arms.
To reduce the number of parameters, many of the geometrical parameters are the same for both the H\,{\sc i}{} and CO models. This includes the parameters controlling the warp of the disk, the radial increase in scale height, the shape of the bulge/bar, and the shape of the spiral arms. The physical motivation for this assumption is that these parameters are controlled by external processes and should affect both components of the gas in a similar way. The radial and vertical profiles of the spiral arms follow that of the disk to further reduce the number of parameters. The velocity field is modeled as cylindrical rotation using the rotation curve of \citet{SofueEtAl:2009} scaled to match the IAU-recommended Sun-GC distance of $R_\odot=8.5$~kpc and the rotation velocity of $v_\odot = 220$~km~s$^{-1}$ at the location of the solar system. The distance from the GC projected onto the $X$-$Y$ plane is calculated as $R = \sqrt{X^2+Y^2}$. As noted above, the optically thin assumption is used for the H\,{\sc i}{} data, effectively modeled using a large constant value for $T_S$ in Eq.~(\ref{eq:HItransport}), so the resulting H\,{\sc i}{} column densities are robust lower limits. Two functional forms of the radial profile for the number density of the disk and spiral arms are explored: an exponential disk with a central hole, \begin{equation} f_d (R) = n_d e^{-(R-r_s)/r_0} \left[ 1 - e^{-\left( R/r_h \right)^{h_i}} \right], \label{eq:ExpDisc} \end{equation} where $r_s=8.0$ kpc\footnote{$r_s$ is a normalization constant and its value was chosen to coincide with the point in the cubic spline closest to $R_\odot$} and $n_d, r_0, r_h, h_i$ are free parameters, and a cubic spline in the logarithm of the number density $n(R)$ between the values at constant radii, $R = 0, 2, 4, 6, 8, 10, 15, 20$, and $50$~kpc.
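The two radial profiles can be sketched in Python using the best-fit values from Table~\ref{tab:diskParameters} (a minimal sketch; the function names are hypothetical):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def f_disk_exp(R, n_d=0.894, r_s=8.0, r_0=1.27, r_h=6.34, h_i=6.38):
    """Exponential disk with a central hole, Eq. (ExpDisc); CO best-fit values."""
    return n_d * np.exp(-(R - r_s) / r_0) * (1.0 - np.exp(-(R / r_h) ** h_i))

# Cubic spline in log density for the HI disk (Table values relative to n_8)
n8 = 0.160  # cm^-3 at R = 8 kpc
R_knots = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 50.0])
n_rel = np.array([1.72e-6, 0.284, 0.807, 1.19, 1.0,
                  0.798, 0.477, 0.0457, 1.12e-3])
_log_spline = CubicSpline(R_knots, np.log(n8 * n_rel))

def f_disk_spline(R):
    """Log-spline radial profile used for the HI disk [cm^-3]."""
    return np.exp(_log_spline(R))
```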
The radii are selected based on previously determined radial profiles for the H\,{\sc i}{} and CO gas while minimizing the number of parameters \citep{GordonBurton:1976,BronfmanEtAl:1988}. The cubic spline provides more freedom, but at the expense of more than double the number of parameters. When combined with the bulge/bar component described below, the number density at the two innermost radial points of the cubic spline is fixed to a small value. The warp of the Galactic disk is modeled similarly to \citet{LevineEtAl:2006}: \begin{equation} z_0 = w_0(R) + w_1(R)\sin\left( \Theta - \theta_1 \right) + w_2(R)\sin\left( 2\Theta - \theta_2 \right) \label{eq:warp} \end{equation} where $\Theta = \tan^{-1}(Y/X)$ is the azimuthal angle. The radial dependence of the amplitudes $w_i(R)$ is modeled with a cubic spline between the constant radii: 0, 5, 10, 15, 20, and 50 kpc. While the warp is predominantly in the outer Galaxy, the points in the inner Galaxy are used to account for small variations in the disk mid-plane as found by \citet{BronfmanEtAl:1988}. The phase angles, $\theta_i$, are assumed to be independent of the radius. The vertical profile of the disk is modeled as a function of $Z'/z_h$, where $Z' = Z - z_0$ is the distance from the central plane of the disk, and $z_h$ is the scale height of the disk. Several functions describing the vertical profile are tested, including an exponential, a Gaussian, and the hyperbolic secant raised to the powers $2$, $1$, and $0.5$. To account for the increasing scale height in the outer Galaxy, its radial dependence is modeled as \begin{equation} z_h(R) = z_s e^{(R-r_{z_0})/r_z}, \label{eq:scaleHeight} \end{equation} where $r_{z_0}=8.5$ kpc is a constant\footnote{$z_s$ is thus the scale height of the gas at the solar location.}, and $z_s, r_z$ are free parameters.
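Eqs.~(\ref{eq:warp}) and (\ref{eq:scaleHeight}) can be sketched with the best-fit values from Tables~\ref{tab:diskParameters} and \ref{tab:warpParameters} (an illustrative sketch, not the fitting code; function names are hypothetical):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Warp amplitudes w_i(R) [kpc], spline knots at R = 0, 5, 10, 15, 20, 50 kpc
R_w = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 50.0])
w0 = CubicSpline(R_w, [-0.0756, -0.00819, -0.0288, 0.0576, 0.767, 20.0])
w1 = CubicSpline(R_w, [0.146, -0.0520, 0.101, 0.737, 1.71, 20.0])
w2 = CubicSpline(R_w, [0.287, -0.0192, -0.00716, -0.00587, 0.587, 14.9])
theta1, theta2 = 4.61, 2.73  # warp phases [rad]

def z_mid(X, Y):
    """Height of the warped mid-plane z_0(R, Theta), Eq. (warp)."""
    R = np.hypot(X, Y)
    Theta = np.arctan2(Y, X)
    return w0(R) + w1(R) * np.sin(Theta - theta1) \
        + w2(R) * np.sin(2.0 * Theta - theta2)

def z_h(R, z_s, r_z=6.94, r_z0=8.5):
    """Flaring scale height, Eq. (scaleHeight); z_s is the local value."""
    return z_s * np.exp((R - r_z0) / r_z)
```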
The final disk model is thus \begin{equation} f_d(X,Y,Z) = f_d(R) f_s(R,Z'), \label{eq:totalDisk} \end{equation} where $f_s(R,Z')$ is one of the scale functions describing the vertical profile of the disk. The central bulge/bar component is parameterized with the function \begin{equation} \begin{aligned} f_{b}(X,Y,Z) &= n_{b} e^{-R_r^{e_i}} R_r^{p_i} \quad \text{with}\\ R_r &= \left( R'/r_b + Z/z_b \right)^{-1}, \\ R' &= \sqrt{ \left( X' \right)^2 + \left( Y'/0.3 \right)^2}, \\ X' &= X \cos(\theta_b) + Y \sin(\theta_b) + x_0, \\ Y' &= -X \sin(\theta_b) + Y \cos(\theta_b). \label{eq:Bulge} \end{aligned} \end{equation} where $n_{b}$, $e_i$, $p_i$, $r_b$, $z_b$, and $x_0$ are free parameters while $\theta_b = -30^\circ$ is a constant. The lack of velocity and, therefore, distance information means that it is not possible to constrain $\theta_b$ using the method employed in this paper. The exact value chosen for $\theta_b$ will affect the other parameters of the bulge, but not the data-model agreement. The value of $\theta_b$ that we use is within the range of $-10^\circ$ \citep{Freudenreich:1998} to $-45^\circ$ \citep{Lopez-CorredoiraEtAl:2007} given in the literature. The rotation of the bulge/bar corresponds to its closest distance being at positive longitudes. \begin{deluxetable}{lc}[tb!] 
\tablecaption{\label{tab:diskParameters} Model parameters describing the radial and vertical distributions of the number density for the H\,{\sc i}\ and CO models} \tablecolumns{2} \tablehead{\colhead{Parameter} & \colhead{Value} } \startdata \multicolumn{2}{l}{\sc \quad Disk parameters for CO model} \smallskip\\ $n_{d}$, K~km~s$^{-1}$~kpc$^{-1}$ & 0.894 \\ $r_s$\tablenotemark{a}, kpc & 8.0 \\ $r_0$, kpc & 1.27 \\ $r_h$, kpc & 6.34 \\ $h_i$ & 6.38 \\ $z_{s,\text{CO}}$\tablenotemark{b}, kpc & 0.103 \medskip \\ \multicolumn{2}{l}{\sc \quad Disk parameters for H\,{\sc i}\ model} \smallskip\\ $n(R=8\,\text{kpc}) \equiv n_8$, cm$^{-3}$ & 0.160 \\ $n(0\,\text{kpc})/n_8$ & $1.72 \times 10^{-6}$ \\ $n(2\,\text{kpc})/n_8$ & $0.284$ \\ $n(4\,\text{kpc})/n_8$ & $0.807$ \\ $n(6\,\text{kpc})/n_8$ & $1.19$ \\ $n(10\,\text{kpc})/n_8$ & $0.798$ \\ $n(15\,\text{kpc})/n_8$ & $0.477$ \\ $n(20\,\text{kpc})/n_8$ & $0.0457$ \\ $n(50\,\text{kpc})/n_8$ & $1.12 \times 10^{-3}$ \\ $z_{s,\text{H\,{\sc i}}}$\tablenotemark{b}, kpc & 0.0942 \medskip \\ \multicolumn{2}{l}{\sc \quad Bulge parameters for CO model} \smallskip\\ $n_{b}$, K~km~s$^{-1}$~kpc$^{-1}$ & 47.8 \\ $\theta_b$\tablenotemark{a}, rad & 5.67 \\ $x_0$, kpc & 0.751 \\ $r_b$, kpc & 0.514 \\ $z_b$, pc & 6.43 \\ $e_i$ & 0.647 \\ $p_i$ & 1.18 \medskip \\ \multicolumn{2}{l}{\sc \quad Flare parameters (common)} \smallskip\\ $r_z$, kpc & 6.94 \\ $r_{z_0}$\tablenotemark{a}, kpc & 8.5 \enddata \tablenotetext{a}{Constant} \tablenotetext{b}{Note that the CO model uses squared hyperbolic secant, while the H\,{\sc i}\ model uses the square root of hyperbolic secant for the vertical scale. These numbers are therefore not directly comparable.} \end{deluxetable} \begin{deluxetable}{lccc}[tb!] 
\tablecaption{\label{tab:warpParameters}Model parameters describing the disk warp} \tablecolumns{4} \tablehead{\colhead{Parameter} & \colhead{Mode 0} & \colhead{Mode 1} & \colhead{Mode 2} } \startdata $\theta$, rad & \nodata & 4.61 & 2.73 \\ $w(0\,\text{kpc})$, kpc & --0.0756 & 0.146 & 0.287 \\ $w(5\,\text{kpc})$, kpc & --0.00819 & --0.0520 & --0.0192 \\ $w(10\,\text{kpc})$, kpc & --0.0288 & 0.101 & --0.00716 \\ $w(15\,\text{kpc})$, kpc & 0.0576 & 0.737 & --0.00587 \\ $w(20\,\text{kpc})$, kpc & 0.767 & 1.71 & 0.587 \\ $w(50\,\text{kpc})$, kpc & 20\tablenotemark{a} & 20\tablenotemark{a} & 14.9 \enddata \tablenotetext{a}{Parameter at fit range boundary.} \end{deluxetable} \begin{deluxetable}{cccccc}[b!] \tablecaption{\label{tab:armParameters}Model parameters describing the shape and number density of the spiral arms} \tablecolumns{6} \tablehead{ Arm & \colhead{$\alpha_j$} & \colhead{$r_{\text{min},j}$} & \colhead{$\theta_{\text{min},j}$\tablenotemark{a}} & \colhead{$\epsilon_{\text{CO}}(8\,\text{kpc})$} & \colhead{$n_{\text{H\,{\sc i}}}(8\,\text{kpc})$} \\ No. & & kpc & rad & K\,km\,s$^{-1}$\,kpc$^{-1}$ & cm$^{-3}$ } \startdata 1 & 3.30 & 2.00 & 1.05 & 0.642 & 0.184 \\ 2 & 4.35 & 3.31 & 2.62 & 0\tablenotemark{b} & 0.193 \\ 3 & 5.32 & 3.89 & 4.19 & 3.37 & 0.332 \\ 4 & 4.75 & 3.19 & 5.76 & 7.53 & 0.521 \enddata \tablenotetext{a}{Constant} \tablenotetext{b}{Parameter at fit range boundary.} \end{deluxetable} The spiral arms are purely logarithmic: \begin{equation} \theta_j(R) = \alpha_j \log\left( R/r_{\text{min},j} \right) + \theta_{\text{min},j}, \label{eq:arms} \end{equation} where $\alpha_j$, $r_{\text{min},j}$ and $\theta_{\text{min},j}$ are parameters of the model. Values of $\alpha_j$ and $r_{\text{min},j}$ are tuned in the minimization procedure, while the starting angles $\theta_{\text{min},j} = -\pi/6 + j\pi/2$ are held constant throughout. The pitch angle of the spiral arm can be determined using $\theta_{p,j} = \tan^{-1}(\alpha_j^{-1})$. 
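The bulge/bar of Eq.~(\ref{eq:Bulge}) and the arm loci of Eq.~(\ref{eq:arms}) can be sketched as follows. This is an illustrative sketch with the CO best-fit values from Tables~\ref{tab:diskParameters} and \ref{tab:armParameters}; taking $|Z|$ (symmetry about the plane) and $\theta_b = -30^\circ$ from the text are assumptions.

```python
import numpy as np

def f_bulge(X, Y, Z, n_b=47.8, r_b=0.514, z_b=0.00643, x0=0.751,
            e_i=0.647, p_i=1.18, theta_b=np.deg2rad(-30.0)):
    """Bulge/bar density, Eq. (Bulge); lengths in kpc (z_b = 6.43 pc)."""
    Xp = X * np.cos(theta_b) + Y * np.sin(theta_b) + x0
    Yp = -X * np.sin(theta_b) + Y * np.cos(theta_b)
    Rp = np.hypot(Xp, Yp / 0.3)
    Rr = 1.0 / (Rp / r_b + np.abs(Z) / z_b)  # |Z| assumed for plane symmetry
    return n_b * np.exp(-Rr ** e_i) * Rr ** p_i

def arm_locus(R, alpha, r_min, theta_min):
    """Azimuth of a logarithmic spiral arm at radius R, Eq. (arms)."""
    return alpha * np.log(R / r_min) + theta_min

def pitch_angle_deg(alpha):
    """Pitch angle theta_p = arctan(1/alpha), in degrees."""
    return np.degrees(np.arctan(1.0 / alpha))
```

For the tabulated $\alpha_j$ this reproduces the pitch angles quoted below, e.g. $\alpha_1 = 3.30$ gives $\theta_{p,1} \approx 16.8^\circ$.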
The starting points of arms 2 and 4 are at the ends of the central bulge/bar. The arms have a Gaussian profile perpendicular to the locus traced by Eq.~(\ref{eq:arms}) with a scale of 0.6~kpc, giving them a FWHM of $\sim 1.4$~kpc. The radial density distribution of each arm is assumed to be identical to that of the disk in each density model, but with an independent normalization for each arm. The vertical scale height of the arms is the same as that of the disk for the respective gas component. To ensure stable fits, the maximum-likelihood procedure is performed in several iterations using the best-fit values from the previous step as a starting point for the next. The initial fit is performed using the disk component only, without the warp and the radial increase in the scale height. The model complexity is then increased by fitting for additional parameters and components in the following order: the radial distribution of the vertical scale height, the central bulge/bar, the warp of the disk, and the spiral arms. Because of the number of parameters and computing time required, the selection among the different radial and vertical profiles is performed without inclusion of the warp and spiral arms. \subsection{Results} According to the maximum-likelihood values, the logarithmic cubic-spline radial profile is a better match for the H\,{\sc i}{} number density profile, while the exponential disk with a central hole is preferred for the CO distribution. The central bulge/bar component is rejected by the fit for H\,{\sc i}{} and subsequently excluded from the H\,{\sc i}{} model. The best-fit vertical profile is the square root of the hyperbolic secant for the H\,{\sc i}{} model, while for CO it is the square of the hyperbolic secant that gives the best fit of all the tested profiles.
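The perpendicular arm profile described above is a simple Gaussian; a minimal sketch verifying the quoted FWHM:

```python
import numpy as np

SIGMA_ARM = 0.6  # kpc, Gaussian scale perpendicular to the arm locus

def arm_profile(d):
    """Relative arm density at perpendicular distance d [kpc] from the core."""
    return np.exp(-0.5 * (d / SIGMA_ARM) ** 2)

# FWHM = 2 sqrt(2 ln 2) sigma ~ 1.41 kpc, the ~1.4 kpc quoted in the text
FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0)) * SIGMA_ARM
```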
The parameters and their best-fit values for the final models of H\,{\sc i}{} and CO gas are listed in Table~\ref{tab:diskParameters} for the radial and vertical number density distributions of the disk, in Table~\ref{tab:warpParameters} for the warp describing the central plane of the disk, and in Table~\ref{tab:armParameters} for the spiral arm parameters. Because the combined model does not reproduce the fine structure of the data, statistical uncertainties are unimportant and not reported. The statistical uncertainties are in most cases less than 0.1\%. Each parameter value is reported with 3 significant digits, or at the level of the statistical uncertainty, whichever has fewer significant digits. The final models will be distributed with a larger number of significant digits as supplementary material to the paper in XML form readable by the {\it GALPROP}{}\textsuperscript{\ref{fn:link}} code version 56. \begin{figure*}[tb!] \centering \includegraphics[width=.50\textwidth]{f1a}\hfill \includegraphics[width=.50\textwidth]{f1b}\\ \includegraphics[width=.50\textwidth]{f1c}\hfill \includegraphics[width=.50\textwidth]{f1d} \caption{Surface density maps for the final models (top) and maps of the first moment of the vertical density distribution (bottom). The H\,{\sc i}{} gas component is on the left and the H$_2$ component is on the right. The CO number density is converted to H$_2$ number density assuming $X_{\rm CO} = 2 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. The Sun is marked as a white point and the white-dashed lines mark the longitude grid with a 30$^\circ$ step. The cyan curves on the surface density maps trace the cores of the spiral arms, with each arm marked with a different line style: arm 1 is solid, arm 2 -- dotted, arm 3 -- dashed, and arm 4 -- dash-dotted.} \label{fig:gasMaps} \end{figure*} The best-fit model parameters are in reasonable agreement with previous studies of the interstellar gas.
The radial distribution of the H\,{\sc i}{} surface density is approximately flat between 4 and 15~kpc \citep{GordonBurton:1976}. The distribution then falls off exponentially toward the outer Galaxy at a rate similar to that found by \citet{KalberlaDedes:2008}. The CO distribution has a peak between 4 and 6~kpc \citep{BronfmanEtAl:1988} before falling off exponentially at larger Galactocentric distances. The central bulge/bar accounts for some of the strong CO emission in the inner Galaxy \citep{FerriereEtAl:2007}. The scale parameter for the disk flaring is somewhat smaller than that derived for the global average by \citet{KalberlaDedes:2008}. The flaring parameter value is found to be closer to its value for the northern Galactic hemisphere, indicating that northern latitudes ($b > 0^\circ$) may have more weight when fitting the parameters of the described model. The warp parameters are in excellent agreement with those found by \citet{LevineEtAl:2006} for Galactocentric radii between 10 and 25~kpc. The best-fit arm pitch angles are $16.8^\circ$ (arm 1), $12.9^\circ$ (arm 2), $10.6^\circ$ (arm 3), and $11.9^\circ$ (arm 4). These are all within the range of arm pitch angle estimates from the literature \citep[e.g.,][]{Vallee:2017}. The total masses of the H\,{\sc i}{} and H$_2$ gas components in the constructed models are $4.9 \times 10^9$~M$_\odot$ (H\,{\sc i}{}) and $0.67 \times 10^9$~M$_\odot$ (H$_2$), where the standard value of $X_{\rm CO} = 2 \times 10^{20}$ cm$^{-2}$ (K km~s$^{-1}$)$^{-1}$ \citep{BolattoEtAl:2013} is used for the conversion. The spiral arms in the H\,{\sc i}{} model account for approximately 25\% of the mass while the remainder is in the disk. For the CO model the arms account for $\sim42$\% of the mass, the bulge/bar $\sim31$\%, and $\sim27$\% is in the disk component.
These mass ratios may not relate directly to an estimated H$_2$ mass because the $X_{\text{CO}}$ conversion factor has been shown to depend on the specific properties of the ISM, and near the GC it may be an order of magnitude smaller than the Galactic average \citep{FerriereEtAl:2007,AckermannEtAl:2012}. For example, using $X_{\rm CO} = 2 \times 10^{19}$ cm$^{-2}$ (K km~s$^{-1}$)$^{-1}$ for the bulge/bar component reduces the total H$_2$ mass in the model to $0.48 \times 10^9$~M$_\odot$ and the fraction of the mass in the bulge/bar to $\sim5$\%. \begin{figure*}[tb!] \center{ \includegraphics[width=0.5\textwidth]{f2a}\hfill \includegraphics[width=0.5\textwidth]{f2b}\\ \includegraphics[width=0.5\textwidth]{f2c}\hfill \includegraphics[width=0.5\textwidth]{f2d} } \caption{Longitude-velocity diagram integrated over the latitude range $|b| < 4^\circ$ for CO (left panels) and H\,{\sc i}\ (right panels) gas. The top row shows the data, while the bottom row shows our best-fit model. The cyan curves trace the cores of the spiral arms; the line coding is the same as in Figure~\ref{fig:gasMaps}. } \label{fig:lvdiagram} \end{figure*} The surface density maps of the final models \begin{equation} \Sigma (X,Y) = \int_{-\infty}^\infty n(X,Y,Z) dZ \label{eq:surfaceDensity} \end{equation} are shown in Figure~\ref{fig:gasMaps}. The CO model has been converted to H$_2$ surface density using $X_{\text{CO}} = 2 \times 10^{20}$ cm$^{-2}$ (K~km~s$^{-1}$)$^{-1}$. The central bulge/bar and some spiral arms are clearly visible on the H$_2$ surface density map, while the lack of arm 1 and the faint haze at the location of arm 2 are also apparent. Comparison of the spiral arm structures to the catalog of the spiral arm tangents collected by \citet{Vallee:2014} leads to the following identifications: arm 1 accounts for the Perseus arm, arm 2 -- for Sagittarius and Carina, arm 3 -- for Scutum and Crux-Centaurus, and arm 4 -- for the 3~kpc arm, Norma, and Cygnus.
Arm 3 also accounts for the new arm detected by \citet{DameThaddeus:2011}, which has been confirmed using parallax observations by \citet{SannaEtAl:2017}. This picture is also mostly consistent with the updated spiral arm model presented by \citet{Vallee:2016}. The starting points of the arms shown in his Figure~2 are at 2~kpc, so care must be taken when matching the arms. The only exception is that arm 1 does not match the Perseus arm in the updated model of \citet{Vallee:2016}. The final arm parameter values are somewhat dependent on the initial values chosen for the model fit, and their interpretation could be ambiguous. In particular, arm 1 does not match perfectly with the start of Perseus at $l=-23^\circ$; its start is closer to $l=-15^\circ$. It also overlaps with arm 2 in the inner Galaxy, indicating that there is some degeneracy between these arms. In turn, this may affect the derived values of the $\alpha_j$ parameters so that the modeled spiral arms match observations in the outer Galaxy. The presented spiral arm configuration is quite stable over the relatively few choices of initial values tested in this analysis. The resulting small variations in the model parameters do not qualitatively affect the results for CR propagation and high-energy interstellar emission (Section~\ref{sec:CRgamma}), and further exploration of the model is deferred to future work. To illustrate the warp of the Galactic disk, the first moment of the calculated vertical number density distribution \begin{equation} \left<Z\right>_n(X,Y) = \frac{ \int_{-\infty}^{\infty} Z n(X,Y,Z) dZ}{\Sigma(X,Y)} \label{eq:meanHeight} \end{equation} is shown in Figure~\ref{fig:gasMaps}. The results are in good agreement with those from \citet{LevineEtAl:2006} (their Figure~11). In this work the same warp is applied to the disk and spiral arm components of the H\,{\sc i}{} and CO models, but the central bulge/bar does not include any warp.
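Eqs.~(\ref{eq:surfaceDensity}) and (\ref{eq:meanHeight}) can be evaluated numerically for a single $(X,Y)$ column; a minimal sketch (hypothetical function names, trapezoidal quadrature):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration, kept explicit for NumPy-version safety."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def surface_density_and_mean_height(n_of_z, z_grid):
    """Numerical Eqs. (surfaceDensity) and (meanHeight) for one (X, Y)
    column, with the density sampled on the vertical grid z_grid."""
    sigma = _trapz(n_of_z, z_grid)                     # Sigma(X, Y)
    mean_z = _trapz(z_grid * n_of_z, z_grid) / sigma   # <Z>_n(X, Y)
    return sigma, mean_z
```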
The best-fit parameters for the CO model correspond to a bulge/bar component that falls off more slowly with Galactocentric distance than the disk and spiral arm components. The bulge/bar contributes to the gas number density even outside the solar radius, and the warp maps of the H\,{\sc i}{} and CO models are therefore not identical even though the warp component is the same for both models. The contribution of the bulge/bar component falls off more quickly along its minor axis in the plane, resulting in larger values of the first moment along that direction. The warp is not significant for the CO model because its number density for distances $\gtrsim 10$~kpc is low. Even though the effect of the warp is small in the inner Galaxy, it is found to be necessary for the model because it significantly improves the data-model agreement. Figure~\ref{fig:lvdiagram} shows the longitude-velocity diagram for both models and data integrated over the Galactic disk $|b| < 4^\circ$, with the locations of the spiral arms overlaid. Overall, the models reproduce the data fairly well, with the main ``butterfly'' shape of the model driven by the cylindrical rotation. The spiral arm structures are clearly visible in the H\,{\sc i}{} model plot and their locations reasonably match similar structures in the data. However, the enforced smoothness of the models means that the true complexity of the observed ISM is not fully recoverable and even some large-scale features are not reproduced. There is a clear spur off the spiral arm structure, visible to the right of $(l,V)=(100^\circ, -60$ km~s$^{-1})$ in the H\,{\sc i}{} data, that is absent in the model. Gaps in the data near $(50^\circ, -30$ km~s$^{-1})$ and $(-110^\circ, 30$ km~s$^{-1})$ show the absence of the gas, but correspond to the spiral arms in the model. Very bright emission near $(80^\circ, 0$ km~s$^{-1})$ and $(-90^\circ, 0$ km~s$^{-1})$ is not reproduced by either the H\,{\sc i}{} or CO models.
The CO model is also much fainter than the data toward the inner Galaxy, and the few clouds visible in the outer Galaxy are not reproduced either. Figure~\ref{fig:lvdiagram} also illustrates why the densities in spiral arms 1 and 2 are lower than in the other arms, especially for the CO model. Arm 1 (associated with the Perseus arm and shown as a solid curve) starts in a void in the data at around $l=15^\circ$. It then aligns with the location of arm 2 in a region with data and follows to some extent other evident features all the way to the outer Galaxy. It looks shifted relative to the brightest features in the data at $90^\circ < l < 150^\circ$ in both CO and H\,{\sc i}{}. Arm 1 ends up in a large void in the H\,{\sc i}{} data at $l \sim -110^\circ$ before picking up some structure further on. There is also a bright feature near $(40^\circ, 40$ km~s$^{-1})$ in the CO data with corresponding structure in the H\,{\sc i}{} data that may be associated with this spiral arm. The location of this feature is, however, offset from the modeled spiral arm. Arm 2, associated with Sagittarius and Carina and shown as a dotted curve, seems to be offset from a very bright feature in the H\,{\sc i}{} data at $15^\circ < l < 45^\circ$, and from a fainter feature at $-70^\circ < l < -20^\circ$. There is evidence of similar offsets in the CO data as well. The offsets mentioned above are likely caused by some combination of an incorrect spiral arm shape and/or variations in the velocity field. However, it is difficult to discriminate between these two effects without additional information. \begin{figure}[tb!] \centering \includegraphics[width=0.48\textwidth]{f3a}\\ \includegraphics[width=0.48\textwidth]{f3b} \caption{Longitude profiles of the models (curves) overlaid on the data (points). The model and data are integrated over all velocity bins and averaged over the latitude range $|b| < 4^\circ$. H\,{\sc i}{} gas is on top and CO is at the bottom. 
} \label{fig:longitudeProfiles} \end{figure} The longitude profiles (Figure~\ref{fig:longitudeProfiles}) show that the data are under-predicted by the models. This is a consequence of using the Student-t likelihood, which de-weights strong outliers in the data, in combination with a simplified and smooth model. The outliers are positive in almost all cases, giving rise to positive residuals. Of the two, the H\,{\sc i}{} model performs somewhat better at representing the data, with the residuals fairly constant over the entire longitude range. The most conspicuous residuals are seen at $60^\circ \leq l \leq 160^\circ$ and $-100^\circ \leq l \leq -20^\circ$. The spiral arm tangents are fairly obvious in the models and coincide with corresponding peaks in the data profiles reasonably well, indicating that the locations of the spiral arms in the model are mostly correct. Therefore, some of the discrepancies observed in the $l$-$V$ diagrams in Figure~\ref{fig:lvdiagram} are likely due to streaming motions of the gas that are unaccounted for in the models and can give rise to velocity discrepancies of more than 10~km~s$^{-1}$. It is also likely that the assumption made in this paper of an azimuthally independent radial distribution is not appropriate \citep{KalberlaDedes:2008}. The residuals in the outer Galaxy for longitudes in the range $90^\circ \leq l \leq 180^\circ$ are larger than those for $-180^\circ \leq l \leq -90^\circ$ for both the H\,{\sc i}{} and CO models. This indicates that the models are missing some structure in that region, possibly related to the Perseus arm. The CO model is more strongly affected due to the clumpy nature of the CO emission visible in the $l$-$V$ diagram in Figure~\ref{fig:lvdiagram}. About 50\% of the CO intensity for $|l| \leq 45^\circ$ is unaccounted for by the model, and there are positive residuals around $l \approx \pm 90^\circ$. The very bright peak in CO emission toward the GC is also not reproduced by the bulge/bar structure.
Outside these longitudes the CO model fares better, mostly because the CO intensity is low toward the outer Galaxy. There is also the assumption of continuity for the spiral arms; it is not guaranteed that the spiral arms all follow a logarithmic shape along their entire length. The arms are also assumed to all follow the same radial distribution in density, which may not be the case. This has the effect that even though the fraction of the density contained in the spiral arms is higher in the best-fit CO model compared to the best-fit H\,{\sc i}{} model, the peaks associated with the spiral arm tangents are less prominent in the CO model. There is a clear lack of modeled emission around the tangents of the Perseus arm at $l\approx -23^\circ$ and the Sagittarius-Carina arm at $l\approx 51^\circ$ and $l\approx -79^\circ$. The CO spatial distribution model developed here is an initial attempt at decomposing the emission into spiral arm and disk components, given the methodology employed in this paper. Additional model tuning is required to improve the representation of the data, but this is deferred to future work. \section{Effects on CR propagation and high-energy $\gamma$-ray{} emission} \label{sec:CRgamma} \subsection{The GALPROP code} \label{sec:GALPROP} The underlying concept of the {\it GALPROP}{} code is that all CR-related data, including direct measurements and indirect electromagnetic observations, are related to the same Milky Way galaxy and should, therefore, be modeled self-consistently. With over 20 years of development, it is the best known and most feature-rich code for calculations of CR propagation and interactions in the Galactic ISM. The recently released {\it GALPROP}{} code version 56 \citep{PorterEtAl:2017} is used in this work. The latest releases are always available at the dedicated website\textsuperscript{\ref{fn:link}}, which also provides the WebRun facility to run {\it GALPROP}{} via a web browser interface.
The website contains detailed information on CR propagation together with links to all {\it GALPROP}{} team publications and the supporting data sets required to run the code. The interstellar gas models developed in this paper will also become available at the above-mentioned website. A brief overview of the {\it GALPROP}{} code is given below, while further details can be found in recent {\it GALPROP}{} publications \citep[e.g.,][and references therein]{PorterEtAl:2017,JohannessonEtAl:2016,VladimirovEtAl:2011}. The {\it GALPROP}{} code numerically solves the diffusion-reacceleration equation for CR transport with a given source distribution and boundary conditions for all CR isotopes. Energy losses from ionization and Coulomb interactions are included for all species, while for CR electrons and positrons energy losses due to Bremsstrahlung, inverse Compton (IC) scattering, and synchrotron emission are also included. Additional processes for nuclei include nuclear spallation, secondary particle production, radioactive decay, electron capture and stripping, electron knock-on, and electron K-capture. Electromagnetic radiation from the decays of $\pi^0$, $K^0$, and heavier mesons, Bremsstrahlung, IC scattering, and synchrotron emission is calculated self-consistently once the system of transport equations has been solved. To capture the finer structure of the interstellar gas, the $\gamma$-ray\ intensity maps associated with interactions between CRs and the interstellar gas use the column density estimated from the line emission surveys, which have been split into Galactocentric annular maps using the gas velocity field; see Appendix~B of \citet{AckermannEtAl:2012} for full details of their construction. The {\it GALPROP}{} code has proven to be remarkably successful in modeling both CR data \citep[e.g.,][]{JohannessonEtAl:2016,BoschiniEtAl:2017} and electromagnetic radiation associated with CR interactions in the ISM \citep[e.g.,][]{AckermannEtAl:2012,AjelloEtAl:2016}.
The spatial number density distribution of the interstellar gas is used in {\it GALPROP}{} for calculations of the production of secondaries and energy losses by CR species. In addition, the number density distribution is used internally for the proper weighting of the gas column density along the LOS in the individual Galactocentric gas rings for calculations of Bremsstrahlung and $\pi^0$-decay $\gamma$-ray{} intensity maps. However, the density distribution is approximate and does not capture all details of the true distribution of the interstellar gas. Therefore, the ratios between the column densities estimated from the H\,{\sc i}{} and CO line emission surveys and the corresponding column densities from the Galactic gas distributions employed in {\it GALPROP}{} are used as multiplicative corrections for each LOS integration. This method enables {\it GALPROP}{} to predict the details in the structure of the gas-related interstellar emissions imprinted in the $\gamma$-ray{} skymaps without the necessity of developing precise 3D models of the gas distribution in the Galaxy. Even though all versions of {\it GALPROP}{} from the very beginning have allowed full 3D functionality, including 3D gas distributions, the analytic 2D cylindrically symmetric gas distributions as described by \citet{AckermannEtAl:2012} have been most commonly used. This usage mode is attributable to the increased demand for computing resources required to run {\it GALPROP}{} in 3D mode as well as the absence of detailed 3D models of the ISM. The latest {\it GALPROP}{} (version 56) expands upon this functionality with the capability to read the same XML description of the gas distribution used in the {\it GALGAS}\ code, and the optimizations made significantly improve the performance. The output from the analysis described in Section~\ref{sec:gasModeling} can, therefore, be used directly without any modification. 
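The renormalization scheme described above can be sketched as simple bookkeeping: for each LOS and Galactocentric ring, the model prediction is rescaled by the ratio of the survey-derived column density to the column density integrated through the smooth gas model. The function below is a hypothetical illustration of that step (the array names and the handling of empty rings are assumptions, not the actual {\it GALPROP}{} implementation).

```python
import numpy as np

def corrected_emission_column(N_survey, N_model, model_emission):
    """Apply per-ring multiplicative column-density corrections for one LOS.

    N_survey       : column densities from the HI/CO annular maps,
                     one value per Galactocentric ring.
    N_model        : the same columns integrated through the smooth
                     (2D or 3D) gas model.
    model_emission : model gamma-ray emission per ring along this LOS.
    Rings where the model predicts no gas are left uncorrected
    (an assumption made for this sketch).
    """
    N_survey = np.asarray(N_survey, dtype=float)
    N_model = np.asarray(N_model, dtype=float)
    ratio = np.ones_like(N_model)
    ok = N_model > 0
    ratio[ok] = N_survey[ok] / N_model[ok]
    return np.asarray(model_emission, dtype=float) * ratio
```

The key point mirrored here is that the fine spatial structure enters only through the survey-to-model column-density ratios, so the smooth model need not reproduce individual clouds.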
\subsection{CR propagation} \label{sec:crcalc} This section illustrates how the new 3D interstellar gas distributions affect the CR propagation parameters derived from a fit to the direct observations of primary and secondary CR species. The study is limited to models with diffusive reacceleration and an isotropic and homogeneous diffusion coefficient with a power-law rigidity dependence. The propagation parameters are tuned using the method described by \citet{PorterEtAl:2017}. To enable comparison with that work, the same CR source density models, SA0, SA50, and SA100, are used. The CR source density distribution is composed of two constituents, a disk and four spiral arms, where each component has the same exponential scale height of 200~pc perpendicular to the Galactic plane. The source distribution in the disk follows the radial distribution of pulsars as given by \citet{YusifovKucuk:2004}. The source distribution in the spiral arms matches the density distribution of the stellar population in the four main arms described by \citet{RobitailleEtAl:2012}. The number $nn$ in the source model name (SA$nn$) represents the percentage contribution of the sources in the spiral arms. The spiral arms in the CR source models are not identical to those in the gas density model. Because of different pitch angles, some parts of the spiral arms in the CR source models end in inter-arm regions of the gas density models and vice versa. While this configuration may not be completely physical, there are theories that predict an offset between the peak of the gas distribution and that of star formation, which should trace the CR source distribution \citep{Vallee:2014}. Having a single model both with and without an offset enables an illustration of the effects of both configurations within a single calculation. 
Each CR source model is then paired with the standard 2D interstellar gas distributions available in {\it GALPROP}{} \citep{AckermannEtAl:2012} and the new 3D gas distributions described in this paper, resulting in a total of 6 models. To identify changes associated with the choice of the gas distribution, the same standard 2D ISRF model is used for all 6 {\it GALPROP}{} models \citep{AckermannEtAl:2012}. The SA0--2D gas model is used as the reference case for comparison with the other models considered in this paper. This combination corresponds to the 2D CR source and gas distribution scenario that has been the standard approach for CR and interstellar emission modeling in the past and is the same reference model used by \citet{PorterEtAl:2017}. The main effects of varying the interstellar gas distributions are expected at low energies, where interstellar CR propagation is slow and energy losses are fast. A comparison of propagated CR spectra with the direct measurements made deep in the heliosphere is impossible without taking heliospheric effects into account. Because the details of heliospheric propagation, and hence the observed CR spectra, depend on the solar activity, this variation of CR fluxes is called heliospheric or solar modulation. The modulated spectra of CR species differ considerably from the local interstellar spectra below $\sim$20--50 GeV/nucleon, where the effect becomes stronger as energy decreases. \begin{deluxetable}{lcc}[tb!] 
\tablecolumns{3} \tablewidth{0pc} \tablecaption{CR data used for determination of the propagation parameters \label{tab:CRdata}} \tablehead{ \colhead{Instruments} & \colhead{Species} & \colhead{References\tablenotemark{a}} } \startdata AMS-02 (2011-2016) & B/C & [1] \\ AMS-02 (2011-2013) & $e^-$ & [2] \\ AMS-02 (2011-2013) & H & [3] \\ AMS-02 (2011-2013) & He & [4] \\ HEAO3-C2 (1979-1980) & B, C, O, Ne, Mg, Si & [5] \\ Voyager-1 (2012-2015) & H, He, B, C, O, Ne, Mg, Si & [6] \\ PAMELA (2006-2008) & B, C & [7] \enddata \tablenotetext{a}{[1] \citet{PRL:1171102}, [2] \citet{PRL:1131102}, [3] \citet{PRL:1141103}, [4] \citet{PRL:1151101}, [5] \citet{Engelmann:1990}, [6] \citet{CummingsEtAl:2016}, [7] \citet{AdrianiEtAl:2014}.} \end{deluxetable} \begin{deluxetable*}{@{\extracolsep{4pt}}lcccccc@{}}[th!] \tablecolumns{7} \tablewidth{0pc} \tablecaption{Final {\it GALPROP}{} model parameters \label{tab:CRparameters} } \tablehead{ & \multicolumn{3}{c}{2D gas models} & \multicolumn{3}{c}{3D gas models} \\ \cline{2-4}\cline{5-7}\noalign{\rule{0pt}{.5ex}} \colhead{Parameter} & \colhead{SA0} & \colhead{SA50} & \colhead{SA100} & \colhead{SA0} & \colhead{SA50} & \colhead{SA100} } \startdata \tablenotemark{a }$D_{0}$, $10^{28}$ cm$^2$ s$^{-1}$ & $4.37$ & $4.47$ & $4.71$ & $2.20$ & $2.28$ & $2.34$ \\ \tablenotemark{a }$\delta$ & $0.494$ & $0.508$ & $0.483$ & $0.546$ & $0.545$ & $0.549$ \\ \phantom{\tablenotemark{a }}$v_{A}$, km s$^{-1}$ & $7.64$ & $9.19$ & $7.34$ & $5.86$ & $5.26$ & $3.97$ \\ \tablenotemark{b }$\gamma_0$ & $1.47$ & $1.61$ & $1.66$ & $1.37$ & $1.51$ & $1.51$ \\ \tablenotemark{b }$\gamma_1$ & $2.366$ & $2.350$ & $2.381$ & $2.338$ & $2.345$ & $2.357$ \\ \tablenotemark{b }$\rho_1$, GV & $3.64$ & $3.92$ & $4.12$ & $3.40$ & $3.56$ & $3.33$ \\ \tablenotemark{b }$\gamma_{0,\text{H}}$ & $1.75$ & $1.77$ & $1.78$ & $1.75$ & $1.71$ & $1.79$ \\ \tablenotemark{b }$\gamma_{1,\text{H}}$ & $2.375$ & $2.359$ & $2.349$ & $2.331$ & $2.349$ & $2.322$ \\ \tablenotemark{b 
}$\gamma_{2,\text{H}}$ & $2.199$ & $2.200$ & $2.238$ & $2.203$ & $2.190$ & $2.219$ \\ \tablenotemark{b }$\rho_{1,\text{H}}$, GV & $5.99$ & $5.99$ & $5.67$ & $5.32$ & $4.81$ & $4.93$ \\ \tablenotemark{b }$\rho_{2,\text{H}}$, GV & $265$ & $225$ & $403$ & $206$ & $200$ & $206$ \\ \phantom{\tablenotemark{a }}$\Delta_{\text{He}}$ & $0.034$ & $0.034$ & $0.039$ & $0.043$ & $0.045$ & $0.035$ \\ \tablenotemark{b }$\gamma_{0,e}$ & $1.66$ & $1.67$ & $1.57$ & $1.63$ & $1.81$ & $1.74$ \\ \tablenotemark{b }$\gamma_{1,e}$ & $2.761$ & $2.753$ & $2.749$ & $2.744$ & $2.769$ & $2.734$ \\ \tablenotemark{b }$\gamma_{2,e}$ & $2.351$ & $2.327$ & $2.312$ & $2.305$ & $2.378$ & $2.303$ \\ \tablenotemark{b }$\rho_{1,e}$, GV & $5.82$ & $5.89$ & $6.14$ & $5.68$ & $5.97$ & $6.90$ \\ \tablenotemark{b }$\rho_{2,e}$, GV & $102$ & $101$ & $102$ & $100$ & $76$ & $109$ \\ \tablenotemark{c }$J_\text{H}$, $10^{-9}$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$ MeV$^{-1}$ & $4.520$ & $4.498$ & $4.610$ & $4.486$ & $4.542$ & $4.322$ \\ \tablenotemark{c }$J_e$, $10^{-11}$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$ MeV$^{-1}$ & $1.242$ & $1.252$ & $1.243$ & $1.290$ & $1.316$ & $1.231$ \\ \tablenotemark{d }$q_{0,^{4}\text{He}}/q_{0,\text{H}}\times10^{6}$ & $ 94602$ & $ 95324$ & $ 97365$ & $101800$ & $100160$ & $100630$ \\ \tablenotemark{d }$q_{0,^{12}\text{C}}/q_{0,\text{H}}\times10^{6}$ & $ 2882$ & $ 2867$ & $ 2746$ & $ 2960$ & $ 2916$ & $ 2849$ \\ \tablenotemark{d }$q_{0,^{16}\text{O}}/q_{0,\text{H}}\times10^{6}$ & $ 3780$ & $ 3873$ & $ 3645$ & $ 3944$ & $ 3950$ & $ 3804$ \\ \tablenotemark{d }$q_{0,^{20}\text{Ne}}/q_{0,\text{H}}\times10^{6}$ & $ 356$ & $ 358$ & $ 333$ & $ 379$ & $ 371$ & $ 356$ \\ \tablenotemark{d }$q_{0,^{24}\text{Mg}}/q_{0,\text{H}}\times10^{6}$ & $ 644$ & $ 654$ & $ 609$ & $ 685$ & $ 675$ & $ 657$ \\ \tablenotemark{d }$q_{0,^{28}\text{Si}}/q_{0,\text{H}}\times10^{6}$ & $ 742$ & $ 762$ & $ 718$ & $ 779$ & $ 783$ & $ 756$ \\ \tablenotemark{e }$\Phi_{\text{HEAO3-C2}}$, MV & $ 857$ & $ 849$ & $ 827$ & $ 845$ & $ 850$ & $ 
833$ \\ \tablenotemark{e }$\Phi_{\text{PAMELA}}$, MV & $ 578$ & $ 578$ & $ 572$ & $ 584$ & $ 587$ & $ 582$ \\ \tablenotemark{e }$\Phi_{\text{AMS}}$, MV & $ 638$ & $ 645$ & $ 581$ & $ 729$ & $ 768$ & $ 649$ \enddata \tablenotetext{a}{$D(\rho) \propto \beta \rho^{\delta}$ where $\rho$ is the rigidity. $D(\rho)$ is normalized to $D_0$ at 4~GV.} \tablenotetext{b}{The injection spectrum is parameterized as $q(\rho) \propto \rho^{\gamma_0}$ for $\rho < \rho_1$, $q(\rho) \propto \rho^{\gamma_1}$ for $\rho_1 < \rho < \rho_2$, and $q(r) \propto \rho^{\gamma_2}$ for $\rho > \rho_2$. The spectral shape of the injection spectrum is the same for all species except H and He.} \tablenotetext{c}{The proton and $e^-$ fluxes are normalized at the Solar location at the kinetic energy of 100~GeV.} \tablenotetext{d}{The injection spectra for isotopes are adjusted as a ratio of the proton injection spectrum at 100~GeV. The isotopes not listed here have the same value as found in \citet{JohannessonEtAl:2016}.} \tablenotetext{e}{Solar modulation is calculated using the force-field approximation. } \end{deluxetable*} CR propagation in the heliosphere is described by the \citet{1965P&SS...13....9P} equation. Spatial diffusion, convection with the solar wind, drifts, and adiabatic cooling are the main processes that influence transport of CRs to the inner heliosphere. These effects have been incorporated into realistic (time-dependent, 3D) models \citep[e.g.,][]{2003JGRA..108.1228F,2006ApJ...640.1119L,2004AnGeo..22.3729P,BoschiniEtAl:2017}. There is considerable degeneracy between the parameters of the heliospheric propagation models and those controlling the low-energy behavior of the Galactic CR propagation \citep{BoschiniEtAl:2017}. So far, the detailed analysis was made only for CR protons, helium, antiprotons, and electrons \citep{BoschiniEtAl:2017, BoschiniEtAl:2018}. 
Evaluation of the CR propagation parameters involves propagation of elements heavier than oxygen (Table~\ref{tab:CRdata}), for which the same thorough analysis is not yet available. Besides, the current work is aimed at the study of the effects of different gas distributions on the interstellar CR propagation. Therefore, the simplest available heliospheric modulation model is used, the so-called ``force-field'' approximation \citep{1968ApJ...154.1011G}. It characterizes the whole complexity of the time-dependent heliospheric modulation with a single parameter -- the ``modulation potential''. Such an approach has no predictive power, but has been widely used as a simple low-energy parameterization of the modulated spectrum. The interstellar propagation parameters are tuned using a maximum-likelihood fit employing the data sets listed in Table~\ref{tab:CRdata}. To reduce the number of free parameters in each fit, the procedure is split into two stages, similar to the analysis described in \citet{CummingsEtAl:2016}. The propagation model parameters that are fit for are listed in Table~\ref{tab:CRparameters}. There is a strong degeneracy between the halo height and the normalization of the diffusion coefficient. Even though using the radioactive-clock isotopes ($^{10}$Be, $^{26}$Al, $^{36}$Cl, $^{54}$Mn) constrains the halo size significantly, the range of possible values remains quite large \citep{JohannessonEtAl:2016}. Instead of fitting for both the diffusion coefficient and the halo size simultaneously, the halo height is fixed to 6~kpc, in good agreement with previous analyses \citep[e.g.,][]{MoskalenkoEtAl:2005,OrlandoStrong:2013,JohannessonEtAl:2016}. At the first stage, the interstellar propagation parameters are fitted together with the injection spectra and abundances of elements heavier than helium. Once the propagation parameters and the injection spectra for those elements are determined, they are held constant. 
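The force-field approximation referred to above maps a local interstellar spectrum (LIS) to a modulated one with the single potential parameter $\Phi$. A commonly used form, in kinetic energy per nucleon $T$ with $\phi=(Z/A)\Phi$ and nucleon rest energy $m$, is $J(T)=J_{\rm LIS}(T+\phi)\,T(T+2m)/[(T+\phi)(T+\phi+2m)]$. The sketch below assumes a simple power-law LIS purely for illustration; the numerical constants are illustrative.

```python
import numpy as np

M_NUC = 0.9315  # nucleon rest-mass energy [GeV]; illustrative value

def force_field(T, J_lis, phi, Z=1, A=1, m=M_NUC):
    """Force-field modulated intensity at kinetic energy per nucleon T [GeV/n].

    J_lis : callable returning the local interstellar spectrum at T [GeV/n].
    phi   : modulation potential [GV]; the energy loss per nucleon
            is phi_n = (Z/A) * phi [GeV/n].
    """
    T = np.asarray(T, dtype=float)
    phi_n = (Z / A) * phi        # adiabatic energy loss per nucleon
    T_is = T + phi_n             # interstellar kinetic energy per nucleon
    return J_lis(T_is) * T * (T + 2 * m) / (T_is * (T_is + 2 * m))
```

At $T \gg \phi$ the correction factor approaches unity, consistent with modulation being important only below $\sim$20--50 GeV/nucleon, while at low energies the modulated flux is strongly suppressed.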
The injection spectra for electrons, protons, and helium are then obtained at the second stage. To reduce the number of parameters, the injection spectrum of helium is coupled with that of protons so that the breaks are at the same rigidities, and the spectral indices of helium are smaller than those of the protons by a parameter $\Delta_{\text{He}}$ that is also derived from the fit. This is similar to the linking of the proton and helium spectra in the analysis described in \citet{JohannessonEtAl:2016}. Fourteen parameters are determined at the first stage of the procedure, while the second stage fits for fifteen parameter values. The calculations are made for a Cartesian spatial grid with dimensions $\pm 20$~kpc for the $X$ and $Y$ coordinates, with $\Delta X=\Delta Y = 0.2$~kpc, $\Delta Z = 0.1$~kpc, and a CR kinetic energy grid covering 10~MeV/nucleon to 100~TeV/nucleon with logarithmic spacing at 16 bins/decade. The span and sampling of the spatial and energy grids are chosen to enable realistic and efficient computations given the available resources\footnote{Increasing the energy grid sampling by a factor of 2 produces a change in the propagated CR intensities of at most $\sim$$2$\%. The runtime and memory consumption are increased by a proportional factor for the finer energy grid, but the results and conclusions would not be substantially altered.}. The spatial grid sub-division size allows adequate sampling of the CR and ISM density distributions. The $X,Y$ size of the grid is sufficient to ensure that CR leakage from the Galaxy is determined by the halo height rather than the extent of the $X,Y$ grid. It has been shown that there is only a weak effect on parameters determined for 2D models using 20~kpc and 30~kpc radial boundaries, even with halo heights as large as 10~kpc \citep{AckermannEtAl:2012}. 
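The kinetic-energy grid quoted above (10~MeV/nucleon to 100~TeV/nucleon, i.e., 7 decades at 16 bins/decade) is a simple geometric sequence; a minimal sketch, with function and argument names chosen here for illustration:

```python
import numpy as np

def log_energy_grid(e_min=1.0e-2, e_max=1.0e5, bins_per_decade=16):
    """Logarithmically spaced kinetic-energy grid [GeV/nucleon].

    Spans e_min to e_max inclusive; for 10 MeV/n to 100 TeV/n
    (7 decades) at 16 bins/decade this yields 113 grid points with a
    constant ratio of 10**(1/16) between neighbors.
    """
    n_decades = np.log10(e_max / e_min)
    n_points = int(round(n_decades * bins_per_decade)) + 1
    return np.logspace(np.log10(e_min), np.log10(e_max), n_points)
```

Doubling `bins_per_decade` doubles the number of grid points (and hence runtime and memory roughly proportionally), which is the trade-off noted in the footnote above.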
Table~\ref{tab:CRparameters}\footnote{The parameters for the 2D gas model are reproduced from \citet{PorterEtAl:2017}.} shows the results of the fitting procedure for the SA0, SA50, and SA100 CR source density models for both 2D and 3D gas models. The model predictions for the CR flux at the location of the Sun are very similar, being within $\sim$$5$\% of each other, as shown in Figure~\ref{fig:Spectra1}. The latter is not surprising because the models are fit to the CR data. All models generally agree with the data to better than 10\%, with deviations reaching up to 20\% for some energy ranges and elements. This level of agreement with the data is sufficient for the purpose of this paper. \begin{figure*}[t] \center{ \includegraphics[width=0.50\textwidth]{f4a}\hfill \includegraphics[width=0.50\textwidth]{f4b}\\ \includegraphics[width=0.50\textwidth]{f4c}\hfill \includegraphics[width=0.50\textwidth]{f4d}\\ \includegraphics[width=0.50\textwidth]{f4e}\hfill \includegraphics[width=0.50\textwidth]{f4f} } \caption{{\it GALPROP}{} predictions calculated using the new 3D gas distributions and CR source density models compared with CR data: SA0 (solid curve), SA50 (dotted curve), and SA100 (dashed curve). Shown are CR species: protons (top left), helium (top right), $e^-$ (center left), boron-to-carbon ratio (center right), boron (bottom left), and carbon (bottom right). The bottom panel of each figure shows the fractional residuals. } \label{fig:Spectra1} \end{figure*} \begin{figure*}[tb!] 
\center{ \includegraphics[width=0.33\textwidth]{f5a}\hfill \includegraphics[width=0.33\textwidth]{f5b}\hfill \includegraphics[width=0.33\textwidth]{f5c}\\ \includegraphics[width=0.33\textwidth]{f5d}\hfill \includegraphics[width=0.33\textwidth]{f5e}\hfill \includegraphics[width=0.33\textwidth]{f5f} } \caption{Total CR energy density in the mid-plane of the Galaxy ($Z=0$) for the SA0, SA50, and SA100 CR source distributions (left to right, respectively) with the 2D gas distribution on top and the 3D gas distribution at the bottom. The solar location is marked as a white point and the white dashed lines mark the longitude grid in steps of 30$^\circ$. The black dashed curves trace the positions of the spiral arms in the CR source distribution and the cyan dotted curves trace the spiral arms in the 3D gas distribution.} \label{fig:CRdensity} \end{figure*} Comparison of the best-fit parameters of the models employing the 2D gas distributions with those using the new 3D gas distributions shows that there are significant changes in the normalization of the diffusion coefficient $D_0$ and its rigidity dependence $\delta$. There are also corresponding changes in the Alfv\'en velocity values $v_A$. Using the 3D gas distributions results in slower diffusion and weaker diffusive reacceleration. The main reason for this change in the parameter values is the lower local gas density in the 3D gas models. The average surface density of the combined interstellar gas within a few kpc of the Sun using the new 3D distributions is nearly a factor of 2 lower than that calculated using the 2D distributions. This is the region where most of the secondary boron reaching the Earth is produced \citep{JohannessonEtAl:2016}. 
The surface density, rather than the mid-plane number density, is used for the inter-comparison because the scale height of the CR diffusion zone is significantly larger than that of the gas, and the surface density thus provides a better estimate of the total column density of gas traversed by CRs. The difference in total surface density of about a factor of 2 agrees reasonably well with the factor of 2 change in $D_0$. In turn, slower diffusion results in the weaker diffusive reacceleration that is needed to reproduce the data. The reason for this significant change in the local gas surface density between the distributions is two-fold. First, the method for determination of the gas distribution using a Student-t likelihood favors models that under-predict the data, leaving residuals that can be absorbed by additional model components. This is evident from the positive residuals seen in Figure~\ref{fig:longitudeProfiles}. Those cannot, however, explain a factor of 2 difference, because the residuals around $l \sim 90^\circ$ and $l \sim-90^\circ$ are $\sim 40$\% of the model and should therefore account for only about half of the difference. Another important difference between the gas distributions is that the older 2D distributions were derived assuming the Sun is located at 10~kpc from the GC \citep{GordonBurton:1976, BronfmanEtAl:1988}. These distributions had not been scaled to the IAU recommended value of $R_\odot=8.5$~kpc that is used for the {\it GALPROP}{} calculations. Convolving the old 2D gas distributions with {\it GALGAS}{} using the updated rotation curve and the updated solar location results in a significant over-prediction of the data. The 2D gas distributions for the newly released {\it GALPROP}{} version 56 have been re-scaled to the correct Sun--GC distance of $R_\odot=8.5$~kpc. The scaled 2D gas distributions provide good agreement with the H\,{\sc i}\ and CO data. 
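The rescaling mentioned above can be sketched, under the simplifying assumption that Galactocentric radii scale linearly with the assumed Sun--GC distance; the function below is a hypothetical illustration of that geometric mapping, not the actual rescaling applied to the released distributions.

```python
def rescale_radius(R_kpc, r_sun_old=10.0, r_sun_new=8.5):
    """Map a Galactocentric radius tabulated assuming r_sun_old [kpc]
    to the corresponding radius when the Sun is placed at r_sun_new.

    Purely kinematic rescaling: distances derived from the rotation
    curve scale with the assumed Sun--GC distance, so a feature at
    radius R in the old frame sits at R * (r_sun_new / r_sun_old)
    in the new one.
    """
    return R_kpc * (r_sun_new / r_sun_old)
```

Under this assumption the solar circle itself maps from 10~kpc to 8.5~kpc, and using an unscaled distribution with $R_\odot=8.5$~kpc effectively samples gas at the wrong radii, which is the error discussed in the text.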
Using the rescaled 2D gas distributions in {\it GALPROP}\ in a fit to the CR data results in $D_0 = 3.16\times10^{28}$~cm$^{2}$s$^{-1}$, which is about halfway between the results for the old 2D distributions and those obtained with the new 3D distributions. For compatibility with previous work and, in particular, the work by \citet{PorterEtAl:2017}, the calculations here do not use the corrected 2D gas distributions. {\it GALPROP}{} is not the only propagation code that uses these incorrectly scaled distributions; other codes that have incorporated the {\it GALPROP}{} analytic gas code and use it with $R_\odot=8.5$~kpc are also susceptible to the error. The change in propagation parameters between the different CR source density models is small for both 2D and 3D gas distributions, but statistically significant. There is no obvious trend for most of the parameters; that is, the values for SA50 are not always between those for SA0 and SA100. Note that the values of $v_A$ and $\delta$ determined here for the {\it GALPROP}{} 2D gas distributions differ from those obtained by \citet{JohannessonEtAl:2016} and \citet{CummingsEtAl:2016} because of the datasets employed. The larger value of $\delta$ comes from the reduced Alfv\'{e}n speed obtained from the fits: higher Alfv\'{e}n speeds result in a larger bump around $\sim$$1$ GeV in the B/C ratio than that required by the AMS-02 data used in this paper. Figure~\ref{fig:CRdensity} shows the total energy density of CRs in the Galactic plane for the 6 models considered in this paper. It illustrates how the gas and CR source distributions affect the final distribution of CRs in the Galaxy after propagation. The spiral arm structure of the CR source distribution is clearly visible. The visible width of the spiral arms is considerably larger than that of the input CR sources because of diffusion. 
The change in diffusion parameters between the two gas distributions creates a visible and significant effect: the spiral arm features are distinctly sharper with the new gas distributions and smaller diffusion coefficient. The imprint of the gas distribution can also be seen for the SA0 CR source distribution, where the enhanced density in the spiral arms causes faster cooling. \begin{figure*}[tb!] \centering \includegraphics[width=0.33\textwidth]{f6a}\hfill \includegraphics[width=0.33\textwidth]{f6b}\hfill \includegraphics[width=0.33\textwidth]{f6c}\\ \includegraphics[width=0.33\textwidth]{f6d}\hfill \includegraphics[width=0.33\textwidth]{f6e}\hfill \includegraphics[width=0.33\textwidth]{f6f} \caption{Top row: CR energy density of secondary positrons in the mid-plane of the Galaxy for the SA0-2D gas (left), SA0-3D gas (center), and SA100-3D gas (right) models. Bottom row: B/C ratio at 10~GeV/nuc in the mid-plane of the Galaxy for the same models. The lines and curves are the same as in Figure~\ref{fig:CRdensity}. The three labeled red dots with the white border in the bottom right panel mark locations for the CR spectra shown in Figure~\ref{fig:secondaryRatios}. } \label{fig:CRsecondaries} \end{figure*} \begin{figure*}[tb!] \centering \includegraphics[width=0.33\textwidth]{f7a}\hfill \includegraphics[width=0.33\textwidth]{f7b}\hfill \includegraphics[width=0.33\textwidth]{f7c}\\ \includegraphics[width=0.33\textwidth]{f7d}\hfill \includegraphics[width=0.33\textwidth]{f7e}\hfill \includegraphics[width=0.33\textwidth]{f7f} \caption{Top row: B/C ratio for the SA0-2D (black solid curve), SA100-2D (blue dotted curve), SA0-3D (red dashed curve), and SA100-3D (green dash-dotted curve) models at the positions marked by the red dots with the white border in Figure~\ref{fig:CRsecondaries}. Bottom row: positron spectrum plotted at those same locations for the same models. 
The red label in each panel corresponds to the appropriate dot location.} \label{fig:secondaryRatios} \end{figure*} \begin{figure*}[tb!] \center{ \includegraphics[width=0.32\textwidth]{f8a}\hfill \includegraphics[width=0.32\textwidth]{f8b}\hfill \includegraphics[width=0.32\textwidth]{f8c}\\ \includegraphics[width=0.32\textwidth]{f8d}\hfill \includegraphics[width=0.32\textwidth]{f8e}\hfill \includegraphics[width=0.32\textwidth]{f8f} } \caption{Top row: total $\gamma$-ray{} intensity ($\pi^0$-decay, Bremsstrahlung, and IC) at 30~MeV, 1.0~GeV, and 100~GeV energies (left to right, respectively) for the SA0-2D gas reference case. Bottom row: fractional residuals from a comparison with SA0-3D gas \big(\big[SA0-3D -- SA0-2D\big]/SA0-2D\big) at the same energies. The maps are in Galactic coordinates with $(l,b)=(0^\circ,0^\circ)$ at the center with $l$ increasing to the left. The longitude meridians and latitude parallels have $45^\circ$ spacing.} \label{fig:0_armsFraction} \end{figure*} The total energy density of CRs is dominated by protons with energies of a few GeV that are mostly primary in origin. The gas distribution only affects the primary CRs by changing the cooling and spallation rates and, therefore, has a minor impact on the total energy density. Secondary CRs are produced in interactions between primary CRs and the interstellar gas, so the gas distribution has a much larger influence on CR secondaries and their spatial densities. This is illustrated in Figure~\ref{fig:CRsecondaries}, which shows the energy density of secondary positrons in the plane for selected models. The spiral arm distribution of the gas is clearly visible in the energy density distribution of the secondary particles, while the primary CR source distribution has only a relatively minor effect (Figure~\ref{fig:CRdensity}). 
The different drivers for the spatial structure of the primary and secondary CRs result in a non-trivial dependence of derived quantities, such as the secondary-to-primary ratios, if the CR source and gas spatial densities are not aligned. This is illustrated by the B/C ratio shown in Figure~\ref{fig:CRsecondaries}. There is a clear reduction in the ratio along the spiral arm pattern of the CR source distribution (black dashed curves), while the ratio is seen to be larger along the spiral arm pattern of the gas distribution (cyan dotted curves). For the cases where the spiral arm patterns of the two distributions align, the effects of each almost cancel out. To further illustrate this point, the energy dependences of the B/C ratio at three selected locations in the Galaxy are shown in Figure~\ref{fig:secondaryRatios}. The locations are shown in the bottom right panel of Figure~\ref{fig:CRsecondaries} and are chosen to be at the same Galactocentric distance $R_\odot$. The locations align with a spiral arm in the gas distributions (location L2), a spiral arm in the CR source distribution (location L3), or both (location L1). The exact CR and gas distributions can, therefore, have a large effect on the determination of the parameters of propagation models when calculating secondary production and the B/C ratio. The effect on pure secondaries, such as the secondary positrons shown in the bottom row of Figure~\ref{fig:secondaryRatios}, is dominated by the gas distributions because the distribution of primary sources has only a small effect. \begin{figure*}[tb!] 
\center{ \includegraphics[width=0.32\textwidth]{f9a}\hfill \includegraphics[width=0.32\textwidth]{f9b}\hfill \includegraphics[width=0.32\textwidth]{f9c}\\ \includegraphics[width=0.32\textwidth]{f9d}\hfill \includegraphics[width=0.32\textwidth]{f9e}\hfill \includegraphics[width=0.32\textwidth]{f9f}\\ \includegraphics[width=0.32\textwidth]{f9g}\hfill \includegraphics[width=0.32\textwidth]{f9h}\hfill \includegraphics[width=0.32\textwidth]{f9i}\\ \includegraphics[width=0.32\textwidth]{f9j}\hfill \includegraphics[width=0.32\textwidth]{f9k}\hfill \includegraphics[width=0.32\textwidth]{f9l} } \caption{Fractional residuals for the total $\gamma$-ray{} intensity ($\pi^0$-decay, Bremsstrahlung, and IC) at 30~MeV, 1.0~GeV, and 100~GeV energies (left to right, respectively) for SA50-2D gas, SA100-2D gas, SA50-3D gas, and SA100-3D gas (top to bottom, respectively) compared to SA0-2D gas reference model. The maps are in Galactic coordinates with $(l,b)=(0^\circ,0^\circ)$ at the center with $l$ increasing to the left. The longitude meridians and latitude parallels have $45^\circ$ spacing.} \label{fig:armsFraction} \end{figure*} \subsection{$\gamma$-ray{} maps} High-energy interstellar emissions are calculated using {\it GALPROP}{} for the SA0, SA50, and SA100 source density models (Section~\ref{sec:crcalc}), and the standard 2D gas \citep{AckermannEtAl:2012} and the new 3D gas model distributions. The standard 2D ISRF model \citep{AckermannEtAl:2012} and the annular gas maps from \citet{AjelloEtAl:2016} are used for all calculations. Because of the column density corrections described in Section~\ref{sec:GALPROP}, replacing the 2D gas distributions in {\it GALPROP}{} with the new 3D gas distributions will only affect the $\gamma$-ray{} skymaps through the change they have on the CR flux. Calculations of the interstellar emission are made with the same spatial and kinetic energy grids that were used for tuning CR propagation parameters (Section~\ref{sec:crcalc}). 
The $\gamma$-ray{} intensity maps are calculated for the energy range 30~MeV -- 300~GeV using a logarithmic energy grid with 10 bins/decade spacing. Production of higher energy $\gamma$-ray{s} involves interactions of CRs with energies above several TeV, where the assumption of steady-state CR injection is less valid due to the stochastic nature of CR sources and the smaller number of sources capable of accelerating particles to very high energies \citep{MoskalenkoEtAl:2001,BernardEtAl:2013}. Calculations of pion (and heavier meson) production and decay are done using the parameterization by \citet{KamaeEtAl:2006}. Secondary $e^\pm$ produced in hadronic interactions are combined with primary electrons to calculate IC scattering. Their contribution to the interstellar emissions is important for energies $\lesssim50$~MeV \citep{PorterEtAl:2008,BouchetEtAl:2011}. All calculations of the IC emission use the anisotropic scattering cross section \citep{2000ApJ...528..357M} that accounts for the full directional intensity distribution of the ISRF model. The effects the two different gas distributions have on the total predicted $\gamma$-ray{} intensity are explored first. The differences between the model calculations in this paper are of the order of a few tens of percent, which is much smaller than the variations in the calculated sky intensity, which span a few orders of magnitude, e.g., between low- and high-latitude regions. Such differences between models are most usefully illustrated as relative differences compared to the reference case: SA0-2D~gas. Modifications of the gas distributions used in {\it GALPROP}{} affect the propagation parameters and hence all three components of the interstellar emission: $\pi^0$-decay, Bremsstrahlung, and IC. Figure~\ref{fig:0_armsFraction} shows a comparison between the SA0-2D and SA0-3D model intensities. 
The top panels show the total intensity calculated in the SA0-2D model at 30~MeV, 1~GeV, and 100~GeV (left to right), while the bottom panels show the fractional difference between the two models, \big(SA0-3D -- SA0-2D\big)/SA0-2D, for the same energies. The intensity at 30~MeV is dominated by Bremsstrahlung and IC emission from low-energy electrons with a large contribution from secondary electrons and positrons. At 1~GeV the intensity is dominated by the $\pi^0$-decay emission, with particles of $\sim$10~GeV contributing the most. The intensity at 100~GeV is still dominated by $\pi^0$-decay emission over most of the sky, with IC emission contributing nearly equally in the inner Galaxy. The most visible feature of the bottom ratio maps in Figure~\ref{fig:0_armsFraction} is the positive residuals in the 30~MeV map, in particular for the local molecular clouds at low latitudes. This extra emission is almost entirely caused by increased Bremsstrahlung emission, even though the models are tuned to the same electron data (Section~\ref{sec:crcalc}). Because of the smaller number density of the gas in the 3D model and the corresponding change in the propagation parameters, the local interstellar spectrum of electrons appears softer at low energies than for the case of the 2D gas model. Tuning to the same AMS-02 data thus requires a larger modulation potential. The softer interstellar spectrum of electrons leads to enhanced Bremsstrahlung and IC emission compared to the reference case. The IC emission is, however, not as strongly affected because the average energy of the electrons generating most of the emission in this energy range is higher than for the electrons producing Bremsstrahlung. Higher Bremsstrahlung $\gamma$-ray{} production also enhances the emission from local gas that is distributed toward high latitudes, but this is not visible in the ratio of the total intensity maps because IC emission dominates there. 
The effect is greatly reduced for GeV $\gamma$-ray{s}, because Bremsstrahlung becomes subdominant at these energies. Other effects visible at all energies are due to the difference in propagation parameters for the CR source and gas density combinations and the different spatial distribution of secondary CR particles in the Galaxy. The flux of secondary CRs in the outer Galaxy is smaller in the model using the 3D gas distributions compared to the reference case. The slower diffusion, however, results in more primary CR electrons in the outer Galaxy and fewer in the inner Galaxy compared to the reference case. This causes the bluish regions towards the inner Galaxy at intermediate latitudes where IC dominates over Bremsstrahlung and $\pi^0$-decay emission. The enhancement of the IC emission in the outer Galaxy is not visible in these total ratio maps because it is subdominant, particularly at 1~GeV. The decreased emission in the outer Galaxy, which appears as a sharp drop in the fractional residual maps at the annular gas map boundaries in the outer Galaxy, is a result of less Bremsstrahlung and $\pi^0$-decay emission in the outer Galaxy. Emissions from local clouds dominate the intensity in the outer Galactic plane, which explains why the ratio is slightly above 1 for the local clouds. The spiral arm structure of the 3D gas distributions is not visible in the fractional ratio maps because the arms affect the total $\gamma$-ray{} intensity by only a few percent, which is smaller than the effect caused by the change in propagation. Even though the effects of the spiral arm features are small in the total intensity, the magnitude of the difference can be up to 15\% for individual Galactocentric annular maps (not shown). Comparisons of models with CR sources in the spiral arms, SA50 and SA100, are shown in Figure~\ref{fig:armsFraction} as fractional residuals against the reference model. Shown are residuals for both the 2D and 3D gas distributions. 
The spiral arm contribution in the CR source distribution falls off more quickly with Galactic radius than the disk contribution. This causes reduced emission in the outer Galaxy for models with a higher fraction of sources in the spiral arms. The same effect also causes an increased flux of CRs in the inner Galaxy that produces an increased intensity towards the inner Galaxy, but the lack of sources at $R < 3$~kpc in the spiral arm component results in a reduced intensity near the GC. This combination produces the doughnut-like shape in the residuals towards the inner Galaxy. The effect is enhanced for the CR electrons because they lose energy more quickly than nuclei and, therefore, do not travel very far from their sources. The enhancements around the spiral arm tangents are, therefore, mostly visible in the Bremsstrahlung and IC emission components. These can be seen in the 30~MeV and 100~GeV fractional residual maps, where one or both of these components are bright enough. Correspondingly, the tangents are least visible in the fractional residuals at 1~GeV where the $\pi^0$-decay component dominates. The increased brightness of the 30~MeV map for the SA50-3D gas model is due to the larger value needed for the AMS-02 modulation potential to match the CR data (see Table~\ref{tab:CRparameters}). As discussed above, there is considerable degeneracy between the determination of the heliospheric modulation and the low-energy CR intensities. Combined with non-linear CR propagation models and numerical uncertainties, this can lead to the minimizer having difficulties in finding the true best-fit parameters when fitting to CR data. This may explain why the values for the modulation potential differ so significantly between the three models using the 3D gas distribution. {\em Fermi}--LAT\ data can be used to further constrain the spectrum of CR electrons, but such analysis requires modeling outside the scope of this paper and is deferred to future work. 
The effects of variations of the CR source distribution on the $\gamma$-ray{} intensity maps are fairly similar in cases of 3D and 2D gas distributions. There is a reduction in the intensity in the outer Galaxy and toward the GC while the intensity in the inner Galaxy increases. The details are, however, somewhat different. The slower diffusion combined with the same electron cooling rate leads to more CR electrons near their sources and fewer far from the plane. The doughnut-like excess towards the inner Galaxy is thus more asymmetric with higher intensity near the plane. This is most visible at 100~GeV, where the increased intensity for latitudes $|b| \gtrsim 30^\circ$ is suppressed compared to the same source models with the 2D gas distribution. This effect also enhances and sharpens the spiral arm tangent features for both IC and Bremsstrahlung. The CR nuclei are not as strongly affected because the change in the gas number density and hence the cooling and fragmentation rates compensate to some extent for the change in the diffusion coefficient. \section{Discussion} The 3D gas density models derived in this work employ only a few spatial distribution components, which are able to account for the main features of the interstellar gas with relatively few parameters. This enables the model fitting to be made within a reasonable time frame. Among the elements held constant for the tuning of the gas distribution models is the gas rotation field, which defines the conversion between distance and velocity. To test the effect of varying the rotation field, a fit to the gas data was performed using a model with the rotation curve of \citet{Clemens:1985}, but other components were kept the same. Using this rotation curve resulted in an overall worse fit to the data, as determined by the log-likelihood. 
Considerably more overlap between spiral arms was evident, and the features in the longitude--velocity diagram of the model did not match those in the data as well as the best-fit model determined above. The smooth disk also contained a larger fraction of the total mass of the model. The assumed velocity field is, therefore, very important for accurate determination of a 3D spatial density model for the interstellar gas. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{f10} \caption{The H\,{\sc i}\ velocity spectrum in the direction of $(l,b)=(-50^\circ,0^\circ)$. Data are shown as black points with error bars smaller than the point size. The blue curve shows the model calculations. The residuals are shown in the bottom panel.} \label{fig:velocitySpectra50} \end{figure} Inclusion of a detailed description of non-cylindrical streaming motions in the models would add another level of complexity to the analysis and was not attempted in the present work. However, to test their effect a simplified sinusoidally varying radial component \begin{equation} v_r(\Theta) = v_{r,0} \sin(\Theta - \theta_{r,0}) \label{eq:vr} \end{equation} \noindent was added to the cylindrical rotation with $v_{r,0}$ and $\theta_{r,0}$ as free parameters. The same model fitting procedure was then followed as for the H\,{\sc i}\ and CO models described above with all components included. The likelihood improved with this additional component, but not by enough to warrant its inclusion into the final models. The improvement in the likelihood came almost entirely from a somewhat closer match to the data for the H\,{\sc i}\ model. The total mass for the atomic and molecular gas density models increases with this modification, which is expected for an improved fit. The total gas mass in the H\,{\sc i}\ model increases to $6.0\times 10^9$~M$_\odot$ with almost all of the increase due to the spiral arms. 
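The streaming-motion term of Eq.~(\ref{eq:vr}) can be sketched directly; the amplitude below uses the $\sim$28~km~s$^{-1}$ best-fit value quoted in the text, while the phase $\theta_{r,0}$ is an illustrative placeholder, not the fitted value:

```python
import numpy as np

def v_radial(theta, v_r0=28.0, theta_r0=0.0):
    """Sinusoidal radial streaming velocity, Eq. (vr):
    v_r(theta) = v_r0 * sin(theta - theta_r0), in km/s.
    theta and theta_r0 are in radians; theta_r0 = 0 is an
    illustrative placeholder, not the fitted value."""
    return v_r0 * np.sin(theta - theta_r0)

# the correction peaks at +/- v_r0 a quarter turn from theta_r0
v_peak = v_radial(np.pi / 2)
```

This term is added on top of the cylindrical rotation when converting between distance and velocity along each line of sight.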
The radial number density distributions of the disk and arms also become flatter, being nearly constant from 2 kpc to 15 kpc from the GC and varying only by $\sim$25\% over the entire range. The gas mass in the CO model slightly increases to $0.74\times 10^9$~M$_\odot$, where most of the increase is due to the bulge/bar component. The radial density distribution of the CO disk/arm component is also flatter, having about 20\% of the disk mass in the outer Galaxy compared to 5\% without the velocity modifications. The morphology of the spiral arms also changes: the pitch angles for all arms fall between $12.0^\circ$ and $13.5^\circ$ with all starting points in the range 3.1--3.6 kpc. The spiral arms are thus more symmetric in their shapes. The arm densities for the CO model are still very asymmetric with both arm 1 and arm 2 set to zero by the fit to the CO data. In particular, arm 2, which has been identified with Sagittarius and Carina in the model, still does not match the features in the CO data. The flaring parameter, $r_z$, increases and is now in better agreement with that determined by \citet{KalberlaDedes:2008}. The warp changes slightly, with the warp-related parameters increasing more slowly with radius while its azimuthal dependence is unchanged. The resulting velocity field is broadly consistent with that found by \citet{TchernyshyovPeek:2017} and shown in their Figure~5. There are negative velocity corrections close to the GC and in the outer Galaxy for longitudes larger than $180^\circ$, while the corrections are positive in the outer Galaxy for longitudes smaller than $180^\circ$. The only large-scale feature they found that is not reproduced is the positive peculiar velocity around longitude $\sim45^\circ$. The final value for $v_{r,0}$ was around 28~km~s$^{-1}$, so the magnitude of the variations is also similar to that obtained by \citet{TchernyshyovPeek:2017}. 
This shows that streaming motions are important to consider, but also that these simple modifications to the velocity field in combination with the geometrical model derived in this paper do not qualitatively change the final results. In particular, the effect on the CR propagation parameters is only minor. The neutral hydrogen in the interstellar gas is not uniform. It is composed of at least three different phases that can be separated by their temperature: the stable cold phase, the stable warm phase, and the unstable warm phase \citep{VerschuurMagnani:1994}. Observations have shown that each of these phases contains about one third of the total mass of the neutral hydrogen \citep{HeilesTroland:2003}. The different temperatures of these phases result in widely different broadening of the line emission profiles, ranging from a few km~s$^{-1}$ for the cold phase up to a few tens of km~s$^{-1}$ for the warm stable phase. The average density of the gas in these phases also differs, with the cold gas generally being about an order of magnitude denser than the warm gas. The single temperature analysis employed in this work uses the optically thin approximation that is appropriate only for the warm neutral medium and cannot be expected to capture all of the density structure of the neutral hydrogen. In particular, the cold component should exhibit smaller average line broadening than employed in this work. The spiral arm components are the densest parts of the H\,{\sc i}{} model and are also likely to have a larger fraction of the cold medium than the more diffuse disk component; their line profiles should therefore be narrower than the disk line profiles. Colder gas is also more likely to be optically thick. The gas number density in the structures consisting of the cold phase is thus under-predicted in this work because of both the model's inability to reproduce the observed narrow emission lines and the optically thin assumption, which over-predicts the line emissivity per hydrogen atom. 
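The quoted range of line widths follows from thermal Doppler broadening; a sketch assuming representative (not fitted) phase temperatures of $\sim$100~K for the cold and $\sim$8000~K for the warm neutral medium:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735575e-27  # mass of a hydrogen atom, kg

def thermal_fwhm_kms(T):
    """Thermal Doppler FWHM of the 21-cm line for H I at temperature T (K)."""
    sigma = math.sqrt(K_B * T / M_H)  # 1-D velocity dispersion, m/s
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma / 1e3  # km/s

# assumed representative temperatures, for illustration only
fwhm_cold = thermal_fwhm_kms(100.0)   # a few km/s
fwhm_warm = thermal_fwhm_kms(8000.0)  # a few tens of km/s
```

These values (roughly 2 and 19~km~s$^{-1}$) reproduce the "few" to "few tens" of km~s$^{-1}$ range cited above; turbulent broadening, not included here, widens the observed profiles further.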
An improved treatment that accounts for the multi-component neutral medium is necessary to simultaneously account for the cold and warm phases. The results of the current paper should be considered as providing an approximate lower bound to the true gas number density in the Galaxy. Even though the models developed in this paper generally under-predict the data, there are a few exceptions. An example can be found in the H\,{\sc i}\ longitude profiles shown in Figure~\ref{fig:longitudeProfiles}. The model over-prediction toward the GC can be seen, even though the number density in the H\,{\sc i}{} model drops to nearly zero at the GC (see Table~\ref{tab:diskParameters} and Figure~\ref{fig:gasMaps}). The over-prediction is likely a consequence of using the optically thin approximation in Eq.~(\ref{eq:HItransport}). To test this hypothesis, the analysis was redone assuming a constant $T_S=150$~K throughout the Galaxy. This results in a lower brightness temperature toward the inner Galaxy despite the gas number density being non-zero and the total H\,{\sc i}{} mass being $\sim10$\% higher than determined using the optically thin assumption. The reduced brightness is the result of absorption from foreground gas that has higher opacity in the model with lower $T_S$. The likelihood of the model with the lower $T_S$ value is, however, significantly lower. This simple modification may improve the agreement with the data toward the GC, but other regions are then not fitted as well. The optically thin assumption may also affect the model-data agreement in other regions. In particular, near the Crux-Centaurus arm tangent at $l \approx -50^\circ$, the model significantly over-predicts the data. This is shown in the velocity spectrum for the LOS towards $(l,b)=(-50^\circ,0^\circ)$ in Figure~\ref{fig:velocitySpectra50}. 
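The effect of a finite spin temperature can be illustrated with the standard isothermal-slab relations (a sketch of the generic opacity correction and the standard 21-cm column density conversion, not of Eq.~(\ref{eq:HItransport}) itself):

```python
import math

# standard 21-cm conversion factor, cm^-2 per (K km/s)
C_HI = 1.823e18

def brightness_temp(tau, T_s):
    """Brightness temperature of an isothermal slab (no background source)."""
    return T_s * (1.0 - math.exp(-tau))

def column_density_thin(T_b, dv_kms):
    """Optically thin column density for one channel of width dv_kms."""
    return C_HI * T_b * dv_kms

def column_density_corrected(T_b, T_s, dv_kms):
    """Opacity-corrected column density, tau = -ln(1 - T_b/T_s).
    Valid only for T_b < T_s."""
    tau = -math.log(1.0 - T_b / T_s)
    return C_HI * T_s * tau * dv_kms

# illustration: T_s = 150 K and tau = 1, so the thin estimate
# recovers only (1 - e^-1) ~ 63% of the true column
T_b = brightness_temp(1.0, 150.0)
ratio = column_density_thin(T_b, 1.0) / column_density_corrected(T_b, 150.0, 1.0)
```

The sketch makes the direction of the bias explicit: wherever the gas is not optically thin, the thin approximation under-predicts the column per unit brightness, consistent with the lower-bound interpretation above.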
Using $T_S \sim 110$ K, the peak of the $T_S$ distribution estimated by \citet{StrasserTaylor:2004}, would reduce the model prediction to agree well with the data in that direction. However, the assumption of a single value for $T_S$ over the entire Galaxy would be unlikely to result in a global improvement in the model-data agreement because of the non-linearity of the optical depth correction, Eq.~(\ref{eq:HItransport}). Instead, modeling $T_S$ variations throughout the Galaxy, taking into account observations of H\,{\sc i}{} in absorption where possible, may help. The modifications of the $\gamma$-ray{} maps due to the implementation of the new 3D gas distributions can be almost entirely attributed to changes in the calculated CR flux. This is because the final maps are scaled with column densities determined from the line emission surveys for each LOS. Because of large uncertainties in the properties of the CR source distributions, these changes may be compensated for, at least partially, by variations in the CR source distribution. This work has shown that changing the CR source distribution has a larger effect on the $\gamma$-ray{} maps than changing the gas distributions used by the propagation codes. The correlation between the gas distribution and the CR source distribution does not mean, however, that using a more realistic gas distribution is unnecessary. On the contrary, using the best available gas distribution is vital when attempting to use $\gamma$-ray{} observations to constrain the properties of CR sources and propagation. The most popular method for determining the CR source distribution from $\gamma$-ray{} data is to use the annular maps created from the line emission surveys and assume the CRs illuminate the gas uniformly. 
This is a useful first approximation, but the calculations made in this paper show that implementation of the 3D distributions of CR sources and components of the interstellar medium can result in variations of tens of percent within a single annular map, which is far greater than the statistical uncertainty of current data (less than one percent). Also, the annular maps may provide excellent resolution and detail on the sky, but they suffer from poor distance resolution and near-far distance ambiguities. The 3D gas models are, therefore, necessary to accurately weight the CR flux within the region covered by the annular map. This is even more important if 3D source density distributions are used. The SA0--2D model is based on the same assumption as the $^{\rm S}$L$^{\rm Z}6^{\rm R}20^{\rm T}150^{\rm C}5$ model of \citet{AckermannEtAl:2012} and the Pulsars model of \citet{AjelloEtAl:2016}. It is interesting to compare the fractional residual maps in Figure~\ref{fig:0_armsFraction} and Figure~\ref{fig:armsFraction} to the fractional residual maps presented in these two earlier studies. In Figure~7 of \citet{AckermannEtAl:2012} there is a general trend of negative residuals for the latitude range $5^\circ < |b| < 15^\circ$, which corresponds to the reduction in IC emission at these latitudes when using the 3D gas distribution. This is also in agreement with the scaling factor for the ISRF in that work being less than 1 in their inner Galaxy region. There is also an indication of asymmetry in the residuals along the plane, with positive residuals at $l\sim45^\circ$ and negative residuals at $l\sim-45^\circ$. A similar pattern is also seen in the residuals in Figure~2 of \citet{AjelloEtAl:2016}. While none of the models in this work show this exact behavior, it is not difficult to imagine modifications to the CR source distribution that could naturally account for such model asymmetries. 
\section{Summary} New 3D models for the large-scale distributions of atomic and molecular gas in the Galaxy have been developed. They are based on a combination of a limited number of geometric components with smoothly varying number density that enables the description of the observed large-scale structure: the warped disk, the central bulge/bar, and the four major spiral arms. The parameters of the model components have been tuned to match the LAB 21-cm line emission survey \citep{KalberlaEtAl:2005} and the CfA composite CO survey \citep{DameEtAl:2001}. The best-fit parameters and resulting models agree very well with previous work, but this is the first time they have been presented together within a single consistent framework. To study the effects of variations in the gas and source distributions on the CR propagation parameters, the new models have been incorporated into the {\it GALPROP}{} CR propagation code. The new gas density models significantly affect the best-fit values of the CR propagation parameters determined from the fits to local CR data. The parameter values depend on both the CR source density distribution and the gas density models and will have to be re-evaluated as better models are constructed. The effects that three different CR source density models with different injection rates in the smooth disk and spiral arm components have on the $\gamma$-ray{} intensity skymaps have also been investigated. The combined 3D CR source and gas density models produce non-trivial features in the large-scale $\gamma$-ray{} residual sky maps that can be compared with those obtained from the prior analysis of the interstellar $\gamma$-ray{} emissions observed by the {\em Fermi}--LAT. Elements of the 3D models provide an explanation for residual features previously obtained from analysis of $\gamma$-ray{} data using the 2D axisymmetric models. \acknowledgements {\it GALPROP}{} development is partially funded via NASA grant NNX17AB48G. 
Some of the results in this paper have been derived using the HEALPix~\citep{GorskiEtAl:2005} package.
\section{Introduction} \label{sec:intro} Theory and observations both suggest that there are fewer baryons detected in the local universe than predicted. Observations of the cosmic microwave background \citep[i.e.][]{planck2015} and Big Bang nucleosynthesis (BBN) models \citep{kaplinghat} predict that baryons comprise approximately $5\%$ of the total mass budget in the Universe. In the local universe, the known baryon content falls short by about a factor of two \citep{fuku,cenost,bregman,sinha}. This discrepancy is known as the ``missing baryons problem.'' It is theorized \citep[e.g.][]{cenost,dave} that the bulk of these missing baryons may be in the form of a diffuse gas that traces the filaments of the cosmic web, known as the warm-hot intergalactic medium (WHIM). Simulations predict WHIM temperatures of $\log{T} \simeq 5 - 7$~K and baryonic densities of $n_b \simeq 10^{-6} - 10^{-5}$~cm$^{-3}$ \citep[see][]{bregman}, making this medium very difficult to observe directly with existing observatories. Evidence has been found for the WHIM in the form of absorption lines in the soft X-ray spectra of high redshift objects \citep[e.g.][]{zappacosta}. However, observations via absorption lines have been unable to constrain the amount of baryons present or to trace large-scale filamentary structure. There has yet to be a high-significance observation of the WHIM in large-scale filaments \citep[i.e.][]{kull,fang}, aside from a handful of possible detections of the denser part of the WHIM at the outskirts of galaxy clusters \citep[][]{wang,werner,eckert2015,bulbul}. More recently, \cite{graaff} claimed a $5.1\sigma$ detection of WHIM filaments using stacked Sunyaev-Zel'dovich measurements. Galaxy clusters are found at the intersections of dark matter filaments \citep[i.e.][]{gonzalez}, which makes them excellent probes of the large-scale distribution of the WHIM and ideal targets for studying large-scale and intercluster filaments. 
The WHIM may have an impact on the intracluster medium (ICM) of galaxy clusters, particularly where the ICM in the outskirts of the clusters is expected to interface with the WHIM in large-scale filaments. Entropy profiles of the ICM are generally found to lie below what one would expect based on pure gravitational collapse models \citep{kaiser,voit} near a cluster's virial radius \citep[i.e.][]{edge,david,allen,arnaud,walker2013,urban}. The interaction between the ICM and the WHIM may trigger thermodynamic processes that cause departures from the expected hydrostatic equilibrium \citep[i.e.][]{ichikawa}. One such process that could explain the observed entropy flattening is unresolved cool clumps of infalling gas at large cluster radii \citep[][]{simionescu,tchernin}, but the clumping factors required by observations are often larger than what is predicted by simulations \citep[][]{walker2013,urban}. Other proposed mechanisms for observed entropy flattening include: accretion shocks that weaken as the cluster grows \citep[][]{lapi,fusco}, non-thermal pressure support from bulk motions, turbulence or cosmic-rays \citep[][]{lau,vazza,battaglia}, and electron-ion non-equilibrium \citep[][]{fox,wong2009,hoshino,avestruz}. All of these mechanisms are expected to correlate with large-scale structure filaments. Thus, entropy flattening may indicate regions where the outskirts of clusters interface with WHIM filaments. \begin{table*}[t] \centering \caption{Physical properties for the objects studied in this work. 
} \label{tab:objectinfo} \begin{tabular}{@{}ccccccc@{}} \toprule Object & RA & DEC & $z$ & $T_X$ {[}keV{]} & $r_{500}$ {[}Mpc{]} & $M_{500}$ {[}M$_\odot${]} \\ \midrule A3391 & $06^{\rm{h}}26^{\rm{m}}22^{\rm{s}}.8$ & $-53^{\rm{d}}41^{\rm{m}}44^{\rm{s}}$ & 0.0551 & 5.39 & 0.90 & $2.16 \times 10^{14}$ \\ A3395 & $06^{\rm{h}}27^{\rm{m}}14^{\rm{s}}.4$ & $-54^{\rm{d}}28^{\rm{m}}12^{\rm{s}}$ & 0.0506 & 5.10 & 0.93 & $2.4 \times 10^{14}$ \\ ESO-161 & $06^{\rm{h}}26^{\rm{m}}05^{\rm{s}}.2$ & $-54^{\rm{d}}02^{\rm{m}}04^{\rm{s}}$ & 0.0520 & 1.09 & 0.51 & $2.8 \times 10^{13}$ \\ \bottomrule \end{tabular} \end{table*} It is worth mentioning that there are exceptions to this entropy flattening trend. For example, \cite{bulbul} found that the entropy profile of Abell 1750 is consistent with a self-similar appearance near the virial radius, and argued that lower mass systems are less likely to exhibit entropy flattening. Subsequently, \cite{tholken} independently made the same suggestion that low-mass systems are less likely to show evidence for flattened entropy profiles. In addition, this view is supported by observations of the low-mass fossil cluster RX J1159+5531 \citep{su}, which appears to adhere to self-similarity azimuthally. More relaxed clusters seem to follow self-similarity more closely than clusters undergoing mergers \citep[][]{eckert2012,eckert2013} (for a review see \cite{wong2016}). The double-peaked cluster Abell 3395 (hereafter A3395) was first characterized with Einstein observations \citep{forman}. A3395 is relatively close, both in projected separation on the sky and in redshift, to Abell 3391 (hereafter A3391). There is also a galaxy group ESO 161-IG 006 (hereafter ESO-161) located between the two subclusters in the intercluster filament, in alignment with the clusters. 
Table~\ref{tab:objectinfo} lists the cluster masses \citep{piffaretti} and the group mass estimated in this work, the redshifts for the group and clusters \citep{tritton,santos}, the positions\footnote[1]{The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.}, the radius $r_{500}$ at which the mean cluster density is 500 times the critical density of the universe at the redshift of the clusters \citep{piffaretti}, and the X-ray temperatures \citep{vikhlinin}. The cluster centers have a separation of $47.08\arcmin$ on the sky, which corresponds to 2.9 Mpc at their mean redshift. {\it ASCA}, {\it ROSAT}, {\it Planck}, and {\it Suzaku} observations confirm that A3395 and A3391 are connected by an intercluster filament, with detectable diffuse emission apart from cluster scattered light \citep{tittley,planck2013,sugawara}. Previous dynamical analysis suggests that the filament is aligned almost parallel to the line of sight, with an inclination angle in the $3.1^\circ-9^\circ$ range \citep{tittley}. The filament is thus an ideal target for direct detection of the diffuse gas, since the projected surface brightness is much higher than if the system were oriented perpendicular to the line of sight. Here, we report findings based on six observations with {\it Chandra} and {\it XMM-Newton} of A3395, A3391, and the connecting filament. This paper is organized as follows: in Section 2 we discuss the data reduction and analysis techniques; in Section 3 we present the resulting images, spectra, temperature, and metallicity profiles; in Section 4 we discuss the nature and orientation of the filament as well as ESO-161; and conclusions and a summary of this work are presented in Section 5. Unless otherwise stated, all uncertainties are 90\% confidence intervals. For this analysis we assume the abundance table of \cite{grevesse}. 
The mean redshift of A3395 and A3391 is $\bar{z}=0.053$, such that 1\arcsec\ on the sky corresponds to $\approx 1.04$ kpc. We use the fiducial cosmology $H_0 = 70 \rm{\ km \ s^{-1} \ Mpc^{-1}}$, $\Omega_M = 0.3$, and $\Omega _\Lambda = 0.7$. \section{Data Analysis} \label{sec:da} This section discusses the data reduction and analysis techniques employed in this work for \textit{Chandra} and \textit{XMM-Newton}. \subsection{Chandra Data Reduction and Analysis} \label{subsec:chdra} \begin{table*}[t] \centering \caption{Summary of the {\it Chandra} and {\it XMM-Newton} X-ray pointings} \label{tab:pointings} \begin{tabular}{@{}cccccccc@{}} \toprule Observatory & Pointing & ObsID & RA & Dec & Date Obs & \begin{tabular}[c]{@{}c@{}}Exposure {[}ks{]}\\ ACIS-I \\EMOS1/EMOS2/EPN\end{tabular} & PI \\ \midrule {\it Chandra} & A3391 & 4943 & $06^{\rm{h}}26^{\rm{m}}22^{\rm{s}}.20$ & $-53^{\rm{d}}41^{\rm{m}}37^{\rm{s}}.50$ & 2004-01-15 & 18.3 & T. Reiprich \\ \textit{Chandra} & Filament North & 13525 & $06^{\rm{h}}25^{\rm{m}}22^{\rm{s}}.52$ & $-53^{\rm{d}}53^{\rm{m}}54^{\rm{s}}.09$ & 2012-08-18 & 48.4 & S. Randall \\ \textit{Chandra} & Filament Center & 13519 & $06^{\rm{h}}26^{\rm{m}}10^{\rm{s}}.69$ & $-54^{\rm{d}}05^{\rm{m}}08^{\rm{s}}.53$ & 2012-08-17 & 47.1 & S. Randall \\ \textit{Chandra} & Filament South & 13522 & $06^{\rm{h}}26^{\rm{m}}46^{\rm{s}}.24$ & $-54^{\rm{d}}17^{\rm{m}}05^{\rm{s}}.87$ & 2012-08-12 & 48.8 & S. Randall \\ \textit{Chandra} & A3395 & 4944 & $06^{\rm{h}}26^{\rm{m}}49^{\rm{s}}.56$ & $-54^{\rm{d}}32^{\rm{m}}35^{\rm{s}}.16$ & 2004-07-11 & 20.7 & T. Reiprich \\ \textit{XMM-Newton} & Filament Center & 0400010201 & $06^{\rm{h}}26^{\rm{m}}31^{\rm{s}}.62$ & $-54^{\rm{d}}04^{\rm{m}}44^{\rm{s}}.7$ & 2007-04-06 & 38.2/38.5/23.1 & M. Henriksen \\ \bottomrule \end{tabular} \end{table*} A summary of the observations is given in Table~\ref{tab:pointings}. The aimpoint for each Chandra observation was on the front-side illuminated ACIS-I CCD. 
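As a consistency check, the quoted angular scale of $\approx 1.04$ kpc per arcsecond at $\bar{z}=0.053$, and the resulting $\approx 2.9$ Mpc cluster separation, can be reproduced in the fiducial cosmology with a simple numerical integration (a sketch, not the calculation used in the paper):

```python
import math

C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7  # fiducial flat LCDM

def angular_scale_kpc_per_arcsec(z, n=1000):
    """Proper kpc subtended by 1 arcsec at redshift z in flat LCDM."""
    dz = z / n
    # comoving distance via trapezoidal integration of (c/H0)/E(z')
    f = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n + 1)]
    d_c = (C_KMS / H0) * dz * (sum(f) - 0.5 * (f[0] + f[-1]))   # Mpc
    d_a = d_c / (1 + z)                        # angular diameter distance
    return d_a * 1e3 * math.pi / (180 * 3600)  # kpc per arcsec

scale = angular_scale_kpc_per_arcsec(0.053)  # close to 1.04 kpc/arcsec
sep_mpc = scale * 47.08 * 60 / 1e3           # close to 2.9 Mpc
```

The $47.08\arcmin$ separation quoted above, multiplied by this scale, recovers the stated 2.9 Mpc projected distance.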
We use CIAO version 4.8 and CALDB 4.7.2 to reduce the data to level 2 event files with the {\em chandra\_repro} script. The observations were taken in very faint (VF) mode and the event and background files were filtered appropriately. We use the CIAO tool {\em deflare} to remove periods of strong flaring or data drop-outs by filtering out intervals where the light curve is more than $3\sigma$ from the mean. We find no instances of strong flaring. The total filtered ACIS-I exposure time is 183.3 ks. For imaging, we chose the blank sky background files closest to the period of observation for each {\it Chandra} observation listed in Table~\ref{tab:pointings}. We use the CIAO tool {\em reproject\_events} to reproject the blank sky background files to match each observation. These background images were normalized to match the hard band (10-12~keV) count rate in the observations to account for differences in the particle background. We create exposure maps for each image using the {\em asphist} and {\em mkinstmap} routines, and use these to create a background-subtracted, exposure-corrected mosaic image in the 0.3-7.0 keV band, shown in Figure~\ref{fig:mosaic}. \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{mosaicSidebySideAnnotated4.pdf} \caption{Left: The annotated background-subtracted exposure-corrected mosaic {\it Chandra} image of A3391, A3395, and the intercluster filament is shown in the 0.3-7.0 keV energy band and smoothed by a 12\arcsec~Gaussian. The boxes denote regions used for the temperature profile of the filament. The excluded region, marked by the ellipse and red line, contains the galaxy group ESO-161. The green dashed circular region to the east of ESO-161 is used for local background modeling for the group temperature measurement. The green dashed circular region to the west on ACIS-I6 is used for background in all {\it Chandra} spectral analyses. 
The {\it XMM-Newton} field of view (see Table~\ref{tab:pointings}) is shown as the dashed cyan region for reference. Right: Same as left but smoothed by a 40\arcsec\ Gaussian to highlight the intercluster filament. Spectra extracted from the green box region were used to estimate the global temperature and density of the filament. The northern and southern circles are $r_{200}$ for A3391 and A3395, respectively. The wedges are used to derive the surface brightness profile of the group.} \label{fig:mosaic} \end{figure*} To find background point and extended sources, we use the CIAO tool {\em wavdetect} with wavelet scales of 2, 4, 6, 8, 12, and 16 pixels, where the pixels are 0.98\arcsec\ in length. These sources are then excluded from all spectra, surface brightness profiles, and images. For the purposes of making images, we use the CIAO tool {\em dmfilth} to fill in regions of excluded point sources in all of the observations by drawing photons from a Poisson distribution matched to local background regions around the point sources. The {\em specextract} procedure in CIAO was used to extract spectra as well as the appropriate response files for analysis. All spectra in this work were grouped to a minimum of 40 net counts per bin. XSPEC version 12.9.0 was used to perform the spectral analysis. Rather than use the CALDB blank sky files for background modeling, we use the circular westernmost region on the ACIS-I6 chip shown in the left panel of Figure~\ref{fig:mosaic}. This has advantages over the blank sky background files, which represent a sky average rather than the background in a nearby region of the sky. The stowed {\it Chandra} background files, in which long exposures were taken with ACIS stowed and in VF mode, are used for the instrumental background in the spectral fitting, with a hard-band (10-12 keV) correction applied as was done for the blank sky background files used for imaging. 
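The hard-band normalization applied to the blank sky and stowed backgrounds reduces to a single scale factor; a minimal sketch with made-up count rates (illustrative only, not values from the actual data):

```python
def blank_sky_scale(obs_hard_rate, bkg_hard_rate):
    """Scale factor applied to a background dataset so that its
    10-12 keV count rate matches the observation's. That band is
    dominated by the particle background, making it a useful
    normalization anchor. Rates are in counts/s."""
    return obs_hard_rate / bkg_hard_rate

# toy numbers: the observation is slightly quieter than the
# background dataset in the hard band, so the background is scaled down
scale = blank_sky_scale(0.95, 1.10)
```

The scaled background is then subtracted (for imaging) or included in the spectral model (for fitting), as described above.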
The scaled stowed spectra are then subtracted from the source spectra and the local I6 background spectrum during spectral modeling. The stowed dataset accurately represents the quiescent, non-X-ray background (NXB), and introduces an additional $\pm 2\%$ statistical uncertainty (for more information on the stowed dataset we refer the reader to \cite{hickox}). For our faintest region (see Section~\ref{subsec:spectroscopy}), the systematic NXB uncertainty increases the measured kT and XSPEC normalization error ranges by less than $10\%$ and less than $3\%$ respectively. Since this impact on the error ranges is small even for our faintest region, we do not include the systematic NXB uncertainty in our error budget. The background spectrum on the I6 chip (see left panel of Figure~\ref{fig:mosaic}) is simultaneously fit with the source spectra to include background uncertainties in the calculated error ranges. A \textit{RASS} spectrum was also extracted from an annulus with an inner radius of $1^{\circ}$ and an outer radius of $1.1^{\circ}$ centered on ESO-161 from the \textit{RASS} observation of this system in order to better constrain the local background parameters in our spectral fits. This spectrum was simultaneously fit as part of the background model throughout the {\it Chandra} analysis. An absorbed Astrophysical Plasma Emission Code (APEC) \citep{smith} model was used for the source spectra. The background spectra were fit simultaneously with the source spectra, using an absorbed power law (PL) for the cosmic X-ray background (CXB), an absorbed APEC model for the Galactic halo (GH), and an unabsorbed APEC model for the local hot bubble (LHB). The ICM, GH, and CXB components are absorbed assuming a Galactic hydrogen column density of $N_H=6.3 \times 10^{20}$ cm$^{-2}$ found with the ftool {\em nh} \citep{kalberla}. The LHB is unabsorbed. Parameters from the background spectral fit are shown in Table~\ref{tab:bg}.
\begin{table}[ht!] \centering \begin{threeparttable}[b] \caption{CXB and foreground components derived from a spectral fit of RASS data as well as Chandra data from the green dashed region on the ACIS-I6 chip (see the left panel of Figure~\ref{fig:mosaic}) without contamination from source data.} \label{tab:bg} \begin{tabular}{@{}lll@{}} \toprule Component & \begin{tabular}[c]{@{}l@{}}kT {[}keV{]} /\\ $\Gamma$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$N$ {[}cm$^{-5}$/photons \\ keV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1 keV{]}\\ (\textit{Chandra/RASS})\end{tabular} \\ \midrule LHB & $0.092^{+0.06}_{-0.05}$ & \makecell{$1.55^{+8.0}_{-0.17}\times10^{-5}$} \\ GH & $0.37^{+0.31}_{-0.09}$ & \makecell{$2.2^{+0.76}_{-0.92}\times10^{-5}$} \\ PL & $1.4^f$ & \makecell{$2.7^{+0.55}_{-0.58}\times10^{-5}$/$2.1^{+0.28}_{-0.35}\times10^{-3}$} \\ \bottomrule $f$ fixed parameter \\ $\Gamma$ photon index \end{tabular} \end{threeparttable} \end{table} All spectra are fit in the energy range 0.5-7.0 keV for \textit{Chandra} data, 0.3-12.0 keV for \textit{XMM} data, and 0.1-2.0 keV for the \textit{RASS} data. All fitted parameters (temperatures, abundances, and area-scaled normalizations) were constrained to be equal across all datasets, with two exceptions. First, the normalization of the source (e.g., filament emission) component was fixed at zero in background regions that did not include source emission. Second, while the CXB normalizations were set equal for the on-axis Chandra regions, they were independent of the CXB normalizations for the I6 and {\it RASS} spectra, which were in turn independent of one another. This was done to account for the different point source detection thresholds on axis, off axis, and with {\it ROSAT} (see Section~\ref{subsec:chsys}).
\subsection{Systematics Regarding the Chandra X-Ray Background} \label{subsec:chsys} The ACIS-I6 chip, indicated by the 5 offset single CCDs from the primary observations in Figure~\ref{fig:mosaic}, is far from the telescope axis, while the four ACIS-I0-3 chips are relatively close to the telescope axis. {\it Chandra} resolves point sources well, but still only down to a limiting flux, which differs between on- and off-axis observations. For the faint, diffuse emission that is characterized in this work, accurate modeling of the cosmic X-ray background (CXB) is essential. The fainter, unresolved point sources contribute flux that needs to be characterized. For the on-axis observations, we adopt the methodology for estimating the total flux from the unresolved CXB that \cite{bautz}, \cite{bulbul} and \cite{walker2012} implement in similar analyses using {\it Suzaku} data, which we describe here. The {\it Chandra} filament observations allow us to detect point sources down to a flux of $1.4_{-0.6}^{+2.6} \times 10^{-15}$ ergs cm$^{-2}$~s$^{-1}$, the faintest point source detected in our observations. \cite{moretti} defines the unresolved CXB flux in ergs cm$^{-2}$~s$^{-1}$~deg$^{-2}$ as: \begin{equation} F_{CXB} = (2.18 \pm 0.13) \times 10^{-11} - \int_{S_{lim}}^{S_{max}} \frac{dN}{dS}\times S~dS. \end{equation} The analytic form of the cumulative source flux distribution in the 2-10 keV band, in units of deg$^{-2}$, is characterized as: \begin{equation} N(>S) = N_0 \frac{(2 \times 10^{-15})^\alpha}{S^\alpha + S_0^{\alpha - \beta}S^\beta}, \end{equation} where $S$ is in ergs cm$^{-2}$~s$^{-1}$, $\alpha = 1.57_{-0.18}^{+0.10}$ and $\beta = 0.44^{+0.12}_{-0.13}$ are the power law indices for the bright and faint components of the distribution respectively, $N_0 = 5300_{-1400}^{+2850}$, $S_0$ is the flux at the break of the distribution, $S_{lim}$ is the flux of the faintest point source detected in our filament observations, and $S_{max}$ is $8 \times 10^{-12}$ ergs cm$^{-2}$~s$^{-1}$.
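The resolved-source integral can be evaluated numerically from the analytic source counts. The sketch below is illustrative rather than the exact calculation in the text: the break flux $S_0$ is not quoted here, so the value of $4.5\times10^{-15}$ ergs cm$^{-2}$~s$^{-1}$ accompanying the quoted $\alpha$, $\beta$, and $N_0$ in \cite{moretti} is assumed.

```python
# Numerical evaluation of the unresolved-CXB integral using the analytic
# N(>S) above. S0 is an ASSUMED break flux (the Moretti et al. value that
# accompanies the quoted alpha, beta, and N0); this is a sketch, not the
# exact calculation performed in the paper.
import math

N0, ALPHA, BETA = 5300.0, 1.57, 0.44
S0 = 4.5e-15                      # assumed break flux [erg/cm^2/s]
S_LIM, S_MAX = 1.4e-15, 8e-12     # integration limits from the text
F_TOTAL = 2.18e-11                # total 2-10 keV CXB [erg/cm^2/s/deg^2]

def n_cumulative(S):
    """N(>S): number of sources per deg^2 brighter than flux S."""
    return N0 * (2e-15)**ALPHA / (S**ALPHA + S0**(ALPHA - BETA) * S**BETA)

def dn_ds(S):
    """Differential source counts |dN/dS| (analytic derivative)."""
    D = S**ALPHA + S0**(ALPHA - BETA) * S**BETA
    Dp = ALPHA * S**(ALPHA - 1) + BETA * S0**(ALPHA - BETA) * S**(BETA - 1)
    return N0 * (2e-15)**ALPHA * Dp / D**2

# Trapezoidal integration of S * dN/dS on a logarithmic flux grid.
n_steps = 2000
lo, hi = math.log(S_LIM), math.log(S_MAX)
xs = [math.exp(lo + (hi - lo) * k / n_steps) for k in range(n_steps + 1)]
ys = [S * dn_ds(S) for S in xs]
resolved = sum(0.5 * (ys[k] + ys[k + 1]) * (xs[k + 1] - xs[k])
               for k in range(n_steps))

unresolved = F_TOTAL - resolved
print(f"resolved flux:   {resolved:.2e} erg/cm^2/s/deg^2")
print(f"unresolved flux: {unresolved:.2e} erg/cm^2/s/deg^2")
```

With these assumed inputs the unresolved flux comes out at the few $\times 10^{-12}$ ergs cm$^{-2}$~s$^{-1}$~deg$^{-2}$ level, the same order as the measured value quoted below.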
We find the unresolved CXB in our observations has a flux of $7.5 \pm 2.1 \times 10^{-12}$ ergs cm$^{-2}$~s$^{-1}$~deg$^{-2}$. Finally, the expected $1\sigma$ uncertainty in the unresolved CXB flux is given by: \begin{equation} \sigma^2 = \frac{1}{\Omega}\int_{0}^{S_{lim}}\frac{dN}{dS}\times S^2~dS \end{equation} where $\Omega$ is the solid angle \citep{bautz}. We find the expected RMS deviation to be $1.5\times10^{-12}$ ergs cm$^{-2}$~s$^{-1}$~deg$^{-2}$. We fix the on-axis CXB normalization to this unresolved flux and allow the normalization to vary within $1\sigma$. The off-axis CXB flux is well modeled without injecting such priors into the fits, so the off-axis CXB normalization is left free to vary independently in all {\it Chandra} fits. \subsection{XMM-Newton Data Reduction and Analysis} \label{xmmda} For the {\it XMM-Newton} data, we gather photon events registered by the MOS1, MOS2, and PN detectors of the European Photon Imaging Camera (EPIC). To reduce any contamination of the photon detections by soft protons, solar flare periods are suppressed through a wavelet filtering of two event light curves extracted in the 10-12 and 1-5 keV energy bands, respectively. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{xmmAnnotated2.pdf} \caption{Log-scale {\it XMM-Newton} image of the A3391/A3395 intercluster filamentary region smoothed by a 19.6\arcsec~gaussian. This image is taken in the 0.3-2.5 keV band with MOS1, MOS2, and PN. The elliptical group region shown in Figure~\ref{fig:mosaic} (left panel) is shown for reference. Detected point sources are masked.} \label{fig:xmmpic} \end{figure} The exposure times after filtering are shown in Table~\ref{tab:pointings}. Events registered by anomalously bright CCDs of the MOS cameras \citep{kuntz2008} have also been suppressed. All events are rebinned spatially and spectrally into a cube that samples the mirror Point Spread Functions (PSF) and the detector energy responses.
The resulting spatial binning is 1.6\arcsec, while the spectral binning increases in the range 15-190 eV as a function of event energies. Following the approach presented in \cite{bourdin2008}, effective exposure and background noise values are associated with each bin of the event cube and are subsequently used for both imaging and spectroscopy. The background noise model includes false detections of instrumental origin (detector fluorescence lines), cosmic-ray induced particle background, and extended emission of astrophysical origin (CXB and Galactic trans-absorption emission; see \cite{kuntz2000}). More precisely, spatial and spectral variations of the instrumental background are modeled for each detector following the approach described in \cite{bourdin2013}. Amplitudes of the astrophysical emissions have been jointly fitted with the instrumental background in a sky region located to the northeast of the {\it XMM-Newton} pointing, which is spatially separated from the A3395-A3391 intercluster filament. Spectroscopic measurements similarly rely on modeling the source emission measure provided by the APEC model. We assume elemental abundances follow the solar composition tabulated in \cite{grevesse} and that the Spectral Energy Distribution (SED) of the intercluster plasma is redshifted to $z=0.0530$. The ICM, GH, and CXB are absorbed assuming the Galactic Hydrogen column density reported in Section~\ref{subsec:chdra}. The LHB is unabsorbed. For the CXB, the power-law index was fixed at 1.4, while the temperatures of the LHB and GH were fixed at 0.1 keV and 0.3 keV respectively. The normalizations are free to vary. The background model was fit simultaneously with the source model and the normalizations are generally consistent with the {\it Chandra} background normalizations within $2\sigma$ (see Table~\ref{tab:bg}).
In this modeling, all astrophysical components are corrected for spatial variations of the instrument effective area and redistributed as a function of the energy response of the detectors. Photon images and surface brightness profiles are corrected for the background noise model and the effective exposure time expected within their energy band. For these purposes, effective exposures assume the incidental photon energy to follow the SED of an isothermal plasma of temperature kT=5 keV. To increase the S/N, the exposure and background corrected photon image in Figure~\ref{fig:xmmpic} has been smoothed by a gaussian kernel of width $\sigma$(fwhm) = 19.2\arcsec. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{eso161_group1-4_v3.pdf} \caption{{\it XMM-Newton} images of ESO-161 in the 0.3-2.5 keV energy range. The color bar is counts/pixel. Left: The original {\it XMM-Newton} photon image of ESO-161 with 14.4\arcsec pixels. Right: The {\it XMM-Newton} image of ESO-161, curvelet-denoised at the $3\sigma$ level, derived from the left photon image. The green arrows indicate the eastern leading edge. The red arrows indicate the downstream edge discussed in Section~\ref{subsec:eso161}.} \label{fig:coldfront} \end{figure} The image presented in Figure~\ref{fig:coldfront} uses curvelet denoising to preserve surface brightness edges. Specifically, a curvelet transform of the first generation \citep{candes2002,starck} is computed from the photon image (see the left panel of Figure~\ref{fig:coldfront}). This transform combines ridgelet and wavelet bands, whose variance is stabilized following the Multiscale Variance Stabilized Transform proposed in \cite{Zhang}. Variance stabilized coefficients of the exposure corrected photon image are subsequently thresholded at 3$\sigma$, which yields a boolean support of significant coefficients.
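The variance stabilization and significance thresholding steps can be illustrated with a much simpler stand-in: the classical Anscombe transform in place of the MSVST, and a single-level 1D Haar transform in place of the curvelet transform. This is a toy sketch of the principle, not the method actually applied to the {\it XMM-Newton} image.

```python
# Toy illustration of the denoising idea: Poisson counts are
# variance-stabilized (Anscombe transform, a stand-in for the MSVST),
# transformed (one Haar level, a stand-in for the curvelet transform),
# and coefficients below 3 sigma are flagged insignificant.
import math

def anscombe(x):
    """Anscombe transform: stabilizes Poisson variance to ~1."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def haar_step(signal):
    """One level of a Haar transform: (approximation, detail) lists."""
    approx = [(signal[2 * k] + signal[2 * k + 1]) / math.sqrt(2)
              for k in range(len(signal) // 2)]
    detail = [(signal[2 * k] - signal[2 * k + 1]) / math.sqrt(2)
              for k in range(len(signal) // 2)]
    return approx, detail

def significance_support(counts, n_sigma=3.0):
    """Boolean support of detail coefficients exceeding n_sigma.
    After stabilization the coefficient noise is ~1, so the threshold
    is simply n_sigma."""
    stabilized = [anscombe(c) for c in counts]
    _, detail = haar_step(stabilized)
    return [abs(d) > n_sigma for d in detail]

# Toy photon-count profile: flat background of ~4 counts with one sharp
# edge; only the coefficient straddling the edge is significant.
counts = [4, 5, 3, 4, 4, 40, 41, 39]
print(significance_support(counts))
```

The full method works the same way, but in 2D with curvelet coefficients, which are far better suited to preserving the curved surface brightness edges discussed below.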
To restore the source surface brightness, a curvelet transform of the background noise image is projected onto the significant coefficient support and subtracted from the thresholded transform of the photon image, shown in the right panel of Figure~\ref{fig:coldfront}. \section{Results} \label{sec:results} \subsection{Imaging} \label{subsec:imaging} A3391 is the northern cluster of the system, with A3395 located to the south. There is an extended filament of hot X-ray gas connecting these two clusters (see Figure~\ref{fig:xmmpic}). A3395 is comprised of two main subclusters, with a gas filament connecting the two subclusters in the east-west direction in the northern part of the system (see ObsID 4944 in the right panel of Figure~\ref{fig:mosaic}). ESO-161 is most clearly seen in Figures~\ref{fig:coldfront} and~\ref{fig:eso161zoom}, along with the extended diffuse emission in Figure~\ref{fig:xmmpic} indicated inside the group ellipse region. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{azimuthAnnotatedNEedge.pdf} \caption{Close-up \textit{Chandra} image of ESO-161 from Figure~\ref{fig:mosaic} (right panel). Left: ESO-161 with the annuli used for a surface brightness profile (see Section~\ref{subsec:eso161}) overlaid. Right: The overlaid azimuthal region is used for a surface brightness profile (see Section~\ref{subsec:eso161}). The arrows indicate where the two possible stripped gas tails are located.} \label{fig:eso161zoom} \end{figure} The bridge of emission spanning $\sim 3$~Mpc in projection and connecting the subclusters is highlighted in Figure~\ref{fig:mosaic} (right panel) and Figure~\ref{fig:xmmpic} (for a wider field of view, see \cite{tittley,planck2013}). The box regions shown in Figure~\ref{fig:mosaic} (left panel) are used for the temperature, abundance, density, and entropy profiles derived in Sections~\ref{subsec:spectroscopy} and~\ref{subsec:entropyprofile}.
The galaxy group ESO-161 found in the intercluster filament has extended emission mostly to the west, which can be seen in Figures~\ref{fig:xmmpic} and~\ref{fig:eso161zoom}. We created a surface brightness profile (Figure~\ref{fig:esosb}) in the east-west direction across ESO-161 from the region marked by wedges in Figure~\ref{fig:mosaic} (right panel). Here, the annuli are equally spaced bins of 0.6\arcmin. Note that the x-axis origin, 0\arcmin, of the surface brightness profile corresponds to the center of the annuli, with the west annuli at positive arcminutes and the east annuli at negative arcminutes. The goal of this analysis was to discern where the group emission reaches the background emission. Therefore, a cut was made where the surface brightness profile flattens out at $\approx$ 8\arcmin~to the west and at $\approx$ 3\arcmin~to the east. The {\it XMM} image was then examined to refine the group region by eye, as shown in Figure~\ref{fig:mosaic} (left panel). The group emission is elliptical and slightly angled in the NE-SW direction. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{eso161ewsb3-3.pdf} \caption{Surface brightness profile in the 0.3-2.5 keV band of ESO-161 extracted from the \textit{Chandra} observations with the wedges shown in the right panel of Figure~\ref{fig:mosaic}. Errors are $1\sigma$.} \label{fig:esosb} \end{figure} \subsection{Spectroscopy} \label{subsec:spectroscopy} To measure a gas mass, electron density, and temperature for the whole filament, we use the box region shown in the right panel of Figure~\ref{fig:mosaic} to extract the spectra in Figure~\ref{fig:bigboxspec}. The black, red, and green lines are the source spectra corresponding to ObsIDs 13525, 13519, and 13522 respectively. The dark blue line is the simultaneously fitted local background spectrum for the green dashed region on the ACIS-I6 chip for ObsID 13525 in the left panel of Figure~\ref{fig:mosaic}.
The light blue line is the \textit{RASS} annulus simultaneously fitted background spectrum. We note the soft excess in the residuals for the local {\it Chandra} background component. Adding a soft proton component to the model does not improve the fit. Letting the GH and LHB parameters vary untied between spectra removes this soft excess in the residuals for the I6 background spectrum; however, the GH and LHB model parameters are then in tension with each other. Due to the low S/N of the I6 spectrum ($\sim 15\%$), we choose to leave these parameters tied. Performing either of these analyses, however, does not significantly change the best fit parameter values or error ranges. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{bigboxfil_2temp_spec.pdf} \caption{Top: The best fit spectrum for the box shown in the right panel of Figure~\ref{fig:mosaic}. The black, red, and green lines are from ObsIDs 13525, 13519, and 13522 respectively. The dark blue line is the simultaneously-fitted background spectrum for the dashed region on ObsID 13525 shown on the ACIS-I6 chip in the left panel of Figure~\ref{fig:mosaic}. The light blue line is the simultaneously-fitted background \textit{RASS} spectrum. Bottom: Residuals for the top spectra. Dotted lines are model components.} \label{fig:bigboxspec} \end{figure} For all reported gas masses, we assume a 3D cylindrical geometry for the filament with the length and radius dimensions corresponding to the box region length and half-width edges respectively, assuming that the filament is slightly more extended than what is encompassed within the 16\arcmin~by 16\arcmin~{\it Chandra} field of view. The box region in Figure~\ref{fig:mosaic} (right panel), for a 3D cylindrical geometry, has a radius of 0.7 Mpc and a length of 0.9 $\sin(i)^{-1}$~Mpc, where $i$ is the inclination angle of the filament to the line of sight.
The box region was placed where the filament emission is relatively bright and where the ICM emission is relatively faint, thus maximizing the S/N of the filament emission. The box regions used for the profiles in Figure~\ref{fig:mosaic} (left panel) were assumed to have a 3D cylindrical geometry with radii of 0.7 Mpc and a length of 0.3 $\sin(i)^{-1}$~Mpc. The electron density of the filament is derived from the normalization in XSPEC, and is given by: \begin{equation} \begin{split} n_e = \biggl(1.6 \ N \ \sin(i) \times 1.08\times10^{-10}(1+z)^2 \\ \biggl (\frac{D_A}{\rm{Mpc}}\biggl)^2\biggl(\frac{r}{\rm{Mpc}}\biggl)^{-2}\biggl(\frac{l_{obs}}{\rm{Mpc}}\biggl)^{-1}\biggl)^{1/2}, \end{split} \label{eq:electrondensity} \end{equation} where $D_A$ is the angular diameter distance to the system, $r$ is the radius of the filament, $l_{obs}$ is the observed projected length of the filament, and $N$ is the XSPEC normalization. The electron density profile across the filament is shown in Figure~\ref{fig:neprof}. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{nevxgroupexbin2.pdf} \caption{The electron density profile for the filamentary region derived from the 1T model normalizations. The black and cyan dashed lines are $r_{200}$ for A3391 and A3395 respectively. The X-axis is the distance from the center of A3391.} \label{fig:neprof} \end{figure} The aforementioned box region (see right panel of Figure~\ref{fig:mosaic}) was best fit with a two temperature APEC model ($\chi^2/dof = 619.46/594$) rather than a one temperature APEC model ($\chi^2/dof = 714.96/595$). We find projected temperatures kT = $4.45_{-0.55}^{+0.89}$~keV and $0.29_{-0.03}^{+0.08}$~keV.
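The electron density relation above is straightforward to evaluate; the sketch below is illustrative only. The XSPEC normalization used here is a hypothetical placeholder of a plausible order of magnitude, not the fitted value, and $D_A \approx 213$ Mpc is an approximate angular diameter distance at $z = 0.053$.

```python
# Sketch of the electron density relation above for a cylindrical
# filament. The normalization value is a HYPOTHETICAL placeholder, and
# D_A ~ 213 Mpc is an approximate angular diameter distance at z=0.053.
import math

def electron_density(norm, z, d_a_mpc, r_mpc, l_obs_mpc, inc_deg=90.0):
    """n_e [cm^-3] from the XSPEC norm via the relation above;
    inc_deg is the inclination i to the line of sight."""
    sin_i = math.sin(math.radians(inc_deg))
    return math.sqrt(1.6 * norm * sin_i * 1.08e-10 * (1 + z)**2
                     * d_a_mpc**2 / r_mpc**2 / l_obs_mpc)

# Hypothetical norm chosen to give a density of the measured order.
n_e = electron_density(norm=5.9e-4, z=0.053, d_a_mpc=213.0,
                       r_mpc=0.7, l_obs_mpc=0.9)
print(f"n_e = {n_e:.2e} cm^-3")

# Overdensity relative to the mean baryon density at this redshift,
# assuming a mean molecular weight per electron mu_e ~ 1.2.
M_P = 1.6726e-24                    # proton mass [g]
rho_fil = n_e * 1.2 * M_P           # filament gas mass density [g/cm^3]
rho_baryon = 4e-31                  # mean baryon density [g/cm^3]
print(f"overdensity ~ {rho_fil / rho_baryon:.0f}")
```

A density of order $10^{-4}$ cm$^{-3}$ corresponds to a baryon overdensity of several hundred, which is the quantity compared against WHIM expectations below.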
Under the assumption that the hotter temperature component is that associated with the filament (see Section~\ref{subsec:filament} for discussion), we find an electron density $n_e=1.08^{+0.06}_{-0.05} \times 10^{-4}$~$\sin(i)^{\frac{1}{2}}$~cm$^{-3}$, and $M_{\rm gas} = 2.7^{+0.2}_{-0.1} \times 10^{13}$~$\sin(i)^{-\frac{1}{2}}$~M$_\odot$ for the filament, assuming that this inferred density extends outside the {\it Chandra} FOV into the box region shown in Figure~\ref{fig:mosaic}. If the filament is instead completely covered by the {\it Chandra} FOV and is not extended, then the gas mass would be $\sim 1.7 \times 10^{13}$~$\sin(i)^{-\frac{1}{2}}$~M$_\odot$. This gas mass is in good agreement with \cite{tittley}. Note that our errors are statistical; there are additional systematic errors associated with the assumed cylindrical geometry and unknown substructure of the filament. The mean baryonic density of the universe at $\bar{z} = 0.053$ is $\bar{\rho}_{baryon} \simeq 4 \times 10^{-31}$ g cm$^{-3}$. If the filament is in the plane of the sky (see Section~\ref{subsubsec:orientation}), $\frac{\rho_{fil}}{\bar{\rho}_{baryon}} \lesssim 541$. If the filament is indeed aligned close to the plane of the sky, this overdensity is not consistent with that expected for the WHIM gas, which is thought to range between 1 and 250 \citep{bregman}. The boxes shown in Figure~\ref{fig:mosaic} (left panel) are used to extract spectra and create the projected one temperature (1T) profile. In Figure~\ref{fig:tempprof}, we display three temperature profiles: two {\it Chandra} profiles, one including and one excluding the group ESO-161, and the {\it XMM} profile, which also excludes the group region. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{tvxgroupex.pdf} \caption{The projected temperature from \textit{Chandra} and \textit{XMM-Newton} observations as a function of distance from the center of the northern subcluster, A3391.
Each point corresponds to a box region from which we extracted spectra (see the left panel of Figure~\ref{fig:mosaic}). The black points include the group emission and are offset for viewing purposes. The green triangles are the \textit{XMM} measurements and the blue circles are the \textit{Chandra} measurements. Regions are labeled for reference.} \label{fig:tempprof} \end{figure} We additionally fit a two temperature (2T) APEC model to the same spectra in each of the regions in the profile, keeping the metallicity fixed at $0.3 Z_\odot$. We find that adding a second, cool component to the spectra of the northern filament observation, for regions 2 and 3 only, yields statistically significant improvements; the 2T fits are shown in Table~\ref{tab:2temp} (see Section~\ref{subsec:filament} for discussion). \begin{table}[] \centering \caption{The best fit parameters for 1 and 2 temperature models for regions 2 and 3 in the 0.5-7.0 keV band.} \label{tab:2temp} \begin{tabular}{@{}lccccc@{}} \toprule Reg & \begin{tabular}[c]{@{}c@{}}kT$_1$\\ {[}keV{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$N_1$\\ ($10^{-4}$)\\ {[}cm$^{-5}${]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}kT$_2$\\ {[}keV{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$N_2$\\ ($10^{-4}$)\\ {[}cm$^{-5}${]}\end{tabular} & $\chi^2$/dof \\ \midrule 2 & $3.03_{-0.49}^{+0.88}$ & $9.38_{-0.50}^{+0.10}$ & ... & ... & 359.51/328 \\ 2 & $4.77_{-1.21}^{+2.24}$ & $7.27_{-0.71}^{+0.68}$ & $0.44_{-0.10}^{+0.17}$ & $6.74_{-2.58}^{+3.23}$ & 301.83/327 \\ 3 & $3.53_{-0.87}^{+1.85}$ & $7.10_{-0.64}^{+0.84}$ & ... & ... & 203.55/211 \\ 3 & $3.87_{-1.00}^{+2.46}$ & $7.15_{-0.69}^{+1.24}$ & $0.28_{-0.09}^{+0.15}$ & $3.89_{-2.47}^{+8.49}$ & 183.87/210 \\ \bottomrule \end{tabular} \end{table} We find a best fit 1T model with projected temperature kT $= 4.49^{+2.31}_{-0.97}$ keV, and electron density $n_e = 9.79^{+0.92}_{-0.92} \times 10^{-5}$~$\sin(i)^{\frac{1}{2}}$~cm$^{-3}$ for Region 4.
This is the only area in our study that lies approximately at $r_{200}$ of both of the subclusters. Figure~\ref{fig:allfilm2spec} shows the spectrum for Region 4, where the black (13519) and red (13522) lines are the source spectra, the green line (13525) is the simultaneously fit dashed background region (see the left panel of Figure~\ref{fig:mosaic}), and the blue line is the simultaneously fit background \textit{RASS} spectrum. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{allfilm2_spec.pdf} \caption{Spectral fit and the residuals for Region 4, located approximately at $r_{200}$ for both clusters, shown in the left panel of Figure~\ref{fig:mosaic}. The black and red lines are the source spectra from north to south respectively, the green line is the simultaneously fitted dashed background region (see the left panel of Figure~\ref{fig:mosaic}), and the blue line is the simultaneously fitted background \textit{RASS} spectrum.} \label{fig:allfilm2spec} \end{figure} We obtained upper limits on the projected abundances (Table~\ref{tab:abund}) for regions 1, 3, 4, and 6 shown in the left panel of Figure~\ref{fig:mosaic}, derived from an absorbed APEC model. \begin{table}[H] \centering \caption{Projected abundance values for the box regions shown in Figure~\ref{fig:mosaic} (left panel).} \label{tab:abund} \begin{tabular}{ll} \hline Region & \begin{tabular}[c]{@{}l@{}}Abundance {[}$Z_\odot${]}\end{tabular} \\ \hline 1 & $<0.72$ \\ 2 & $0.14^{+0.12}_{-0.12}$ \\ 3 & $<0.63$ \\ 4 & $<0.65$ \\ 5 & $0.60^{+0.74}_{-0.20}$ \\ 6 & $<0.63$ \\ \hline \end{tabular} \end{table} The temperature profile along the intercluster filament derived from {\it XMM} observations is in good agreement, within uncertainties, with the {\it Chandra} results. We could not fit Region 6 with the {\it XMM} data, as the region is only covered by EMOS1 after the filtering of bright CCDs, and the signal-to-noise is too low to constrain the fit.
The large uncertainties for Region 3, and to a lesser extent Region 4, are a result of excluding the group region and subsequently having low areal coverage of the observation (see the left panel of Figure~\ref{fig:mosaic}), as well as a low inherent S/N, as Region 4 is our faintest region. Our temperature, abundance, and density profiles (see Section~\ref{subsec:entropyprofile}) are also in good agreement with those found with {\it Suzaku} in \cite{sugawara}. \section{Discussion} \label{sec:discussion} \subsection{The Filament}\label{subsec:filament} Here we discuss the derived density and entropy profiles, as well as the galaxy group ESO-161, to further explore the orientation and nature of the system. \subsubsection{Nature of the Filament} \label{subsec:entropyprofile} We fit the filament data with 2T models because there is likely contamination from ICM emission between $r_{500}$ and $r_{200}$. As shown in Section~\ref{subsec:spectroscopy}, we find that a 2T model is a statistically significant improvement over a 1T model for regions 2 and 3, with a cooler component ranging from $\sim 0.2-0.6$~keV (see Table~\ref{tab:2temp}). We find a group temperature of $\sim 1.09$ keV (see Section~\ref{subsec:eso161}). We find that the measured cooler components in the filament regions are consistent with the temperature profile one would expect for a $\sim 1$~keV group at this distance from the group center, based on the universal group temperature profile derived by \cite{sun}. Therefore, the 2T fits for both of the aforementioned regions, as well as the box region shown in the right panel of Figure~\ref{fig:mosaic}, which all cover $r_{500}$ for the group, indicate that there is extended group emission in the filament beyond the group excluded region shown in Figure~\ref{fig:mosaic} (left panel). The electron density (see Equation~\ref{eq:electrondensity}) profile for the filament, assuming it is in the plane of the sky, is shown in Figure~\ref{fig:neprof}.
The black and cyan dashed lines are $r_{200}$ for A3391 and A3395 respectively. There appears to be a dip in the electron density at the midpoint of the filament, at $\sim r_{200}$ of both the subclusters. This minimum in the density profile is approximately 2 dex higher than the mean baryonic density of the universe at the mean redshift of the system. The entropy profile is shown in Figure~\ref{fig:entropyprofile}, where we define the entropy as $K = k_B T n_e^{-2/3}$, with $k_B$ Boltzmann's constant, $n_e$ the electron density, and $T$ the temperature. The entropy profiles for galaxy clusters derived from \cite{voit} for A3391 and A3395 are shown in Figure~\ref{fig:entropyprofile}, where the center of each cluster was determined from NED. The blue and green lines are the self-similar entropy profiles for A3391 and A3395 respectively: \begin{equation} K(r) = 1.41 \pm 0.03~K_{200}~(r/r_{200})^{1.1}. \end{equation} $K_{200}$ is the entropy at $r_{200}$ and is defined as: \begin{equation} \begin{split} K_{200} = 362~\rm{keV}~\rm{cm}^2~\frac{T_X}{1~keV}~\biggl(\frac{T_{200}}{T_X}\biggl) \\ \times\biggl(\frac{H(z)}{H_0}\biggl)^{-4/3}\biggl(\frac{\Omega_m}{0.3}\biggl)^{-4/3}, \end{split} \end{equation} where $T_X \approx T_{200}$, and $\Omega_m$ is the matter density parameter. The black vertical dashed line is $r_{200}$ for A3391, and the cyan vertical dashed line is $r_{200}$ for A3395. $r_{200}$ values for the clusters were estimated using reported values of $r_{500}$ \citep{piffaretti} and assuming $r_{200} \sim 1.7r_{500}$ \citep[e.g.][]{ami}. We note that these values for $r_{200}$ are slightly smaller than those reported in \cite{sugawara}. The values for $r_{200}$ derived in \cite{sugawara} are estimated using the empirical $r_{200}-T_X$ relation \citep{henry} and are 2.3 and 2.1 times the measured $r_{500}$ value for A3391 and A3395 respectively.
This difference in $r_{200}$ affects the normalization of the self-similar entropy profile (see Section~\ref{subsubsec:orientation} for discussion). \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{33913395entProf_bin2.pdf} \caption{The blue line shows the self-similar entropy profile for A3391 derived from \cite{voit} (left two panels). The green line is the same, but for A3395 (right two panels). The black vertical dashed line is $r_{200}$ for A3391, and the cyan vertical dashed line is $r_{200}$ for A3395. The data points are the derived entropy for the green box regions shown in the left panel of Figure~\ref{fig:mosaic} and points are labeled for reference. The magenta pentagons are the entropy, assuming the filament is in the plane of the sky, and the red triangles are entropy values for a filament orientation i = $3.1^\circ$ to the line of sight, as suggested by \cite{tittley}. The distance shown on the x-axis is the distance from the cluster center.} \label{fig:entropyprofile} \end{figure} The data points in Figure~\ref{fig:entropyprofile} represent the entropy derived from the measured gas temperatures and electron densities shown in Figure~\ref{fig:tempprof} and Figure~\ref{fig:neprof} respectively. The 90\% temperature and electron density errors were propagated to derive the 90\% entropy uncertainties (see Figure~\ref{fig:entropyprofile}). The magenta pentagons are the entropy of the filament assuming it is in the plane of the sky ($i = 90^\circ$) and the red triangles are entropy values for a filament orientation $i = 3.1^\circ$ to the line of sight, the lowest inclination to the line of sight that \cite{tittley} argue for following their dynamical analysis of the system. 
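As a rough numerical cross-check of the self-similar scaling, the sketch below evaluates $K_{200}$ and $K(r)$ for a representative cluster. The inputs ($T_X \approx T_{200} \approx 5$ keV, $\Omega_m = 0.3$, flat $\Lambda$CDM $H(z)$, and round-number filament values) are illustrative assumptions, not the exact values adopted for A3391 and A3395.

```python
# Numerical sketch of the self-similar entropy scale, assuming a
# representative T_X ~ T_200 ~ 5 keV cluster at z = 0.053 with
# Omega_m = 0.3 and a flat LCDM H(z). Inputs are illustrative.
import math

def hubble_ratio(z, omega_m=0.3):
    """H(z)/H0 for a flat LCDM cosmology."""
    return math.sqrt(omega_m * (1 + z)**3 + (1 - omega_m))

def k200(t200_kev, z, omega_m=0.3):
    """K_200 [keV cm^2] from the Voit et al. scaling given above."""
    return (362.0 * t200_kev * hubble_ratio(z, omega_m)**(-4.0 / 3.0)
            * (omega_m / 0.3)**(-4.0 / 3.0))

def k_self_similar(r_over_r200, t200_kev, z):
    """K(r) = 1.41 K_200 (r/r200)^1.1."""
    return 1.41 * k200(t200_kev, z) * r_over_r200**1.1

def entropy_from_data(kt_kev, n_e):
    """Observed entropy K = kT * n_e^(-2/3) [keV cm^2]."""
    return kt_kev * n_e**(-2.0 / 3.0)

# Self-similar entropy at r200 for a 5 keV cluster at z = 0.053, versus
# the entropy implied by representative filament values
# (kT ~ 4.5 keV, n_e ~ 1e-4 cm^-3).
print(f"K(r200) self-similar: {k_self_similar(1.0, 5.0, 0.053):.0f} keV cm^2")
print(f"K from data:          {entropy_from_data(4.5, 1e-4):.0f} keV cm^2")
```

Both numbers come out at the $\sim 2000$ keV cm$^2$ level, an order of magnitude above the $\lesssim 250$ keV cm$^2$ expected for the WHIM, which is the comparison drawn below.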
Even assuming the filament is in the plane of the sky, the entropy at large radii, namely near $r_{200}$ for both clusters, is larger than the expected entropy values for the dense range of the WHIM gas by at least a factor of four, with the predicted value for the WHIM at this redshift being approximately 250 keV cm$^2$ \citep{vala}. All of the profile regions lie inside $r_{200}$ of at least one of the subclusters (both, in the case of Region 4). The extended ICM gas is expected to be hotter than the WHIM, and will bias the electron density towards higher values, so it is unclear what the overall entropy bias is due to these regions overlapping with the subcluster outskirts. The radius of the filament profile geometry was assumed based upon the size of the \textit{Chandra} observations, and the filament may actually be more extended than what is captured in the 16\arcmin~by 16\arcmin~observations. If this is the case, our electron density measurements are biased high. This in turn biases the entropy low, reinforcing the conclusion that the gas in the filamentary region is ICM gas, as the entropy across the filament is already too high to be consistent with WHIM emission. We find no evidence for a shock that would support the suggestion by \cite{sugawara} of shock heated gas in this region. The flat temperature profile across the filament is consistent with ICM gas being tidally pulled into the filament during an early stage merger between the clusters. An interaction between the subclusters was also recently suggested by \cite{sugawara}. \subsubsection{Orientation of the Filament}\label{subsubsec:orientation} It has been shown that the entropy profile of most massive clusters is approximately self-similar within $r_{500}$, and then flattens below self-similar at larger radii \citep[e.g.][]{walker2013}. There is an uncertainty when relating a measured $r_{500}$ to $r_{200}$.
This uncertainty propagates into the normalization of the self-similar entropy profile, making it difficult to say with conviction which inclination brings the profile closer to the expected self-similar value. Figure~\ref{fig:entropyprofile} suggests a filamentary geometry close to the plane of the sky. The larger $r_{200}$ values determined empirically by \cite{sugawara} only serve to strengthen the argument for a filament orientation closer to the plane of the sky than the range close to the line of sight reported in other works, as the larger $r_{200}$ values decrease the normalization of the self-similar entropy profile. In any case, this uncertainty in normalization does not change the observed flattening of the entropy profile. Given that the global filamentary temperature is $\sim 4.5$ keV, the gas is very likely from the ICM outskirts of the two clusters, in which case the clusters must be close enough to be tidally interacting and cannot have a large line-of-sight separation. \cite{tittley} found through a dynamical analysis of the system that the subclusters and the connecting filament are oriented close to the line of sight, having an inclination, $i$, between $3.1^\circ$ and $9.0^\circ$. \cite{sugawara} suggest that the filament may be inclined $\approx 10^\circ$ to the line of sight in order for their X-ray measured Compton $y$ parameter to agree with the $y_{SZ}$ parameter reported by \cite{planck2013} for the filament. However, \cite{sugawara} also suggest that the discrepancy in $y$ parameters is likely due to a combination of the system not being in the plane of the sky and unresolved multi-phase or shock heated gas present in the {\it Suzaku} observations. Indeed, if the system is inclined $10^\circ$ to the line of sight, the true separation between the subclusters would be over 17 Mpc, making it unlikely for the clusters to be interacting.
However, we note that the center of the galaxy group ESO-161 is just outside the {\it Suzaku} field of view, so the extended cooler phase gas from the group, mixing with the surrounding filament gas, may be an explanation for the $y$ parameter discrepancy. The line of sight velocity difference of the clusters ($\sim 240$ km s$^{-1}$ \citep{struble}) is rather small and consistent with an early stage merger without a large line-of-sight peculiar velocity component. If the velocity difference were much larger, then that would imply the clusters are significantly unbound and unable to interact tidally, or that the clusters are undergoing a late stage merger. The former scenario is in contradiction with the temperature and entropy values that we measure, and the latter scenario contradicts the observed line of sight separation between the clusters. \subsection{ESO-161} \label{subsec:eso161} To constrain the temperature of the galaxy group ESO-161, we fit the group region (see the left panel of Figure~\ref{fig:mosaic}) with an absorbed APEC model and the same background prescription described in Section~\ref{sec:da}; we use the dashed circular region to the east of the group shown in the left panel of Figure~\ref{fig:mosaic} to simultaneously model the local background. Our fit yielded a temperature of $1.09^{+0.58}_{-0.05}$~keV for the group. This temperature is significantly cooler than the best fit for the surrounding region, $4.45^{+0.89}_{-0.55}$ keV (see Section~\ref{subsec:spectroscopy}). The emission to the west of the group (see Figure~\ref{fig:xmmpic}) is indicative of a diffuse tail. With \textit{Chandra}, this diffuse gas can be resolved into a bimodal structure (see Figure~\ref{fig:eso161zoom}). We use the azimuthal regions shown in Figure~\ref{fig:eso161zoom} to derive the azimuthal surface brightness profile in the 0.3-3.0 keV band shown in Figure~\ref{fig:esoazprof}.
This profile shows a hint of a double peak, at the same positions as the arrows pointing towards the ram pressure stripped tail candidates (see the right panel of Figure~\ref{fig:eso161zoom}) indicated by the dotted lines, providing further evidence that the tail may indeed have a bimodal structure. The extended emission to the west of ESO-161 suggests that the group may be undergoing ram pressure stripping as the group moves through the filament. The bimodal tail may have a ``downstream edge" to the west of the group center, which is more apparent in the right panel of Figure~\ref{fig:coldfront}. This bimodal tail structure may indicate that the group's potential is ellipsoidal \citep[e.g.][]{roediger}. \cite{randallm86} first suggested that the double tails are due to stripping from ellipsoidal potentials. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{eso161az23-3.pdf} \caption{The surface brightness profile for the azimuthal regions shown in Figure~\ref{fig:eso161zoom} in the 0.3-3.0 keV band. The dotted lines indicate the position of the arrows in Figure~\ref{fig:eso161zoom}, pointing towards the visual stripped tails of gas. Errors are $1\sigma$.} \label{fig:esoazprof} \end{figure} Another clue bolstering the ram pressure stripping scenario is the possible cold front shown in Figure~\ref{fig:coldfront}. The northeastern edge is consistent with the ``upstream edge" reported in \cite{roediger} for systems experiencing ram pressure stripping as they move through an ambient medium. To further investigate the prominent edge seen to the east of the galaxy group in Figure~\ref{fig:coldfront}, we derived a surface brightness profile in the northeast region of the group shown in Figure~\ref{fig:eso161zoom} (left panel), which may be seen in Figure~\ref{fig:coldfrontSB}.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{eso161sbcoldfrontbin2binreg3-3.pdf} \caption{The {\it Chandra} surface brightness profile in the 0.3-3.0 keV band for the region shown in the left panel of Figure~\ref{fig:eso161zoom}, centered on the group. Errors are $1\sigma$.} \label{fig:coldfrontSB} \end{figure} There is a clear drop in surface brightness at $\sim$2\arcmin~ in Figure~\ref{fig:coldfrontSB}, moving radially away from the group. We do not have enough data to distinguish whether this edge is a cold front, a shock front, or neither. More observing time with {\it XMM-Newton} would shed light on this question. This apparent edge, as well as the bimodal tail structure of stripped gas, indicates that the galaxy group is moving east in projection through the filament. \cite{gg} give the condition for ram pressure stripping to occur as $P_{ram} = \frac{1}{2}\rho_{ICM}v_r^2~\gtrsim~\sigma^2 \rho_{gas}$, where $\rho_{ICM}$ is the density of the intercluster medium, $v_r$ is the velocity of the group relative to the intercluster medium, $P_{ram}$ is the ram pressure, $\sigma$ is the galaxy group's velocity dispersion, and $\rho_{gas}$ is the group's gas density. In order to estimate the gas density of the group, we assume an oblate spheroid geometry, with the line-of-sight axis equal to the projected major axis and the minor axis in the plane of the sky. Using the electron density inferred from the box region in Figure~\ref{fig:mosaic} (right panel), the density for the group region is $n_e \sim 1.8 \times 10^{-4}$~cm$^{-3}$. \cite{tittley} report that the group velocity dispersion is $1800$~km s$^{-1}$. This velocity dispersion is much too high for the group to be bound, so we use the following method to roughly estimate the velocity dispersion of ESO-161. We use the $M_x-T_x$ relation derived by \cite{vikhlinin} to estimate $M_{500}$.
While the sample used to derive the $M_x-T_x$ relation in \cite{vikhlinin} consists of galaxy clusters and not groups, \cite{sun} report that the relation also holds for lower temperature galaxy clusters and groups. We find that $M_{500} \sim 2.3 \times 10^{13}$~M$_\odot$. Assuming spherical symmetry we may then use $M_{500} = 500 \times \rho_c \frac{4}{3} \pi r_{500}^3$, where $\rho_c$ is the critical density of the Universe at the redshift of ESO-161, $9.86\times10^{-30}$~g cm$^{-3}$, and $r_{500}$ is the radius at which the density of the galaxy group is 500 times the critical density of the Universe at the redshift of the galaxy group, to estimate the radius of the group. Finally, we may then use $M_{500} = \frac{3}{G}\sigma^2r_{500}$, where $G$ is the gravitational constant and $\sigma$ is the velocity dispersion of the group. We find that the group has a velocity dispersion of $\sim 250$~km s$^{-1}$. This velocity dispersion is approximately six times lower than what \cite{tittley} report. We find that the group must have a relative velocity to the filamentary region $v_r \geq 360$~km s$^{-1}$ in order for ram pressure stripping to occur. If the filament is oriented to the median inclination angle given by \cite{tittley}, then the minimum relative velocity would have to be $\sim 630$ km/s as the group moves eastward through the filament. Another possibility for the extended emission to the west of the group is tidal stripping, perhaps due to a gravitational interaction with another massive object. The extended emission to the north of the group (see the right panel of Figure~\ref{fig:coldfront}), as well as the $<$1 keV temperatures found in regions to the north of the group (see Table~\ref{tab:2temp}) could indicate that ESO-161 is moving to the southeast, around A3391, and the group experienced a tidal stripping event. 
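The chain of estimates above (taking $M_{500}$ from the $M_x-T_x$ relation, $r_{500}$ from the critical density, $\sigma$ from $M_{500} = \frac{3}{G}\sigma^2 r_{500}$, and the stripping condition of \cite{gg}) can be sketched numerically. The densities entering the stripping threshold below are illustrative values taken from the text, so the resulting velocity is indicative only and depends on where in the group the densities are evaluated:

```python
# Sketch of the velocity-dispersion and stripping-velocity estimates above.
# Inputs follow the text; the density ratio in v_min is an illustrative
# choice, so the resulting threshold is indicative rather than exact.
import math

G     = 6.674e-8         # cm^3 g^-1 s^-2
M_sun = 1.989e33         # g
M500  = 2.3e13 * M_sun   # from the M_x-T_x relation (text)
rho_c = 9.86e-30         # g cm^-3, critical density at the group redshift

# r500 from M500 = 500 * rho_c * (4/3) * pi * r500^3
r500 = (M500 / (500.0 * rho_c * (4.0 / 3.0) * math.pi)) ** (1.0 / 3.0)

# velocity dispersion from M500 = (3/G) * sigma^2 * r500
sigma = math.sqrt(G * M500 / (3.0 * r500))       # cm/s
print(f"sigma ~ {sigma / 1e5:.0f} km/s")         # ~250-280 km/s, depending on rounding

# Stripping condition (1/2) rho_ICM v_r^2 >~ sigma^2 rho_gas gives
# v_min = sigma * sqrt(2 * rho_gas / rho_ICM); densities illustrative.
n_gas, n_icm = 1.8e-4, 1.08e-4                   # cm^-3 (values quoted in the text)
v_min = sigma * math.sqrt(2.0 * n_gas / n_icm)
print(f"v_min ~ {v_min / 1e5:.0f} km/s")
```

With these particular density choices the threshold lands somewhat above the quoted 360 km s$^{-1}$; the exact number is sensitive to the adopted group and filament densities.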
The emission to the west may also be the result of tidal stripping due to an interaction with a currently unidentified, possibly gas stripped object. In any case, these gaseous double tail-like structures are commonly seen in ram pressure stripped galaxies, most notably in the Virgo Cluster \citep[i.e.][]{forman1979,randallm86}, and in simulations \citep[i.e.][]{roediger}. This would therefore lead to the conclusion that ESO-161 is being ram pressure stripped as it moves eastward in projection through the intercluster filament. We note that such clear examples of ram pressure stripped galaxy groups near low density cluster outskirt environments are quite rare \citep[e.g.][]{degrandi}. \section{Summary and Conclusion} \label{sec:summary} We have presented results based on {\it Chandra} and {\it XMM-Newton} observations of the intercluster gas filament connecting A3391 and A3395. We find the following: \begin{itemize} \item[--] A global projected temperature kT = $4.45_{-0.55}^{+0.89}$~keV and electron density $n_e=1.08^{+0.06}_{-0.05} \times 10^{-4}\sin(i)^{\frac{1}{2}}$~cm$^{-3}$ for the intercluster filament. \item[--] The temperature and electron density derived for the global intercluster filamentary region indicate that the filament gas mass is $M_{\rm gas} = 2.7^{+0.2}_{-0.1} \times 10^{13}\sin(i)^{-\frac{1}{2}}$~M$_\odot$. This is a similar mass to what is reported for the intercluster filament between the two subclusters Abell 222 and Abell 223 \citep{werner}, and is consistent with what \cite{tittley} found in their analysis with \textit{ROSAT} for A3395/A3391. \item[--] The temperature and entropy profiles derived for the filament suggest ICM gas is being tidally pulled into the intercluster filamentary region as part of an early stage pre-merger.
The density across the intercluster filament is consistent with the dense WHIM as well as with what is expected in cluster outskirts, near the virial radius, although the temperature and entropy are much higher than what is expected for the WHIM. \item[--] The galaxy group ESO-161, located between A3391 and A3395 in the intercluster filament, may be undergoing a stripping event as the group moves eastward seemingly perpendicular to the filament with a minimum relative velocity of approximately 360 km s$^{-1}$ if the filament is oriented in the plane of the sky. In addition, the group has a distinct edge in surface brightness to the east, which would require a deeper observation with {\it XMM-Newton} to characterize. \end{itemize} Since the subclusters appear to be tidally interacting, their line of sight separation must not be large, leading us to conclude that the filament is probably not oriented close to the line of sight as was suggested by \cite{tittley}. Furthermore, an in-plane orientation yields a density that is more consistent with ram pressure stripping of the galaxy group ESO-161. The only evidence we find for cooler phase gas is that likely associated with the galaxy group ESO-161. The filament density, even in projection, is consistent with the theoretical density of the WHIM; however, this density is also consistent with density profiles of clusters out to the virial radius \citep{morandi}. \cite{sugawara} argue that the filament temperature is too high to be explained by universal cluster temperature profiles of the subclusters, and attribute this to a shock, perhaps as the subclusters merge. We do not find evidence for merger shocks in the filament. The 4.5 keV filament temperature that we measure is consistent with ICM gas being tidally pulled into the intercluster filament in the early stages of a massive cluster merger.
This gas, heated above both the cluster temperature profiles and the temperatures expected for the WHIM, could also be attributed to adiabatic compression in the filament. \acknowledgments We would like to thank Rodolfo Montez Jr., and Yuanyuan Su for their enlightening discussions and comments. We would also like to thank the referee for their helpful comments. SWR was partially supported by the Chandra X-ray Center through NASA contract NAS8-03060, and the Smithsonian Institution. Support for GEA was partially provided by Chandra X-ray Observatory grants GO2-13161X and GO3-14133X, and by the Graduate Assistance in Areas of National Need (GAANN). This publication received federal support from the Latino Initiatives Pool, administered by the Smithsonian Latino Center. \bibliographystyle{aasjournal}
\section{Introduction} \label{sec:intro} \lettrine[lines=2]{T}{o} perceive and understand more complex scenes, researchers attempt to capture more information about scenes, e.g., multispectral and light field data. Traditional cameras take pictures using 2D sensors with Bayer filter arrays. The captured trichromatic images are 2D projections of 3D scenes with three color channels, i.e., red, green and blue. With the development of optics and computational photography, both multispectral and light field acquisition have been widely explored to capture more spectral channels or light field information of scenes. In the past decades, several spectral imaging methods were proposed to capture color images with more spectral channels than traditional trichromatic photography. Generally, according to their system architectures, existing multispectral cameras can be divided into several types, e.g., scanning based spectrometers \cite{james2007spectrograph}, filter-based spectrometers \cite{gat2000imaging}, the Coded Aperture Snapshot Spectral Imager (CASSI) \cite{arce2014compressive}, the Computed Tomography Imaging Spectrometer (CTIS) \cite{vandervlugt2007reconfigurable}, and the Prism-Mask Video Imaging Spectrometer (PMVIS) \cite{cao2011prism}. Besides spectral information, the light field (or equivalently, scene depth) is also an important clue for many tasks in computer vision and graphics. Recently, to capture the light field, several methods have been proposed, e.g., microlens array based methods \cite{ng2005light}, multi-camera array based methods \cite{zhang2004self} and focal stack based methods \cite{li2014saliency}. (Note that the focal stack is one type of representation of the light field, thus in this paper we use the terms light field and focal stack interchangeably.) The confocal laser scanning microscope (CLSM) \cite{Pawley1995Handbook} can acquire a microscopic focal stack using a scanning scheme.
As for snapshot depth acquisition, chromatic information has been explored for extracting depth from color (RGB) images \cite{garcia2000chromatic}\cite{levin2007image}\cite{cossairt2010spectral}\cite{trouve2013passive}. Besides, Time of Flight (ToF) cameras \cite{Schuon2008High} and coded illumination cameras \cite{Schubert1997Fast} were presented to capture the depth directly, so that the light field can be derived by model based rendering. However, capturing both spectral and light field information together is difficult because a large amount of anisotropic data needs to be measured. To solve this problem, this paper proposes a delicately designed camera system, which enlarges the chromatic aberration of the lens while eliminating its remaining aberrations (e.g., spherical aberration, coma aberration and astigmatism). With the proposed camera, light rays with different wavelengths from slices at different depths of the scene focus on the same imaging plane. Thus, by dispersing the light incident on the sensor plane into different spectral channels, the images focusing at different depths can be separated. Placing a multispectral imager at the sensor plane, we can capture a multispectral image whose channels are focused at different depths. In other words, we derive a spectrally varying focal stack whose slices are of different spectral channels. Then, a Local Linear Transformation (LLT) based algorithm is presented to reconstruct the multispectral focal stack, which contains the full information of the multispectral light field. A simplified model of our optical system is shown in Fig.~\ref{fig: thumbnail}.
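The wavelength-to-depth coupling that the proposed design exploits can be illustrated with a deliberately simple thin-lens model with Cauchy dispersion; all numbers below (Cauchy coefficients, curvature term, sensor distance) are hypothetical stand-ins, not the actual ZEMAX design described later:

```python
# Toy illustration of wavelength-dependent focusing: with a fixed sensor
# plane, each wavelength is conjugate to a different object depth.
# Hypothetical numbers throughout -- NOT the actual lens design.

def n_cauchy(lam_um, A=1.5, B=0.004):
    """Cauchy dispersion model for the refractive index (hypothetical)."""
    return A + B / lam_um ** 2

def focal_mm(lam_um, curvature=0.03897):
    """Thin-lens focal length f = 1 / ((n - 1) * (1/R1 - 1/R2))."""
    return 1.0 / ((n_cauchy(lam_um) - 1.0) * curvature)

S_IMG = 55.0  # mm, fixed sensor (image) distance

def object_dist_mm(lam_um):
    """Object depth conjugate to the sensor: 1/s = 1/f - 1/s'."""
    f = focal_mm(lam_um)
    return 1.0 / (1.0 / f - 1.0 / S_IMG)

for lam in (0.43, 0.55, 0.70):  # micrometers
    print(f"{int(lam * 1000)} nm focuses an object at {object_dist_mm(lam):.0f} mm")
```

In this toy model the focal length grows with wavelength, so each spectral channel is sharp for a different object depth, which is the effect the enlarged chromatic aberration provides.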
\begin{figure}[!ht] \vspace{-3mm} \begin{minipage}[b]{1.0\linewidth} \centerline{\includegraphics[width=7cm]{figure1.pdf}} \end{minipage} \caption{Diagram of the proposed multispectral light field acquisition method.} \label{fig: thumbnail} \vspace{-3mm} \end{figure} After acquiring the spectrally varying focal stack, we propose to transfer spectral information between different slices and fill up the vacant channels of each slice to reconstruct the multispectral focal stack. In this paper, inspired by the Local Linear Transform property introduced by Yue \textit{et al}.\cite{yue2014deblur}, we develop a Local Linear Transformation (LLT) based method to transfer the spectral information between channels. Specifically, we impose a strong global blur (e.g., Gaussian) on all the captured channels to remove the differing high frequency content between these slices caused by their different focusing planes. There exists a reasonable linear mapping, namely the Local Linear Transformation (LLT), between any two blurred channels, and the transformation is also valid for the corresponding sharp slices. To extract the LLT mappings, a gradient descent based algorithm is applied. In all, the main contributions of this paper are as follows: (a) a simple chromatic aberration enlarged camera design for multispectral light field acquisition; (b) a local linear transformation (LLT) based reconstruction method for effectively and efficiently reconstructing the multispectral light field. \section{Proposed Method} \subsection{Optical System Design} In this paper, we propose to design an optical system that focuses on planes at different depths of the scene with different spectral channels, so that we can capture the spectrally varying focal stack with a multispectral imager at once.
According to this idea, we want to enlarge the camera's chromatic aberration to enlarge the focusing range while eliminating the rest of the optical aberrations, including piston, tilt, defocus, spherical aberration, coma, astigmatism, field curvature and image distortion, to obtain promising image quality in all the channels. Here, instead of starting from scratch to design the lens, we propose a simpler method, i.e., adding a cubic glass behind a well-designed lens (all the aberrations are well corrected), as shown in Fig.~\ref{fig: lens}. With this specially designed lens set, the imaging system has focal planes at different depths in the scene. The simulation results with ZEMAX are fundamentally consistent with our theory, as shown in Fig.~\ref{fig: focal stack}. By selecting the central wavelengths of spectral channels ranging from 430nm to 700nm, the corresponding slices between plane 1 and plane 3 in Fig.~\ref{fig: focal stack} can be captured. In our paper, ten spectral channels are selected with equal intervals between the central wavelengths, so that the corresponding slices have non-uniform depth intervals. Fig.~\ref{fig: aberration}(a) presents the spot pattern of a certain point at different wavelengths. In the shown section of the spot diagram, the red rays focus nearly on the best image plane because their RMS (Root Mean Square) radius is the smallest, while other wavelengths from the same depth in the scene do not. This is a result of chromatic aberration. More specifically, the longer the wavelength is, the smaller the RMS radius of the spot will be, which is especially obvious for the blue and red rays in the figure. Fig.~\ref{fig: aberration}(b) represents the optical path difference (OPD) of the system. Still taking the on-axis object point as an example, the disparity (i.e., the OPD between different wavelengths) becomes apparent as the entrance pupil coordinate (i.e.
abscissa) increases, which indirectly shows the enhanced chromatic aberration. In other words, when the light deviates from the optical axis of the lens set, the real convergence points of different spectra become more distant spatially. Note that different colors in Fig.~\ref{fig: aberration} represent different wavelengths varying from 430nm to 700nm. \begin{figure}[!ht] \vspace{-3mm} \centering \includegraphics[width=0.9\linewidth]{figure3_1.pdf} \caption{\bf The specially designed lens set. (Color rays by: Fields)} \label{fig: lens} \vspace{-3mm} \end{figure} \begin{figure}[!ht] \vspace{-3mm} \centering \includegraphics[width=0.9\linewidth]{reverse2.pdf} \caption{\textbf{Different focal planes at different depths in the scene. (Color rays by: Wavelengths)} Object planes marked 1, 2, 3 denote depths of $d_1$, $d_2$, $d_3$ and wavelengths of 400nm, 550nm, 700nm respectively.} \label{fig: focal stack} \vspace{-3mm} \end{figure} \begin{figure}[!ht] \vspace{-3mm} \centering \subfigure[] { \includegraphics[width=0.32\linewidth]{figure4_1.pdf} } \hfill \subfigure[] { \includegraphics[width=0.58\linewidth]{figure5_1.pdf} } \caption{{\bf Diagram of optical aberrations.} (a) Section of the spot diagram. (b) Section of the OPD.} \label{fig: aberration} \vspace{-3mm} \end{figure} \subsection{Multispectral Focal Stack Reconstruction} \begin{figure}[!ht] \vspace{-3mm} \begin{minipage}[b]{1.0\linewidth} \centerline{\includegraphics[width=8.5cm]{flowchart.pdf}} \end{minipage} \caption{Flowchart of our multispectral focal stack reconstruction.} \label{fig: flowchart} \vspace{-2mm} \end{figure} Here, we present the proposed Local Linear Transformation (LLT) based multispectral focal stack reconstruction algorithm. As shown in Fig.~\ref{fig: flowchart}, we capture one channel at each depth, which is treated as the sharp channel in the LLT algorithm. The blurred channel is derived by blurring the captured image computationally.
By computing the LLT maps $\textbf{A}_{i, k}$ and $\textbf{B}_{i, k}$, we can restore the missing channels by channel transferring. According to the Local Linear Transformation (LLT) property introduced by Yue \textit{et al}.\cite{yue2014deblur}, in a local area with the same blur effect, the pixel values of different channels follow a certain linear transformation, which is valid in the same area for the sharp versions of those channels as well. In our scenario, the defocus blur varies in both the spatial and channel dimensions. It is not trivial to apply the LLT in this case directly, since there is no obvious blur-sharp pair in our application. Therefore, the LLT is used to transfer the spectral information while keeping the blur patterns, which imply the depth information of the scene. For any two slices, which are captured with different wavelengths and different local blur effects caused by different focusing distances, we apply a strong \textit{Gaussian kernel} to blur both of them to remove the differing high frequency information, so that the blurred images can be regarded as uniformly blurred with a large Gaussian kernel. Thus, the two slices are of the same blur pattern and the LLT maps can be computed from these blurred slice pairs. Given the LLT maps, the full-channel focal stacks can be restored by linearly transforming between different channels of the captured spectrally varying focal stack. Specifically, we blur all the channels with a large Gaussian kernel (${\bf G}_{\sigma}$) first. Empirically, a standard deviation of $\sigma = 10$ is strong enough for images captured in practice. Then, according to the LLT property \cite{yue2014deblur}, we know there exist local linear transforms between the blurred channels and the original channels.
The relationship can be described as follows: \begin{subequations} \begin{equation} \begin{aligned} \textbf{I}_{\lambda_i} = \textbf{A}_{i, k}\odot \textbf{I}_{\lambda_k} + \textbf{B}_{i, k} \end{aligned} \end{equation} \begin{equation} \begin{aligned} \textbf{L}_{d_k, \lambda_i} = \textbf{A}_{i, k}\odot \textbf{L}_{d_k, \lambda_k} + \textbf{B}_{i, k}, \end{aligned} \label{restoreEquation} \end{equation} \end{subequations} where $\textbf{I}_{\lambda_i}$ and $\textbf{I}_{\lambda_k}$ are two blurred channels, which are computed by blurring the original sharp slice pair $\textbf{L}_{d_i, \lambda_i}$ and $\textbf{L}_{d_k, \lambda_k}$. Since the Gaussian blur kernel applied here is very large, the blurred images can be regarded as uniformly blurred. Thus $\textbf{I}_{\lambda_i}$ and $\textbf{I}_{\lambda_k}$ do not contain the depth information, so we remove the subscripts $d_i$ and $d_k$ here. $\textbf{L}_{d_k, \lambda_i}$ and $\textbf{L}_{d_k, \lambda_k}$ are the corresponding sharp channels. $\textbf{A}_{i, k}$, $\textbf{B}_{i, k}$ are the local linear transformation maps. $d_k$ and $\lambda_i$ represent the depth in the scene and the corresponding wavelength respectively. $\odot$ denotes element-wise multiplication of matrices. To compute the LLT maps $\textbf{A}_{i, k}$ and $\textbf{B}_{i, k}$, an objective function is introduced: \begin{equation} \label{ObFunction} \begin{aligned} \mathop{\min} E &=\|\textbf{A}_{i, k}\odot\textbf{I}_{\lambda_k}+\textbf{B}_{i, k}-\textbf{I}_{\lambda_i}\|^2_2\\ &+\alpha\|\textbf{A}_{i, k}\odot\nabla\textbf{I}_{\lambda_k}-\nabla\textbf{I}_{\lambda_i}\|^2_2\\ &+\beta(\|\nabla\textbf{A}_{i, k}\|^2_2+\|\nabla\textbf{B}_{i, k}\|^2_2), \end{aligned} \end{equation} where $\alpha$ and $\beta$ are the weights of the constraint terms, set to 1 and 0.1 respectively according to \cite{yue2014deblur}, and $\nabla$ is the gradient operator.
The traditional gradient descent method is applied to optimize Eq.~\ref{ObFunction}. Specifically, the derivatives of Eq.~\ref{ObFunction} can be computed respectively by \begin{equation} \label{derivative} \begin{aligned} g_A &= 2 \textbf{I}_{\lambda_k}\odot(\textbf{A}_{i, k}\odot\textbf{I}_{\lambda_k}+\textbf{B}_{i, k}-\textbf{I}_{\lambda_i})\\ &+2\alpha\nabla\textbf{I}_{\lambda_k}\odot(\textbf{A}_{i, k}\odot\nabla\textbf{I}_{\lambda_k}-\nabla\textbf{I}_{\lambda_i}) + 2\beta\nabla^T\nabla\textbf{A}_{i, k}\\ g_B &= 2 (\textbf{A}_{i, k}\odot\textbf{I}_{\lambda_k}+\textbf{B}_{i, k}-\textbf{I}_{\lambda_i})+2\beta\nabla^T\nabla\textbf{B}_{i, k}. \end{aligned} \end{equation} By iteratively searching along the gradient directions given by Eq.~\ref{derivative}, $\textbf{A}_{i, k}$ and $\textbf{B}_{i, k}$ can be derived. Fig.~\ref{fig: glt} shows an example of an LLT map pair $\textbf{A}_{i, k}$ and $\textbf{B}_{i, k}$. \begin{figure}[t] \vspace{-3mm} \centering \subfigure { \includegraphics[width=0.35\linewidth]{Av.pdf} } \subfigure { \includegraphics[width=0.35\linewidth]{Bv.pdf} } \caption{An example of an LLT map pair: $\textbf{A}_{i, k}$ (left) and $\textbf{B}_{i, k}$ (right).} \label{fig: glt} \vspace{-2mm} \end{figure} \begin{figure*}[t] \vspace{-3mm} \centering \subfigure[$d_1$] { \includegraphics[width=0.16\linewidth]{0_00.pdf} } \hspace{-2.5ex} \subfigure[$d_1$] { \includegraphics[width=0.16\linewidth]{0_r00.pdf} } \subfigure[$d_8$] { \includegraphics[width=0.16\linewidth]{0_07.pdf} } \hspace{-2.5ex} \subfigure[$d_8$] { \includegraphics[width=0.16\linewidth]{0_r07.pdf} } \subfigure[$d_{10}$] { \includegraphics[width=0.16\linewidth]{0_09.pdf} } \hspace{-2.5ex} \subfigure[$d_{10}$] { \includegraphics[width=0.16\linewidth]{0_r09.pdf} } \caption{Comparison between the ground-truth images (a), (c), (e) and our restored images (b), (d), (f).} \label{MinionsPic} \vspace{-3mm} \end{figure*} With $\textbf{A}_{i, k}$ and $\textbf{B}_{i, k}$, we can transfer the spectral information of $I_{d_i, \lambda_i}$ to
$I_{d_j, \lambda_j}$. By transforming the spectral information of all the channels to a certain channel $I_{d_i, \lambda_i}$, a multispectral slice focusing on depth $d_i$ is recovered. Similarly, the rest of the slices of the focal stack can be reconstructed by transferring the spectral information to the remaining spectrally varying slices respectively. \section{Experimental Result} We test the proposed chromatic aberration enlarged camera and the LLT-based reconstruction algorithm on multispectral focal stacks synthesized from both an on-line dataset, i.e., LFSD \cite{li2014saliency}, and real captured images. The focal stacks in the dataset are composed of ten RGB slices focused at different depths. In order to obtain the multispectral images, we synthesize pseudo spectra from the RGB measurements using the training based algorithm of \cite{nguyen2014training}. In our experiment, we use ten slices with spectral channels centered at 430nm, 460nm, ..., 700nm as the input; the corresponding slices are denoted by $d_1$, $d_2$, ..., $d_{10}$. The ground truth is the full-channel focal stack composed of ten multispectral slices, and each of the slices has ten spectral channels. In the experiment, we select a single channel of each multispectral slice to simulate our focal stack camera. The entire multispectral focal stacks are restored by using the proposed LLT-based reconstruction algorithm. To quantitatively evaluate the performance, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) are employed. We also present reconstructed images to demonstrate the performance qualitatively.
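The gradient-descent estimation of the LLT maps (Eqs.~\ref{ObFunction} and \ref{derivative}) can be sketched in NumPy on synthetic data; the finite-difference operators, step size, and iteration count below are our own illustrative choices, not specified by the paper:

```python
# Toy sketch of the LLT map estimation (Eqs. 2-3): gradient descent on
# E = ||A*Ik + B - Ii||^2 + a*||A*grad(Ik) - grad(Ii)||^2
#     + b*(||grad(A)||^2 + ||grad(B)||^2).
# Discretization (np.gradient, periodic Laplacian) and step size are our
# own choices; the paper does not specify them.
import numpy as np

def laplacian(x):
    # periodic discrete Laplacian; note grad^T grad = -laplacian
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def llt_maps(Ik, Ii, alpha=1.0, beta=0.1, lr=0.05, iters=400):
    """Estimate per-pixel maps A, B with Ii ~ A*Ik + B (Eq. 3 gradients)."""
    A, B = np.ones_like(Ik), np.zeros_like(Ik)
    gky, gkx = np.gradient(Ik)
    giy, gix = np.gradient(Ii)
    for _ in range(iters):
        r = A * Ik + B - Ii
        gA = (2.0 * Ik * r
              + 2.0 * alpha * (gkx * (A * gkx - gix) + gky * (A * gky - giy))
              - 2.0 * beta * laplacian(A))
        gB = 2.0 * r - 2.0 * beta * laplacian(B)
        A -= lr * gA
        B -= lr * gB
    return A, B

# Synthetic blurred channel pair related by a smooth linear map.
rng = np.random.default_rng(0)
Ik = rng.random((32, 32))
for _ in range(20):                 # crude smoothing stands in for G_sigma
    Ik = Ik + 0.2 * laplacian(Ik)
Ii = 0.8 * Ik + 0.1
A, B = llt_maps(Ik, Ii)
err = np.abs(A * Ik + B - Ii).mean()
print(f"mean reconstruction error: {err:.4f}")
```

On this synthetic pair the recovered maps reproduce the linear relation between the two blurred channels, which is the step that precedes the channel transfer of Eq.~\ref{restoreEquation}.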
\begin{table}[htbp] \vspace{-1mm} \caption{Quantitative evaluation for four selected depths and channels of our results.} \tiny \centering \begin{tabular}{ccccccccc} \toprule \multirow{2}[4]{*}{} & \multicolumn{2}{c}{$\lambda=$430nm} & \multicolumn{2}{c}{$\lambda=$520nm} & \multicolumn{2}{c}{$\lambda=$610nm} & \multicolumn{2}{c}{$\lambda=$700nm} \\ \cmidrule{2-9} & \multicolumn{1}{c}{PSNR} & \multicolumn{1}{c}{SSIM} & \multicolumn{1}{c}{PSNR} & \multicolumn{1}{c}{SSIM} & \multicolumn{1}{c}{PSNR} & \multicolumn{1}{c}{SSIM} & \multicolumn{1}{c}{PSNR} & \multicolumn{1}{c}{SSIM} \\ \midrule $d_1$ & Inf & 1.0000 & 34.80 & 0.9799 & 35.08 & 0.9811 & 35.70 & 0.9841 \\ \midrule $d_4$ & 39.79 & 0.9768 & Inf & 1.0000 & 37.76 & 0.9829 & 35.26 & 0.9729 \\ \midrule $d_7$ & 42.27 & 0.9867 & 36.28 & 0.9808 & Inf & 1.0000 & 42.26 & 0.9959 \\ \midrule $d_{10}$ & 38.20 & 0.9801 & 30.44 & 0.9684 & 33.55 & 0.9859 & Inf & 1.0000 \\ \bottomrule \end{tabular}% \label{MinionsTab}% \vspace{-2mm} \end{table} \paragraph{Quantitative evaluation.} The average quantitative measurements of restored images are shown in Tab.~\ref{MinionsTab}. It is obvious that the proposed method can achieve promising performance in terms of both PSNR and SSIM metrics. 
\begin{figure}[h] \centering \begin{minipage}[c]{1.0\linewidth} \scriptsize\rotatebox{90}{$\lambda = 430$nm} \includegraphics[width=0.23\linewidth]{pseu_d2_209.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d4_209.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d7_209.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d11_209.pdf} \end{minipage} \begin{minipage}[c]{1.0\linewidth} \scriptsize\rotatebox{90}{$\lambda = 520$nm} \includegraphics[width=0.23\linewidth]{pseu_d2_212.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d4_212.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d7_212.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d11_212.pdf} \end{minipage} \begin{minipage}[c]{1.0\linewidth} \scriptsize\rotatebox{90}{$\lambda = 610$nm} \includegraphics[width=0.23\linewidth]{pseu_d2_214.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d4_214.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d7_214.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d11_214.pdf} \end{minipage} \vspace{0.25cm} \begin{minipage}[c]{1.0\linewidth} \scriptsize\rotatebox{90}{$\lambda = 700$nm} \includegraphics[width=0.23\linewidth]{pseu_d2_217.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d4_217.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d7_217.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d11_217.pdf} \end{minipage} \begin{minipage}[c]{1.0\linewidth} \scriptsize\rotatebox{90}{Details of results} \includegraphics[width=0.23\linewidth]{pseu_d2.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d4.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d7.pdf} \hfill \includegraphics[width=0.23\linewidth]{pseu_d11.pdf} \end{minipage} \begin{minipage}[b]{.25\linewidth} \centering \centerline{$d_1$}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.2\linewidth} \centering \centerline{$d_4$}\medskip \end{minipage} \hfill \begin{minipage}[b]{.20\linewidth} \centering \centerline{$d_7$}\medskip 
\end{minipage} \hfill \begin{minipage}[b]{0.20\linewidth} \centering \centerline{$d_{10}$}\medskip \end{minipage} \vspace{-3mm} \caption{\textbf{The experimental multi-spectral focal stack result on a real-captured scene.} Top 4 rows: selected input at depths $d_1$, $d_4$, $d_7$, $d_{10}$ with spectral wavelengths 430nm, 520nm, 610nm, 700nm. Bottom: details of the restored results.} \label{fig:multispec} \vspace{-3mm} \end{figure} \paragraph{Qualitative evaluation.} We also show the results of qualitative evaluations. Fig.~\ref{MinionsPic} shows side-by-side comparisons on synthetic data between the ground truth and our results. To facilitate comparison, we compute RGB color images from the ground truth and our recovered multispectral slices. The recovered results are very close to the ground truth. Besides, we also test the method on real captured data. Fig.~\ref{fig:multispec} is an example of our reconstructed multispectral focal stack on real captured images. The channels of our results at different depths (i.e. $d_1$, $d_4$, $d_7$ and $d_{10}$) are shown in Fig.~\ref{fig:multispec}. Each row represents a selected wavelength, i.e. 430nm, 520nm, 610nm and 700nm, and close-ups are shown at the bottom. From the results, we can see that the proposed method performs well on both fine details and smooth areas. \section{Conclusion and Discussion} In this paper, we have proposed a chromatic-aberration-enlarged camera and an LLT-based reconstruction algorithm for acquiring multispectral focal stacks. The proposed method achieves promising performance in both quantitative and qualitative evaluations in our experiments. Limited by its computational complexity, the proposed method cannot yet run in real time. We will try to further simplify and optimize the algorithm in the future. \bibliographystyle{IEEEbib}
\section*{Introduction} \label{sec_intro-prelim} An operator space, more precisely a concrete operator space, is a (closed) subspace of a C$^*$-algebra. We refer to \cite{Ef_Ru_B} and \cite{Pis_B} for basic knowledge on operator spaces and operator space tensor products. Let $\mathcal{B}(\mathcal{H})$ denote the C$^*$-algebra of all bounded linear maps on a Hilbert space $\mathcal{H}$. For two operator spaces $E,F$ which are subspaces of C$^*$-algebras $\mathcal{A}\subseteq \mathcal{B}(\mathcal{H}_1), \mathcal{B}\subseteq \mathcal{B}(\mathcal{H}_2)$ respectively, define the \emph{min} norm, denoted by $\Vert\cdot\Vert_{min}$, on the algebraic tensor product $E\bigotimes F$ using the natural embeddings, $E\bigotimes F\subseteq \mathcal{A}\bigotimes \mathcal{B}\subseteq \mathcal{B}(\mathcal{H}_1)\bigotimes \mathcal{B}(\mathcal{H}_2)\subseteq \mathcal{B}(\mathcal{H}_1\bigotimes\mathcal{H}_2)$. The completion of the algebraic tensor product $E\bigotimes F$ under this norm, denoted by $E\bigotimes^{min} F$, is the minimal tensor product, often called the operator space injective tensor product or the spatial tensor product. Clearly, $E\bigotimes^{min} F$ is an operator space as it is a closed subspace of the C$^*$-algebra $\mathcal{B}(\mathcal{H}_1\bigotimes\mathcal{H}_2)$. For an operator space $E\subseteq \mathcal{B}(\mathcal{H})$, there are natural norms on $M_n(E)$, the space of all $n\times n$ matrices with entries from $E$, using the identification $M_n(E)=M_n\bigotimes^{min}E$ or, equivalently, using the embedding $M_n(E)\subseteq M_n(\mathcal{B}(\mathcal{H}))=\mathcal{B}(\mathcal{H}^n)$, where $M_n$ is the C$^*$-algebra of all $n\times n$ matrices with scalar entries, identified with $\mathcal{B}(\mathbb{C}^n)$. This sequence of norms satisfies Ruan's axioms \cite{Ef_Ru_B}, and, surprisingly, these norms on $E$ uniquely (in the sense described in \cite{Ef_Ru_B}) determine the embedding $E\subseteq \mathcal{B}(\mathcal{H})$.
Ruan proved that any vector space $E$ together with a sequence of matrix norms $\Vert\cdot \Vert=(\Vert\cdot\Vert_n)_{n\in \mathbb{N}}$, where $\Vert\cdot\Vert_n$ is a matrix norm on $M_n(E)$ satisfying \emph{Ruan's axioms} \cite{Ru}, can be embedded as a subspace of some C$^*$-algebra, say $\mathcal{A}_E$, such that the sequence of matrix norms induced by the inclusion $E\subseteq \mathcal{A}_E$ coincides with $\Vert\cdot\Vert$. An operator space defined in this way is commonly called an abstract operator space. For operator spaces $E$ and $F$, a linear map $\phi:E\to F$ is said to be \emph{completely bounded} (cb in short) if the associated maps $\phi^{(n)}:M_n(E)\to M_n(F)$ defined as $\phi^{(n)}([e_{ij}]):=[\phi(e_{ij})]$, $n\in \mathbb{N}$, are uniformly bounded, and is said to be a \emph{complete isometry} if the map $\phi^{(n)}$ turns out to be an isometry for every $n$. Intuitively, one may think of cb maps as those which respect the matrix norms at each level. The set of all cb maps, denoted by $\mathcal{CB}(E,F)$, forms a normed linear space with the cb norm defined as $\Vert \phi\Vert_{cb}:=\sup_{n\in \mathbb{N}}\Vert \phi^{(n)}\Vert$, for $\phi\in \mathcal{CB}(E,F)$. It can further be given an operator space structure by identifying $M_n(\mathcal{CB}(E,F))$ with $\mathcal{CB}(E,M_n(F))$. This identification defines an operator space structure on the dual of an operator space $E$ when we choose $F=\mathbb{C}$. There is no doubt that the most appropriate morphisms in the category of operator spaces are completely bounded maps. However, there are also other special kinds of linear maps between operator spaces; we shall discuss two of them in this article. In Section \ref{sec_weighted-cb}, we introduce weighted cb maps and, in Section \ref{sec_lambda-cb}, $\Lambda_\mu$-cb maps. The sets of all weighted cb and $\Lambda_\mu$-cb maps are given natural operator space structures.
In Section \ref{sec_bil}, for three operator spaces $E,F$, and $G$, we introduce a certain class of bilinear maps from $E\times F$ to $G$, which we call completely $\lambda_\mu$-bounded bilinear maps, in a fashion similar to how completely bounded bilinear maps and jointly completely bounded bilinear maps are defined \cite{Ef_Ru,Bl_Pn}. With a suitable choice of $\lambda$ and $\mu$, the set of all completely $\lambda_\mu$-bounded bilinear maps, denoted by $\mathrm{CB}_\lambda^\mu(E\times F,G)$, becomes an operator space in a natural way, and we associate a tensor norm on the algebraic tensor product $E\bigotimes F$ in such a way that its dual becomes completely isometric to the space $\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})$. \section{Preliminaries and notations} \label{sec_prelim-notations} Let $E$ be a concrete operator space. Then the matrix norms on $M_n(E)$ satisfy the following two conditions: \begin{enumerate}[\quad(R1)] \item $\Vert e_1\oplus e_2\Vert_{n+m}\leq \max\{\Vert e_1\Vert_n,\Vert e_2\Vert_m\}$ for any $e_1\in M_n(E)$ and $e_2\in M_m(E)$. \item $\Vert \alpha e\beta\Vert_m\leq \Vert\alpha\Vert \Vert e\Vert_n \Vert \beta\Vert$ for any $e\in M_n(E)$, $\alpha\in M_{m\times n}$ and $\beta\in M_{n\times m}$. \end{enumerate} We refer to the above two conditions as Ruan's axioms. Whenever $X,Y$ are normed linear spaces, we shall denote by $\mathcal{L}(X,Y)$ and $\mathcal{B}(X,Y)$, respectively, the spaces of all linear maps and bounded linear maps from $X$ to $Y$. Let $E,F$ be any two operator spaces. Every completely bounded map from $E$ to $F$ is bounded, and the converse holds whenever $F$ is finite dimensional (or the map has finite rank) or if $F$ is a subspace of a commutative C$^*$-algebra. A well-known example of a bounded map which is not cb is the usual involution on $\mathcal{B}(\ell^2)$ given by $T\mapsto T^*$, where $T^*$ denotes the adjoint of a bounded linear operator $T$ on $\ell^2$.
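To see the failure of complete boundedness concretely at a finite level, one can check that the $n$-th amplification of the transpose map sends the norm-one ``swap'' matrix $\sum_{i,j}\varepsilon_{ij}\otimes\varepsilon_{ji}$ to an element of norm $n$. The following numeric sketch (an illustration assuming Python with numpy, not part of the original text) performs this check for $n=4$.

```python
import numpy as np

def unit(n, i, j):
    """The elementary matrix epsilon_ij in M_n."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def amplified_transpose_gap(n):
    # u = sum_{i,j} eps_ij (x) eps_ji is the swap matrix, of operator norm 1
    u = sum(np.kron(unit(n, i, j), unit(n, j, i)) for i in range(n) for j in range(n))
    # transposing each block eps_ji into eps_ij gives sum_{i,j} eps_ij (x) eps_ij
    tu = sum(np.kron(unit(n, i, j), unit(n, i, j)) for i in range(n) for j in range(n))
    return np.linalg.norm(u, 2), np.linalg.norm(tu, 2)

norm_u, norm_tu = amplified_transpose_gap(4)   # (1.0, 4.0): the amplification norm grows with n
```

Since $\sup_n\Vert\phi^{(n)}\Vert$ is then infinite, the transpose (and hence the involution) is bounded but not completely bounded.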
The smallest and the largest Banach space cross tensor norms on the algebraic tensor product of two Banach spaces $X,Y$ are, respectively, the Banach space injective norm, denoted by $\Vert\cdot\Vert_{\nu}$, and the Banach space projective norm, denoted by $\Vert\cdot\Vert_{\gamma}$, defined as $\Vert u\Vert_\nu=\sup \{|\sum_{i=1}^n f(x_i)g(y_i)|\mid f\in X^*_1,g\in Y^*_1\}$, where $u=\sum_{i=1}^n x_i\otimes y_i$, and $\Vert u\Vert_\gamma=\inf \{\sum_{i=1}^n\Vert x_i\Vert\Vert y_i\Vert\mid u=\sum_{i=1}^n x_i\otimes y_i\}$. We refer the reader to \cite{Ry_B,Bl} for all necessary details on tensor products of Banach spaces and operator algebras. If $\Vert\cdot\Vert_\mu$ is a Banach space tensor norm, we shall always denote by $X\bigotimes_\mu Y$ the algebraic tensor product with this norm, and its completion will be denoted by $X\bigotimes^\mu Y$. Let $M_\infty(E)$ denote the set of all infinite matrices $[e_{ij}]_{1\leq i,j<\infty}$ with only finitely many non-zero entries from $E$. Clearly, $M_\infty(\mathbb{C})$ can naturally be embedded as a subspace of $\mathcal{K}(\ell^2)$, the C$^*$-algebra of all compact operators on $\ell^2$. Thus we may also identify $M_\infty(E)$ with $M_\infty(\mathbb{C})\bigotimes E$, viewed as a subspace of $\mathcal{K}(\ell^2)\bigotimes^{min} E$. An element $u\in M_\infty(E\bigotimes F)$, where $E$ and $F$ are operator spaces, can be represented in three special ways as $u=\alpha(e\otimes f)\beta$, $u=\alpha(e\odot f)\beta$ and $u=\alpha(e\bullet f)\beta$, where $e=[e_{ij}]\in M_\infty(E),f=[f_{kl}]\in M_\infty(F),\alpha,\beta\in M_\infty(\mathbb{C})$ and $e\otimes f=[e_{ij}\otimes f_{kl}]$, $e\odot f=[\sum_{r=1}^\infty e_{ir}\otimes f_{rj}]$ and $e\bullet f=[e_{ij}\otimes f_{ij}]$.
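For a quick feel of the two extreme norms: when $X=Y=\ell^2_n$, an element $u=\sum_i x_i\otimes y_i$ may be identified with the matrix $\sum_i x_iy_i^T$, and then $\Vert\cdot\Vert_\nu$ becomes the spectral norm and $\Vert\cdot\Vert_\gamma$ the trace (nuclear) norm. The sketch below (an aside assuming Python with numpy, not part of the original text; this identification is specific to Hilbert spaces) checks the cross property on an elementary tensor and the inequality $\Vert u\Vert_\nu\leq\Vert u\Vert_\gamma$.

```python
import numpy as np

rng = np.random.default_rng(1)

# u = sum_i x_i (x) y_i in l^2_5 (x) l^2_5, identified with the matrix U
U = rng.standard_normal((5, 5))
nu = np.linalg.norm(U, 2)         # injective norm = largest singular value
gamma = np.linalg.norm(U, 'nuc')  # projective norm = sum of singular values

# on an elementary tensor x (x) y both norms equal ||x|| ||y|| (cross property)
x, y = rng.standard_normal(5), rng.standard_normal(5)
elem = np.outer(x, y)
cross = np.linalg.norm(x) * np.linalg.norm(y)
```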
One can define $\Vert u\Vert_\wedge=\inf\{\Vert\alpha\Vert\Vert e\Vert\Vert f\Vert\Vert\beta\Vert\mid u=\alpha(e\otimes f)\beta\}$, $\Vert u\Vert_h=\inf\{\Vert\alpha\Vert\Vert e\Vert\Vert f\Vert\Vert\beta\Vert\mid u=\alpha(e\odot f)\beta\}$ and $\Vert u\Vert_s=\inf\{\Vert\alpha\Vert\Vert e\Vert\Vert f\Vert\Vert\beta\Vert\mid u=\alpha(e\bullet f)\beta\}$, which give three operator space structures to $E\bigotimes F$, whose completions with respect to these norms are called the operator space projective tensor product, the Haagerup tensor product and the Schur tensor product, respectively \cite{AD_DW,VR_AK}. All necessary details on operator space tensor products can be found in \cite{Bl,Ef_Ru_B} and \cite{Pis_B}. An operator space tensor product $\bigotimes^\mu$ is said to be injective if whenever $\phi_1:E_1\to F_1$ and $\phi_2:E_2\to F_2$ are complete isometries, so is the tensor product map $\phi_1\otimes\phi_2:E_1\bigotimes_\mu E_2\to F_1\bigotimes_\mu F_2$. On the other hand, if $\phi_1\otimes \phi_2$ turns out to be completely bounded whenever $\phi_1$ and $\phi_2$ are completely bounded, with $\Vert\phi_1\otimes\phi_2\Vert_{cb}\leq \Vert\phi_1\Vert_{cb}\Vert\phi_2\Vert_{cb}$, then we say $\mu$ is functorial. The operator space injective and the Haagerup tensor products are well-known examples of injective tensor products, while most of the tensor products that we consider, including the operator space projective and the Schur tensor products, are functorial \cite{Ef_Ru_B,VR_AK}. An operator space tensor norm $\Vert\cdot\Vert_\mu$ is said to be matrix subcross if for any $e\in M_n(E)$ and $f\in M_m(F)$, $\Vert e\otimes f\Vert_\mu\leq \Vert e\Vert\Vert f\Vert$, and if equality always holds then we call it matrix cross. The elementary matrix in $M_n$ whose $ij^\text{th}$ entry is 1 and all other entries are zero will be denoted by $\varepsilon_{ij}$, so that any $e=[e_{ij}]\in M_n(E)$ can be written as $e=\sum_{i,j}e_{ij}\otimes \varepsilon_{ij}$.
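At the scalar level, the $e\bullet f$ pairing is the entrywise (Schur) product, and the bound $\Vert A\circ B\Vert\leq\Vert A\Vert\Vert B\Vert$ can be seen by realizing $A\circ B$ as a compression of $A\otimes B$ along the isometry $e_i\mapsto e_i\otimes e_i$. The following numeric sketch (an illustration assuming Python with numpy, not part of the original text) verifies both facts on random matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

schur = A * B                      # entrywise (Schur) product [a_ij * b_ij]

# A o B is the compression of A (x) B by the isometry V: e_i -> e_i (x) e_i,
# which yields the norm bound ||A o B|| <= ||A (x) B|| = ||A|| ||B||
V = np.zeros((n * n, n))
for i in range(n):
    V[i * n + i, i] = 1.0
compressed = V.T @ np.kron(A, B) @ V
```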
\section{Weighted cb maps on operator spaces} \label{sec_weighted-cb} Let $E,F$ be two operator spaces and $\lambda=(\lambda_n: M_n\to M_n)_{n\in \mathbb{N}}$ be a uniformly bounded sequence of non-zero bounded linear maps. A linear map $\phi:E\to F$ is said to be weighted completely bounded with $\lambda$ as the weight ($\lambda$-cb in short) if the associated maps $\phi\otimes \lambda_n: E\bigotimes_{min}M_n\to F\bigotimes_{min}M_n$ are uniformly bounded, i.e. $\sup_{n\in \mathbb{N}}\Vert\phi\otimes \lambda_n\Vert<\infty$. If $\phi$ is $\lambda$-cb, then we set $\Vert\phi\Vert_{cb}^\lambda=\sup_{n\in \mathbb{N}}\Vert\phi\otimes \lambda_n\Vert$. The collection of all $\lambda$-cb maps from $E$ to $F$, denoted by $\mathcal{CB}_\lambda(E,F)$, is a linear subspace of $\mathcal{L}(E,F)$ and $\Vert\cdot\Vert_{cb}^\lambda$ is a norm on it. We leave the details to the reader. A special kind of weight is the one obtained by matrix conjugation by unitaries. If $U_n\in M_n$ is a unitary matrix for each $n\in \mathbb{N}$, we may define $\lambda_n:M_n\to M_n$ as $\lambda_n(A)=U_n^{-1}AU_n$. In this case, if $\phi:E\to F$ is linear, then $\phi\otimes\lambda_n$ acts on $M_n(E)$ as $(\phi\otimes\lambda_n)(e)=U_n^{-1}\phi^{(n)}(e)U_n$ for any $e\in M_n(E)$. Then, $\mathcal{CB}_\lambda(E,F)=\mathcal{CB}(E,F)$ isometrically, because, for any $e\in M_n(E)$ and any unitary $U\in M_n$, we have $\Vert U^{-1}eU\Vert=\Vert e\Vert$, which easily follows from Ruan's axiom R2. Another interesting example is the choice of $\lambda_n$ as the transpose map on $M_n$ for every $n$. Here, $\lambda_n$ is an isometry for every $n$. Clearly, for any $\phi:E\to F$ and $e=[e_{ij}]\in M_n(E)$, we have $(\phi\otimes \lambda_n)(e)=[\phi(e_{ji})]$. Thus, it easily follows that the identity map on $\mathcal{K}(\ell^2)$ fails to be $\lambda$-cb, whereas the adjoint map turns out to be $\lambda$-cb. \begin{prop} \label{prop_lambdacb_composition} Let $E,F,G$ be operator spaces.
If $\phi\in \mathcal{CB}(E,F)$ and $\psi\in \mathcal{CB}_\lambda(F,G)$, or $\phi\in \mathcal{CB}_\lambda(E,F)$ and $\psi\in \mathcal{CB}(F,G)$, then $\psi\circ\phi\in \mathcal{CB}_\lambda(E,G)$. \begin{proof} Let $e\in M_n(E)$. Consider \[ ((\psi\circ\phi)\otimes \lambda_n)(e)=\left((\psi\otimes \lambda_n)\circ \phi^{(n)}\right)(e)=\left(\psi^{(n)}\circ(\phi\otimes \lambda_n)\right)(e). \] Thus, if $\phi$ is cb and $\psi$ is $\lambda$-cb, then \[ \Vert((\psi\circ\phi)\otimes \lambda_n)(e)\Vert=\left\Vert\left((\psi\otimes \lambda_n)\circ \phi^{(n)}\right)(e)\right\Vert\leq \Vert\psi\Vert_{cb}^\lambda\Vert\phi\Vert_{cb}\Vert e\Vert. \] Hence, $\psi\circ\phi$ is $\lambda$-cb with $\Vert\psi\circ\phi\Vert_{cb}^\lambda\leq \Vert\psi\Vert_{cb}^\lambda\Vert\phi\Vert_{cb}$. A similar calculation works if we take $\phi$ to be $\lambda$-cb and $\psi$ to be cb. \end{proof} \end{prop} As in the case of cb maps, using the uniform boundedness of the sequence $\lambda$, we can impose conditions on the space $F$ so that any bounded linear map $\phi:E\to F$ becomes $\lambda$-cb for any $\lambda$. \begin{prop} \label{prop_lambdacb} Let $E,F$ be operator spaces and $\phi\in \mathcal{B}(E,F)$. Then, $\phi\in \mathcal{CB}_\lambda(E,F)$ if any of the following conditions is satisfied. \begin{enumerate}[(i)] \item\label{prop_lambdacb_functionals} $F=\mathbb{C}$, i.e. $\phi$ is a bounded linear functional. \item\label{prop_lambdacb_finiterank} $F$ is finite dimensional or $\phi$ is of finite rank. \item\label{prop_lambdacb_commrange} $F$ is a subspace of some commutative C$^*$-algebra. \end{enumerate} \begin{proof} \begin{enumerate}[(i)] \item Consider $\phi\otimes \lambda_n:M_n(E)\to M_n(\mathbb{C})$ given by $e=[e_{ij}]\mapsto \sum_{i,j}\phi(e_{ij})\otimes \lambda_n(\varepsilon_{ij})=\lambda_n(\phi^{(n)}(e))$. Since a bounded linear functional is automatically completely bounded with $\Vert\phi\Vert_{cb}=\Vert\phi\Vert$, we have \[ \Vert (\phi\otimes \lambda_n)(e)\Vert=\Vert \lambda_n(\phi^{(n)}(e))\Vert\leq \sup_{n\in \mathbb{N}}\Vert \lambda_n\Vert\Vert\phi\Vert\Vert e\Vert.
\] Thus $\phi$ is $\lambda$-cb with $\Vert\phi\Vert_{cb}^\lambda\leq \sup_{n\in \mathbb{N}}\Vert \lambda_n\Vert\Vert\phi\Vert$. \item Let $\{f_k\mid 1\leq k\leq n\}$ be an Auerbach basis for the range of $\phi$, i.e. the $f_k$ are unit vectors and there exist bounded linear functionals $g_k$, $1\leq k\leq n$, such that $g_i(f_j)=\delta_{i,j}$. Without loss of generality we can assume that $g_k$ is defined on the whole of $F$. For each $k$, the bounded linear functional $g_k\circ\phi$ being $\lambda$-cb and the map $\theta_k:\mathbb{C}\to F$, $c\mapsto cf_k$, being cb, we can conclude from Proposition \ref{prop_lambdacb_composition} that $\theta_k\circ (g_k\circ \phi)$ is $\lambda$-cb, and hence $\phi=\sum_{k=1}^n \theta_k\circ g_k\circ\phi$ is also $\lambda$-cb. \item By the injectivity of the minimal tensor product, without loss of generality we can assume that $F=C(X)$, the C$^*$-algebra of all continuous scalar valued functions on a compact Hausdorff space $X$. Moreover, we have the identification $C(X)\bigotimes_{min}M_n=C(X,M_n)$, where the latter is the C$^*$-algebra of all continuous $M_n$ valued functions on $X$. Consider $\phi\otimes \lambda_n:M_n(E)\to C(X,M_n)$ given by $e=[e_{ij}]\mapsto f$, where $f:X\to M_n$ is the function $x\mapsto \lambda_n[\phi(e_{ij})(x)]=\lambda_n(\phi^{(n)}(e)(x))$. Hence \[ \Vert (\phi\otimes \lambda_n)(e)\Vert=\sup_{x\in X}\Vert \lambda_n(\phi^{(n)}(e)(x))\Vert\leq \Vert \lambda_n\Vert\Vert\phi\Vert_{cb}\Vert e\Vert\leq \sup_{n\in \mathbb{N}}\Vert \lambda_n\Vert\Vert\phi\Vert\Vert e\Vert. \] Thus $\phi$ is $\lambda$-cb with $\Vert\phi\Vert_{cb}^\lambda\leq \sup_{n\in \mathbb{N}}\Vert \lambda_n\Vert\Vert\phi\Vert$.\qedhere \end{enumerate} \end{proof} \end{prop} Now we shall give an operator space structure to $\mathcal{CB}_\lambda(E,F)$. \begin{lem} Let $E,F$ be operator spaces and $\phi_{ij}\in \mathcal{CB}_\lambda(E,F)$ for $1\leq i,j\leq n$. Then, the map $\phi:E\to M_n(F)$ defined as $\phi(e)=[\phi_{ij}(e)]$ is $\lambda$-cb.
\begin{proof} Consider $\phi\otimes \lambda_m:E\bigotimes M_m\to M_n(F)\bigotimes M_m$. For $e=[e_{kl}]=\sum_{k,l}e_{kl}\otimes \varepsilon_{kl}\in E\bigotimes M_m$, we have \[ (\phi\otimes\lambda_m)(e)=\sum_{k,l}\phi(e_{kl})\otimes \lambda_m(\varepsilon_{kl})= \sum_{k,l}[\phi_{ij}(e_{kl})]_{i,j}\otimes \lambda_m(\varepsilon_{kl})=[(\phi_{ij}\otimes \lambda_m)(e)]_{ij}. \] Thus \[ \Vert (\phi\otimes\lambda_m)(e)\Vert=\Vert [(\phi_{ij}\otimes \lambda_m)(e)]_{ij}\Vert\leq n^2\max_{i,j}\Vert(\phi_{ij}\otimes\lambda_m)(e)\Vert\leq n^2\max_{i,j}\Vert\phi_{ij}\Vert_{cb}^\lambda\Vert e\Vert, \] i.e. $\phi$ is $\lambda$-cb with $\Vert \phi\Vert_{cb}^\lambda\leq n^2\max_{1\leq i,j\leq n}\Vert\phi_{ij}\Vert_{cb}^{\lambda}$. \end{proof} \end{lem} As in the case of $\mathcal{CB}(E,F)$, we can associate a sequence of matrix norms on $\mathcal{CB}_\lambda(E,F)$ using the identification $M_n(\mathcal{CB}_\lambda(E,F))=\mathcal{CB}_\lambda(E,M_n(F))$, and we expect that this sequence of matrix norms gives rise to an operator space structure on $\mathcal{CB}_\lambda(E,F)$. \begin{thm} $\mathcal{CB}_\lambda(E,F)$ is an operator space with the matrix norms obtained from the identification $M_n(\mathcal{CB}_\lambda(E,F))=\mathcal{CB}_\lambda(E,M_n(F))$. \begin{proof} We shall verify Ruan's axioms. Let $\phi_1\in M_n(\mathcal{CB}_\lambda(E,F))=\mathcal{CB}_\lambda(E,M_n(F))$ and $\phi_2\in M_m(\mathcal{CB}_\lambda(E,F))=\mathcal{CB}_\lambda(E,M_m(F))$. Then, for any $e\in M_k(E)$, we have \[ ((\phi_1\oplus\phi_2)\otimes\lambda_k)(e)=((\phi_1\otimes\lambda_k)(e))\oplus((\phi_2\otimes\lambda_k)(e)), \] and $F$ being an operator space, we get \begin{equation*} \begin{aligned} \Vert((\phi_1\oplus\phi_2)\otimes\lambda_k)(e)\Vert &\leq\max\{\Vert(\phi_1\otimes\lambda_k)(e)\Vert,\Vert(\phi_2\otimes\lambda_k)(e)\Vert\}\\ &\leq\max\{\Vert\phi_1\Vert_{cb}^\lambda \Vert e\Vert,\Vert\phi_2\Vert_{cb}^\lambda\Vert e\Vert\}.
\end{aligned} \end{equation*} Taking the supremum over $e\in M_k(E)$ with $\Vert e\Vert\leq 1$, we get the required inequality as in R1. Similarly, let $\phi=[\phi_{ij}]\in M_n(\mathcal{CB}_\lambda(E,F))$ and $\alpha,\beta\in M_n$. Let $e\in M_k(E)$. Consider \[ (\alpha\phi\beta\otimes \lambda_k)(e)=\alpha((\phi\otimes \lambda_k)(e))\beta, \] and $F$ being an operator space, we get \begin{equation*} \begin{aligned} \Vert(\alpha\phi\beta\otimes \lambda_k)(e)\Vert &\leq \Vert\alpha\Vert \Vert(\phi\otimes \lambda_k)(e)\Vert\Vert\beta\Vert\\ &\leq \Vert\alpha\Vert \Vert\phi\Vert_{cb}^\lambda\Vert\beta\Vert \Vert e\Vert. \end{aligned} \end{equation*} Thus we have the required inequality as in R2. \end{proof} \end{thm} Notice that, from Proposition \ref{prop_lambdacb}(\ref{prop_lambdacb_functionals}), we can conclude that the bounded linear functionals on an operator space coincide with the $\lambda$-cb functionals. We may even choose all the $\lambda_n$ to be isometries, so that $\mathcal{CB}_\lambda(E,\mathbb{C})=\mathcal{CB}(E,\mathbb{C})$ isometrically for any operator space $E$. But we still cannot conclude that $\mathcal{CB}_\lambda(E,M_n(\mathbb{C}))=\mathcal{CB}(E,M_n(\mathbb{C}))$. As a result, the operator space structure on $\mathcal{CB}_\lambda(E,\mathbb{C})$ need not be the same as that on $\mathcal{CB}(E,\mathbb{C})$. Hence we shall denote the dual ($\lambda$-dual) space $\mathcal{CB}_\lambda(E,\mathbb{C})$ by $E^*_\lambda$ to distinguish it from the usual operator space dual. \subsection*{Quantizations through tensor products} \label{subsec_quant} The process of defining an operator space structure on a given normed linear space $X$, either via an isometric embedding into a C$^*$-algebra or by explicitly defining a sequence of matrix norms satisfying Ruan's axioms such that the matrix norm on $M_1(X)$ coincides with the given norm on $X$, is called \emph{quantization}.
It is to be noted that any normed linear space can be embedded naturally inside a commutative C$^*$-algebra, namely the C$^*$-algebra of all continuous complex-valued functions on the closed unit ball of the dual space $X^*$, which is compact with respect to the weak$^*$ topology. This is popularly known as the MIN quantization. It is minimal in the sense that the matrix norms obtained by any other quantization of $X$ would be bigger than those obtained by the MIN quantization. On the other hand, there is a MAX quantization, which is maximal in the sense analogous to the one mentioned above. In fact, we can explicitly construct the matrix norms in the two cases as described in \cite[Section 3.3]{Ef_Ru_B}. We shall denote by $X_{MIN}$ and $X_{MAX}$ respectively the space $X$ with the MIN and MAX operator space structures, and the operator space norms on them shall be denoted by $\Vert\cdot\Vert_{MIN}$ and $\Vert\cdot\Vert_{MAX}$ respectively. For a normed linear space $X$, the MIN quantization can be obtained as follows: for any $x\in M_n(X)=M_n\bigotimes X$, $\Vert x\Vert_{MIN}=\Vert x\Vert_{M_n\bigotimes^\nu X}$, where $\bigotimes^\nu$ denotes the Banach space injective tensor product. Motivated by this, we ask the following question: if $X$ is a normed linear space and $\bigotimes^\mu$ denotes a Banach space tensor product, can we define an operator space norm on $X$ by identifying $M_n(X)$ with $M_n\bigotimes_\mu X$? We shall only concentrate on tensor products whose tensor norms lie between the Banach space injective and the Banach space projective tensor norms, which implies that the tensor norm is a cross norm. Of course, we shall consider the case when $X$ is an operator space and $\bigotimes^\mu$ is an operator space tensor product, so that we may obtain various quantizations of the underlying normed linear space.
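As a toy illustration of the MIN matrix norms (an aside assuming Python with numpy, not part of the original text, and simplified to real scalars): for $X=\ell^1_2$ over $\mathbb{R}$, the unit ball of $X^*=\ell^\infty_2$ has extreme points $(\pm1,\pm1)$, and by convexity the supremum defining $\Vert\cdot\Vert_{MIN}$ is attained there. At level one this recovers the original $\ell^1$ norm, as any quantization must.

```python
import numpy as np
from itertools import product

def min_norm(X1, X2):
    """||X1 (x) e1 + X2 (x) e2||_MIN for real X = l^1_2: supremum over the
    unit ball of X* = l^inf_2, attained at the extreme points (+-1, +-1)."""
    return max(np.linalg.norm(a1 * X1 + a2 * X2, 2)
               for a1, a2 in product([-1.0, 1.0], repeat=2))

# level one (1x1 matrices) recovers the l^1 norm of (3, -4)
level1 = min_norm(np.array([[3.0]]), np.array([[-4.0]]))   # = 7.0

# at higher levels the value dominates the norm of each coordinate matrix
A1 = np.array([[1.0, 2.0], [0.0, 1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
val, lower = min_norm(A1, A2), max(np.linalg.norm(A1, 2), np.linalg.norm(A2, 2))
```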
In any case, for any $X$, the tensor product $M_n\bigotimes^\mu X$ being bicontinuously isomorphic to $M_n\bigotimes^\nu X$, we can have the identification $M_n\bigotimes^\mu X=M_n(X)$ as vector spaces. Hence, we shall denote by $M_n^\mu(X)$ the space $M_n\bigotimes^\mu X$. For notational convenience we shall use the phrase `$\bigotimes^\mu$ quantizes' or `$\bigotimes^\mu$ is a quantizing tensor product' to mean that for every normed linear space $X$, the identification $M_n(X)=M_n^\mu(X)$ defines an operator space structure on $X$. If $X$ is an operator space and $\bigotimes^\mu$ is an operator space tensor product, then we trust it will be clear from the context that we are talking about a new (not necessarily different) operator space structure on $X$. When we consider an injective operator space tensor product $\bigotimes^\mu$, the operator space $M_n$ being embedded completely isometrically inside $\mathcal{K}(\ell^2)$, we have $M_n\bigotimes^\mu E\subset \mathcal{K}(\ell^2)\bigotimes^\mu E$ completely isometrically for any operator space $E$ and for any $n\in \mathbb{N}$. Therefore, for any $e=[e_{ij}]\in M_n^\mu(E)$, we have $\Vert e\Vert_{M_n^\mu(E)}=\Vert e\Vert_{\mathcal{K}(\ell^2)\bigotimes^\mu E}$. Similarly, for $n,m\in \mathbb{N}$, we have completely isometric embeddings $M_n^\mu(E)\hookrightarrow M_{n+m}^\mu(E)$ and $M_m^\mu(E)\hookrightarrow M_{n+m}^\mu(E)$ given by \[e_1\mapsto \begin{bmatrix} e_1 & 0\\ 0 & 0 \end{bmatrix}, \quad e_2\mapsto \begin{bmatrix} 0 & 0\\ 0 & e_2 \end{bmatrix}, \] for any $e_1\in M_n^\mu(E)$ and $e_2\in M_m^\mu(E)$. Thus we have the completely isometric embedding $M_n^\mu(E)\oplus M_m^\mu(E)\subseteq M_{n+m}^\mu(E)$, which gives $\Vert e_1\oplus e_2\Vert=\max\{\Vert e_1\Vert,\Vert e_2\Vert\}$. This proves that the matrix norms satisfy Ruan's axiom R1 whenever $\mu$ is injective. We could not find a sufficient condition on $\bigotimes^\mu$ ensuring that R2 is also satisfied.
Using a Banach space tensor product $\bigotimes^\mu$, one may extend the idea of weighted cb maps on operator spaces to maps between normed linear spaces as follows. Let $X,Y$ be two normed linear spaces and $\lambda=(\lambda_n)_{n\in \mathbb{N}}$ be a uniformly bounded sequence of linear maps, where $\lambda_n:M_n\to M_n$. A linear map $\phi:X\to Y$ is said to be $\lambda_\mu$ completely bounded, or $\lambda_\mu$-cb in short, if the associated maps $\phi\otimes \lambda_n:X\bigotimes^\mu M_n\to Y\bigotimes^\mu M_n$ are uniformly bounded. But, if $\bigotimes^\mu$ is a quantizing tensor product, one may observe that this is not much different from the notion of weighted cb maps, for, if $X', Y'$ denote the operator spaces obtained by quantizing $X$ and $Y$ respectively using $\bigotimes^\mu$, then it is easy to observe that the notion of $\lambda_\mu$-cb maps from $X$ to $Y$ coincides with that of $\lambda$-cb maps from $X'$ to $Y'$. Thus, in this case, it suffices to study the properties of weighted cb maps on operator spaces to understand $\lambda_\mu$-cb maps on normed linear spaces. As a particular case, if we take the Banach space injective tensor product $\bigotimes^\nu$ in place of $\bigotimes^\mu$, then the operator spaces $X'$ and $Y'$ are respectively $X_{MIN}$ and $Y_{MIN}$. Hence, from Proposition \ref{prop_lambdacb}(\ref{prop_lambdacb_commrange}), $Y_{MIN}$ being a subspace of a commutative C$^*$-algebra, we have $\mathcal{CB}_\lambda(X_{MIN},Y_{MIN})=\mathcal{B}(X_{MIN},Y_{MIN})=\mathcal{B}(X,Y)$. If $\bigotimes^\mu$ is not a quantizing tensor product, then one may still consider the notion of $\lambda_\mu$-cb maps defined above on normed linear spaces or on operator spaces, which may lead to strange results; we do not plan to discuss this here. We shall consider a bilinear analogue of $\lambda_\mu$-cb maps on operator spaces in Section \ref{sec_bil}.
\section{$\Lambda_\mu$-cb maps on operator spaces} \label{sec_lambda-cb} Let $\Lambda$ be a collection of operator spaces containing at least one non-trivial element (always assumed from here onwards). The collection need not be a set; we will consider examples where $\Lambda$ is a proper class, for example, the class of all commutative C$^*$-algebras. Let $\bigotimes^\mu$ be an operator space tensor product, not necessarily quantizing, but we shall assume that $\bigotimes^\mu$ is matrix cross, meaning that, for any two operator spaces $E,F$, we must have $\Vert e\otimes f\Vert_\mu=\Vert e\Vert\Vert f\Vert$ for all $e\in M_n(E)$ and $f\in M_m(F)$. Let $E,F$ be operator spaces. A bounded linear map $\phi:E\to F$ is said to be \emph{$\Lambda_\mu$-completely bounded} ($\Lambda_\mu$-cb in short) if the associated maps $\phi_X^\mu:=\phi\otimes_\mu I_X:E\bigotimes_\mu X\to F\bigotimes_\mu X$ are uniformly bounded (i.e., $\sup_{X\in \Lambda}\Vert \phi_X^\mu\Vert<\infty$), where $I_X$ denotes the identity map from $X$ to $X$. Denote by $\mathcal{CB}_\Lambda^\mu(E,F)$ the set of all $\Lambda_\mu$-cb maps from $E$ to $F$. When $E=F$, $\mathcal{CB}_\Lambda^\mu(E,E)$ may be denoted by $\mathcal{CB}_\Lambda^\mu(E)$. Define \[ \Vert\cdot\Vert_{cb}^{\Lambda_\mu}:\mathcal{CB}_\Lambda^\mu(E,F)\to \mathbb{R}\text{\quad as\quad }\Vert \phi\Vert_{cb}^{\Lambda_\mu}:=\sup_{X\in\Lambda}\Vert \phi_X^\mu\Vert. \] A routine verification shows the following. \begin{prop} For any two operator spaces $E,F$ and any collection $\Lambda$ of operator spaces, $\mathcal{CB}_\Lambda^\mu(E,F)$ forms a vector subspace of $\mathcal{L}(E,F)$ and the function $\Vert\cdot\Vert_{cb}^{\Lambda_\mu}$ defines a norm on it.
\end{prop} In the very special case when $\mu$ is taken to be the \emph{min} tensor product, we shall omit the symbol $\mu$ from our notation and call a $\Lambda_\mu$-cb map simply a $\Lambda$-cb map, writing $\mathcal{CB}_\Lambda(E,F)$ for $\mathcal{CB}_\Lambda^\mu(E,F)$, $\phi_X$ for $\phi_X^\mu$, etc. When the tensor product $\mu$ under consideration is injective, as in the cases of the \emph{min} and Haagerup tensor products, we may take $\Lambda$ to be a collection of unital C$^*$-algebras, because, if $\Lambda=\{X_\lambda\}$ is a collection of operator spaces, then $\Vert \sum_{i=1}^n e_i\otimes x_i\Vert_{E\bigotimes_\mu X_\lambda}=\Vert \sum_{i=1}^n e_i\otimes x_i\Vert_{E\bigotimes_\mu \mathcal{A}_\lambda}$ for any element $\sum_{i=1}^n e_i\otimes x_i\in E\bigotimes X_\lambda$ and for every $\lambda$, where $\mathcal{A}_\lambda$ is a unital C$^*$-algebra such that $X_\lambda\subseteq \mathcal{A}_\lambda$ completely isometrically. Injectivity of the tensor product $\bigotimes^\mu$ simplifies things. However, very important tensor products such as the operator space projective tensor product and the Schur tensor product are not injective in general, though they may be functorial, which is the second-best property we can ask for after injectivity. \begin{prop} \label{prop_bdd} Let $E,F$ be two operator spaces and $\Lambda$ be a collection of operator spaces. Then the following statements hold. \begin{enumerate}[(i)] \item For any $\phi\in \mathcal{CB}_\Lambda^\mu(E,F)$, we have $\Vert \phi\Vert\leq \Vert \phi\Vert_{cb}^{\Lambda_\mu}$. \item If $\bigotimes^\mu$ is functorial, then for any $\phi\in \mathcal{CB}(E,F)$, we have $\phi\in \mathcal{CB}_\Lambda^\mu(E,F)$ and $\Vert \phi\Vert_{cb}^{\Lambda_\mu}\leq\Vert \phi\Vert_{cb}$. \end{enumerate} \begin{proof} \begin{enumerate}[(i)] \item Let $X\in \Lambda$ be a non-trivial operator space and $x_0\in X$ be a unit vector.
Consider $\phi_X^\mu:E\bigotimes^\mu X\to F\bigotimes^\mu X$ defined on rank one tensors as $\phi_X^\mu(e\otimes x)=\phi(e)\otimes x$, for all $e\in E$ and $x\in X$. Let $(e_n)$ be a sequence of unit vectors in $E$ such that $\Vert\phi(e_n)\Vert$ converges to $\Vert\phi\Vert$. Consider the sequence $\phi_X^\mu(e_n\otimes x_0)$. As $\Vert e_n\otimes x_0\Vert=1$ and $\Vert\phi_X^\mu(e_n\otimes x_0)\Vert=\Vert\phi(e_n)\otimes x_0\Vert=\Vert\phi(e_n)\Vert$, we can conclude that $\Vert\phi\Vert\leq\Vert\phi_X^\mu\Vert$ and hence $\Vert\phi\Vert\leq\Vert\phi\Vert_{cb}^{\Lambda_\mu}$. \item The tensor product $\bigotimes^\mu$ being functorial, and $\phi:E\to F$ and $I_X:X\to X$ being completely bounded, $\phi_X^\mu$ is also completely bounded with $\Vert\phi_X^\mu\Vert\leq \Vert\phi\Vert_{cb}\Vert I_X\Vert_{cb}=\Vert\phi\Vert_{cb}$, i.e. $\Vert\phi\Vert_{cb}^{\Lambda_\mu}=\sup_{X\in \Lambda}\Vert\phi_X^\mu\Vert\leq \Vert\phi\Vert_{cb}$. \qedhere \end{enumerate} \end{proof} \end{prop} From the above proposition, we have $\mathcal{CB}(E,F)\subseteq \mathcal{CB}_\Lambda^\mu(E,F)$ whenever $\mu$ is functorial (the set inclusion must not be confused with an isometric embedding of normed linear spaces); in particular, we have $\mathcal{CB}(E,F)\subseteq\mathcal{CB}_\Lambda(E,F)$ by taking $\mu=min$. But, of course, the containment can be strict, depending on the choice of $\Lambda$. For example, let $\Lambda\subset \{M_n\mid n\in \mathbb{N}\}$ be finite, and choose $E,F,\phi$ such that $\phi$ is bounded but not completely bounded. An easy way to do this is by considering the identity map $I:E_{MIN}\to E_{MAX}$, where $E$ is an infinite dimensional space and $E_{MIN}$ and $E_{MAX}$ denote respectively the minimal and the maximal quantizations of $E$. On the other hand, if we choose $\Lambda=\{M_n\mid n\in \mathbb{N}\}$, then the definition of $\Lambda$-cb maps coincides with that of cb maps. Thus we have $\mathcal{CB}(E,F)=\mathcal{CB}_\Lambda(E,F)$ isometrically in this case.
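In display form, the coincidence for $\Lambda=\{M_n\mid n\in\mathbb{N}\}$ rests on the identification $M_n(E)=M_n\bigotimes^{min}E$ from the Introduction:

```latex
\[
\Vert\phi\Vert_{cb}^{\Lambda}
  =\sup_{n\in\mathbb{N}}\bigl\Vert \phi\otimes I_{M_n}\colon
      E\textstyle\bigotimes^{min}M_n\to F\textstyle\bigotimes^{min}M_n\bigr\Vert
  =\sup_{n\in\mathbb{N}}\bigl\Vert \phi^{(n)}\colon M_n(E)\to M_n(F)\bigr\Vert
  =\Vert\phi\Vert_{cb}.
\]
```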
As a simple consequence of the injectivity of the minimal tensor product, if $X,Y$ are operator spaces with $i_Y:Y\to X$ being a complete isometry, then for any operator space $E$ with $I:E\to E$ being the identity map, the tensor map $I\otimes i_Y:E\bigotimes_{min}Y\to E\bigotimes_{min}X$ is also a complete isometry. In other words, if $Y\subseteq X$, then for any $u=\sum_{i=1}^n e_i\otimes y_i$ where $e_i\in E$ and $y_i\in Y$, we have $\Vert u\Vert_{E\bigotimes_{min}Y}=\Vert u\Vert_{E\bigotimes_{min}X}$. Thus, if $\Lambda$ has the property that, for every $n\in \mathbb{N}$, there is a C$^*$-algebra $\mathcal{A}\in\Lambda$ and a completely isometric embedding of $M_n$ into $\mathcal{A}$, then every $\Lambda$-cb map is cb. Indeed, if we choose $\mathcal{A}_n$ to be a C$^*$-algebra in $\Lambda$ satisfying $M_n\subseteq \mathcal{A}_n$, then, by injectivity of the minimal tensor product, for any bounded map $\phi:E\to F$ we have $\Vert\phi_{\mathcal{A}_n}\Vert\geq \Vert\phi^{(n)}\Vert$, where $\phi^{(n)}$ denotes the $n^\text{th}$ amplification of $\phi$. Thus, taking supremum over $\Lambda$, we get $\Vert\phi\Vert_{cb}^\Lambda\geq \Vert\phi\Vert_{cb}$. Choosing $\Lambda=\{\mathcal{K}(\ell^2)\}$ gives an example of the above situation, as $M_n$ is a $*$-subalgebra of $\mathcal{K}(\ell^2)$ for every $n$. Another similar example is $\Lambda=\{M_n(\mathcal{A})\mid n\in \mathbb{N}\}$ for a fixed C$^*$-algebra $\mathcal{A}$. As every cb map is $\Lambda$-cb, we have plenty of examples of $\Lambda$-cb maps no matter what $\Lambda$ is. If $F$ is a finite dimensional operator space or a commutative C$^*$-algebra, then any bounded map $\phi:E\to F$ is $\Lambda$-cb for any operator space $E$. In particular, any bounded linear functional $\phi$ on $E$ is $\Lambda$-cb with $\Vert\phi\Vert_{cb}^\Lambda=\Vert\phi\Vert$.
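The last claim reflects the standard fact that a bounded linear functional is automatically completely bounded with the same norm. As a hedged numerical illustration of our own (not taken from the text), consider the trace functional on $M_2$, which has norm $2$: random contractions in $M_2(M_2)$ never push its second amplification past that bound, and the bound is attained at the identity of $M_4$.

```python
import numpy as np

rng = np.random.default_rng(0)

def op_norm(a):
    # operator (spectral) norm = largest singular value
    return np.linalg.svd(a, compute_uv=False)[0]

# phi = trace functional on M_2; ||phi|| = 2 (attained at the identity matrix)
phi_norm = 2.0

def amp2_trace(u):
    # second amplification phi^(2): M_2(M_2) -> M_2, [u_ij] |-> [tr(u_ij)]
    return np.array([[np.trace(u[2*i:2*i+2, 2*j:2*j+2]) for j in range(2)]
                     for i in range(2)])

worst = 0.0
for _ in range(500):
    u = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    u /= op_norm(u)  # normalize to a contraction in M_2(M_2) = M_4
    worst = max(worst, op_norm(amp2_trace(u)))

# consistent with ||phi||_cb = ||phi||: the amplified norm stays below 2,
# and phi^(2)(I_4) = 2*I_2 shows the bound is attained
assert worst <= phi_norm + 1e-9
assert np.isclose(op_norm(amp2_trace(np.eye(4))), phi_norm)
```

Random sampling of course only probes the supremum defining $\Vert\phi^{(2)}\Vert$ from below; the exact equality $\Vert\phi\Vert_{cb}=\Vert\phi\Vert$ for functionals is the theorem being illustrated, not proved, here.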
But in other cases, such as $F$ being finite dimensional, even though all cb maps are $\Lambda$-cb, their norms can be different, meaning that the identity map between the spaces $\mathcal{CB}(E,F)$ and $\mathcal{CB}_\Lambda(E,F)$ may not be an isometry. This will be discussed later when we consider two different operator space structures on the dual space. From Proposition \ref{prop_bdd}, we conclude that, for $\phi\in \mathcal{CB}(E,F)$, we have $\Vert\phi\Vert\leq\Vert\phi\Vert_{cb}^\Lambda\leq\Vert\phi\Vert_{cb}$ for any non-empty $\Lambda$, and the examples discussed above, with slight modifications, give strict inequalities (see Example \ref{eg_ckmn} below). Another immediate observation is that if $\Lambda_1\subset\Lambda_2$, then $\mathcal{CB}_{\Lambda_2}(E,F)\subset \mathcal{CB}_{\Lambda_1}(E,F)$, and for $\phi\in \mathcal{CB}_{\Lambda_2}(E,F)$, we have $\Vert\phi\Vert_{cb}^{\Lambda_1}\leq \Vert\phi\Vert_{cb}^{\Lambda_2}$. It is clear that for $n>1$, there is no commutative C$^*$-algebra $\mathcal{A}$ such that $M_n$ is embedded as a subalgebra of $\mathcal{A}$. Thus it is quite natural to expect that there is no completely isometric embedding of $M_n$ into $\mathcal{A}$ either. Curiously, the analogous conclusion for non-commutative C$^*$-algebras may fail: there exist non-commutative C$^*$-algebras which do not contain any $M_n$, $n>1$, as a subalgebra, yet contain some such $M_n$ completely isometrically as a subspace. The reduced C$^*$-algebra of the free group on two generators is such an example. \begin{prop} \label{commutativecsa} Let $\Lambda$ be the class of all commutative (unital) C$^*$-algebras and $E,F$ be any two operator spaces. Then $\mathcal{CB}_\Lambda(E,F)=B(E,F)$ isometrically. \begin{proof} Let $\mathcal{A},\mathcal{B}$ be C$^*$-algebras such that $E,F$ are embedded into $\mathcal{A},\mathcal{B}$ completely isometrically, respectively.
Let $\Lambda=\{C(K_i)\}_{i\in I}$, where $C(K_i)$ denotes the C$^*$-algebra of all continuous complex valued functions on a compact Hausdorff space $K_i$, with pointwise algebraic operations and the sup norm $\Vert\cdot\Vert_\infty$. We have the identification $\mathcal{A}\bigotimes^{min}C(K_i)=C(K_i,\mathcal{A})$, where $C(K_i,\mathcal{A})$ denotes the C$^*$-algebra of all $\mathcal{A}$-valued continuous functions on $K_i$. Hence, we have the completely isometric embeddings $E\bigotimes^{min}C(K_i)\subset C(K_i,\mathcal{A})$ and $F\bigotimes^{min}C(K_i)\subset C(K_i,\mathcal{B})$. Let $\phi:E\to F$ be a bounded linear map. Consider $\phi_i:=\phi_{C(K_i)}:E\bigotimes^{min}C(K_i)\to F\bigotimes^{min}C(K_i)$. For $f\in E\bigotimes^{min}C(K_i)$ identified as a continuous $\mathcal{A}$ valued function on $K_i$, its image $\phi_i(f)$ is the continuous $\mathcal{B}$ valued function on $K_i$ defined as $\phi_i(f)(x_i)=\phi(f(x_i))$ for every $x_i\in K_i$. It follows that $\Vert \phi_i\Vert=\sup_{\Vert f\Vert_\infty\leq 1,x_i\in K_i}\Vert \phi(f(x_i))\Vert=\Vert \phi\Vert$. In fact, using the well known isometric identity $E\bigotimes^{min} C(K)=E\bigotimes^\nu C(K)$, where $\nu$ denotes the Banach space injective tensor product, we could have arrived at the conclusion that $\Vert \phi_i\Vert=\Vert \phi\Vert\;\forall\; i$ without any further calculations. \end{proof} \end{prop} \begin{eg} \label{eg_ckmn} Fix $n\in \mathbb{N}$. Let $\Lambda_n$ be the collection of all closed $*$-subalgebras of $C(K,M_n)$ for some compact Hausdorff space $K$. Let $\phi:E\to F$ be a bounded linear map between operator spaces. As $C(K,M_n)$ can be identified with $C(K)\bigotimes^{min}M_n$, we naturally have the completely isometric identification $E\bigotimes^{min}C(K,M_n)=M_n(E\bigotimes^{min}C(K))=M_n(E)\bigotimes^{min} C(K)$.
Hence the map $\phi_{C(K,M_n)}$ is nothing but the map $(\phi^{(n)})_{C(K)}$, where $\phi^{(n)}$ is the $\text{n}^{\text{th}}$ amplification of $\phi$. But, from Proposition \ref{commutativecsa}, we have $\Vert \phi\Vert=\Vert\phi_{C(K)}\Vert$ for every $\phi$, and hence $\Vert\phi_{C(K,M_n)}\Vert=\Vert\phi^{(n)}\Vert\leq n\Vert\phi\Vert$. Thus, $\mathcal{CB}_{\Lambda_n}(E,F)$ is bicontinuously isomorphic to $B(E,F)$ via the identity map. \end{eg} \begin{prop} The normed linear space $\mathcal{CB}_\Lambda^\mu(E,F)$ is complete whenever $F$ is complete. \begin{proof} Let $(\phi^n)_{n\in \mathbb{N}}$ be a Cauchy sequence in $\mathcal{CB}_\Lambda^\mu(E,F)$. As the norm on $B(E,F)$ is dominated by the $\Vert\cdot\Vert_{cb}^{\Lambda_\mu}$ norm for any $\Lambda_\mu$-cb map, the sequence $(\phi^n)_{n\in \mathbb{N}}$ is Cauchy in $B(E,F)$ as well. But $B(E,F)$ is complete whenever $F$ is complete, so $(\phi^n)_{n\in \mathbb{N}}$ converges to some $\phi\in B(E,F)$. We shall prove that $(\phi^n)_{n\in \mathbb{N}}$ converges to $\phi$ in the $\Vert\cdot\Vert_{cb}^{\Lambda_\mu}$ norm and that $\phi\in \mathcal{CB}_\Lambda^\mu(E,F)$. Let $\varepsilon>0$ be given. As $(\phi^n)_{n\in \mathbb{N}}$ is Cauchy, there exist $N\in \mathbb{N}$ and $K>0$ such that $\Vert\phi^m-\phi^n\Vert_{cb}^{\Lambda_\mu}<\frac{\varepsilon}{2}$ for all $m,n\geq N$ and $\Vert\phi^n\Vert_{cb}^{\Lambda_\mu}<K$ for every $n\in \mathbb{N}$. As $\Vert\phi-\phi^n\Vert$ converges to zero, $\Vert\phi_X-\phi_X^n\Vert$ also converges to zero for every $X\in\Lambda$ as $n$ tends to infinity. Hence there exists $N_X\in \mathbb{N}$ such that $\Vert\phi_X-\phi_X^m\Vert<\frac{\varepsilon}{2}$ for every $m\geq N_X$. Let $X\in\Lambda$ and $m>\max\{N_X,N\}$. Consider $\Vert\phi_X-\phi_X^n\Vert\leq \Vert\phi_X-\phi_X^m\Vert+\Vert\phi_X^m-\phi_X^n\Vert<\frac{\varepsilon}{2}+\Vert\phi^m-\phi^n\Vert_{cb}^{\Lambda_\mu}<\varepsilon$ for every $n\geq N$.
Hence $\Vert\phi-\phi^n\Vert_{cb}^{\Lambda_\mu}\leq \varepsilon$ for every $n\geq N$, i.e. $(\phi^n)_{n\in \mathbb{N}}$ converges to $\phi$ in the $\Vert\cdot\Vert_{cb}^{\Lambda_\mu}$ norm (inside $B(E,F)$). Let $n\geq N$. Consider $\Vert \phi_X\Vert\leq \Vert\phi_X-\phi_X^n\Vert+\Vert\phi_X^n\Vert\leq \Vert(\phi-\phi^n)_X\Vert+\Vert\phi_X^n\Vert\leq \Vert\phi-\phi^n\Vert_{cb}^{\Lambda_\mu}+\Vert\phi^n\Vert_{cb}^{\Lambda_\mu}<\varepsilon+K$. Hence $\Vert\phi\Vert_{cb}^{\Lambda_\mu}\leq \varepsilon+K$ and thus $\phi\in \mathcal{CB}_\Lambda^\mu(E,F)$. \end{proof} \end{prop} \begin{prop} \label{banach-algebra} Let $E,F,G$ be operator spaces. If $\phi\in \mathcal{CB}_\Lambda^\mu(E,F)$ and $\psi\in \mathcal{CB}_\Lambda^\mu(F,G)$, then $\psi\circ\phi\in \mathcal{CB}_\Lambda^\mu(E,G)$. Moreover, $\Vert\psi\circ\phi\Vert_{cb}^{\Lambda_\mu}\leq\Vert\psi\Vert_{cb}^{\Lambda_\mu}\Vert\phi\Vert_{cb}^{\Lambda_\mu}$. In particular, $\mathcal{CB}_\Lambda^\mu(E)$ is a Banach algebra. \begin{proof} Let $X\in\Lambda$. Consider $(\psi\circ\phi)_X^\mu:E\bigotimes_\mu X\to G\bigotimes_\mu X$ defined as $(\psi\circ\phi)_X^\mu(e\otimes x)=(\psi\circ\phi)(e)\otimes x$ on rank one tensors. Thus $(\psi\circ\phi)_X^\mu=\psi_X^\mu\circ\phi_X^\mu$, and hence $\Vert (\psi\circ\phi)_X^\mu\Vert\leq \Vert\psi_X^\mu\Vert \Vert\phi_X^\mu\Vert\leq \Vert\psi\Vert_{cb}^{\Lambda_\mu} \Vert\phi\Vert_{cb}^{\Lambda_\mu}$. As the above inequality holds for all $X\in\Lambda$, we conclude that $\Vert\psi\circ\phi\Vert_{cb}^{\Lambda_\mu}\leq\Vert\psi\Vert_{cb}^{\Lambda_\mu}\Vert\phi\Vert_{cb}^{\Lambda_\mu}$. \end{proof} \end{prop} In the special case of Example \ref{eg_ckmn}, we can talk about an involution for $\mathcal{CB}_{\Lambda_n}(E,F)$: for any $\phi\in \mathcal{CB}_{\Lambda_n}(E,F)$, the usual adjoint operator $\phi^*:F^*\to E^*$ turns out to be a $\Lambda_n$-cb map with $\Vert\phi^*\Vert_{cb}^{\Lambda_n}=\Vert\phi^{*(n)}\Vert=\Vert\phi^{(n)}\Vert=\Vert\phi\Vert_{cb}^{\Lambda_n}$, i.e.
$\mathcal{CB}_{\Lambda_n}(E,F)$ has an isometric involution. In the general case, the existence of an involution is not clear. \begin{rem} In Proposition \ref{banach-algebra}, if we relax the condition of both $\phi$ and $\psi$ being $\Lambda_\mu$-cb to only one of the maps being $\Lambda_\mu$-cb and the other being bounded, then clearly there is no guarantee that their composition is $\Lambda_\mu$-cb: any bounded map $\phi$ on an operator space is a composition of a $\Lambda_\mu$-cb map and a bounded map, namely the identity map and $\phi$ itself! We have been working with a fixed $\Lambda$ and a fixed $\mu$. But, of course, one can look for particular instances, i.e. triples $((\Lambda_1,\mu_1),(\Lambda_2,\mu_2),(\Lambda_3,\mu_3))$, such that $\psi\circ\phi\in \mathcal{CB}_{\Lambda_3}^{\mu_3}(E,G)$ whenever $\phi\in \mathcal{CB}_{\Lambda_1}^{\mu_1}(E,F)$ and $\psi\in \mathcal{CB}_{\Lambda_2}^{\mu_2}(F,G)$. We do not plan to discuss this here. \end{rem} \subsection{An operator space structure on $\mathcal{CB}_\Lambda^\mu(E,F)$} \label{subsec_lambda-cb_oss} Now we shall associate with $\mathcal{CB}_\Lambda^\mu(E,F)$ a sequence of matrix norms satisfying Ruan's axioms R1 and R2, so that it becomes an operator space. We will make use of the following lemma. \begin{lem} Let $E,F$ be operator spaces and let $\phi_{ij}:E\to F$ be $\Lambda_\mu$-cb maps for $1\leq i,j\leq n$. Then the map $\phi:E\to M_n(F)$ defined as $\phi(e)=(\phi_{ij}(e))$ is $\Lambda_\mu$-cb. \begin{proof} Consider $\phi_X^\mu:E\bigotimes_\mu X\to M_n(F)\bigotimes_\mu X$ defined on rank one tensors as \[\phi_X^\mu(e\otimes x)=(\phi_{ij}(e))\otimes x=(\phi_{ij}(e)\otimes x)=\left(\phi_{ij,X}^\mu(e\otimes x)\right).\] Thus if $u=\sum_{i=1}^k e_i\otimes x_i\in E\bigotimes X$, we have \[ \Vert \phi_X^\mu (u)\Vert=\Vert \left(\phi_{ij, X}^\mu (u)\right)\Vert\leq n^2\max_{i,j}\Vert\phi_{ij,X}^\mu\Vert\Vert u\Vert\leq n^2\max_{i,j}\Vert\phi_{ij}\Vert_{cb}^{\Lambda_\mu}\Vert u\Vert, \] i.e.
$\phi$ is $\Lambda_\mu$-cb with $\Vert \phi\Vert_{cb}^{\Lambda_\mu}\leq n^2\max_{1\leq i,j\leq n}\Vert\phi_{ij}\Vert_{cb}^{\Lambda_\mu}$. \end{proof} \end{lem} Let us identify $M_n(\mathcal{CB}_\Lambda^\mu(E,F))$ with $\mathcal{CB}_\Lambda^\mu(E,M_n(F))$ using the bijection $\Phi:M_n(\mathcal{CB}_\Lambda^\mu(E,F))\to \mathcal{CB}_\Lambda^\mu(E,M_n(F))$, defined as $\Phi([\phi_{ij}])(e):=[\phi_{ij}(e)]$ for $e\in E$ and $\phi_{ij}\in \mathcal{CB}_\Lambda^\mu(E,F)$, $1\leq i,j\leq n$. Instead of directly verifying Ruan's axioms, we can easily see that there is a natural completely isometric embedding of $\mathcal{CB}_\Lambda^\mu(E,F)$ into a product space. For $X\in\Lambda$, let $D_X=\{u\in E\bigotimes^\mu X\mid \Vert u\Vert\leq 1\}$, and for $u\in D_X$ let $F_u=F$. Define $\Phi:\mathcal{CB}_\Lambda^\mu(E,F)\to \prod_{X\in\Lambda}\prod_{u\in D_X} (F_u\bigotimes^\mu X)$ as $\phi\mapsto (\phi_X^\mu(u))$. \begin{prop} The map $\Phi$ defined above is a complete isometry. \begin{proof} Let $\phi=[\phi_{ij}]_{ij}\in M_n(\mathcal{CB}_\Lambda^\mu(E,F))=\mathcal{CB}_\Lambda^\mu(E,M_n(F))$. We have \[\Vert\phi\Vert_{cb}^{\Lambda_\mu}=\sup_{X\in\Lambda}\Vert\phi_X^\mu\Vert =\sup_{u\in D_X,X\in\Lambda}\Vert\phi_X^\mu(u)\Vert =\sup_{u\in D_X,X\in\Lambda}\Vert\left[(\phi_{ij}\otimes I_X)(u)\right]_{ij}\Vert. \] Now consider $\Phi(\phi)=\left((\phi\otimes I_X)(u)\right)_{u,X}=\left([(\phi_{ij}\otimes I_X)(u)]_{ij}\right)_{u,X}$. Hence, as we have the sup norm on a product space, we get \[ \Vert\Phi(\phi)\Vert=\sup_{u,X}\Vert[(\phi_{ij}\otimes I_X)(u)]_{ij}\Vert. \] This proves that $\Phi$ is a complete isometry. \end{proof} \end{prop} The above proposition shows that $\mathcal{CB}_\Lambda^\mu(E,F)$ is an operator space equipped with natural matrix norms. The dual $E^*=\mathcal{CB}(E,\mathbb{C})$ of an operator space $E$ has a natural operator space structure inherited from the identification $M_n(\mathcal{CB}(E,\mathbb{C}))=\mathcal{CB}(E,M_n)$ \cite{Ef_Ru}.
On the other hand, the matrix norms on $\mathcal{CB}_{\Lambda}(E,\mathbb{C})$ are defined using the identification $M_n(\mathcal{CB}_\Lambda(E,\mathbb{C}))=\mathcal{CB}_\Lambda(E,M_n)$. We denote by $E^*_\Lambda$ the operator space $\mathcal{CB}_\Lambda(E,\mathbb{C})$. Any bounded linear functional on $E$ is completely bounded with $\Vert\phi\Vert_{cb}=\Vert\phi\Vert$, so that $\Vert\phi\Vert_{cb}^\Lambda=\Vert\phi\Vert_{cb}$. Hence $E^*$ and $E^*_\Lambda$ are always isometric, though not completely isometric in general. However, since $\Vert \psi\Vert_{cb}^\Lambda\leq \Vert\psi\Vert_{cb}$ for any $\psi:E\to M_n$, we have $\Vert\psi\Vert_{M_n(E^*_\Lambda)}\leq\Vert\psi\Vert_{M_n(E^*)}$ for every $\psi\in M_n(E^*)$. When we consider $\Lambda=\{M_n\mid n\in \mathbb{N}\}$, the natural map gives the isometric identity $\mathcal{CB}_\Lambda(E,M_n)=\mathcal{CB}(E,M_n)$ for any $n\in \mathbb{N}$, which further gives rise to a complete isometry between $E^*$ and $E^*_\Lambda$. \begin{eg} When we take $\Lambda=\Lambda_n$ as in Example \ref{eg_ckmn}, then $E^*$ and $E^*_\Lambda$ fail to be completely isometric in general. For example, taking $E=M_k$ for some $k>n$ and the functionals $\psi_{ij}$ as the projections to the $ji^{\text{th}}$ coordinate gives rise to the map $\psi=(\psi_{ij})\in \mathcal{CB}_\Lambda(E,M_k)$, namely the transpose map, for which $\Vert\psi\Vert_{cb}^\Lambda<\Vert\psi\Vert_{cb}$. \end{eg} \section{Bilinear maps associated with weighted cb maps} \label{sec_bil} A. Defant and D. Wiesner \cite{AD_DW} introduced a generalized way of defining operator space tensor products using a special kind of multilinear map, and compared it with homogeneous polynomials. The operator space projective, Haagerup and Schur tensor products come out as three special cases of their construction. Later, the matrix ordering and related properties associated with it were studied in \cite{PL_AK_VR}.
The following definition of $\lambda_\mu$-cb bilinear maps is motivated by their work. Let $E,F$ and $G$ be operator spaces, and $\bigotimes^\mu$ be an operator space matrix cross tensor product. Let $\lambda=(\lambda_n:M_n\times M_n\to M_{k(n)})$ be a sequence of bilinear maps where $n,k(n)\in \mathbb{N}$. We call a bilinear map $\phi:E\times F\to G$ \emph{completely $\lambda_\mu$-bounded} (\emph{$\lambda_\mu$-cb} in short) if the associated bilinear maps $\phi_n:E\bigotimes_\mu M_n\times F\bigotimes_\mu M_n\to G\bigotimes_\mu M_{k(n)}$ defined on rank one tensors as $\phi_n(e\otimes a,f\otimes b)=\phi(e,f)\otimes \lambda_n(a,b)$ are uniformly bounded, i.e. $\sup_{n\in \mathbb{N}}\Vert \phi_n\Vert<\infty$. Let us denote by $\mathrm{CB}_\lambda^\mu(E\times F,G)$ the set of all $\lambda_\mu$-cb bilinear maps from $E\times F$ to $G$. It is easy to see that $\mathrm{CB}_\lambda^\mu(E\times F,G)$ forms a vector space and that the function $\Vert\cdot\Vert_\lambda^\mu:\mathrm{CB}_\lambda^\mu(E\times F,G)\to \mathbb{R}$ defined as $\Vert\phi\Vert_\lambda^\mu=\sup_{n\in \mathbb{N}}\Vert\phi_n\Vert$ is a seminorm on it, which becomes a norm whenever $\lambda_n$ is nonzero for some $n\in \mathbb{N}$. Hence we shall always assume that not all the $\lambda_n$ are zero bilinear maps, and further that $\lambda$ is uniformly bounded. Now let us give an operator space structure to $\mathrm{CB}_\lambda^\mu(E\times F,G)$ using the (natural) identification $M_n(\mathrm{CB}_\lambda^\mu(E\times F,G))=\mathrm{CB}_\lambda^\mu(E\times F,M_n(G))$. \begin{thm} If $E,F,G$ are operator spaces and $\phi^{ij}\in \mathrm{CB}_\lambda^\mu(E\times F,G)$ for $1\leq i,j\leq m$, then the bilinear map $\phi:E\times F\to M_m(G)$ defined as $\phi(e,f)=[\phi^{ij}(e,f)]$ is completely $\lambda_\mu$-bounded with $\Vert\phi\Vert_\lambda^\mu\leq m^2\max_{i,j}\Vert\phi^{ij}\Vert_\lambda^\mu$.
Moreover, $\mathrm{CB}_\lambda^\mu(E\times F,G)$ is an operator space with matrix norms induced from the identification $M_n(\mathrm{CB}_\lambda^\mu(E\times F,G))=\mathrm{CB}_\lambda^\mu(E\times F,M_n(G))$. \begin{proof} Consider $\phi_n:E\bigotimes_\mu M_n\times F\bigotimes_\mu M_n\to M_m(G)\bigotimes_\mu M_{k(n)}$. Let $u_1=\sum_{r=1}^s e_r\otimes a_r\in E\bigotimes M_n$ and $u_2=\sum_{r=1}^s f_r\otimes b_r\in F\bigotimes M_n$. Then $\phi_n(u_1,u_2)=[\sum_{r=1}^s\phi^{ij}(e_r,f_r)\otimes \lambda_n(a_r,b_r)]$. Thus \begin{equation*} \begin{aligned} \Vert\phi_n(u_1,u_2)\Vert &= \Vert [\phi_n^{ij}(u_1,u_2)]\Vert\\ &\leq m^2 \max_{i,j}\Vert\phi_n^{ij}(u_1,u_2)\Vert\\ &\leq m^2 \max_{i,j}\Vert\phi_n^{ij}\Vert\Vert u_1\Vert\Vert u_2\Vert\\ &\leq m^2 \max_{i,j}\Vert\phi^{ij}\Vert_\lambda^\mu\Vert u_1\Vert\Vert u_2\Vert, \end{aligned} \end{equation*} i.e. $\phi= [\phi^{ij}]\in \mathrm{CB}_\lambda^\mu(E\times F,M_m(G))$ and $\Vert\phi\Vert_\lambda^\mu\leq m^2\max_{i,j}\Vert\phi^{ij}\Vert_\lambda^\mu$. For the other part, we shall verify Ruan's axioms in order to prove that it is an operator space. Let $\phi^1\in M_n(\mathrm{CB}_\lambda^\mu(E\times F,G))=\mathrm{CB}_\lambda^\mu(E\times F,M_n(G))$ and $\phi^2\in M_m(\mathrm{CB}_\lambda^\mu(E\times F,G))=\mathrm{CB}_\lambda^\mu(E\times F,M_m(G))$. Consider $\phi^1\oplus\phi^2\in M_{n+m}(\mathrm{CB}_\lambda^\mu(E\times F,G))=\mathrm{CB}_\lambda^\mu(E\times F,M_{n+m}(G))$. Let $u_1=\sum_{r=1}^s e_r\otimes a_r\in E\bigotimes M_l$ and $u_2=\sum_{r=1}^s f_r\otimes b_r\in F\bigotimes M_l$. Then \begin{equation*} \begin{aligned} \Vert(\phi^1\oplus\phi^2)_l(u_1,u_2)\Vert &= \left\Vert \begin{bmatrix} \phi_l^1(u_1,u_2) & 0\\ 0 & \phi_l^2(u_1,u_2) \end{bmatrix} \right\Vert\\ &\leq \max_{i=1,2}\Vert\phi_l^i(u_1,u_2)\Vert\\ &\leq \max_{i=1,2}\Vert\phi_l^i\Vert\Vert u_1\Vert\Vert u_2\Vert\\ &\leq \max_{i=1,2}\Vert\phi^i\Vert_\lambda^\mu\Vert u_1\Vert\Vert u_2\Vert.
\end{aligned} \end{equation*} Thus $\Vert\phi^1\oplus\phi^2\Vert_\lambda^\mu\leq \max_{i=1,2}\Vert\phi^i\Vert_\lambda^\mu$. Let $\phi=[\phi^{ij}]\in M_n(\mathrm{CB}_\lambda^\mu(E\times F,G))$ and $\alpha,\beta$ be scalar rectangular matrices such that $\alpha\phi\beta\in M_m(\mathrm{CB}_\lambda^\mu(E\times F,G))=\mathrm{CB}_\lambda^\mu(E\times F,M_m(G))$. Then \begin{equation*} \begin{aligned} \Vert(\alpha\phi\beta)_l(u_1,u_2)\Vert &= \Vert (\alpha[\phi^{ij}]\beta)_l(u_1,u_2)\Vert\\ &=\Vert \alpha[\phi_l^{ij}(u_1,u_2)]\beta\Vert\\ &\leq\Vert\alpha\Vert\Vert[\phi_l^{ij}(u_1,u_2)]\Vert\Vert\beta\Vert\\ &\leq\Vert\alpha\Vert\Vert\phi_l\Vert\Vert u_1\Vert\Vert u_2\Vert\Vert\beta\Vert\\ &\leq\Vert\alpha\Vert\Vert\phi\Vert_\lambda^\mu\Vert\beta\Vert\Vert u_1\Vert\Vert u_2\Vert. \end{aligned} \end{equation*} Thus $\Vert\alpha\phi\beta\Vert_\lambda^\mu\leq \Vert\alpha\Vert\Vert\phi\Vert_\lambda^\mu\Vert\beta\Vert$. As $\Vert\cdot\Vert_\lambda^\mu$ is a norm on $M_m(\mathrm{CB}_\lambda^\mu(E\times F,G))$ for all $m\in \mathbb{N}$, the above two inequalities are sufficient to conclude that $\mathrm{CB}_\lambda^\mu(E\times F,G)$ is an operator space. \end{proof} \end{thm} \begin{rem} \label{bil_intertwine} It is very important to observe that interchanging the roles of $E$ and $F$ in $\mathrm{CB}_\lambda^\mu(E\times F,G)$ matters, i.e. in general, $\mathrm{CB}_\lambda^\mu(E\times F,G)$ and $\mathrm{CB}_\lambda^\mu(F\times E,G)$ are not completely isometric. For example, consider $\lambda=(\lambda_n:M_n\times M_n\to M_n)$ defined as $\lambda_n(a,b)=ab$. Then the non-commutativity of matrix multiplication plays an important role in making the difference. \end{rem} We shall now consider three very special cases as follows: \begin{enumerate}[\text{Case} 1:] \item $\lambda=(\lambda_n:M_n\times M_n\to M_n)$ defined as $\lambda_n(a,b)=ab$, the usual matrix multiplication representing composition of linear maps.
\item $\lambda=(\lambda_n:M_n\times M_n\to M_{n^2})$ defined as $\lambda_n(a,b)=a\otimes b$, where $a\otimes b$ denotes the Kronecker product, i.e. for $a=[a_{ij}],b=[b_{ij}]\in M_n$ the pair $(a,b)$ is mapped to $[a_{ij}b_{kl}]$. \item $\lambda=(\lambda_n:M_n\times M_n\to M_n)$ defined as $\lambda_n(a,b)=a\odot b$, where $a\odot b$ denotes the Schur multiplication of two matrices, i.e. for $a=[a_{ij}],b=[b_{ij}]\in M_n$ the pair $(a,b)$ is mapped to $[a_{ij}b_{ij}]$. \end{enumerate} Clearly, when we take $\mu=min$, the above three cases give rise to the corresponding spaces of bilinear maps $\mathrm{CB}_\lambda^\mu(E\times F,G)$ as the spaces of matricially completely bounded bilinear maps, jointly completely bounded bilinear maps, and completely Schur bounded bilinear maps, respectively. We shall have a closer look at each of them. When we take $G=\mathbb{C}$, we call $\psi\in \mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})$ a \emph{$\lambda_\mu$-cb bilinear form}. In the same way in which we associate jointly completely bounded (respectively completely bounded) bilinear forms with the operator space projective (respectively Haagerup) tensor products \cite{Bl_Pn}, we wish to associate a tensor norm $\Vert\cdot\Vert_\lambda^\mu$ on the algebraic tensor product $E\bigotimes F$ using the completely $\lambda_\mu$-bounded bilinear forms on $E\times F$ such that the operator space dual of the completed tensor product $E\bigotimes^{\lambda_\mu} F$ is $\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})$ completely isometrically, i.e.
we shall define $\Vert\cdot\Vert_\lambda^\mu$ on $E\bigotimes F$ as follows: for $u\in M_n(E\bigotimes F)$, $\Vert u\Vert_\lambda^\mu=\sup\{\Vert\llangle u,\psi\rrangle\Vert\mid \psi\in M_m(\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})),\Vert \psi\Vert_\lambda^\mu\leq 1\}$, where $\llangle u,\psi\rrangle$ denotes the matrix pairing, with the abuse of notation of identifying linear maps on $E\bigotimes F$ with bilinear maps on $E\times F$. If the quantity $\Vert u\Vert_\lambda^\mu$ is finite for every $u\in M_n(E\bigotimes F)$, then clearly it defines a norm on $M_n(E\bigotimes F)$ for every $n$, and the matrix norms satisfy Ruan's conditions for an operator space norm. The operator space structure obtained by doing so is called the \emph{dual operator space structure}. We denote by $E\bigotimes^{\lambda_\mu} F$ the completion with respect to this norm, which we call the $\lambda_\mu$ tensor product of $E$ and $F$. In the three cases mentioned above, we need to verify whether $\sup\{\Vert\llangle u,\psi\rrangle\Vert\mid \psi\in M_m(\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})),\Vert \psi\Vert_\lambda^\mu\leq 1\}$ is finite for each $u\in M_n(E\bigotimes F)$. Let us denote by $\bigotimes_{\lambda_\mu}$ the natural map $E\bigotimes_\mu M_n\times F\bigotimes_\mu M_n\to E\bigotimes F\bigotimes M_{k(n)}$ given by $\bigotimes_{\lambda_\mu}(e\otimes a,f\otimes b)=e\otimes f\otimes \lambda_n(a,b)$. In case 1, let $u\in M_n(E\bigotimes F)$. Then we can write $u$ in the form $u=\bigotimes_{\lambda_\mu}(e,f)$ for some $e=[e_{ij}]\in M_m^\mu(E)$ and $f=[f_{kl}]\in M_m^\mu(F)$. Let $\phi=[\phi_{pq}]\in M_p(\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C}))$.
Consider \begin{equation*} \begin{aligned} \left\llangle \bar{\phi},\textstyle\bigotimes_{\lambda_\mu}(e,f)\right\rrangle &= \left\llangle \left[\bar{\phi}_{pq}\right]_{p,q},\left[\sum_{j=1}^m e_{ij}\otimes f_{jl}\right]_{i,l} \right\rrangle\\ &= \left[\bar{\phi}_{pq}\left(\sum_{j=1}^m e_{ij}\otimes f_{jl}\right)\right]_{p,q,i,l}\\ &= \left[(\phi_{pq})_n(e,f)\right]\\ &= U\phi_n(e,f)V, \end{aligned} \end{equation*} for some invertible scalar matrices $U,V$ such that $\Vert U\Vert=1=\Vert V\Vert$. Hence \[ \Vert\llangle\phi,\textstyle\bigotimes_{\lambda_\mu}(e,f)\rrangle\Vert\leq \Vert U\phi_n(e,f)V\Vert\leq \Vert\phi\Vert_\lambda^\mu\Vert e\Vert_{M_n^\mu(E)}\Vert f\Vert_{M_n^\mu(F)}. \] A similar calculation shows the finiteness of $\sup\{\Vert\llangle u,\psi\rrangle\Vert\mid \psi\in M_m(\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})),\Vert \psi\Vert_\lambda^\mu\leq 1\}$ for any $u\in M_n(E\bigotimes F)$ in the other two cases also. Deducing even basic properties of this tensor product, such as associativity or the functorial property, is extremely challenging, though we can obtain commutativity of the tensor product by putting reasonable restrictions on $\lambda$. We say $\lambda$ is \emph{symmetric} if there exists a sequence of unitary matrices $(u_k)$, where $u_k\in M_k$, such that $\lambda_n(b,a)=u_{k(n)}^{-1}\lambda_n(a,b)u_{k(n)}$ for all $a,b\in M_n$, $n\in \mathbb{N}$. In Case 2 and Case 3 above, the given $\lambda$ is symmetric. Observe that $\Vert\lambda_n(b,a)\Vert=\Vert u_{k(n)}^{-1}\lambda_n(a,b)u_{k(n)}\Vert$, and hence it clearly follows that whenever $\lambda$ is symmetric, $\mathrm{CB}_\lambda^\mu(E\times F,G)=\mathrm{CB}_\lambda^\mu(F\times E,G)$ completely isometrically via the natural map $\Phi$ defined as $\Phi(\phi)=\bar{\phi}$ where $\bar{\phi}(f,e)=\phi(e,f)$. \begin{thm} If $\lambda$ is symmetric, then $\bigotimes^{\lambda_\mu}$ is commutative, i.e.
for any two operator spaces $E,F$ we have completely isometrically $E\bigotimes^{\lambda_\mu} F=F\bigotimes^{\lambda_\mu} E$. \begin{proof} As $\lambda$ is symmetric, we have completely isometrically $\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})\stackrel{\Phi}{=} \mathrm{CB}_\lambda^\mu(F\times E,\mathbb{C})$. But $\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})$ is nothing but the dual of $E\bigotimes^{\lambda_\mu} F$, and $\mathrm{CB}_\lambda^\mu(F\times E,\mathbb{C})$ is that of $F\bigotimes^{\lambda_\mu} E$. For $\phi\in\mathrm{CB}_\lambda^\mu(E\times F,\mathbb{C})$, we have \[ \langle\phi, e\otimes f\rangle=\phi(e,f)=\Phi(\phi)(f,e)=\langle\Phi(\phi),f\otimes e\rangle. \] Hence the linear isomorphism $\Psi:E\bigotimes F\to F\bigotimes E$ defined on elementary tensors as $\Psi(e\otimes f)=f\otimes e$ extends to a complete isometry between $E\bigotimes^{\lambda_\mu} F$ and $F\bigotimes^{\lambda_\mu} E$, because if $(u_k)$ is a Cauchy sequence in $M_n(E\bigotimes^{\lambda_\mu} F)$, then $(\Psi^{(n)}(u_k))$ is Cauchy in $M_n(F\bigotimes^{\lambda_\mu} E)$ with $\Vert u_k\Vert=\Vert\Psi^{(n)}(u_k)\Vert$. \end{proof} \end{thm} As we have mentioned in Section \ref{subsec_quant}, we could not find a reasonable sufficient condition for a tensor product to be quantizing. It would be interesting if one could completely characterize the quantizing tensor products. An analogue of $\lambda_\mu$-cb maps when $\mu$ is non-quantizable, additionally considering an arbitrary class $\Lambda$ of Banach spaces/operator spaces as discussed in Section \ref{sec_lambda-cb}, would be a generalization of the ideas that we discussed, though it can be more complicated. \section*{Acknowledgements} Research of the first author is supported by the Council of Scientific \& Industrial Research (CSIR), Govt. of India (Ref. No.: 09/045(1403)/2015-EMR-I). \bibliographystyle{amsplain}
\section{Introduction and Scope} Realizing the physics programs of the planned and/or upgraded high-energy physics (HEP) experiments over the next 10 years will require the HEP community to address a number of challenges in the area of software and computing. For this reason, the HEP software community has engaged in a planning process over the past two years, with the objective of identifying and prioritizing the research and development required to enable the next generation of HEP detectors to fulfill their full physics potential. The aim is to produce a Community White Paper (CWP) \cite{HSF2017} which will describe the community strategy and a roadmap for software and computing research and development in HEP for the 2020s. This activity is organised under the umbrella of the HEP Software Foundation (HSF). The LHC experiments and the HSF have been specifically charged by the WLCG project, but have reached out to other HEP experiments around the world throughout the community process in order to make it as representative as possible. The CWP process was carried out by working groups centered on specific topics. The topics of event reconstruction and software triggers are covered together in this document and have resulted from discussions within a single working group. The reconstruction of raw detector data and simulated data, and its processing in real time, represents a major component of today's computing requirements in high-energy physics. A recent projection \cite{Campana2016} of the ATLAS 2016 computing model results in $>85\%$ of the HL-LHC CPU resources being spent on the reconstruction of data or simulated events. This working group evaluated the most important components of next generation algorithms, data structures, and code development and management paradigms needed to cope with the highly complex environments expected in high-energy physics detector operations in the next decade.
New approaches to data processing were also considered, including the use of novel, or at least, novel to HEP, algorithms, and the movement of data analysis tasks into real-time environments. The remainder of this document is organized as follows. First we discuss how future changes including new and proposed facilities, detector designs, and evolutions in computing and software technologies change the requirements on software trigger and reconstruction applications. Second, we summarize current practices and identify the most limiting components in terms of both physics and computational performance. Finally we propose a research and development roadmap for the software trigger and event reconstruction areas including a survey of relevant on-going work for the topics identified in the roadmap. Of course any discussion of the computing challenges and priorities for software triggers and reconstruction necessarily overlaps with software domains covered by other CWP documents. Indeed, the critical role of real-time reconstruction in allowing the LHC data to be collected in the first place means that the requirements set out here will drive much of the R\&D across other areas, whether that be in the development of more performant math libraries, simplified but accurate detector descriptions, or new reconstruction algorithms based on machine learning paradigms. Such areas of overlap are noted wherever relevant in the text, and the reader is encouraged to refer to the other CWP documents for more details. \section{Nomenclature} This document will discuss software algorithms essential to the interpretation of raw detector data into analysis level objects in several contexts. Specifically, these algorithms can be categorized as: \begin{enumerate} \item {\bf Online}: Algorithms, or sequences of algorithms, executed on events read out from the detector in near-real-time as part of the software trigger, typically on a computing facility located close to the detector itself. 
\item {\bf Offline}: As distinguished from online, any algorithm or sequence of algorithms executed on the subset of events preselected by the trigger system, or generated by a Monte Carlo simulation application, typically in a distributed computing system. \item {\bf Reconstruction}: The transformation of raw detector information into higher level objects used in physics analysis. Depending on the experiment in question, these higher level objects might be charged particle trajectories (``tracks''), neutral or charged particle calorimeter clusters, Cherenkov rings, jets, and so on. A defining characteristic of ``reconstruction'', which separates it from ``analysis'', is that the quality criteria used in the reconstruction (for example, to minimize the number of fake tracks) should be general enough to be used in the full range of physics studies required by the experimental physics program. Reconstruction algorithms are also typically run as part of the processing carried out by centralized computing facilities. \item {\bf Trigger}: The online classification of events, performed with the objective of reducing the number of events which are kept for further ``offline'' analysis, the size of such events, or both. In this working group we were only concerned with software triggers, whose defining characteristic is that they process data without a fixed latency. Software triggers are part of the real-time processing path and must make processing decisions quickly enough to keep up with the incoming data, possibly with the benefit of substantial disk buffers. \item {\bf Real-time analysis}: The typical goal of a physics analysis is to combine the products of the reconstruction algorithms (tracks, clusters, jets...) into complex objects (hadrons, gauge bosons, new physics candidates...) which can then be used to infer some fundamental properties of nature (CP asymmetry, lepton universality, Higgs couplings...).
We define as real-time analysis any physics analysis step that goes beyond object reconstruction and is performed online within the trigger system, in certain cases using simplified algorithms to fit within the trigger system constraints. Real-time analysis techniques are so far quite experiment-specific. Techniques may include the selection and classification of all objects crucial to the calibration of the detector performance, evaluation of backgrounds, as well as physics searches that are otherwise impossible given limitations of data samples passing the trigger and saved for offline work. \end{enumerate} The online and offline algorithms have traditionally been viewed as related, but at least partly separate due to their differing goals and requirements. Because the online algorithms have to run on all events read out from the detector,\footnote{Of course online algorithms are a sequence, and some of the algorithms in this sequence may select events for processing by later algorithms. So not every online algorithm must run on every event read out from the detector, but when grouped together the ensemble of ``online'' algorithms process all events read out from the detector in near-real-time.} they typically must be executed on dedicated computer facilities (e.g. a server farm) located near to the detector in order to avoid prohibitive networking and data transfer costs.\footnote{Note that this is not an argument about latency. A hardware fixed latency trigger has a set maximum time to evaluate any single event. A software trigger has no such constraint, and instead has some average processing time to evaluate events, given by a combination of the processing power available, the size of the disk buffers which can store data waiting to be analyzed, and the network bandwidth available to move events to this disk buffer. 
} Such dedicated farms typically have a small (if any) amount of disk space to buffer events while waiting for them to be processed, and the online algorithms must therefore be tuned to strict timing and memory consumption requirements. In contrast, the offline algorithms run only on a subset of events which have been saved to long-term storage. They must still execute within a reasonable time so that their output is made available for analysis in a timely fashion, and fit within the computing resources available to the experiment, but these pressures are generally much less severe than for online processing. In addition, online algorithms often run in dedicated frameworks, with additional layers of safety or control compared to their offline counterparts. Increasingly, however, the difficulties of maintaining these parallel software environments are driving online algorithms to become special cases of the offline ones, configured for increased speed at the cost of precision but otherwise relying on the same underlying codebase. This development is also driven by the desire to reduce systematic uncertainties introduced by having separate online and offline reconstruction, in particular for ``precision'' measurements. This physics motivation to use offline algorithms online can lead to performance improvements in offline algorithms beyond what might have otherwise been achieved. In turn, such improvements free up offline resources, notably for producing the large samples of simulated events which will be needed in the HL-LHC period. We therefore assume that this trend will continue and intensify on the timescale considered in this document. \section{New Challenges anticipated on the 5-10 year timescale} This section summarizes the challenges identified by the working group for software trigger and event reconstruction techniques in the next decade.
We have organized these challenges into those from new and upgraded accelerator facilities, from detector upgrades and new detector technologies, from increases in anticipated event rates to be processed by algorithms (both online and offline), and from evolutions in software development practices. \subsection{Challenges posed by Future Facilities} Here we briefly describe some of the relevant accelerator facility upgrades and their impact on experimental data samples. These will be the basis of our discussion of how software trigger and event reconstruction algorithms must evolve. \begin{enumerate} \item LHC Run 3: Run 3 of the LHC is expected to last three years, starting in 2021, after the second long shutdown of the LHC. The LHC has already exceeded its design luminosity, and no significant further increase in instantaneous luminosity is expected for the CMS and ATLAS experiments. The CMS and ATLAS experiments expect to accumulate up to 300~fb$^{-1}$ of data each by the end of this run \cite{Bordry2016}. This is a nearly 10x increase over the samples of 13~TeV data collected through 2016. Both LHCb and ALICE will undergo major upgrades for Run 3 (described in the next section): LHCb will have an instantaneous luminosity five times higher than in Run 2 \cite{LHCb2012}, while ALICE will upgrade \cite{ALICE2013} its readout and real-time data processing in order to enable the full 50~kHz Pb-Pb collision rate to be saved for offline analysis. \item High-luminosity LHC (HL-LHC): The HL-LHC project \cite{Zimmerman2009} is currently planned to begin operations in 2026. It is an upgrade to the LHC, expected to result in an increase of up to a factor of 10 in instantaneous luminosity over the LHC design (so up to $10^{35}$~cm$^{-2}$s$^{-1}$). The collision energy of 14~TeV and 25~ns bunch spacing imply a considerable increase in the number of simultaneous collisions (pileup) seen by experiments.
Operating scenarios under study are pileup of 140 or 200 for ATLAS \cite{ATLAS2015} and CMS \cite{CMS2015} at a 25~ns bunch spacing, both possibly with luminosity leveling techniques that would provide a relatively constant luminosity throughout a fill. In addition, LHCb \cite{LHCb2017} is planning a consolidation during LS3 for initial HL-LHC operations, with improvements to various detector components, followed by a potential later Upgrade II with a pileup of 50 to 60, to run from roughly 2030 onwards. The HL-LHC project is expected to run for at least 10 years. \item Super KEK-B: The Super KEK-B facility \cite{Ohnishi2013}, together with the Belle-II experiment \cite{BelleII2010}, plans to achieve a 40-fold increase in instantaneous luminosity over that achieved by the previous generation of $e^+e^-$ colliders operating on or near the Upsilon(4S) resonance (KEK-B and PEP-II). Super KEK-B should begin production data taking in 2018 and run until at least 2024, with a goal of 50~ab$^{-1}$ of data collected (i.e., nearly 60 billion $B$-$\bar B$ pair events recorded). \item Long Baseline Neutrino Facility (LBNF): The LBNF project \cite{DUNE2016} is planning a high-intensity neutrino beamline from Fermilab to the SURF underground facility in South Dakota. Based on the NuMI beamline, LBNF plans a proton-beam power of 1.2~MW ($7.5\cdot 10^{13}$ protons per cycle), later upgraded to 2.4~MW ($1.5-2.0 \cdot 10^{14}$ protons per cycle). The facility expects to operate for 20 years starting in 2025. \item Linear Colliders (ILC and CLIC): Two electron-positron collider projects are currently under study, the International Linear Collider (ILC), to be built in Japan, and the Compact Linear Collider (CLIC) at CERN. The ILC will operate at a center-of-mass energy of 250-500~GeV. With a nominal luminosity of $1.47\cdot 10^{34}$~cm$^{-2}$s$^{-1}$ at 500~GeV and an expected raw data rate of 1~GB/s, the two planned experiments will each accumulate up to 10~PB/year.
Both colliders plan to run without any hardware trigger, requiring a fast and efficient prompt reconstruction and event building. \item Future Circular Collider (FCC): A 100~TeV facility is being studied as the next step in the energy frontier projects after HL-LHC \cite{Ball2014}. This could be realized as a 100~km circumference tunnel using 16~T magnets. Scenarios under discussion for such a facility include up to 1000 pileup events and the need to be hermetic up to a much larger pseudorapidity (e.g., $\eta=6$) than planned for the HL-LHC upgrades to the ATLAS or CMS experiments. \end{enumerate} Several common themes are apparent from these facility plans. Accelerator operating conditions continue to evolve towards higher intensity and higher energy, as required to bring new discovery potential. This means more complex and higher particle density environments from which signs of new physics must be extracted by trigger systems, event reconstruction algorithms, and analysis. This complexity brings new requirements and challenges to detector design as well as software algorithms, where detection efficiency needs to be maintained in more complex environments without increasing false-positive rates. For HL-LHC, the increased pileup of many interactions in a single crossing leads to several critical problems. The higher particle multiplicities and detector occupancies will lead to a significant slowdown in all reconstruction algorithms, from the tracking itself to the reconstruction in other devices such as the electromagnetic calorimeter and RICH detectors.
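The super-linear scaling behind this slowdown can be illustrated with a toy combinatorics estimate; the hit multiplicities below are hypothetical and purely illustrative, not taken from any experiment:

```python
# Toy model of track-seeding combinatorics (illustrative numbers only):
# if each pileup interaction leaves a roughly fixed number of hits on each
# tracking layer, the number of candidate two-hit seeds that a pattern
# recognition algorithm must test grows quadratically with pileup.

def seed_combinations(pileup, hits_per_interaction=10):
    """Candidate hit pairs between two layers, assuming (hypothetically)
    ~hits_per_interaction hits per interaction on each layer."""
    hits_per_layer = pileup * hits_per_interaction
    return hits_per_layer * hits_per_layer  # all cross-layer pairs

# Moving from Run-2-like pileup ~40 to an HL-LHC scenario of 200 increases
# the hit count by 5x, but the pairwise combinatorics by 25x.
ratio = seed_combinations(200) / seed_combinations(40)  # -> 25.0
```

Real seed finders prune candidates with geometric constraints, so the growth is milder in practice, but the qualitative message, that reconstruction cost rises much faster than linearly with pileup, is what drives the timing concerns discussed here.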
In addition to making the algorithms slower, pileup also leads to a loss of physics performance, for example: \begin{itemize} \item Reduced reconstruction efficiency in electromagnetic calorimeters; \item Increased association of tracks to wrong primary vertices; \item Reduced efficiency of identifying isolated electrons, muons, taus, and photons; \item Reduced selection efficiencies for electrons and photons; \item Reduced reconstruction efficiencies for hadronic tau decays and b-jets; \item Worse energy resolution for electrons, photons, taus, jets, and missing transverse energy; \item Worse reconstruction of jet properties (substructure for top/$W$-tagging, quark/gluon discrimination, etc.). \end{itemize} The central challenge for object reconstruction at HL-LHC is thus to maintain excellent efficiency and resolution in the face of high pileup values, especially at low object transverse momenta. Detector upgrades such as increases in channel density, high-precision timing, and improved detector geometric layouts are essential to overcome these problems. For software, particularly for triggering and event reconstruction algorithms, there is an additional need not to dramatically increase the event processing time. A comprehensive program of studies is required to assess the efficiencies and resolutions for various approaches in events with up to 200 pileup interactions. The increase in event complexity also brings a ``problem'' of overabundance of signal to the experiments, and specifically the software trigger algorithms. Traditional HEP triggers select a small subset of interesting events and store all information recorded in such events.
This approach assumes first that only a very small fraction of collisions potentially contain interesting physics, second that the features of such events will be strikingly different from the features of ``uninteresting'' events, and third that discarding any information in an event would prevent later correction of any defects in the real-time processing, or prevent a full understanding of the process of interest. The evolution towards a genuine real-time processing of data has been driven by a breakdown in the first two assumptions, and by technological developments which mean that the third assumption is no longer as worrying as it once was. An illustrative example is the search for low-mass dark matter at the LHC, such as the search for dark photons at LHCb \cite{Ilten2016}. Since interactions between dark photons and Standard Model (SM) particles have very low cross sections, the probability of producing dark photons in proton-proton collisions is extremely small. Thus, discovering them at the LHC will require an immense number of proton-proton collisions and a highly efficient trigger. The key problem is that when the dark photon lifetime is small compared to the detector resolution, which is the case in much of the interesting parameter space, there is an overwhelming irreducible SM background from off-shell photons producing di-muon events. The current LHCb trigger configuration discards about 90\% of potential dark photon decays, and a variety of other beyond-Standard-Model and SM signals where the LHCb detector itself has good sensitivity. Once this stage is removed (and the luminosity is increased), the potential signal rate will increase by a factor of 50. However, the irreducible SM-background rate will also increase significantly. Since offline storage resources are not expected to increase at nearly this rate, the data must be compressed within the online system much more than it is now.
In other words, techniques developed in the offline analysis for background rejection must be integrated into the online software trigger. Similar considerations apply to low-mass dijet searches at ATLAS/CMS, where the enormous background rate from quantum chromodynamics (QCD), which grows linearly with pileup, limits the study of physics at the electroweak scale in hadronic final states given current techniques. Trigger output bandwidth limitations mean that only a tiny fraction of lower-energy jets can be recorded, and hence the potential statistical precision of searches involving these low energy jets is vastly reduced. This is again relevant in the context of dark matter searches, here for light mediators between quarks and dark matter particles with low coupling strength. Further details are described in \cite{aTLAs2017, CMS2016}. These considerations also apply to other areas of the LHC physics program. For example, by Run~3 \cite{LHCb2014}, most LHC bunch crossings will produce charm hadrons at least partially in the LHCb acceptance, and all bunch crossings will produce strange hadrons detectable in the LHCb acceptance. Even more dramatically, in the HL-LHC era, a potential Upgrade II of LHCb will have to cope with multiple reconstructible charm hadron signals per bunch crossing, and even semileptonic beauty-hadron decays will become more abundant than can be stored offline. The ability of the LHC to continue improving our knowledge in these areas will therefore depend entirely on the ability to select the specific signal decay modes in question at the trigger level. Taken together, the overabundance of interesting signals, which mandates more complex event reconstruction at the trigger level, and the increasing event complexity, which makes the event reconstruction ever more expensive, will influence the design and requirements of trigger and reconstruction algorithms over the next decade.
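The storage-driven argument can be made concrete with a back-of-envelope bandwidth budget; all numbers below are illustrative assumptions rather than figures quoted in the text:

```python
# Sketch: with a fixed bandwidth to offline storage, the affordable trigger
# output rate is inversely proportional to the per-event size. Keeping only
# reconstructed trigger-level objects instead of full raw events therefore
# multiplies the recordable signal rate by the size-reduction factor.
# (All values are hypothetical.)

def affordable_rate_hz(bandwidth_bytes_per_s, event_size_bytes):
    return bandwidth_bytes_per_s / event_size_bytes

BANDWIDTH = 2e9        # assumed 2 GB/s to offline storage
RAW_EVENT = 1e6        # assumed 1 MB full raw event
REDUCED_EVENT = 1e4    # assumed 10 kB of trigger-level objects only

raw_rate = affordable_rate_hz(BANDWIDTH, RAW_EVENT)          # -> 2 kHz
reduced_rate = affordable_rate_hz(BANDWIDTH, REDUCED_EVENT)  # -> 200 kHz
```

Under these assumed numbers, the same storage bandwidth supports a hundred times more signal candidates once only the reduced event record is kept, which is the essence of the compression argument above.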
\subsection{Challenges and Opportunities from Evolution in Experimental apparatus} A number of new detector and hardware trigger concepts are proposed on the 5-10 year timescale in order to help overcome the challenges identified above. In many cases, these new technologies bring novel requirements to software trigger and event reconstruction algorithms. These include: \begin{enumerate} \item High-granularity calorimetry: Experiments including CMS (for HL-LHC), ILC, CLIC and FCC are proposing very high granularity calorimeters in order to better separate showers from nearby particles in a high-density (i.e., high-pileup) environment and thereby improve the jet energy resolution. This granularity brings significant computational challenges, as there is much more information to process in order to fully take advantage of these devices. Efficient algorithms \cite{Marshall2015,Gray2016} are needed to fully exploit the ambiguity-reduction capability of the signals from millions of channels within finite computing times. \item Precision timing detectors \cite{Gray2017} for charged particles: Experiments including ATLAS, CMS and LHCb are pursuing timing detectors with precision approaching 30~ps. This information is another tool for dramatically reducing the effect of pileup on triggering and reconstruction algorithms, in particular in the areas of tracking and vertex finding. Integrating timing information into Kalman filtering and vertex disambiguation algorithms can improve both physics performance and the time needed to process events online and offline. Integrating current tracking detectors and new timing detectors into a performant 4D detection system will necessitate the development of new reconstruction algorithms that make use of both spatial and timing information.
\item Hardware triggers based on tracking information: Hardware trigger systems designed to identify tracks down to 2 GeV of transverse momentum are a valuable tool for ATLAS and CMS \cite{ATLAS2015,CMS2015}. This technology will enhance the ability to trigger on a range of physics signatures including isolated leptons, multi-jet signatures and displaced vertices, as well as mitigating the effects of pileup on these objects. Once an event is triggered, the results obtained from these triggers can also be a valuable tool in the software trigger, for example by seeding and therefore speeding up the subsequent reconstruction. Similar systems, with an ability to go below 500 MeV of transverse momentum are also under consideration for the LS3 Consolidation and Upgrade II of LHCb, and could be particularly valuable in the study of strange hadrons if the thresholds could be reduced down to 100 MeV of transverse momentum. \item Data streaming techniques: Experiments with no hardware trigger allow a software trigger to see all data before events are rejected from further processing. There is a clear advantage in physics capability if the data streaming capability is sufficient and if software triggers are efficient, effective and reliable enough for the task. There is always an advantage if additional algorithms can be run on an event before a decision must be taken about its importance. In the case of LHCb this means a facility and algorithms that are capable of processing with a sustained throughput of 30~MHz \cite{LHCb2014}. Similarly, the Alice experiment plans a 50~kHz interaction rate with no, or simple minimum bias, hardware trigger, and will stream 3~TB/s from the TPC in a common online-offline computing system \cite{ALICE2015}. 
\end{enumerate} \subsection{Challenges from Event rates and real-time processing} Trigger systems for next-generation experiments are evolving to be more capable, both in their ability to select a wider range of events of interest for the physics program of their experiment, and in their ability to stream a larger rate of events for further processing. ATLAS and CMS both target systems where the output of the hardware trigger system is increased by 10x over the current capability, up to 1~MHz \cite{ATLAS2015,CMS2015}. In other cases, such as LHCb, the full collision rate (between 30 and 40~MHz for typical LHC operations) will be streamed to real-time or quasi-real-time software trigger systems. It is interesting to note that because the ATLAS/CMS events are O(10) times larger than those of LHCb (roughly O(1) vs O(0.1) MB/event), the resulting data rates are rather similar, namely 1-5~TB/s. This enhanced capability naturally increases the demands on software trigger algorithms and offline reconstruction algorithms. In many cases, the current trigger bandwidth of experiments is limited by the offline processing and storage capabilities of an experiment rather than its ability to physically write out data to disk or tape for further processing. This can be due either to the time to fully reconstruct events for analysis (corresponding to a fixed set of CPU resources) or to the size of the analysis data itself (corresponding to a fixed set of disk resources). Current experiments are therefore constantly working to reduce the CPU needs to reconstruct events and the storage needs for analyzing them, through a combination of code improvements (refactoring, vectorization, exploitation of optimized instruction sets) or through entirely rewriting algorithms. This is an ongoing process that continues to yield throughput improvements, aided by improved code analysis tools, modern compilers and other tools.
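The similarity of these input data rates follows directly from the order-of-magnitude rates and event sizes quoted above, as a quick arithmetic check shows:

```python
# Arithmetic check of the input data rates quoted in the text.

def input_rate_tb_per_s(event_rate_hz, event_size_mb):
    """Data rate into the software trigger, in TB/s."""
    return event_rate_hz * event_size_mb / 1e6  # MB/s -> TB/s

# ATLAS/CMS-like: ~1 MHz hardware-trigger output of ~1 MB events.
atlas_cms = input_rate_tb_per_s(1e6, 1.0)  # -> 1 TB/s
# LHCb-like: ~30 MHz full collision rate of ~0.1 MB events.
lhcb = input_rate_tb_per_s(30e6, 0.1)      # -> 3 TB/s
```

Both results land in the 1-5~TB/s range given in the text: a tenfold difference in event rate is largely cancelled by a tenfold difference in event size.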
For many experiments, there is also a potential tradeoff between physics quality and CPU needs. A typical example is that track reconstruction requires less CPU if the transverse momentum threshold for tracks to be identified is raised, thereby reducing the overall combinatorics. This sort of CPU improvement is almost never desirable as it reduces the overall physics output. Instead, software trigger and reconstruction applications are pressed to include more and more algorithms over time. Typical examples for ATLAS and CMS are new jet-finding, isolation, or pileup subtraction approaches, and, more generally, algorithms developed targeting specific physics use cases, for example in the trigger-level search for low-mass hadronic resonances in ATLAS described in \cite{Abreu2014}. Conversely, the need to reconstruct higher multiplicity, or increasingly soft, decays challenges LHCb's applications. Recent examples of significant storage reductions for the analysis data tier include the CMS MiniAOD \cite{Petrucciani2015} and the ATLAS xAOD \cite{Eifert2015}, both deployed for LHC Run 2 analysis. Improvements can be achieved by refinements in physics object selection (e.g., saving less uninteresting information), and by enhanced compression techniques (using lossless or lossy methods). Real-time analysis is the only solution for those signals which are so abundant that they cannot all be saved to disk, or for which discriminating against backgrounds requires the best possible detector calibration, alignment, reconstruction, and analysis. In this case, some or all of the output of the software trigger system also serves as the final analysis format. Its development is justified by the need to conduct the broadest possible program of physics measurements with our existing detectors.
This is critical for two reasons: first, because we do not want to miss any signatures of New Physics around the electroweak scale, whether direct or indirect, but also because, even if the New Physics scale lies beyond the reach of current detectors, we must probe the widest possible parameter space in order to motivate and guide the design of future colliders and experiments. This is particularly true given the cost and long timescale of such future facilities. An early version of this approach consisted of keeping only a limited set of physics objects (e.g., jets) as computed in real-time processing, and proof-of-concept implementations have existed since LHC Run~1 \cite{Aaij2016,Abreu2014,CMS2016}. In order to perform precision measurements in real time, however, it is critical to be able to keep data long enough (hours or days, depending on the experiment) to perform quasi-real-time calibrations and a final offline-analysis-quality reconstruction in the trigger system itself. This approach was commissioned by LHCb in 2015. As a result, roughly one third of the LHCb experiment's Run~2 trigger selections now use the real-time reconstruction to reduce the amount of data kept for further analysis. Advantages of these approaches include an order of magnitude reduction in data volume from not saving the raw detector data, a potential reduction of the systematic errors that arise from differences between the algorithms or calibrations used in the online and offline processing, and a reduction or elimination of the need for offline processing. These advantages allow an entire category of physics to be probed by HL-LHC experiments that would not otherwise be considered.
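One simple example of the lossy compression methods mentioned in this context is precision reduction; the sketch below (standard library only, with a hypothetical transverse-momentum value) re-encodes a quantity stored as an 8-byte double into a 2-byte half-precision float, trading a per-mille-level resolution loss for a 4x size reduction:

```python
import struct

# Lossy compression by precision reduction (illustrative sketch, not any
# experiment's actual scheme). A quantity stored at double precision costs
# 8 bytes; re-encoding it as an IEEE 754 half-precision float costs 2 bytes,
# at the price of a per-mille-level loss of resolution.

def compress(value):
    return struct.pack('e', value)    # 'e' = 2-byte half-precision float

def decompress(blob):
    return struct.unpack('e', blob)[0]

pt = 45.67                            # hypothetical track pT in GeV
blob = compress(pt)
restored = decompress(blob)
reduction = struct.calcsize('d') // struct.calcsize('e')  # -> 4
```

Whether such a precision loss is acceptable depends on the physics use case, which is exactly why real-time analysis choices must be planned against eventual systematics studies.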
Among the challenges posed by these approaches are the need for very robust event reconstruction algorithms, the need to derive detector calibrations sufficient for final analysis within this short processing window, and the need to plan analyses sufficiently in advance of data taking so that the choices made in the real-time analysis will be robust against eventual systematics studies. \subsection{Challenges from Evolutions in Computing technology} This section summarizes recent, and expected, evolutions in computing technologies. These are both opportunities to move beyond commodity x86 technologies, which HEP has used very effectively over the past 20 years, and significant challenges to continue to derive sufficient event processing throughput per cost to enable our physics programs at reasonable computing cost. A full description of this technology evolution and its effect on HEP is beyond the scope of this document \cite{Bird2014}. Here we identify the main technology changes driving our research and development in the area of software trigger and event reconstruction: \begin{itemize} \item {\bf Increase of SIMD capabilities}: The size of vector units on modern commodity processors is increasing rapidly. While not all algorithms can easily be adapted to benefit from this capability, large gains are possible where algorithms can be vectorized. Essentially all HEP codes need modifications, or large-scale refactoring, to effectively utilize SIMD capabilities. \item {\bf Evolution towards multi- or many-core architectures}: The current trend is to move away from ever faster processing cores towards more power-efficient and more numerous processing cores. This change has already broken the traditional ``one-core-one-event'' model in many HEP frameworks and algorithms.
As core counts increase, a larger number of algorithm developers, instead of just those developing the most resource-intensive algorithms, will need to incorporate parallelism techniques into their algorithm implementations. \item {\bf Slow increase in memory bandwidth}: Software trigger and event reconstruction applications in HEP are very memory-intensive, and I/O access to memory in commodity hardware has not kept up with CPU capabilities. To evolve towards modern architectures, including the effective use of hierarchical memory structures, HEP algorithms will need to be refactored or rewritten to considerably reduce the required memory per processing core. \item {\bf Rise of heterogeneous hardware}: Evolution in HEP algorithms has long taken advantage of a single dominant commodity computing platform (x86 running Linux). More and more studies have shown, or are investigating, the throughput benefits of using low-power systems, GPUs, or FPGA systems for critical pieces of processing. Software trigger and event processing algorithms may benefit the most in HEP from these technologies, if they can be adapted to the requirements of using such systems effectively. \item {\bf Possible evolution in facilities}: HEP facilities are likely to evolve both due to the increased architectural variability of affordable hardware and due to evolution in data science techniques. One example of the latter is an analysis center whose design is driven by data science techniques and technologies. These technologies (e.g., Hadoop, Spark) are under investigation in HEP for analysis and may change the way HEP data centers are resourced. A particular example of how this will impact trigger or reconstruction algorithms is that of physics object identification algorithms. These are frequently rerun by analysts in order to include the most recent version developed by the collaboration in their analysis.
\end{itemize} Evolution in computing technology is generally a slow but continual process. However, architectures available today can provide the necessary development platforms for trigger and event reconstruction developers to adapt codes to be better suited to future architectures. \subsection{Challenges from Evolutions in Software technology} The status and evolution of software development in HEP is the subject of the Software Development CWP working group. In this section, we briefly discuss some of the issues and opportunities of particular relevance to software trigger and event reconstruction work. The move towards open-source software development and continuous integration systems brings a number of important opportunities to assist developers of software trigger and event reconstruction algorithms. Continuous integration systems have already brought the ability to automate code quality and performance checks, both for algorithm developers and code integration teams. Scaling these up to allow for sufficiently high-statistics checks is among the still outstanding challenges. While it is straightforward to test changes where no regression is expected, fully developed infrastructure for assisting developers in confirming the physics and technical performance of their algorithms during development is a work in progress. As the timescale for experimental data taking and analysis increases, the challenges of legacy code support increase. In particular, it seems unaffordable to rewrite all of the software developed by the LHC experiments during the long shutdown preceding HL-LHC operations. Thus, as the HL-LHC run progresses, much of the code base for software trigger and event reconstruction algorithms will be 15-30 years old. This implies an increased need for sustainable software development and investment in software education for experimental teams.
Code quality demands increase as traditional offline analysis components migrate into trigger systems, or more generically into algorithms that can only be run once. As described above, this may be due either to the prohibitive cost of rerunning algorithms over large data sets, or to not having retained sufficient data to rerun algorithms (e.g., not retaining the full raw data). Algorithms in the software trigger and event reconstruction areas are very frequently contributed to by a large community of physicists rather than expert programmers. In many cases, the most sensitive algorithms may be checked and optimized by programming experts. This has so far satisfied the need to reduce the total computing resources needed by experiments, as typically only a few algorithmic components dominate the overall computing need. However, this approach is currently impossible to carry out across the entire reconstruction code stack. As the complexity (and number) of real-time algorithms increases, the need for training as well as code validation and regression checking increases considerably. These challenges are further complicated by a growing diversity of developers in the software trigger and event reconstruction areas. There is a generally growing gap between the basic programming techniques which students learn in their undergraduate courses and the state-of-the-art programming techniques required to fully exploit the power of emerging parallel hardware architectures. Software development methods and programming techniques evolve particularly quickly, and developers are often self-taught on current techniques. The experiment software environment must facilitate contributions from a range of developers while adjusting to the challenges of a more and more complex online and offline software toolkit.
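The kind of automated physics-regression check discussed here can be sketched as follows; the metric names, reference values, and tolerances are hypothetical, chosen only to illustrate the mechanism a continuous integration system might apply to a candidate algorithm version:

```python
# Sketch of a CI physics-regression gate: compare summary metrics of a
# candidate reconstruction against a reference release and report any
# metric that degrades beyond its tolerance. (All values hypothetical.)

REFERENCE = {"track_efficiency": 0.985, "fake_rate": 0.012, "events_per_s": 120.0}
# Negative tolerance: metric may drop by at most |tol| (higher is better).
# Positive tolerance: metric may rise by at most tol (lower is better).
TOLERANCE = {"track_efficiency": -0.002, "fake_rate": 0.002, "events_per_s": -5.0}

def regression_failures(candidate):
    failures = []
    for name, ref in REFERENCE.items():
        delta = candidate[name] - ref
        tol = TOLERANCE[name]
        degraded = delta < tol if tol < 0 else delta > tol
        if degraded:
            failures.append(name)
    return failures

ok = regression_failures(
    {"track_efficiency": 0.984, "fake_rate": 0.013, "events_per_s": 118.0})
bad = regression_failures(
    {"track_efficiency": 0.975, "fake_rate": 0.020, "events_per_s": 100.0})
```

In a production setting the metrics would come from high-statistics validation samples, and the gate would run automatically on each proposed change before integration.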
\section{Current Approaches} In this section, we summarize current practices and resource requirements for software trigger and event reconstruction implementations and infrastructure. These include the scale of computing resources required for these activities, the types of data structures and data contents kept for custodial storage and analysis use, and calibration requirements across different experiments in high-energy physics. For each topic, we briefly introduce the issues and then summarize experiment-specific implementation details where available. This is meant to provide a representative range of approaches and to illustrate where the most challenging aspects lie. Approaches from all experiments are not always included, as this information is summarized from working-group contributions. \subsection{Computing Resource Requirements} The table below summarizes the online and offline computing requirements of current and future experiments. Not all quantities are relevant for all experiments, and many items evolve depending on where an experiment is in its lifecycle. For example, disk requirements depend both on the number of events collected in a given running period and on how long the experiment has been collecting data still of relevance for current analysis activities. More information about each online and offline computing system can be found in the references \cite{ALICE2015, Allton2017, ATLAS2015, CMS2015, DUNE2015, DUNE2017, LHCb2017, Miyamoto2015, Richter2016}. The columns are defined as follows: \begin{enumerate} \item Online CPUs: Approximate size (or fair share, in the case of a shared resource) of the real-time processing facility available for use by the experiment. \item Offline CPUs: Approximate size (or fair share, in the case of a shared resource) of the offline processing facility available for use by the experiment.
For LHC experiments we have given the 2017 pledged CPU amount, including both Tier-0 and distributed Tier-1 and Tier-2 resources. These resources are used for Monte Carlo simulation, analysis and event reconstruction processing; thus, these numbers only provide an indication of the CPU requirements for event reconstruction. \item Input rate to software trigger: Approximate rate of events seen by the software trigger. Typically this is the output rate of the hardware trigger. \item Event rate for analysis: Approximate rate of events saved by the experiment for custodial storage. Typically this is the rate of events saved by the software trigger system. \item RAW data size: Approximate size of typical physics events saved for offline processing. While the analysis-tier size for an experiment is only loosely connected to the raw data size, this quantity does indicate the total data volume for offline event processing. As discussed above, trigger systems also frequently save reduced data formats instead of the full raw data; these are typically much smaller than the sizes provided here. \item Analysis data size: Approximate size of typical physics events in the format used most frequently for analysis (e.g., xAOD in ATLAS, miniAOD in CMS). \item Offline disk: The total disk available for production and analysis activities. This is expected to scale primarily with the total size of the analysis data.
\end{enumerate} \afterpage{ \clearpage \begin{landscape} \begin{table} \begin{tabular}{|l|p{20mm}|p{20mm}|p{20mm}|p{20mm}|p{20mm}|p{20mm}|p{20mm}|} \hline Experiment & Online CPUs (kHS06) /GPUs & Offline CPUs (kHS06) & Input rate to software trigger (kHz) & Event rate for analysis (kHz or GB/s) & RAW data size (MB/evt) & Analysis data size (MB/evt) & Offline disk (PB) \\ \hline CMS (2017) & 500& 1729& 100& 1& 1.5& $\sim$0.03& 123 \\ \hline CMS (HL-LHC) & & & 750& 5-7.5& 4.2-4.6& & \\ \hline ATLAS (2017) & 28k \newline CPU cores& 2194& 100& 1& 1& & 172\\ \hline ATLAS (HL-LHC) & & & 1000& 10& 5& & \\ \hline LHCb (2017)& 25k \newline CPU cores& 413& 1000& 0.7 GB/s& 0.07& & 35\\ \hline LHCb (Run 3-4) & & & 30000& $>2$ GB/s& 0.13& & \\ \hline ALICE (2017) & 9k \newline CPU cores, \newline 180 GPUs & 805& 0.3\newline (central \newline Pb-Pb)& & 100\newline (central \newline Pb-Pb) & & 66 \\ \hline ALICE (Run 3) & & & 50& 20 GB/s& 60& & 60/yr \\ \hline Belle II (2021) & 6400 \newline CPU cores& 600& 30& 10& 0.1& 0.01& 60\\ \hline ProtoDUNE& & 5& & 0.6 GB/s& 60& & \\ \hline DUNE (0 supp)& & & & 11 MHz& $1.5\times 10^{-4}$& & 54/yr \\ \hline ILC (ILD,SiD)& 250& 400& 1 GB/s& 1 GB/s& 1& 0.01& 150 \\ \hline \end{tabular} \caption{Online and offline computing requirements of current and future experiments. The LHCb resource requirements for Runs 3 and 4 are currently under review, and will be agreed following publication of a computing model document later in 2018; the figures given here are minimal requirements below which physics performance is already known to degrade in an unacceptable manner.} \end{table} \end{landscape} \clearpage } This table illustrates that there is substantial commonality in the scale of computing needed by the current generation of experiments. It is also clear that there is a large increase in the scale of data expected for the next generation of experiments, both in terms of event rate and event size.
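The bandwidth and yearly data volume implied by the rates and event sizes in the table can be cross-checked with back-of-the-envelope arithmetic. The sketch below uses the CMS 2017 numbers (1 kHz to storage, 1.5 MB/event) purely as an illustration; the assumed $7\times10^{6}$ live seconds per year is a common rule of thumb, not a figure from the table.

```python
def storage_bandwidth_gbs(rate_hz, event_size_mb):
    """Bandwidth to custodial storage in GB/s."""
    return rate_hz * event_size_mb / 1000.0

def yearly_volume_pb(rate_hz, event_size_mb, live_seconds=7e6):
    """Approximate raw-data volume per year in PB, for an assumed number
    of live data-taking seconds (7e6 s/year is a common rule of thumb)."""
    return rate_hz * event_size_mb * live_seconds / 1e9

# CMS 2017 from the table: 1 kHz to storage, 1.5 MB/event.
print(storage_bandwidth_gbs(1000, 1.5))  # 1.5 GB/s
print(yearly_volume_pb(1000, 1.5))       # 10.5 PB of raw data per year
```

Scaling the same arithmetic to the HL-LHC column (7.5 kHz at 4.6 MB/event) gives more than an order of magnitude increase, consistent with the qualitative statement above.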
\subsection{Analysis Data Tiers and Data Structures} Here we summarize the data structures either consumed by, or produced by, the software trigger and event reconstruction applications in some current HEP experiments. These are meant to be representative of current practices, not inclusive of all approaches used in HEP. Historically, these are the raw data format and the analysis data format, respectively. Recent work has led to some evolution in the data structures used by some experiments. We summarize the approach taken and the issues observed by different experiments. \vskip 0.5cm \noindent {\bf LHCb}: Major advances were made in the LHCb trigger system in Run 2. Increasing the number of (logical) cores used in the trigger farm to 50000, deploying faster reconstruction algorithms, and data caching (described below) now allow the multilayer software trigger system to execute the offline reconstruction algorithms. LHCb now splits its 0.7~GB/s of data written to permanent storage into two distinct streams. The full stream persists all low-level data, as in Run 1. The turbo stream persists a user-defined selection of high-level information, typically a small subset of the reconstructed objects in each selected event. This makes it possible to keep a much larger number of events, with the cost being that the lower-level information is lost. Several LHCb publications have now demonstrated that this reduced-file-size approach works in practice, with a typical event-size reduction of one order of magnitude with respect to the raw event. \vskip 0.5cm \noindent {\bf CMS}: The data formats used by CMS for event data are built from custom-written C++ classes organized into a ROOT TTree-based structure by the CMS framework. The RAW data is largely composed of a packed byte format, organized and segmented according to the layout of the detector readout electronics. Analysis data are made up of more complex objects, used also in the reconstruction and analysis algorithms themselves.
Particle-flow constituent objects and high-level physics objects (e.g., muons, electrons, jets) are built from classes derived from a common base class, to make combining objects straightforward and efficient. Formats in Run~2 have evolved to include more compact data structures (e.g., the miniAOD format) and, for a few trigger streams, the concept of ``scouting''. Instead of keeping the raw detector data, the scouting data structures keep only a small set of high-level objects as derived in the CMS software trigger. This makes the data format very compact (about 1\% of a full raw-data event) and allows a very high rate of data to be kept. CMS has also used data structures with lossy-compression algorithms (e.g., saving variables using fewer than 32 bits) to reduce the size of its miniAOD data tier, now widely used in analysis. This type of approach is widely applicable where the detector configuration or the reconstruction algorithm itself limits the resolution to be far worse than that implied by the usual 32-bit or 64-bit data types. While the custom structures developed by each experiment so far are unlikely to be completely generalizable, the field would still benefit from a field-standard or data-science-standard library containing those parts which are in common. CMS is also investigating how less complex data structures can allow for even smaller data formats that match the needs of the final analysis stages. \vskip 0.5cm \noindent {\bf ATLAS}: In Run 2, a reduced analysis object stream has been added, containing only high-level trigger jet objects, to be used for the search for low-mass hadronic resonances in trigger-level analyses.
This stream requires roughly 1\% of the full HLT bandwidth, allowing data-taking rates comparable to the full offline physics stream \cite{ATLASTwiki}. \subsection{Most Resource-Consuming Algorithms} {\bf ALICE}: Reconstruction and compression of data from the ALICE TPC is the dominant component of the ALICE event reconstruction and the driving feature behind the design of the O2 facility \cite{ALICE2015} for event reconstruction starting in LHC Run 3. The reconstruction of particle trajectories from the ALICE TPC is done by cellular automaton and Kalman filter algorithms. In the Run 2 high-level trigger, these algorithms run on GPUs as part of a pipelined system. \vskip 0.5cm \noindent {\bf LHCb}: The specific forward geometry of LHCb, with a dipole magnet, and its use of RICH detectors for particle identification have a significant impact on both the time cost of reconstruction algorithms and their utility in triggering and real-time analysis. The largest components are pattern recognition and Kalman fitting of tracks, which run on all events, and calorimeter and RICH reconstruction, which run on a subset of events. The overall CPU cost for these algorithms is roughly the same, considering the different fractions of events they run on. The full tracking cost depends greatly on the transverse-momentum range of the tracks being searched for. The RICH reconstruction is expensive, but because many decays of interest include kaons, its results reduce the time spent in later processing when combining tracks into displaced vertices. \vskip 0.5cm \noindent {\bf CMS}: The computing resource needs of trigger and reconstruction algorithms are a strong function of event complexity (or pileup) in CMS. In Run 2 conditions, track pattern recognition is the largest single CPU contributor in the CMS reconstruction.
Kalman fitting \cite{CMS2014}, particle-flow algorithms \cite{CMS2017} and HCAL local reconstruction algorithms also account for a considerable fraction of the total offline reconstruction CPU budget. These algorithms are similarly important in online trigger configurations, where a simplified particle-flow technique is used in the final event selections. As they are carried out for each event online, the calorimetric raw-data unpackers and the algorithms for pileup rejection at the detector level (e.g., to form reconstructed detector clusters) are also quite important online. Both online and offline, CMS has implemented a low track transverse-momentum threshold (200-300 MeV) in order to achieve the best possible particle-flow performance. This has a large impact on the CPU needs of CMS, not only in tracking but elsewhere in the reconstruction, where the algorithm complexity scales with the number of tracks reconstructed. For the HL-LHC, CMS has the additional computational challenge of clustering and particle flow with its high-granularity calorimeter. \subsection{Calibration Techniques and Requirements} {\bf LHCb}: The full detector alignment and calibration of all sub-detectors must continue to be performed continuously in real-time, and the real-time monitoring framework must also be maintained in order to enable problems to be quickly identified and fixed. The calibrations are performed \cite{LHCbRTCalib2016} at different frequencies: \begin{itemize} \item A few times per year: muon system alignment, RICH mirror alignment, fine calorimeter calibration \item Start of each LHC fill: vertex detector alignment, full tracker alignment, straw-tube tracker gas calibration, coarse calorimeter calibration \item Multiple times per fill: RICH refractive-index calibration \end{itemize} In order to perform the calibrations in real-time, LHCb exploits the trigger farm to run the calibration tasks in parallel with trigger tasks. This enables, e.g.,
the full tracker alignment to be completed in around 8 minutes. The calibration jobs are automated and return an updated set of constants, which are compared to the result of the previous calibration; if the change goes beyond a certain threshold, the constants are updated and an expert is alerted to validate the update. The jobs themselves are built on the same codebase as the rest of the LHCb reconstruction, and the monitoring uses the same setup as the general detector monitoring. \vskip 0.5cm \noindent {\bf CMS}: The prompt calibration loop is an automated process that runs as part of the CMS Tier-0 facility, using a small subset of the total data volume saved by the CMS trigger. Example calibrations include the assessment of bad detector channels for each run as well as the global tracker alignment. Results are typically available within 24 hours and are thus used in the full prompt reconstruction processing of the CMS data. Other calibrations, including the calorimeter light-output calibrations, are performed outside of the Tier-0 infrastructure, but at regular intervals in time for the prompt reconstruction processing. Finally, calibrations requiring higher statistics are performed either as the LHC data-taking period progresses (i.e., once adequate statistics are obtained) or after the end of the run, to achieve the ultimate detector performance required by analysis (e.g., tracker module-level alignment and the full energy-scale calibration of the electromagnetic calorimeter). \vskip 0.5cm \noindent {\bf ATLAS}: For real-time analyses with trigger jets, reconstruction and calibration are kept as close as possible to the full reconstruction. Dedicated calibrations mitigating the effect of pileup and restoring the hadronic energy scale are applied to these partially recorded events, making the properties of jets reconstructed at the HLT comparable to those of jets reconstructed from full events.
These calibrations account for differences with respect to fully recorded jets, as well as for the missing information from detectors other than the calorimeters. During the first part of Run 2, only calorimeter information was available to the HLT for jet reconstruction and calibration; it is foreseen that tracking information can be added already during Run 2 using the Fast TracKer (FTK). The jet reconstruction procedure for trigger jets is summarized in \cite{Abreu2014}. \vskip 0.5cm \noindent {\bf ALICE}: Real-time calibration and data-quality monitoring are critical parts of the O2 system for Run 3, as the data volume processed is too large for this work to be redone in a later, truly offline, processing. \section{Research and Development Roadmap} This section describes the research and development roadmap proposed by the working group. We identify seven broad areas as critical for software trigger and event reconstruction work over the next decade. These are: \begin{itemize} \item Enhanced vectorization programming techniques \item Algorithms and data structures to efficiently exploit many-core architectures \item Algorithms and data structures for non-x86 computing architectures (e.g., GPUs, FPGAs) \item Enhanced quality assurance (QA) and quality control (QC) for reconstruction techniques \item Real-time analysis \item Precision physics-object reconstruction, identification and measurement techniques \item Fast software trigger and reconstruction algorithms for high-density environments \end{itemize} For each area, we identify the overall goals of the research, as well as short-, medium- and long-term milestones. These can be viewed as goals that should be achieved in advance of next-generation experiments. The short- and medium-term milestones are intended to be achievable on a timescale that informs software, computing and trigger technical design reports where possible.
\subsection{Roadmap area 1: Enhanced vectorization programming techniques} \noindent {\bf Motivation}: HEP-developed toolkits and algorithms typically make poor use of the vector units on commodity computing systems. Improving this will bring speedups to applications running on both current computing systems and most future architectures. \vskip 0.5cm \noindent {\bf Overall goal}: To evolve current toolkit and algorithm implementations, and best programming techniques, to better use the SIMD capabilities of current and future computing architectures. \vskip 0.5cm \noindent {\bf Short-term goals}: Identify best practices and documented examples of how HEP code has been improved to increase vectorization performance, via a series of developer meetings. Adopt and apply industry tools for code performance analysis to identify the toolkits of particular importance for investigation. \vskip 0.5cm \noindent {\bf Medium-term goals}: Initiate work to make measurable improvements to the vectorization performance of HEP code stacks via software toolkit improvements or rewrites. Continue developer discussions while toolkits are improved, to facilitate co-development where common approaches to improving different toolkits are possible. \vskip 0.5cm \noindent {\bf Long-term goals}: Demonstrate and facilitate widespread experiment adoption of new or improved toolkits. \subsection{Roadmap area 2: Algorithms and data structures to efficiently exploit many-core architectures} \noindent {\bf Motivation}: Computing platforms are generally evolving towards having more cores in order to increase processing capability. This evolution has resulted in multi-threaded frameworks in use, or in development, across HEP. Algorithm developers can further improve throughput by making their code thread-safe and enabling the use of fine-grained parallelism.
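As a toy illustration of the event-level parallelism such multi-threaded frameworks expose, the sketch below processes independent events concurrently with a thread pool; a production framework would additionally schedule individual algorithms within an event and manage shared services. All names and the toy "tracking" step are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    """Stand-in for a per-event reconstruction algorithm.
    Each event is independent, so no shared mutable state is touched,
    which is what makes this trivially thread-safe."""
    hits = event["hits"]
    # Toy 'tracking': pair up adjacent hits into track candidates.
    tracks = [(hits[i], hits[i + 1]) for i in range(len(hits) - 1)]
    return {"event_id": event["event_id"], "n_tracks": len(tracks)}

# Toy input: event i carries i+2 hits.
events = [{"event_id": i, "hits": list(range(i + 2))} for i in range(8)]

# Event-level parallelism: each worker takes a whole event;
# map() preserves the input ordering of the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))

print([r["n_tracks"] for r in results])  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The harder problems described in this area arise precisely where this independence assumption breaks down, e.g., for algorithms that update shared calibration state or reuse large buffers between events.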
\vskip 0.5cm \noindent {\bf Overall goal}: To evolve current event models, toolkits and algorithm implementations, and best programming techniques, to improve the throughput of multi-threaded software trigger and event reconstruction applications. \vskip 0.5cm \noindent {\bf Short-term goals}: Identify the key lessons of the work done to make current toolkits thread-safe or at least ``thread friendly''. Understand and document what conceptual limitations a thread-safe framework imposes on reconstruction and selection logic, and what kind of workflow scheduling it requires. Identify which classes of reconstruction and selection algorithms are able to work efficiently in a multi-threaded framework and which are not. \vskip 0.5cm \noindent {\bf Medium-term goals}: Document best practices for thread-safe computing in a way which allows new collaboration members to develop efficiently in such a framework. Develop a toolkit to logically express proposed new algorithms in terms of data sources, sinks, and consumers, which can automatically analyze an algorithm and establish how thread-safe its logic is. \vskip 0.5cm \noindent {\bf Long-term goals}: Introduce training on multi-threaded, and more generally parallel, algorithm development in physics syllabi, building on the experience gained and the toolkits developed earlier. \subsection{Roadmap area 3: Algorithms and data structures for non-x86 computing architectures (e.g., GPUs, FPGAs)} \noindent {\bf Motivation}: Computing architectures using technologies beyond CPUs offer an interesting alternative for increasing the throughput of the most time-consuming trigger or reconstruction algorithms. Such architectures (e.g., GPUs, FPGAs) could be easily integrated into dedicated trigger or specialized reconstruction processing facilities (e.g., online computing farms).
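A common pattern when targeting such mixed architectures is to keep a single algorithm interface with interchangeable backends, selected at run time according to the hardware actually present on a node. The sketch below is purely schematic: plain Python functions stand in for CPU and accelerator implementations, and all names are illustrative assumptions (a real offloaded kernel would also pay host-device transfer costs, which is why the benchmarking called for in this area must account for I/O overheads).

```python
def clusterize_cpu(energies, threshold):
    """Reference CPU implementation: sum cell energies above threshold."""
    return sum(e for e in energies if e > threshold)

def clusterize_gpu(energies, threshold):
    """Stand-in for an offloaded kernel; a real backend would launch this
    on a GPU and must produce the same physics result as the reference."""
    return sum(e for e in energies if e > threshold)

BACKENDS = {"cpu": clusterize_cpu, "gpu": clusterize_gpu}

def clusterize(energies, threshold, available=("cpu",)):
    """Dispatch to the preferred backend that is present on this node."""
    for name in ("gpu", "cpu"):  # preference order
        if name in available:
            return BACKENDS[name](energies, threshold)
    raise RuntimeError("no suitable backend available")

energies = [0.1, 2.5, 0.3, 4.0]
print(clusterize(energies, threshold=1.0))                            # CPU path
print(clusterize(energies, threshold=1.0, available=("cpu", "gpu")))  # GPU path
```

Keeping the backends behind one interface is what allows the physics validation to be shared: both paths can be regression-checked against the same reference outputs.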
\vskip 0.5cm \noindent {\bf Overall goal}: To demonstrate how the throughput of toolkits or algorithms can be improved through the use of new computing architectures in a production environment. \vskip 0.5cm \noindent {\bf Short-term goals}: Develop reliable and portable benchmarking for mixed architectures which properly accounts for the I/O overheads between them. Using this, identify, and develop prototypes for, event reconstruction and software trigger algorithms where specialized hardware is likely to bring significant improvement in metrics such as total event throughput per cost or throughput per watt. \vskip 0.5cm \noindent {\bf Medium-term goals}: Demonstrate programming models and software toolkits appropriate for a heterogeneous computing environment. Considerations include facilitating high-level code reuse between architectures, construction of appropriate data structures, and adoption of externally developed toolkits, such as mathematical libraries, that provide significant performance improvements on certain architectures. Deploy prototypes for limited-scale operational tests where resources are available in a controlled fashion (e.g., experiment trigger computing facilities). \vskip 0.5cm \noindent {\bf Long-term goals}: Demonstrate the ability of event reconstruction applications to reliably use and benefit from heterogeneous computing facilities on a distributed computing system. Define cost-benefit metrics to guide facility providers towards the largest event throughput in cases where HEP controls the mix of hardware to be purchased. \subsection{Roadmap area 4: Enhanced QA/QC for reconstruction techniques} \noindent {\bf Motivation}: HEP experiments have extensive continuous integration systems, including varying types of code regression checks.
These are typically maintained by individual experiments and have not yet reached the scale where statistical regression checks, as well as technical and physics performance evaluations, can be run for each proposed software change. \vskip 0.5cm \noindent {\bf Overall goal}: Enable the development, automation, and deployment of extended QA and QC tools and facilities for software trigger and event reconstruction algorithms. \vskip 0.5cm \noindent {\bf Short-term goals}: Discuss the integration and testing systems currently used by experiments to formulate requirements, in scope and scale, for a common integration system capable of providing developers with feedback on trigger and event reconstruction outputs according to physics and technical metrics. \vskip 0.5cm \noindent {\bf Medium-term goals}: Develop and demonstrate a scalable system for use by multiple experiments based on industry-standard continuous integration tools. Define requirements for regression and validation techniques (for Monte Carlo and data studies) given the evolution towards heterogeneous hardware and real-time calibrations. \vskip 0.5cm \noindent {\bf Long-term goals}: Develop and demonstrate the next-generation tools needed for regression testing, software integration and validation, and data-quality related tasks. \subsection{Roadmap area 5: Real-time analysis} \noindent {\bf Motivation}: Real-time analysis techniques are being adopted to enable a wider range of physics signals to be saved by the trigger for final analysis. As rates increase, these techniques will become more important and widespread: saving only reconstructed event information, or only the parts of an event associated with the signal candidates, reduces the required disk space substantially. \vskip 0.5cm \noindent {\bf Overall goal}: Evaluate and demonstrate the tools needed to facilitate real-time analysis techniques.
Research topics include compression and custom data formats; toolkits for real-time detector calibration and validation which will enable full offline analysis chains to be ported into real-time; and frameworks which will enable non-expert offline analysts to design and deploy real-time analyses without compromising data-taking quality. \vskip 0.5cm \noindent {\bf Short-term goals}: Discuss the real-time analysis frameworks being built by different HEP experiments and establish areas of commonality. Understand the extent to which cross-experiment toolkits can help with these areas. \vskip 0.5cm \noindent {\bf Medium-term goals}: Develop common toolkits for enabling real-time analysis across experiments, drawing on experience and collaboration with real-time applications in industry wherever possible. Develop a framework which enables non-experts to design and deploy real-time analyses without threatening the integrity of data taking. \vskip 0.5cm \noindent {\bf Long-term goals}: Begin to include real-time analysis requirements in the design of future experiments (in particular high-luminosity hadron colliders such as FCC). This means explicitly optimizing detector hardware not only for physics which can be done with events we can afford to store to disk, but also for physics which can be done in real time. \subsection{Roadmap area 6: Precision physics-object reconstruction, identification and measurement techniques} \noindent {\bf Motivation}: The central challenge for object reconstruction at the HL-LHC is to maintain excellent efficiency and resolution in the face of high pileup, especially at low object transverse momentum. Both trigger and reconstruction approaches need to exploit new techniques and higher-granularity detectors to maintain or even improve physics measurements in the future.
Reconstruction in very high pileup environments, such as the HL-LHC or FCC-hh, may also greatly benefit from adding timing information to our detectors, in order to exploit the finite beam-crossing time during which interactions are produced. \vskip 0.5cm \noindent {\bf Overall goal}: Develop and demonstrate the tools and techniques needed for efficient physics-object reconstruction and identification in complex environments. \vskip 0.5cm \noindent {\bf Short-term goals}: Identify areas where new toolkits, based on either novel techniques or new detector designs, are likely to achieve significant physics quality improvements, especially in very dense (e.g., high pileup) environments at facilities including the HL-LHC and FCC-hh. Known candidates are charged-particle tracking techniques including precision timing-detector information, jet imaging techniques, and particle-flow algorithms that exploit high-precision calorimetry. \vskip 0.5cm \noindent {\bf Medium-term goals}: Development and demonstration of algorithms and integrated software packages that are efficient and performant in complex environments on the planned detector configurations for the HL-LHC. Understand the interplay between new information and traditional observables to determine how physics measurements should best be derived. Development of algorithms to optimize the splitting of events containing different sets of objects, to obtain the best balance between data storage overhead and data processing overhead during physics analyses. \vskip 0.5cm \noindent {\bf Long-term goals}: Deploy algorithms in experimental software stacks as they mature. \subsection{Roadmap area 7: Fast software trigger and reconstruction algorithms for high-density environments} \noindent {\bf Motivation}: Future experimental facilities will bring a large increase in event complexity. The scaling of current-generation algorithms with this complexity must be improved to avoid a large increase in resource needs.
In addition, it may be desirable or indeed necessary to deploy new algorithms, including advanced machine learning techniques developed in other fields, in order to solve these problems. \vskip 0.5cm \noindent {\bf Overall goal}: Evolve or rewrite existing toolkits and algorithms focused on their physics and technical performance at high event complexity (e.g., high pileup at the HL-LHC). The most important targets are those which limit the expected throughput performance at future facilities (e.g., charged-particle tracking). A number of such efforts are already in progress across the community. \vskip 0.5cm \noindent {\bf Short-term goals}: Identify additional areas where substantial gains in event reconstruction and software trigger algorithms may be obtained either by a large-scale reimplementation of existing algorithms or by the use of a new algorithmic approach (including machine learning concepts). Possible areas of investigation include improved memory locality for algorithms and data structures. \vskip 0.5cm \noindent {\bf Medium-term goals}: Develop and demonstrate new toolkits. Evaluate their effectiveness against current approaches using both physics-driven metrics and event throughput per computing cost. \vskip 0.5cm \noindent {\bf Long-term goals}: Deploy algorithms in experimental software stacks as they mature. It is particularly important to test new approaches using data-driven studies to demonstrate robustness against changing detector and accelerator operating conditions. \section{Conclusions} The next decade will see the volume and complexity of data being processed by HEP experiments increase by at least one order of magnitude. While much of this increase is driven by the planned upgrades to the four major LHC detectors, new experiments such as DUNE will also make significant demands on the HEP data processing infrastructure.
It is therefore essential that event reconstruction algorithms and software triggers continue to evolve so that they are able to efficiently exploit future computing architectures and deal with this increase in data rates without loss of physics capability. We have identified seven key areas where research and development is necessary to enable the community to exploit the full power of the enormous datasets which we will be collecting. Three of these areas concern the increasingly parallel and heterogeneous computing architectures for which we will have to write our code. In addition to a general effort to vectorize our codebases, we must understand which kinds of algorithms are best suited to which hardware architectures, develop benchmarks that allow us to compare the physics-per-dollar-per-watt performance of different algorithms across a range of potential architectures, and find ways to optimally utilize heterogeneous processing centres. The consequent increase in the complexity and diversity of our codebase will necessitate both a determined push to educate tomorrow's physicists in modern coding practices and the development of more sophisticated and automated quality assurance and control for our codebases. The increasing granularity of our detectors, and the addition of timing information to help cope with the extreme pileup conditions at the HL-LHC, will require us both to develop new kinds of reconstruction algorithms and to make them fast enough for use in real time. Finally, the increased signal rates will mandate a push towards real-time analysis in many areas of HEP, in particular those with low transverse momentum signatures.
The success of this research and development program will be intimately linked to challenges confronted in other areas of HEP computing, most notably: the development of software frameworks able to support heterogeneous parallel architectures, including the associated data structures and I/O; the development of lightweight detector models that maintain physics precision with minimal timing and memory costs for the reconstruction; enabling the use of offline analysis toolkits and methods within real-time analysis; and an awareness of advances in machine-learning reconstruction algorithms being developed outside HEP, together with the ability to apply them to our problems. For this reason, perhaps the most important task ahead of us is to maintain the community which has coalesced in this CWP process, so that the work done in these sometimes disparate areas of HEP fuses coherently into a solution to the problems facing us over the next decade. \section{Appendix} This section compiles short descriptions of some on-going projects within the community of relevance to the research and development roadmap identified by the working group. We provide short descriptions (typically taken from the project web page when available), and links to code and/or recent references. In some cases the code is part of a larger experiment framework, but we nevertheless felt it was important to identify these on-going projects. We do not attempt to cover two classes of software development projects. First, we do not include projects to develop new code, or to improve the performance of existing code, purely within a single experiment (or group of experiments) whose code is not easily shared with other parts of HEP, whether because it is not publicly available, because it is strictly tied to use in the processing framework of a specific experiment, or for other reasons.
Such projects are numerous and of critical importance to the success of future experiments, but typically these works cannot easily be made into a commonly available toolkit. Second are toolkits developed outside of the HEP community. Software trigger and event reconstruction algorithms leverage numerous toolkits developed outside of HEP. Such toolkits are frequently the basis of new research and development projects and are one mechanism to ensure both good community support and efficient code that is likely to evolve with computing technology. These include packages designed for linear algebra, machine learning and other mathematical libraries. It is important that the HEP community encourage the use of these toolkits for future development, but unfortunately the breadth of these toolkits makes them too numerous to include here. On-going community software projects identified by the working group: \vskip 0.5 cm \noindent {\bf ACTS (A Common Tracking Software)} This project provides an experiment-independent set of track reconstruction tools. The main philosophy is to provide high-level track reconstruction modules that can be used for any tracking detector. The description of the tracking detector's geometry is optimized for efficient navigation and quick extrapolation of tracks. \begin{itemize} \item Project homepage: \href{https://gitlab.cern.ch/acts/a-common-tracking-sw}{GitLab homepage} \item References: ACTS-CDOT-Status-2017-03-07.pdf \end{itemize} \vskip 0.5 cm \noindent {\bf AIDA Tracking Toolkit} A generic, mostly framework-independent, tracking toolkit. Development of this software package is in the process of being merged with the ACTS project. \begin{itemize} \item Project homepage: \href{https://github.com/AIDASoft/aidaTT}{GitHub/AIDASoft/aidaTT} \item References: F. Gaede, et al., “Software toolkit with tracking algorithms”, AIDA Delivery Report D2.8, (2015) (http://cds.cern.ch/record/1982416).
\end{itemize} \vskip 0.5 cm \noindent {\bf Arbor} ArborPFA is a C++ implementation of a Particle Flow Algorithm developed with the PandoraSDK framework. The idea underlying this clustering algorithm is that the topological development of hadronic showers in high granularity sampling calorimeters follows an oriented-tree structure. \begin{itemize} \item Project homepage: \href{http://arborpfa.github.io/ArborPFA/}{GitHub homepage} \item References: M. Ruan and H. Videau, “Arbor, a new approach of the Particle Flow Algorithm”, in Proceedings, International Conference on Calorimetry for the High Energy Frontier (CHEF2013), p. 316. (2013) (https://arxiv.org/abs/1403.4784). \end{itemize} \vskip 0.5 cm \noindent {\bf Cross architecture Kalman Filter} The aim of this project is to produce a fast and efficient Kalman Filter, while preserving correctness of results, across a variety of architectures. \begin{itemize} \item Project homepage: \href{https://gitlab.cern.ch/dcampora/cross_kalman}{GitLab/dcampora/cross\_kalman} \item References: D. H. C. Perez, Presentation at CHEP 2016: https://indico.cern.ch/event/505613/contributions/2227256/ \end{itemize} \vskip 0.5 cm \noindent {\bf FastJet} A software package for jet finding in pp and e+e− collisions. It includes fast native implementations of many sequential recombination clustering algorithms, plugins for access to a range of cone jet finders and tools for advanced jet manipulation. \begin{itemize} \item Project homepage: \href{http://fastjet.fr/}{Homepage} \item References: M. Cacciari, G.P. Salam and G. Soyez, Eur.Phys.J. C72 (2012) 1896 [arXiv:1111.6097]. \end{itemize} \vskip 0.5 cm \noindent {\bf HEP.TrkX} Project to evaluate and broaden the range of computational techniques and algorithms utilized in addressing HEP tracking challenges.
Specifically, the project will provide a framework to develop and evaluate new algorithms for track finding and classification, which will be demonstrated by applying advanced pattern recognition techniques to track candidate formation. On-going research includes deep neural networks applied to HL-LHC online and offline tracking. \begin{itemize} \item Project homepage: \href{https://heptrkx.github.io/}{GitHub} \item References: S. Farrell, et al. “The HEP.TrkX project”, Presentation at the Connecting the Dots workshop, Orsay 2017 (Farrell\_HEPTrkX\_CTD2017.pdf). \end{itemize} \vskip 0.5 cm \noindent {\bf Kalman-Filter tracking on parallel architectures} This project aims to develop tracking algorithms based on the Kalman Filter for use in a collider experiment that are fully vectorized and parallelized. These will be usable with parallel processor architectures such as Intel's Xeon Phi and GPUs, while maintaining and extending the physics performance required for the challenges of the High Luminosity LHC (HL-LHC) planned for the 2020s. \begin{itemize} \item Project homepage: \href{http://trackreco.github.io}{GitHub} \item References: Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs - Submitted to proceedings of Connecting The Dots / Intelligent Trackers 2017 (Orsay) arXiv:1705.02876. Kalman filter tracking on parallel architectures - Proceedings of the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016) (San Francisco) arXiv:1702.06359. \end{itemize} \vskip 0.5 cm \noindent {\bf PandoraPFA} Toolkit of particle flow algorithms and a framework for developing particle flow based reconstruction approaches. \begin{itemize} \item Project homepage: \href{https://github.com/PandoraPFA}{GitHub/PandoraPFA} \item References: M.A. Thomson, Particle Flow Calorimetry and the PandoraPFA Algorithm, Nucl. Instr. Meth. Phys. Res. A 611 (2009) 25; arXiv:0907.3577.
\end{itemize} \vskip 0.5 cm \noindent {\bf Pixel Tracking on GPUs} Fast and parallelizable algorithms for track seeding (in particular for the Cellular Automaton algorithm). \begin{itemize} \item Project homepage: Currently part of \href{https://github.com/cms-sw/cmssw}{GitHub/cms-sw/cmssw}. To be integrated into the ACTS project. \item References: \href{https://indico.cern.ch/event/567550/contributions/2627138/attachments/1512745/2359625/201708_Felice_ACAT17.pptx}{ACAT presentation} \end{itemize} \vskip 0.5 cm \noindent {\bf PODIO} C++ library to support the creation and handling of data models in particle physics. It is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep-object hierarchies and virtual inheritance. This is to both improve runtime performance and simplify the implementation of persistency services. \begin{itemize} \item Project homepage: \href{https://github.com/hegner/podio}{GitHub/hegner/podio} \item References: B. Hegner and F. Gaede, “PODIO: Design Document for the PODIO Event Data Model Toolkit”, AIDA-2020-NOTE-2016-004 (2016) (https://cds.cern.ch/record/2212785). \end{itemize}
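Several of the projects listed above (the cross-architecture Kalman Filter and the Kalman-Filter tracking effort for parallel architectures) revolve around the same predict/update mathematics. As a point of reference only, here is a minimal one-dimensional Kalman filter cycle in pure Python; it is an illustrative sketch of the underlying arithmetic, not code from any of the listed projects, and all names and parameter values are our own assumptions.

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle of a one-dimensional Kalman filter.

    x, P : prior state estimate and its variance
    z    : new measurement
    F, Q : state transition factor and process noise
    H, R : measurement model factor and measurement noise
    """
    # Predict: propagate the state and its uncertainty.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Feeding in noisy measurements of a constant signal near 1.0:
x, P = 0.0, 1.0
for z in [1.02, 0.98, 1.01, 0.99]:
    x, P = kalman_step(x, P, z)
# x converges towards 1.0 while the variance P shrinks.
```

In the production trackers referenced above the state is a multi-parameter track vector and each step involves small matrix algebra; it is precisely this per-candidate arithmetic that those projects vectorize and parallelize across many track candidates at once.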
\section{Introduction} The main idea of this paper is to present an $n$-ary ($n>2$) generalization of the results achieved by the first author on the decomposition of linear spaces induced by bilinear maps on a linear space \cite{Yo4}. In the mentioned paper, given a linear space $\mathbb V$ of arbitrary dimension and a bilinear map $f$ on $\mathbb V$, Calderón introduced the notions of $f$-orthogonal, $f$-invariant and strongly $f$-invariant subspaces, as well as the notion of $f$-simplicity, which are just the usual notions of orthogonality, invariance and simplicity, but now defined with respect to $f$. Then, for a fixed basis of $\mathbb V$, he developed connection techniques allowing him to obtain a first nontrivial decomposition of $\mathbb V$ as the direct sum of $f$-orthogonal vector subspaces. In order to improve the obtained decomposition, he introduced an adequate equivalence relation on the above family of linear subspaces, leading to the first main result: a nontrivial decomposition of $\mathbb V$ as an $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces, with respect to a fixed basis. After that, observing that different choices of the bases of $\mathbb V$ may lead to different decompositions, he studied sufficient conditions to assure induced isomorphic decompositions of $\mathbb V$ with respect to different bases of $\mathbb V$. Another important result gives necessary and sufficient conditions for the $f$-simplicity of the linear subspaces in the second decomposition of $\mathbb V$. The author ends the paper by providing an application of the previous results to the structure theory of arbitrary algebras. At this point, a remark is in order to underline the considerable number of recent works where the above mentioned and similar connection techniques are applied as a tool to obtain interesting results in the frameworks of several types of algebras.
Without being exhaustive, these techniques were used, for instance, along with the notions of multiplicative basis and quasi-multiplicative basis, not only in relation with algebras (see Calderón and Navarro, \cite{Yo,Yo2}), but also with some $n$-ary generalizations (see, {\it e.g.}, the works of Calder\'on, Barreiro, Kaygorodov and S\'anchez in \cite{bcks,kmod,Yo_n_algebras}). Further, connection techniques were also applied in the context of graded Lie algebras (see Calderón (2014) \cite{Yo3}) and to obtain structural results on graded Leibniz triple systems (see Cao and Chen (2016) \cite{Cao2}). The present work follows an approach that uses, as closely as possible, generalized $n$-ary versions of the techniques applied in \cite{Yo4}, obtaining generalized results which are similar to those of Calderón. The paper is organized as follows. In Section 2 we present the necessary basic notions related to $n$-linear maps and develop all the connection techniques needed to obtain the main results. As a consequence, we get that each choice of a basis ${\mathcal B}$ of $\mathbb V$ gives rise to a first nontrivial decomposition of $\mathbb V$, induced by $f$, as an $f$-orthogonal direct sum of linear subspaces with respect to ${\mathcal B}$. This decomposition is then enhanced by the introduction of an adequate equivalence relation on the above family of linear subspaces, leading to our first main result: $\mathbb V$ decomposes as a nontrivial $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces, with respect to a fixed basis. In Section 3 the relation among the previous decompositions of $\mathbb V$ given by different choices of its bases is discussed.
Concretely, after defining the notion of orbit associated to an $n$-linear map $f$, it is shown that if two bases ${\mathcal B}$ and ${\mathcal B}^{\prime}$ of $\mathbb V$ belong to the same orbit under an action of a certain subgroup of ${\rm GL}(\mathbb V)$ on the set of all bases of $\mathbb V$, then they induce isomorphic decompositions of $\mathbb V$. In Section 4 we generalize the concept of $i$-division basis to the case of $n$-ary algebras. After that, we obtain a characterization of the $f$-simplicity of the components of the main decomposition obtained in Section 2. That is, we prove that any of the linear subspaces in the decomposition of $\mathbb V$ into $f$-orthogonal, strongly $f$-invariant linear subspaces of $\mathbb V$ is $f$-simple if and only if its annihilator is zero and it admits an $i$-division basis. Finally, in Section 5 an application of the previous results to the structure theory of arbitrary $n$-ary algebras is included. \section{Development of the techniques. First decomposition theorem} We begin by noting that throughout the paper all of the linear spaces $\mathbb V$ considered are of arbitrary dimension and over an arbitrary base field ${\mathbb F}$. Hereinafter, $\mathbb V$ is a linear space and $f: \mathbb V\times \dots \times \mathbb V \to \mathbb V$ an $n$-linear map on $\mathbb V$, $n\geq 2$. We start by recalling some notions concerning $\mathbb V$ and $f$. \begin{definition}\rm Two linear subspaces $V_1$ and $V_2$ of $\mathbb V$ are called {\it $f$-orthogonal} if $$f(\mathbb V,\dots, V_1^{(i)},\dots,V_2^{(j)},\dots,\mathbb V)=0,$$ for all $i,j\in \left\{1,\dots,n \right\}$, $i \neq j$, where the notations $V_1^{(i)}$ and $V_2^{(j)}$ mean that $V_1$ and $V_2$ occupy the $i$-th and $j$-th entries of $f$, respectively.
It is also said that a decomposition of $\mathbb V$ as a direct sum of linear subspaces $$\mathbb V= \bigoplus_{j\in J} V_j$$ is {\it $f$-orthogonal} if $V_j$ and $V_k$ are $f$-orthogonal for any $j,k \in J$, with $j \neq k $. \end{definition} \begin{definition}\rm A linear subspace $ W$ of $\mathbb V$ is called {\it $f$-invariant } if $$f(W,\dots,W)\subset W.$$ The linear space $W$ is called {\it strongly $f$-invariant } if $$f(\mathbb V,\dots, W^{(i)},\dots,\mathbb V)\subset W,$$ for all $i \in \left\{1,\dots,n \right\}$. The linear space $\mathbb V$ will be called {\it $f$-simple} if $$f(\mathbb V,\dots,\mathbb V) \neq 0$$ and its only strongly $f$-invariant subspaces are $\{0\}$ and $\mathbb V$. \end{definition} \begin{definition} The {\it annihilator} of $f$ is defined as the set $${\rm Ann}(f)=\{v \in \mathbb V: f(\mathbb V,\dots, v^{(i)},\dots,\mathbb V)=0, \text{ for all } i \in \left\{1,\dots,n \right\}\}. $$ \end{definition} \medskip Let us fix a basis ${\mathcal B}=\{e_i\}_{ i \in I}$ of $\mathbb V$. For each $e_i \in {\mathcal B}$, we introduce a symbol $\overline{e}_i \notin {\mathcal B}$ and the following set $$\overline{{\mathcal B}} := \{\overline e_i : e_i \in {\mathcal B}\}.$$ We will also write $\overline{(\overline{e}_i)} := e_i \in {\mathcal B}$, $\mathbb V^{*}:=\mathbb V \setminus \{0\}$, and denote by $\mathcal{P}(\mathbb V^{*})$ the power set of $\mathbb V^{*}$. \medskip We define the mapping \begin{equation}\label{gene} F: \mathcal{P}(\mathbb V^{*})\times \left( ({\mathcal B}\dot\cup \overline{{\mathcal B}})\times \dots \times ({\mathcal B}\dot\cup \overline{{\mathcal B}} )\right) \to \mathcal{P}(\mathbb V^{*}) \end{equation} as \begin{itemize} \item[(i)] $ F(\emptyset , {\mathcal B}\dot\cup \overline{{\mathcal B}}, \dots, {\mathcal B}\dot\cup \overline{{\mathcal B}} )= \emptyset$.
\item[(ii)] For any $\emptyset \neq U \in {\mathcal P}( \mathbb V^{*})$ and $\xi_i \in {\mathcal B} $, $i=1,\dots,n-1,$ $$\hspace{-1cm} F(U, \xi_{1},\dots, \xi_{n-1})=\left( \bigcup_{ \begin{array}{c} k\in \{1,\dots,n\}\\ \sigma \in {\mathbb S}_{n-1} \end{array} } \{f(\xi_{\sigma (1)},\dots, u^{(k)}, \ldots, \xi_{\sigma (n-1)}):u \in U\} \right)\setminus \{0\} .$$ \item[(iii)] For any $\emptyset \neq U \in {\mathcal P}(\mathbb V^{*})$ and $\overline{\xi}_i \in \overline{{\mathcal B}}$, $i=1,\dots,n-1,$ $$\hspace{-1cm} F(U , \overline{\xi}_{1},\dots,\overline{\xi}_{n-1})=\left( \bigcup_{ \begin{array}{c} k\in \{1,\dots,n\}\\ \sigma \in {\mathbb S}_{n-1} \end{array} } \{u \in \mathbb V: f(\xi_{\sigma (1)},\dots, u^{(k)},\dots, \xi_{\sigma (n-1)})\in U\} \right)\setminus \{0\} .$$ \item[(iv)] $F(U , \xi_{1},\dots, \xi_{n-1})=\emptyset$, if there are $i,j\in \{1,\dots,n-1\},\ i\neq j$, such that $\xi_{i}\in {\mathcal B}$, $\xi_{j}\in \overline{{\mathcal B}}$. \end{itemize} \begin{remark} \label{Fsym} It is clear that $$F(U , \xi_{\sigma (1)},\dots, \xi_{\sigma (n-1)}) = F(U ,\xi_{1},\dots,\xi_{n-1}),$$ and $$F(U , \overline{\xi}_{\sigma (1)},\dots, \overline{\xi}_{\sigma (n-1)}) = F(U ,\overline{\xi}_{1},\dots,\overline{\xi}_{n-1}),$$ for all $\xi_{1},\dots,\xi_{n-1}\in {\mathcal B},\ \overline{\xi}_{1},\dots,\overline{\xi}_{n-1}\in \overline{{\mathcal B}},\ \sigma \in {\mathbb S}_{n-1}$. \end{remark} \begin{lemma}\label{lema1} Concerning the mapping $F$ previously defined, we have \begin{enumerate} \item[1.] For any $v \in \mathbb V^{*}$ and $\xi_i \in {\mathcal B}$, $i=1,\dots,n-1$,\\ $w \in F(\{v\} , \xi_{1},\dots,\xi_{n-1})$ if and only if $v \in F(\{w\},\overline{\xi}_{1},\dots,\overline{\xi}_{n-1})$. \item[2.] For any $U \in \mathcal{P}(\mathbb V^{*})$ and $\xi_{i}\in {\mathcal B}\dot\cup \overline{{\mathcal B}}, \ i=1,\dots,n-1$,\\ $v \in F(U ,\xi_{1},\dots,\xi_{n-1})$ if and only if $F(\{v\},\overline{\xi}_{1},\dots,\overline{\xi}_{n-1})\cap U \neq \emptyset $.
\end{enumerate} \end{lemma} \begin{proof} 1. Let us start by assuming that $w \in F(\{v\} , \xi_{1},\dots,\xi_{n-1})$, where $v \in \mathbb V^{*}$ and $\xi_i \in {\mathcal B},\ i=1,\dots,n-1$. This means that $$w=f(\xi_{\sigma (1)},\dots,v^{(k)}, \dots, \xi_{\sigma (n-1)}), $$ for some $k\in \{1, \dots ,n\}$ and $\sigma \in {\mathbb S}_{n-1}$, and thus $$v\in F(\{w\}, \overline{\xi}_{\sigma (1)},\dots, \overline{\xi}_{\sigma (n-1)}). $$ According to the previous remark, we have: $$v\in F(\{w\}, \overline{\xi}_{1},\dots, \overline{\xi}_{n-1}). $$ The converse implication can be proved analogously. 2. Suppose that $U\in \mathcal{P}(\mathbb V^{*})$ and $\xi_{i}\in {\mathcal B}\dot\cup \overline{{\mathcal B}}, \ i=1,\dots,n-1.$ Assume first that $v \in F(U ,\xi_{1},\dots,\xi_{n-1})$. Then $v \in F(\{w\},\xi_{1},\dots,\xi_{n-1})$ for some $w\in U$. By item 1., this is equivalent to $w\in F(\{v\}, \overline{\xi}_{1},\dots, \overline{\xi}_{n-1})$ and thus $$w\in F(\{v\},\overline{\xi}_{1},\dots, \overline{\xi}_{n-1})\cap U \neq \emptyset .$$ The converse assertion can be proved in a similar way. \end{proof} \begin{definition}\label{connection}\rm Let $e_i, e_j \in {\mathcal B}$. We say that $e_i $ is {\it connected} to $e_j$ if either \begin{itemize} \item[(i)] $e_i=e_j$ or \item[(ii)] there exists an ordered list $(X_1,X_2,\dots,X_m)$, where $X_{i}=\left(a_{i1},\dots,a_{in-1} \right)$ such that $a_{ik} \in {\mathcal B}\dot\cup \overline{{\mathcal B}} $, $i \in \{1,\dots,m\},\ k\in \{1,\dots,n-1\},$ satisfying: \begin{enumerate} \item [{\rm 1.}] $F(\{e_i\} , X_1)\neq\emptyset$,\\ $F(F(\{e_i\} , X_1),X_{2})\neq\emptyset,\\ \vdots\\ F (\dots (F(F(\{e_i\} , X_1),X_{2}),\dots,X_{m-1})\neq\emptyset$. \medskip \item [{\rm 2.}] $e_j\in F( F (\dots (F(F(\{e_i\} , X_1),X_{2}),\dots,X_{m-1}), X_{m}).$ \end{enumerate} \end{itemize} In this case we say that $(X_1,X_2,\dots,X_m)$ is a {\it connection} from $e_i$ to $e_j$.
\end{definition} \begin{lemma}\label{lema3} Let $(X_1, X_2, \dots, X_{m-1}, X_m)$ be any connection from $e_i$ to $e_j$, where $e_i$ and $e_j$ are arbitrary elements in ${\mathcal B}$, with $e_i \neq e_j$. Then the ordered list $(\overline{X}_m,\overline{X}_{m-1},\dots,\overline{X}_2,\overline{X}_1)$ is a connection from $e_j$ to $e_i$. \end{lemma} \begin{proof} The proof will be done by induction on $m$. In the case $m=1$ we have that $e_j\in F(\{e_i\} , X_1)=F(\{e_i\},a_{11},\dots,a_{1 n-1})$, implying that $$e_i\in F(\{e_j\} , \overline{a}_{11},\dots,\overline{a}_{1 n-1})=F(\{e_j\} , \overline{X}_1),$$ by 1. of Lemma \ref{lema1}. Thus $(\overline{X}_1)$ is a connection from $e_j$ to $e_i$. \medskip Assume now that the assertion holds for any connection with $m\geq 1$ elements, and let us show that it also holds for any connection $$(X_1,X_2,\dots,X_m,X_{m+1})$$ with $m+1$ elements (each an $(n-1)$-tuple). So, consider a connection \newline $(X_1,X_2,\dots,X_m,X_{m+1})$ from $e_i$ to $e_j$. Let us begin by setting $$U:=F( F (\dots (F(F(\{e_i\} , X_1),X_{2}),\dots,X_{m-1}), X_{m}).$$ Applying 2. of Definition \ref{connection} we have that $e_j \in F(U , X_{m+1}).$ Then, by 2. of Lemma \ref{lema1}, $F(\{e_j\} , \overline{X}_{m+1})\cap U\neq\emptyset$. Take $x$ such that \begin{equation}\label{eqq1} x\in F(\{e_j\} , \overline{X}_{m+1})\cap U. \end{equation} Since $x\in U$ we have that $(X_1, X_2, \dots, X_{m-1}, X_m)$ is a connection from $e_i$ to $x$ with $m$ elements. Hence $(\overline{X}_m,\overline{X}_{m-1},\dots,\overline{X}_2,\overline{X}_1)$ connects $x$ to $e_i$. From here, and by equation (\ref{eqq1}), we obtain $$e_i \in F(F(\dots (F(F(\{e_j\} , \overline{X}_{m+1}), \overline{X}_m),\dots , \overline{X}_{2}),\overline{X}_{1}),$$ which means that $$(\overline{X}_{m+1},\overline{X}_{m},\dots,\overline{X}_2,\overline{X}_1)$$ connects $e_j$ to $e_i$.
\end{proof} \begin{proposition}\label{equi} The relation $\sim$ in ${\mathcal B}$, defined by $e_i\sim e_j$ if and only if $e_i$ is connected to $e_j$, is an equivalence relation. \end{proposition} \begin{proof} The relation $\sim$ is clearly reflexive (see (i) of Definition \ref{connection}) and symmetric (see Lemma \ref{lema3}). Hence, let us verify its transitivity. Assume that $e_i, e_j, e_k \in {\mathcal B}$ are pairwise distinct and such that $e_i \sim e_j$ and $e_j \sim e_k$ (the cases in which two among those elements are equal are trivial). Then there are connections $(X_1,\dots,X_m)$ and $(Y_1,\dots,Y_p)$ from $e_i$ to $e_j$ and from $e_j$ to $e_k$, respectively. Therefore, $(X_1,\dots,X_m,Y_1,\dots,Y_p)$ is a connection from $e_i$ to $e_k$, showing the transitivity of $\sim$, and the result is proved. \end{proof} \smallskip By means of the above equivalence relation, we can introduce the quotient set $${\mathcal B}/ \sim := \{[e_i] : e_i \in {\mathcal B}\},$$ where $[e_i]$ stands for the set of elements in ${\mathcal B}$ which are connected to $e_i$. \medskip For each $[e_i] \in {\mathcal B}/ \sim$ we may introduce the linear subspace $$V_{[e_i]}:= \bigoplus_{e_j \in [e_i] } {\mathbb F} e_j,$$ allowing us to write \begin{equation}\label{elp3} \mathbb V = \bigoplus\limits_{[e_i]\in {\mathcal B}/ \sim} V_{[e_i]}. \end{equation} Next we show that this is a decomposition of $\mathbb V$ into pairwise $f$-orthogonal subspaces. \begin{lemma}\label{elp1} For any $[e_i],[e_j] \in {\mathcal B}/ \sim$ with $[e_i] \neq [e_j]$, we have that \begin{equation} f(\mathbb V,\dots,V_{[e_i]}^{\left(k_{1}\right)},\dots,V_{[e_j] }^{\left(k_{2}\right)},\dots,\mathbb V)=0, \label{f-orth_dec} \end{equation} for all $k_{1},k_{2}\in \{1,\dots,n\},\ k_{1} \neq k_{2}$.
\end{lemma} \begin{proof} In order to prove (\ref{f-orth_dec}), it is sufficient to show that \begin{equation*} f(\xi_{\sigma (1)},\dots,V_{[e_i]}^{\left(k_{1}\right)},\dots,V_{[e_j] }^{\left(k_{2}\right)},\dots,\xi_{\sigma (n-2)})=0, \end{equation*} for any permutation $\sigma \in {\mathbb S}_{n-2}$, $ \xi_{1},\dots,\xi_{n-2} \in {\mathcal B}$. Suppose the contrary. Then there are $e_k \in [e_i]$, $e_p \in [e_j] $ and $v\in \mathbb V^{*}$ such that \begin{equation} v=f(\xi_{\sigma (1)},\dots,e_{k}^{\left(k_{1}\right)},\dots,e_{p }^{\left(k_{2}\right)},\dots,\xi_{\sigma (n-2)}), \label{aux} \end{equation} for some $\sigma \in {\mathbb S}_{n-2}$. By definition of $F$, from (\ref{aux}) we may deduce two facts: $$\text{ (i) } v\in F(\{e_{k}\},e_{p},\xi_{1},\dots,\xi_{n-2}),$$ $$\text{ (ii) } v\in F(\{e_{p}\},e_{k},\xi_{1},\dots,\xi_{n-2}).$$ From (ii) and 1. of Lemma \ref{lema1}, we have $$\text{ (iii) } e_{p}\in F(\{v\},\overline{e}_{k},\overline{\xi}_{1},\dots,\overline{\xi}_{n-2}).$$ From (i) and (iii), we observe that $\left(X_{1},X_{2}\right)$, where $$ X_{1}=\left(e_{p},\xi_{1},\dots,\xi_{n-2}\right) \text{ and }X_{2}=\left(\overline{e}_{k},\overline{\xi}_{1},\dots,\overline{\xi}_{n-2}\right),$$ is a connection from $e_{k}$ to $e_{p}$. Thus, $[e_i]=[e_k]=[e_p]=[e_j]$, causing a contradiction. \end{proof} As a consequence of Lemma \ref{elp1} and equation (\ref{elp3}), we have the following. \begin{proposition}\label{meta} Given $\mathbb V$ and $f$ as initially defined, $\mathbb V$ decomposes as the $f$-orthogonal direct sum of linear subspaces $$\mathbb V = \bigoplus\limits_{[e_i]\in {\mathcal B}/ \sim}V_{[e_i]}.$$ \end{proposition} \smallskip The family of linear subspaces of $\mathbb V$ formed by all of the $V_{[e_i]}$, $[e_i]\in {\mathcal B}/ \sim$, which gives rise to the decomposition in Proposition \ref{meta}, is not good enough for our purposes. So we need to introduce a new equivalence relation on this family, as follows.
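For finite-dimensional examples, the first decomposition can be explored computationally. The following Python sketch is our own illustration (the structure-constant encoding and all names are assumptions, not from the text): it merges, via union-find, the basis indices that appear together in some nonzero product of $f$. This is a heuristic approximation of the connection relation, merging inputs and outputs of each product symmetrically, rather than a full implementation of the machinery with barred elements.

```python
def decomposition_classes(dim, n, structure):
    """Heuristic sketch of the equivalence classes of basis indices.

    `structure[(i1, ..., in)] = {j: c, ...}` encodes
    f(e_{i1}, ..., e_{in}) = sum_j c * e_j (absent products are zero).
    Indices appearing together in a nonzero product are merged.
    """
    for inputs in structure:
        assert len(inputs) == n, "each product must have n inputs"

    parent = list(range(dim))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for inputs, outputs in structure.items():
        touched = list(inputs) + [j for j, c in outputs.items() if c != 0]
        for a, b in zip(touched, touched[1:]):
            union(a, b)

    classes = {}
    for i in range(dim):
        classes.setdefault(find(i), set()).add(i)
    return sorted(classes.values(), key=min)

# A trilinear (n = 3) map on F^4 with f(e0,e0,e0) = e1 and f(e1,e1,e1) = e0:
print(decomposition_classes(4, 3, {(0, 0, 0): {1: 1}, (1, 1, 1): {0: 1}}))
# [{0, 1}, {2}, {3}]
```

Each class of merged indices spans one of the pairwise orthogonal summands in simple cases like this one; for maps where the inverse (barred) steps create additional links, the true classes may be coarser than what this sketch reports.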
\medskip We begin by observing that the above-mentioned decomposition of $\mathbb V$ allows us to consider, for each $V_{[e_i]}$, the projection map $$\Pi_{V_{[e_i]}}: \mathbb V \to V_{[e_i]}.$$ Also, let us consider the following family of nonzero linear subspaces of $\mathbb V$, \begin{equation*} \hbox{${\mathcal{F}}:=\{V_{[e_i]}:[e_i]\in {\mathcal B}/ \sim \}.$} \end{equation*}% \begin{definition}\label{ane3}\rm We will say that $V_{[e_i]} \approx V_{[e_j]}$ if and only if either $V_{[e_i]} = V_{[e_j]}$ or there exists a subset \begin{equation*} \{[\xi_1],[\xi_2], \ldots,[\xi_m]\} \subset {\mathcal B}/ \sim, \end{equation*} such that \begin{itemize} \item[(i)] $[\xi_1]=[e_i]$ and $[\xi_m]=[e_j].$ \item[(ii)] {\scriptsize $$\hspace{-0.75cm} \sum\limits_{1 \leq k_1<k_2\leq n} \left[\Pi_{V_{[\xi_1]}}(f(\mathbb V, \ldots, V_{[\xi_2]}^{(k_1)}, \ldots, V_{[\xi_2]}^{(k_2)}, \ldots, \mathbb V)) + \Pi_{V_{[\xi_2]}}(f(\mathbb V, \ldots, V_{[\xi_1]}^{(k_1)}, \ldots, V_{[\xi_1]}^{(k_2)}, \ldots, \mathbb V)) \right]\neq 0.$$ $$\hspace{-0.75cm} \sum\limits_{1 \leq k_1<k_2\leq n} \left[\Pi_{V_{[\xi_2]}}(f(\mathbb V, \ldots, V_{[\xi_3]}^{(k_1)}, \ldots, V_{[\xi_3]}^{(k_2)}, \ldots, \mathbb V)) + \Pi_{V_{[\xi_3]}}(f(\mathbb V, \ldots, V_{[\xi_2]}^{(k_1)}, \ldots, V_{[\xi_2]}^{(k_2)}, \ldots, \mathbb V)) \right]\neq 0.$$ $\vdots$\\ $$\hspace{-0.75cm} \sum\limits_{1 \leq k_1<k_2\leq n} \left[\Pi_{V_{[\xi_{m-1}]}}(f(\mathbb V, \ldots, V_{[\xi_m]}^{(k_1)}, \ldots, V_{[\xi_m]}^{(k_2)}, \ldots, \mathbb V)) + \Pi_{V_{[\xi_m]}}(f(\mathbb V, \ldots, V_{[\xi_{m-1}]}^{(k_1)}, \ldots, V_{[\xi_{m-1}]}^{(k_2)}, \ldots, \mathbb V)) \right]\neq 0.$$} \end{itemize} \end{definition} \medskip Clearly $\approx$ is an equivalence relation on ${\mathcal{F}}$ and so we can introduce the quotient set \begin{equation*} {\mathcal{F}} / \approx :=\{ [V_{[e_i]}]: V_{[e_i]} \in {\mathcal{F}}\}.
\end{equation*} For each $[V_{[e_i]}] \in {\mathcal{F}} / \approx$, we denote by $% \overbrace{V_{[e_i]}}$ the linear subspace of $\mathbb V$ \begin{equation*} \overbrace{V_{[e_i]}}:= \bigoplus\limits_{V_{[e_j]} \in [V_{[e_i]}]} V_{[e_j]}. \end{equation*} By equation (\ref{elp3}) and the definition of $\approx$, we clearly have \begin{equation}\label{hel1} \mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}}. \end{equation} Also, we can assert by Lemma \ref{elp1} that $$f(\mathbb V, \ldots, {{\overbrace{V_{[e_i]}}}}^{(k_1)}, \ldots, {{\overbrace{V_{[e_j]}}}}^{(k_2)}, \ldots, \mathbb V)=0$$ when $[V_{[e_i]}] \neq [V_{[e_j ]}]$ in ${\mathcal{F}} / \approx$, for all $k_1,k_2\in \left\{1,\dots,n \right\}$, $k_1 \neq k_2$. \begin{proposition}\label{lema_submodulo} For any $[V_{[e_i]}] \in {\mathcal{F}} / \approx$, $\overbrace{V_{[e_i]}}$ is a strongly $f$-invariant linear subspace of $\mathbb V$. \end{proposition} \begin{proof} We begin by proving that \begin{equation}\label{panzer5} f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(k_1)}, \ldots, {\overbrace{V_{[e_i]}}}^{(k_2)}, \ldots, \mathbb V) \subset \overbrace{V_{[e_i]}}. \end{equation} Indeed, if some $0 \neq w \in f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(k_1)}, \ldots, {\overbrace{V_{[e_i]}}}^{(k_2)}, \ldots, \mathbb V)$, then decomposition (\ref{hel1}) allows us to write $$w=w_1+w_2+ \dots +w_m$$ with $0\neq w_j \in \overbrace{V_{[\xi_j]}}$, $j=1, \ldots ,m$, and $\xi_j \in \mathcal B$. Observe now that Lemma \ref{elp1} gives us that there exist nonzero $x,y \in V_{[e_k]}$ with $V_{[e_k]} \subset \overbrace{V_{[e_i]}}$ and $z_1, \ldots, z_{n-2} \in \mathbb V$ such that \begin{equation}\label{boli} 0\neq w=f(z_1, \ldots, x^{(k_1)},\ldots, y^{(k_2)}, \ldots, z_{n-2}). \end{equation} Consider $0 \neq w_1 \in \overbrace{V_{[\xi_1]}}$, so that $w_1 \in V_{[e_r]}$ for some $V_{[e_r]} \subset \overbrace{V_{[\xi_1]}}$.
By equation (\ref{boli}) we have $$\Pi_{V_{[e_r]}}(f(z_1, \ldots, x^{(k_1)},\ldots, y^{(k_2)}, \ldots, z_{n-2}))=w_1 \neq 0.$$ That is, $$\Pi_{V_{[e_r]}}(f(\mathbb V, \ldots, V_{[e_k]}^{(k_1)},\ldots, V_{[e_k]}^{(k_2)}, \ldots, \mathbb V )) \neq 0$$ and we get that the set $\{[e_k],[e_r]\}$ gives us $V_{[e_k]} \approx V_{[e_r] }$. Hence $$V_{[e_i]} \approx V_{[e_k]} \approx V_{[e_r] } \approx V_{[\xi_1] }$$ and we conclude $V_{[\xi_1] } \subset \overbrace{V_{[e_i]}}$. From here $w_1 \in \overbrace{V_{[e_i]}}$. In a similar way we get that $w_j \in \overbrace{V_{[e_i]}}$ for $j=2,\dots,m$, and so $w \in \overbrace{V_{[e_i]}}$. Consequently, the inclusion (\ref{panzer5}) holds, as desired. \medskip Finally, by decomposition (\ref{hel1}), Lemma \ref{elp1} and equation (\ref{panzer5}), we have the following inclusion $$\sum\limits_{j=1}^{n}f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(j)}, \ldots, \mathbb V) \subset \overbrace{V_{[e_i]}},$$ and thus $f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(j)}, \ldots, \mathbb V) \subset \overbrace{V_{[e_i]}}$ for all $j\in\{1,\dots,n\}.$ \end{proof} \begin{theorem}\label{theo1} Let $\mathbb V$ be a linear space equipped with an $n$-linear map \newline $f: \mathbb V \times \ldots \times \mathbb V \to \mathbb V$. For any basis ${\mathcal B} =\{e_i: i \in I\}$ of $\mathbb V$ we have that $\mathbb V$ decomposes as the $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces $$\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}}.$$ \end{theorem} \begin{proof} Consider the decomposition as a direct sum of linear subspaces $$\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}},$$ given by equation (\ref{hel1}). Now Lemma \ref{elp1} shows that this decomposition is $f$-orthogonal and Proposition \ref{lema_submodulo} shows that all of the linear subspaces $\overbrace{V_{[e_i]}}$ are strongly $f$-invariant.
\end{proof} \section{On the relation among the decompositions given by different choices of bases} Observe that the decomposition of $\mathbb V$ as an $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces given by Theorem \ref{theo1} depends on the initial choice of the basis. Indeed, as was exemplified in \cite{Yo4}, for $n=2$, two different bases of $\mathbb V$ may lead to two different such decompositions of $\mathbb V$. The same happens in the $n$-ary case, with $n>2$, as shown in the following example. Let $\mathbb V$ be the ${\mathbb R}$-linear space $\mathbb V:={\mathbb R}^4$ equipped with the $n$-linear map $f:{\mathbb R}^4 \times \dots \times {\mathbb R}^4 \to {\mathbb R}^4$ defined as $$f(\overline{x}_1,\dots,\overline{x}_n)=(x_{11}x_{21},x_{11}x_{21}, 0,0),$$ where $$\overline{x}_i=(x_{i1},\dots,x_{i4})$$ for each $i\in \{1,\dots,n\}.$ Let us consider the following two bases of ${\mathbb R}^4$: $$ {\mathcal B}:=\{e_1,\dots,e_4\}, $$ that is, the canonical basis, and $${\mathcal B}^{\prime}:=\{(1,0,1,0), (1,0,-1,0),e_2, e_4\}.$$ Then it is possible to observe that the decomposition of $\mathbb V={\mathbb R}^4$ given in Theorem \ref{theo1} with respect to the basis ${\mathcal B}$ is $${\mathbb R}^4=({\mathbb R}e_1 \oplus {\mathbb R}e_2) \bigoplus ({\mathbb R}e_3) \bigoplus ({\mathbb R}e_4).$$ However, the same kind of decomposition with respect to ${\mathcal B}^{\prime}$ is $${\mathbb R}^4=({\mathbb R}(1,0,1,0) \oplus {\mathbb R}(1,0,-1,0) \oplus {\mathbb R} e_2)\bigoplus ( {\mathbb R}e_4).$$ Thus, it will be an interesting task to find a sufficient condition for two different decompositions of a linear space $\mathbb V$, induced by an $n$-linear map $f$ and with respect to two different bases of $\mathbb V$, to be isomorphic. The following notion will help us with this purpose.
\begin{definition}\rm Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and consider $$ \Gamma:=\ \mathbb V=\bigoplus\limits_{i \in I} V_i \text{ and } \Gamma^{\prime}:=\ \mathbb V=\bigoplus\limits_{j \in J} W_j$$ two decompositions of $\mathbb V$ as an $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces. It is said that $\Gamma$ and $\Gamma^{\prime}$ are {\it isomorphic} if there exists a linear isomorphism $g:\mathbb V \to \mathbb V$ satisfying $$f(g(v_1),\dots, g(v_n))=g(f(v_1,\dots , v_n))$$ for any $v_1,\dots , v_n \in \mathbb V$, and a bijection $\sigma: I \to J$ such that $$g(V_i)=W_{\sigma(i)}$$ for any $i \in I$. \end{definition} \begin{lemma}\label{genesis} Let $\mathbb V$ be a linear space equipped with an $n$-linear map \newline $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and consider ${\mathcal B}=\{e_i: i\in I\}$ a fixed basis of $\mathbb V$. Let also $g:\mathbb V \to \mathbb V$ be a linear isomorphism satisfying $$f\left(g\left(\xi_1\right),\dots,g\left(\xi_{n}\right)\right)=g\left(f\left(\xi_1,\dots,\xi_{n}\right)\right)$$ for any $\xi_i \in \mathcal B$. Then for any $U \in {\mathcal P}( \mathbb V^{*})$ and $\xi_1,\dots,\xi_{n-1} \in \mathcal B$, the following assertions hold: \begin{itemize} \item[(i)] $g\left(F\left(U,\xi_1,\dots, \xi_{n-1}\right)\right) =F\left(g(U),g\left( \xi_1 \right),\dots,g\left( \xi_{n-1}\right)\right)$, \item[(ii)] $g\left(F\left(U,\overline{\xi}_1,\dots,\overline{\xi}_{n-1}\right)\right) =F\left(g(U),\overline{g\left({\xi}_1\right)},\dots,\overline{g\left({\xi}_{n-1}\right)}\right)$, \end{itemize} where $F$ is the mapping defined by equation (\ref{gene}).
\end{lemma} \begin{proof} (i) We have $$\hspace{-1cm} g\left(F\left(U, \xi_1,\dots,\xi_{n-1}\right)\right) =\left( \bigcup_{ \begin{array}{c} k\in \{1,\dots,n\}\\ \sigma \in {\mathbb S}_{n-1} \end{array} } \left\{g\left(f(\xi_{\sigma (1)},\dots, u^{(k)}, \ldots, \xi_{\sigma (n-1)})\right):u \in U\right\} \right)\setminus \{0\} $$ $$\hspace{-1cm} =\left( \bigcup_{ \begin{array}{c} k\in \{1,\dots,n\}\\ \sigma \in {\mathbb S}_{n-1} \end{array} } \left\{f \left(g\left(\xi_{\sigma (1)}\right),\dots, g(u)^{(k)}, \ldots, g\left(\xi_{\sigma (n-1)}\right)\right):u \in U\right\} \right)\setminus \{0\} $$ $$=F\left(g(U),g\left( \xi_1\right),\dots,g\left( \xi_{n-1}\right)\right).$$ (ii) In this case we have $$\hspace{-1cm} g\left(F\left(U,\overline{\xi}_1,\dots,\overline{\xi}_{n-1}\right)\right)$$ $$\hspace{-1cm}=\left( \bigcup_{ \begin{array}{c} k\in \{1,\dots,n\}\\ \sigma \in {\mathbb S}_{n-1} \end{array} } \left\{u\in \mathbb V : f\left(\xi_{\sigma (1)},\dots, (g^{-1}(u))^{(k)}, \ldots, \xi_{\sigma (n-1)}\right) \in U\right\} \right)\setminus \{0\} $$ $$ \hspace{-1cm} =\left( \bigcup_{ \begin{array}{c} k\in \{1,\dots,n\}\\ \sigma \in {\mathbb S}_{n-1} \end{array} } \left\{u\in \mathbb V : f \left(g\left( \xi_{\sigma (1)}\right),\dots, u^{(k)}, \ldots, g\left(\xi_{\sigma (n-1)}\right)\right) \in g(U)\right\} \right)\setminus \{0\} $$ $$=F\left(g(U),\overline{g\left({\xi}_1\right)},\dots,\overline{g\left({\xi}_{n-1}\right)}\right).$$ Observe that in both cases we took into account Remark \ref{Fsym}. \end{proof} \begin{proposition}\label{188} Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and consider ${\mathcal B}=\{e_i: i\in I\}$ a fixed basis of $\mathbb V$. Further, assume that $g:\mathbb V \to \mathbb V$ is a linear isomorphism satisfying $$f\left(g\left(\xi_1\right),\dots,g\left(\xi_{n}\right)\right)=g\left(f\left(\xi_1,\dots,\xi_{n}\right)\right)$$ for any $\xi_i \in \mathcal B$.
Then the decompositions $$\Gamma:=\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}} \hbox{ and } \Gamma^{\prime}:=\mathbb V=\bigoplus\limits_{[V_{[g(e_i)]}] \in {\mathcal{F}^{\prime}} / \approx} \overbrace{V_{[g(e_i)]}},$$ corresponding to the choices of ${\mathcal B}$ and ${\mathcal B}^{\prime}:=\{g(e_i): i \in I\}$ respectively in Theorem \ref{theo1}, are isomorphic. \end{proposition} \begin{proof} Firstly, let us observe that, according to the previous result, if $e_i$ is connected to some $e_j$, with $i,j \in I$, $e_i,e_j \in \mathcal B$, through a connection $(X_1,X_2,\dots,X_m)$, where $X_{i}=\left(a_{i1},\dots,a_{in-1} \right)$ with $a_{ik} \in {\mathcal B}\dot\cup \overline{{\mathcal B}} $, $i \in \{1,\dots,m\},\ k\in \{1,\dots,n-1\}$, then $g(e_i)$ is connected to $g(e_j)$ through the connection $(g(X_1),g(X_2),\dots,g(X_m))$, where $g(X_{i}):=\left(g(a_{i1}),\dots,g(a_{in-1}) \right)$ and $g(a_{ik}) \in {\mathcal B^{\prime}} \cup \overline{{\mathcal B}^{\prime}}$ (where $g(\overline{e}_k):=\overline{g(e_k)}$). \smallskip Thus, it is possible to conclude that $$g(V_{[e_i]})=V_{[g(e_i)]}$$ for any $[e_i] \in {\mathcal B} / \sim$. Further, it is also clear that the mapping $\mu$ such that $$\mu(V_{[e_i]})=V_{[g(e_i)]}$$ defines a bijection between the families ${\mathcal{F}}:=\{V_{[e_i]}:[e_i]\in {\mathcal B}/ \sim \}$ and ${\mathcal{F}}^{\prime}:=\{V_{[g(e_i)]}:[g(e_i)]\in {\mathcal B}^{\prime}/ \sim \}$. \smallskip Now, from Lemma \ref{genesis} we have $$\hspace{-1cm} g\left(\Pi_{V_{[e_i]}}\left(f(\mathbb V, \ldots, V_{[e_j]}^{(k_1)}, \ldots, V_{[e_j]}^{(k_2)}, \ldots, \mathbb V)\right)\right) =\Pi_{V_{[g(e_i)]}} \left( f(\mathbb V, \ldots, V_{[g(e_j)]}^{(k_1)}, \ldots, V_{[g(e_j)]}^{(k_2)}, \ldots, \mathbb V) \right)$$ for $i,j \in I$ and $k_1,k_2 \in \{1,\dots , n\}$, with $k_1<k_2$.
This allows us to deduce that \begin{equation}\label{gene1} g(\overbrace{V_{[e_i]}})=\overbrace{V_{[g(e_i)]}} \end{equation} for any $[V_{[e_i]}] \in {\mathcal{F}} / \approx$, which induces a second bijection, $\sigma$, now between the families $ {\mathcal{F}} / \approx$ and ${\mathcal{F}^{\prime}} / \approx $, given by \begin{equation}\label{gene2} \sigma([V_{[e_i]}])=[V_{[g(e_i)]}]. \end{equation} From equations (\ref{gene1}) and (\ref{gene2}) we conclude that the decompositions $\Gamma$ and $\Gamma^{\prime}$ are isomorphic. \end{proof} Given an $n$-linear map $f$ on $\mathbb V$, the set $${\rm O}_{f}(\mathbb V)=\{g \in {\rm GL}(\mathbb V): f(g(v_1),\dots,g(v_n))=g(f(v_1,\dots,v_n)) \hbox{ for any } v_1,\dots,v_n \in \mathbb V\},$$ (where $\rm{GL}(\mathbb V)$ denotes the group of all linear isomorphisms of $\mathbb V$), is known as the {\it orbit} of $\mathbb V$ (associated to $f$). Observe that ${\rm O}_{f}(\mathbb V)$ is a subgroup of $\rm{GL}(\mathbb V)$: indeed, for $g,h \in {\rm O}_{f}(\mathbb V)$ we have $f((g\circ h)(v_1),\dots,(g\circ h)(v_n))=g(f(h(v_1),\dots,h(v_n)))=(g\circ h)(f(v_1,\dots,v_n))$, and applying $g^{-1}$ to the identity satisfied by $g$ shows that $g^{-1} \in {\rm O}_{f}(\mathbb V)$. If we also denote by ${\mathfrak B}$ the set of all bases of $\mathbb V$ we get the action \begin{equation}\label{act} {\rm O}_{f}(\mathbb V) \times {\mathfrak B} \to {\mathfrak B} \end{equation} given by $(g, \{e_i\}_{i \in I})\mapsto \{g(e_i)\}_{i \in I}$. The previous result states that if two bases ${\mathcal B} $ and ${\mathcal B}^{\prime} $ of $\mathbb V$ belong to the same orbit under the action given by equation (\ref{act}), then they induce two isomorphic decompositions of $\mathbb V$. Finally, this can be stated as follows. \begin{corollary} Let $\mathbb V$ be a linear space equipped with an $n$-linear map \newline $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and fix two bases ${\mathcal B} =\{e_i: i \in I\}$ and ${\mathcal B}^{\prime} =\{u_i: i \in I\}$ of $\mathbb V$.
Suppose there exists a bijection $\mu:I \to I$ such that the linear isomorphism $g:\mathbb V \to \mathbb V$ determined by $g(e_i):=u_{\mu(i)}$ for any $i\in I$ satisfies $$f\left(g(v_1),\dots,u_{\mu(i)}^{(k_1)},\dots, u_{\mu(j)}^{(k_2)},\dots,g(v_{n-2})\right)=g(f(v_1,\dots,e_i^{(k_1)},\dots, e_j^{(k_2)},\dots,v_{n-2}))$$ for any $v_1,\dots,v_{n-2}\in \mathbb V$, any $i,j \in I$ and any $k_1, k_2 \in \{1,\dots,n\}$ with $k_1<k_2$. Then the decompositions $$\Gamma:=\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}} \hbox{ and } \Gamma^{\prime}:=\mathbb V=\bigoplus\limits_{[V_{[u_i]}] \in {\mathcal{F}^{\prime}} / \approx} \overbrace{V_{[u_i]}},$$ corresponding to the choices of ${\mathcal B}$ and ${\mathcal B}^{\prime}$, respectively, in Theorem \ref{theo1}, are isomorphic. \end{corollary} \section{A characterization of the $f$-simplicity of the components} Our aim in this section is to establish a characterization theorem on the $f$-simplicity of the linear subspaces $\overbrace{V_{[e_i]}}$ which appear in the decomposition of $\mathbb V$ given in Theorem \ref{theo1}. \medskip Let us begin by recalling several concepts from the theory of algebras. \medskip Let $\mathbb{A}$ be an algebra equipped with an $n$-ary multiplication $[ .,\dots ,.]\ :\mathbb{A}\times \dots \times \mathbb{A} \to \mathbb{A}$ and ${\mathcal{B}}$ a basis of $\mathbb{A}$. The basis ${\mathcal{B}}$ is said to be an \textit{$i$-division basis} if for any $e_i \in {\mathcal{B}}$ and $b_1,\dots,b_{n-1} \in \mathbb{A}$ such that $$[ b_1,\dots ,e_i^{(k)},\dots,b_{n-1} ] =w\neq 0$$ for some $k\in \{1,\dots,n\}$ we have that $e_i,b_1,\dots,b_{n-1} \in {\mathcal{I}}(w)$, where ${\mathcal{I}}(w)$ denotes the {\it ideal of $\mathbb A$ generated by $w$}. \medskip The above notion can be generalized to the case of a linear space $\mathbb V$ equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$.
We refer to the minimal strongly $f$-invariant subspace of $\mathbb V$ containing $v$ as the {\it strongly $f$-invariant subspace of $\mathbb V$ generated by $v$}, and denote it by ${\mathcal{I}}(v)$. Observe that the sum of two strongly $f$-invariant subspaces of $\mathbb V$ is also a strongly $f$-invariant subspace, and that the whole $\mathbb V$ is a trivial strongly $f$-invariant subspace. \begin{definition}\rm Let $\mathbb V$ be a linear space, ${\mathcal B}=\{e_i:i \in I\}$ a fixed basis of $\mathbb V$ and $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ an $n$-linear map. It is said that ${\mathcal B}$ is an {\it $i$-division basis} of $\mathbb V$ with respect to $f$ if for any $e_i \in {\mathcal{B}}$ and $b_1,\dots,b_{n-1} \in \mathbb{V}$ such that $$f\left( b_1,\dots ,e_i^{(k)},\dots,b_{n-1} \right) =w\neq 0$$ for some $k\in \{1,\dots,n\}$ we have that $e_i,b_1,\dots,b_{n-1} \in {\mathcal{I}}(w)$, where ${\mathcal{I}}(w)$ denotes the strongly $f$-invariant subspace of $\mathbb V$ generated by $w$. \end{definition} Let us return to the decomposition of the linear space $\mathbb V$, given an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and a fixed basis ${\mathcal B}$, $$\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}}$$ as given by Theorem \ref{theo1}. For any $\overbrace{V_{[e_i]}}$ we can restrict $f$ to the $n$-linear map $$f^{\prime}: \overbrace{V_{[e_i]}} \times \dots \times \overbrace{V_{[e_i]}} \to \overbrace{V_{[e_i]}}$$ and consider on $\overbrace{V_{[e_i]}}$ the basis ${\mathcal B}^{\prime}:= {\mathcal B} \cap \overbrace{V_{[e_i]}}$. Then we can assert: \begin{theorem}\label{ane100} The linear space $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple if and only if ${\rm Ann}(f^{\prime})=0$ and ${\mathcal B}^{\prime}$ is an $i$-division basis of $\overbrace{V_{[e_i]}}$ with respect to $f^{\prime}$.
\end{theorem} \begin{proof} Suppose that $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple. Observe firstly that ${\rm Ann}(f^{\prime})$ is a strongly $f^{\prime}$-invariant subspace of $\overbrace{V_{[e_i]}}$, and thus ${\rm Ann}(f^{\prime})=0.$ Additionally, if we consider some $e_j \in {\mathcal{B}}^{\prime}$ and $b_1,\dots,b_{n-1} \in \overbrace{V_{[e_i]}}$ such that $$f^{\prime}\left( b_1,\dots ,e_j^{(k)},\dots,b_{n-1} \right) =w\neq 0$$ for some $k\in \{1,\dots,n\}$, then, since $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple, we have $${\mathcal{I}}(w)=\overbrace{V_{[e_i]}}$$ and so $e_j,b_1,\dots,b_{n-1} \in {\mathcal{I}}(w)$. Thus, the basis ${\mathcal B}^{\prime}$ is an $i$-division basis of $\overbrace{V_{[e_i]}}$ with respect to $f^{\prime}$. \medskip Conversely, let us suppose that ${\rm Ann}(f^{\prime})=0$ and that the set ${\mathcal B}^{\prime}$ is an $i$-division basis of $\overbrace{V_{[e_i]}}$ with respect to $f^{\prime}$. Consider any nonzero strongly $f^{\prime}$-invariant linear subspace $W$ of $\overbrace{V_{[e_i]}}$ and take some nonzero $ w \in W$. Since ${\rm Ann}(f^{\prime})=0$, there are nonzero elements $$ \xi_1,\dots,\xi_{n-1} \in {\mathcal B}^{\prime}$$ such that $$0 \neq f\left( \xi_1,\dots ,w^{(j)},\dots,\xi_{n-1} \right) \in W$$ for some $j\in \{1,\dots,n\}.$ Since ${\mathcal B}^{\prime}$ is an $i$-division basis, we get \begin{equation}\label{ane1} \xi_k \in W, \end{equation} for all $k\in \{1,\dots,n-1\}$. Let us now prove that $ V_{[\xi_k]} \subset W$ for each $k\in \{1,\dots,n-1\}$. To do so, it suffices to show that any $\nu_j \in [\xi_k]$ with $\nu_j \neq \xi_k$ belongs to $W$. It is clear that $\xi_k$ is connected to any $\nu_j\in [\xi_k]$, and thus there is a connection $ (X_1,X_2,\dots,X_m)$, where $X_{i}=\left(a_{i1},\dots,a_{in-1} \right)$ with $a_{il} \in {\mathcal B}\dot\cup \overline{{\mathcal B}} $, $i \in \{1,\dots,m\},\ l\in \{1,\dots,n-1\}$, from $\xi_k$ to $\nu_j$.
Recall that we are dealing with an $f$-orthogonal and strongly $f$-invariant decomposition of $\mathbb{V}$ (by Theorem \ref{theo1}). Thus, we may claim that the elements $a_{il}$ satisfy \begin{equation}\label{ane2} a_{il} \in {\mathcal B}^{\prime} \cup \overline{ {\mathcal B}^{\prime} }, \end{equation} and that the whole connection process from $\xi_k$ to $\nu_j$ can be carried out within $\overbrace{V_{[e_i]}}$. \medskip We have that $$F(\{\xi_k\}, X_1)=F(\{\xi_k\}, a_{11},\dots,a_{1 n-1}) \neq \emptyset.$$ There are two cases to discuss. \\ First case: $a_{1l} \in {\mathcal B}^{\prime}$, $l=1,\dots,n-1$, and so there exists $$0\neq x=f\left( a_{11},\dots ,\xi_k^{(r)},\dots,a_{1 n-1} \right),$$ for some $r\in \{1,\dots,n\}$. \\ Second case: $a_{1l} \in \overline{{\mathcal B}^{\prime}}$, $l=1,\dots,n-1$, and so there exists $0\neq x \in \overbrace{V_{[e_i]}}$ such that $$f\left( \overline{a}_{11},\dots ,x^{(r)},\dots,\overline{a}_{1 n-1} \right)= \xi_k,$$ for some $r\in \{1,\dots,n\}$. Consider the first case. As a consequence of the inclusion (\ref{ane1}) and of the strong $f$-invariance of $W$, we obtain $ x \in W.$ Consider now the second case. By the $i$-division property of the basis ${\mathcal B}^{\prime}$ and due to inclusion (\ref{ane1}) we conclude that $x \in {\mathcal I}(\xi_k) \subset W$. So, in both cases we have shown that \begin{equation}\label{fel1} F(\{\xi_k\}, X_1) \subset W. \end{equation} \medskip By the connection definition, we have $$F(F(\{\xi_k\}, X_1), X_2)\neq \emptyset,$$ where $F(\{\xi_k\}, X_1) \subset W$ as seen in (\ref{fel1}). Given an arbitrary $t \in F(F(\{\xi_k\}, X_1), X_2)$, as before, we have two cases to distinguish.
In the first one $a_{2l} \in {\mathcal B}^{\prime}$, $l=1,\dots,n-1$, and so there exists $z \in F(\{\xi_k\}, X_1)$ such that $$0\neq t=f\left( a_{21},\dots ,z^{(r^{\prime})},\dots,a_{2 n-1} \right),$$ for some $r^{\prime}\in \{1,\dots,n\}$.\\ In the second one $a_{2l} \in \overline{{\mathcal B}^{\prime}}$, and then there exists $z \in F(\{\xi_k\}, X_1)$ such that $0\neq f(\overline{a}_{21},\dots ,t^{(r^{\prime})},\dots,\overline{a}_{2 n-1}) =z$. \medskip In the first case the inclusion (\ref{fel1}) shows that $t \in W.$ In the second case the $i$-division property of ${\mathcal B}^{\prime}$ gives us that $t \in {\mathcal I}(z) \subset W$. In both cases, we have $$F(F(\{\xi_k\}, X_1), X_2)\subset W.$$ Iterating this argument along the connection $(X_1,\dots,X_m)$, we obtain that $$\nu_j \in F( F (\dots (F(F(\{\xi_k\},X_1),X_{2}),\dots,X_{m-1}), X_{m})) \subset W$$ and so we can assert that \begin{equation}\label{ane4} V_{[\xi_k]} \subset W. \end{equation} \medskip To finish the proof, we must show that every $V_{[\nu_j]}$ such that $V_{[\nu_j]}\approx V_{[\xi_k]}$ satisfies $V_{[\nu_j]} \subset W.$ Under the above assumption, there exists a subset \begin{equation}\label{metalli} \{[\xi_k],[\nu_2],...,[\nu_j]\} \subset {\mathcal B}/ \sim \end{equation} satisfying the conditions in Definition \ref{ane3}.
From here, $$\sum\limits_{1 \leq i<i^{\prime} \leq n} \left[\Pi_{V_{[\xi_k]}}(f(\mathbb V, \ldots, V_{[\nu_2]}^{(i)}, \ldots, V_{[\nu_2]}^{(i^{\prime})}, \ldots, \mathbb V)) + \Pi_{V_{[\nu_2]}}(f(\mathbb V, \ldots, V_{[\xi_k]}^{(i)}, \ldots, V_{[\xi_k]}^{(i^{\prime})}, \ldots, \mathbb V)) \right]\neq 0.$$ Therefore, there are $i,i^{\prime}\in \{1,\dots,n\}$ with $i<i^{\prime}$, such that $$\Pi_{V_{[\nu_2]}}(f(\mathbb V, \ldots, V_{[\xi_k]}^{(i)}, \ldots, V_{[\xi_k]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0$$ or $$\Pi_{V_{[\xi_k]}}(f(\mathbb V, \ldots, V_{[\nu_2]}^{(i)}, \ldots, V_{[\nu_2]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0.$$ \medskip Consider the first case, in which $$\Pi_{V_{[\nu_2]}}(f(\mathbb V, \ldots, V_{[\xi_k]}^{(i)}, \ldots, V_{[\xi_k]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0.$$ Then there exist $e_k^{\prime}, e_k^{\prime \prime } \in [\xi_k]$ and $b_1,\dots,b_{n-2}\in \mathbb V$ such that $$0 \neq f(b_1,\dots,{e_k^{\prime}} ^{(i)}, \dots, {e_k^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})=x_2+c$$ where $0 \neq x_2 \in V_{[\nu_2]}$ and $c \in \bigoplus\limits_{[\nu_j] \neq [\nu_2]} V_{[\nu_j]}$. Since ${\rm Ann}(f^{\prime})=0$, and taking into account Lemma \ref{elp1}, there exist $e_{21}^{\prime},\dots,e_{2 n-1}^{\prime} \in [\nu_2]$ such that $$0\neq f(e_{21}^{\prime},\dots,{x_2} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =q$$ for some $r\in \{1,\dots,n\}$.
By Lemma \ref{elp1} and (\ref{ane4}) we have that $$ 0\neq f(e_{21}^{\prime},\dots,{f(b_1,\dots,{e_k^{\prime}} ^{(i)}, \dots, {e_k^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =$$ $$ f(e_{21}^{\prime},\dots,{(x_2+c)} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =f(e_{21}^{\prime},\dots,{x_2} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =q\in W.$$ From here, by the $i$-division property of ${\mathcal B}^{\prime}$ we conclude that $$e_{21}^{\prime},\dots,e_{2 n-1}^{\prime} \in {\mathcal I}(q) \subset W.$$ Concerning the second case, recall that we have $$\Pi_{V_{[\xi_k]}}(f(\mathbb V, \ldots, V_{[\nu_2]}^{(i)}, \ldots, V_{[\nu_2]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0.$$ Similarly to the first case, there exist $e_2^{\prime}, e_2^{\prime \prime } \in [\nu_2]$ and $b_1,\dots,b_{n-2}\in \mathbb V$ such that $$0 \neq f(b_1,\dots,{e_2^{\prime}} ^{(i)}, \dots, {e_2^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})=x_k+d$$ where $0 \neq x_k \in V_{[\xi_k]}$ and $d \in \bigoplus\limits_{[\nu_j] \neq [\xi_k]} V_{[\nu_j]}$. Again, since ${\rm Ann}(f^{\prime})=0$, there exist $e_{k1}^{\prime},\dots,e_{k n-1}^{\prime} \in [\xi_k]$ such that $$0\neq f(e_{k1}^{\prime},\dots,{x_k} ^{(r)}, \dots, e_{k n-1}^{\prime}) =s$$ for some $r\in \{1,\dots,n\}$. 
By Lemma \ref{elp1} and inclusion (\ref{ane4}) we have that $$ 0\neq f(e_{k1}^{\prime},\dots,{f(b_1,\dots,{e_2^{\prime}} ^{(i)}, \dots, {e_2^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})} ^{(r)}, \dots, e_{k n-1}^{\prime}) =$$ $$ f(e_{k1}^{\prime},\dots,{(x_k+d)} ^{(r)}, \dots, e_{k n-1}^{\prime}) =f(e_{k1}^{\prime},\dots,{x_k} ^{(r)}, \dots, e_{k n-1}^{\prime}) =s\in W.$$ From here, applying the $i$-division property of ${\mathcal B}^{\prime}$ we deduce that $$f(b_1,\dots,{e_2^{\prime}} ^{(i)}, \dots, {e_2^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})\in {\mathcal I}(s) \subset W.$$ A second application of the $i$-division property of ${\mathcal B}^{\prime}$ allows us to write $e_2^{\prime} \in W$. \medskip At this point, we have shown in both cases that there are elements of $[\nu_2]$ belonging to $W$. Hence, by using the same argument as for $\xi_k$ (see inclusions (\ref{ane1}) and (\ref{ane4})), we get that $$V_{[\nu_2]} \subset W.$$ \medskip It is clear that this reasoning can be repeated for all the other elements of the set (\ref{metalli}). Hence $$V_{[\nu_j]} \subset W$$ and consequently, since $$\overbrace{V_{[e_i]}}=\overbrace{V_{[\xi_k]}}:= \bigoplus\limits_{V_{[e_j]} \in [V_{[\xi_k]}]} V_{[e_j]}$$ we have proved that $$\overbrace{V_{[e_i]}}=W,$$ that is, $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple. \end{proof} \begin{remark}\rm The above result can be restated as follows.
{\it The linear space $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple if and only if ${\rm Ann}(f^{\prime})=0$ and every non-zero element in $\overbrace{V_{[e_i]}}$ is an $i$-division element with respect to $f^{\prime}$.} \end{remark} \section{Application to the structure theory of arbitrary $n$-ary algebras} In this section we apply the results obtained in the previous sections to the structure theory of arbitrary $n$-ary algebras. \medskip We will denote by ${\mathfrak A}$ an arbitrary $n$-ary algebra, in the sense that there are no restrictions either on the dimension of the algebra or on the base field ${\mathbb F}$, and that no specific identity on the product ($n$-Lie (Filippov) \cite{Fil}, $n$-ary Jordan \cite{kps}, $n$-ary Malcev \cite{Pojidaev}, etc.) is assumed. That is, ${\mathfrak A}$ is just a linear space over ${\mathbb F}$ endowed with an $n$-linear map $$[ \cdot, \ldots, \cdot ] :{\mathfrak A} \times \ldots \times {\mathfrak A} \to {\mathfrak A}$$ $$\hspace{1cm}(x_1, \ldots, x_n) \mapsto [x_1, \ldots, x_n]$$ called {\it the product} of ${\mathfrak A}$. \medskip We recall that given an $n$-ary algebra $({\mathfrak A}, [\cdot, \ldots, \cdot ])$, a {\it subalgebra} of ${\mathfrak A}$ is a linear subspace ${\mathfrak B}$ closed under the product, that is, such that $[{\mathfrak B}, \ldots, {\mathfrak B}] \subset {\mathfrak B}$. A linear subspace ${\mathfrak I}$ of ${\mathfrak A}$ is called an {\it ideal} of ${\mathfrak A}$ if $[{\mathfrak A}, \ldots, {\mathfrak I}^{(r)}, \ldots, {\mathfrak A}] \subset {\mathfrak I},$ for all $ r\in\{1,\dots,n\}$. An $n$-ary algebra ${\mathfrak A}$ is said to be {\it simple} if its product is nonzero and its only ideals are $\{0\}$ and ${\mathfrak A}$.
We finally recall that the {\it annihilator} of the algebra $({\mathfrak A}, [.,\dots,.])$ is defined as the linear subspace $${\rm Ann}({\mathfrak A})=\{x \in {\mathfrak A}: [{\mathfrak A}, \ldots, x^{(k)}, \ldots, {\mathfrak A}] =0, \mbox{ for all $k\in\{1,\dots,n\}$ }\}.$$ \medskip If we fix any basis ${\mathcal B}=\{e_i\}_{i \in I}$ of ${\mathfrak A}$ and denote the product $[.,\dots,.]$ of ${\mathfrak A}$ by $f$, Theorem \ref{theo1} applies and yields that ${\mathfrak A}$ decomposes as the $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces $${\mathfrak A}=\bigoplus\limits_{[{\mathfrak A}_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{{\mathfrak A}_{[e_i]}}.$$ \medskip Now observe that the $f$-orthogonality of the linear subspaces means that, when $[{\mathfrak A}_{[e_i]}] \neq [{\mathfrak A}_{[e_j]}]$, we have $$[ \mathfrak A, \ldots, \overbrace{{\mathfrak A}_{[e_i]}}^{(k_1)}, \ldots, \overbrace{{\mathfrak A}_{[e_j]}}^{(k_2)}, \ldots, \mathfrak A]=0,$$ for all $k_1,k_2\in \left\{1,\dots,n \right\}$, $k_1 \neq k_2$, and that the strong $f$-invariance of a linear subspace $\overbrace{{\mathfrak A}_{[e_i]}}$ means that $\overbrace{{\mathfrak A}_{[e_i]}}$ is actually an ideal of ${\mathfrak A}$. From here, we can state: \begin{theorem} Let $({\mathfrak A}, [\cdot, \ldots, \cdot ])$ be an arbitrary $n$-ary algebra. Then for any basis ${\mathcal B} =\{e_i: i \in I\}$ of ${\mathfrak A}$ one has the decomposition $${\mathfrak A}=\bigoplus\limits_{[{\mathfrak A}_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{{\mathfrak A}_{[e_i]}},$$ where each $\overbrace{{\mathfrak A}_{[e_i]}}$ is an ideal of $ {\mathfrak A}$. Furthermore, any pair of different ideals in this decomposition is $f$-orthogonal. \end{theorem} \smallskip In the same context, if we restrict the product $[\cdot, \ldots, \cdot ]$ of ${\mathfrak A}$ to any ideal $\overbrace{{\mathfrak A}_{[e_i]}}$, we get the algebra $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$.
Now, by observing that the $f'$-simplicity of $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$ is equivalent to the simplicity of $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$ as an algebra, and that ${\rm Ann}(f^{\prime})={\rm Ann}(\overbrace{{\mathfrak A}_{[e_i]}})$, Theorem \ref{ane100} allows us to assert the following. \begin{theorem} The ideal $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$ is simple if and only if ${\rm Ann}(\overbrace{{\mathfrak A}_{[e_i]}})=0$ and ${\mathcal B}^{\prime}:={\mathcal B} \cap \overbrace{{\mathfrak A}_{[e_i]}}$ is an $i$-division basis of $\overbrace{{\mathfrak A}_{[e_i]}}$. \end{theorem} \bigskip
\section{Introduction and results} In the study of partial differential equations, mathematical models describing natural phenomena, e.g., the heat equation, the Fisher--KPP equation and so on, form one of the important topics in mathematical analysis and have been studied by many mathematicians. Recently, many variations of systems of partial differential equations describing complicated phenomena have appeared. One of the systems describing important biological phenomena related to the life of animals is the Keller--Segel system \begin{align*} n_t&=\Delta n^m -\chi\nabla\cdot(n^{q-1} \nabla c),\\ c_t&=\Delta c -c+n, \qquad x\in \Omega,\ t>0, \end{align*} where $\Omega \subset \mathbb{R}^N$ ($N\in \mathbb{N}$), $m>0$, $\chi\ge 0$, $q \ge 2$. This system describes migration of species by chemotaxis, i.e., the property that species move towards higher concentrations of a chemical substance. Here $\Delta n^m$ with $m=1$, that is, $\Delta n$, is called a {\it linear diffusion}, and $\Delta n^m$ with $m\neq 1$ is called a {\it nonlinear diffusion}. More precisely, the case that $m>1$ is referred to as a {\it degenerate diffusion} or a {\it porous medium diffusion}, and the case that $0 < m < 1$ as a {\it singular diffusion} or a {\it fast diffusion}. This system with $m=1$ and $q=2$ was first proposed by Keller--Segel \cite{K-S}, and the system with $m,q\in \mathbb{R}$ was then suggested by Hillen--Painter \cite{Hillen_Painter_2009}. In the case that $m=1$ and $q=2$ it is known that the size of the initial data determines the behaviour of solutions to the 2-dimensional Keller--Segel system.
More precisely, there exists some constant $C > 0$ such that if the initial data $n_0$ satisfies $\lp{1}{n_0} < C$, then global bounded classical solutions exist (\cite{Nagai-Senba-Yoshida}); moreover, for any $m > C$ there exist initial data such that $\lp{1}{n_0} = m$ and the corresponding solution blows up in finite time (\cite{Horstmann-Wang,Mizoguchi-Winkler}), where $C>0$ can be given as $C = \frac{8\pi}{\chi}$ in the radial setting and $C = \frac{4\pi}{\chi}$ in the nonradial setting. \medskip In the other dimensional cases there are many results on global/blow-up solutions. In the case that $m=1$, $q=2$ and $N=1$, Osaki--Yagi \cite{Osaki-Yagi} showed that global existence and boundedness hold for all smooth initial data; this means that there is no blow-up solution in the 1-dimensional setting. On the other hand, it is known that the 3-dimensional case admits many blow-up solutions: Winkler \cite{Winkler_2013_blowup} established that for all $m>0$ there are initial data $n_0$ such that $\lp{1}{n_0}=m$ and the corresponding solution blows up in finite time. To obtain global existence of classical solutions we need some additional conditions on the initial data: Winkler \cite{win_aggregationvs} and Cao \cite{Xinru_higher} proved existence of global bounded classical solutions under the condition that the initial data $(n_0,c_0)$ are small enough with respect to a suitable Lebesgue norm.
\medskip On the other hand, in the case that $m\ge 1$ and $q\ge 2$, it is known that the relation between $m$ and $q$ determines whether solutions of the Keller--Segel system exist globally or not: in the case that $m > q - \frac{2}{N}$, Ishida--Seki--Yokota \cite{Ishida-Seki-Yokota} obtained global existence of solutions; on the other hand, in the case that $m < q- \frac 2N$, Ishida--Yokota \cite{Ishida-Yokota_2013} proved that there exist initial data such that the corresponding solution blows up in finite or infinite time, and recently Hashira--Ishida--Yokota \cite{Hashira-Ishida-Yokota} found initial data such that the solution blows up in finite time. \medskip As a generalized problem, the nonlinear Keller--Segel system with logistic source, in which the first equation in the above system is replaced with \begin{align*} n_t=\Delta n^m -\chi\nabla\cdot(n^{q-1}\nabla c) +\kappa n -\mu n^2, \end{align*} where $m,\mu>0$, $\chi,\kappa \ge 0$, $q\ge 2$, was also studied, and it is known that the logistic source $\kappa n -\mu n^2$ suppresses the blow-up phenomenon. When $m=1$ and $q=2$, in the 2-dimensional setting Osaki et al.\ \cite{OTYM} obtained global existence and boundedness for all smooth initial data, and in the higher-dimensional setting Winkler \cite{Winkler_2010_logistic} proved existence of global classical solutions under largeness conditions for $\mu>0$; in the Keller--Segel system with logistic source, global existence of solutions holds even though the $L^1$-norm of the initial data is large. Moreover, Lankeit \cite{lankeit_evsmoothness} established global existence of weak solutions without any largeness condition for $\mu >0$. On the other hand, in the case that $q=2$, a recent result established by Zheng--Wang \cite{Zheng-Wang_2016} asserts that the condition $m > 1+ \frac{(N-2)_+}{N+2}$ yields global existence and boundedness.
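The suppressing effect of the logistic source can be indicated by a standard formal computation (a sketch only, assuming sufficiently smooth solutions and no-flux boundary conditions, so that all boundary terms vanish upon integration over $\Omega$):

```latex
\frac{d}{dt}\int_\Omega n
  = \underbrace{\int_\Omega \Delta n^m}_{=\,0}
    - \underbrace{\chi\int_\Omega \nabla\cdot\big(n^{q-1}\nabla c\big)}_{=\,0}
    + \kappa\int_\Omega n - \mu\int_\Omega n^2
  \le \kappa\int_\Omega n - \frac{\mu}{|\Omega|}\Big(\int_\Omega n\Big)^{2},
```

where the last step uses the Cauchy--Schwarz inequality $\big(\int_\Omega n\big)^2 \le |\Omega|\int_\Omega n^2$; a comparison with the logistic ODE $y^{\prime}=\kappa y-\frac{\mu}{|\Omega|}y^2$ then gives the uniform mass bound $\int_\Omega n(\cdot,t)\le \max\big\{\int_\Omega n_0,\ \frac{\kappa|\Omega|}{\mu}\big\}$, regardless of the size of $n_0$.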
\medskip As we have seen, in the Keller--Segel system the relation between $m$ and $q$ strongly affects the behaviour of solutions, and the logistic source often relaxes the conditions for global existence. Thus the question ``what is the condition on $m$ deriving existence of global/blow-up solutions?'' is one of the main topics. This is also one of the important topics in the study of the chemotaxis-(Navier--)Stokes system with logistic source \begin{align*} n_t + u\cdot\nabla n & =\Delta n^m -\chi\nabla\cdot(n^{q-1}\nabla c) + \kappa n - \mu n^2, \\ c_t + u\cdot\nabla c & =\Delta c -nc, \\ u_t + \lambda(u\cdot\nabla)u & = \Delta u +\nabla P +n\nabla \Phi, \ \nabla \cdot u=0, \quad x\in \Omega,\ t>0, \end{align*} where $m,\mu >0$, $\chi ,\kappa \ge0$, $q\ge 2$, $\lambda =0$ (the chemotaxis-Stokes system) or $\lambda = 1$ (the chemotaxis-Navier--Stokes system). In the case that $m=1$, $q=2$ and moreover $\kappa=\mu=0$, Winkler established global existence of classical solutions in the 2-dimensional setting (\cite{W-2012}) and obtained existence of global weak solutions in the 3-dimensional setting (\cite{Winkler_2016}). In the case that $q=2$ and $\kappa=\mu=0$, Tao--Winkler \cite{Tao-Winkler_2012_KSF} and Chung--Kang \cite{Chung-Kang_2016} showed global existence under the condition that $m>1$. \medskip On the other hand, for the chemotaxis-(Navier--)Stokes system with logistic source, results similar to those for the Keller--Segel system with logistic source hold. In the case of the fluid-free system, Lankeit--Wang \cite{Lankeit-Wang_2017} showed global existence and boundedness in the system with $m=1$ under some largeness conditions for $\mu>0$, and Jin \cite{Jin-2017} proved global existence of weak solutions of the system with $m>1$.
In the system with fluid equation and $m=1$, $q=2$, global existence and boundedness of classical solutions hold in the 2-dimensional setting (cf.\ \cite{HKMY_1}), and existence of global bounded classical solutions to the chemotaxis-Stokes system holds under some largeness conditions for $\mu>0$ in the 3-dimensional setting (cf.\ \cite{CKM}); moreover, Lankeit \cite{Lankeit_2016} showed global existence of weak solutions in the system without largeness conditions for $\mu>0$. However, the chemotaxis-Navier--Stokes system with degenerate diffusion and logistic source has not been studied yet, which means that conditions on $m$ for global existence of weak solutions are still an open problem. The purpose of this paper is to derive a condition on $m$ for existence of global weak solutions in the degenerate chemotaxis-Navier--Stokes system with logistic source. Here we note that the methods in \cite{Jin-2017} cannot be applied to this problem because of the difficulty of the Navier--Stokes equation (explained later).
\medskip In order to attain this purpose we consider the following degenerate and singular chemotaxis-Navier--Stokes system with logistic term: \begin{equation}\label{P} \begin{cases} n_t + u\cdot\nabla n = \Delta n^m - \chi\nabla\cdot(n\nabla c) + \kappa n -\mu n^2, &x\in \Omega,\ t>0, \\[2mm] c_t + u\cdot\nabla c = \Delta c -nc, &x \in \Omega,\ t>0, \\[2mm] u_t + (u\cdot\nabla) u = \Delta u + \nabla P + n\nabla\Phi, \quad \nabla\cdot u = 0, &x \in \Omega,\ t>0, \\[2mm] \partial_\nu n^m = \partial_\nu c = 0, \quad u = 0, &x \in \partial\Omega,\ t>0, \\[2mm] n(x,0)=n_{0}(x),\ c(x,0)=c_0(x),\ u(x,0)=u_0(x), &x \in \Omega, \end{cases} \end{equation} \noindent where $\Omega$ is a bounded domain in $\mathbb{R}^3$ with smooth boundary $\partial\Omega$ and $\partial_\nu$ denotes differentiation with respect to the outward normal of $\partial\Omega$; $\chi, \kappa\ge 0$ and $\mu, m>0$ are constants; $n_0, c_0, u_0, \Phi$ are known functions satisfying \begin{align}\label{condi;ini1} &0 < n_0 \in C(\overline{\Omega}), \quad 0 < c_0 \in W^{1,q}(\Omega), \quad u_0 \in D(A^{\theta}), \\ \label{condi;ini2} % &\Phi \in C^{1+\beta}(\overline{\Omega}) \end{align} for some $q > 3$, $\theta \in \left(\frac{3}{4}, 1\right)$, $\beta > 0$ and $A$ denotes the realization of the Stokes operator under homogeneous Dirichlet boundary conditions in the solenoidal subspace $L_{\sigma}^2(\Omega)$ of $L^2(\Omega)$. Before stating the main theorem, we define weak solutions of \eqref{P}. 
\begin{df}\label{def;weaksol} A triplet $(n, c, u)$ is called a $($global$)$ {\it weak solution} of \eqref{P} if \begin{align*} n &\in L^2_{\rm loc}([0,\infty);L^2(\Omega)), \\ n^m & \in L^{\frac{4}{3}}_{\rm loc}([0,\infty);W^{1,\frac{4}{3}}(\Omega)), \\ c &\in L^2_{\rm loc}([0,\infty);W^{1,2}(\Omega)), \\ u &\in L^2_{\rm loc}([0,\infty);W^{1,2}_{0,\sigma}(\Omega)) \end{align*} and the identities \begin{align*} &-\int^\infty_0\!\!\!\!\int_\Omega n\varphi_t -\int_\Omega n_0\varphi(\cdot,0) -\int^\infty_0\!\!\!\!\int_\Omega n u\cdot\nabla\varphi \\ &\hspace{7.0mm}=-\int^\infty_0\!\!\!\!\int_\Omega\nabla n^{m} \cdot\nabla\varphi +\chi\int^\infty_0\!\!\!\!\int_\Omega n \nabla c\cdot\nabla\varphi +\int^\infty_0\!\!\!\!\int_\Omega (\kappa n - \mu n^2)\varphi, \\[1.5mm] &-\int^\infty_0\!\!\!\!\int_\Omega c\varphi_t -\int_\Omega c_0\varphi(\cdot,0) -\int^\infty_0\!\!\!\!\int_\Omega cu\cdot\nabla\varphi =-\int^\infty_0\!\!\!\!\int_\Omega\nabla c\cdot\nabla\varphi -\int^\infty_0\!\!\!\!\int_\Omega nc\varphi, \\[1.5mm] &-\int^\infty_0\!\!\!\!\int_\Omega u\cdot\psi_t -\int_\Omega u_0\cdot\psi(\cdot,0) -\int^\infty_0\!\!\!\!\int_\Omega u\otimes u\cdot\nabla\psi \\ &\hspace{7.0mm}=-\int^\infty_0\!\!\!\!\int_\Omega\nabla u\cdot\nabla\psi +\int^\infty_0\!\!\!\!\int_\Omega n\nabla\psi\cdot\nabla\Phi \end{align*} hold for all $\varphi\in C^{\infty}_0(\overline{\Omega}\times [0,\infty))$ and all $\psi\in C^{\infty}_{0,\sigma}(\Omega \times [0,\infty))$, respectively. \end{df} The following theorem, which is the main result of this paper, gives existence of global weak solutions to \eqref{P}. \begin{thm}\label{mainthm1} Let $\Omega\subset\mathbb{R}^3$ be a bounded smooth domain and let $\chi, \kappa \geq 0$ and $\mu, m>0$. Assume that $n_0, c_0, u_0$ and $\Phi$ satisfy \eqref{condi;ini1}{\rm --}\eqref{condi;ini2} with some $q>3$, $\theta \in(\frac{3}{4},1)$ and $\beta \in (0,1)$. 
Then there exists a weak solution of \eqref{P}, which can be approximated by a sequence of solutions $(n_{\varepsilon} ,c_{\varepsilon}, u_{\varepsilon})$ of an approximate problem $($see Section \ref{Sec2}\/$)$ in a pointwise manner. \end{thm} \begin{remark} This result gives existence of weak solutions to \eqref{P} for {\it all} $m>0$; that is, weak solutions of \eqref{P} can be constructed not only in the case $m>1$ (porous medium diffusion) but also in the case $0 < m <1$ (fast diffusion). \end{remark} The proof of Theorem \ref{mainthm1} can also be applied to the nondegenerate chemotaxis-Navier--Stokes system, namely the system in which $\Delta n^m$ is replaced with $\Delta (n+1)^m$, and yields the following result. \begin{corollary}\label{cor} Let $\Omega\subset\mathbb{R}^3$ be a bounded smooth domain and let $\chi, \kappa \geq 0$ and $ \mu, m>0$. Assume that $n_0, c_0, u_0$ and $\Phi$ satisfy \eqref{condi;ini1}{\rm --}\eqref{condi;ini2} with some $q>3$, $\theta \in(\frac{3}{4},1)$ and $\beta \in (0,1)$. Then there exists a weak solution of the nondegenerate chemotaxis-Navier--Stokes system. \end{corollary} The strategy for the proof of Theorem \ref{mainthm1} is described as follows. We start with the construction of local approximate solutions of \eqref{P}. We next derive estimates for the approximate solutions. Thanks to these estimates, we extend the local approximate solutions globally in time. Finally, passing to the limit along the global approximate solutions, we obtain the desired global weak solution. %
In the previous work by Jin \cite{Jin-2017}, which deals with the fluid-free case, convergence of approximate solutions is obtained with the aid of uniform $L^\infty$-estimates. In the present setting, because of the difficulty caused by the Navier--Stokes equation, we cannot expect $L^\infty$-boundedness of solutions; thus we need different methods for this problem. More precisely, we rely on the Lions--Aubin lemma. 
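For the reader's convenience, we briefly recall the form of the Lions--Aubin lemma underlying this compactness argument; the concrete spaces indicated below are those suggested by the estimates established in Section \ref{Sec3}. If $X \hookrightarrow\hookrightarrow B \hookrightarrow Y$ are Banach spaces such that the first embedding is compact and the second one is continuous, and if a family $(v_{\varepsilon})_{\varepsilon\in(0,1)}$ satisfies that \begin{align*} (v_{\varepsilon})_{\varepsilon\in(0,1)}\ \mbox{is bounded in}\ L^p(0,T;X) \quad \mbox{and}\quad (\partial_t v_{\varepsilon})_{\varepsilon\in(0,1)}\ \mbox{is bounded in}\ L^1(0,T;Y) \end{align*} with some $p\in[1,\infty)$, then $(v_{\varepsilon})_{\varepsilon\in(0,1)}$ is relatively compact in $L^p(0,T;B)$. In our setting this can be applied, for instance, to $v_{\varepsilon}=(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}$ with $X=W^{1,2}(\Omega)$, $B=L^2(\Omega)$ and $Y=(W^{2,4}_{0}(\Omega))^{*}$. 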
In this strategy the key is to establish estimates for $\nabla n_{\varepsilon}$, where $n_{\varepsilon}$ is the solution of the first equation in the approximate problem and $\varepsilon$ is a positive parameter. More precisely, generalizing calculations in \cite{Lankeit_2016}, we first obtain \begin{equation}\label{test} \int_{0}^{T}\int_{\Omega} \frac{(n_{\varepsilon}+\varepsilon)^{m-1}|\nabla n_{\varepsilon}|^{2}}{n_{\varepsilon}} = \left( \frac{2}{m+1} \right)^2 \int_{0}^{T}\int_{\Omega} \frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^{2}}{n_{\varepsilon}} \leq C_1(T) \end{equation} for all $\varepsilon \in (0, 1)$ with some constant $C_1(T)>0$. Aided by this estimate we can have that $((n_{\ep}+ \varepsilon)^\frac{m+1}{2} )_{\varepsilon\in (0,1)}$ is bounded in $L^\frac{4}{3}([0,T);W^{1,\frac{4}{3}}(\Omega))$; however, it seems to be difficult to obtain the estimate for $\partial_t (n_{\ep}+\varepsilon)^\frac{m+1}{2}$ for all $m>0$. Thus we need additional estimates for approximate solutions. Here the inequality \eqref{test} ensures that \begin{align*} \int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}|\nabla n_{\varepsilon}|^{2} \leq \int_{0}^{T}\int_{\Omega} \frac{(n_{\varepsilon}+\varepsilon)^{m-1}|\nabla n_{\varepsilon}|^{2}}{n_{\varepsilon}} \leq C_1(T) \end{align*} for all $\varepsilon \in (0, 1)$. This together with the identity \[ \int_{0}^{T}\int_{\Omega}|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}|^{2} = \frac{m^2}4 \int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}|\nabla n_{\varepsilon}|^{2} \] means that $\bigl((n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}\bigr)_{\varepsilon \in (0, 1)}$ is bounded in $L^2_{\rm loc}([0, \infty); W^{1, 2}(\Omega))$. 
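For completeness we note that both gradient identities above are consequences of the pointwise chain rule: since \begin{align*} \nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}} = \frac{m+1}{2}(n_{\varepsilon}+\varepsilon)^{\frac{m-1}{2}}\nabla n_{\varepsilon} \quad \mbox{and}\quad \nabla (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}} = \frac{m}{2}(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}-1}\nabla n_{\varepsilon}, \end{align*} squaring the first identity yields $\left(\frac{2}{m+1}\right)^2 |\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2 = (n_{\varepsilon}+\varepsilon)^{m-1}|\nabla n_{\varepsilon}|^2$, which is the identity used in \eqref{test}, while squaring the second one gives the factor $\frac{m^2}{4}$ above. 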
We moreover see that $$ \|\partial_t (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}} \|_{L^1(0, T; (W^{2, 4}_{0}(\Omega))^{*})} \leq C_2(T) $$ for all $\varepsilon \in (0, 1)$ with some $C_2(T)>0$, which derives that $(\partial_{t}(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}})_{\varepsilon \in (0, 1)}$ is bounded in $L^1(0, T; (W^{2, 4}_{0}(\Omega))^{*})$. Then, aided by the Lions--Aubin theorem, we can show convergences of solutions of the approximation of \eqref{P} and we can prove Theorem \ref{mainthm1}. The plan of this paper is as follows. In Section \ref{Sec2} we introduce the approximate problem \eqref{Pe} of \eqref{P} and establish global existence in \eqref{Pe}. In Section \ref{Sec3} we show the several estimates for solutions to the approximate problem of \eqref{P}. Finally, in Section \ref{Sec4} we prove Theorem \ref{mainthm1} by passage to the limit in the approximate problem via estimates from Section \ref{Sec3}. \section{Global existence in an approximate problem}\label{Sec2} We start by considering the following approximate problem with parameter $\varepsilon >0$: \begin{equation}\label{Pe} \begin{cases} n_{\varepsilon t}+u_\varepsilon\cdot\nabla n_{\varepsilon} = \Delta (n_{\varepsilon}+\varepsilon)^m -\chi\nabla\cdot\big(\frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}\nabla c_\varepsilon\big) +\kappa n_{\varepsilon} -\mu n_{\varepsilon}^2, \\[2mm] c_{\varepsilon t}+u_\varepsilon\cdot\nabla c_\varepsilon =\Delta c_\varepsilon -c_\varepsilon\frac{1}{\varepsilon}\log\big(1+\varepsilon n_{\varepsilon}\big), \\[2mm] u_{\varepsilon t}+(Y_\varepsilon u_\varepsilon\cdot\nabla)u_\varepsilon =\Delta u_\varepsilon +\nabla P_\varepsilon + n_{\varepsilon}\nabla\Phi, \quad\nabla\cdot u_\varepsilon=0, \\[2mm] \partial_\nu n_{\varepsilon}|_{\partial\Omega} =\partial_\nu c_\varepsilon|_{\partial\Omega}=0, \quad u_\varepsilon|_{\partial\Omega}=0, \\[2mm] n_{\varepsilon}(\cdot,0)=n_{0},\quad c_\varepsilon(\cdot,0)=c_0,\quad u_\varepsilon(\cdot,0)=u_0, \end{cases} 
\end{equation} where $Y_\varepsilon=(1+\varepsilon A)^{-1}$. In this section we shall show global existence of solutions to the approximate problem \eqref{Pe}. We first give the following result which states local existence in \eqref{Pe}. \smallskip \begin{lem}\label{localsol} Let $\chi, \kappa \geq0$, $\mu>0$, $m>0 $ and let $\Phi\in C^{1+\beta}(\overline{\Omega})$ for some $\beta \in (0,1)$. Assume that $n_0,c_0,u_0$ satisfy \eqref{condi;ini1} with some $q>3,\theta \in(\frac{3}{4},1)$. Then for each $\varepsilon > 0$ there exist $T_{{\rm max},\e}$ and uniquely determined functions\/{\rm :} \begin{align*} n_{\varepsilon} &\in C^0(\overline{\Omega}\times[0,T_{{\rm max},\e})) \cap C^{2,1}(\overline{\Omega}\times(0,T_{{\rm max},\e})), \\ c_\varepsilon &\in C^0(\overline{\Omega}\times[0,T_{{\rm max},\e})) \cap C^{2,1}(\overline{\Omega}\times(0,T_{{\rm max},\e})) \cap L^\infty_{\rm loc}([0,T_{{\rm max},\e});W^{1,q}(\Omega)), \\ u_\varepsilon &\in C^0(\overline{\Omega}\times[0,T_{{\rm max},\e})) \cap C^{2,1}(\overline{\Omega}\times(0,T_{{\rm max},\e})), \end{align*} which together with some $P_\varepsilon\in C^{1,0}(\overline{\Omega}\times(0,T_{{\rm max},\e}))$ solve \eqref{Pe} classically. Moreover, $n_{\varepsilon}$ and $c_\varepsilon$ are positive and the following alternative holds\/{\rm :} $T_{{\rm max},\e}=\infty$ or \begin{align*} \norm{n_{\varepsilon}(\cdot,t)}_{L^{\infty}(\Omega)} +\norm{c_\varepsilon(\cdot,t)}_{W^{1,q}(\Omega)} +\norm{A^\theta u_\varepsilon(\cdot,t)}_{L^2(\Omega)} \to \infty \end{align*} as $t\nearrow T_{{\rm max},\e}$. \end{lem} \begin{proof} This lemma can be shown by a standard fixed point theorem with parabolic regularity arguments. More precisely, a combination of the proofs of \cite[Lemma 2.1]{Tao-Winkler_2011_non} and \cite[Lemma 2.1]{W-2012} enables us to obtain local existence in \eqref{Pe}. 
\end{proof} In the following for all $\varepsilon\in (0,1)$ we denote by $(n_{\ep},c_{\ep},u_{\ep})$ the corresponding solution of \eqref{Pe} given by Lemma \ref{localsol} and by $T_{{\rm max},\ep}$ its maximal existence time. We shall then establish global existence of solutions to the approximate problem \eqref{Pe} as well as some useful estimates. We first recall basic inequalities which are often used in studies of the chemotaxis-Navier--Stokes system. \begin{lem}\label{pote1} There exists a constant $C_{1} > 0$ such that \[ \int_{\Omega} n_{\varepsilon}(\cdot, t) \leq C_1 \quad \mbox{for all} \ t \in (0, T_{{\rm max}, \varepsilon}) \ \mbox{and all} \ \varepsilon >0. \] Moreover, there exists $C_{2}>0$ such that \[ \int_t^{t+\tau} \int_\Omega n_{\varepsilon}^2 \le C_{2} \] holds for all $t\in (0,T_{{\rm max}, \varepsilon}-\tau)$ and all $\varepsilon>0$, where $\tau \in (0, T_{{\rm max}, \varepsilon})$. \end{lem} \begin{proof} Integrating the first equation in \eqref{Pe} shows this lemma. \end{proof} \begin{lem}\label{lem;Linf;c} The function $t \mapsto \lp{\infty}{c_{\varepsilon}(\cdot, t)}$ is nonincreasing. In particular, \begin{equation*} \|c_{\varepsilon}(\cdot, t)\|_{L^{\infty}(\Omega)} \le \|c_0\|_{L^{\infty}(\Omega)} \end{equation*} holds for all $t \in (0, T_{{\rm max}, \varepsilon})$ and all $\varepsilon>0$. Moreover, we have \[ \int_0^{T_{{\rm max}, \varepsilon}} \int_\Omega |\nabla c_{\varepsilon}|^2 \le \frac 12 \int_\Omega |c_0|^2 \quad \mbox{for all}\ \varepsilon>0. \] \end{lem} \begin{proof} Applying the maximum principle to the second equation in \eqref{Pe} (see e.g., \cite[Lemma 2.1]{W2014}), we can establish the $L^\infty$-estimate for $c_{\ep}$. Moreover, multiplying the second equation in \eqref{Pe} by $c_{\ep}$ and integrating it over $\Omega \times (0,T_{{\rm max}, \varepsilon})$ imply this lemma. 
\end{proof} We next consider an estimate for the energy function $\mathcal{F}_\varepsilon:(0,T_{{\rm max},\ep})\to \mathbb{R}$ defined as \[ \mathcal{F}_\varepsilon(t) := \int_\Omega n_{\ep}(\cdot,t) \log n_{\ep}(\cdot,t) + \frac \chi 2 \int_\Omega \frac{|\nabla c_{\ep}(\cdot,t)|^2}{c_{\ep}(\cdot,t)} + K\chi \int_\Omega |u_{\ep}(\cdot,t)|^2 \] with some $K>0$, which plays an important role not only in the case $m=1$ (\cite{Lankeit_2016}) but also for general $m>0$. In order to obtain an estimate for $\mathcal{F}_\varepsilon$ we provide the following three lemmas. \begin{lem}\label{ne} There exists a constant $C>0$ such that for all $\varepsilon>0$, \begin{align*} \frac{d}{dt}\int_{\Omega} n_{\varepsilon}\log n_{\varepsilon} + \frac{\mu}{2}\int_{\Omega} n_{\varepsilon}^2\log n_{\varepsilon} + \frac{4m}{(m+1)^2}\int_{\Omega} \frac{|\nabla (n_{\varepsilon} + \varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} \leq \chi\int_{\Omega}\frac{\nabla n_{\varepsilon} \cdot \nabla c_{\varepsilon}}{1+\varepsilon n_{\varepsilon}} + C \end{align*} on $(0, T_{{\rm max}, \varepsilon})$. \end{lem} \begin{proof} We first obtain from $\nabla \cdot u_{\ep}=0$ in $\Omega \times (0,T_{{\rm max},\ep})$ and straightforward calculations that \begin{align}\label{ineq;energy;n} \frac d{dt} \int_\Omega n_{\ep} \log n_{\ep} & = \int_\Omega \log n_{\ep} \Delta (n_{\ep}+\varepsilon)^m - \chi \int_\Omega \log n_{\ep} \nabla \cdot \left( \frac{n_{\ep}}{1+\varepsilon n_{\ep}}\nabla c_{\ep} \right) \\ \notag &\quad\, + \kappa \int_\Omega n_{\ep} \log n_{\ep} -\mu \int_\Omega n_{\ep}^2 \log n_{\ep} + \kappa \int_\Omega n_{\ep} -\mu\int_\Omega n_{\ep}^2. 
\end{align} Then, since the function $s\mapsto (\kappa s-\frac \mu 2 s^2)\log s + \kappa s-\mu s^2$ is bounded from above on $(0,\infty)$, we have \begin{align*} \kappa \int_\Omega n_{\ep} \log n_{\ep} -\frac \mu 2 \int_\Omega n_{\ep}^2 \log n_{\ep} + \kappa \int_\Omega n_{\ep} -\mu\int_\Omega n_{\ep}^2 \le C_1 \end{align*} with some $C_1>0$; thus we can see from \eqref{ineq;energy;n} with the relation \begin{align*} \int_{\Omega} \log n_{\ep} \Delta (n_{\ep}+\varepsilon)^m &= -m\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{m-1}\frac{|\nabla n_{\varepsilon}|^2}{n_{\varepsilon}}\\ &= -\frac{4m}{(m+1)^2}\int_{\Omega} \frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} \end{align*} that this lemma holds. \end{proof} The following two lemmas were already proved in \cite[Lemmas 2.8 and 2.9]{Lankeit_2016}; thus we only recall their statements. \begin{lem}\label{ushi} There exist $K, C, k>0$ such that for all $\varepsilon>0$, \begin{align*} \frac{d}{dt}\int_{\Omega}\frac{|\nabla c_{\varepsilon}|^2}{c_{\varepsilon}} &+ k\int_{\Omega}c_{\varepsilon}|D^2\log c_{\varepsilon}|^2 + k\int_{\Omega}\frac{|\nabla c_{\varepsilon}|^4}{c_{\varepsilon}^3} \\ &\leq C + K\int_{\Omega}|\nabla u_{\varepsilon}|^2 -2\int_{\Omega}\frac{\nabla c_{\varepsilon}\cdot\nabla n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}} \quad \mbox{on}\ (0, T_{{\rm max}, \varepsilon}). \end{align*} \end{lem} \begin{lem}\label{tra} For all $\eta>0$ there exists $C_{\eta} > 0$ such that for all $\varepsilon>0$, \begin{align*} \frac{d}{dt}\int_{\Omega}|u_{\varepsilon}|^2 + \int_{\Omega} |\nabla u_{\varepsilon}|^2 \leq \eta \int_{\Omega}n_{\varepsilon}^2\log n_{\varepsilon} + C_{\eta} \quad \mbox{on}\ (0, T_{{\rm max}, \varepsilon}). \end{align*} \end{lem} Thanks to these lemmas, we can establish the estimate for $\frac{d \mathcal{F}_\varepsilon}{dt}$ which enables us to derive the desired estimate for $\mathcal{F}_\varepsilon$. 
\begin{lem}\label{tastu} There exist $C, k_{0}, K >0$ satisfying \begin{align}\label{ineq;Fe} &\frac{d}{dt}\Bigl[\int_{\Omega} n_{\varepsilon}\log n_{\varepsilon} + \frac{\chi}{2}\int_{\Omega}\frac{|\nabla c_{\varepsilon}|^2}{c_{\varepsilon}} + K\chi\int_{\Omega}|u_{\varepsilon}|^2 \Bigr] \\ \notag &+ \frac{\mu}{4}\int_{\Omega} n_{\varepsilon}^2\log n_{\varepsilon} + \frac{4m}{(m+1)^2}\int_{\Omega} \frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} + k_{0}\int_{\Omega} c_{\varepsilon}|D^2\log c_{\varepsilon}|^2 \\ \notag &+ k_{0}\int_{\Omega}\frac{|\nabla c_{\varepsilon}|^4}{c_{\varepsilon}^3} + k_{0}\int_{\Omega}|\nabla u_{\varepsilon}|^2 \leq C \quad \mbox{on}\ (0, T_{{\rm max}, \varepsilon})\quad \mbox{for all}\ \varepsilon>0. \end{align} \end{lem} \begin{proof} This lemma can be derived by adding Lemma \ref{ne}, $\frac{\chi}{2}$ times Lemma \ref{ushi} and $K\chi$ times Lemma \ref{tra} with $\eta:=\frac{\mu}{4K\chi}$; indeed, the chemotactic cross terms $\pm\chi\int_{\Omega}\frac{\nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}$ cancel, the term $K\chi\eta\int_{\Omega}n_{\varepsilon}^2\log n_{\varepsilon} = \frac{\mu}{4}\int_{\Omega}n_{\varepsilon}^2\log n_{\varepsilon}$ can be absorbed into $\frac{\mu}{2}\int_{\Omega}n_{\varepsilon}^2\log n_{\varepsilon}$, and $\frac{K\chi}{2}\int_{\Omega}|\nabla u_{\varepsilon}|^2$ can be absorbed into $K\chi\int_{\Omega}|\nabla u_{\varepsilon}|^2$. \end{proof} Now we are in a position to establish a uniform-in-$\varepsilon$ estimate for $\mathcal{F}_\varepsilon$. 
\begin{lem}\label{me} There exists $C>0$ such that \begin{align*} \mathcal{F}_\varepsilon(t) = \int_\Omega n_{\ep}(\cdot,t) \log n_{\ep}(\cdot,t) + \frac \chi 2 \int_\Omega \frac{|\nabla c_{\ep}(\cdot,t)|^2}{c_{\ep}(\cdot,t)} + K\chi \int_\Omega |u_{\ep}(\cdot,t)|^2 \leq C \end{align*} for all $t\in (0,T_{{\rm max},\ep})$ and all $\varepsilon>0$ and \begin{align*} &\int_{t}^{t+\tau}\int_{\Omega} n_{\varepsilon}^2\log n_{\varepsilon} + \int_{t}^{t+\tau}\int_{\Omega} \frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} + \int_{t}^{t+\tau}\int_{\Omega} c_{\varepsilon}|D^2\log c_{\varepsilon}|^2 \leq C, \\[1mm] &\int_{t}^{t+\tau}\int_{\Omega}\frac{|\nabla c_{\varepsilon}|^4}{c_{\varepsilon}^3} + \int_{t}^{t+\tau}\int_{\Omega}|\nabla u_{\varepsilon}|^2 \leq C, \\[1mm] &\int_{t}^{t+\tau}\int_{\Omega}|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^{\frac{4}{3}} + \int_{\Omega} |\nabla c_{\varepsilon}|^2 + \int_{t}^{t+\tau}\int_{\Omega}|\nabla c_{\varepsilon}|^4 + \int_{t}^{t+\tau}\int_{\Omega} n_{\varepsilon}^2 \leq C \end{align*} for all $t \in [0, T_{{\rm max}, \varepsilon}-\tau)$ and all $\varepsilon>0$, where $\tau:=\min\{1, \frac{1}{2}T_{{\rm max}, \varepsilon}\}$. \end{lem} \begin{proof} The proof is based on that of \cite[Lemma 2.11]{Lankeit_2016}. Noticing from the inequalities $s\log s \le \frac{1}{2e}+s^2 \log s$, $\int_\Omega \frac{|\nabla c_{\ep}|^2}{c_{\ep}} \le \lp{\infty}{c_0}\int_\Omega \frac{|\nabla c_{\ep}|^4}{c_{\ep}^3} + |\Omega|$ (from Lemma \ref{lem;Linf;c}) and $\int_\Omega |u_{\ep}|^2 \le C_1 \int_\Omega |\nabla u_{\ep}|^2$ with some $C_1>0$ (from the Poincar{\'{e}} inequality) that Lemma \ref{tastu} implies \begin{align*} \frac {d\mathcal{F}_\varepsilon}{dt} + k_1 \mathcal{F}_\varepsilon \le k_2 \end{align*} with some $k_1,k_2>0$, we establish the boundedness of $\mathcal{F}_\varepsilon$ on $(0,T_{{\rm max},\ep})$. 
Then, for $\tau=\min\{1,\frac 12T_{{\rm max},\ep}\}$, integrating the inequality \eqref{ineq;Fe} over $(t,t+\tau)$ together with Lemmas \ref{pote1} and \ref{lem;Linf;c} implies this lemma. \end{proof} We shall now establish global existence in the approximate problem \eqref{Pe} by using a Moser--Alikakos-type procedure. \begin{lem}\label{saru} For all $\varepsilon\in(0, 1)$, $T_{{\rm max}, \varepsilon}=\infty$ holds. \end{lem} \begin{proof} The proof is based on that of \cite[Lemma 3.9]{Winkler_2016}. Assume on the contrary that $T_{{\rm max}, \varepsilon}<\infty$ and put $p:=\min\{3+m,4\}$. We shall first verify the $L^p$-estimate for $n_{\ep}$. We see from the first equation in \eqref{Pe} and the fact $\nabla \cdot u_{\ep} =0 $ on $\Omega \times (0,T_{{\rm max}, \varepsilon})$ that \begin{align*} \frac{1}{p}\frac{d}{dt}\int_{\Omega}n_{\varepsilon}^{p} &= \int_{\Omega}n_{\varepsilon}^{p-1} \nabla\cdot \left( m(n_{\varepsilon}+\varepsilon)^{m-1}\nabla n_{\varepsilon} -\chi \frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}\nabla c_{\varepsilon} \right) \\ & \quad \, + \int_\Omega n_{\ep}^{p-1} (\kappa n_{\varepsilon}-\mu n_{\varepsilon}^2) - \frac 1p \int_\Omega u_{\varepsilon}\cdot\nabla n_{\varepsilon}^p \\ &=-m(p-1)\int_{\Omega}n_{\varepsilon}^{p-2}(n_{\varepsilon}+\varepsilon)^{m-1}|\nabla n_{\varepsilon}|^2 \\ &\,\quad+\chi(p-1)\int_{\Omega} \frac{n_{\varepsilon}^{p-1}}{1+\varepsilon n_{\varepsilon}}\nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon} +\kappa\int_{\Omega}n_{\varepsilon}^{p} -\mu\int_{\Omega}n_{\varepsilon}^{p+1}. 
\end{align*} Here, since $2p-4+2(1-m)_+<p+1$, the Young inequality yields that \begin{align*} \chi(p-1) &\int_{\Omega} \frac{n_{\varepsilon}^{p-1}}{1+\varepsilon n_{\varepsilon}} \nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon} \\ & \le \frac{\chi(p-1)}{\varepsilon} \int_{\Omega} n_{\ep}^\frac{p-2}{2}(n_{\ep}+\varepsilon)^\frac{m-1}{2}|\nabla n_{\varepsilon}| n_{\varepsilon}^{\frac{p}{2}-1}(n_{\ep}+\varepsilon)^{-\frac{m-1}{2}} |\nabla c_{\varepsilon}| \\ &\leq \frac{m(p-1)}{2}\int_{\Omega} n_{\varepsilon}^{p-2}(n_{\varepsilon}+\varepsilon)^{m-1}|\nabla n_{\varepsilon}|^2 +\int_{\Omega} n_{\varepsilon}^{2p-4}(n_{\ep}+\varepsilon)^{2(1-m)} +C_{1}\int_{\Omega}|\nabla c_{\varepsilon}|^4 \\ &\leq \frac{m(p-1)}{2}\int_{\Omega} n_{\varepsilon}^{p-2}(n_{\varepsilon}+\varepsilon)^{m-1}|\nabla n_{\varepsilon}|^2 +\frac{\mu}{2}\int_{\Omega}n_{\varepsilon}^{p+1} + C_2 +C_{1}\int_{\Omega}|\nabla c_{\varepsilon}|^4 \end{align*} on $(0, T_{{\rm max}, \varepsilon})$ with some $C_{1}=C_1(\varepsilon) > 0$ and $C_2=C_2(\varepsilon)>0$, where we used the inequalities $(a+b)^r \le 2^{r}(a^r + b^r)$ ($a,b \ge 0$, $r>0$) and $(a+b)^r \le b^r$ ($a,b \ge 0$, $r\le 0$) to obtain the last inequality. Therefore we obtain from the positivity of $n_{\varepsilon}$ that \begin{align*} \frac{1}{p}\frac{d}{dt}\int_{\Omega}n_{\varepsilon}^{p} \leq C_1\int_{\Omega}|\nabla c_{\varepsilon}|^4 + \kappa\int_{\Omega}n_{\varepsilon}^{p} + C_2. \end{align*} Thus it follows from Lemma \ref{me} that \begin{align*} \int_{\Omega}n_{\varepsilon}^p \leq C_{3} \quad \mbox{on}\ (0, T_{{\rm max}, \varepsilon}), \end{align*} where $C_{3}=C_{3}(\varepsilon)>0$. Then, aided by the $L^2$-estimate for $\nabla u_{\ep}$ (from a testing argument), we can obtain that \begin{align*} \|A^\theta u_{\varepsilon}(\cdot, t)\|_{L^{2}(\Omega)} \leq C_{4} \end{align*} for all $t\in (0,T_{{\rm max},\ep})$ with some $C_4=C_4(\varepsilon)>0$. 
Then the continuous embedding $D(A^\theta) \hookrightarrow L^\infty(\Omega)$ implies the $L^\infty$-estimate for $u_{\ep}$. By virtue of these estimates, a standard $L^p$-$L^q$ estimate for the Neumann heat semigroup on bounded domains, the inequality \[ \lp{3}{u(\cdot,t) \nabla c (\cdot,t)} \le \lp{\infty}{u(\cdot,t)} \lp{6}{\nabla c(\cdot,t)}^{\frac{1}{2}} \lp{2}{\nabla c(\cdot,t)}^{\frac{1}{2}} \] and the $L^2$-estimate for $\nabla c_{\ep}$ from Lemma \ref{me} imply the $L^6$-estimate for $\nabla c_{\ep}$ (cf.\ an argument in the proof of \cite[Lemma 3.10]{CKM}). Finally we shall verify the $L^\infty$-estimate for $n_{\ep}$. Put $\widetilde{\nep} (x,t):=\max\{n_{\ep} (x,t),s_0\} $ for $(x,t)\in \Omega\times (0,T_{{\rm max},\ep})$ with some $s_0>0$. Then we can see from $\nabla \cdot u_{\ep}=0$ in $\Omega\times (0,T_{{\rm max},\ep})$ that \begin{align*} \frac{d}{dt} \int_\Omega \widetilde{\nep}^p &+ p (p-1) \int_\Omega (n_{\ep}+\varepsilon)^{m-1}\widetilde{\nep}^{p-2}|\nabla \widetilde{\nep}|^2 \\ &\le p(p-1)\chi \int_\Omega \widetilde{\nep}^{p-2} \frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}\nabla c_{\varepsilon} \cdot \nabla \widetilde{\nep} + \kappa \int_\Omega \widetilde{\nep}^{p-1} n_{\ep} \end{align*} on $(0,T_{{\rm max},\ep})$. Thus, noting that \[ \frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}\nabla c_{\varepsilon} \in L^{\infty}(0, T_{{\rm max}, \varepsilon}; L^6(\Omega)), \quad n_{\varepsilon} \in L^{\infty}(0, T_{{\rm max}, \varepsilon}; L^{3+2m}(\Omega)), \] from a Moser--Alikakos-type procedure (see the proof of \cite[Lemma A.1]{Tao-Winkler_2012}), we can attain that \[ \lp{\infty}{n_{\ep}(\cdot,t)}\le C_5 \] for all $t\in (0,T_{{\rm max},\ep})$ with some $C_5=C_5(\varepsilon) >0$, which together with the extensibility criterion in Lemma \ref{localsol} shows $T_{{\rm max},\ep}=\infty$ for each $\varepsilon \in (0,1)$. 
\end{proof} \section{Uniform-in-$\varepsilon$ estimates}\label{Sec3} In this section we collect lemmas which are needed to show convergence of solutions of \eqref{Pe} as $\varepsilon \searrow 0$. Here the case $m=1$ has already been dealt with in \cite{Lankeit_2016}; thus we shall consider the case $m>0$ with $m\neq 1$. From Lemma \ref{me} we do not obtain an estimate for $\nabla n_{\ep}$ itself but only $L^{\frac{4}{3}}_{\rm loc}([0,\infty); W^{1,\frac{4}{3}}(\Omega))$-boundedness of $((n_{\ep}+\varepsilon)^{\frac{m+1}{2}})_{\varepsilon\in (0,1)}$. However, it seems to be difficult to derive enough regularity of $\partial_t (n_{\ep}+\varepsilon)^{\frac{m+1}{2}}$ for each $m>0$. Therefore we need to establish an estimate for $\nabla (n_{\ep}+\varepsilon)^\gamma$ with some $\gamma < \frac{m+1}{2}$. The following lemma is a cornerstone in the proof of Theorem \ref{mainthm1}. \begin{lem}\label{tool1} For all $T>0$ there exists a constant $C=C(T)>0$ such that \begin{align*} \int_0^T\int_{\Omega}|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}|^2 \leq C \end{align*} for all $\varepsilon\in(0,1)$. \end{lem} \begin{proof} In light of Lemma \ref{me} we see that there exists $C_1=C_1(T)>0$ such that \begin{align*} \int_0^T\int_{\Omega}|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}|^2 &= \frac{m^2}{(m+1)^2}\int_0^T\int_{\Omega}\frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}+\varepsilon} \\ &\le \int_0^T\int_{\Omega}\frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} \leq C_1. \end{align*} \end{proof} We next confirm the following lemma which will play an important role in deriving some time regularity of $(n_{\ep}+\varepsilon)^\frac{m}{2}$. \begin{lem}\label{tori} Let $m>0$ be such that $m\neq 1$. 
Then for all $T>0$ there exists a constant $C=C(T)>0$ such that for all $\varepsilon \in (0,1)$, \begin{align*} \int_{\Omega}(n_{\varepsilon} +\varepsilon)^{m} \leq C \end{align*} on $(0,T)$ and \begin{align*} \int_0^T \io (n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \le C \end{align*} hold. \end{lem} \begin{proof} From the first equation in \eqref{Pe} with $\nabla \cdot u_{\ep} =0$ in $\Omega \times (0,\infty)$ we have \begin{align*} \frac{1}{m}\frac{d}{dt}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m} & = -m(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \\ &\,\quad +(m-1) \chi \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2} \frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}\nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon} \\ & \quad \, + \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} (\kappa n_{\ep}-\mu n_{\ep}^2). \end{align*} Here we note from the Young inequality that \begin{align}\label{ineq;ab;intnepm} \left| \chi (m-1) \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2} \frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}}\nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon} \right| &\leq \chi |m-1| \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} |\nabla n_{\varepsilon}||\nabla c_{\varepsilon}| \\ \notag &\leq \frac{m|m-1|}{2}\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \\ \notag &\quad \, + \int_{\Omega} (n_{\varepsilon}+\varepsilon)^2 + C_{1}\int_{\Omega} |\nabla c_{\varepsilon}|^4, \end{align} where $C_{1}>0$. 
In the case that $m>1$, since $m-1>0$, we obtain that \begin{align*} \frac{1}{m}\frac{d}{dt}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m} +& \frac{m(m-1)}{2}\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \\ &\leq \int_{\Omega} (n_{\varepsilon}+\varepsilon)^2 + C_{1}\int_{\Omega} |\nabla c_{\varepsilon}|^4 + \kappa\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m}, \end{align*} which together with Lemma \ref{me} shows that there is $C_2=C_2(T)>0$ such that \begin{align*} \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m} \leq C_{2} \end{align*} on $(0,T)$ for all $\varepsilon\in (0,1)$ and \begin{align*} \int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \le C_2 \end{align*} for all $\varepsilon\in (0,1)$. On the other hand, in the case that $0<m<1$, we have from \eqref{ineq;ab;intnepm} that \begin{align*} \frac{1}{m}\frac{d}{dt}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m} & \geq \frac{m(1-m)}{2}\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \\ &\quad\, - \int_{\Omega} (n_{\varepsilon}+\varepsilon)^2 - C_{1}\int_{\Omega} |\nabla c_{\varepsilon}|^4 - \mu \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m+1}. \end{align*} Thus, integrating it over $(0,T)$, we derive from applications of the Young inequality \begin{align*} \int_\Omega (n_{\ep}+\varepsilon)^m \le m \int_\Omega (n_{\ep}+1) + (1-m)|\Omega| \end{align*} and \begin{align*} \int_0^T \int_\Omega (n_{\ep}+\varepsilon)^{m+1} \le \frac{m+1}{2} \int_0^T \int_\Omega (n_{\ep}+1)^2 + \frac{1-m}{2}|\Omega|T \end{align*} with Lemma \ref{pote1} that \begin{align*} \frac{m(1-m)}{2}\int_0^T \int_{\Omega} (n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \le C_3 \end{align*} with some $C_3=C_3(T)>0$. \end{proof} In order to see some time regularity of $(n_{\ep} + \varepsilon)^\frac{m}{2}$ we will give the following lemma. \begin{lem}\label{tool3} Let $m>0$ be such that $m\neq1$. 
Then for all $T>0$ there exists a constant $C=C(T)>0$ such that for all $\varepsilon \in (0,1)$, \begin{align*} \int_\Omega (n_{\ep}+\varepsilon)^{m-1} \le C \end{align*} on $(0,T)$ and \begin{align*} \int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 \leq C \end{align*} hold. \end{lem} \begin{proof} We derive from the first equation in \eqref{Pe} and integration by parts that \begin{align}\label{Blouson} \frac{d}{dt}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} &= -m(m-1)(m-2)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 \\ \notag &\,\quad+\chi(m-1)(m-2) \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-3}\frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}} \nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon} \\ \notag &\,\quad+\kappa(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}n_{\varepsilon} -\mu(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}n_{\varepsilon}^2. \end{align} Now we note from the Young inequality and Lemma \ref{me} that \begin{align}\label{ineq;m-1} & \left| \chi(m-1)(m-2) \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-3}\frac{n_{\varepsilon}}{1+\varepsilon n_{\varepsilon}} \nabla n_{\varepsilon}\cdot\nabla c_{\varepsilon} \right| \\ \notag &\leq \frac{m|(m-1)(m-2)|}{2}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 + C_1\int_{\Omega}|\nabla c_{\varepsilon}|^2 \\ \notag &\leq \frac{m|(m-1)(m-2)|}{2}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 + C_2 \end{align} with $C_1, C_2>0$. We first consider the cases that $m> 2$ and $0<m<1$. 
Since it follows that $(m-1)(m-2)>0$ and that if $m>2$ then \begin{align*} \kappa(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}n_{\varepsilon} -\mu(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}n_{\varepsilon}^2 \le \kappa(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} \end{align*} and if $0<m<1$ then \begin{align*} \kappa(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}n_{\varepsilon} -\mu(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-2}n_{\varepsilon}^2 \le \mu(1-m)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m} \le C_3 \end{align*} with some $C_3=C_3(T)>0$ (from Lemma \ref{tori}), we infer from \eqref{Blouson} that \begin{align*} &\frac{d}{dt}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} + \frac{m(m-1)(m-2)}{2}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 \\ \notag &\leq \kappa|m-1|\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} + C_4 \end{align*} with some $C_4=C_4(T)>0$, and hence there exists a constant $C_5=C_5(T)>0$ such that for all $\varepsilon\in(0,1)$, \begin{align*} \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} \le C_5 \end{align*} on $(0,T)$ and \begin{align*} \int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 \leq C_5. \end{align*} On the other hand, in the case that $1<m<2$, we see from \eqref{Blouson} and \eqref{ineq;m-1} that \begin{align*} \frac{d}{dt}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m-1} &\geq \frac{m(m-1)(2-m)}{2}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 -C_6 \\ \notag &\quad\, -\mu(m-1)\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{m} \end{align*} with some $C_6>0$. 
Hence, noticing that $ \int_\Omega (n_{\ep}+\varepsilon)^{m-1} \le (m-1) \int_\Omega (n_{\ep}+1) + (2-m)|\Omega| $ and $(m-1)(2-m)>0$, we derive from Lemmas \ref{pote1} and \ref{tori} that \begin{align*} \frac{m(m-1)(2-m)}{2} \int_0^T \int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 \leq C_7 \end{align*} with some $C_7=C_7(T)>0$. Finally, in the case that $m=2$, Lemma \ref{tool1} implies this lemma. \end{proof} The following time regularity of $(n_{\ep}+\varepsilon)^\frac{m}{2}$ will be useful in applying a Lions--Aubin type lemma later. \begin{lem}\label{tool4} Let $m>0$ be such that $m\neq1$. Then for all $T>0$ there exists a constant $C=C(T)>0$ satisfying \[ \|\partial_t (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}\|_{L^1(0, T; (W^{2, 4}_{0}(\Omega))^{*})} \leq C \quad \mbox{for all}\ \varepsilon \in (0, 1). \] \end{lem} \begin{proof} Let $T>0$ and let $\psi \in L^{\infty}(0, T; W^{2, 4}_0(\Omega))$. The first equation in \eqref{Pe} and integration by parts yield that \begin{align*} \int_0^T\int_{\Omega} (\partial_t (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}})\psi &= -\int_0^T\int_{\Omega}\psi u_{\varepsilon}\cdot\nabla(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}} \\ &\,\quad-\frac{m^2}{m+1}\left(\frac{m}{2}-1\right) \int_0^T\int_{\Omega} \psi \frac{\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}}{(n_{\varepsilon}+\varepsilon)^{\frac{1}{2}}} \cdot (n_{\varepsilon}+\varepsilon)^{m-2}\nabla n_{\varepsilon} \\ &\,\quad-\frac{m^2}{2}\int_0^T\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{m-\frac{3}{2}}\nabla n_{\varepsilon} \cdot (n_{\varepsilon}+\varepsilon)^{\frac{m-1}{2}}\nabla \psi \\ &\,\quad+\frac{m\chi}{m+1}\left(\frac{m}{2}-1\right) \int_0^T\int_{\Omega}\frac{n_{\varepsilon}\psi}{(1+\varepsilon n_{\varepsilon})(n_{\varepsilon}+\varepsilon)} \frac{\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}}{(n_{\varepsilon}+\varepsilon)^{\frac{1}{2}}} \cdot \nabla c_{\varepsilon} \\ &\,\quad+\frac{m\chi}{2}\int_0^T\int_{\Omega} 
(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}} \frac{n_{\varepsilon}}{(1+\varepsilon n_{\varepsilon})(n_{\varepsilon}+\varepsilon)} \nabla c_{\varepsilon}\cdot\nabla\psi \\ &\,\quad+\frac{m\kappa}{2}\int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}-1} n_{\varepsilon}\psi -\frac{m\mu}{2}\int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}-1} n_{\varepsilon}^2\psi. \end{align*} Then, noting from the Young inequality that \begin{align*} &\left| \int_0^T\int_{\Omega}\psi u_{\varepsilon}\cdot\nabla(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}} \right| \le \frac{\|\psi\|_{L^\infty(\Omega\times (0,T))}}{2} \left( \int_0^T\int_{\Omega} |u_{\varepsilon}|^2 + \int_0^T\int_\Omega |\nabla(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}}|^2 \right), \\ & \left| \int_0^T\int_{\Omega} \psi \frac{\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}}{(n_{\varepsilon}+\varepsilon)^{\frac{1}{2}}} \cdot (n_{\varepsilon}+\varepsilon)^{m-2}\nabla n_{\varepsilon} \right| \\ &\quad \quad \quad \le \frac{\|\psi\|_{L^\infty(\Omega\times (0,T))}}{2} \left( \int_0^T\int_{\Omega} \frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} + \int_0^T \io (n_{\varepsilon}+\varepsilon)^{2m-4}|\nabla n_{\varepsilon}|^2 \right) \end{align*} and \begin{align*} & \left| \int_0^T\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{m-\frac{3}{2}}\nabla n_{\varepsilon} \cdot (n_{\varepsilon}+\varepsilon)^{\frac{m-1}{2}}\nabla \psi \right| \\ &\quad \quad \quad \le \frac{\|\nabla \psi\|_{L^\infty(\Omega\times (0,T))}}{2} \left( \int_0^T \io (n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 + \int_0^T \io (n_{\varepsilon}+\varepsilon)^{m-1} \right), \\ &\left| \int_0^T\int_{\Omega}\psi\frac{n_{\varepsilon}}{(1+\varepsilon n_{\varepsilon})(n_{\varepsilon}+\varepsilon)} \frac{\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}}{(n_{\varepsilon}+\varepsilon)^{\frac{1}{2}}} \cdot \nabla c_{\varepsilon} \right| \\ &\quad \quad \quad \le
\frac{\|\psi\|_{L^\infty(\Omega\times (0,T))}}{2} \left( \int_0^T\int_{\Omega} \frac{|\nabla (n_{\varepsilon}+\varepsilon)^{\frac{m+1}{2}}|^2}{n_{\varepsilon}} + \int_0^T \io |\nabla c_{\varepsilon}|^2 \right) \end{align*} as well as \begin{align*} &\left| \int_0^T\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}} \frac{n_{\varepsilon}}{(1+\varepsilon n_{\varepsilon})(n_{\varepsilon}+\varepsilon)} \nabla c_{\varepsilon}\cdot\nabla\psi \right| \\ & \quad \quad \quad \le \frac{\|\nabla \psi\|_{L^\infty(\Omega\times (0,T))}}{2} \left( \int_0^T\int_{\Omega} (n_{\varepsilon}+\varepsilon)^{m} + \int_0^T \io |\nabla c_{\varepsilon}|^2 \right) \end{align*} and \begin{align*} &\left| \frac{m\kappa}{2}\int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}-1} n_{\varepsilon}\psi - \frac{m\mu}{2}\int_0^T\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{\frac{m}{2}-1} n_{\varepsilon}^2\psi \right| \\ & \quad \quad \quad \quad \le \|\psi\|_{L^\infty(\Omega\times (0,T))} \left( \int_0^T \io (n_{\ep}+\varepsilon)^{\max\{m,2\}} +C_1T\right) \end{align*} for some $C_1>0$ (from the fact that $\frac{m}{2}+1 \le \max\{m,2\}$), we obtain from Lemmas \ref{me}, \ref{tool1}, \ref{tori} and \ref{tool3} together with the continuous embedding $W^{2,4}_0(\Omega) \hookrightarrow W^{1,\infty}(\Omega)$ that \begin{align*} \int_0^T\int_{\Omega} (\partial_t (n_{\varepsilon}+\varepsilon)^{\frac{m}{2}})\psi \leq C_2\|\psi\|_{L^{\infty}(0, T; W^{2, 4}_0(\Omega))} \end{align*} with some $C_2=C_2(T)>0$. Therefore a standard duality argument yields the assertion of this lemma. \end{proof} We also record the following lemma concerning time regularity of $c_{\ep}$ and $u_{\ep}$. \begin{lem}\label{cu} For all $T>0$ there exists $C=C(T)>0$ satisfying $$ \|(c_{\varepsilon})_t\|_{L^2(0, T; (W^{1, 2}_{0}(\Omega))^{*})} \le C \quad \mbox{and} \quad \|(u_{\varepsilon})_t\|_{L^2(0, T; (W^{1, 3}_{0}(\Omega))^{*})} \leq C\quad \mbox{for all}\ \varepsilon \in (0, 1).
$$ \end{lem} \begin{proof} This lemma can be proved by the same arguments as those in the proofs of \cite[Lemmas 2.14 and 2.15]{Lankeit_2016}. \end{proof} Finally we give an estimate for $\nabla (n_{\ep}+\varepsilon)^m$ in order to verify convergence of $\int_0^T \io \nabla (n_{\ep}+\varepsilon)^m \cdot \nabla \varphi$ for all $\varphi\in C^\infty_0(\overline{\Omega}\times [0,\infty))$. \begin{lem}\label{inu1} Let $m>0$ be such that $m\neq 1$. Then for all $T>0$ and all $r\in(1, \frac 43]$ there exists a constant $C=C(T)>0$ such that \begin{align*} \|\nabla (n_{\varepsilon}+\varepsilon)^m\|_{L^r(0, T; L^r(\Omega))} \leq C \quad \mbox{for all}\ \varepsilon \in (0, 1). \end{align*} \end{lem} \begin{proof} Let $r=\frac 43$. Since $\frac{r}{2-r}= 2$, the Young inequality yields \begin{align*} \int_{0}^{T}\int_{\Omega}|\nabla (n_{\varepsilon}+\varepsilon)^m|^r &= \int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{(m-1)r}|\nabla n_{\varepsilon}|^r \\ &\leq \int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{\frac{r}{2-r}} + C_1\int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \\ &\le \int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+1)^{2} + C_1\int_{0}^{T}\int_{\Omega}(n_{\varepsilon}+\varepsilon)^{2m-3}|\nabla n_{\varepsilon}|^2 \end{align*} with some $C_1>0$. Therefore Lemmas \ref{me} and \ref{tori} establish the claim for $r=\frac 43$; since $\Omega\times(0,T)$ has finite measure, the case $r\in(1,\frac 43)$ then follows from the H\"older inequality. \end{proof} \section{Convergence: Proof of Theorem \ref{mainthm1}}\label{Sec4} In this section we consider convergence of solutions of the approximate problem \eqref{Pe} and then prove Theorem \ref{mainthm1}. We first state the following result, which can be obtained from the estimates established in Section \ref{Sec3}.
\begin{lem}\label{convergences} There exists a sequence $(\varepsilon_j)_{j\in\mathbb{N}}$ such that $\varepsilon_j\searrow 0$ as $j\to\infty$ and functions $n, c, u$ such that \begin{align*} n&\in L^2_{\rm loc}([0,\infty);L^2(\Omega)), \\ c&\in L^2_{\rm loc}([0,\infty);W^{1,2}(\Omega)), \\ u&\in L^2_{\rm loc}([0,\infty);W^{1,2}_{0,\sigma}(\Omega)) \end{align*} and that for all $p\in [1,6)$, \begin{align} \label{conv;nm2} (n_{\ep}+\varepsilon)^\frac{m}{2} &\to n^\frac{m}{2} && \mbox{in }L^2_{\rm loc}([0,\infty);L^p(\Omega)) \ \mbox{and a.e.\ in} \ \Omega\times (0,\infty), \\ \label{conv;n} n_{\varepsilon}&\to n &&\mbox{in }L^2_{\rm loc}([0,\infty);L^2(\Omega)), \\ \label{conv;c} c_\varepsilon&\to c &&\mbox{in }C^0_{\rm loc}([0,\infty);L^p(\Omega)), \\ \label{conv;u} u_\varepsilon&\to u &&\mbox{in }L^2_{\rm loc}([0,\infty);L^p(\Omega)), \\ \label{conv;nac} \nabla c_\varepsilon&\to\nabla c &&\mbox{weakly in} \ L^4_{\rm loc}([0,\infty);L^4(\Omega)), \\ \label{conv;nau} \nabla u_\varepsilon&\to\nabla u &&\mbox{weakly in} \ L^2_{\rm loc}([0,\infty);L^2(\Omega)), \\ \label{conv;Yu} Y_\varepsilon u_\varepsilon&\to u &&\mbox{in }L^2_{\rm loc}([0,\infty);L^2(\Omega)) \end{align} as $\varepsilon=\varepsilon_j\searrow0$. \end{lem} \begin{proof} Let $T>0$. Thanks to Lemmas \ref{tool1}, \ref{tori} and \ref{tool4}, we have that \[ \left( (n_{\ep}+\varepsilon)^\frac{m}{2} \right)_{\varepsilon\in (0,1)} \ \mbox{is bounded in} \ L^2(0,T; W^{1,2}(\Omega)) \] and \[ \left( \partial_t (n_{\ep}+\varepsilon)^\frac{m}{2} \right)_{\varepsilon\in (0,1)} \ \mbox{is bounded in} \ L^1(0,T;(W^{2,4}_0(\Omega))^\ast).
\] Therefore, aided by the compact embedding $W^{1,2}(\Omega) \hookrightarrow L^p(\Omega)$ for all $p\in [1,6)$ and the continuous embedding $L^p(\Omega) \hookrightarrow (W^{2,4}_0(\Omega))^\ast$, we can see from a Lions--Aubin type lemma (see \cite[Corollary 4]{Simon_1987}) that $((n_{\ep}+\varepsilon)^\frac{m}{2})_{\varepsilon\in (0,1)}$ is relatively compact in $L^2 (0,T; L^p(\Omega))$, which means that there are a sequence $(\varepsilon_j)_{j\in \mathbb{N}} \searrow 0$ and a function $v \in L^2(0,T;L^p(\Omega))$ such that $(n_{\ep}+ \varepsilon)^\frac{m}{2} \to v$ in $L^2(0,T;L^p(\Omega))$ as $\varepsilon=\varepsilon_j\searrow0$. Then by putting $n := v^{\frac{2}{m}}$ we have \eqref{conv;nm2}, which yields that $n_{\ep} \to n$ a.e.\ in $\Omega\times (0,\infty)$ as $\varepsilon=\varepsilon_j\searrow0$. The rest of the proof is mainly based on arguments in the proof of \cite[Proposition 2.1]{Lankeit_2016}; thus we only give a brief sketch. Since a uniform bound on $\int_0^T \io \Phi(n_{\ep}^2)$, where $\Phi(s):=\frac{s}{2}\log (s)$ for $s>0$, guarantees via the Dunford--Pettis theorem (cf.\ \cite[Lemma IV 8.9]{Dunford-Schwartz}) that $(n_{\ep}^2)_{\varepsilon\in (0,1)}$ is weakly relatively precompact in $L^1(\Omega\times (0,T))$, we obtain that there is a subsequence of $(\varepsilon_j)_{j\in \mathbb{N}}$ such that $\int_0^T \io n_{\varepsilon}^2 \to \int_0^T \io n^2$ as $\varepsilon=\varepsilon_j\searrow0$. This together with the convergence $n_{\ep} \to n$ weakly in $L^2(0,T;L^2(\Omega))$ as $\varepsilon=\varepsilon_j\searrow0$ (from Lemma \ref{me}) yields \eqref{conv;n} along a further subsequence.
On the other hand, by virtue of Lemmas \ref{lem;Linf;c}, \ref{me} and \ref{cu}, we can establish that $(c_{\ep})_{\varepsilon\in (0,1)}$ and $((c_{\ep})_t)_{\varepsilon\in (0,1)}$ are bounded in $L^\infty (0,T; W^{1,2}(\Omega))$ and in $L^2(0,T;(W^{1,2}_0(\Omega))^\ast)$, respectively, and that $(u_{\ep})_{\varepsilon\in (0,1)}$ and $((u_{\ep})_t)_{\varepsilon\in (0,1)}$ are bounded in $L^2 (0,T; W^{1,2}_\sigma(\Omega))$ and in $L^2(0,T; (W^{1,3}_\sigma(\Omega))^\ast)$, respectively. Thus \cite[Corollary 4]{Simon_1987} again implies \eqref{conv;c} and \eqref{conv;u}, and then Lemma \ref{me} leads to the convergences \eqref{conv;nac} and \eqref{conv;nau} along a further subsequence. Finally, noticing that $\lp{2}{Y_\varepsilon u_{\ep}(\cdot,t) - u(\cdot,t)} \to 0$ as $\varepsilon=\varepsilon_j\searrow0$ for a.e.\ $t>0$ and $\lp{2}{Y_\varepsilon u_{\ep}(\cdot,t) -u(\cdot,t)}^2 \le C$ for all $t>0$ and all $\varepsilon>0$ in view of Lemma \ref{me}, we can establish from the dominated convergence theorem that \eqref{conv;Yu} holds along a further subsequence. \end{proof} We then provide convergence of $\nabla (n_{\ep} + \varepsilon)^m$ from Lemma \ref{inu1}. \begin{lem}\label{nablaconv} Let $m>0$ be such that $m \neq 1$. Then the function $n$ obtained in Lemma \ref{convergences} satisfies that $n^m \in L^{\frac{4}{3}}_{\rm loc}([0, \infty); W^{1, \frac{4}{3}}(\Omega))$ and \[ \nabla(n_{\varepsilon}+\varepsilon)^m \to \nabla n^m \quad \mbox{weakly in}\ L^\frac{4}{3}_{\rm loc}([0, \infty); L^\frac{4}{3}(\Omega)) \] as $\varepsilon=\varepsilon_j\searrow0$. \end{lem} \begin{proof} Let $T>0$.
Since the Poincar\'e--Wirtinger inequality yields that \[ \int_0^T \lp{\frac{4}{3}}{(n_{\ep} + \varepsilon)^m}^\frac{4}{3} \le C_1 \int_0^T \lp{\frac{4}{3}}{\nabla (n_{\ep}+\varepsilon)^m}^\frac{4}{3} + C_1 \int_0^T |\Omega|^{-\frac{1}{4}} \lp{1}{(n_{\ep} + \varepsilon)^m} \] with some $C_1>0$, we first note from the Fatou lemma and Lemmas \ref{tori}, \ref{inu1} that \begin{align*} \int_0^T \lp{\frac{4}{3}}{n^m}^\frac{4}{3} & \le \liminf_{\varepsilon \searrow 0} \int_0^T \lp{\frac{4}{3}}{(n_{\ep} + \varepsilon)^m}^\frac{4}{3} \le C_2 \end{align*} with some $C_2=C_2(T)>0$, which implies that $n^m \in L^\frac{4}{3}(0,T; L^\frac{4}{3}(\Omega))$. We next have from Lemma \ref{inu1} that there exist a subsequence of $(\varepsilon_j)_{j\in \mathbb{N}}$ obtained in Lemma \ref{convergences} (again denoted by $(\varepsilon_j)_{j\in \mathbb{N}}$) and a function $w\in L^\frac{4}{3}(0,T; L^\frac{4}{3}(\Omega))$ such that \[ \nabla (n_{\ep} + \varepsilon)^m \to w \quad \mbox{weakly in} \ L^\frac{4}{3}(0,T; L^\frac{4}{3}(\Omega)) \] as $\varepsilon = \varepsilon_j \searrow 0$. In order to verify that $w=\nabla n^m$, it is enough to confirm that $(n_{\ep} + \varepsilon)^m \to n^m$ in $L^1(0,T;L^1(\Omega))$ as $\varepsilon = \varepsilon_j \searrow 0$. Now, since we already know that $((n_{\ep}+\varepsilon)^m)_{\varepsilon\in (0,1)}$ is uniformly integrable (from Lemma \ref{tori}) and $(n_{\ep}+\varepsilon)^m \to n^m$ a.e. in $\Omega\times (0,\infty)$ as $\varepsilon = \varepsilon_j \searrow 0$, the Vitali convergence theorem entails that \[ (n_{\ep} + \varepsilon)^m \to n^m\quad \mbox{in} \ L^1(0,T;L^1(\Omega)) \] as $\varepsilon = \varepsilon_j \searrow 0$. Thanks to this strong convergence, we can verify that $w=\nabla n^m$, which together with $w\in L^\frac{4}{3}(0,T; L^\frac{4}{3}(\Omega))$ shows that $n^m \in L^\frac{4}{3}(0,T; W^{1,\frac{4}{3}}(\Omega))$.
\end{proof} We will establish global existence of weak solutions to \eqref{P} from the convergences obtained in Lemmas \ref{convergences} and \ref{nablaconv}. \begin{prth1.1} Let $\varphi\in C^{\infty}_0(\overline{\Omega}\times [0,\infty))$ and $\psi\in C^{\infty}_{0,\sigma}(\Omega \times [0,\infty))$. Testing each equation in \eqref{Pe} with these functions and integrating by parts, we see that \begin{align}\label{weaksolPep} &-\int^\infty_0\!\!\!\!\int_\Omega n_{\ep} \varphi_t -\int_\Omega n_0\varphi(\cdot,0) -\int^\infty_0\!\!\!\!\int_\Omega n_{\ep} u_{\ep} \cdot\nabla\varphi \\ \notag &\hspace{5.8mm}=-\int^\infty_0\!\!\!\!\int_\Omega\nabla (n_{\ep}+ \varepsilon)^{m} \cdot\nabla\varphi +\chi\int^\infty_0\!\!\!\!\int_\Omega \frac{n_{\ep}}{1+\varepsilon n_{\ep}} \nabla c_{\ep} \cdot\nabla\varphi +\int^\infty_0\!\!\!\!\int_\Omega (\kappa n_{\ep} - \mu n_{\ep}^2)\varphi, \\[2.0mm] \label{weaksolPep2} &-\int^\infty_0\!\!\!\!\int_\Omega c_{\ep} \varphi_t -\int_\Omega c_0\varphi(\cdot,0) -\int^\infty_0\!\!\!\!\int_\Omega c_{\ep} u_{\ep} \cdot\nabla\varphi \\ \notag &\hspace{5.8mm}=-\int^\infty_0\!\!\!\!\int_\Omega \nabla c_{\ep} \cdot \nabla\varphi -\int^\infty_0\!\!\!\!\int_\Omega \frac{1}{\varepsilon}(\log(1+\varepsilon n_{\ep})) c_{\ep}\varphi, \\[2.0mm] \label{weaksolPep3} &-\int^\infty_0\!\!\!\!\int_\Omega u_{\ep} \cdot\psi_t -\int_\Omega u_0\cdot\psi(\cdot,0) -\int^\infty_0\!\!\!\!\int_\Omega Y_\varepsilon u_{\ep} \otimes u_{\ep}\cdot\nabla\psi \\ \notag &\hspace{5.8mm}=-\int^\infty_0\!\!\!\!\int_\Omega\nabla u_{\ep} \cdot\nabla\psi +\int^\infty_0\!\!\!\!\int_\Omega n_{\ep} \nabla\psi\cdot\nabla\Phi \end{align} hold.
Now, since the dominated convergence theorem implies that \begin{align*} \frac{1}{1+\varepsilon n_{\ep}} \to 1 \quad \mbox{in} \ L^4_{\rm loc} ([0,\infty); L^4(\Omega)) \quad \mbox{as}\ \varepsilon=\varepsilon_j\searrow0, \end{align*} the convergences of $n_{\ep}$ in $L^2_{\rm loc}([0,\infty); L^2(\Omega))$ and $\nabla c_{\ep}$ weakly in $L^4_{\rm loc}([0,\infty); L^4(\Omega))$ (see Lemma \ref{convergences}) yield \begin{equation}\label{conv;nnabalc} \frac{n_{\ep}}{1+\varepsilon n_{\ep}}\nabla c_{\ep} = n_{\ep} \cdot \frac{1}{1+\varepsilon n_{\ep}} \cdot \nabla c_{\ep} \to n\nabla c \quad \mbox{weakly in} \ L^1_{\rm loc}([0,\infty);L^1(\Omega)) \end{equation} as $\varepsilon=\varepsilon_j\searrow0$. On the other hand, to confirm convergence of $\frac{1}{\varepsilon} (\log (1+\varepsilon n_{\ep})) c_{\ep}$ in $L^1_{\rm loc}([0,\infty); L^1(\Omega))$ we shall show that $f_\varepsilon (n_{\ep}) \to n $ in $L^2_{\rm loc}([0,\infty);L^2(\Omega))$ as $\varepsilon=\varepsilon_j \searrow 0$, where $f_\varepsilon (s):= \frac{1}{\varepsilon} \log (1+\varepsilon s)$ for $s\geq0$. Noticing that \[ |f_\varepsilon (n) - n|^2 \to 0 \quad \mbox{a.e.\ in} \ \Omega\times(0, T) \ \mbox{as} \ \varepsilon=\varepsilon_j \searrow 0 \] and from the inequality $f_\varepsilon(s) \leq s$ ($s\geq0$) that \[ |f_\varepsilon (n) - n|^2 \le 2 n^2, \] we deduce from the dominated convergence theorem that for all $T>0$, \[ \|f_\varepsilon (n) - n\|^2_{L^2(0,T; L^2(\Omega))} = \int_0^T\int_{\Omega} |f_\varepsilon (n) - n|^2 \to 0 \] as $\varepsilon=\varepsilon_j \searrow 0$.
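The two pointwise facts about $f_\varepsilon$ used here, namely $f_\varepsilon(s)\le s$ and $f_\varepsilon(s)\to s$ as $\varepsilon\searrow 0$, both stem from the elementary bound $\log(1+x)\le x$. A minimal numerical illustration (all values illustrative; `f_eps` simply mirrors the definition of $f_\varepsilon$ above):

```python
import math

def f_eps(s, eps):
    """Regularization f_eps(s) = log(1 + eps*s)/eps from the approximate problem."""
    return math.log(1.0 + eps * s) / eps

# f_eps(s) <= s for every s >= 0, since log(1+x) <= x.
for s in [0.0, 0.5, 1.0, 10.0, 100.0]:
    for eps in [1.0, 0.1, 0.01]:
        assert f_eps(s, eps) <= s + 1e-12

# f_eps(s) -> s as eps -> 0; dominated convergence then upgrades this
# pointwise limit to convergence in L^2, as in the text.
s = 3.0
errors = [abs(f_eps(s, eps) - s) for eps in [1e-1, 1e-2, 1e-3]]
assert errors == sorted(errors, reverse=True)  # error shrinks as eps decreases
assert errors[-1] < 1e-2
```

The monotone shrinking of the error as $\varepsilon$ decreases is exactly the pointwise convergence that the dominated convergence argument relies on.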
Therefore, since the mean value theorem and the inequality \[ 0 < f_\varepsilon' (s) = \frac{1}{1+\varepsilon s} \le 1\] imply $|f_\varepsilon(a)-f_\varepsilon(b)| \le |a-b|$ for all $a,b\geq0$, we can see from \eqref{conv;n} that for all $T>0$, \begin{align*} \|f_\varepsilon (n_{\ep}) - n\|_{L^2(0,T; L^2(\Omega))} & \le \|f_\varepsilon (n_{\ep}) - f_\varepsilon (n)\|_{L^2(0,T; L^2(\Omega))} +\|f_\varepsilon (n) - n\|_{L^2(0,T; L^2(\Omega))} \\ &\le \|n_{\ep}- n\|_{L^2(0,T; L^2(\Omega))} +\|f_\varepsilon (n) - n\|_{L^2(0,T; L^2(\Omega))} \\ & \to 0 \end{align*} as $\varepsilon= \varepsilon_j \searrow 0$. This together with \eqref{conv;c} enables us to obtain that \begin{equation}\label{conv;lognc} \frac{1}{\varepsilon} (\log (1+\varepsilon n_{\ep})) c_{\ep} = f_\varepsilon (n_{\ep}) c_{\ep} \to nc \quad \mbox{in} \ L^1_{\rm loc}([0,\infty);L^1(\Omega)) \end{equation} as $\varepsilon= \varepsilon_j \searrow 0$. Then all convergences in Lemmas \ref{convergences}, \ref{nablaconv} as well as \eqref{conv;nnabalc}, \eqref{conv;lognc} allow us to pass to the limit in all integrals in \eqref{weaksolPep}--\eqref{weaksolPep3}, which means that the triplet $(n,c,u)$ is a global weak solution of \eqref{P}. \qed \end{prth1.1}
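As a closing sanity check, the Young-type splitting used in Lemma \ref{inu1} reduces, for $r=\frac43$, to the pointwise weighted AM--GM inequality $a^{4/3}b^{2/3}\le\frac23 a^2+\frac13 b^2$ together with the exponent identity $\frac{r}{2-r}=2$. A brief numerical sketch (illustrative only, not part of the proof):

```python
import random

random.seed(0)

# Weighted AM-GM / Young: a^(4/3) * b^(2/3) = (a^2)^(2/3) * (b^2)^(1/3)
#                                           <= (2/3) a^2 + (1/3) b^2,
# the pointwise inequality behind the L^(4/3) gradient bound in Lemma inu1
# (with a playing the role of (n+eps)^((2m-3)/2)|grad n| and b of n+eps).
def lhs(a, b):
    return a ** (4.0 / 3.0) * b ** (2.0 / 3.0)

def rhs(a, b):
    return (2.0 / 3.0) * a ** 2 + (1.0 / 3.0) * b ** 2

for _ in range(10000):
    a = random.uniform(0.0, 100.0)
    b = random.uniform(0.0, 100.0)
    assert lhs(a, b) <= rhs(a, b) + 1e-9

# Exponent bookkeeping for r = 4/3: r/(2-r) = 2, as stated in the proof.
r = 4.0 / 3.0
assert abs(r / (2.0 - r) - 2.0) < 1e-12
```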
\section*{} A quantum network, or entangled graph state of qubits \cite{ref:raussendorf2001aow,ref:benjamin2006bgs,ref:benjamin2009pmb, ref:santori2010nqo}, is a valuable resource for both universal quantum computation and quantum communication. Optically heralded entanglement \cite{ref:barrett2005ehf} can be utilized to generate the graph state's entanglement edges between qubit nodes. However, to date, entanglement generation rates are too low to realize multi-qubit networks \cite{ref:bernien2013heb,ref:kalb2017edb, ref:hucul2014mea,ref:delteil2016ghe,ref:schwartz2016dgc} due to photon emission into unwanted spatial and spectral modes. The integration of crystal defect-based qubits with photonic circuits offers an opportunity to significantly enhance photon collection efficiency \cite{ref:schroder2016qnd}, albeit at the cost of degrading the defect's optical properties, such as an increase in inhomogeneous emission energies and decreased spectral stability \cite{ref:faraon2011rez,ref:fu2010cnn}. Since the entanglement protocols used for generating graph state edges require detection of multiple indistinguishable photons from separate emitters \cite{ref:barrett2005ehf,ref:cabrillo1999ces}, compensating for the static and dynamic spread in emission energies is of critical importance for scalable on-chip graph state generation. The dc Stark effect has been used previously to tune \cite{ref:bassett2011ets,ref:bernien2013heb} and stabilize \cite{ref:acosta2012dso} the emission energy of quantum defects in bulk diamond. Here, in a series of experiments, we demonstrate the ability to tune the emission energy of photonic device-coupled near-surface NV centers over a large tuning range. Measurements on many single waveguide-coupled NV centers highlight the variability in response to an applied bias voltage, suggesting challenges in reaching the level of control necessary for chip-scale integration. 
Despite this variability, we are able to apply real-time voltage feedback control to partially stabilize the emission energy of a device-coupled NV center. \begin{figure} \begin{center} \includegraphics[width =0.5\textwidth]{Figure1_v6.png} \caption{(a) The measurement setup. Excitation is provided by a 532~nm laser at normal incidence to either an NV center in a waveguide or a waveguide-coupled disk resonator. Collection is via the grating coupler at the end of the waveguide. (b) SEM of finished devices. (Inset) False colored SEM image (GaP = pink, Ti/Au = yellow, diamond = grey). Electrode voltages are indicated by +V and GND. (c) Cross section of the photonic devices and electrodes. The magnitude of a simulated electric field in units of 10$^7$~V/m, resulting from the application of 100~V to the +V electrode, is shown in the color scale.} \label{fig:device} \end{center} \end{figure} In a GaP-on-diamond photonics platform,\cite{ref:barclay2011hnr, ref:gould2016lsg} we are able to efficiently couple single NV centers to disk resonators, enhancing the zero-phonon line (ZPL) emission rate via the Purcell effect \cite{ref:gould2016eez}. NV centers within $\sim$ 15~nm of the diamond surface, created via implantation and annealing, couple evanescently with the GaP layer. As a result of the static dipole moment of the defect's excited state, there is variation in emission energy both between different defects, due to variation in the local environment caused by implantation and processing damage, and in the emission energy of a single defect over time due to electric field fluctuations. However, this dipole moment also enables electric field control of the defect's emission energy \cite{ref:tamarat2006ssc,ref:bassett2011ets,ref:bernien2012tpq,ref:bernien2013heb}. We provide this control through the addition of Ti/Au electrodes to this GaP-on-diamond photonics platform. 
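For orientation, the dc Stark response underlying this tuning scheme can be written schematically, to leading order and neglecting strain and local-field corrections, as

```latex
\begin{equation*}
\Delta E_{\pm} \simeq -\,d_{\parallel}\,F_{\parallel} \;\pm\; d_{\perp}\,|F_{\perp}|,
\end{equation*}
```

where $d_{\parallel}$ and $d_{\perp}$ are effective longitudinal and transverse dipole parameters and $F_{\parallel}$, $F_{\perp}$ are the field components along and across the NV symmetry axis (notation introduced here for illustration; signs depend on convention): an axial field shifts the excited-state branches together, while a transverse field splits them.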
In the photonic devices used in these experiments\cite{SI}, NV centers are evanescently coupled to either a 150~nm-wide GaP single-mode waveguide or a whispering gallery mode of a 1.3 $\mu$m diameter waveguide-coupled disk resonator (Figure \ref{fig:device}(a)). A grating coupler at the end of the waveguide enables collection and measurement of the NV center emission. Around each photonic device is a pair of Ti/Au electrodes with a $\sim 6~\mu$m spacing (Figure \ref{fig:device}(b))\cite{SI}. These electrodes allow application of a biasing electric field transverse to the waveguides, with a simulated field strength inside the waveguide and disk resonators of a few MV/m (Figure \ref{fig:device}(c))\cite{SI}, similar to values used in previous Stark effect tuning experiments \cite{ref:acosta2012dso}. Due to the (001) diamond growth direction, this externally applied field has components both parallel and perpendicular to the NV axis\cite{SI}. Details on device fabrication and yield can be found in Ref.[19] and in the Supplemental Information (SI)\cite{SI}. Measurements were performed between 12--14~K in a closed-cycle He cryostat. A 532~nm laser was used for optical excitation, focused onto the sample with a 0.7~NA microscope objective. Photoluminescence (PL) was collected from the grating coupler using the same objective, coupled into a grating spectrometer, and detected by a CCD camera (Figure \ref{fig:device}(a)). The input and collection optical paths were separated by a 562~nm dichroic beamsplitter. Bias voltages were applied using a computer-controlled piezocontroller in the range of 0--100~V. \begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{Figure2_v7.png} \caption{(a) PL emission from a single waveguide-coupled ZPL showing (left) PL as a function of wavelength and (right) the applied bias voltage. The red diamonds are the center wavelength of the ZPL (upper axis). (b) Left: Xe gas tuning of a disk resonator.
The cavity mode is indicated by a dashed white line. The laser excitation spot is moved slightly between the two measurements at $t\sim28$~minutes, resulting in the appearance of a second coupled NV center. At 50 minutes, the Xe gas flow is restored and the cavity tuned from resonance. The two coupled emission lines tune with the application of the bias.} \label{fig:tuning} \end{center} \end{figure} We first demonstrate electric-field tuning of a waveguide-coupled NV center. Exciting the waveguide at normal incidence with collection from the relevant grating coupler (Figure \ref{fig:device}(a)), we found several coupled NV centers whose ZPL emission could be tuned with an applied bias voltage. The tuning range varied from a few GHz to a few hundred GHz. Figure \ref{fig:tuning}(a) shows an example of a nearly linear response of emission energy to applied bias voltage for a waveguide-coupled NV center with a $\sim 185$ GHz repeatable tuning range. We also electrically control the emission energy of a resonator-coupled NV center. We first tune the cavity mode of a waveguide-coupled disk resonator onto NV ZPL resonance via Xe gas deposition, while collecting the PL emission from the waveguide grating coupler. The Xe gas deposition results in a redshift of the resonator cavity mode. Figure \ref{fig:tuning}(b, left) shows the resulting Xe gas tuning curve for one disk resonator. Xe gas flow is halted from $t\sim15$~minutes to $t\sim45$~minutes to perform two voltage experiments and then resumed. NV centers that couple with the cavity mode are bright when in resonance with the cavity mode and not visible otherwise \cite{ref:gould2016eez}. There are several NV centers that couple to the cavity mode for this particular disk resonator. With the cavity mode tuned to resonance with two NV centers, we apply a square wave bias voltage (Figure \ref{fig:tuning}(b)), and we see the two ZPL emission lines moving in response to the applied bias voltage. 
These emission lines are from two separate centers since the laser spot position corresponding to maximum PL emission is different for each center. Specifically, when the laser spot position is adjusted at $t\sim28$~minutes, the second ZPL appears. This demonstrates the ability to tune NV center emission energy even while coupled to a cavity mode, combining the enhanced emission rates from the cavity coupling with tunability from the electrodes. \begin{figure} \begin{center} \includegraphics[width = 0.5 \columnwidth]{Figure3_v6.png} \caption{ (a) Examples of different types of voltage-dependent behavior in an ensemble of waveguide-coupled NV centers. (Left) PL as a function of wavelength. (Right) Applied bias voltage (black squares, bottom axis) and center wavelength of the PL emission from the ZPL indicated by a red arrow in the left panel (red squares, top axis). Examples of voltage-independent ZPL emission energy (yellow arrow) and uncorrelated spectral diffusion (green arrow) are also indicated in the left panel. Color scale is in counts per 20~s. (b) ZPL emission energy as a function of time for the applied square-wave bias voltage (c). A slow transient response of the PL emission energy is observed. (d) Current-voltage characteristic for the electrodes measured for different incident green laser powers, showing the presence of a power-dependent photocurrent.} \label{fig:strange} \end{center} \end{figure} Our measurements of the tuning range of many centers show substantial variation in individual NV center voltage response. Due to the implantation density used to create the NV centers \cite{SI}, several centers can be excited in a single laser spot ($d_{laser}\sim 0.8~\mu$m). Figure \ref{fig:strange}(a) shows the bias voltage response of such an ensemble in the same waveguide. 
One of the NV centers, indicated by a red arrow above Figure \ref{fig:strange}(a), displays a large voltage-dependent tuning range of 190 GHz, while a nearby NV ZPL has a stable emission energy under the applied bias voltage (yellow arrow) and a third displays a large spectral diffusion ($\sim$ 70 GHz) uncorrelated with the applied bias voltage (green arrow). This variation in NV center behavior has several causes. The first factor is the center's relative orientation to the electric field. Longitudinal fields, parallel to the NV symmetry axis, will shift excited state energy levels linearly, while transverse fields will result in an excited state energy splitting that grows with increasing field.\cite{ref:bassett2011ets,ref:acosta2012dso,SI} The second factor is the few-mW non-resonant excitation, which results in photoionization of nearby defects and long-lived non-equilibrium charge distributions that can either amplify or screen the external electric field,\cite{ref:bassett2011ets} changing the effective Stark shift.\cite{ref:bassett2011ets,ref:bernien2012tpq} The local electric field is the combination of this local photoinduced field and the externally applied field. Previous work on Stark tuning of grown-in NV centers has shown similar variation in tuning ranges and voltage responses.\cite{ref:tamarat2006ssc,ref:bassett2011ets,ref:acosta2012dso} This variability in behavior even over small spatial scales presents a challenge to achieving the level of control required for chip-scale entanglement generation. In addition to spatial variability, we often observe temporal variability in the bias voltage response of these NV centers. Figure \ref{fig:strange}(b) shows a slow transient response of the ZPL emission energy from a single waveguide-coupled NV center to the applied bias voltage (Figure \ref{fig:strange}(c)). 
Current-voltage characteristic measurements of the electrodes, measured with a source measure unit, demonstrate the generation of a laser power-dependent photocurrent of a few pA per mW (Figure \ref{fig:strange}(d)), which suggests that slow photoinduced charging processes are responsible for these transient responses \cite{ref:bassett2011ets}. An exponential decay with a $\sim$ 40~s time constant fits the observed response. The observed emission linewidths of these NV centers also suggest fast spectral diffusion of the ZPL energy. Consequently, charging processes in this system take place over a range of timescales from sub-second to several tens of seconds. Voltage feedback stabilization must address energy variation across this range of timescales, implying that faster stabilization measurements will provide better correction to both the effects of the fast spectral diffusion and the slower field variations. Chip scale entanglement generation will require addressing the spatial and temporal variations of implanted NV center energies so that multiple centers can be tuned to and maintained in resonance, despite the static and dynamic energy variation of these centers that results from implantation and processing damage. Previous NV center emission energy stabilization experiments used photoluminescence excitation (PLE) measurements of grown-in NV centers for feedback control of the bias voltage \cite{ref:acosta2012dso}, but the large dynamic energy variation of implanted NV centers annealed at 800$^{\circ}$C makes PLE measurements difficult \cite{ref:fu2010cnn}. To obtain the necessary spectral resolution to demonstrate temporal stabilization, we thus utilized an Echelle spectrometer with 1.3~GHz resolution~\cite{ref:acosta2012dso}. We identified a spectrally-diffusing NV center with a Stark tunability of $\sim$ 300~MHz/V over a range of 100~V. 
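The slow exponential transient described above can be quantified by a log-linear fit of the detuning time trace after a voltage step. A minimal sketch on synthetic, noiseless data (all numbers illustrative, not the measured values):

```python
import math

# Synthetic transient: detuning(t) = A * exp(-t / tau) after a voltage step.
TAU_TRUE = 40.0   # seconds -- illustrative, matching the ~40 s scale in the text
A_TRUE = 12.0     # GHz -- illustrative amplitude

times = [5.0 * k for k in range(1, 20)]  # 5 s .. 95 s
detunings = [A_TRUE * math.exp(-t / TAU_TRUE) for t in times]

# Log-linearize: log(detuning) = log(A) - t/tau, then ordinary least squares.
xs = times
ys = [math.log(d) for d in detunings]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
tau_fit = -1.0 / slope          # recovered time constant
a_fit = math.exp(ybar - slope * xbar)  # recovered amplitude

assert abs(tau_fit - TAU_TRUE) < 1e-6
assert abs(a_fit - A_TRUE) < 1e-6
```

With measurement noise, the same fit would be applied to the logarithm of the baseline-subtracted trace, and the recovered time constant would carry an uncertainty set by the noise level.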
We measured the spectral diffusion of this NV center under a constant bias voltage of 45~V for 50 minutes (Figure \ref{fig:stab}(a)), using a 40~s integration time for the CCD to ensure an adequate signal-to-noise ratio. The measured linewidth was 4.5$\pm 1.2$~GHz, a result of spectral diffusion of the ZPL emission during the integration time of the CCD camera. In the second measurement, the bias voltage was actively adjusted to stabilize the ZPL emission energy. Figure \ref{fig:stab}(c) shows the applied bias voltage as a function of time. After each 40~s spectrum, the ZPL emission energy was determined and a correction to the bias voltage applied based on a PID algorithm\cite{SI}. Figure \ref{fig:stab}(b) shows the reduced ZPL spectral diffusion under active stabilization with a linewidth of 5.2$\pm 1.2$~GHz. \begin{figure} \begin{center} \includegraphics[width = 0.7\columnwidth]{Figure4.png} \caption{(a) PL from an unstabilized ZPL at a constant 45~V bias voltage. The maximum spectral diffusion is 7.2~GHz and the average absolute difference from the center wavelength is 2.7~GHz. (b) Stabilized ZPL PL from a waveguide-coupled NV center. The maximum spectral diffusion is 3.9~GHz and the average absolute difference from the center wavelength is 1.2~GHz. (c) Bias voltage applied during active Stark effect stabilization of the ZPL in (b). NV PL is collected at the excitation spot due to the temporal drift of the cavity resonance over the time required for these measurements.} \label{fig:stab} \end{center} \end{figure} Finally, chip-scale entanglement generation in this platform will require high yields from the fabrication process. In this experiment, 12 waveguides and 21 resonators were studied. All waveguides and 8 of the resonators exhibited coupled NV centers. Of the 8 resonator-coupled centers, two could be voltage-tuned more than 20~GHz with the remainder tuning $<$ 6~GHz. This yield demonstrates the need for both NV center-device alignment, e.g.
via patterned implantation \cite{ref:toyli2010csn}, and full Stark control (2 or 3 axis) \cite{ref:bassett2011ets,ref:acosta2012dso} for chip-based entanglement. We have demonstrated electric field control of the emission energy of photonic device-integrated implanted NV centers and feedback stabilization of this emission energy. Combined with the enhanced emission rates and collection efficiencies possible in photonic devices, these results are a necessary component of chip-scale entanglement generation. The performance of these devices can be improved by several feasible steps. First, implantation and high-temperature vacuum annealing recipe development \cite{ref:chu2014cot} can result in near-surface NV centers with narrower linewidths and smaller spectral diffusion. This will enable the use of PLE measurements for the requisite fast feedback stabilization and better enable the generation of indistinguishable photons from separate NV centers. Second, future device designs can incorporate both patterned implantation and multiaxis Stark control to improve the yield of tunable NV centers. With these improvements, it should be possible to perform on-chip generation of indistinguishable photons from multiple NV centers with the collection rates necessary for scalable entangled state generation. \begin{acknowledgement} This material is based on work supported by the National Science Foundation (Grant No. 1506473) and the Defense Advanced Research Projects Agency (Award No. W31P4Q-15-1-0010). We would like to acknowledge V. Acosta and C. Santori for useful discussions, N. Shane Patrick for e-beam lithography support, and HP Labs for donation of the Echelle spectrometer. Devices were fabricated at the Washington Nanofabrication Facility, a part of the National Nanotechnology Coordinated Infrastructure network. E. R.
Schmidgall was supported by an appointment to the Intelligence Community Postdoctoral Research Fellowship Program, administered by Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence. \end{acknowledgement} \begin{suppinfo} Additional information on fabrication procedures, NV center properties, the stabilization algorithm, and additional figures. \end{suppinfo} \section{Electrostatic Modeling} We modeled the electric field produced by the waveguide-transverse electrode structure in Figure 1 of the main text using COMSOL Multiphysics. Since the bottom of each electrode is composed of 7~nm Ti, we follow Acosta {\it et al.}\cite{ref:acosta2012dso} in modeling the electrodes as being composed completely of Ti. The modeled structure is incorporated directly from the GDSII file used in the electron beam lithography writes. We use a relative permittivity for the diamond substrate of $\epsilon_{r,d}=5.1$ and for the GaP $\epsilon_{r,GaP}=11.4$. We simulate 100~V applied to one electrode with the second electrode grounded, the maximum bias voltage applied in our experiments. This results in a maximum applied field inside the disk resonators of $\sim$ 5 MV/m and inside the waveguide of $\sim$ 3 MV/m at the height of the GaP/diamond interface. Figure \ref{fig:sup_field} shows the simulated field profile. It is important to note that, due to photoinduced fields especially under non-resonant excitation,\cite{ref:bassett2011ets,ref:acosta2012dso} these simulated fields are not the final electric fields inside the photonic devices. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{FigureS2_v2.png} \caption{Electric field magnitude (V/m) from the integrated electrodes for V=100~V applied to one electrode with the second electrode grounded. 
(a) XY (top-down) view at a height of 600~nm above the diamond substrate, corresponding to the GaP-diamond interface in the photonic devices. The semitransparent rectangles indicate the position of the electrodes on the diamond substrate. (b) YZ (side) view.} \label{fig:sup_field} \end{center} \end{figure} \section{Fabrication} Near-surface NV centers were created in the single-crystal electronic grade diamond sample (ElementSix) by N${^+}$ ion implantation (10 keV, 3$\times 10^{10}$ cm$^{-2}$, CuttingEdge Ions), followed by a two-step anneal. A 2-hour, 800$^{\circ}$C annealing step was performed under a 5\%/95\% H$_{2}$/Ar forming gas atmosphere in order to allow diffusion of vacancies to form NV centers. A subsequent 24-hour $460^{\circ}$C anneal was performed in air in order to oxygen-terminate the surface and improve stability of the negatively charged state of the near-surface NV centers \cite{ref:fu2010cnn}. A 125~nm thick GaP membrane was transferred to the diamond via epitaxial liftoff in hydrofluoric acid and van der Waals bonding \cite{ref:yablonovitch1990vdw}. Using negative-tone optical lithography (NR9-1000 resist, AD10:DI 3:1 developer), a set of 5~nm/50~nm Ti/Au alignment marks was fabricated on the GaP-on-diamond chip via evaporation and liftoff in acetone. These alignment marks were used for ensuring overlay between the electron beam lithography and reactive ion etching step that defined the photonic devices and the electron beam lithography and evaporation/liftoff step that defined the integrated Ti/Au electrodes and wires around the photonic devices. Following definition of the alignment marks, hydrogen silsesquioxane (HSQ) was spin-coated to be used as the electron beam lithography resist. Photonic devices were patterned by electron-beam lithography (JEOL 6300). Two reactive-ion etch (RIE) steps were then used to etch the devices. 
The first RIE step (3.0 mTorr, 1.0/6.0/3.0 sccm Cl$_{2}$/Ar/N$_2$, 235~V dc bias) was used to etch the GaP layer and the second (25.0 mTorr, 20 sccm O$_{2}$, 65~V dc bias) was used to etch the diamond. The diamond etch depth was $\sim$ 600~nm. For the definition of metal electrodes, PMMA A8 495 was used as a positive resist for electron beam lithography. After development in 1:1 MIBK/IPA, 7~nm/70~nm Ti/Au was deposited via evaporation and liftoff in acetone. Thick 70~nm/700~nm Ti/Au bond pads for wirebonding were defined by negative-tone optical lithography (NR9-1000 resist, AD10:DI 3:1 developer), evaporation, and liftoff. The sample was then indium-mounted on the sample holder, and ball/wedge ultrasonic wirebonding with Au wire was used to connect sample electrodes to the sample holder electroless nickel immersion gold PCB connections. Figure \ref{fig:sup_process} shows an overview of the fabrication process. \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{FigureS1.png} \caption{The fabrication process.} \label{fig:sup_process} \end{center} \end{figure} \section{Fabrication Yields} Of the 32 waveguides and 128 disk resonators fabricated on this sample, 12 waveguides had working electrodes such that a voltage-dependent response was observed from waveguide-coupled NV centers (38\%). The remaining devices had either broken waveguides (10 devices, 31\%), broken electrode leads (7 devices, 22\%), or both (3 devices, 9\%). In the intact waveguides, every waveguide had coupled NV centers whose emission could be observed via the grating coupler. Of the 128 disk resonators, 21 were located on intact waveguides with working electrodes and had cavity modes within the Xe gas tuning range ($\sim$ 2~nm) of the ZPL emission wavelength (16\% of the disk resonators). Xe gas tuning resulted in resonator-coupled NV centers in 8 of these 21 resonators. Thus, the overall yield for resonator-coupled NV centers with working electrodes is 6\%.
The photonic device yields are comparable with yields reported in previous GaP-on-diamond device fabrication attempts \cite{ref:gould2016eez,ref:gould2016lsg}. The addition of electrodes lowers the overall yield for the disk resonators, but increasing the number of disk resonators per waveguide from the 4 used here to 6 or more should offset this effect. \section{Properties of the Device-Coupled NV Centers} \begin{figure} \begin{center} \includegraphics[width =\columnwidth]{Figure_S3v1.png} \caption{(a) Linewidth of $N=15$ device-coupled implanted NV centers (blue bars). Red line shows a normal distribution of mean 34.7~GHz, standard deviation 7.4~GHz. (b) Tuning range of $N=30$ device-coupled implanted NV centers.} \label{fig:NVstats} \end{center} \end{figure} In these devices, the average linewidth for a device NV center as measured in our standard grating spectrometer is 34.7 GHz with a standard deviation of 7.4 GHz (Figure \ref{fig:NVstats}(a)). This is based on fitting the observed linewidths of 15 waveguide-coupled NV centers. Using our high-resolution Echelle spectrometer, we find a cavity NV linewidth of 6.3 $\pm$ 3.3~GHz from a set of 14 centers. We note that due to the higher resolution of this system, we are biased toward detecting NV centers with narrower linewidths. For comparison purposes, the standard spectrometer-limited and Echelle spectrometer-limited linewidths of a single deep as-grown NV center in this same sample are 16 GHz and 1.3 GHz (1 pixel), respectively. Thus, the linewidths of the device-incorporated shallow implanted NV centers are overall larger than those of growth-incorporated deep NV centers. We observe a broad distribution of tuning ranges (Figure \ref{fig:NVstats}(b)) within the ensemble of investigated NV centers. In a set of 30 of the observed device-coupled implanted NV centers, no tuning was visible for 11 of the centers (37\%), though this does not preclude tuning that we cannot observe with our spectrometer resolution.
In 7 of the centers (23\%), tuning was observed but the range was less than 50 GHz. In 5 of the NV centers (17\%), the tuning range was between 50 and 100 GHz. In 7 of the centers (23\%), the tuning range was larger than 100 GHz. In all centers where tuning was observed, hysteretic behavior was also observed. \subsection{Tuning Ranges in the Absence of Strain} The distribution of tuning ranges in Figure \ref{fig:NVstats}(b) is consistent with NV orientations relative to the waveguide and electrode geometry. The sample growth direction is [001] and the edges are $\langle110\rangle$ (Figure \ref{fig:sample}). \begin{figure} \begin{center} \includegraphics[width = 0.25\textwidth]{sample.png} \caption{Sample axes and field direction for estimating the tuning range.} \label{fig:sample} \end{center} \end{figure} We can define a sample $X$ axis along $[110]$ and a sample $Y$ axis along $[\bar{1}10]$. With respect to the $X$ and $Y$ axes, there are 8 possible NV center orientations: along $\pm X$ ($[111]$,$[11\bar{1}]$,$[\bar{1}\bar{1}1]$, and $[\bar{1}\bar{1}\bar{1}]$) and along $\pm Y$ ($[\bar{1}11]$,$[\bar{1}1\bar{1}]$,$[1\bar{1}1]$, and $[1\bar{1}\bar{1}]$). The photonic devices are aligned to the sample axes, so we can treat the applied electric field as being strictly along the sample $Y$ axis. Thus, for half the NV orientations, the applied field is oriented strictly perpendicular to the NV symmetry axis and results in a splitting of the $E_x$ and $E_y$ excited NV energy levels (Case 1). In the other half, the applied field has components both parallel to and perpendicular to the NV center symmetry axis, which results in both a shift and a splitting of the $E_x$ and $E_y$ energy levels (Case 2). The magnitude of these energy changes depends on the components of the NV dipole moment parallel to ($d_{||}$) and perpendicular to ($d_{\perp}$) the NV symmetry axis.
The resulting energy shifts are thus: \begin{equation}\begin{cases}E_x = d_{\perp}F\approx 25~\text{GHz}& \text{Case 1}\\E_y = -d_{\perp}F \approx -25~\text{GHz}& \text{Case 1}\\ E_{x} = d_{||} 0.8 F+ d_{\perp} 0.6 F \approx 31~\text{GHz}& \text{Case 2}\\ E_{y} = d_{||} 0.8F - d_{\perp} 0.6 F \approx1~\text{GHz}& \text{Case 2}\end{cases}\end{equation} where $F$ is the magnitude of the electric field and the tuning ranges are calculated using literature-reported values for $d_{||}$ and $d_{\perp}$ of 4 GHz/MV/m and 5 GHz/MV/m respectively \cite{ref:acosta2012dso} and the previously simulated field magnitude of $F=$5 MV/m. These values do not take into account the fact that observed Stark tuning coefficients are $\sim $ 4 times larger under strong cw green excitation of the type used in our experiments \cite{ref:acosta2012dso,ref:bassett2011ets}, nor do they take into account photoinduced local fields. Additionally, the reported $d_{||}$, $d_{\perp}$ values are estimated to be correct only to within a factor of 2-3 \cite{ref:acosta2012dso}. The fundamental mode of the waveguide is TE polarized (parallel to the applied external field), such that only NV transitions that emit in this polarization will be observed via the grating coupler. When accounting for these factors, the tuning ranges observed in our devices are consistent with what can be expected from the applied external field. \subsection{Tuning Ranges in the Presence of Strain} These implanted device-coupled NV centers are in a high strain environment, as evidenced by the wide distribution of NV center ZPL emission energies in Figure 3(a) of the main text. Consequently, the applied electric field is primarily a perturbation on the strain-induced energy variation. 
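The zero-strain Case 1 and Case 2 shifts quoted above can be cross-checked with a few lines of arithmetic. This sketch uses the literature dipole components and the simulated 5 MV/m field from the text; recall that the reported $d_{||}$, $d_{\perp}$ values are themselves only good to a factor of 2--3:

```python
# Dipole couplings (GHz per MV/m) and applied field magnitude (MV/m)
d_par, d_perp, F = 4.0, 5.0, 5.0

# Case 1: field strictly perpendicular to the NV axis -> pure splitting
Ex_case1 = d_perp * F              # +25 GHz
Ey_case1 = -d_perp * F             # -25 GHz

# Case 2: field has a 0.8 F component along and a 0.6 F component
# across the NV axis -> combined shift and splitting
Ex_case2 = d_par * 0.8 * F + d_perp * 0.6 * F   # 31 GHz
Ey_case2 = d_par * 0.8 * F - d_perp * 0.6 * F   # 1 GHz
```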
In the case of fixed strain, the Hamiltonian can be written as \begin{equation}H = (\hbar \omega_0 + d_{||}F_z)\textbf{I}+\frac{1}{\sqrt{2}}\left(\begin{matrix}V_{E_x} & -V_{E_y}\\-V_{E_y} & -V_{E_x}\end{matrix}\right)\end{equation} where in this case $x$,$y$, and $z$ are the coordinate basis of the NV center including strain and $V_{E_{x,y}} =S_{E_{x,y}}-d_{\perp}F_{x,y}$ where $F_{x,y}$ is the component of the applied field along the NV center basis axes and $S_{E_{x,y}}$ are the fixed strain components \cite{ref:bassett2011ets}. The energy eigenvalues are then given by $E_{\pm}=h\nu\pm\frac{1}{2}h\delta$ where $h\nu = \hbar \omega_0+d_{||}F_z$ and $h\delta = \sqrt{2}\left(V_{E_x}^2+V_{E_y}^2\right)^{1/2}$. We can transform from the sample coordinate basis to the NV coordinate basis using the transformation matrix \begin{equation}\textbf{M}= \frac{1}{\sqrt{3}}\left(\begin{matrix}p_X & p_Y & -\sqrt{2}p_Z \\ -\sqrt{3}p_{Y}p_{Z}& \sqrt{3}p_{X}p_{Y}& 0\\ \sqrt{2} p_{X} & \sqrt{2}p_{Y}& p_{Z} \end{matrix}\right)\end{equation} where $p_{X,Y,Z}$ are the projections of the NV center on the sample $X$,$Y$, and $Z$ axes (e.g., for $[111]$, $p_X = 1$, $p_Y=0$, and $p_Z=1$) \cite{ref:bassett2011ets}. In the case of this [111] NV center for an applied field $F = F_Y$ along the sample $Y$ axis, the energy shifts are thus given by \begin{equation}E_{\pm}=\hbar\omega_0\pm\frac{\sqrt{2}}{2}\left[S_{E_x}^2+\left(S_{E_y}-d_{\perp}F_Y\right)^2\right]^{1/2}.\end{equation} Similarly, for a $[\bar{1}11]$ NV center ($p_X = 0, p_Y = 1, p_Z = 1$) and an applied field $F = F_Y$, the energy shifts are given by \begin{equation} E_{\pm} = \hbar \omega_0 + \frac{d_{||}\sqrt{2}}{\sqrt{3}}F_Y\pm\frac{\sqrt{2}}{2}\left[ \left(S_{E_x}-\frac{d_{\perp}F_Y}{\sqrt{3}}\right)^2+S_{E_y}^2\right]^{1/2}.\end{equation} In these cases, since the emission axes are primarily determined by the strain as opposed to the applied electric field, both components will couple to the TE mode of the waveguide.
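The $[111]$ expression above can be evaluated with a short helper. This is a sketch only: $d_{\perp}$ is the literature value quoted earlier, and the strain components chosen below are purely illustrative assumptions, not measured values:

```python
import numpy as np

def stark_shifts_111(S_Ex, S_Ey, F_Y, d_perp=5.0):
    """E_pm - hbar*omega0 (GHz) for a [111]-oriented NV center with fixed
    strain S_Ex, S_Ey (GHz) and field F_Y (MV/m) along the sample Y axis:
    E_pm = +/- (sqrt(2)/2) * sqrt(S_Ex^2 + (S_Ey - d_perp*F_Y)^2)."""
    half_split = (np.sqrt(2.0) / 2.0) * np.hypot(S_Ex, S_Ey - d_perp * F_Y)
    return +half_split, -half_split

# Illustrative strain of (30, 10) GHz and the simulated 5 MV/m field
e_plus, e_minus = stark_shifts_111(30.0, 10.0, 5.0)
```

For this orientation the field is perpendicular to the NV axis, so there is no common $d_{||}$ shift; the $[\bar{1}11]$ case would add the $\frac{d_{||}\sqrt{2}}{\sqrt{3}}F_Y$ term to both eigenvalues.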
Thus, strain increases the number of observed emission lines and provides further variation to the expected tuning ranges. \section{Feedback Stabilization Experiment} The feedback was performed on the ZPL position as measured using the Echelle spectrometer. An initial tuning range measurement of the ZPL emission is used to determine the tuning range and bias voltage dependence per pixel ($V_{pix}$). The initial peak position for the initial bias voltage $p_0$ is input by the user. Initial fitting parameters (peak intensity, background level, and peak width) are also provided by the user at the beginning of the feedback measurement. \begin{figure}[h] \begin{center} \includegraphics[width = 0.7\columnwidth]{SIFigure.png} \caption{NV frequency stabilization using proportional feedback. (a) PL from an unstabilized ZPL at a constant 45~V bias voltage. The maximum spectral diffusion is 8.3~GHz and the average absolute difference from the center wavelength is 3.2~GHz. (b) Stabilized ZPL PL from a waveguide-coupled NV center. The maximum spectral diffusion is 4.5~GHz and the average absolute difference from the center wavelength is 1.3~GHz. (c) Bias voltage applied during active Stark effect stabilization of the ZPL in (b). Feedback is purely proportional to the error in peak position, i.e., $K_p$=0.8, $K_i$=$K_d$=0.} \label{fig:sup_stable} \end{center} \end{figure} \subsection{Voltage Feedback Algorithm} The applied bias voltage is updated per spectrum based on current peak position $p$. Due to the broad linewidths, a Gaussian of the form $y = a \exp\left[-\left((p-b)/c\right)^{2}\right]+d$ is fit to a region around the desired ZPL emission line, where $a$ is the intensity above background, $b$ the center emission energy, $c$ relates to the linewidth, and $d$ is the background.
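The Gaussian fit that extracts the peak position $b$ can be sketched as follows. The synthetic spectrum, pixel grid, amplitudes, and noise level below are illustrative assumptions, not values from the experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(p, a, b, c, d):
    # y = a * exp(-((p - b)/c)^2) + d: amplitude a above background d,
    # center b (pixels), width parameter c (pixels)
    return a * np.exp(-((p - b) / c) ** 2) + d

rng = np.random.default_rng(1)
pix = np.arange(100.0)                               # spectrometer pixels
counts = gaussian(pix, 200.0, 47.3, 4.0, 20.0) + rng.normal(0.0, 3.0, pix.size)

# User-supplied initial guesses, as described in the text
popt, _ = curve_fit(gaussian, pix, counts, p0=(150.0, 50.0, 5.0, 10.0))
a_fit, b_fit, c_fit, d_fit = popt                    # b_fit is the peak position
```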
The difference between the current emission line location and the initial line location $\Delta p$ is then calculated and the bias voltage updated by a proportional integral derivative (PID) algorithm to correct for this error. \begin{equation}V_{new}=V_{prev} + (K_p~ \Delta p + K_i~I + K_d~D)~ V_{pix}\end{equation} \begin{equation}I=I + \Delta p~\Delta V \end{equation} \begin{equation}D= (\Delta p - \Delta p_{old})/\Delta V\end{equation} \begin{equation}\Delta V = |V_{prev}-V_{old}|\end{equation} where $K_p$,$~K_i$ and $K_d$ are the PID parameters determined by a semi-supervised learning algorithm. For the data shown in Fig. 4 of the main text, $K_p$=0.8,$~K_i$=$K_p$/500,$~K_d$=$K_p$/10, and $V_{pix}$=4.17~V/pixel. $\Delta V$ is the previous voltage step and $\Delta p_{old}$ is the previous error in peak position. The PID algorithm achieves better stabilization than the purely proportional feedback scheme shown in Fig.~\ref{fig:sup_stable}.
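A literal transcription of the update equations above might look like the following sketch. The function name, the dictionary-based state, and the guard against a zero previous voltage step are implementation choices not specified in the text:

```python
def pid_voltage_update(v_prev, dp, state, Kp=0.8, Ki=0.8 / 500, Kd=0.8 / 10,
                       v_pix=4.17):
    """One step of the PID bias-voltage correction described above.
    dp: current peak-position error (pixels); state holds the integral
    term and the previous error/voltage; v_pix converts pixels to volts."""
    dv = abs(v_prev - state["v_old"])            # previous voltage step
    I = state["I"] + dp * dv                     # integral accumulator
    D = (dp - state["dp_old"]) / dv if dv else 0.0   # discrete derivative
    v_new = v_prev + (Kp * dp + Ki * I + Kd * D) * v_pix
    state.update(I=I, dp_old=dp, v_old=v_prev)   # carry state to next step
    return v_new
```

One step with a 1-pixel error and the quoted gains moves the bias by $(0.8 + 0.8/500 + 0.08)\times 4.17 \approx 3.68$~V.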
\section{Background and Motivation} \label{sec:introduction} \indent Understanding the relationships between the macroscopic parameters of the different species of a gas is critical to predicting the evolution and dynamics of that gas. A gas in thermodynamic equilibrium exhibits equal temperatures between all constituent species, i.e., $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{tot}}$ $=$ 1 for $s'$ $\neq$ $s$ (see Appendix \ref{app:Definitions} for further details and parameter/symbol definitions) and does not allow for heat flow. The phase-space distributions for the constituents of a gas in thermodynamic equilibrium are isotropic, they exhibit no skewness (i.e., heat flux), and they are centered at the same bulk flow velocity. A subtle contrast exists for thermal equilibrium, where one still maintains $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{tot}}$ $=$ 1 for $s'$ $\neq$ $s$, but this does not require isotropic or uniformly flowing velocity distributions, e.g., one can have heat fluxes or counter-streaming populations \citep[e.g.,][]{evans90a, hoover86a}. A non-equilibrium gas can exhibit $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{tot}}$ $\neq$ 1, among other departures from a maximal entropy state. If the temperatures are mass-proportional, i.e., uniform thermal speeds, then the species can be said to have the same velocity distribution \citep[e.g.,][]{ogilvie69a}. \indent Generally, a gas requires some form of irreversible energy dissipation and transfer between species to reach thermodynamic equilibrium. In the Earth's atmosphere, the primary mechanism is binary particle collisions \citep[e.g.,][]{petschek58, shu92a}.
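The statement that mass-proportional temperatures imply uniform thermal speeds can be made concrete with the most-probable speed $w_{s} = \sqrt{2 k_{B} T_{s}/m_{s}}$. The sketch below is illustrative only: the numerical proton temperature is an assumed typical value, and the alpha-particle mass is approximated as $4 m_{p}$:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant (J/K)
m_p = 1.67262192e-27    # proton mass (kg)
m_a = 4.0 * m_p         # alpha-particle mass, approximated as 4 m_p

def thermal_speed(T, m):
    """Most-probable thermal speed w = sqrt(2 k_B T / m), T in K, m in kg."""
    return math.sqrt(2.0 * k_B * T / m)

T_p = 1.2e5                           # K, an assumed typical proton temperature
w_p = thermal_speed(T_p, m_p)
w_a = thermal_speed(4.0 * T_p, m_a)   # mass-proportional: T_alpha = 4 T_p
# w_a == w_p: equal thermal speeds, i.e., the same velocity distribution width
```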
In the ionized gas, or plasma, of the solar wind, Coulomb collisions play a role, but the collision rate is often too low to drive $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{tot}}$ $\rightarrow$ 1 as the plasma convects to Earth from the sun \citep[e.g.,][]{kasper08a, maruca13b}. Because the collision rate is often so low, the solar wind is typically considered a collisionless plasma (or, at the very least, a weakly collisional plasma). Determining the initial non-equilibrium $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{tot}}$ $\neq$ 1 state of the gas and how it is established is critical to understanding its evolution and dynamics. \indent The relationship between various plasma parameters for the different major solar wind species is not well understood, yet such understanding is critical for predicting the evolution and dynamics of the solar wind. Some of the most important parameters are (see Appendix \ref{app:Definitions} for definitions): the scalar temperatures $T{\scriptstyle_{s, j}}$ of species $s$, temperature ratios $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$, and plasma betas $\beta{\scriptstyle_{s, j}}$ (where $j$ $=$ $tot$, $\parallel$, or $\perp$, i.e., the component with respect to the quasi-static magnetic field, $\textbf{\textit{B}}{\scriptstyle_{o}}$). While some of these parameters have been measured in previous work (e.g., see Tables \ref{tab:PrevTemperatures}, \ref{tab:PrevTempRatios}, and \ref{tab:PrevBetas} in Appendix \ref{app:PreviousMeasurements}), we are not aware of a comprehensive statistical study of these parameters in the literature using data from a single spacecraft. \indent Many of these plasma parameters are critically important for theoretical and observational work using spacecraft that cannot measure, for instance, the electron distributions.
Most previous studies were case studies or limited to one parameter without summarizing tables providing quantities for future reference and independent of comparison to other parameters. For instance, the study by \citet[][]{newbury98a} is one of the only studies that directly compared the electron and proton temperatures for a long-duration (i.e., more than 1 year), statistically significant dataset (they used 18 months of ISEE 3 data). It is also one of the most often cited works\footnote{There were 71 citations on SAO/NASA ADS as of~\today.} for the average values of the ratio $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ in the solar wind. However, it relied on five-minute averages for only 18 months of data, or $\sim$160,000 measurements. \indent One of the reasons there have been no long-duration statistical studies is that making an accurate measurement of, for instance, the full electron velocity distribution is very difficult. The spacecraft needs to unambiguously measure the total electron density using quasi-thermal noise spectroscopy \citep[e.g.,][]{meyervernet89a, pulupa14a, pulupa14b} or the spacecraft electric potential or both. Both of these measurements require accurate electric field instrumentation, with the former requiring radio frequency measurements above the local upper hybrid frequency \citep[e.g.,][]{pulupa14a, pulupa14b} and the latter accurate, DC-coupled monopole measurements \citep[e.g.,][]{cully07a}. Only recently, with the high quality instrumentation and calibrated measurements by the \emph{Wind} spacecraft, have truly long-duration, single spacecraft statistical studies of the solar wind been possible.
\indent In this paper we describe the first long-duration statistical analysis of the temperatures, plasma betas, and temperature ratios of electrons, protons, and alpha-particles observed by the \emph{Wind} spacecraft near 1 AU between January 1995 and January 2005, comprising over one million measurements. That is, this study spans from the end of solar cycle 22 (i.e., March 1986 to May 1996) through much of solar cycle 23 (i.e., June 1996 to December 2008). However, since the study does not span multiple complete solar cycles, we cannot perform a proper solar cycle dependence analysis. This work is timely as it will provide a statistical baseline for the upcoming \emph{Solar Orbiter} \citep[][]{muller13a} and \emph{Parker Solar Probe} \citep[][]{fox16a} missions, as well as the future IMAP mission. \indent The paper is outlined as follows: Section \ref{sec:DefinitionsDataSets} introduces the data sets, databases, and methodology used herein; Section \ref{sec:SolarWindStatistics} describes the statistical results; and Section \ref{sec:DiscussionandConclusions} presents the discussion and conclusion. We also include appendices that provide additional details for the reader of the parameter definitions (Appendix \ref{app:Definitions}), collision rates (Appendix \ref{app:CoulombCollisions}), and summaries of previous work (Appendix \ref{app:PreviousMeasurements}). \section{Data Sets and Methodology} \label{sec:DefinitionsDataSets} \indent In this section we introduce the instrument data sets, shock database, interplanetary coronal mass ejection (ICME) catalog, the data selection method, and the analysis techniques used to examine the solar wind plasma observed by the \emph{Wind} spacecraft \citep{harten95a} near 1 AU. The symbol/parameter definitions can be found in Appendix \ref{app:Definitions}.
Note that the purpose of the study is not to evaluate the quality of the datasets but to provide a concise summary of the statistical properties of some critically important solar wind parameters near 1 AU. \subsection{Instruments} \label{subsec:Instruments} \indent Quasi-static magnetic field vectors ($\textbf{\textit{B}}{\scriptstyle_{o}}$) were taken from the \emph{Wind}/MFI dual, triaxial fluxgate magnetometers \citep[][]{lepping95} using the one-minute-cadence data. The components of some parameters used herein are defined with respect to $\textbf{\textit{B}}{\scriptstyle_{o}}$ using the subscript $j$. That is, for all temperature-dependent parameters we compute the values for the entire distribution ($j$ $=$ $tot$), the parallel component ($j$ $=$ $\parallel$), and the perpendicular component ($j$ $=$ $\perp$). \indent The proton and alpha-particle densities ($n{\scriptstyle_{s}}$) and temperatures ($T{\scriptstyle_{s, j}}$) were taken from the \emph{Wind}/SWE Faraday Cups (FCs) \citep[][]{ogilvie95} at a $\sim$92 second cadence covering an energy-per-charge range of $\sim$150--8000 eV/C. The velocity moments were calculated using a nonlinear least-squares fit of bi-Maxwellians to each species \citep[][]{kasper06a}. The SWE FCs have a variable energy (speed) resolution of $\sim$6.5--13\% ($\sim$3.3--6.5\%), depending on the energy window. The ion moments are constrained assuming quasi-neutrality, i.e., $n{\scriptstyle_{e}}$ $=$ $n{\scriptstyle_{p}}$ $+$ 2$n{\scriptstyle_{\alpha}}$, where $n{\scriptstyle_{e}}$ is taken from the WAVES/TNR observations (see explanation below). \indent The electron densities ($n{\scriptstyle_{e}}$) and temperatures ($T{\scriptstyle_{e, j}}$, where $j$ $=$ $tot$, $\parallel$, or $\perp$) were taken from the \emph{Wind}/3DP electron electrostatic analyzers \citep[][]{lin95a} with a $\sim$45 or $\sim$78 second cadence and constant energy bin width of $\sim$20\%.
The electron distributions are constructed by merging the data from the EESA Low and EESA High instruments \citep[e.g., see][for instrument details]{wilsoniii09a, wilsoniii10a} from the 3DP suite following a similar method to that in \citet[][]{pulupa14a}. However, instead of separating the electron distributions into the core, halo, and strahl components, the velocity moments are directly computed from the merged instrument measurements. The energy ranges of EESA Low and High are commandable, but a notable point is that the lowest energy channel setting is $\sim$3 eV (more commonly the instrument is set to $E{\scriptstyle_{min}}$ $\sim$ 5 eV). Thus, with a $\sim$20\% energy bin width an approximation of the lowest resolvable temperature would be $\sim$2 eV without fitting to a model function. \indent In order to obtain accurate electron moments, EESA measurements must be corrected for the effects of spacecraft floating potential. We estimate spacecraft potential following the methods in \citet[][]{salem01a}. We note that unlike many previous missions, \emph{Wind} can unambiguously measure the total electron density by observing the upper hybrid line (also called the plasma line) with the WAVES/TNR instrument \citep[][]{bougeret95a}. Analysis of the upper hybrid line via the technique of quasi-thermal noise spectroscopy \citep[][]{meyervernet89a} yields an accurate measurement of $n{\scriptstyle_{e}}$ unaffected by spacecraft potential. The unbiased value of $n{\scriptstyle_{e}}$ measured by TNR is used to validate and refine the spacecraft potential correction, improving the accuracy of the electron moments. \indent The study is limited to data derived from velocity moments of the entire species over the observed energy ranges, e.g., we do not separate the electron data into the core, halo, or strahl components \citep[e.g.,][]{bale13a, horaites18a, pulupa14a} nor do we account for secondary proton beam contributions \citep[e.g.,][]{wicks16a}.
We also do not separate the distributions by fast or slow solar wind speeds, as this will be addressed in detail in a future study that will also examine the subcomponents of each species [e.g., \textit{Salem et al. in preparation}]. \subsection{Lists and Data Selection} \label{subsec:ListsandDataSelection} \indent The 3DP dataset spans from January 1, 1995 to December 31, 2004, thus we limit all results and analysis to that time range. As this does not span a solar cycle, we did not perform any solar-cycle-dependent analyses. \indent We measure the plasma parameters observed by \emph{Wind} and separate them into four categories based upon five constraints (see below for definitions): all times (\textbf{Constraints 1} and \textbf{2}), all times excluding interplanetary (IP) shocks (\textbf{Constraints 1}--\textbf{3}), only times within magnetic obstacles (MOs) \citep[e.g.,][]{nieveschinchilla16a, nieveschinchilla18a} (\textbf{Constraints 1}, \textbf{2}, and \textbf{4}), and all times excluding MOs (\textbf{Constraints 1}, \textbf{2}, and \textbf{5}). We define five constraints to exclude ``low quality'' data from the analysis and to provide a concise, objective reference for the four categories listed above. We defined the constraints as: \begin{description}[itemsep=0pt,parsep=0pt,topsep=0pt] \item[\textbf{Constraint 1}] Require the 3DP and SWE fit flags be $\geq$ 5 and 10, respectively. The fit flags are statistical estimates of the quality of the model fit results. The SWE fit flag also indicates whether a constraint was imposed to find a stable solution in the fitting algorithm, i.e., a SWE fit flag $=$ 10 means no assumptions or constraints were necessary. \item[\textbf{Constraint 2}] Require $T{\scriptstyle_{s, tot}}$ $<$ 1 keV \& $T{\scriptstyle_{s, \parallel}}$ $<$ 1.2 keV \& $T{\scriptstyle_{s, \perp}}$ $<$ 1.2 keV \& $\lvert \textbf{\textit{B}}{\scriptstyle_{o,j}} \rvert$ $<$ 120 nT as a second ``high quality'' data qualifier.
This constraint is mostly to remove outliers and extreme conditions that may not be caught by \textbf{Constraint 1}. \item[\textbf{Constraint 3}] Use all time periods excluding IP shocks, where times within/near an IP shock are defined as five hours prior to and one day after the shock arrival at \emph{Wind}. Time periods defining when the spacecraft was in/around IP shocks are taken from the Harvard Smithsonian Center for Astrophysics' \emph{Wind} shock database\footnote{\url{https://www.cfa.harvard.edu/shocks/wi\_data/}}. \item[\textbf{Constraint 4}] Only use time periods during magnetic obstacles (MOs) within ICMEs. Time periods defining when the spacecraft was in/around ICMEs are taken from the \emph{Wind} ICME Catalog\footnote{\url{https://wind.nasa.gov/ICMEindex.php}}. The times within MOs are given by the MO time ranges listed for each ICME entry \citep[][]{nieveschinchilla16a, nieveschinchilla18a}. \item[\textbf{Constraint 5}] Use all time periods excluding MOs. \end{description} \noindent Note that all results presented herein satisfy \textbf{Constraints 1} and \textbf{2}. For the time period analyzed, there were 170 ICMEs and 239 IP shocks. \indent Prior to computing any quantity or ratio, we constructed a uniform grid of two minute intervals spanning from January 1, 1995 00:00:33.565 UTC to January 1, 2005 00:00:33.565 UTC. All data falling within any two minute bin are averaged and from those averages we compute the temperature ratios and plasma betas. We compute the electron-to-proton, $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{j}}$, electron-to-alpha, $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{j}}$, and alpha-to-proton, $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{j}}$ temperature ratios. We also compute the electron ($\beta{\scriptstyle_{e, j}}$), proton ($\beta{\scriptstyle_{p, j}}$), and alpha-particle ($\beta{\scriptstyle_{\alpha, j}}$) plasma betas.
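The two-minute gridding described above amounts to a simple bin average. The sketch below is an illustration only: the function name, the constant sample values, the cadence, and the NaN convention for empty bins are all assumptions for demonstration:

```python
import numpy as np

def bin_average(t, x, t0, dt):
    """Average samples x(t) onto a uniform grid of bin width dt starting
    at t0; bins containing no samples return NaN."""
    idx = np.floor((t - t0) / dt).astype(int)        # bin index per sample
    nbins = int(idx.max()) + 1
    sums = np.bincount(idx, weights=x, minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# ~92 s cadence proton temperatures averaged onto a 120 s (two-minute) grid
t = np.arange(0.0, 1200.0, 92.0)          # sample times (s)
T_p = np.full(t.size, 8.6)                # eV, constant value for illustration
avg = bin_average(t, T_p, 0.0, 120.0)     # one value per two-minute bin
```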
Further details and parameter/symbol definitions can be found in Appendix \ref{app:Definitions}. \indent The SWE FC instrument does not always fully resolve the proton and alpha-particle peaks. Thus, the total numbers of resolved proton and alpha-particle intervals are not the same. Further, the numerical scheme used to parameterize the distributions generally finds the perpendicular components more easily than the parallel ones, resulting in more valid perpendicular intervals than parallel or total ones. Since we do not directly compare perpendicular with parallel or perpendicular with total components, we did not eliminate fit results satisfying \textbf{Constraints 1} and \textbf{2} if only the perpendicular solution was valid. \indent Note that the electron data set does not include burst data often triggered by solar wind transients (e.g., shocks). This partly limits the direct comparison between all periods and those excluding interplanetary (IP) shocks for any parameter depending upon electron moments. \section{Solar Wind Statistics} \label{sec:SolarWindStatistics} \indent In this section we introduce and discuss the one-variable statistics and distributions of $T{\scriptstyle_{s, j}}$, $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$, and $\beta{\scriptstyle_{s, j}}$. In Tables \ref{tab:Temperatures}, \ref{tab:TemperatureRatios}, and \ref{tab:PlasmaBeta} all sections satisfy \textbf{Constraints 1} and \textbf{2}. Tables \ref{tab:Temperatures}, \ref{tab:TemperatureRatios}, and \ref{tab:PlasmaBeta} share a common format, as do Figures \ref{fig:Temperatures}, \ref{fig:TemperatureRatios}, and \ref{fig:PlasmaBeta}. \indent Since none of the parameters have fully Gaussian distributions, we use the median and the lower and upper quartile values rather than the mean and standard deviation in the tables. These values more accurately represent the data and are less biased by tails. The median values are shown in each of the figure panels as well.
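The two-minute gridding and the robust (median/quartile) statistics described above can be sketched as follows. This is an illustrative reconstruction, not the actual 3DP/SWE processing code; the function names and synthetic inputs are ours.

```python
import numpy as np

def bin_two_minute(times, values, t_start, t_end):
    """Average irregularly sampled data onto a uniform two-minute grid.

    times : sample times [s from t_start]; values : measured parameter.
    Bins containing no samples are returned as NaN.
    """
    edges = np.arange(t_start, t_end + 120.0, 120.0)
    idx = np.digitize(times, edges) - 1           # bin index for each sample
    nbins = len(edges) - 1
    sums = np.zeros(nbins)
    counts = np.zeros(nbins)
    valid = (idx >= 0) & (idx < nbins)
    np.add.at(sums, idx[valid], values[valid])    # accumulate per-bin sums
    np.add.at(counts, idx[valid], 1.0)
    return np.where(counts > 0, sums / np.maximum(counts, 1.0), np.nan)

def robust_stats(x):
    """Lower quartile, median, and upper quartile, ignoring NaNs."""
    x = x[np.isfinite(x)]
    return np.percentile(x, 25), np.median(x), np.percentile(x, 75)
```

Temperature ratios and plasma betas would then be formed bin by bin from the averaged series before the statistics are computed, so that all species are compared on the same time grid.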
\begin{deluxetable*}{| l | c | c | c | c | c | c |} \tabletypesize{\small} \tablecaption{Temperature Parameters \label{tab:Temperatures}} \tablehead{\colhead{Temperature [eV]} & \colhead{$X{\scriptstyle_{min}}$}\tablenotemark{a} & \colhead{$X{\scriptstyle_{max}}$}\tablenotemark{b} & \colhead{$\bar{\mathbf{X}}$}\tablenotemark{c} & \colhead{$X{\scriptstyle_{25\%}}$}\tablenotemark{d} & \colhead{$X{\scriptstyle_{75\%}}$}\tablenotemark{e} & \colhead{N}\tablenotemark{f}} \startdata \multicolumn{7}{ |c| }{\textbf{All data in table satisfies Constraints 1 and 2}} \\ \multicolumn{7}{ |c| }{\textbf{All Good Time Periods}} \\ \hline $T{\scriptstyle_{e, tot}}$ & 2.43 & 58.8 & 11.9 & 10.0 & 14.0 & 820057 \\ $T{\scriptstyle_{e, \perp}}$ & 2.29 & 77.2 & 12.8 & 10.5 & 15.4 & 820057 \\ $T{\scriptstyle_{e, \parallel}}$ & 2.49 & 58.6 & 11.4 & 9.7 & 13.4 & 820057 \\ \hline $T{\scriptstyle_{p, tot}}$ & 0.16 & 940.6 & 8.6 & 4.7 & 15.9 & 1095314 \\ $T{\scriptstyle_{p, \perp}}$ & 0.05 & 1192.7 & 7.5 & 4.2 & 14.7 & 1124159 \\ $T{\scriptstyle_{p, \parallel}}$ & 0.05 & 1196.1 & 10.7 & 5.2 & 20.0 & 1103884 \\ \hline $T{\scriptstyle_{\alpha, tot}}$ & 0.45 & 964.7 & 10.8 & 4.9 & 31.6 & 476255 \\ $T{\scriptstyle_{\alpha, \perp}}$ & 0.20 & 1198.6 & 21.4 & 6.5 & 58.0 & 883543 \\ $T{\scriptstyle_{\alpha, \parallel}}$ & 0.23 & 1192.2 & 14.8 & 5.4 & 50.7 & 564208 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 3: Time periods excluding IP shocks}} \\ \hline $T{\scriptstyle_{e, tot}}$ & 2.71 & 52.7 & 11.9 & 10.0 & 14.0 & 760815 \\ $T{\scriptstyle_{e, \perp}}$ & 2.59 & 65.2 & 12.8 & 10.5 & 15.4 & 760815 \\ $T{\scriptstyle_{e, \parallel}}$ & 2.74 & 46.9 & 11.3 & 9.7 & 13.3 & 760815 \\ \hline $T{\scriptstyle_{p, tot}}$ & 0.16 & 865.8 & 8.6 & 4.7 & 15.8 & 1001957 \\ $T{\scriptstyle_{p, \perp}}$ & 0.06 & 1192.7 & 7.5 & 4.2 & 14.6 & 1028552 \\ $T{\scriptstyle_{p, \parallel}}$ & 0.05 & 1173.6 & 10.8 & 5.2 & 19.9 & 1009757 \\ \hline $T{\scriptstyle_{\alpha, tot}}$ & 0.49 & 923.3 & 10.5 & 4.8 & 31.1 & 
428536 \\ $T{\scriptstyle_{\alpha, \perp}}$ & 0.20 & 1198.6 & 21.3 & 6.4 & 57.5 & 804268 \\ $T{\scriptstyle_{\alpha, \parallel}}$ & 0.23 & 1192.2 & 14.9 & 5.4 & 51.3 & 512190 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 4: Time periods during ICMEs}} \\ \hline $T{\scriptstyle_{e, tot}}$ & 2.43 & 52.4 & 10.4 & 8.2 & 13.2 & 29389 \\ $T{\scriptstyle_{e, \perp}}$ & 2.29 & 77.2 & 11.1 & 8.5 & 14.5 & 29389 \\ $T{\scriptstyle_{e, \parallel}}$ & 2.49 & 52.0 & 10.0 & 7.9 & 12.4 & 29389 \\ \hline $T{\scriptstyle_{p, tot}}$ & 0.39 & 549.5 & 4.2 & 2.5 & 7.5 & 72530 \\ $T{\scriptstyle_{p, \perp}}$ & 0.06 & 719.6 & 3.9 & 2.3 & 6.9 & 73349 \\ $T{\scriptstyle_{p, \parallel}}$ & 0.05 & 1196.1 & 4.4 & 2.5 & 8.9 & 73149 \\ \hline $T{\scriptstyle_{\alpha, tot}}$ & 0.49 & 525.1 & 5.8 & 3.1 & 14.5 & 43343 \\ $T{\scriptstyle_{\alpha, \perp}}$ & 0.25 & 1169.4 & 8.4 & 3.7 & 23.8 & 63344 \\ $T{\scriptstyle_{\alpha, \parallel}}$ & 0.23 & 1135.5 & 5.8 & 3.0 & 14.2 & 45878 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 5: Time periods excluding ICMEs}} \\ \hline $T{\scriptstyle_{e, tot}}$ & 2.71 & 58.8 & 11.9 & 10.1 & 14.0 & 790668 \\ $T{\scriptstyle_{e, \perp}}$ & 2.59 & 65.2 & 12.9 & 10.5 & 15.5 & 790668 \\ $T{\scriptstyle_{e, \parallel}}$ & 2.74 & 58.6 & 11.4 & 9.7 & 13.4 & 790668 \\ \hline $T{\scriptstyle_{p, tot}}$ & 0.16 & 940.6 & 9.0 & 4.9 & 16.4 & 1022784 \\ $T{\scriptstyle_{p, \perp}}$ & 0.05 & 1192.7 & 7.9 & 4.4 & 15.2 & 1050810 \\ $T{\scriptstyle_{p, \parallel}}$ & 0.05 & 1128.5 & 11.3 & 5.6 & 20.5 & 1030735 \\ \hline $T{\scriptstyle_{\alpha, tot}}$ & 0.45 & 964.7 & 11.6 & 5.2 & 33.4 & 432912 \\ $T{\scriptstyle_{\alpha, \perp}}$ & 0.20 & 1198.6 & 23.2 & 6.9 & 60.3 & 820199 \\ $T{\scriptstyle_{\alpha, \parallel}}$ & 0.23 & 1192.2 & 16.6 & 5.8 & 53.9 & 518330 \\ \hline \enddata \tablenotetext{a}{minimum} \tablenotetext{b}{maximum} \tablenotetext{c}{median} \tablenotetext{d}{lower quartile} \tablenotetext{e}{upper quartile} \tablenotetext{f}{number of finite measurements} 
\tablecomments{For symbol definitions, see Appendix \ref{app:Definitions}.} \end{deluxetable*} \subsection{Plasma Temperatures} \label{subsec:PlasmaTemperatures} \indent In this section we introduce and discuss the one-variable statistics and distributions of $T{\scriptstyle_{s, j}}$ for electrons ($s$ $=$ $e$), protons ($s$ $=$ $p$), and alpha-particles ($s$ $=$ $\alpha$). Note that the solar wind is a non-equilibrium, weakly collisional, kinetic gas. Temperatures in such a gas are more representative of the average kinetic energy in the species bulk flow rest frame than of the thermodynamic variable. Thus, we report temperatures in units of eV rather than K. One can easily convert to kelvin by multiplying any number in eV by $\sim$11,604 K eV$^{-1}$. \indent Table \ref{tab:Temperatures} and Figure \ref{fig:Temperatures} show the one-variable statistics and histograms of $T{\scriptstyle_{s, j}}$ under the four solar wind categories defined in Section \ref{sec:DefinitionsDataSets}. The first section of Table \ref{tab:Temperatures} shows the one-variable statistics for all good data. The ranges for $T{\scriptstyle_{e, j}}$ are much more restricted than for $T{\scriptstyle_{p, j}}$ and $T{\scriptstyle_{\alpha, j}}$. The mean values (see Supplemental ASCII file) for $T{\scriptstyle_{e, j}}$ and $T{\scriptstyle_{p, j}}$ are similar, but the median values for $T{\scriptstyle_{e, j}}$ are higher than those for $T{\scriptstyle_{p, j}}$. Interestingly, a comparison between the first two sections shows little difference, i.e., the exclusion of IP shocks does not appear to affect the mean or median of any of the temperatures, which is unexpected as ions are often more strongly heated than electrons \citep[][]{wilsoniii10a}. What does appear to affect the temperatures is the presence of the so-called magnetic obstacles (MOs) associated with ICMEs \citep[e.g.,][]{nieveschinchilla18a}.
Although this is somewhat expected, as the time periods with MOs are partly defined by low $T{\scriptstyle_{p, tot}}$ and/or low $\beta{\scriptstyle_{p, tot}}$, requirements are rarely placed on the alpha-particle parameters, yet they show distinct differences as well. \begin{figure*} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=170mm]{Figure1}} \caption{Temperatures [eV] for different particle species in each column and for the different constraints (i.e., rows) listed in Table \ref{tab:Temperatures}. In each panel, there are three color-coded histograms for the different components defined as follows: total (magenta); parallel (blue); and perpendicular (green). The color-coded vertical lines are the median values (values found in Table \ref{tab:Temperatures}) of the distributions for the corresponding color-coded histograms.} \label{fig:Temperatures} \end{figure*} \indent Figure \ref{fig:Temperatures} shows color-coded histograms of the total (magenta), parallel (blue), and perpendicular (green) components of $T{\scriptstyle_{s, j}}$ for electrons (first column), protons (second column), and alpha-particles (third column). Overlaid are color-coded vertical lines showing the median values for each component, under each set of solar wind conditions (separated by rows) listed in Table \ref{tab:Temperatures}. Note that the rows in Figure \ref{fig:Temperatures} correspond to the sections in Table \ref{tab:Temperatures}. One can see that $T{\scriptstyle_{p, j}}$ and $T{\scriptstyle_{\alpha, j}}$ have much broader distributions than $T{\scriptstyle_{e, j}}$ for all $j$ under all conditions. It is not clear why $T{\scriptstyle_{\alpha, j}}$ exhibits a double-peaked distribution at $\sim$4 and $\sim$70 eV for all conditions except during MOs.
We suspect this is related to the interpretation of \citet[][]{maruca13b}, wherein the plasma leaves the solar corona in a super-mass-proportional temperature state and slowly relaxes as it propagates to 1 AU, i.e., the plasma starts with $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $>$ 4 in the solar corona. \indent The majority of temperatures were in the range $\sim$1--50 eV with $\sim$98\% satisfying 5 eV $<$ $T{\scriptstyle_{e, tot}}$ $<$ 20 eV, $\sim$92\% satisfying 1 eV $<$ $T{\scriptstyle_{p, tot}}$ $<$ 30 eV, and $\sim$86\% satisfying 1 eV $<$ $T{\scriptstyle_{\alpha, tot}}$ $<$ 50 eV. The extrema for $T{\scriptstyle_{s, j}}$ are in the minority because the distributions are centrally concentrated. For instance, only $\sim$0.3\% satisfy $T{\scriptstyle_{e, tot}}$ $<$ 5 eV, in contrast to $\sim$28\% and $\sim$26\% satisfying $T{\scriptstyle_{p, tot}}$ $<$ 5 eV and $T{\scriptstyle_{\alpha, tot}}$ $<$ 5 eV, respectively. In the opposite extreme, only $\sim$1.5\% satisfy $T{\scriptstyle_{e, tot}}$ $>$ 20 eV, in contrast to $\sim$2.1\% and $\sim$3.1\% satisfying $T{\scriptstyle_{p, tot}}$ $>$ 50 eV and $T{\scriptstyle_{\alpha, tot}}$ $>$ 100 eV, respectively. \subsection{Temperature Ratios} \label{subsec:TemperatureRatios} \begin{figure*} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=170mm]{Figure2}} \caption{Temperature ratios [unitless] for different particle species in each column and for the different constraints (i.e., rows) listed in Table \ref{tab:TemperatureRatios}. In each panel, there are three color-coded histograms for the different components defined as follows: total (magenta); parallel (blue); and perpendicular (green).
The color-coded vertical lines are the median values (values found in Table \ref{tab:TemperatureRatios}) of the distributions for the corresponding color-coded histograms.} \label{fig:TemperatureRatios} \end{figure*} \indent In this section we introduce and discuss the one-variable statistics and distributions of $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$ for electron-to-proton ($s'$ $=$ $e$ and $s$ $=$ $p$), electron-to-alpha-particle ($s'$ $=$ $e$ and $s$ $=$ $\alpha$), and alpha-to-proton ($s'$ $=$ $\alpha$ and $s$ $=$ $p$) ratios. \begin{deluxetable*}{| l | c | c | c | c | c | c |} \tabletypesize{\small} \tablecaption{Temperature Ratio Parameters \label{tab:TemperatureRatios}} \tablehead{\colhead{Temperature Ratios [N/A]} & \colhead{$X{\scriptstyle_{min}}$} & \colhead{$X{\scriptstyle_{max}}$} & \colhead{$\bar{\mathbf{X}}$} & \colhead{$X{\scriptstyle_{25\%}}$} & \colhead{$X{\scriptstyle_{75\%}}$} & \colhead{N}} \startdata \multicolumn{7}{ |c| }{\textbf{All data in table satisfies Constraints 1 and 2}} \\ \multicolumn{7}{ |c| }{\textbf{All Good Time Periods}} \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.07 & 25.3 & 1.27 & 0.78 & 2.14 & 445801 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.03 & 184.6 & 1.51 & 0.90 & 2.49 & 454588 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 161.1 & 1.01 & 0.60 & 1.83 & 445732 \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ & 0.02 & 22.9 & 0.82 & 0.32 & 1.78 & 193704 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\perp}}$ & 0.01 & 42.3 & 0.48 & 0.21 & 1.41 & 393745 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\parallel}}$ & 0.009 & 45.8 & 0.64 & 0.23 & 1.62 & 207313 \\ \hline $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.02 &
19.0 & 1.94 & 1.35 & 3.49 & 476255 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.004 & 200.0 & 3.16 & 1.63 & 4.79 & 883427 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 271.7 & 2.11 & 1.33 & 3.87 & 564072 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 3: Time periods excluding IP shocks}} \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.07 & 25.3 & 1.27 & 0.78 & 2.14 & 412947 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.06 & 184.6 & 1.52 & 0.91 & 2.51 & 421080 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 161.1 & 1.00 & 0.60 & 1.82 & 412917 \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ & 0.02 & 22.9 & 0.82 & 0.32 & 1.80 & 177265 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\perp}}$ & 0.01 & 42.3 & 0.48 & 0.21 & 1.42 & 363851 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\parallel}}$ & 0.009 & 45.8 & 0.64 & 0.23 & 1.63 & 190413 \\ \hline $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.02 & 19.0 & 1.91 & 1.33 & 3.48 & 428536 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.01 & 200.0 & 3.16 & 1.62 & 4.77 & 804166 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 271.7 & 2.12 & 1.32 & 3.92 & 512077 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 4: Time periods during ICMEs}} \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.09 & 25.3 & 2.06 & 1.27 & 3.33 & 18203 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.1 & 79.4 & 2.35 & 1.46 & 3.76 & 18356 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.03 & 161.1 
& 1.87 & 1.09 & 3.13 & 18186 \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ & 0.04 & 22.9 & 1.45 & 0.60 & 2.65 & 10077 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\perp}}$ & 0.02 & 42.3 & 1.02 & 0.38 & 2.35 & 16643 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 30.0 & 1.34 & 0.58 & 2.55 & 10290 \\ \hline $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.02 & 18.3 & 1.78 & 1.34 & 2.80 & 43343 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.004 & 79.8 & 2.27 & 1.46 & 4.19 & 63318 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.04 & 40.3 & 1.66 & 1.22 & 2.58 & 45837 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 5: Time periods excluding ICMEs}} \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.07 & 23.2 & 1.25 & 0.77 & 2.09 & 427598 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.03 & 184.6 & 1.49 & 0.89 & 2.45 & 436232 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 129.3 & 0.99 & 0.59 & 1.77 & 427546 \\ \hline $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ & 0.02 & 17.2 & 0.79 & 0.32 & 1.74 & 183627 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\perp}}$ & 0.01 & 37.6 & 0.47 & 0.21 & 1.37 & 377102 \\ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\parallel}}$ & 0.009 & 45.8 & 0.61 & 0.22 & 1.57 & 197023 \\ \hline $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ & 0.2 & 19.0 & 1.96 & 1.35 & 3.55 & 432912 \\ $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ & 0.01 & 200.0 & 3.24 & 1.65 & 4.81 & 820109 \\ 
$\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ & 0.02 & 271.7 & 2.18 & 1.34 & 3.97 & 518235 \\ \hline \enddata \tablecomments{Definitions/Symbols are the same as in Table \ref{tab:Temperatures}. For symbol definitions, see Appendix \ref{app:Definitions}.} \end{deluxetable*} \indent Table \ref{tab:TemperatureRatios} and Figure \ref{fig:TemperatureRatios} show the one-variable statistics of $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$ under the four solar wind categories defined in Section \ref{sec:DefinitionsDataSets}. Similar to $T{\scriptstyle_{s, j}}$, the only solar wind condition that appears to have a significant effect on any of the $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$ ratios is the presence of MOs. The mean and median values for $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$ are larger during these periods, consistent with MOs showing little effect on $T{\scriptstyle_{e, j}}$ but strongly depressing $T{\scriptstyle_{p, j}}$ and $T{\scriptstyle_{\alpha, j}}$. The distinctive double-peak in $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{j}}$ in the third column of Figure \ref{fig:TemperatureRatios} is consistent with previous observations \citep[e.g.,][]{kasper06a, kasper08a}. The first peak near unity would correspond to thermal equilibrium between the two ion species in a collisionally mediated gas. The second peak near four would correspond to equal thermal speeds, but we suspect this results from super-mass-proportional heating in the corona that has not yet thermalized \citep[][]{maruca13b}.
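The association of the second peak with equal thermal speeds follows directly from $v{\scriptstyle_{Ts}} \propto \sqrt{T_{s}/m_{s}}$: setting $v{\scriptstyle_{T\alpha}}$ $=$ $v{\scriptstyle_{Tp}}$ gives $T_{\alpha}/T_{p}$ $=$ $m_{\alpha}/m_{p}$ $\approx$ 4. A minimal numerical check (the particle masses are CODATA values):

```python
# Thermal (most probable) speed: v_T = sqrt(2 k_B T / m).
# Equal thermal speeds therefore imply T_alpha / T_p = m_alpha / m_p.
m_p = 1.67262192e-27      # proton mass [kg]
m_alpha = 6.64465776e-27  # alpha-particle (He-4 nucleus) mass [kg]
ratio = m_alpha / m_p
print(round(ratio, 2))    # ~3.97, i.e., the "peak near four"
```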
\indent In contrast, the electrons almost never exhibit mass-proportional temperatures with either the protons or the alpha-particles. Nonetheless, $\sim$49\% satisfy 0.5 $<$ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $<$ 1.5, while $\sim$32\% of $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ and $\sim$34\% of $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ satisfy the same criteria. Thus, a sizable fraction of the time the electrons, protons, and alpha-particles are near thermal equilibrium with each other. The larger fraction for the electron-to-proton ratio could result from electron-proton Coulomb collision rates being higher than the proton-alpha and electron-alpha rates in the solar wind \citep[e.g.,][]{spitzer53a, borovsky11a} (e.g., see Section \ref{subsec:CollisionRates} and Appendix \ref{app:CoulombCollisions}), or it may be a reflection of the primordial solar wind near the sun \citep[e.g.,][]{kasper17b}. It is important to note that the collision rate is not the heating or heat-exchange rate. An interesting aside is that $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ $>$ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ is satisfied only $\sim$17\% of the time, while $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\perp}}$ $>$ $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{\parallel}}$ is satisfied $\sim$32\% of the time. However, the median values for $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\perp}}$ are larger than those for $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{p}}\right){\scriptstyle_{\parallel}}$ ($s'$ $=$ $e$ or $\alpha$) for all four solar wind categories in Table \ref{tab:TemperatureRatios}. The opposite is true for the electron-to-alpha-particle ratios.
What may not be captured here are the energy-dependent interactions between waves/turbulence and charged particles. For instance, it is known that electrostatic ion-acoustic waves can generate anisotropic, energetic ion tails while simultaneously generating anisotropic heating of the core electrons \citep[e.g.,][]{dum78a, dum78b}. The net effects may not significantly alter the total temperatures of either species, as seen in previous studies \citep[e.g.,][]{wilsoniii10a}. However, a detailed investigation of the heating mechanisms and subcomponents of each species is beyond the scope of this work. \indent The double-peak in $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{j}}$ in the second column of Figure \ref{fig:TemperatureRatios} has local maxima near $\sim$0.2 and $\sim$2.0. This appears to be a consequence of the double-peaked $T{\scriptstyle_{\alpha, j}}$ distribution spanning the single-peaked $T{\scriptstyle_{e, j}}$ distribution seen in Figure \ref{fig:Temperatures}. \indent The majority of data were in the range $\sim$0.5--5.0 with $\sim$88\% for $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$, $\sim$61\% for $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$, and $\sim$92\% for $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$. Similar to $T{\scriptstyle_{s, j}}$, the extrema for $\left(T{\scriptstyle_{s'}}/T{\scriptstyle_{s}}\right){\scriptstyle_{j}}$ are in the minority. For instance, only $\sim$9.7\% satisfy $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $<$ 0.5, $\sim$1.5\% satisfy $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ $<$ 0.1, and $\sim$4.4\% satisfy $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $<$ 1.0.
On the opposite side, $\sim$2.3\%, $\sim$1.7\%, and $\sim$8.0\% satisfy $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $>$ 5, $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ $>$ 5, and $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $>$ 5, respectively. \begin{figure*} \centering {\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=170mm]{Figure3}} \caption{Plasma betas [unitless] for different particle species in each column and for the different constraints (i.e., rows) listed in Table \ref{tab:PlasmaBeta}. In each panel, there are three color-coded histograms for the different components defined as follows: total (magenta); parallel (blue); and perpendicular (green). The color-coded vertical lines are the median values (values found in Table \ref{tab:PlasmaBeta}) of the distributions for the corresponding color-coded histograms.} \label{fig:PlasmaBeta} \end{figure*} \subsection{Plasma Betas} \label{subsec:PlasmaBetas} \indent In this section we introduce and discuss the one-variable statistics and distributions of $\beta{\scriptstyle_{s, j}}$ for electrons ($s$ $=$ $e$), protons ($s$ $=$ $p$), and alpha-particles ($s$ $=$ $\alpha$). \indent Table \ref{tab:PlasmaBeta} and Figure \ref{fig:PlasmaBeta} show the one-variable statistics of $\beta{\scriptstyle_{s, j}}$ under the four solar wind categories defined in Section \ref{sec:DefinitionsDataSets}. Both $\beta{\scriptstyle_{e, j}}$ and $\beta{\scriptstyle_{p, j}}$ have median values near unity under all conditions except during MOs. The smaller median values of $\beta{\scriptstyle_{\alpha, j}}$ are likely dominated by the consistently smaller $n{\scriptstyle_{\alpha}}$ values in the solar wind. Again, the distributions of each $\beta{\scriptstyle_{s, j}}$ look the same for all categories except for periods during MOs, where the values are statistically smaller for all species (by roughly an order of magnitude for protons and alpha-particles).
Curiously, the $\beta{\scriptstyle_{e, j}}$ are much lower during MOs despite the $T{\scriptstyle_{e, j}}$ values being roughly the same under all four solar wind categories, implying smaller $n{\scriptstyle_{e}}$ during MOs. \indent Note that care should be taken when reading the range of possible $\beta{\scriptstyle_{s, j}}$ under all conditions (i.e., first section in Table \ref{tab:PlasmaBeta}). Although our stringent criteria outlined in Section \ref{sec:DefinitionsDataSets} were intended to remove unphysical results, such extremes in $\beta{\scriptstyle_{s, j}}$ should be viewed with additional scrutiny. For instance, only $\sim$0.5\% satisfy $\beta{\scriptstyle_{e, tot}}$ $<$ 0.1, $\sim$4.9\% satisfy $\beta{\scriptstyle_{p, tot}}$ $<$ 0.1, and $\sim$0.6\% satisfy $\beta{\scriptstyle_{\alpha, tot}}$ $<$ 0.001. At the opposite extreme, only $\sim$2.6\% satisfy $\beta{\scriptstyle_{e, tot}}$ $>$ 10, $\sim$1.5\% satisfy $\beta{\scriptstyle_{p, tot}}$ $>$ 10, and $\sim$1.8\% satisfy $\beta{\scriptstyle_{\alpha, tot}}$ $>$ 1. The majority of electron and proton betas were in the range $\sim$0.5--5.0 with $\sim$78\% satisfying 0.5 $<$ $\beta{\scriptstyle_{e, tot}}$ $<$ 5.0 and $\sim$73\% satisfying 0.5 $<$ $\beta{\scriptstyle_{p, tot}}$ $<$ 5.0, while $\sim$78\% of the alpha-particle betas satisfied 0.01 $<$ $\beta{\scriptstyle_{\alpha, tot}}$ $<$ 0.5.
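For reference, the plasma beta used throughout is the standard ratio of thermal to magnetic pressure, $\beta_{s}$ $=$ $2 \mu_{0} n_{s} k_{B} T_{s} / B^{2}$ (see Appendix \ref{app:Definitions} for the formal definitions). A minimal sketch follows; the near-1 AU input values are illustrative, not drawn from the dataset.

```python
import math

MU0 = 4.0e-7 * math.pi    # vacuum permeability [H/m]
EV = 1.602176634e-19      # 1 eV in joules

def plasma_beta(n_cc, T_eV, B_nT):
    """beta_s = 2 mu0 n_s k_B T_s / B^2 (n in cm^-3, T in eV, B in nT)."""
    n = n_cc * 1.0e6                              # density -> m^-3
    p_th = n * T_eV * EV                          # thermal pressure [Pa]
    p_mag = (B_nT * 1.0e-9) ** 2 / (2.0 * MU0)    # magnetic pressure [Pa]
    return p_th / p_mag

# Illustrative slow-wind values: n_p ~ 7 cm^-3, T_p ~ 8.6 eV, B ~ 6 nT
print(plasma_beta(7.0, 8.6, 6.0))   # order unity, as in Table 3
```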
\begin{deluxetable*}{| l | c | c | c | c | c | c |} \tabletypesize{\small} \tablecaption{Plasma Beta Parameters \label{tab:PlasmaBeta}} \tablehead{\colhead{Plasma Betas [N/A]} & \colhead{$X{\scriptstyle_{min}}$} & \colhead{$X{\scriptstyle_{max}}$} & \colhead{$\bar{\mathbf{X}}$} & \colhead{$X{\scriptstyle_{25\%}}$} & \colhead{$X{\scriptstyle_{75\%}}$} & \colhead{N}} \startdata \multicolumn{7}{ |c| }{\textbf{All data in table satisfies Constraints 1 and 2}} \\ \multicolumn{7}{ |c| }{\textbf{All Good Time Periods}} \\ \hline $\beta{\scriptstyle_{e, tot}}$ & 0.006 & 8870 & 1.09 & 0.64 & 1.99 & 820056 \\ $\beta{\scriptstyle_{e, \perp}}$ & 0.007 & 8914 & 1.17 & 0.71 & 2.06 & 820056 \\ $\beta{\scriptstyle_{e, \parallel}}$ & 0.005 & 8848 & 1.05 & 0.60 & 1.96 & 820056 \\ \hline $\beta{\scriptstyle_{p, tot}}$ & 0.001 & 4568 & 1.05 & 0.54 & 1.77 & 1095171 \\ $\beta{\scriptstyle_{p, \perp}}$ & $4 \times 10^{-5}$ & 4391 & 0.92 & 0.47 & 1.62 & 1124001 \\ $\beta{\scriptstyle_{p, \parallel}}$ & $6 \times 10^{-5}$ & 4923 & 1.29 & 0.62 & 2.18 & 1103741 \\ \hline $\beta{\scriptstyle_{\alpha, tot}}$ & $5 \times 10^{-5}$ & 612 & 0.05 & 0.02 & 0.17 & 476129 \\ $\beta{\scriptstyle_{\alpha, \perp}}$ & $5 \times 10^{-5}$ & 594 & 0.08 & 0.02 & 0.22 & 883409 \\ $\beta{\scriptstyle_{\alpha, \parallel}}$ & $2 \times 10^{-5}$ & 647 & 0.07 & 0.02 & 0.27 & 564081 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 3: Time periods excluding IP shocks}} \\ \hline $\beta{\scriptstyle_{e, tot}}$ & 0.006 & 8870 & 1.11 & 0.65 & 2.01 & 760814 \\ $\beta{\scriptstyle_{e, \perp}}$ & 0.007 & 8914 & 1.19 & 0.73 & 2.07 & 760814 \\ $\beta{\scriptstyle_{e, \parallel}}$ & 0.005 & 8848 & 1.07 & 0.61 & 1.98 & 760814 \\ \hline $\beta{\scriptstyle_{p, tot}}$ & 0.001 & 4568 & 1.07 & 0.57 & 1.78 & 1001816 \\ $\beta{\scriptstyle_{p, \perp}}$ & $8 \times 10^{-5}$ & 4391 & 0.94 & 0.49 & 1.63 & 1028396 \\ $\beta{\scriptstyle_{p, \parallel}}$ & 0.0001 & 4923 & 1.32 & 0.66 & 2.20 & 1009616 \\ \hline 
$\beta{\scriptstyle_{\alpha, tot}}$ & $5 \times 10^{-5}$ & 295 & 0.05 & 0.02 & 0.18 & 428411 \\ $\beta{\scriptstyle_{\alpha, \perp}}$ & $8 \times 10^{-5}$ & 373 & 0.08 & 0.02 & 0.22 & 804136 \\ $\beta{\scriptstyle_{\alpha, \parallel}}$ & $2 \times 10^{-5}$ & 415 & 0.07 & 0.02 & 0.28 & 512064 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 4: Time periods during ICMEs}} \\ \hline $\beta{\scriptstyle_{e, tot}}$ & 0.006 & 37 & 0.40 & 0.23 & 0.62 & 29389 \\ $\beta{\scriptstyle_{e, \perp}}$ & 0.007 & 38 & 0.42 & 0.25 & 0.66 & 29389 \\ $\beta{\scriptstyle_{e, \parallel}}$ & 0.005 & 36 & 0.38 & 0.22 & 0.60 & 29389 \\ \hline $\beta{\scriptstyle_{p, tot}}$ & 0.001 & 159 & 0.13 & 0.05 & 0.31 & 72530 \\ $\beta{\scriptstyle_{p, \perp}}$ & $4 \times 10^{-5}$ & 157 & 0.12 & 0.04 & 0.29 & 73349 \\ $\beta{\scriptstyle_{p, \parallel}}$ & $6 \times 10^{-5}$ & 164 & 0.13 & 0.05 & 0.35 & 73149 \\ \hline $\beta{\scriptstyle_{\alpha, tot}}$ & $5 \times 10^{-5}$ & 29 & 0.01 & 0.00 & 0.02 & 43343 \\ $\beta{\scriptstyle_{\alpha, \perp}}$ & $5 \times 10^{-5}$ & 49 & 0.01 & 0.00 & 0.03 & 63344 \\ $\beta{\scriptstyle_{\alpha, \parallel}}$ & $2 \times 10^{-5}$ & 32 & 0.01 & 0.00 & 0.02 & 45878 \\ \hline \multicolumn{7}{ |c| }{\textbf{Constraint 5: Time periods excluding ICMEs}} \\ \hline $\beta{\scriptstyle_{e, tot}}$ & 0.02 & 8870 & 1.13 & 0.67 & 2.04 & 790667 \\ $\beta{\scriptstyle_{e, \perp}}$ & 0.03 & 8914 & 1.21 & 0.74 & 2.10 & 790667 \\ $\beta{\scriptstyle_{e, \parallel}}$ & 0.02 & 8848 & 1.09 & 0.63 & 2.01 & 790667 \\ \hline $\beta{\scriptstyle_{p, tot}}$ & 0.002 & 4568 & 1.11 & 0.63 & 1.85 & 1022641 \\ $\beta{\scriptstyle_{p, \perp}}$ & 0.0003 & 4391 & 0.98 & 0.54 & 1.69 & 1050652 \\ $\beta{\scriptstyle_{p, \parallel}}$ & 0.0001 & 4923 & 1.38 & 0.73 & 2.27 & 1030592 \\ \hline $\beta{\scriptstyle_{\alpha, tot}}$ & 0.0002 & 612 & 0.06 & 0.02 & 0.19 & 432786 \\ $\beta{\scriptstyle_{\alpha, \perp}}$ & $5 \times 10^{-5}$ & 594 & 0.09 & 0.03 & 0.23 & 820065 \\ 
$\beta{\scriptstyle_{\alpha, \parallel}}$ & $6 \times 10^{-5}$ & 647 & 0.09 & 0.02 & 0.29 & 518203 \\ \hline \enddata \tablecomments{Definitions/Symbols are the same as in Table \ref{tab:Temperatures}. For symbol definitions, see Appendix \ref{app:Definitions}.} \end{deluxetable*} \subsection{Collision Rates} \label{subsec:CollisionRates} \indent In this section we introduce and discuss the one-variable statistics and distributions of the classical binary Coulomb collision frequency \citep[e.g.,][]{krall73a, schunk75a, schunk77a, spitzer53a} for a 90$^{\circ}$ deflection angle between particles of species $s$ and $s'$ (see Appendix \ref{app:CoulombCollisions} for definitions and details). We also estimate the effective wave-particle interaction rates between electrostatic ion-acoustic waves and particles \citep[e.g., see][and references therein]{wilsoniii14a, wilsoniii14b}. We use electrostatic ion-acoustic waves, instead of other modes, for four reasons: they are common in the solar wind \citep[e.g.,][]{gurnett79b}; they interact with both ions and electrons \citep[e.g.,][]{dum78a, dum78b}; they allow comparison with previous work \citep[e.g.,][]{wilsoniii07a, wilsoniii14b}; and an analytical expression for their collision rate exists in terms of measurable parameters. The purpose is to compare these rates with the Coulomb collision rates to determine whether wave-particle interactions should be considered in modeling the evolution of the solar wind from the sun to the Earth.
\indent We calculated the particle-particle and wave-particle collision rates for all data satisfying \textbf{Constraints 1} and \textbf{2}, finding the following minimum-to-maximum ranges: \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt] \item[] \textbf{Range of Values} \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt] \item $\nu{\scriptstyle_{ee}}$ $\sim$ $1 \times 10^{-8}$ -- $1 \times 10^{-4}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{pp}}$ $\sim$ $6 \times 10^{-12}$ -- $3 \times 10^{-5}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{\alpha \alpha}}$ $\sim$ $1 \times 10^{-11}$ -- $8 \times 10^{-6}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{ep}}$ $\sim$ $4 \times 10^{-8}$ -- $1 \times 10^{-4}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{e \alpha}}$ $\sim$ $2 \times 10^{-8}$ -- $5 \times 10^{-5}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{p \alpha}}$ $\sim$ $3 \times 10^{-11}$ -- $3 \times 10^{-6}$ \# $s^{-1}$ \end{itemize} \item[] \textbf{Median Values} \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt] \item $\nu{\scriptstyle_{ee}}$ $\sim$ $2 \times 10^{-6}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{pp}}$ $\sim$ $1 \times 10^{-7}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{\alpha \alpha}}$ $\sim$ $2 \times 10^{-8}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{ep}}$ $\sim$ $3 \times 10^{-6}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{e \alpha}}$ $\sim$ $4 \times 10^{-7}$ \# $s^{-1}$ \item $\nu{\scriptstyle_{p \alpha}}$ $\sim$ $1 \times 10^{-8}$ \# $s^{-1}$ \end{itemize} \end{itemize} \indent If we compare the median values for different time periods, we find that the largest values of all the collision rates defined by Equations \ref{eq:SHcollfreq_0}--\ref{eq:SHcollfreq_5} occur during MOs (i.e., \textbf{Constraint 4}). See the Supplemental ASCII files for the full statistical results.
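As a rough cross-check of the magnitudes above, the electron-electron rate can be estimated from the NRL Plasma Formulary expression $\nu{\scriptstyle_{ee}}$ $\approx$ $2.91 \times 10^{-6}\, n_{e}\, \ln\Lambda\, T_{e}^{-3/2}$ $s^{-1}$ (with $n_{e}$ in cm$^{-3}$ and $T_{e}$ in eV). This differs by an order-unity factor from the 90$^{\circ}$-deflection rates of Equations \ref{eq:SHcollfreq_0}--\ref{eq:SHcollfreq_5}, so it is a sketch only; the input values below are illustrative, not medians from the dataset.

```python
def nu_ee(n_e_cc, T_e_eV, coulomb_log=25.0):
    """Electron-electron collision frequency [1/s] (NRL Plasma Formulary form).

    n_e_cc : electron density [cm^-3]; T_e_eV : electron temperature [eV].
    """
    return 2.91e-6 * n_e_cc * coulomb_log * T_e_eV ** -1.5

# Illustrative 1 AU values: n_e ~ 7 cm^-3, T_e ~ 12 eV
rate = nu_ee(7.0, 12.0)
print(rate)   # ~1e-5 s^-1, i.e., a collisional timescale of order a day
```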
\indent If we compare these rates to the quasi-linear approximation for the effective collision rates \citep[e.g., see Equation \ref{eq:WPIcollfreq_0} and][and references therein]{wilsoniii14a, wilsoniii14b} between ion-acoustic waves and particles, $\nu{\scriptstyle_{iaw}}$, we find $\nu{\scriptstyle_{iaw}}$ $\sim$ $6 \times 10^{-5}$ -- $3 \times 10^{-3}$ \# $s^{-1}$ with a median of $\sim 5 \times 10^{-4}$ \# $s^{-1}$. Here we used the typical amplitudes observed in the solar wind of $\sim$0.1 mV/m \citep[e.g.,][]{gurnett79b}. Note that for every order of magnitude change in the electric field amplitude, $\nu{\scriptstyle_{iaw}}$ will change by two orders of magnitude. We should also note that Equation \ref{eq:WPIcollfreq_0} is known to give rates that are $\sim$2--3 orders of magnitude too small \citep[e.g.,][]{petkaki08a, yoon06a}. We still use this equation, however, because it always underestimates the net effect of the waves, so it can serve as a lower bound for wave-particle interaction rates.
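Because the quasi-linear rate scales with the square of the wave electric field amplitude, the amplitude sensitivity noted above can be sketched directly; the baseline numbers below are the medians quoted in the text.

```python
def nu_iaw(E_mVm, E0_mVm=0.1, nu0_per_s=5e-4):
    """Effective ion-acoustic collision rate, rescaled from the quoted
    median (~5e-4 s^-1 at ~0.1 mV/m) using nu_iaw ~ |E|^2.  One order
    of magnitude in amplitude changes the rate by two orders."""
    return nu0_per_s * (E_mVm / E0_mVm) ** 2

# 10x larger wave amplitude -> 100x larger effective collision rate
boost = nu_iaw(1.0) / nu_iaw(0.1)
```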
If we look at the ratio of the ion-acoustic to Coulomb collision rates, we find: \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt] \item[] \textbf{Range of Values (assuming 0.1 mV/m)} \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt] \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{ee}}$ $\sim$ $3 \times 10^{+0}$ -- $\sim$ $1 \times 10^{+5}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{pp}}$ $\sim$ $3 \times 10^{+1}$ -- $\sim$ $9 \times 10^{+6}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{\alpha \alpha}}$ $\sim$ $1 \times 10^{+2}$ -- $\sim$ $3 \times 10^{+7}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{ep}}$ $\sim$ $4 \times 10^{+0}$ -- $\sim$ $2 \times 10^{+4}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{e \alpha}}$ $\sim$ $1 \times 10^{+1}$ -- $\sim$ $2 \times 10^{+4}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{p \alpha}}$ $\sim$ $2 \times 10^{+2}$ -- $\sim$ $1 \times 10^{+7}$ \end{itemize} \item[] \textbf{Median Values (assuming 0.1 mV/m)} \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt] \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{ee}}$ $\sim$ $2 \times 10^{+2}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{pp}}$ $\sim$ $4 \times 10^{+3}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{\alpha \alpha}}$ $\sim$ $3 \times 10^{+4}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{ep}}$ $\sim$ $2 \times 10^{+2}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{e \alpha}}$ $\sim$ $1 \times 10^{+3}$ \item $\nu{\scriptstyle_{iaw}}/\nu{\scriptstyle_{p \alpha}}$ $\sim$ $4 \times 10^{+4}$ \end{itemize} \end{itemize} \noindent Thus, the wave-particle collision rates are $\sim$3--$3 \times 10^{+7}$ times larger than the Coulomb collision rates, even when considering very low amplitudes. We will discuss this further in Section \ref{sec:DiscussionandConclusions}. 
\section{Discussion and Conclusions} \label{sec:DiscussionandConclusions} \indent We present a long-duration (i.e., over a span of $\sim$10 years) statistical analysis of the temperatures, temperature ratios, and plasma betas of electrons, protons, and alpha-particles observed by the \emph{Wind} spacecraft near 1 AU. The primary purpose of this study is to provide a convenient and comprehensive statistical summary of the electron, proton, and alpha-particle plasma parameters\footnote{It is not the purpose of this work to detail the properties of the subcomponents of each particle species (e.g., halo electrons, proton beams, etc.), which is beyond the scope of this study and some of which will be addressed in detail in a future study [e.g., \textit{Salem et al. in preparation}]. We also did not provide an in-depth, physical interpretation of the plasma parameters as that will also be addressed in future work.} near 1 AU. The last long-duration (i.e., more than 1 year) statistical study comparing electrons with protons was published 20 years ago \citep[i.e.,][]{newbury98a} and relied upon only 18 months of $\sim$5 minute averaged data (i.e., $\sim$160,000 measurements). This study uses nearly 10 years of data spanning from the end of solar cycle 22 through much of solar cycle 23 (i.e., $>$1,000,000 measurements) with time periods separated into four categories (see Section \ref{sec:DefinitionsDataSets} for definitions): all times (\textbf{Constraints 1} and \textbf{2}), all times excluding interplanetary (IP) shocks (\textbf{Constraints 1}--\textbf{3}), only times within magnetic obstacles (MOs) \citep[e.g.,][]{nieveschinchilla16a, nieveschinchilla18a} (\textbf{Constraints 1}, \textbf{2}, and \textbf{4}), and all times excluding MOs (\textbf{Constraints 1}, \textbf{2}, and \textbf{5}). Below we discuss the observations and present the conclusions. 
\subsection{Discussion} \label{subsec:Discussion} \indent Tables \ref{tab:Temperatures}, \ref{tab:TemperatureRatios}, and \ref{tab:PlasmaBeta} and Figures \ref{fig:Temperatures}, \ref{fig:TemperatureRatios}, and \ref{fig:PlasmaBeta} show that the only time periods that appear to significantly affect the parameters are those during MOs, not IP shocks. In fact, the only parameter that does not appear to show a significant difference inside vs. outside of MOs is the electron temperature, $T{\scriptstyle_{e, j}}$. This causes the electron-dependent temperature ratios to be higher on average during MOs than the ion-only ratios, since both the proton and alpha-particle temperatures decrease during MOs. That the electron temperatures are not significantly affected by MOs may be indicative of their higher mobility/conductivity relative to the ions. However, their betas are lower, so either the densities drop or the magnetic fields increase (most likely the latter, based on the MO definition criteria). \indent The median scalar temperatures (see Table \ref{tab:Temperatures}) and plasma betas (see Table \ref{tab:PlasmaBeta}) are consistent with previous studies (e.g., see Tables \ref{tab:PrevTemperatures} and \ref{tab:PrevBetas} for references) but the median temperature ratios (see Table \ref{tab:TemperatureRatios}) are lower than most previous observations (see Table \ref{tab:PrevTempRatios} for references), despite sharing similar observed data ranges for each ratio. The differences are magnified when we compare the temperature ratios during MOs (i.e., \textbf{Constraint 4}) to previous studies that focused on ICMEs and MOs. For instance, previous studies reported much larger average values for $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ (i.e., $\sim$4--6) than these observations (i.e., $\sim$1.6$\pm$1.3). The electron velocity moments from previous studies may have been calculated over reduced energy ranges, which could account for some of this difference.
The spacecraft potential, which must be corrected for in order to obtain accurate electron moments, may also have been less precisely measured in prior missions. \indent Many previous studies used instruments that had energy ranges consistent with only EESA Low (i.e., $\lesssim$ 1 keV) while this work includes EESA High (i.e., $\lesssim$ 30 keV). Further, only a few missions, including \emph{Wind}, have been able to accurately measure the upper hybrid line, providing an unambiguous value for the total electron density, $n{\scriptstyle_{e}}$. The values of $n{\scriptstyle_{s}}$ (see Section \ref{sec:DefinitionsDataSets} for details) are used to compute $T{\scriptstyle_{s, j}}$ from elements of the diagonalized pressure tensor of species $s$, assuming an ideal gas law, in nearly all velocity moment software, both past and present (e.g., larger $n{\scriptstyle_{s}}$ for the same mean kinetic energy density will result in smaller $T{\scriptstyle_{s, j}}$). Thus, more accurate $n{\scriptstyle_{s}}$ should produce more accurate $T{\scriptstyle_{s, j}}$. We suspect that the improved accuracy of $n{\scriptstyle_{e}}$, and thus $n{\scriptstyle_{p}}$ and $n{\scriptstyle_{\alpha}}$, improved the temperature values relative to some previous work, which in turn altered the corresponding temperature ratios. For example, this can be seen when one examines the study by \citet[][]{skoug00a}, where their average values for $T{\scriptstyle_{e, tot}}$, $T{\scriptstyle_{p, tot}}$, and $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ inside ICMEs were $\sim$9.2 eV, $\sim$2.0 eV, and $\sim$6.8, respectively, compared to these average values of $\sim$11.1 eV, $\sim$7.7 eV, and $\sim$2.6, respectively. \indent As we showed, the wave-particle collision rates are $\sim$$10^{+0}$--$10^{+7}$ times larger than the Coulomb collision rates, even when we use very low amplitudes.
It is known that ion-acoustic wave amplitudes can be more than three orders of magnitude larger than the 0.1 mV/m used here \citep[e.g.,][]{wilsoniii07a, wilsoniii10a, wilsoniii14b}, which would increase $\nu{\scriptstyle_{iaw}}$ by six orders of magnitude. A potential application of these statistical observations is for instability analysis. For instance, the growth rate threshold was predicted to critically depend upon the $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ ratio for electrostatic ion-acoustic waves\footnote{Note, however, that temperature gradients, heat fluxes, and other non-Maxwellian velocity distribution features can reduce this threshold significantly \citep[e.g.,][]{dum78a, dum78b}.}, specifically growth is only supposed to occur when $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $\gtrsim$ 3. We find this criterion is satisfied $\sim$12.4\% of the time\footnote{We suspect that the fraction of time satisfying $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $\gtrsim$ 3 would change with an increased time resolution \citep[e.g.,][]{paschmann98a}.} for data satisfying \textbf{Constraints 1} and \textbf{2}. This is important since electrostatic ion-acoustic waves are known to interact with both the electrons and ions and are observed ubiquitously in the solar wind \citep[e.g.,][]{gurnett79b}, at current sheets \citep[e.g.,][]{malaspina13a}, and at IP shocks \citep[e.g.,][]{wilsoniii07a}. \indent The collision rates from Appendix \ref{app:CoulombCollisions} show that even for conservatively low wave amplitudes, the median values of the ratio between the wave-particle and particle-particle collision rates range from $\sim$$10^{+0}$--$10^{+7}$. 
If we assume ion-acoustic waves occur whenever $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $\gtrsim$ 3 (i.e., $\sim$12.4\% occurrence rate), then we can adjust the collision rate ratios by this fractional occurrence rate to find the ratios of the net effects from either type of collision\footnote{Note that the range of collision rate ratios $\sim$$10^{+0}$--$10^{+7}$ assumes ion-acoustic waves exist 100\% of the time a particle distribution streams from the sun to the Earth. Adjusting by this $\sim$12.4\% is an approximation used to reduce their net effect to more realistic values.}. Then the collision rate ratios would reduce to $\sim$$10^{-1}$--$10^{+6}$. To ensure that $\nu{\scriptstyle_{iaw}}$ is always greater than $\nu{\scriptstyle_{ss'}}$ with the $\sim$12.4\% occurrence rate correction, the wave amplitudes would only need to increase to $\sim$0.17 mV/m, consistent with numerous solar wind observations \citep[e.g.,][]{gurnett79b, malaspina13a}. \indent Suppose we further limit the occurrence rate by the rate of current sheet crossings near 1 AU. \citet[][]{malaspina13a} reported that roughly 942 current sheets were observed per day, which is a $\sim$1.1\% occurrence rate. If we couple that with the $\sim$12.4\% rate for satisfying $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $\gtrsim$ 3, then we have a $\sim$0.14\% net occurrence rate. This would reduce the collision rate ratios from $\sim$$10^{+0}$--$10^{+7}$ down to $\sim$$10^{-3}$--$10^{+4}$. Using the same logic as before, these ratios would approach unity if the wave amplitudes were increased to at least $\sim$1.7 mV/m. Again, this is not an unrealistically large amplitude for ion-acoustic waves in the solar wind \citep[e.g.,][]{gurnett79b, malaspina13a} and much smaller than those observed at IP shocks \citep[e.g.,][]{wilsoniii07a, wilsoniii10a}.
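The amplitude adjustments in this paragraph follow from the $|E|^{2}$ scaling of $\nu{\scriptstyle_{iaw}}$: after multiplying the smallest collision-rate ratio ($\sim$3 at 0.1 mV/m) by an occurrence fraction $f$, the amplitude must grow by $\sqrt{1/(3f)}$ to keep the net ratio at unity. A sketch of that arithmetic (the rounded inputs are taken from the text, so the outputs agree only approximately):

```python
import math

def required_amplitude(E0_mVm, min_ratio, occurrence):
    """Amplitude (mV/m) at which the occurrence-weighted wave-particle
    rate still matches the fastest Coulomb rate, i.e. solve
    (E/E0)^2 * min_ratio * occurrence = 1 for E."""
    return E0_mVm * math.sqrt(1.0 / (min_ratio * occurrence))

# ~12.4% occurrence of Te/Tp >= 3: ~0.16 mV/m, consistent with the
# ~0.17 mV/m quoted in the text
E_mo = required_amplitude(0.1, min_ratio=3.0, occurrence=0.124)
# adding the current-sheet fraction (~0.14% net): ~1.5 mV/m, the same
# order as the quoted ~1.7 mV/m (unrounded medians presumably used)
E_cs = required_amplitude(0.1, min_ratio=3.0, occurrence=0.0014)
```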
\indent The primary limitation of assuming a collision rate between particles and ion-acoustic waves is the lack of a statistical study of the true occurrence rate and amplitudes of such modes in the solar wind. The closest study was performed nearly 40 years ago by \citet[][]{gurnett79b} using dynamic spectra measurements of solar wind electric and magnetic fields from \emph{Helios} and \emph{Voyager}. It is well known that dynamic spectra measurements underestimate wave amplitudes by at least an order of magnitude when the waves are composed of short duration, bursty wave packets \citep[e.g.,][]{tsurutani09a}. Thus, the ratios above would increase by two orders of magnitude for instantaneous wave amplitudes. Further, the wave-particle collision rates from Equation \ref{eq:WPIcollfreq_0} are known to be $\sim$2--3 orders of magnitude too small \citep[e.g.,][]{petkaki08a, yoon06a}. If we only include the collision rate correction, then the $\sim$1.7 mV/m amplitude would reduce to $\sim$0.017 mV/m to match the net effects contributed by Coulomb collisions, even with the $\sim$0.14\% net occurrence rate correction factor. \indent Perhaps another way to express the differences is to revisit the collision rate ratios. If we include the $\sim$0.14\% net occurrence rate correction factor but increase the contributions from wave-particle collisions by $\sim$3 orders of magnitude (i.e., to account for known underestimations in theory and observations), the ratios increase to $\sim$4--$3 \times 10^{+7}$. That is, there would need to be at least three Coulomb collisions for every collision with an ion-acoustic wave, which would correspond to roughly 11 days\footnote{Estimate takes the inverse of the median Coulomb collision rates from the list as a time scale for one collision, multiplies by number of necessary collisions, then divides by 86,400 seconds/day.} for the fastest Coulomb collision rate and $\sim$2380 days for the slowest.
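The footnote's time-scale arithmetic can be sketched with the rounded median rates from Section \ref{subsec:CollisionRates}; the published figures presumably used the unrounded medians, so the slow-rate value differs somewhat.

```python
def days_for_collisions(nu_per_s, n_coll=3):
    """Days needed for n_coll Coulomb collisions at rate nu_per_s
    (s^-1): invert the rate to get one collision time, multiply by the
    number of collisions, then divide by 86,400 s/day."""
    return n_coll / nu_per_s / 86400.0

fast = days_for_collisions(3e-6)  # median nu_ep: ~11.6 days
slow = days_for_collisions(1e-8)  # median nu_p_alpha: ~3500 days with
# the rounded median; the text's ~2380 days implies an unrounded value
```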
\subsection{Conclusion} \label{subsec:Conclusion} \indent We summarize the observations by showing the mean(median) [standard deviation] for each parameter for all time periods (i.e., satisfying \textbf{Constraints 1} and \textbf{2}) analyzed. Note that none of the distributions are symmetric (i.e., they all have a finite skewness), so the interpretation of the mean and standard deviation should be taken with that in mind. For instance, the standard deviation can exceed the mean and/or median (e.g., $T{\scriptstyle_{\alpha, tot}}$). The summary of the observations is as follows:
\begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt]
\item[] \textbf{Scalar Temperatures}
\begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt]
\item $T{\scriptstyle_{e, tot}}$ $=$ 12.2(11.9)[3.2] eV
\item $T{\scriptstyle_{p, tot}}$ $=$ 12.7(8.6)[14.1] eV
\item $T{\scriptstyle_{\alpha, tot}}$ $=$ 23.9(10.8)[31.7] eV
\end{itemize}
\item[] \textbf{Temperature Ratios}
\begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt]
\item $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $=$ 1.64(1.27)[1.26]
\item $\left(T{\scriptstyle_{e}}/T{\scriptstyle_{\alpha}}\right){\scriptstyle_{tot}}$ $=$ 1.24(0.82)[1.25]
\item $\left(T{\scriptstyle_{\alpha}}/T{\scriptstyle_{p}}\right){\scriptstyle_{tot}}$ $=$ 2.50(1.94)[1.45]
\end{itemize}
\item[] \textbf{Plasma Betas}
\begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt]
\item $\beta{\scriptstyle_{e, tot}}$ $=$ 2.31(1.09)[17.6]
\item $\beta{\scriptstyle_{p, tot}}$ $=$ 1.79(1.05)[11.4]
\item $\beta{\scriptstyle_{\alpha, tot}}$ $=$ 0.17(0.05)[1.35]
\end{itemize}
\item[] \textbf{Collision Rates}
\begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt]
\item $\nu{\scriptstyle_{ee}}$ $\sim$ $4 \times 10^{-6}$($2 \times 10^{-6}$)[$4 \times 10^{-6}$] \# $s^{-1}$
\item $\nu{\scriptstyle_{pp}}$ $\sim$ $3 \times 10^{-7}$($1 \times 10^{-7}$)[$4 \times 10^{-7}$] \# $s^{-1}$
\item $\nu{\scriptstyle_{\alpha \alpha}}$ $\sim$ $6 \times 10^{-8}$($2 \times 10^{-8}$)[$1 \times 10^{-7}$] \# $s^{-1}$
\item $\nu{\scriptstyle_{ep}}$ $\sim$ $5 \times 10^{-6}$($3 \times 10^{-6}$)[$4 \times 10^{-6}$] \# $s^{-1}$
\item $\nu{\scriptstyle_{e \alpha}}$ $\sim$ $7 \times 10^{-7}$($4 \times 10^{-7}$)[$1 \times 10^{-6}$] \# $s^{-1}$
\item $\nu{\scriptstyle_{p \alpha}}$ $\sim$ $3 \times 10^{-8}$($1 \times 10^{-8}$)[$7 \times 10^{-8}$] \# $s^{-1}$
\item $\nu{\scriptstyle_{iaw}}$ $\sim$ $5 \times 10^{-4}$($5 \times 10^{-4}$)[$2 \times 10^{-4}$] \# $s^{-1}$ (for 0.1 mV/m wave amplitude)
\end{itemize}
\end{itemize}
\indent The results are relevant for long-term statistical models and parameter range limits in empirical models. These observations are also relevant to comparisons with astrophysical plasmas like the intra-galaxy-cluster medium. Finally, this work will provide a statistical baseline for the upcoming \emph{Solar Orbiter} and \emph{Parker Solar Probe} missions and the future IMAP mission. \acknowledgments \noindent The authors thank A.F. Vi{\~n}as and T. Nieves-Chinchilla for useful discussions of solar wind plasma physics. L.B.W. was partially supported by \emph{Wind} MO\&DA grants. C.S.S. was partially supported by NASA grant NNX16AI59G and NSF SHINE grant 1622498. S.D.B., T.A.B., C.S.S., and M.P.P. were partially supported by NASA grant NNX16AP95G. B.A.M. was partially supported by NASA grants NNX17AC72G and NNX17AH88G. K.G.K. and J.C.K. were partially supported by NASA grant NNX14AR78G. M.L.S. was partially supported by grants NNX14AT26G and NNX13AI75G. The CFA Interplanetary Shock Database is supported by NASA grant NNX13AI75G.
The authors thank the Harvard Smithsonian Center for Astrophysics, the NASA SPDF/CDAWeb team, and the \emph{Wind} team for the interplanetary shock analysis, \emph{Wind} plasma and magnetic field data, and the \emph{Wind} ICME catalog, respectively. The \emph{Wind} shock database can be found at: \\ \noindent \url{https://www.cfa.harvard.edu/shocks/wi\_data/}. \\ \noindent The \emph{Wind} ICME catalog can be found at: \\ \noindent \url{https://wind.nasa.gov/ICMEindex.php}. \\ \noindent Analysis software used herein can be found at: \\ \noindent \url{https://github.com/lynnbwilsoniii/wind\_3dp\_pros}. \\
\section{Introduction} The last decade has witnessed huge success of deep neural networks in various domains. Examples include computer vision, speech recognition, and natural language processing~\citep{lecun2015deep}. However, their huge size often hinders deployment to small computing devices such as cell phones and the internet of things. Many attempts have been recently made to reduce the model size. One common approach is to prune a trained dense network \citep{han2015learning, han2015deep}. However, most of the pruned weights may come from the fully-connected layers where computations are cheap, and the resultant time reduction is insignificant. \citet{li2017pruning} and \citet{molchanov2017pruning} proposed to prune filters in the convolutional neural networks based on their magnitudes or significance to the loss. However, the pruned network has to be retrained, which is again expensive. Another direction is to use more compact models. GoogleNet~\citep{szegedy2015going} and ResNet~\citep{he2016deep} replace the fully-connected layers with simpler global average pooling. However, they are also deeper. SqueezeNet~\citep{iandola2016squeezenet} reduces the model size by replacing most of the $3\times3$ filters with $1 \times 1$ filters. This is less efficient on smaller networks because the dense $1\times1$ convolutions are costly. MobileNet~\citep{howard2017mobilenets} compresses the model using separable depth-wise convolution. ShuffleNet~\citep{zhang2017shufflenet} utilizes pointwise group convolution and channel shuffle to reduce the computation cost while maintaining accuracy. However, highly optimized group convolution and depth-wise convolution implementations are required. Alternatively, \citet{novikov2015tensorizing} compressed the model by using a compact multilinear format to represent the dense weight matrix. The CP and Tucker decompositions have also been used on the kernel tensor in CNNs \citep{lebedev2014speeding,kim2015compression}. 
However, they often need expensive fine-tuning. Another effective approach to compress the network and accelerate training is by quantizing each full-precision weight to a small number of bits. This can be further divided into two sub-categories, depending on whether pre-trained models are used \citep{lin2016fixed,mellempudi2017ternary} or the quantized model is trained from scratch~\citep{courbariaux2015binaryconnect,li2017training}. Some of these also directly learn with low-precision weights, but they usually suffer from severe accuracy deterioration \citep{li2017training,miyashita2016convolutional}. By keeping the full-precision weights during learning, \citet{courbariaux2015binaryconnect} pioneered the BinaryConnect algorithm, which uses only one bit for each weight while still achieving state-of-the-art classification results. \citet{rastegari2016xnor} further incorporated weight scaling, and obtained better results. Instead of simply finding the closest binary approximation of the full-precision weights, a loss-aware scheme is proposed in~\citep{hou2017loss}. Beyond binarization, TernaryConnect \citep{lin2015neural} quantizes each weight to $\{-1,0,1\}$. \citet{li2016ternary} and \citet{zhu2017trained} added scaling to the ternarized weights, and DoReFa-Net~\citep{zhou2016dorefa} further extended quantization to more than three levels. However, these methods do not consider the effect of quantization on the loss, and rely on heuristics in their procedures \citep{zhou2016dorefa,zhu2017trained}. Recently, a loss-aware low-bit quantized neural network was proposed in \citep{leng2017extremely}. However, it uses full-precision weights in the forward pass and the extra-gradient method \citep{vasilyev2010extragradient} for update, both of which are expensive. In this paper, we propose an efficient and disciplined ternarization scheme for network compression. Inspired by \citep{hou2017loss}, we explicitly consider the effect of ternarization on the loss.
This is formulated as an optimization problem which is then solved efficiently by the proximal Newton algorithm. When the loss surface's curvature is ignored, the proposed method reduces to that of \citep{li2016ternary}, and is also related to the projection step of~\citep{leng2017extremely}. Next, we extend it to (i) allow the use of different scaling parameters for the positive and negative weights; and (ii) the use of $m$ bits (where $m>2$) for weight quantization. Experiments on both feedforward and recurrent neural networks show that the proposed quantization scheme outperforms state-of-the-art algorithms. {\bf Notations:} For a vector ${\bf x}$, $\sqrt{{\bf x}}$ denotes the element-wise square root (i.e., $[\sqrt{{\bf x}}]_i = \sqrt{x_i}$), $|{\bf x}|$ is the element-wise absolute value, $\|{\bf x}\|_p = (\sum_i|x_i|^p)^{\frac{1}{p}}$ is its $p$-norm, and $\text{Diag}({\bf x})$ returns a diagonal matrix with ${\bf x}$ on the diagonal. For two vectors ${\bf x}$ and ${\bf y}$, ${\bf x} \odot {\bf y}$ denotes the element-wise multiplication and ${\bf x} \oslash {\bf y}$ the element-wise division. $\|{\bf x}\|_{\bm Q}^2 = {\bf x}^\top {\bm Q} {\bf x}$. Given a threshold $\Delta$, ${\bf I}_{\Delta}({\bf x})$ returns a vector such that $[{\bf I}_{\Delta}({\bf x})]_i=1$ if $x_i>\Delta$, $-1$ if $x_i < -\Delta$, and 0 otherwise. ${\bf I}_{\Delta}^+({\bf x})$ considers only the positive threshold, i.e., $[{\bf I}^+_{\Delta}({\bf x})]_i=1$ if $x_i>\Delta$, and 0 otherwise. Similarly, $[{\bf I}^-_{\Delta}({\bf x})]_i=-1$ if $x_i< -\Delta$, and 0 otherwise. For a matrix ${\bf X}$, $\text{vec}({\bf X})$ returns a vector by stacking all the columns of ${\bf X}$, and $\text{diag}({\bf X})$ returns a vector whose entries are from the diagonal of ${\bf X}$. 
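To make the thresholding notation concrete, a minimal NumPy sketch of ${\bf I}_{\Delta}$, ${\bf I}_{\Delta}^{+}$, and ${\bf I}_{\Delta}^{-}$ as defined above:

```python
import numpy as np

def I_delta(x, delta):
    """[I_delta(x)]_i = 1 if x_i > delta, -1 if x_i < -delta, else 0."""
    return np.sign(x) * (np.abs(x) > delta)

def I_delta_plus(x, delta):
    """Positive threshold only: 1 where x_i > delta, else 0."""
    return (x > delta).astype(x.dtype)

def I_delta_minus(x, delta):
    """Negative threshold only: -1 where x_i < -delta, else 0."""
    return -(x < -delta).astype(x.dtype)

x = np.array([0.9, -0.4, 0.1, -0.8])
# I_delta(x, 0.5) keeps only the entries whose magnitude exceeds 0.5
```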
\section{Related Work} Let the full-precision weights from all $L$ layers be ${\bf w} = [{\bf w}_1^\top, {\bf w}_2^\top, \dots, {\bf w}_L^\top]^\top$, where ${\bf w}_l = \text{vec}({\bf W}_l)$, and ${\bf W}_l$ is the weight matrix at layer $l$. The corresponding quantized weights will be denoted $\hat{\bf{w}}=[\hat{\bf{w}}_1^\top, \hat{\bf{w}}_2^\top, \dots, \hat{\bf{w}}_L^\top]^\top$. \subsection{Weight Binarized Networks} In BinaryConnect \citep{courbariaux2015binaryconnect}, each element of ${\bf w}_l$ is binarized to $-1$ or $+1$ by using the sign function: $\text{Binarize}({\bf w}_l) = \text{sign} ({\bf w}_l) $. In the Binary-Weight-Network (BWN) \citep{rastegari2016xnor}, a scaling parameter is also included, i.e., $\text{Binarize}({\bf w}_l) = \alpha_l {\bf b}_l$, where $\alpha_l>0$, ${\bf b}_l \in \{-1, +1\}^{n_l}$ and $n_l$ is the number of weights in ${\bf w}_l$. By minimizing the difference between ${\bf w}_l$ and $\alpha_l {\bf b}_l$, the optimal $\alpha_l, {\bf b}_l$ have the simple form: $\alpha_l = \|{\bf w}_l\|_1/n_l$, and ${\bf b}_l = \text{sign} ({\bf w}_l)$. Instead of simply finding the best binary approximation for the full-precision weight ${\bf w}_l^t$ at iteration $t$, the loss-aware binarized network (LAB) directly minimizes the loss w.r.t. the binarized weight $\alpha^t_l {\bf b}_l^t$ \citep{hou2017loss}. Let ${\bf d}^{t-1}_l$ be a vector containing the diagonal of an approximate Hessian of the loss. It can be shown that $\alpha^t_l = \|{\bf d}^{t-1}_l \odot {\bf w}^t_l\|_1/\|{\bf d}^{t-1}_l\|_1$ and ${\bf b}_l^t = \text{sign}({\bf w}^t_l)$. \subsection{Weight Ternarized Networks} In a weight ternarized network, zero is used as an additional quantized value. In TernaryConnect~\citep{lin2015neural}, each weight value is clipped to $[-1,1]$ before quantization, and then a non-negative weight $[{\bf w}_l^t]_i$ is stochastically quantized to $1$ with probability $[{\bf w}_l^t]_i$ (and $0$ otherwise). 
When $[{\bf w}_l^t]_i$ is negative, it is quantized to $-1$ with probability $-[{\bf w}_l^t]_i$, and $0$ otherwise. In the ternary weight network (TWN)~\citep{li2016ternary}, ${\bf w}_l^t$ is quantized to $\hat{\bf{w}}_l^t = \alpha_l^t {\bf I}_{\Delta_l^t}({\bf w}_l^t) $, where $\Delta_l^t$ is a threshold (i.e., $[\hat{\bf{w}}_l^t]_i = \alpha_l^t$ if $[{\bf w}_l^t]_i>\Delta_l^t$, $-\alpha_l^t$ if $[{\bf w}_l^t]_i<-\Delta_l^t$ and 0 otherwise). To obtain $\Delta_l^t$ and $\alpha_l^t$, TWN minimizes the $\ell_2$-distance between the full-precision and ternarized weights, leading to \begin{equation} \label{eq:twn} \Delta_l^t = \arg \max_{\Delta>0} \frac{1}{\|{\bf I}_{\Delta}({\bf w}_l^t)\|_1} \left(\sum_{i : |[{\bf w}_l^t]_i|> \Delta} |[{\bf w}_l^t]_i| \right)^2, \;\; \alpha_l^t = \frac{1}{\|{\bf I}_{\Delta_l^t}({\bf w}_l^t)\|_1} \sum_{i: |[{\bf w}_l^t]_i|> \Delta_l^t} |[{\bf w}_l^t]_i|. \end{equation} However, $\Delta_l^t$ in (\ref{eq:twn}) is difficult to solve for exactly. Instead, TWN simply sets $\Delta_l^t= 0.7 \cdot \mathbf{E}(|{\bf w}_l^t|)$ in practice. In TWN, one scaling parameter ($\alpha_l^t$) is used for both the positive and negative weights at layer $l$. In the trained ternary quantization (TTQ) network~\citep{zhu2017trained}, different scaling parameters ($\alpha_l^t$ and $\beta_l^t$) are used. The weight ${\bf w}_l^t$ is thus quantized to $\hat{\bf{w}}_l^t= \alpha_l^t {\bf I}_{\Delta_l^t}^+({\bf w}_l^t) + \beta_l^t {\bf I}_{\Delta_l^t}^-({\bf w}_l^t)$. The scaling parameters are learned by gradient descent. As for $\Delta_l^t$, two heuristics are used. The first sets $\Delta_l^t$ to a constant fraction of $\max(|{\bf w}_l^t|)$, while the second sets $\Delta_l^t$ such that all layers are equally sparse. \subsection{Weight Quantized Networks} In a weight quantized network, $m$ bits (where $m\ge 2$) are used to represent each weight. Let $\mathcal{Q}$ be a set of $(2k+1)$ quantized values, where $k=2^{m-1}-1$.
The two popular choices of $\mathcal{Q}$ are $\left\{-1, -\frac{k-1}{k}, \dots, -\frac{1}{k}, 0, \frac{1}{k}, \dots, \frac{k-1}{k}, 1\right\}$ (linear quantization), and $\left\{-1, -\frac{1}{2}, \dots, -\frac{1}{2^{k-1}}, 0, \frac{1}{2^{k-1}}, \dots, \frac{1}{2}, 1\right\}$ (logarithmic quantization). By limiting the quantized values to powers of two, logarithmic quantization is advantageous in that expensive floating-point operations can be replaced by cheaper bit-shift operations. When $m=2$, both schemes reduce to $\mathcal{Q}=\{-1, 0, 1\}$. In the DoReFa-Net~\citep{zhou2016dorefa}, weight ${\bf w}_l^t$ is heuristically quantized to $m$-bit, with:\footnote{Note that the quantized value of 0 is not used in DoReFa-Net.} \[ [\hat{\bf{w}}_l^t]_i = 2 \cdot \text{quantize}_m \left( \frac{\tanh([{\bf w}_l^t]_i)}{2\max(|\tanh([{\bf w}_l^t]_i)|)} + \frac{1}{2} \right)-1 \] in $\{-1, -\frac{2^m-2}{2^m-1}, \dots, -\frac{1}{2^m-1}, \frac{1}{2^m-1}, \dots, \frac{2^m-2}{2^m-1}, 1 \}$, where $\text{quantize}_m(x) = \frac{1}{2^m-1}\text{round}((2^m-1)x)$. Similar to loss-aware binarization \citep{hou2017loss}, \citet{leng2017extremely} proposed a loss-aware quantized network called low-bit neural network (LBNN). The alternating direction method of multipliers (ADMM) \citep{boyd-11} is used for optimization. At the $t$th iteration, the full-precision weight ${\bf w}_l^t$ is first updated by the method of extra-gradient~\citep{vasilyev2010extragradient}: \begin{equation} \label{eq:extra} \tilde{{\bf w}}_l^t = {\bf w}_l^{t-1} - \eta^t \nabla_l \mathcal{L}({\bf w}_l^{t-1}), \;\; {\bf w}_l^t = {\bf w}_l^{t-1} - \eta^t \nabla_l \mathcal{L}(\tilde{{\bf w}}_l^t), \end{equation} where $\mathcal{L}$ is the augmented Lagrangian in the ADMM formulation, and $\eta^t$ is the stepsize. 
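The extra-gradient update in (\ref{eq:extra}) can be sketched generically; `grad_L` below is a stand-in for $\nabla_l \mathcal{L}$ (the actual augmented Lagrangian gradient is not reproduced here).

```python
import numpy as np

def extra_gradient_step(w, grad_L, eta):
    """One extra-gradient update: take a trial step to w_tilde, then
    step from the *original* point using the gradient at w_tilde.
    This doubles the gradient evaluations per iteration, which is why
    the text describes LBNN's update as costly."""
    w_tilde = w - eta * grad_L(w)
    return w - eta * grad_L(w_tilde)

# Toy quadratic loss 0.5*||w||^2, so grad_L(w) = w:
w = np.array([1.0, -2.0])
w_new = extra_gradient_step(w, lambda v: v, eta=0.1)
# w_tilde = 0.9*w, hence w_new = w - 0.1*(0.9*w) = 0.91*w
```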
Next, ${\bf w}_l^t$ is projected to the space of $m$-bit quantized weights so that $\hat{\bf{w}}_l^t$ is of the form $\alpha_l {\bf b}_l$, where $\alpha_l > 0$, and ${\bf b}_l\in \left\{-1, -\frac{1}{2}, \dots, -\frac{1}{2^{k-1}}, 0, \frac{1}{2^{k-1}}, \dots, \frac{1}{2}, 1\right\}$. \section{Loss-Aware Quantization} \subsection{Ternarization using Proximal Newton Algorithm} In weight ternarization, TWN simply finds the closest ternary approximation of the full precision weight at each iteration, while TTQ sets the ternarization threshold heuristically. Inspired by LAB (for binarization), we consider the loss explicitly during quantization and obtain the quantization thresholds and scaling parameter by solving an optimization problem. As in TWN, the weight ${\bf w}_l$ is ternarized as $\hat{\bf{w}}_l =\alpha_l {\bf b}_l$, where $\alpha_l>0$ and ${\bf b}_l\in \{-1, 0, 1\}^{n_l}$. Given a loss function $\ell$, we formulate weight ternarization as the following optimization problem: \begin{equation} \label{eq:obj} \min_{\hat{\bf{w}}} \; \ell(\hat{\bf{w}}) \;:\; \hat{\bf{w}}_l=\alpha_l {\bf b}_l, \; \alpha_l > 0, \; {\bf b}_l\in \mathcal{Q}^{n_l}, \;\; l=1,\dots,L, \end{equation} where $\mathcal{Q}$ is the set of desired quantized values. As in LAB, we will solve this using the proximal Newton method~\citep{lee2014proximal,rakotomamonjy2016dc}. At iteration $t$, the objective is replaced by the second-order expansion \begin{equation} \label{eq:2nd} \ell(\hat{\bf{w}}^{t-1}) + \nabla \ell(\hat{\bf{w}}^{t-1})^\top (\hat{\bf{w}} - \hat{\bf{w}}^{t-1}) + \frac{1}{2}(\hat{\bf{w}} - \hat{\bf{w}}^{t-1})^\top {\bf H}^{t-1} (\hat{\bf{w}} - \hat{\bf{w}}^{t-1}), \end{equation} where ${\bf H}^{t-1}$ is an estimate of the Hessian of $\ell$ at $\hat{\bf{w}}^{t-1}$. 
We use the diagonal equilibration pre-conditioner~\citep{dauphin2015equilibrated}, which is robust in the presence of saddle points and also readily available in popular stochastic deep network optimizers such as Adam \citep{kingma2014adam}. Let ${\bf D}_l$ be the approximate diagonal Hessian at layer $l$. We use ${\bf D}=\text{Diag}([\text{diag}({\bf D}_1)^\top, \dots, \text{diag}({\bf D}_L)^\top]^\top)$ as an estimate of ${\bf H}$. Substituting (\ref{eq:2nd}) into (\ref{eq:obj}), we solve the following subproblem at the $t$th iteration: \begin{eqnarray} \label{eq:obj_proximal} &\min_{\hat{\bf{w}}^t} & \nabla \ell(\hat{\bf{w}}^{t-1})^\top (\hat{\bf{w}}^t - \hat{\bf{w}}^{t-1})+\frac{1}{2}(\hat{\bf{w}}^t - \hat{\bf{w}}^{t-1})^\top {\bf D}^{t-1} (\hat{\bf{w}}^t - \hat{\bf{w}}^{t-1}) \\ \nonumber & \text{s.t.} & \hat{\bf{w}}_l^t=\alpha_l^t {\bf b}_l^t, \; \alpha_l^t > 0, \; {\bf b}_l^t\in \mathcal{Q}^{n_l}, \;\; l=1,\dots,L. \label{eq:quan} \nonumber \end{eqnarray} \begin{prop} \label{prop:two_step} Let ${\bf d}^{t-1}_l \equiv \text{diag}({\bf D}^{t-1}_l)$, the objective in (\ref{eq:obj_proximal}) can be rewritten as \begin{equation} \label{eq:ter_quantize} \min_{\hat{\bf{w}}^t} \frac{1}{2} \sum_{l=1}^{L} \|\hat{\bf{w}}_l^t- {\bf w}_l^t\|_{{\bf D}_l^{t-1}}^2, \end{equation} where \begin{equation} \label{eq:ter_sgd} {\bf w}^t_l \equiv \hat{\bf{w}}_l^{t-1} - \nabla_l \ell(\hat{\bf{w}}^{t-1}) \oslash {\bf d}^{t-1}_l. \end{equation} \end{prop} Obviously, this objective can be minimized layer by layer. Each proximal Newton iteration thus consists of two steps: (i) Obtain ${\bf w}^t_l$ in (\ref{eq:ter_sgd}) by gradient descent along $ \nabla_l \ell(\hat{\bf{w}}^{t-1})$, which is preconditioned by the adaptive learning rate $1 \oslash {\bf d}^{t-1}_l$ so that the rescaled dimensions have similar curvatures; (ii) Quantize ${\bf w}^t_l$ to $\hat{{\bf w}}^t_l$ by minimizing the scaled difference between $\hat{{\bf w}}_l^t$ and $ {\bf w}_l^t$ in (\ref{eq:ter_quantize}). 
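A minimal sketch of one such iteration for a single layer, with the quantization of step (ii) left as a callable (how to solve it in closed form is the subject of Proposition \ref{prop:ter_alt}):

```python
import numpy as np

def proximal_newton_step(w_hat_prev, grad, d, quantize):
    """One layer-wise proximal Newton iteration:
    (i)  preconditioned gradient step, w = w_hat_prev - grad / d,
         where d = diag(D) is the diagonal Hessian estimate; and
    (ii) projection of w back to the quantized set by minimizing the
         d-weighted squared difference (delegated to `quantize`)."""
    w = w_hat_prev - grad / d      # element-wise division, as in the text
    return quantize(w, d)

# With an identity "quantizer" this is just a preconditioned SGD step:
w_new = proximal_newton_step(np.array([1.0, 2.0]),
                             np.array([0.5, -0.5]),
                             np.array([2.0, 4.0]),
                             lambda w, d: w)
```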
Intuitively, when the curvature is low ($[{\bf d}_l^{t-1}]_i$ is small), the loss is not sensitive to the weight, and the ternarization error can be less heavily penalized. When the loss surface is steep, ternarization has to be more accurate. Though the constraint in (\ref{eq:quan}) is more complicated than that in LAB, interestingly, the following simple relationship can still be obtained for weight ternarization. \begin{prop} \label{prop:ter_alt} Let $\mathcal{Q}=\{-1, 0, 1\}$, and let the optimal $\hat{{\bf w}}_l^t$ in (\ref{eq:ter_quantize}) be of the form $\alpha {\bf b}$. For a fixed ${\bf b}$, $\alpha = \frac{\|{\bf b} \odot {\bf d}_l^{t-1} \odot {\bf w}_l^t\|_1}{\|{\bf b} \odot {\bf d}_l^{t-1}\|_1}$; whereas when $\alpha$ is fixed, ${\bf b} = {\bf I}_{\alpha/2}({\bf w}_l^t)$. \end{prop} Equivalently, ${\bf b}$ can be written as $\Pi_{\mathcal{Q}} ({\bf w}_l^t/\alpha)$, where $\Pi_{\mathcal{Q}} (\cdot)$ projects each entry of the input argument to the nearest element in $\mathcal{Q}$. Further discussion of how to solve for $\alpha_l^t$ is presented in Sections~\ref{sec:exact} and \ref{sec:approx}. When the curvature is the same for all dimensions at layer $l$, the following corollary shows that the solution above reduces to that of TWN. \begin{cor} \label{cor:twn} When ${\bf D}_l^{t-1} = \lambda {\bf I}$, $\alpha_l^t$ reduces to the TWN solution in (\ref{eq:twn}) with $\Delta_l^t = \alpha_l^t/2$. \end{cor} In other words, TWN corresponds to using the proximal gradient algorithm, while the proposed method corresponds to using the proximal Newton algorithm with a diagonal Hessian. In composite optimization, it is known that the proximal Newton algorithm is more efficient than the proximal gradient algorithm~\citep{lee2014proximal,rakotomamonjy2016dc}. Moreover, note that the relationship $\Delta_l^t = \alpha_l^t/2$ is not exploited in TWN, while TTQ neglects it completely.
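As an illustration, the two relations in the proposition above can be iterated to a fixed point. The following pure-Python sketch (our own simplification, with hypothetical variable names; not the paper's code) shows one way to do so:

```python
def ternarize(w, d, alpha=1.0, tol=1e-6, max_iter=100):
    """Alternate the two fixed-point relations:
    b = I_{alpha/2}(w), then alpha = ||b . d . w||_1 / ||b . d||_1.
    `w` is the full-precision weight vector, `d` the diagonal Hessian
    estimate (all entries positive)."""
    b = [0.0] * len(w)
    for _ in range(max_iter):
        # b_i = sign(w_i) if |w_i| > alpha/2, else 0
        b = [(1.0 if wi > 0 else -1.0) if abs(wi) > alpha / 2 else 0.0
             for wi in w]
        den = sum(abs(bi) * di for bi, di in zip(b, d))
        if den == 0:  # all entries zeroed out; stop
            break
        new_alpha = sum(abs(bi) * di * abs(wi)
                        for bi, di, wi in zip(b, d, w)) / den
        if abs(new_alpha - alpha) < tol:
            alpha = new_alpha
            break
        alpha = new_alpha
    return alpha, b
```

With constant ${\bf d}$, the fixed point returns the mean of the surviving $|w_i|$'s, which is consistent with the TWN special case in the corollary above.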
The projection step of LBNN~\citep{leng2017extremely} uses an objective similar to (\ref{eq:ter_quantize}), but without the curvature information. Besides, its ${\bf w}_l^t$ is updated with the extra-gradient in (\ref{eq:extra}), which doubles the number of forward, backward and update steps, and can be costly. Moreover, LBNN uses full-precision weights in the forward pass, while all other quantization methods including ours use quantized weights (which eliminates most of the multiplications and thus speeds up training). When (i) $\ell$ is continuously differentiable with Lipschitz-continuous gradient (i.e., there exists $\beta>0$ such that $\NM{\nabla \ell({\bf u}) - \nabla \ell({\bf v})}{2} \leq \beta \NM{{\bf u}-{\bf v}}{2}$ for any ${\bf u}, {\bf v}$); (ii) $\ell$ is bounded from below; and (iii) $[{\bf d}_l^t]_k > \beta \; \forall l,k,t$, it can be shown that the objective of (\ref{eq:obj}) produced by the proximal Newton algorithm (with the solution in Proposition~\ref{prop:ter_alt}) converges \citep{hou2017loss}. In practice, it is important to keep the full-precision weights during the update \citep{courbariaux2015binaryconnect}. Hence, we replace (\ref{eq:ter_sgd}) by ${\bf w}^t_l \leftarrow {\bf w}^{t-1}_l - \nabla_l \ell(\hat{\bf{w}}^{t-1}) \oslash {\bf d}_l^{t-1} $. The whole procedure, called Loss-Aware Ternarization (LAT), is shown in Algorithm~\ref{alg:whole} of Appendix~\ref{apdx:whole}. It is similar to Algorithm~1 of LAB \citep{hou2017loss}, except that $\alpha^t_l$ and ${\bf b}_l^t$ are computed differently. In step 4, following~\citep{li2016ternary}, we first rescale the input ${\bf x}_l^{t-1}$ with $\alpha_l$, so that multiplications in dot products and convolutions become additions. Algorithm~\ref{alg:whole} can also be easily extended to ternarize weights in recurrent networks. Interested readers are referred to \citep{hou2017loss} for details.
\subsubsection{Exact solution of $\alpha_l^t$} \label{sec:exact} To simplify notation, we drop the superscripts and subscripts. From Proposition~\ref{prop:ter_alt}, \begin{equation} \label{eq:opt_exact} \alpha = \frac{\|{\bf b}\odot {\bf d} \odot {\bf w}\|_1}{\|{\bf b}\odot {\bf d}\|_1}, \;\; {\bf b} = {\bf I}_{\alpha/2}({\bf w}). \end{equation} We now consider how to solve for $\alpha$. First, we introduce some notation. Given a vector ${\bf x} =[x_1, x_2, \dots, x_n]$ and an indexing vector ${\bf s}$ whose entries are a permutation of $\{1, \dots, n\}$, $\text{perm}_{\bf s}({\bf x})$ returns the vector $[x_{s_1}, x_{s_2}, \dots, x_{s_n}]$, and $\text{cum}({\bf x})= [x_1, \sum_{i=1}^{2}x_i, \dots, \sum_{i=1}^n x_i]$ returns the partial sums of the elements in ${\bf x}$. For example, let ${\bf a} = [1, -1, -2]$ and ${\bf b}=[3, 1, 2]$. Then, $\text{perm}_{\bf b}({\bf a}) = [-2, 1,-1]$ and $\text{cum}({\bf a}) = [1, 0, -2]$. We sort the elements of $|{\bf w}|$ in descending order, and let the vector containing the sorted indices be ${\bf s}$. For example, if ${\bf w}=[1, 0, -2]$, then ${\bf s} = [3, 1, 2]$. From (\ref{eq:opt_exact}), \begin{equation} \label{eq:tmp1} \alpha = \frac{\|{\bf I}_{\alpha/2}({\bf w}) \odot {\bf d} \odot {\bf w}\|_1}{\|{\bf I}_{\alpha/2}({\bf w}) \odot {\bf d}\|_1} = \frac{[ \text{cum}(\text{perm}_{{\bf s}}( |{\bf d} \odot {\bf w}|))]_j}{[ \text{cum}( \text{perm}_{{\bf s}}( |{\bf d}|))]_j} = 2c_j, \end{equation} where ${\bf c} = \text{cum}(\text{perm}_{{\bf s}}(|{\bf d} \odot {\bf w}|)) \oslash \text{cum}(\text{perm}_{{\bf s}}({\bf d})) \oslash 2$, and $j$ is the index such that \begin{equation} \label{eq:j} [\text{perm}_{{\bf s}}(|{\bf w}|)]_j > c_j> [\text{perm}_{{\bf s}}(|{\bf w}|)]_{j+1}. \end{equation} For simplicity of notation, let the dimensionality of ${\bf w}$ (and thus also of ${\bf c}$) be $n$, and let the operation $\text{find}(\text{condition}({\bf x}))$ return all indices in ${\bf x}$ that satisfy the condition.
It is easy to see that any $j$ satisfying (\ref{eq:j}) is in ${\mathcal S}\equiv \text{find}(([\text{perm}_{{\bf s}}(|{\bf w}|)]_{[1:(n-1)]}-{\bf c}_{[1:(n-1)]}) \odot ([\text{perm}_{{\bf s}}(|{\bf w}|)]_{[2:n]} - {\bf c}_{[1:(n-1)]})< 0)$, where ${\bf c}_{[1:(n-1)]}$ is the subvector of ${\bf c}$ with elements in the index range 1 to $n-1$. The optimal $\alpha$ ($=2c_j$) is then the one which yields the smallest objective in (\ref{eq:ter_quantize}), which can be simplified by Proposition~\ref{prop:global_alpha} below. The procedure is shown in Algorithm~\ref{alg:exact}. \begin{prop} \label{prop:global_alpha} The optimal $\alpha_l^t$ of (\ref{eq:ter_quantize}) equals $2\arg \max_{c_j: j \in {\mathcal S}} c_j^2 \cdot [\text{cum}(\text{perm}_{{\bf s}}({\bf d}_l^{t-1}))]_j$. \end{prop} \begin{algorithm} \caption{Exact solver of (\ref{eq:ter_quantize})}\label{alg:exact} \begin{algorithmic}[1] \STATE{\bf Input:} full-precision weight ${\bf w}_l^t$, diagonal entries of the approximate Hessian ${\bf d}_l^{t-1}$. \STATE ${\bf s} = \arg \text{sort} (|{\bf w}_l^t|)$; \STATE ${\bf c} = \text{cum}(\text{perm}_{{\bf s}}(|{\bf d}_l^{t-1} \odot {\bf w}_l^t|)) \oslash \text{cum}(\text{perm}_{{\bf s}}({\bf d}_l^{t-1})) \oslash 2$; \STATE ${\mathcal S} = \text{find}(([\text{perm}_{{\bf s}}(|{\bf w}_l^t|)]_{[1:(n-1)]}-{\bf c}_{[1:(n-1)]}) \odot ([\text{perm}_{{\bf s}}(|{\bf w}_l^t|)]_{[2:n]} - {\bf c}_{[1:(n-1)]}) < 0)$; \STATE $\alpha_l^t = 2\arg \max_{c_j: j \in {\mathcal S}} c_j^2 \cdot [\text{cum}(\text{perm}_{{\bf s}}({\bf d}_l^{t-1}))]_j$; \STATE ${\bf b}_l^t = {\bf I}_{\alpha_l^t/2}({\bf w}_l^t)$; \STATE{\bf Output:} $\hat{\bf{w}}_l^t = \alpha_l^t {\bf b}_l^t$. \end{algorithmic} \end{algorithm} \subsubsection{Approximate solution of $\alpha_l^t$} \label{sec:approx} In case the sorting operation in step~2 is expensive, $\alpha_l^t$ and ${\bf b}_l^t$ can be obtained by alternating the two updates in Proposition~\ref{prop:ter_alt} (Algorithm~\ref{alg:approx}).
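Returning to the exact procedure, Algorithm~\ref{alg:exact} can be transcribed in pure Python as follows (our own sketch; the fallback when no sign change is found is our addition for robustness and is not discussed in the text):

```python
def ternarize_exact(w, d):
    """Exact solver sketch for min 0.5 * ||alpha*b - w||_D^2 over
    alpha > 0 and b in {-1, 0, 1}^n, with D = diag(d)."""
    n = len(w)
    s = sorted(range(n), key=lambda i: -abs(w[i]))   # sort |w| descending
    aw = [abs(w[i]) for i in s]                      # perm_s(|w|)
    cum_dw, cum_dd, t1, t2 = [], [], 0.0, 0.0        # running (partial) sums
    for i in s:
        t1 += abs(d[i] * w[i]); t2 += d[i]
        cum_dw.append(t1); cum_dd.append(t2)
    c = [a / b / 2.0 for a, b in zip(cum_dw, cum_dd)]
    # candidate j: c_j lies between consecutive sorted magnitudes
    cand = [j for j in range(n - 1) if (aw[j] - c[j]) * (aw[j + 1] - c[j]) < 0]
    if not cand:                                     # our fallback edge case
        cand = [n - 1]
    j = max(cand, key=lambda j: c[j] ** 2 * cum_dd[j])
    alpha = 2.0 * c[j]
    b = [(1.0 if wi > 0 else -1.0) if abs(wi) > alpha / 2 else 0.0 for wi in w]
    return alpha, b
```

On a toy input with uniform curvature, this agrees with the TWN-style solution; with non-uniform curvature, the threshold shifts toward the high-curvature entries.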
Empirically, Algorithm~\ref{alg:approx} converges very fast, usually within 5 iterations. \begin{algorithm} \caption{Approximate solver for (\ref{eq:ter_quantize}).} \label{alg:approx} \begin{algorithmic}[1] \STATE{\bf Input:} ${\bf b}_l^{t-1}$, full-precision weight ${\bf w}_l^t$, diagonal entries of the approximate Hessian ${\bf d}_l^{t-1}$. \STATE{\bf Initialize:} $\alpha = 1.0, \alpha_{\text{old}} = 0.0, {\bf b}={\bf b}_l^{t-1}$, $\epsilon=10^{-6}$; \WHILE{$|\alpha - \alpha_{\text{old}}|>\epsilon$} \STATE $\alpha_{\text{old}} = \alpha$; \STATE $\alpha = \frac{\|{\bf b} \odot {\bf d}_l^{t-1} \odot {\bf w}_l^t\|_1}{\|{\bf b} \odot {\bf d}_l^{t-1}\|_1}$; \STATE ${\bf b} = {\bf I}_{\alpha/2}({\bf w}_l^t)$; \ENDWHILE \STATE{\bf Output:} $\hat{\bf{w}}_l^t = \alpha {\bf b}$. \end{algorithmic} \end{algorithm} \subsection{Extension to Ternarization with Two Scaling Parameters} As in TTQ \citep{zhu2017trained}, we can use different scaling parameters for the positive and negative weights in each layer. The optimization subproblem at the $t$th iteration then becomes: \begin{eqnarray} \label{eq:obj_two} &\min_{\hat{\bf{w}}^t} & \nabla \ell(\hat{\bf{w}}^{t-1})^\top (\hat{\bf{w}}^t - \hat{\bf{w}}^{t-1}) + \frac{1}{2}(\hat{\bf{w}}^t -\hat{\bf{w}}^{t-1})^\top {\bf D}^{t-1} (\hat{\bf{w}}^t - \hat{\bf{w}}^{t-1}) \\ \nonumber & \text{s.t.} & \hat{\bf{w}}_l^t \in \{-\beta_l^t, 0, \alpha_l^t\}^{n_l}, \; \; \alpha_l^t>0, \;\; \beta_l^t>0, \; l = 1, \dots, L. \end{eqnarray} \begin{prop} \label{prop:opt_two} The optimal $\hat{\bf{w}}_l^t $ in (\ref{eq:obj_two}) is of the form $\hat{\bf{w}}_l^t = \alpha_l^t {\bf p}_l^t + \beta_l^t {\bf q}_l^t$, where $\alpha_l^t = \frac{\|{\bf p}_l^t \odot {\bf d}_l^{t-1} \odot {\bf w}_l^t\|_1}{\|{\bf p}_l^t \odot {\bf d}_l^{t-1}\|_1}, {\bf p}_l^t = {\bf I}_{\alpha_l^t/2}^+({\bf w}_l^t), \beta_l^t = \frac{\|{\bf q}_l^t\odot {\bf d}_l^{t-1} \odot {\bf w}_l^t\|_1}{\|{\bf q}_l^t \odot {\bf d}_l^{t-1}\|_1}$, and ${\bf q}_l^t = {\bf I}_{\beta_l^t/2}^-({\bf w}_l^t)$.
\end{prop} The exact and approximate solutions for $\alpha_l^t$ and $\beta_l^t$ can be obtained in a similar way as in Sections~\ref{sec:exact} and \ref{sec:approx}. Details are in Appendix~\ref{apdx:two_scaling}. \subsection{Extension to Low-Bit Quantization} \label{sec:m-bit} For $m$-bit quantization, we simply change the set $\mathcal{Q}$ of desired quantized values in~(\ref{eq:obj}) to one with $k=2^{m-1}-1$ quantized values. The optimization still contains a gradient descent step with adaptive learning rates as in LAT, and a quantization step which can be solved efficiently by alternating minimization of $(\alpha, {\bf b})$ (similar to the procedure in Algorithm~\ref{alg:approx}) using the following proposition. \begin{prop} \label{prop:mbit_alt} Let the optimal $\hat{{\bf w}}_l^t$ in (\ref{eq:ter_quantize}) be of the form $\alpha {\bf b}$. For a fixed ${\bf b}$, $\alpha = \frac{\|{\bf b} \odot {\bf d}_l^{t-1} \odot {\bf w}_l^t\|_1}{\|{\bf b} \odot {\bf b} \odot {\bf d}_l^{t-1}\|_1}$; whereas when $\alpha$ is fixed, ${\bf b} = \Pi_{\mathcal{Q}}(\frac{{\bf w}_l^t}{\alpha})$, where $\mathcal{Q} = \left\{-1, -\frac{k-1}{k}, \dots, -\frac{1}{k}, 0, \frac{1}{k}, \dots, \frac{k-1}{k}, 1\right\}$ for linear quantization and $\mathcal{Q} = \left\{-1, -\frac{1}{2}, \dots, -\frac{1}{2^{k-1}}, 0, \frac{1}{2^{k-1}}, \dots, \frac{1}{2}, 1\right\}$ for logarithmic quantization. \end{prop} \section{Experiments} In this section, we perform experiments on both feedforward and recurrent neural networks.
The following methods are compared: (i) the original full-precision network; (ii) weight-binarized networks, including BinaryConnect \citep{courbariaux2015binaryconnect}, Binary-Weight-Network (BWN)~\citep{rastegari2016xnor}, and Loss-Aware Binarized network (LAB)~\citep{hou2017loss}; (iii) weight-ternarized networks, including Ternary Weight Networks (TWN) \citep{li2016ternary}, Trained Ternary Quantization (TTQ)\footnote{For TTQ, we follow the \emph{CIFAR-10} setting in \citep{zhu2017trained}, and set $ \Delta_l^t = 0.005 \max(|{\bf w}_l^t|) $.} \citep{zhu2017trained}, the proposed Loss-Aware Ternarized network with exact solution (LATe), approximate solution (LATa), and with two scaling parameters (LAT2e and LAT2a); (iv) $m$-bit-quantized networks (where $m>2$), including DoReFa-Netm~\citep{zhou2016dorefa}, the proposed loss-aware quantized network with linear quantization (LAQm(linear)), and logarithmic quantization (LAQm(log)). Since weight quantization can be viewed as a form of regularization \citep{courbariaux2015binaryconnect}, we do not use other regularizers such as dropout and weight decay. \subsection{Feedforward Networks} \label{sec:fnn} In this section, we perform experiments with the multilayer perceptron (on the \emph{MNIST} data set) and convolutional neural networks (on \emph{CIFAR-10}, \emph{CIFAR-100} and \emph{SVHN}). For \emph{MNIST}, \emph{CIFAR-10}, and \emph{SVHN}, the setup is similar to that in \citep{courbariaux2015binaryconnect,hou2017loss}. Details can be found in Appendix~\ref{apdx:expt_detail}. For \emph{CIFAR-100}, we use $45,000$ images for training, another $5,000$ for validation, and the remaining $10,000$ for testing. The testing errors are shown in Table~\ref{tbl:fnn}. \begin{table*}[ht] \centering \caption{Testing errors (\%) on the feedforward networks. 
Algorithm with the lowest error in each group is highlighted.} \label{tbl:fnn} \begin{tabular}{ccc|c|c|c|c} \hline & & & \emph{MNIST} & \emph{CIFAR-10} & \emph{CIFAR-100} & \emph{SVHN} \\ \hline \multicolumn{2}{c}{no binarization} & full-precision & 1.11 & 10.38 & 39.06 & 2.28 \\ \hline\hline & & BinaryConnect & 1.28 & \textbf{ 9.86 } & 46.42 & 2.45 \\ \cline{3-7} \multicolumn{2}{c}{binarization} & BWN & 1.31 & 10.51 & 43.62 & 2.54 \\ \cline{3-7} & & LAB & {\bf 1.18} & 10.50 & {\bf 43.06} & {\bf 2.35} \\ \hline\hline & & TWN & 1.23 & 10.64 & 43.49 & 2.37 \\ \cline{3-7} & 1 scaling & LATe & 1.15 & 10.47 & 39.10 & {\bf 2.30} \\ \cline{3-7} ternarization & & LATa & \textbf{1.14 } & \textbf{ 10.38 } & 39.19 & {\bf 2.30} \\ \cline{2-7} & & TTQ & 1.20 & 10.59 & 42.09 & 2.38 \\ \cline{3-7} & 2 scaling & LAT2e & 1.20 & 10.45 & 39.01 & 2.34 \\ \cline{3-7} & & LAT2a & 1.19 & 10.48 & {\bf 38.84} & 2.35 \\ \hline\hline & & DoReFa-Net3 & 1.31 & 10.54 & 45.05 & 2.39 \\ \cline{3-7} \multicolumn{2}{c}{3-bit quantization} & LAQ3(linear) & 1.20 & 10.67 & 38.70 & 2.34 \\ \cline{3-7} & & LAQ3(log) & {\bf 1.16} & {\bf 10.52} & \textbf{38.50 } & \textbf{ 2.29 } \\ \hline \end{tabular} \end{table*} {\bf Ternarization:} On \emph{MNIST}, \emph{CIFAR-100} and \emph{SVHN}, the weight-ternarized networks perform better than weight-binarized networks, and are comparable to the full-precision networks. Among the weight-ternarized networks, the proposed LAT and its variants have the lowest errors. On \emph{CIFAR-10}, LATa performs similarly to the full-precision network, but is outperformed by BinaryConnect. Figure~\ref{fig:train_cifar} shows convergence of the training loss for LATa on \emph{CIFAR-10}, and Figure~\ref{fig:train_alpha} shows the scaling parameter obtained at each CNN layer. As can be seen, the scaling parameters for the first and last layers (conv1 and linear3, respectively) are larger than the others.
This agrees with the finding that, to maintain the variance of activations and back-propagated gradients during the forward and backward propagations, the variance of the weights between the $l$th and $(l+1)$th layers should roughly follow $2/(n_l + n_{l+1})$ \citep{glorot2010understanding}. Hence, as the input and output layers are small, larger scaling parameters are needed for their high-variance weights. \begin{figure}[htbp] \begin{center} \subfigure[Training loss. \label{fig:train_cifar}]{\includegraphics[width=0.4\textwidth]{figures/cifar_train.png}} \subfigure[Scaling parameter $\alpha$.\label{fig:train_alpha}]{\includegraphics[width=0.4\textwidth]{figures/cifar_alpha_all.png}} \vspace{-.1in} \caption{Convergence of the training loss and scaling parameter by LATa on \emph{CIFAR-10}.} \label{fig:cifar} \end{center} \end{figure} {\bf Using Two Scaling Parameters:} Compared to TTQ, the proposed LAT2 always has better performance. However, the extra flexibility of using two scaling parameters does not always translate to lower testing error. As can be seen, it outperforms the algorithms with one scaling parameter only on \emph{CIFAR-100}. We speculate that this is because the capacities of deep networks are often larger than needed, and so the limited expressiveness of quantized weights may not significantly deteriorate performance. Indeed, as pointed out in \citep{courbariaux2015binaryconnect}, weight quantization is a form of regularization, and can contribute positively to the performance. {\bf Using More Bits:} Among the 3-bit quantization algorithms, the proposed scheme with logarithmic quantization has the best performance. It also outperforms the other quantization algorithms on \emph{CIFAR-100} and \emph{SVHN}. However, as discussed above, more quantization flexibility is useful only when the weight-quantized network does not have enough capacity.
\subsection{Recurrent Networks} \label{sec:rnn} In this section, we follow \citep{hou2017loss} and perform character-level language modeling experiments with the long short-term memory (LSTM) network \citep{hochreiter1997long}. The training objective is the cross-entropy loss over all target sequences. Experiments are performed on three data sets: (i) Leo Tolstoy's \emph{War and Peace}; (ii) source code of the \emph{Linux Kernel}; and (iii) \emph{Penn Treebank} Corpus~\citep{taylor2003penn}. For the first two, we follow the setting in \citep{karpathy2015visualizing,hou2017loss}. For \emph{Penn Treebank}, we follow the setting in \citep{mikolov2012context}. In the experiments, we tried different initializations for TTQ and report the best. Cross-entropy values on the test set are shown in Table~\ref{tbl:rnn}. \begin{table*}[htbp] \centering \caption{Testing cross-entropy values on the LSTM. Algorithm with the lowest cross-entropy value in each group is highlighted.} \label{tbl:rnn} \begin{tabular}{ccc|c|c|c} \hline & & & \emph{War and Peace} & \emph{Linux Kernel} & \emph{Penn Treebank} \\ \hline \multicolumn{2}{c}{no binarization} & full-precision & 1.268 & 1.326 & 1.083 \\ \hline\hline & & BinaryConnect & 2.942 & 3.532 & 1.737 \\ \cline{3-6} \multicolumn{2}{c}{binarization} & BWN & 1.313 & 1.307 & {\bf 1.078} \\ \cline{3-6} & & LAB & {\bf 1.291} & \textbf{1.305 } & 1.081 \\ \hline\hline & & TWN & 1.290 & 1.280 & 1.045 \\ \cline{3-6} & 1 scaling & LATe & 1.248 & \textbf{ 1.256 } & 1.022 \\ \cline{3-6} ternarization & & LATa & 1.253 & 1.264 & 1.024 \\ \cline{2-6} & & TTQ & 1.272 & 1.302 & 1.031 \\ \cline{3-6} & 2 scaling & LAT2e & 1.239 & 1.258 & 1.018 \\ \cline{3-6} & & LAT2a & \textbf{ 1.245 } & 1.258 & {\bf 1.015} \\ \hline\hline & & DoReFa-Net3 & 1.349 & 1.276 & 1.017 \\ \cline{3-6} \multicolumn{2}{c}{3-bit quantization} & LAQ3(linear) & 1.282 & 1.327 & 1.017 \\ \cline{3-6} & & LAQ3(log) & {\bf 1.268} & {\bf 1.273} & \textbf{1.009} \\ \hline\hline & & DoReFa-Net4 &
1.328 & 1.320 & 1.019 \\ \cline{3-6} \multicolumn{2}{c}{4-bit quantization} & LAQ4 (linear) & 1.294 & 1.337 & 1.046 \\ \cline{3-6} & & LAQ4 (log) & {\bf 1.272} & {\bf 1.319} & {\bf 1.016} \\ \hline \end{tabular} \end{table*} {\bf Ternarization:} As in Section~\ref{sec:fnn}, the proposed LATe and LATa outperform the other weight ternarization schemes, and are even better than the full-precision network on all three data sets. Figure~\ref{fig:warpeace} shows convergence of the training and validation losses on \emph{War and Peace}. Among the ternarization methods, LAT and its variants converge faster than both TWN and TTQ. \begin{figure}[htbp] \begin{center} \subfigure[Training loss. \label{fig:train_warpeace}]{\includegraphics[width=0.40\textwidth]{figures/warpeace_train.png}} \subfigure[Validation loss.\label{fig:val_warpeace}]{\includegraphics[width=0.40\textwidth]{figures/warpeace_val.png}} \vspace{-.1in} \caption{Convergence of the training and validation losses on \emph{War and Peace}.} \label{fig:warpeace} \end{center} \end{figure} {\bf Using Two Scaling Parameters:} LAT2e and LAT2a outperform TTQ on all three data sets. They also perform better than using one scaling parameter on \emph{War and Peace} and \emph{Penn Treebank}. {\bf Using More Bits:} The proposed LAQ always outperforms DoReFa-Net when 3 or 4 bits are used. As noted in Section~\ref{sec:fnn}, using more bits does not necessarily yield better generalization performance, and ternarization (using 2 bits) yields the lowest validation loss on \emph{War and Peace} and \emph{Linux Kernel}. Moreover, logarithmic quantization is better than linear quantization. Figure~\ref{fig:weight} shows distributions of the input-to-hidden (full-precision and quantized) weights of the input gate trained after 20 epochs using LAQ3(linear) and LAQ3(log) (results on the other weights are similar). As can be seen, distributions of the full-precision weights are bell-shaped. 
Hence, logarithmic quantization can give finer resolutions to many of the weights which have small magnitudes. \begin{figure}[htbp] \begin{center} \subfigure[Full-precision weights. \label{fig:full-3-bit}]{\includegraphics[width=0.245\textwidth]{figures/k-bit/full-20_0.png}} \subfigure[Quantized weights. \label{fig:quantize-3-bit}]{\includegraphics[width=0.245\textwidth]{figures/k-bit/quantized-20_0.png}} \subfigure[Full-precision weights. \label{fig:full-3-bit-log}]{\includegraphics[width=0.245\textwidth]{figures/k-bit-log/full-20_0.png}} \subfigure[Quantized weights.\label{fig:quantize-3-bit-log}]{\includegraphics[width=0.245\textwidth]{figures/k-bit-log/quantized-20_0.png}} \vspace{-.1in} \caption{Distributions of the full-precision and LAQ3-quantized weights on \emph{War and Peace}. Left ((a) and (b)): Linear quantization; Right ((c) and (d)): Logarithmic quantization.} \label{fig:weight} \end{center} \end{figure} {\bf Quantized vs Full-precision Networks:} The quantized networks often perform better than the full-precision networks. We speculate that this is because deep networks often have larger-than-needed capacities, and so are less affected by the limited expressiveness of quantized weights. Moreover, low-bit quantization acts as regularization, and so contributes positively to the performance. \section{Conclusion} In this paper, we proposed a loss-aware weight quantization algorithm that directly considers the effect of quantization on the loss. The problem is solved using the proximal Newton algorithm. Each iteration consists of a preconditioned gradient descent step and a quantization step that projects full-precision weights onto a set of quantized values. For ternarization, an exact solution and an efficient approximate solution are provided. The procedure is also extended to the use of different scaling parameters for the positive and negative weights, and to $m$-bit (where $m>2$) quantization. 
Experiments on both feedforward and recurrent networks show that the proposed quantization scheme outperforms the current state-of-the-art. \subsubsection*{Acknowledgments} This research was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Grant 614513). We thank the developers of Theano~\citep{2016arXiv160502688short}, Pylearn2~\citep{goodfellow2013pylearn2} and Lasagne. We also thank NVIDIA for the gift of a GPU card.
\section{Introduction} Product information is essential for markets to function well. Without accurate knowledge of what they are buying, consumers cannot be expected to maximize personal welfare over the set of product choices. Of course, product characteristics that affect a consumer's ability to optimize in a marketplace go beyond knowledge of its individual components. In the absence of strong institutions, consumers might greatly value a seller's reputation, or be concerned that a seller might cheat them, particularly if they are unable to observe product quality at the time of purchase. Of critical importance in this setting would be the avenues through which consumers can collect information about the product space they seek to purchase from. In this paper, I examine a market with exactly these characteristics, and attempt to measure the relative importance of different modes of information transmission in relation to consumer demand. Specifically, I study the demand for illegal products on online ``Darknet'' markets, which lack credible enforcement of laws and so must rely on incentives from vendor reputation and appropriately informed consumers to function well. Because of this, and because of the concentrated set of locations where consumers can gather information about products, I am able to plausibly estimate the impact of product sentiment on future market demand by controlling for all feasible supply-side decisions that may impact consumer demand. By ``sentiment'', I mean whether a vendor is being talked about in a positive or negative way, as measured by the word structure in the message. I find that both informal and formal communication (as measured by forum post and product review sentiment, respectively) have a large effect on consumer demand, and that these effects are comparable in magnitude.
I find that these effects scale with the sample size of the information set, but find little other evidence of heterogeneity in the effects of product sentiment across information sources. My marketplace is the illegal online market for illicit products, such as guns, narcotics, and fraud services, commonly known as the ``Darknet'' of the Internet. Due to the illegal nature of all transactions being performed\footnote{Even if the product itself is legal in a given country, no government taxes are ever paid by the vendors, so the transactions are always illegal.}, this marketplace is highly anonymous. Buyers and sellers alike are potentially culpable in the distribution of these goods, which, in developed countries such as the United States, can result in up to 15 years of imprisonment \cite{csa}. In addition to the clear personal incentives one might have to remain anonymous, anonymity is enforced by requiring users to connect to these online marketplaces via the TOR protocol. TOR is a specific type of Internet routing service that scrambles one's personal data packets by interchanging them with the data packets of other currently active TOR users \cite{dingledine2004tor}. The net result is that an observer, whether a government or a private organization, is unable to tell what IP address a traced data request is coming from, thereby offering an additional layer of security for both customers and vendors. On top of this, all transactions on the marketplace are done in Bitcoin, a popular cryptocurrency notorious for its ability to anonymize transactions. Apart from these strong protections of anonymity, Darknet marketplaces operate like virtually any other e-commerce website. Listings are organized into relevant product categories (LSD, Opioids, Firearms, etc.), and customers can use a search engine to locate any specific product they wish to consume or vendor they wish to buy from.
On an actual product page, buyers can view a description and image of the product provided by the seller, in addition to the full history of customer reviews from previous buyers; each review has a 0-to-5-star rating, a brief text description from the buyer about their purchase experience and/or the quality of the product, and an indication of how long ago the review was made. In addition, the product page has summary statistics on the vendor, such as the average rating across all products, the total volume of transactions, and where the vendor ships to/from in the world, along with a link to the vendor's profile page where one can view past reviews of all previously bought products. Figure \ref{fig:samplelisting} provides an example of a typical listing on the Darknet marketplace I use as my universe for this paper, Agora. One rather unique feature of these Darknet markets is that, in order for customers to finalize the transaction, buyers are required to leave a review once they have received the product. In order to ensure the market functions while guaranteeing agent anonymity, Darknet markets use an escrow system, where the marketplace acts as a clearinghouse and transfers the Bitcoins from the buyer to the vendor only after the consumer verifies that they have received the product they ordered and leaves their review. The reasons for this are threefold: \begin{enumerate} \item If every customer is required to leave a review, then the set of reviews is less likely to be biased towards customers with ``extreme'' experiences, either good or bad, and so the reviews might be more informative. Thus, they provide incentives for suppliers to be reliable, and for customers to go ahead with a purchase if they are initially uncertain about the product. \item Introducing a third-party intermediary further anonymizes the payment process.
\item If a customer receives an item and has legitimate evidence of the product not matching its advertised description, they may submit a complaint to the marketplace's moderators, and, if found to be true, the complaint will result in the vendor not being paid and possibly being banned from the website. This makes fraud costly and further incentivizes vendors to be honest and avoid scamming customers. \end{enumerate} However, a somewhat unfortunate consequence of this well-meaning rule is that many vendors explicitly require customers to ``Finalize Early'' when receiving products. This practice involves customers finalizing their transactions with the market clearinghouse in advance of the vendor actually shipping the product, so that the vendor does not have to wait for delivery to receive the payment, a process that can sometimes take weeks. A customer may later edit their review appropriately to match their actual experience, but it is common for a vendor to mandate that those who finalize early also give a 5-star review; otherwise they will not fulfill the transaction. Part of what originally motivated this paper's investigation into text sentiment was the common practice in Darknet markets of mandating 5-star reviews on finalize-early orders, while little was done to regulate the text accompanying the review (possibly because it is not an input into a vendor's overall rating, or possibly because text is much more difficult to filter). Perhaps because of the anarcho-libertarian ideology that surrounded the creation of these Darknet marketplaces, this practice was not explicitly banned by moderators, since vendors are transparent about this requirement if one wishes to purchase from them. Instead, moderators built an explicit ``No Finalize Early'' flag placed on \textit{all} listings in search results so that consumers could easily filter out product offerings that may require finalizing early.
Misclassifying one's product listing as no finalize early can result in fines to the vendor and, in some cases, banning of the vendor altogether from the market. One can imagine that all of this taken together implies a huge premium on a vendor's reputation. Despite the protections put in place, this market should still gravitate towards a set of well-established vendors who are well known to serve customers honestly. For this reason, changes in the attributes of a vendor's reputation (such as community attitudes towards them) should have more pronounced effects in this market, leading to more precise estimates of their effects. There is a limited literature on Darknet markets due to both their recent inception and illicit nature. Most of the work is done from a criminology perspective, explaining the difference in structures between Darknet markets and traditional illegal drug rings or cartels. One of the first studies of Darknet markets, by Christin \cite{christin2013traveling}, provides a comprehensive measurement analysis of the Darknet market during its initial stages, when it consisted of one website. Demant et al. \cite{demant2016personal} is one of the first papers to examine the Darknet from a social science perspective. They investigate whether consumers on the Darknet are redistributors or direct consumers, and find suggestive evidence that consumers on the Darknet resemble direct-to-consumer sellers, a step below in the drug supply chain. There is some relevant work on the importance of communication in online settings. Lewis \cite{lewis2006asymmetric} examines the effects of differential vendor communication or advertisement on sales in eBay's online platform for buying and selling used cars. Luca \& Zervas \cite{luca2016fake} study the incentives of vendors listed on Yelp to procure review fraud in order to boost their own ratings on the website.
\section{Consumer Model}\label{consumermodel} The market I study is one in which consumers choose to purchase different types of goods (mostly different types of drugs). I assume that demand for product $j$ in time interval $t$ is given by a Poisson distribution: \[\Pr(y_{jt} \textrm{ units purchased} \mid X_{j,t}, \mu_j) = \frac{e^{(X_{j,t}\beta + \mu_j) y_{jt}}\, e^{-e^{X_{j,t}\beta + \mu_j}}}{y_{jt}!} \] More importantly, a result of this functional form is that the conditional expectation has a straightforward exponential form: \[E[y_{jt} \mid X_{j,t},\mu_j] =e^{X_{j,t}\beta + \mu_j }\] where $X_{j,t}$ is a vector of time- and product-varying characteristics, and $\mu_j$ is a product-specific effect on average product demand. More plainly, I model each covariate of interest in $X_{j,t}$ as having a multiplicative $e^\beta$ effect on (mean) product demand for every 1-unit change. While it is well known that product demand is unobservable, since prices and quantities result from both demand and supply schedules clearing, I argue that, due to the unique properties of the market I study, a researcher can observe virtually every choice suppliers make here that is observable to customers. Thus, a researcher can plausibly control for \textit{all} supply-side decisions a seller takes that impact a consumer's choice. Assuming all supply decisions are perfectly controlled for, the remaining variation in sales can be explained as a combination of demand-side variation and noise. From here, I take relevant covariates not determined by the supplier and estimate their effect on consumer demand. My demand-side covariates mostly examine the effects of peer communication on a vendor's reputation, and hence on the demand for their products.
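As a concrete illustration of this exponential mean model, the following sketch fits a Poisson regression by Newton-Raphson on simulated data. The data, coefficient values, and function names are invented for illustration; this is not the paper's estimation code.

```python
import numpy as np

def poisson_fit(X, y, iters=50):
    """Newton-Raphson MLE for Poisson regression with log link: E[y|X] = exp(X beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lam = np.exp(X @ beta)            # conditional mean under current beta
        grad = X.T @ (y - lam)            # score vector
        hess = X.T @ (X * lam[:, None])   # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
true_beta = np.array([0.1, 0.4])
y = rng.poisson(np.exp(X @ true_beta))                 # Poisson draws with exponential mean

beta_hat = poisson_fit(X, y)
# exp(beta) gives the multiplicative effect on mean demand per 1-unit change in X
print(np.exp(beta_hat))
```

With enough observations the estimated coefficients recover the simulated ones, matching the interpretation of $e^\beta$ as a multiplicative demand effect.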
I examine the impact of two channels of reputation formation: sentiment or satisfaction displayed by past customers on a product's review page, and sentiment concerning a vendor on the designated market forum, which was specifically established so customers could communicate with each other about events or products within the market. I designate these two channels as ``formal'' and ``informal'' communication, since one exists in the formal marketplace setting, where consumers are asked to explicitly review a product, while the other exists in a social setting where users are free to spontaneously discuss whatever they wish, which sometimes includes experiences buying from certain vendors. I expect both of these to be crucially important to demand in this market. Because there are virtually no other places on the Internet for consumers to communicate with each other (largely due to criminality concerns), vendor ratings, past reviews, and the forums are the only ways for potential customers to collect information about a product or seller. It is clear, especially since reviews are required of every past customer, that reviews will be one of the primary channels for customers to learn whether a vendor is selling high or low quality products, and whether or not they are committing fraud. Ex ante, though, it is not obvious that customers would place much value on positive or negative reviews of vendors on forums. Since forum posts can be written by anyone, not just past customers, a consumer might view a vendor review on a forum as ``cheap talk'', since it is costless for a single user to send an arbitrary message on the forums, and thus they may ignore this information entirely. And so, especially since the author's utility is unlikely to be affected by the potential customer's purchase decision, the outcome may be a babbling equilibrium in which forum messages are entirely noninformative \cite{farrell1996cheap}.
At the same time, one might imagine there are sufficient incentives for forum members to accumulate social capital among their peers and obtain a reputation as a member in good standing with the community (see Wasko \& Faraj \cite{wasko2005should} for an investigation of incentives for knowledge sharing in an Internet forum setting). Given the limited means for consumers to collect information, I hypothesize that in this marketplace, vendor sentiment displayed on forums will have an influential role in customer demand. In addition, I propose the following hypotheses for how consumer demand will depend on differing sources of vendor sentiment, which I test later in the paper: \begin{itemize} \item \textbf{Hypothesis 1}: Customers will value vendor sentiment more highly as evidence accumulates; that is, if there are more reviews, all else equal, the customer, in a Bayesian fashion, will weight the mean vendor sentiment more heavily (relative to their prior) when formulating posterior beliefs. \item \textbf{Hypothesis 2}: Customers will value vendor sentiment from experienced customers more highly than sentiment from inexperienced customers. By experienced customers, I mean customers who have a longer history of interactions on the marketplace on which to draw. Experienced customers are more likely to be well informed, and so their input should be weighted more heavily by customers making purchase decisions. \item \textbf{Hypothesis 3}: Customers will value vendor sentiment from peer communication less if the product is a ``no finalize early'' purchase. The reason for this hypothesis stems from the disclosure literature. Listings with a no finalize early flag credibly commit to disclosing the product's quality within a neighborhood of its true quality, since customers can themselves assess the quality of the product before finalizing the purchase.
When this quality disclosure is credible, a sophisticated buyer and seller will be able to communicate the product's quality fully in equilibrium \cite{milgrom1986relying}. As a result, vendor sentiment will not be as important for the demand of no finalize early products; consumers should be able to rely on the vendor's own information on the product listing page, compared to products which may require customers to finalize early. \item \textbf{Hypothesis 4}: Customers will find the review sentiment for a product on its listing page more important than sentiment for the vendor's other products. I hypothesize that reviews on the product page are more tailored and relevant to the purchase of that product. For example, reviews of other products from the same vendor may be informative about a vendor's customer service, but may not contain information about the quality of the specific product a consumer is interested in purchasing. Therefore, product demand will be more impacted by review sentiment for the specific product than by sentiment about the overall vendor (as captured by review sentiment on all other product listings by the vendor). \end{itemize} \section{Data} For my analysis, I use three datasets concerning the anonymous market for drugs on the Internet, all of which were generously uploaded for public usage by Gwern Branwen \cite{gwern}. The first dataset consists of \textit{complete} weekly sets of listings available on all Darknet marketplaces from a central Darknet search engine named GRAMS. GRAMS interacts with the API of major marketplaces on the Darknet to obtain a complete set of listings on each marketplace. However, the information GRAMS provides is more basic than that contained in the datasets I describe below, a trade-off that must be weighed against its relative completeness.
For each item, GRAMS records the description, price, and seller name, along with a field for which Darknet market (Silk Road 2, Evolution, Agora, etc.) the item is being sold on. Notably, it does not contain any sales or review data for each listing. I use the GRAMS dataset as a validation set to compare with my incomplete HTML marketplace data, as well as for testing vendor responses to changes in sentiment on the extensive margin. \\ \indent I use evidence in the GRAMS dataset to determine which marketplace websites would be best to study. Figure \ref{fig:marketsizes} shows the percent of listings (from the complete GRAMS data) on the Darknet from three of the largest marketplaces from 2014 to mid 2015. The sample period can be broken up into three phases: before The Silk Road 2 went offline (red line), before Evolution went offline (blue line), and after. On November 6th, 2014, The Silk Road 2 server (started as a successor to the infamous original Darknet market, The Silk Road) was seized by U.S. customs authorities and brought offline \cite{sr2bust}. It did not enjoy as large a market share as its predecessor, partly due to the loss of a first-mover advantage, and partly due to its buggy interface. Evolution, on the other hand, enjoyed the largest market share during the sample period I later focus on (the year 2014). Known for its high degree of security (unlike Silk Road 2), Evolution suddenly, without notice, went offline on March 14th, 2015, to the bewilderment of its users. It was confirmed by moderators on the website that the owners of the site ``cashed out'' and stole the asset holdings of vendors currently in escrow on the website, estimated to be worth \$12 million USD \cite{evoamount}. Because of the abrupt and peculiar circumstances surrounding the closing of these two markets, I chose not to include them as the main object of study for this paper.
Compare this to the shutdown of Agora, the other major market during this period, whose owners publicly announced in August 2015 that they were going to take the website offline indefinitely due to the increasing risk and security concerns associated with running a Darknet website. Everyone's asset holdings on the market were returned free of charge. Considering the sincere nature of its exit from the marketplace, along with its relatively high market share, Agora seemed to be the best candidate market to study. The second dataset I use consists of approximately biweekly HTML scrapes of Agora, one of the Darknet's most prominent markets for drugs, from January 2014 to July 2015. The sample period I use for this research is the 2014 calendar year. These scrapes contain a rich amount of unstructured data on product listings on the market. Namely, they contain the complete product description provided by the vendor, an (optional) image of the product, the overall rating of the seller alongside the number of transactions the vendor has previously engaged in, shipping location, price, and the partition of products into categories. Within categories, listings (theoretically) vary only in quantity and quality. In addition, each product page contains a history of reviews for the specific listing from previous customers, much in the style of online marketplaces such as Amazon and eBay. The web pages contain everything a consumer considering a purchase might see. Due to the instability of the Tor network that is required to connect to these online marketplaces, and semi-frequent DDoS attacks on the servers hosting the marketplace, many of these website scrapes are incomplete and often only cover a fraction of the listings on the marketplace on any given day. As a result, the observed time series of any individual product listing may be randomly right- or left-censored.
For example, I might observe a listing in November of 2014 and never see it again, possibly because the vendor took the listing off the market soon after that scrape, or because the vendor left the listing on the site for an additional 2 months but subsequent scrapes failed to arrive at the URL associated with that listing when crawling the website. The crawling procedure is described in \cite{gwern} and more or less follows a recursively defined random walk. Because of this, I can treat the listings I do observe as i.i.d. observations from the marketplace, since the random nature of the webcrawl ensures missingness will be unrelated to unobservable characteristics of the product. The third and final dataset used in this paper is weekly scrapes of the Agora forums. On the Darknet, each marketplace typically has an accompanying forum where buyers and sellers can discuss offerings on the markets. Importantly, this is the main place where customers can discuss whether certain vendors are selling high or low quality items, or whether they are ``scams''. Due to the highly illegal and anonymous nature of the marketplace, it is very difficult to externally validate the quality of a listing, and these forums are virtually the only venue prospective buyers can go to for more information on a product. Consumers cannot go to typical online social networks for fear of being identified as a buyer or seller of illegal products. Since the Agora forums also require a Tor connection to anonymize users, they are a relatively safe place to discuss legitimate questions about products and sellers. The only other known place on the Internet where some discussion of Darknet markets occurs is Reddit, but the discussion there is relatively limited for confidentiality concerns. The number of threads from 2014 in the Agora subreddit was 1,551; as a comparison, in this same period, the number of forum threads in the Agora Forums was 52,058.
Even the relevant subforums to which I devote my analysis, those explicitly designated to contain topics directly related to the marketplace, have 10 times as many threads as this subreddit during this period. Due to the concentrated nature of information on Darknet markets, combining the HTML of both the marketplace and the forum means that one can theoretically obtain a complete picture of the information customers would have had access to when deciding on a purchase. For this reason, I can completely characterize the information customers obtain through both formal (reviews) and informal (forum posts) means in this marketplace, which is what makes it such a uniquely interesting market to study. The forum data contains the subject text of each individual thread, the topic the thread is in (General Discussion, Vendor Discussion, etc.), as well as the text of replies to the thread. In addition, each post is accompanied by information about the author, including the username of the author (which is the same as their username in the Agora marketplace), how active and experienced the user is, whether the author is classified as a seller by Agora, and the overall favorability of the author's posts, as measured by ``karma'', much like the favorability measures employed on social media websites such as Reddit. Like the marketplace data, these HTML scrapes are often incomplete due to bandwidth limitations of the Tor network; however, because the forums are cumulative (posts are typically not removed after they are posted, and the forum website stores all threads in the history of its existence), I am more likely to observe the vast majority of posts in the data. It is for this reason that I limit my analysis of the Agora marketplace to the calendar year 2014, even though the data extends to July 2015. This allows me to compile a more complete set of threads in my time period of interest, since one of the most complete scrapes occurred in January 2015.
Since threads are uniquely identified by a sequentially assigned integer (i.e. the first thread on the forums has ID 1, the second thread has ID 2, etc.), I can directly calculate the percentage of threads I am able to observe in the data. By this calculation, I observe 35,420 of 52,058 (68.04\%) of all the threads that have ever occurred on the forum. While some of the missing threads are due to the incompleteness of scrapes, it is likely that a substantial portion of missing threads are due to removal by moderators (for either being listed in the incorrect topic or violating the rules of the forum; spam is a common issue), so this should be considered a conservative lower bound on my coverage of the forum threads. The basic unit of observation in this paper is the time series of each listing on the marketplace. For each item, I observe every time it is sold, via the mandatory review, along with the date the transaction was completed (i.e. when the user reports they have received the product and completes the escrow transaction). From this, I construct the number of transactions finalized each day, and use this as a proxy for the number of purchases each day. This follows the methodology used to measure sales in the modest literature that has studied Darknet markets. In order to reduce the impact of measurement error in the observed date of a sale, as well as to expedite computations, I aggregate the daily time series of sales data for each item to a weekly time series. I then incorporate vendor and item characteristics (price, average review rating, vendor rating, etc.) to construct a panel dataset of market listings over time, measured in weeks. As mentioned before, these time series will suffer from censoring because of the incomplete nature of the web crawls used to generate the dataset.
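The daily-to-weekly aggregation step can be sketched as follows, keying each observed sale date by its ISO (year, week) pair; the sale dates below are invented for illustration.

```python
# Illustrative aggregation of per-item daily sale dates into weekly sales counts,
# mirroring the construction of the weekly panel. Dates here are made up.
from collections import Counter
from datetime import date

sale_dates = [date(2014, 3, 3), date(2014, 3, 5), date(2014, 3, 14), date(2014, 4, 1)]

# Key each sale by its ISO (year, week) so sales finalized in the same week pool together.
weekly_sales = Counter(d.isocalendar()[:2] for d in sale_dates)
print(sorted(weekly_sales.items()))  # [((2014, 10), 2), ((2014, 11), 1), ((2014, 14), 1)]
```

Using ISO weeks avoids ambiguity at year boundaries, since every date maps to exactly one (year, week) bucket.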
I cannot be sure that an item that shows up for the first time in the web scrapes was not listed earlier on the marketplace but simply missed by earlier scrapes. In this way, the listing time series will suffer from random censoring, since the mechanism by which a series is censored (both left and right censoring are possible) is unrelated to any characteristics of the product itself. Because of this, the effect of the censoring on statistical inference will be limited, and I ignore its impact on my coefficient estimates. Table \ref{tab:summstats} contains summary statistics of my panel dataset. As is evident in the table, product sales are relatively sparse (about 1 sale every 3 weeks per item). Across the sample period, there are 50,000 unique products from which to draw inference, sold by 2,000 different vendors. Both vendor ratings and item ratings are extremely high on average. Prices show extraordinary dispersion; this is mostly due to a couple of outlier listings that were probably mistakes by the vendor (possibly meant to state the price in USD terms); when the maximum price is excluded, the standard deviation is reduced by a factor of 10. To avoid these outliers distorting the estimations, I log-transform prices whenever they enter a regression equation. Table \ref{tab:summstats} also summarizes the \# of reviews by item / vendor, the average \# of mentions of a vendor on the forums, and the sentiment score variables used later in the analysis. I now proceed to discuss how I take the rich text data from both reviews and forum threads and turn it into useful covariates for regression analysis. \section{Transforming Communication into Data} \subsection{Vendor Sentiment in Text} In the absence of verifiable information from sellers, buyers must seek alternative means of acquiring information on the quality or veracity of a particular product. I consider two avenues by which consumers acquire information. The first is through customer reviews.
Since these reviews are required, by design they should convey a relatively broad cross-section of past customer experiences. In practice, reviews are heavily biased upwards, resulting in a coarsening of the information available to consumers. There is immense pressure on Agora for customers to give a vendor a 5-star rating for a purchased product, in part because of the potential danger one risks by giving a distributor of illegal goods one's address (even if it is just a nearby P.O. Box). Many vendors explicitly request it on their listing page, much like finalizing early. In fact, in my sample 97.7\% of reviews are 5 out of 5 stars (one can rate a purchase from 0 to 5 stars). Vendors care a lot about this rating, since a vendor's overall rating (an unweighted average of reviews on their products) is embedded into every active product listing by the vendor. In contrast, there are no widely distributed ``averages'' for the text required to accompany each review. For this reason, I hypothesize that text, even in reviews with biased ratings, can provide valuable information to a prospective customer. In order to extract this information from the text, I use natural language processing methods to construct a ``sentiment'' variable associated with each review text that measures the relative positive or negative sentiment associated with a given product review, based on how certain words associate with customer sentiment about their purchase experience. After pre-processing the review text (removing stopwords and non-alphanumeric characters, and converting it to lowercase) using the Natural Language Processing Toolkit (NLTK) \cite{bird2009natural}, I decompose each review text into a matrix of word counts and normalize the rows to sum to 1 (so the matrix consists of rows of word frequencies in an observed review). I then use multinomial inverse regression (MNIR) \cite{taddy2013multinomial} to relate these frequencies to positive or negative product sentiment.
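The preprocessing and row-normalization steps described above can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: the tiny stopword list substitutes for NLTK's, and the two reviews are invented.

```python
# Minimal sketch of review preprocessing and a row-normalized word-frequency matrix.
import re
from collections import Counter

STOPWORDS = {"the", "a", "and", "was", "is", "i"}   # placeholder for NLTK's stopword list

def tokenize(text):
    """Lowercase, keep alphanumeric tokens only, drop stopwords."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

reviews = ["Great vendor, fast shipping!", "Total scam, the product never arrived."]
vocab = sorted({t for r in reviews for t in tokenize(r)})

rows = []
for r in reviews:
    counts = Counter(tokenize(r))
    total = sum(counts.values())
    rows.append([counts[w] / total for w in vocab])  # frequencies sum to 1 per review

assert all(abs(sum(row) - 1) < 1e-9 for row in rows)
```

Each row of the resulting matrix is the word-frequency vector for one review, the input that MNIR relates to the review's star rating.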
Specifically, I assume that the text of customer reviews is generated from a sequence of i.i.d. draws of words $w_i$ from a multinomial \[w_i \sim Multinomial(\textbf{p}, m_i), \quad p_j = \frac{e^{\alpha_j + \phi_j r_i + \epsilon_{ij}}}{\Sigma_k e^{\alpha_k + \phi_k r_i + \epsilon_{ik}}} \] whose probability vector $\textbf{p}$ is determined by a latent measure of the satisfaction of customer $i$, $r_i$. Here, $k$ indexes words across the entire vocabulary considered, while $\alpha_j$ and $\phi_j$ denote word-specific intercept and slope coefficients that relate each word's probability to $r$. I measure customer satisfaction by the star rating associated with each review text. Even though this variable suffers from the bias and coarsening issues just discussed, this regression method should be able to identify strongly positive and negative text that accompanied reviews, and give a better sense of whether, for example, the sample of 5-star reviews is really consistently associated with text as positive as the perfect rating given to vendors. To calibrate this multinomial model's hyperparameters, I choose 10 regularization paths for the estimation (for computational tractability), and a hyperparameter choice of $\gamma=0$, which amounts to a complexity penalty equivalent to that found in lasso regression. I also drop all words that do not appear at least 5 times in the set of $\approx 200,000$ reviews I study. I do this because it reduces the vocabulary I need to consider in the model (from about 23,000 to 6,000 unique stemmed words), which significantly eases the estimation computationally. At the same time, these ``scarce'' words are unlikely to be very informative about consumer sentiment and might result in extreme estimates for the score loadings $\boldsymbol{\phi}$ that are based on only a couple of observations.
The cutoff of 5 occurrences is an arbitrary choice that seemed reasonable in the tradeoff between limiting extreme knife-edge estimates for $\boldsymbol{\phi}$ and excessive censoring of text. After estimating the multinomial model of word draws based on sentiment, I extract the score loadings $\boldsymbol{\phi}$ associated with the star rating for each word and apply these to the observed review text to construct the sufficient reduction projection scores $s_i= \Sigma_{k\in i} w_k\phi_k$ for each review. This score captures all the available information in the text related to customer sentiment, in the sense that the review rating is orthogonal to the review text conditional on the score variable \cite{taddy2013multinomial}. Table \ref{tab:extremewords} shows the 20 most negative and positive words from the estimation in terms of their associated scores, alongside the observed frequency of each word and the average rating of reviews that contain each of these words. Both the negative and positive words are stems we might expect; 3 of the 20 most negative words allude to the product being a ``scam'', and the others mostly refer to dishonesty on the part of the vendor or warn other customers to steer clear. In contrast, the positively scored words appear to signal praise for the vendor, or mention a positive experience with the delivery of the product. Notably, only 4 of the 40 words that occupy the extremes of the score spectrum explicitly refer to the quality characteristics of the product itself. The other high-scoring words appear to be concerned with the quality of the vendor themselves. This may be because I am estimating sentiment from reviews across all types of product categories. Within each category, different words may be used to describe a product as low or high quality. I would not expect the word ``medibud'', for example, to be associated with a positive review of prescription pills.
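The sufficient-reduction projection $s_i = \Sigma_{k \in i} w_k \phi_k$ is itself a simple weighted sum, which the following sketch illustrates. The loading values and the example review are invented; real loadings come from the MNIR fit.

```python
# Sketch of the sufficient-reduction score: apply word loadings phi to a review's words.
phi = {"scam": -1.8, "fake": -1.2, "fast": 0.6, "perfect": 1.1}  # hypothetical loadings

def sr_score(tokens, loadings):
    # Words with zero (or absent) loadings contribute nothing to the score.
    return sum(loadings.get(t, 0.0) for t in tokens)

review = ["perfect", "delivery", "fast", "thanks"]
print(sr_score(review, phi))  # 1.1 + 0.6 = 1.7
```

A review with no loaded words scores exactly zero, which is precisely the ``uninformative'' case discussed later in the section.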
At the same time, all products listed on Agora suffer from common concerns and issues with the delivery of the goods or the reliability of a vendor. In this sense, my model is limited in that it will likely have difficulty identifying sentiment concerning ``product-specific quality'' as opposed to ``vendor-specific quality''\footnote{One could imagine training an MNIR model separately on reviews in each sub-category, but this approach would be limited when comparing to the forum data.}, but at the same time this identification strategy will prove useful when I analyze vendor quality sentiment in the forums. Another limitation of my approach is that I do not account for the estimation error from the MNIR model in my later regression analysis when I use the scores $s_i$ as inputs; I take the output of the model as the true estimates. Correcting for this would involve bootstrapping over the sample of reviews, re-estimating the MNIR model, and then performing all subsequent analysis using each resampled set of reviews. Due to the intense computational requirements of both the MNIR estimation and the later regression analysis I perform, it is excessively cumbersome to do this analysis multiple times at this stage, and I ignore any error from the MNIR model. For this reason, standard errors in the later analysis may be biased downward. The sentiment score $s_i$ is the main covariate used in my analysis. Of the 6,013 score loadings I extract from the multinomial model, about 34\% have non-zero estimates. This is in part due to my choice of a strict lasso penalty, rather than a semi-concave one, but also due to the limited variation in the review rating variable: the vast majority of reviews are 5/5 stars. Nonetheless, there is substantial variation in the sentiment scores I construct from the text using the MNIR model.
Only 7\% of the reviews I consider have sentiment scores of zero, which is the case when a review text is ``uninformative'' with regard to customer sentiment (as measured by not containing any words with nonzero score magnitudes). Figure \ref{fig:sentimentbyStars} shows a boxplot of sentiment by the star rating of the review. While there is a clear monotonic relationship with the number of stars given, there is substantial variation in sentiment within each star rating as well. In particular, we see that the most variation occurs in the 5-star category, as might be expected since it is by far the most populated. The range of the upper and lower hinges of the 5-star boxplot demonstrates that within 5-star reviews, there is enough variation in text sentiment to cover virtually the entire support of observed sentiment in 1-4 star reviews. This suggests that there is in fact a non-trivial number of 5-star reviewers whose text aligns more with the language used in low-rating reviews. \subsection{Projecting the MNIR model onto Forum Data} In addition to analyzing customer review text, I analyze the text of posts in each forum thread on the Agora forums. I now describe the process through which I analyze forum text as a complement to review text. The unit of measurement for analyzing forum text is the forum post; a post is classified as a ``mention'' if it satisfies one of two requirements. The first is that the text of the post specifically includes a vendor's username in its body. This is a direct mention. The second is that the post is located within a forum thread for which either: \begin{enumerate} \item The title of the forum thread contains the vendor's username. \item The first post in the thread (the post by the originator of the thread) contains the vendor's username.
\end{enumerate} The idea behind this latter identification strategy is that if a thread's title or first post contains a vendor's username, I assume the topic of the thread is the mentioned vendor, and specifically the vendor's quality. If a post mentions multiple vendors in the same body of text, I drop the post from my main analysis, since it is impossible for me to evaluate within a forum post which positive or negative words are directed at which vendor. Later, as a robustness check, I rerun my baseline analysis counting mentions of multiple vendors in a single post as separate but identical ``reviews'' of each individual vendor's quality. In practice this does not happen often. In order to reduce the noise I might introduce by including mentions that are not actually discussing the quality of the vendor, I limit my analysis to forum posts within 4 sub-forums: General Discussion, Vendor Discussion, Product Offers, and Product Categories. The sub-forums I exclude are: Referral Links, Newbie Section, Generic Randomness, German, Security discussion, Bugs, Philosophy, New features, and News. If forum posts are correctly categorized, it is unlikely that any of these excluded subforums would contain a mention of a vendor's quality in relation to their products or customer service. Additionally, I exclude any vendor whose name is an English word, according to the Natural Language Toolkit's corpus of English words. I do this because it would be impossible to distinguish mentions that refer to the actual vendor in question from those that merely happen to use the English word the vendor's name coincides with. In my later analysis, to be consistent, I also throw out all reviews and product listings by vendors whose name is an English word, since these vendors may be discussed on the forums but my mention identification strategy would fail to identify them.
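The mention rules above amount to a simple rule-based classifier, sketched below on an invented thread. For brevity the sketch omits the multi-vendor drop rule and the English-word filter; it only shows the direct-mention and thread-topic checks.

```python
# Rule-based sketch of the vendor-"mention" classifier: a post counts as a mention
# if the vendor name appears in its body (direct mention), or if the thread's title
# or first post names the vendor. Thread data here is invented.
def find_mentions(thread, vendors):
    """Return {vendor: [post indices]} for posts counted as mentions of each vendor."""
    mentions = {}
    for v in vendors:
        thread_about_v = (v in thread["title"].lower()
                          or v in thread["posts"][0].lower())
        for i, post in enumerate(thread["posts"]):
            if v in post.lower() or thread_about_v:
                mentions.setdefault(v, []).append(i)
    return mentions

thread = {"title": "beware goingpostal, they are scammers!!!!",
          "posts": ["got ripped off by goingpostal last week",
                    "same experience here, avoid"]}
print(find_mentions(thread, ["goingpostal"]))  # {'goingpostal': [0, 1]}
```

Note that the second post contains no username yet still counts, because the thread's title names the vendor, which is exactly the second requirement above.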
About 5\% of the vendors in my data (the set of users who listed a product at least once in the listings dataset in 2014) have an English word as a name. I also remove one specific user, whose name is ``ebay'', who had posts associated with them in the forums that were actually discussing counterfeiting methods on the popular auction website eBay. Finally, I remove from my sample all posts by a vendor where they self-advertise (since consumers will presumably discount this information given the source); these constitute a very small portion of all observed posts. Using this definition of vendor mentions, I identify 61,679 unique posts that mention a vendor. Of these, 17,234, or 28\%, are direct mentions, and the rest are posts in forum threads whose subject line includes a vendor's username. Some examples of the titles of these forum threads are ``californiadreamin vendor. legitimate or not? feedback?'', ``beware goingpostal, they are scammers!!!!'' and ``current whitelist of opiate vendors (updated on the regular)''. Like any such rule, my choice of how to identify vendor mentions in the Agora forums is somewhat arbitrary and has its limitations. I cannot rule out that some threads might veer off-topic and discuss an entirely new subject, unrelated to the vendor's quality. Conversely, there may be some posts that are in fact reviewing a vendor's quality but fail to name the vendor in their text body (perhaps a reply to a direct mention of a vendor). Additionally, there will be some posts I do not capture simply because a particular thread was never scraped during the data collection process. A primary interest of this paper is to compare the relative impact of formal and informal sentiment concerning vendor quality. I take all 61,679 posts flagged as mentioning a specific vendor on the Agora forums and convert their text into a vendor sentiment variable, analogously to the review text setting.
However, in this setting I have no labeled dependent variables; there is no numerical rating of a vendor associated with each forum mention. So, I treat these forum posts as coming from the same population as the customer reviews, and apply the MNIR model previously estimated on customer reviews to the forum posts. Specifically, I preprocess all post text as I did with reviews, treat each individual post as a unit of observation, and apply the score loadings $\boldsymbol{\phi}$ I estimated with the labeled review data to the sequence of words $w_p$ contained in each post, generating a vendor sentiment score for each forum post, $fs_p$. To my surprise, a smaller fraction of forum posts are ``uninformative'' (i.e. $fs_p=0$): 3\% of the forum posts are uninformative, while about 7\% of formal reviews are uninformative. One can imagine that, even though posts mention a specific vendor, and are in one of the 4 appropriate sub-forums, some of these flagged posts are discussing an aspect of the vendor that is unrelated to consumer sentiment. There is a substantial amount of variation in the sentiment in forums as well. Figure \ref{fig:informedSentimentbySetting} shows the density function of informative (non-zero) customer sentiment scores in both the set of forum posts and customer reviews. The density function that includes \textit{all} consumer sentiment is similar in appearance but with a large spike at zero, which makes the plot more difficult to interpret due to the scaling. The distributions of forum and review sentiment appear comparable in shape, but the distribution of forum sentiment is clearly shifted to the left of review sentiment. An interpretation of this is that sentiment on the forums is more negative towards vendors than in the formal reviews.
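Mechanically, applying the estimated loadings to a post reduces to a weighted word count, $fs_p = \boldsymbol{\phi}' w_p$. A minimal sketch, with hypothetical loadings (the real $\boldsymbol{\phi}$ comes from the fitted MNIR model):

```python
# Score a forum post with previously estimated MNIR loadings (word -> phi).
# Words outside the fitted vocabulary contribute nothing, so a post with no
# loaded words receives fs_p = 0, i.e. it is "uninformative".
from collections import Counter

phi = {"scam": -1.2, "fast": 0.8, "stealth": 0.5}  # hypothetical loadings

def score_post(tokens, phi):
    counts = Counter(tokens)
    return sum(phi.get(word, 0.0) * n for word, n in counts.items())

fs_p = score_post(["fast", "shipping", "great", "stealth"], phi)
print(round(fs_p, 6))  # 1.3
```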
This suggests that the forums may serve as an unrestricted setting where users can communicate their actual opinion on vendors more freely without the threat of blackmail that might accompany a very negative product review. Or it may be that the set of users who post on forums and those who actually purchase products simply differ in their attitudes or beliefs towards vendors. Having generated scores $\{s_i\}$ and $\{fs_p\}$ for reviews and forum posts, respectively, I (separately) standardize the non-zero scores in each setting to have mean 0 and standard deviation 1\footnote{I specifically excluded the zero-score text from the mean and standard deviation calculations because standardizing these reviews would cause them to take on a non-zero value, which is difficult to interpret since these scores are zero precisely because they have no informative text in their messages. Note that the overall forum and review sentiment variables will still have mean 0 after this transformation, but a standard deviation lower than 1.}. Doing so eases the interpretation of coefficient estimates for later analysis, so that a 1 standard deviation increase in (informative) sentiment will result in an additive $\beta$ or multiplicative $e^\beta$ increase in the dependent variable I consider, depending on whether I use a linear or exponential mean model. For each vendor, I then construct an aggregate average vendor sentiment variable for the history of reviews of vendor $v$ up until week $t$: \[\widetilde{s}_{v,t} = \frac{\sum_{i=1}^{n_v} s_{i,v} \mathbbm{1}_{\{t(i,v) < t\}}}{\sum_{i=1}^{n_v} \mathbbm{1}_{\{t(i,v) < t\}}} \] where $t(i,v)$ maps a review by consumer $i$ on a product sold by vendor $v$ to the week it was posted.
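The running average $\widetilde{s}_{v,t}$ can be computed as an expanding mean; a sketch in pandas with hypothetical data, where shifting by one row approximates the strictly-before-week-$t$ condition (ties within a week aside):

```python
import pandas as pd

reviews = pd.DataFrame({
    "vendor": ["v1", "v1", "v1", "v2"],
    "week":   [1, 2, 3, 1],
    "score":  [1.0, -1.0, 0.5, 2.0],
}).sort_values(["vendor", "week"])

# shift(1) ensures each row's average uses only earlier reviews of the vendor;
# a vendor's first review has no history, hence NaN (the "missing dummy" case)
reviews["s_tilde"] = (reviews.groupby("vendor")["score"]
                      .transform(lambda s: s.shift(1).expanding().mean()))
print(reviews["s_tilde"].tolist())  # [nan, 1.0, 0.0, nan]
```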
Analogously, I construct for each vendor an average sentiment variable from forum mentions; that is, the mean of forum vendor sentiment scores $fs_{p,v}$ until week $t$: \[\widetilde{fs}_{v,t} = \frac{\sum_{p=1}^{m_v} fs_{p,v} \mathbbm{1}_{\{t(p,v) < t\}}}{\sum_{p=1}^{m_v} \mathbbm{1}_{\{t(p,v) < t\}}}\] where $m_v$ is the total number of forum mentions of a vendor in the sample period. Additionally, I construct total counts of the number of a vendor's product reviews / vendor mentions up until time $t$ in the same manner (denoted $\widetilde{n}_{v,t}$ and $\widetilde{m}_{v,t}$, respectively). I choose to construct review sentiment variables at the vendor level (rather than at the item level) so that average vendor sentiment between the two settings (reviews and forums) can be more directly comparable. Since discussion on forums mostly concerns vendors, rather than the specific items they sell, identifying specific mentions of products would have been challenging and would probably have yielded significantly fewer forum posts to work with. Because some vendors may not have a history of reviews or forum mentions at any given point in time, I always include a missing dummy when including these variables in a regression analysis. Tables \ref{tab:reviewSentCorr} and \ref{tab:forumSentCorr} contain regression output from regressing (standardized) sentiment in each setting on virtually all observables available. We see, in the review setting, that after adding product reviewed and review week fixed effects, there is a monotonically increasing relationship between sentiment scores and stars given in the review (as should be the case). In addition, other interesting relationships include that buyers with a longer transaction history are overall more positive in their review sentiment.
Also, the buyer's personal rating (vendors may voluntarily review buyers who buy from them) does not display a clear relationship, except that those without a perfect 5-star rating give more negative text feedback. As may be expected, reviews on no finalize early products are more positive than those that ask customers to finalize early (this effect is estimated from listings that switch from no finalize early to finalize early, or vice versa, during their listing time). In regards to forum sentiment, we see that experienced forum members are less likely to be positive in their sentiment. This descriptive result is suggestive of experienced users being sophisticated in how they share information with the community: they may know to be ``good'' to vendors in their product reviews, yet on forums, they are less worried about how a vendor might punish their negative report, and can speak more openly. After controlling for a variety of fixed effects (username of the poster, the vendor they are mentioning, and the week the post is published), we find no correlation between karma, a social media metric similar in spirit to Facebook likes, and how positive or negative forum post sentiment is. I also include in these tables one model regressing the number of words in a review or mention on observables. We see that non-5-star reviews on average have about 3 more words, which may be expected since a review $<$ 5 stars represents a deviation from the default and so may indicate more thought is being given to these reviews. We also see some evidence of review fatigue: more experienced customers have, on average, fewer words in their reviews than newer buyers on the market. \subsection{Item Listing Text} A characteristic of a product listing that does not cleanly enter a regression equation is the listing text description of the product, along with the title of the listing.
It is easy to see that changes in the description can in fact cause changes in sales: perhaps by editing the text to be less misleading about the product's quality, or making the title more engaging to consumers scrolling through a list of products when searching the Darknet. Unlike the previous section, where I was able to compare reviews against each other because of associated ratings, there are no labels on the listing text, so I cannot say which listing texts represent effective or ineffective advertisement of the product. Thus, I need a methodology other than MNIR in order to successfully control for changes in listing text over the lifespan of a given product. As before, I preprocess the text of listings and then transform the title and description text (separately) into word frequency matrices for all of the sample listings. If all the information in the text relevant to consumers is in these word frequency matrices, then I could simply include the word matrices as additional controls in my regression to capture possible changes in sales due to changes in the phrasing of the listing. However, due to the massive size of these word frequency matrices (14,000 unique word stems in titles, and 71,000 unique word stems in descriptions), this is computationally infeasible. To circumvent this, I decompose the word frequency matrices into principal components (PCs) that can explain a large portion of the variation in the listing text data. By using the lower-dimensional principal components as controls, I can approximately control for the word frequencies. Due to the size and sparsity of these word frequency matrices, I use truncated singular value decomposition (TSVD) to retrieve the initial PCs of each listing text description.
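The decomposition step (together with the joint title-description stacking described next) can be sketched with scikit-learn's TruncatedSVD, which operates directly on sparse input. The matrices here are tiny random stand-ins for the real 14,000- and 71,000-column ones:

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
titles = csr_matrix(rng.random((50, 30)))        # listings x title word freqs
descriptions = csr_matrix(rng.random((50, 80)))  # listings x description word freqs

def row_normalize(m):
    # each row sums to 1, so rows of the joint matrix below sum to 2
    return csr_matrix(m.multiply(1.0 / m.sum(axis=1)))

joint = hstack([row_normalize(titles), row_normalize(descriptions)])

svd = TruncatedSVD(n_components=10, random_state=0)
pcs = svd.fit_transform(joint)  # low-dimensional controls for the regression
print(pcs.shape)  # (50, 10)
```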
Furthermore, I do this principal component decomposition on the ``joint'' title-description matrix that is obtained by merging the title and description word frequency matrices along observations, so that all rows now sum to 2, and each contains the frequencies of words within titles and descriptions separately. The motivation for this is that, rather than extracting, for example, the first 100 PCs of title and description separately and then including all 200 PCs in my regression specification as controls, I can exploit the strong correlations that are likely to exist between words present in the title and words present in the item description. Doing a PC decomposition on the joint matrix will therefore require fewer PCs to explain the same amount of variation in the data. I chose not to simply lump the title word counts into the counts of description words since, on average, description text is much longer than title text, so the title text would contribute much less to the estimated principal components. The joint matrix approach I use more equitably weights the principal components explaining variation in the title and description listing text. \section{Estimation} This paper limits its inference to the effects of our covariates of interest on expected product demand, for which Poisson regression is asymptotically robust. This is the case even if the true underlying demand distribution differs from Poisson, as long as the true mean function shares the exponential mean functional form \cite{cameron2013count}. An alternative interpretation is that the true underlying model of product demand has covariates that are multiplicative factors of product demand (as opposed to additive, as is necessary in the linear regression model). This is a fairly general assumption in this setting, and if satisfied, we can consistently estimate the multiplicative parameters $e^\beta$ and their standard errors, robust to misspecification.
Given my large sample size (about 600,000 observations of weekly sales data at the item level), I am not too concerned with the estimates deviating greatly from their asymptotic counterparts. A simple histogram of sales per week by item listing (Figure \ref{fig:sales_counts}) clearly indicates that weekly sales counts of individual products decay roughly exponentially in frequency, as might be expected. In fact, when we compare the empirical distribution of observed sales against a Poisson distribution with mean parameter $\lambda$ equal to the sample mean of weekly item sales, we see that the Poisson distribution, even for the unconditional empirical distribution, appears to be a fairly good approximation. One notable feature of the data is that while it appears similar in shape to a Poisson distribution with the same mean, it also appears overdispersed, as is evident from the additional mass on the $0$ and $\geq7$ sales bins. Indeed, the sample variance is about 4 times as large as the sample mean (1.21 versus 0.28). Given information on sales, sentiment, and other product/vendor characteristics, I estimate the relationship between my covariates of interest and product demand via Poisson regression. Based on the unconditional distribution of sales, modeling weekly product sales as Poisson appears to be a reasonably good approximation. In addition to this, I prefer Poisson regression (as opposed to other count data regression models) due to its closed form solution to incorporating fixed effects, which allows me to non-parametrically control for unobservable time-invariant product characteristics.
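The overdispersion diagnostic described above (sample variance several times the sample mean, with excess mass at zero relative to a Poisson benchmark) can be illustrated on simulated counts; the negative binomial draw here is a hypothetical stand-in for the real sales data:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
# overdispersed counts: a negative binomial has variance > mean
sales = rng.negative_binomial(n=0.4, p=0.6, size=10_000)

mean, var = sales.mean(), sales.var()
print(f"mean={mean:.2f}, variance={var:.2f}")  # variance exceeds the mean

# a Poisson benchmark with the same mean under-predicts the mass at zero
empirical_p0 = (sales == 0).mean()
poisson_p0 = poisson.pmf(0, mean)
print(f"P(0): empirical={empirical_p0:.3f}, Poisson={poisson_p0:.3f}")
```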
My regression model for the baseline specification is as follows: \begin{multline*} \log(E[y_{j,t}|\mathbf{X}]) = \beta_1 \widetilde{s}_{v,t} +\beta_2 \widetilde{fs}_{v,t}\\ +\alpha_1 \log(\widetilde{n}_{v,t}) + \alpha_2 \mathbbm{1}\{\widetilde{n}_{v,t}=0\}+ \alpha_3 \log(\widetilde{m}_{v,t}) + \alpha_4 \mathbbm{1}\{\widetilde{m}_{v,t}=0\} \\ +\mathbf{\gamma_1} \mathbf{X_{v,t}} + \mathbf{\gamma_2} \mathbf{X_{j,t}} + \mu_j + \delta_t \end{multline*} where $\mathbf{X_{v,t}}$ indicates a matrix of vendor controls, $\mathbf{X_{j,t}}$ indicates a matrix of product controls, and $\mu_j$ and $\delta_t$ are product and time fixed effects, respectively. I include a variety of controls to remove all supply-side variation available to consumers from the sales data when estimating my model relating vendor sentiment to product demand. As I have stated before, by including variables for \textit{all} decisions made by the vendor that are observable to the consumer, I can ensure that a change by the vendor to, for example, flag the listing as no finalize early, is not confounded with the measured effect of vendor sentiment on demand. These include: \begin{itemize} \item Vendor Controls: the average vendor rating, a missing vendor rating indicator, and indicators for the past volume category the vendor selling the product is in. For example, in Figure \ref{fig:samplelisting}, the vendor's volume category is ``6$\sim$10 deals'' and their rating is 5/5. \item Item Controls: This includes an indicator for the listing being flagged as no finalize early, the natural log of the price of the item, the average item rating based on all reviews of that particular item, an indicator for zero item-specific reviews, and an indicator for the product category (absorbed by item fixed effects, but present when the regression is estimated with random effects).
In addition, I control for the item description of the listing and the text title with the first 100 principal components of the joint word frequency matrix, which explain 1/3 of the variation in the item listing text data. As a robustness check, I re-ran my baseline specification with 300 PCs, explaining 1/2 of the variation, and found results to be nearly identical in my covariates of interest.\footnote{Results of these regressions are available by request.} \end{itemize} By also including time fixed effects (indicators for each unique week in the dataset), I am able to implicitly control for any changes in aggregate market conditions, such as the stability of the TOR network, the Bitcoin exchange rate, and seasonal changes in sales (on both the demand and supply side). By controlling for all of these alternative channels that may affect sales, I can then cautiously interpret coefficients from regressing sales of a listing on vendor sentiment as responses in demand (since responses in supply that affect a consumer's choice are already conditioned on). \section{Results} \subsection{Baseline Specification} I now introduce the main regression results of the paper. Table \ref{tab:baselineRegs} displays estimates from a Poisson panel regression of sales on vendor sentiment in both the forum and customer review setting, along with the (logged) \# of reviews \& mentions. The panel regression is done with respect to each product listing (so it includes product fixed effects), along with cluster-robust standard errors at the item level to allow for arbitrary correlations across observations of an individual listing. I iteratively add more controls column-by-column to show the persistence of the relationships that are estimated. The main specification of interest is column (5), where I control for all relevant supply responses, in addition to time fixed effects, so that the coefficients may be interpreted as demand responses to sentiment (i.e. elasticities).
The table shows multiplicative effects of a 1 unit increase in each regressor. Both measures of sentiment are averages of individual review/post sentiment that are normalized to have standard deviation 1 and mean 0 (among non-zero scores). The interpretation of the coefficient on, for example, Avg. Vendor Review Sentiment, is that an increase (decrease) in vendor sentiment among all reviews by 1 standard deviation leads to an 8.75\% increase (8.05\% decrease) in quantity demanded for the offered product. I find in the baseline regression that vendor sentiment on forums and in reviews are approximately equal in their influence on demand: an average increase of 1 standard deviation in the sentiment of forum posts mentioning a vendor leads to a 10.7\% increase in sales, which is not statistically significantly different from the impact of average vendor sentiment in reviews. Additionally, both have the expected sign: more positive vendor sentiment leads to higher demand. This verifies the hypothesis I had entering this paper: forum-based vendor sentiment has a substantial impact on consumer demand, and consumers in fact appear to value this information at least as highly as the information they glean from the text present in product reviews of a vendor. This is somewhat surprising, given the potential for mischaracterizations or lies in mentions on forums, and my somewhat crude method for identifying relevant forum posts. It underscores the potential importance of a designated space for Darknet market participants to freely exchange information on vendors. Besides the sentiment measures, other variables for the most part have the expected sign: average item rating (on a 0-5 scale) is strongly positively associated with increased demand, price has a negative (insignificant) multiplicative effect on sales, and flagging one's listing as no finalize early has a positive effect.
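The asymmetric percentages quoted above are two sides of the same coefficient: a one-standard-deviation increase multiplies expected sales by $e^{\beta}$, while a decrease multiplies them by $e^{-\beta}$. A quick check:

```python
import math

beta = math.log(1.0875)  # coefficient implied by the +8.75% effect

increase = math.exp(beta) - 1   # +1 s.d. of review sentiment
decrease = math.exp(-beta) - 1  # -1 s.d. of review sentiment
print(f"{increase:+.4f}")  # +0.0875
print(f"{decrease:+.4f}")  # -0.0805
```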
Note that we cannot interpret the coefficients on price and no finalize early as effects on demand, since the response in sales to these changes could be endogenously driven by the vendor's decision to change the price / flag (i.e. low sales lead them to lower the price). Vendor rating, somewhat surprisingly, is insignificant when we include all of our additional controls. This may reflect the fact that the vendor review sentiment variable is a ``sufficient statistic'' for the vendor's average rating, and so its inclusion in the regression makes the vendor rating variable irrelevant. Conditional on scores $s_i$, the review ratings $r_i$ should be orthogonal to the words contained in the review text. A more positive interpretation of this finding is that the star ratings in reviews across a vendor's products are informative insofar as they reflect the information contained within the text; no additional demand-relevant information is present in the numerical rating. This cannot be completely true, however, given the strongly positive association between sales and the product-specific average rating. It may instead be the case that information on vendor-wide attributes (such as communication and customer service) is transmitted exclusively through review text, while product attributes (such as quality of the good) may not be completely conveyed in the review text. The negative relation between the total count of past reviews and future sales simply captures the limited stock of vendors. Since past reviews exactly coincide with past sales, vendors with a fixed supply will mechanically have to sell less as past sales accumulate. \subsection{Differential Effects} Having established the importance of overall average vendor sentiment on product demand, I now look to uncover some heterogeneous effects of vendor sentiment on product demand. Specifically, I test the hypotheses I proposed in section \ref{consumermodel}.
These hypotheses are formally tested in Table \ref{tab:diffEffectRegs}, where I control for all the supply variation I considered in column (5) of Table \ref{tab:baselineRegs}. \begin{itemize} \item \textbf{Hypothesis 1}: This is tested in column (1) of Table \ref{tab:diffEffectRegs}. I test this by interacting average review/forum vendor sentiment with the logged \# of reviews/posts. The coefficient on (uninteracted) average review/forum sentiment is now completely insignificant, while the interaction term is highly significant for both. This aligns nicely with a Bayesian framework of consumers updating their perceptions of vendors based on a stream of incoming reviews: consumers move their prior more (change their demand) when the observed sentiment in reviews/forums is associated with a higher sample size. \item \textbf{Hypothesis 2}: This is tested in column (2) of Table \ref{tab:diffEffectRegs}. I take all reviews and split them into two groups: those by buyers who have bought at least 10 times on Agora, and those who have bought less (buyer past volume is observed on every review). This corresponds to an approximately median split of the sample of reviews in my dataset. Similarly, I split forum posts by the posting member's ``ranking'', a title given to active/inactive users. My split is approximately based on whether or not a user has $\geq$ 200 posts\footnote{See \href{https://bitcointalk.org/index.php?topic=178608.msg2514705\#msg2514705}{here} for an official discussion of the rankings system by the same administrator of the Agora forums.}, where those with $\geq 200$ posts are classified as experienced members (this is about 25\% of the sample; no cut closer to the median is available). Examining the regression results, I find that, to my surprise and contrary to my initial hypothesis, it appears that vendor sentiment in reviews from \textit{inexperienced} buyers is more influential on consumer demand (relative to experienced buyers).
On the other hand, there is no observed heterogeneity based on my classification of experience in the forum setting. The ``P-value'' fields at the bottom of column (2) of the table show the p-values from tests of equality of the coefficients on average sentiment between experienced and inexperienced users, done separately for both the review and forum setting. The non-differential effect of experience in forums appears to be a consequence of the way I measure experience (\# of past posts), which may not be particularly informative of a forum member's quality. The result that inexperienced members communicate vendor sentiment that is more influential on product demand is a curious one when taken at face value. However, there is a possible explanation. Directing our attention back to column (4) of Table \ref{tab:reviewSentCorr}, we see some evidence that experienced and inexperienced buyers are giving different types of reviews, and this might be the driver of the result. Namely, less experienced buyers are, on average, inputting more words into each of their reviews, and their reviews are overall less positive than those of their more experienced counterparts. It may then be the case that inexperienced buyers are, perhaps naively, submitting more thoughtful reviews that are in fact more informative, while their experienced counterparts are foregoing the additional cost of giving an informative, possibly negative review that might agitate the vendor. It is worth reiterating that this argument is based on indirect evidence, and without explicitly controlling for each individual review's characteristics (which is difficult when we are aggregating reviews into an item panel framework), it cannot be verified. \item \textbf{Hypothesis 3}: This is tested in column (3) of Table \ref{tab:diffEffectRegs}. As before, I test whether the coefficients of average sentiment on finalize early versus no finalize early listings are equal, and display the p-value in the cells below.
In both the forum and review settings, I find evidence that listings that allow finalize early are impacted about twice as much by vendor sentiment communicated in reviews and forum posts. Although the coefficient on the interaction of finalize early and review sentiment is not significantly different from its no finalize early counterpart (p-value = 0.2), it is significantly different from zero. This confirms the hypothesis that potential customers of products with a commitment to not finalize early are able to take what the vendor reports in the listing description more at ``face value''. It is particularly noteworthy that, despite the uninteracted no finalize early indicator entering positively and significantly in the previous table, its interaction with sentiment points to sentiment mattering less for these listings (i.e. the result is not simply capturing that vendors with no finalize early listings are more positively reviewed than those with finalize early listings). \item \textbf{Hypothesis 4}: This is tested in column (4) of Table \ref{tab:diffEffectRegs}. For each listing, I isolate the set of reviews for that product, and compare the impact of the average review sentiment these reviews have to the average sentiment of all other reviews given of a vendor up until time $t$. To my surprise, I find that the average of ``other'' product reviews is 5 times as influential on customer demand (2\% increase versus 10\% increase)! However, this has an explanation besides product demand being more influenced by reviews for other products. Since a vendor is likely to list multiple products, there are, on average, many more reviews for a vendor's other products than for any individual listing.
As a result, when one controls for the additional information from sample size conveyed in average other product sentiment by interacting with the logged \# of reviews (analogous to the procedure done in column (1), but this time separately for product reviews and other product reviews), the two produce similar coefficients that are not statistically significantly different from each other\footnote{Regression results of this interaction are available upon request}. This column, then, serves to reaffirm Hypothesis 1: consumers value the signal sent by average customer sentiment more when it is accompanied by a larger sample of reviews when formulating their posterior on vendor quality. \end{itemize} Overall, the largest takeaway from these differential effects is that consumer demand is significantly more influenced by vendor sentiment on the web when one increases the sample from which this average comes. In this sense, there are ``increasing returns'' to positive vendor sentiment as it is propagated by more and more customers. \subsection{Vendor Response on the Extensive Margin} While I have shown that, within the lifespan of a product listing, vendor changes in product characteristics can be appropriately controlled for, there may be other forms of response by the vendor. Namely, it may be the case that a vendor responds to poor sentiment by pulling a listing altogether from the marketplace, and this may be distorting my interpretation of coefficients on vendor sentiment as demand responses. To investigate this potential confounder, I regress the total number of listings produced by each vendor on the sentiment variables discussed before, while controlling for vendor attributes. If it is found that vendors are not responsive to customer sentiment in text on the extensive margin, then it is unlikely this will be a significant confounder.
Because I am now interested in evaluating the weekly number of total listings, I can draw upon the GRAMS dataset in addition to the Agora scrapes. The GRAMS data contain the uncensored count of listings for each vendor on Agora, so they should produce a more precise estimate of the true relation between vendor sentiment and the number of posted listings on the Darknet. Table \ref{tab:venlistings} shows the output of a Poisson regression of listings count on sentiment variables. I include versions with both fixed and gamma-distributed random vendor effects, and I draw my dependent variable from both the GRAMS and Agora HTML datasets. As is evident, there is no statistically significant response in the number of listings by a vendor to text sentiment. Across the board, the regressions using GRAMS data show zero effect of sentiment in both forums and reviews on a vendor's number of listings. And while the coefficient on vendor review sentiment in the Agora scrapes is significant at the 10\% level, it in fact moves in the \textit{opposite} direction one expects. The interpretation of the coefficient is that if positive sentiment about a vendor in reviews increases, they actually offer fewer products, rather than more. Given this counterintuitive coefficient, along with the estimated effect of zero everywhere else, and the noisy nature of the Agora data (since product counts may be incorrectly underestimated due to the web crawl timing out), I discount this finding as an outlier not reflective of the true relationship. \subsection{Robustness Checks} Here, I discuss some validity checks performed to confirm my results are not knife's edge in nature and survive a broad set of specifications. To begin, Tables \ref{tab:baselineRegs_re} \& \ref{tab:diffEffectRegs_re} report the same specifications as Tables \ref{tab:baselineRegs} \& \ref{tab:diffEffectRegs}, discussed above, but this time estimated with gamma-distributed random effects instead of non-parametric fixed effects.
We see that in this specification, results are qualitatively identical, and the multiplicative effects are on average slightly greater in magnitude. Nothing else substantially changes. In addition, we see that the $\alpha$ parameter of the regression, the multiplier for the variance of the conditional Poisson distribution, is significant and positive. In fact, this specification (Poisson with gamma random effects) is equivalent to a negative binomial regression, which allows for overdispersion in the count data, and so leads to more efficient estimation when overdispersion is present \cite{cameron2013count}. These tables demonstrate that the main results of the paper are not a function of the Poisson regression assumption that the mean and variance of the conditional distribution are equal. \\ \indent I also include a version of the regressions present in Tables \ref{tab:baselineRegs} \& \ref{tab:diffEffectRegs} estimated via linear regression (maintaining product fixed effects), in Tables \ref{tab:baselineRegs_linear} \& \ref{tab:diffEffectRegs_linear}. While a linearly additive model for non-negative count data is questionable, these regressions also yield coefficients that are qualitatively similar to those estimated in the tables discussed above. They provide evidence that my estimates are not a result of the maximum-likelihood estimation procedure getting stuck in a local mode due to the high dimensionality of the coefficient parameter $\beta$ (I have about 180 different regressors, excluding fixed effects, in my main specification), since OLS yields a closed form solution for the global optimum. Finally, in Table \ref{tab:baselineRegs_allposts}, I estimate my main specification using the non-exclusive measure of forum mentions. Instead of limiting mentions to those that include only one specific vendor, I assign the posts that mention multiple vendors to each individual vendor as duplicates.
The estimated effect is lower under this classification, possibly due to increased measurement error, but nonetheless significant at the 5\% level. \section{Conclusion} I present evidence in this paper that communication between marketplace participants - as captured by sentiment variables on vendor quality - is an important influence on demand for products. While that result is not particularly groundbreaking, I find that consumer demand is equally influenced by communication on both formal and informal networks - namely, product reviews versus community forums. This may come as a surprise, since well-regulated product reviews should be more reliable, but the evidence presented here suggests that discussion even in unstructured settings like forums is an important determinant of product demand. In addition, I find some empirical evidence that a vendor's ability to commit to disclosure, by flagging their listings as no finalize early, dampens the effect of communication on demand. Furthermore, I find strong evidence that product demand is more responsive to customer communication as the number of messages grows, as may be expected in a Bayesian updating framework. There are some limitations to interpreting these results, however. Since my dataset is based on imperfect web crawls of the Darknet, I cannot perfectly observe when a change in product characteristics occurs. While this missingness is random, it may be the case that I am not perfectly controlling for supply side variation, and so cannot completely attribute the effect of sentiment on sales to demand. By aggregating to the weekly level, I attempt to minimize the influence of this, but also introduce new errors: namely, some customers who bought during a week may have been exposed to a different listing page, depending on when they purchased and when a vendor updated their product page.
This paper looks only at the effects on the Agora marketplace, but Agora operates alongside other Darknet markets, and there may be important cross-marketplace effects this paper does not explore. Additionally, I did not control for variation in the image shown on the product listing page. State-of-the-art machine learning algorithms exist to control for the information contained in an image, but these proved too cumbersome to include at this stage of the project. Ultimately, this paper serves as a useful starting point for the study of a market that is exceptional for testing economic theory, due to its highly anonymous nature and independence from real-world regulations, but more work on the economic properties of Darknet markets should follow. \bibliographystyle{plain}
\section{#1}} \usepackage{times} \usepackage[normalem]{ulem} \usepackage{tikz} \usetikzlibrary{arrows,shapes,positioning} \usetikzlibrary{decorations.markings} \tikzstyle arrowstyle=[scale=1] \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle endreversedirected=[postaction={decorate,decoration={markings, mark=at position 1.0 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle enddirected=[postaction={decorate,decoration={markings, mark=at position 1.0 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \DeclareGraphicsRule{*}{mps}{*}{} \renewcommand{\textfraction}{0} \renewcommand{\topfraction}{1.0} \renewcommand{\bottomfraction}{1.0} \newcommand{\Mathematica}[1]{} \newcommand{\varepsilon}{\varepsilon} \newcommand{\suppress}[1]{} \newcommand{\Eq}[1]{Eq.~(\ref{#1})} \newcommand{\eq}[1]{(\ref{#1})} \newcommand{\bra}[1]{\left<#1\right|} \newcommand{\ket}[1]{\left|#1\right>} \newcommand{\braket}[2]{\left.\left<#1\right|#2\right>} \newcommand{\frac12}{\frac12} \newcommand{\color{blue}}{\color{blue}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\color{red}}{\color{red}} \newcommand{\scalar}[2]{\left \langle#1\ #2\right \rangle} \newcommand{\mathrm{e}}{\mathrm{e}} \newcommand{\mathrm{i}}{\mathrm{i}} \newcommand{\mathrm{d}}{\mathrm{d}} \newcommand{\text{per}}{\text{per}} \newcommand{\text{fr}}{\text{fr}} \newcommand{\mathrm{e}}{\mathrm{e}} \newcommand{\mathrm{d}}{\mathrm{d}} \newcommand{\nonumber}{\nonumber} \newcommand{\mathrm{tr}}{\mathrm{tr}} \renewcommand{\epsilon}{\varepsilon} 
\newcommand{\mq}[2]{\uwave{#1}\marginpar{#2}}\newcommand{\nott}[1]{} \newcommand{\Fig}[1]{\includegraphics[width=\columnwidth]{./#1}} \newcommand{\fig}[2]{\includegraphics[width=#1\columnwidth]{./#2}} \newcommand{\ffig}[2]{\includegraphics[width=#1]{./#2}} \newcommand{\FFig}[1]{\includegraphics[width=0.87\columnwidth,angle=270]{./#1}} \newlength{\bilderlength} \newcommand{0.35}{0.35} \newcommand{0.25}{0.25} \newcommand{\bilderscale}{0.35} \newcommand{\hspace*{0.8ex}}{\hspace*{0.8ex}} \newcommand{\textdiagram}[1]{\renewcommand{\bilderscale}{0.25}\diagram{#1}\renewcommand{\bilderscale}{0.35}} \newcommand{\diagram}[1]{\settowidth{\bilderlength}{\hspace*{0.8ex}\includegraphics[scale=\bilderscale]{./#1}\hspace*{0.8ex}}\parbox{\bilderlength}{\hspace*{0.8ex}\includegraphics[scale=\bilderscale]{./#1}\hspace*{0.8ex}}} \newcommand{\Diagram}[1]{\settowidth{\bilderlength}{\includegraphics[scale=use0.35]{./#1}}\parbox{\bilderlength}{\includegraphics[scale=\bilderscale]{./#1}}} \newcommand{{\hat a}}{{\hat a}} \newcommand{{\hat a}}{{\hat a}} \newcommand{{\hat a}^\dagger}{{\hat a}^\dagger} \newcommand{{\hat a}^\dagger}{{\hat a}^\dagger} \newcommand{c^\dagger}{c^\dagger} \newcommand{c^\dagger}{c^\dagger} \newcommand{1\hspace*{-0.5ex}{\rm l}}{1\hspace*{-0.5ex}{\rm l}} \newcommand{\left|0\right>}{\left|0\right>} \newcommand{\left<0\right|}{\left<0\right|} \newcommand{{\rm f}}{{\rm f}} \renewcommand{\i}{{\rm i}} \newcommand{t_{\rm i}}{t_{\rm i}} \newcommand{t_{\rm f}}{t_{\rm f}} \newcommand{{\cal Q}}{{\cal Q}} \newcommand{{\cal R}}{{\cal R}} \graphicspath{{figures/},{}} \renewcommand{\log}{\ln} \renewcommand{\paragraph}{\subsubsection*} \arraycolsep0.5mm \newcommand{\an}[1]{{\color{blue} #1}} \newcommand{\dl}[1]{{\color{blue} \sout{#1}}} \begin{document} \bibliographystyle{KAY-hyper} \title{Field Theories for Loop-Erased Random Walks} \author{Kay J\"org Wiese${}^{{1}}$ and Andrei A.\ Fedorenko${}^2$ } \affiliation{${^{1}}$CNRS-Laboratoire de Physique Th\'eorique de l'Ecole Normale 
Sup\'erieure, 24 rue Lhomond, 75005 Paris, France, PSL University, Sorbonne Universit\'e, UPMC,} \affiliation{${^{2}}$Univ.\ Lyon, ENS de Lyon, Univ.\ Claude Bernard, CNRS, Laboratoire de Physique, F-69342 Lyon, France} \begin{abstract} Self-avoiding walks (SAWs) and loop-erased random walks (LERWs) are two ensembles of random paths with numerous applications in mathematics, statistical physics and quantum field theory. While SAWs are described by the $n \to 0$ limit of $\phi^4$-theory with $O(n)$-symmetry, LERWs have no obvious field-theoretic description. We analyse two candidates for a field theory of LERWs, and discover a connection between the corresponding and a priori unrelated theories. The first such candidate is the $O(n)$ symmetric $\phi^4$ theory at $n=-2$, whose link to LERWs was known in two dimensions from conformal field theory. Here it is established via a perturbation expansion in the coupling constant in arbitrary dimension. The second candidate is a field theory for charge-density waves pinned by quenched disorder, whose relation to LERWs had been conjectured earlier using analogies with Abelian sandpiles. We explicitly show that both theories yield identical results to 4-loop order and give both a perturbative and a non-perturbative proof of their equivalence. For the fractal dimension of LERWs in $d=3$ our theory gives at 5-loop order $z=1.624\pm 0.002$, in agreement with the estimate $z = 1.62400 \pm 0.00005$ from numerical simulations. \end{abstract} \maketitle \kaysubsection{Introduction} Random walks (RWs) which are not allowed to self-intersect play an important role in combinatorics, statistical physics and quantum field theory. The two most prominent models are {\em self-avoiding walks} (SAWs) and {\em loop-erased random walks} (LERWs). The SAW was first introduced in polymer physics to model long polymer chains with self-repulsion due to excluded-volume effects. 
It is defined as the uniform measure on RW paths of a given length conditioned on having no self-intersection. Though this model is difficult to analyse rigorously, it was discovered by de Gennes \cite{DeGennes1972} that its scaling behavior in $d$ dimensions is given by the $O(n)$ symmetric $\phi^4$ theory in the unusual limit of $n \to 0$. The loop-erased random walk (LERW) is defined as the trajectory of a random walk in which any loop is erased as soon as it is formed \cite{Lawler1980}. An example is shown in figure \ref{f:LERW}. Similarly to a SAW, it has a scaling limit in all dimensions, e.g.~the end-to-end distance $R$ scales with the intrinsic length $\ell$ as $R \sim \ell^{1/z} $, where $z$ is the fractal dimension~\cite{Kozma2007}. LERWs appear in many combinatorial problems, e.g.\ the shortest path on a uniform spanning tree is a LERW. While LERWs are non-Markovian RWs, their traces are equivalent to those of the {\em Laplacian Random Walk} \cite{LyklemaEvertszPietronero1986,Lawler2006}, which is Markovian if one considers the whole trace as the state variable. It is constructed on the lattice by solving the Laplace equation $\nabla^2 \Phi(x)=0$ with boundary conditions $\Phi(x)=0$ on the already constructed curve, while $\Phi(x)=1$ at the destination of the walk, either a chosen point or infinity. The walk then advances from its tip $x$ to a neighbouring point $y$ with probability proportional to $\Phi(y)$. In a variant of this model, growth is allowed not only from the tip, but from any point on the already constructed object. This is known as the {\em dielectric breakdown model} \cite{NiemeyerPietroneroWiesmann1984}, the simplest model for lightning. In nature, lightning appears in several modes, among them a non-branched one that is closer to the Laplacian RW, i.e.\ to LERWs. In contrast to SAWs, LERWs have no obvious field-theoretic description. 
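The loop-erasure rule just described is simple to state operationally. The following sketch (illustrative only, not taken from any reference) constructs a LERW on $\mathbb{Z}^2$ by running a nearest-neighbour RW and erasing each loop the moment it closes:

```python
import random

def lerw_2d(steps, seed=0):
    """Loop-erased random walk on Z^2: run a nearest-neighbour RW for a
    fixed number of steps and erase every loop as soon as it is formed
    (a direct transcription of the definition; not optimized)."""
    rng = random.Random(seed)
    path = [(0, 0)]
    index = {(0, 0): 0}              # site -> position in current path
    for _ in range(steps):
        x, y = path[-1]
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        site = (x + dx, y + dy)
        if site in index:            # the walk hit its trace: erase the loop
            cut = index[site]
            for p in path[cut + 1:]:
                del index[p]
            path = path[:cut + 1]
        else:
            index[site] = len(path)
            path.append(site)
    return path

trace = lerw_2d(10_000)
```

By construction the surviving trace is self-avoiding; measuring how its end-to-end distance grows with its length yields the fractal dimension via $R\sim \ell^{1/z}$, with $z=\frac54$ in $d=2$.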
In three dimensions LERWs have been studied numerically~\cite{GuttmannBursill1990,AgrawalDhar2001,Grassberger2009,Wilson2010}, while in two dimensions they are described by SLE with $\kappa=2$ \cite{Schramm2000,LawlerSchrammWerner2004}, predicting a fractal dimension $z^{\rm LERW}_{d=2}=\frac54$. Coulomb-gas techniques link this to the 2d $O(n)$-model at $n=-2$ \cite{Nienhuis1982,Duplantier1992}. Below, we give perturbative arguments that this construction can equivalently be done via the $O(n)$-symmetric $\phi^4$ theory at $n=-2$, in any dimension $d$. \begin{figure} \Fig{LERW2} \caption{Trace of a LERW in blue, with the erased loops in red, on a 2-dimensional honeycomb lattice.} \label{f:LERW} \end{figure} Coming from a different direction, it was conjectured in \cite{FedorenkoLeDoussalWiese2008a} that the field theory of the depinning transition of charge-density waves (CDWs) pinned by disorder is a field theory for LERWs. This equivalence is based on the conjecture of Narayan and Middleton \cite{NarayanMiddleton1994} that pinned CDWs can be mapped onto the Abelian sandpile model. The connection of Abelian sandpiles with uniform spanning trees, and thus with LERWs, was established earlier by Majumdar \cite{Majumdar1992,Dhar2006}. Despite the lack of a proof of this equivalence, the corresponding 2-loop predictions agree with rigorous bounds \cite{FedorenkoLeDoussalWiese2008a} and have been tested against numerical simulations at the upper critical dimension $d=4$ in \cite{Grassberger2009}, where they correctly reproduce the leading and subleading logarithmic corrections of \cite{FedorenkoLeDoussalWiese2008a}. 
The depinning transition of CDWs is described by the functional RG (FRG) fixed point for random periodic systems, proposed by Narayan and Fisher \cite{DSFisher1985,NarayanDSFisher1992b,NarayanDSFisher1992a}, and developed in \cite{LeschhornNattermannStepanowTang1997,NattermannStepanowTangLeschhorn1992,LeDoussalWieseChauve2003,LeDoussalWieseChauve2002,ChauveLeDoussalWiese2000a,WieseHusemannLeDoussal2018,HusemannWiese2017}. If the conjecture holds, both $\phi^4$-theory at $n=-2$ and the FRG for CDWs must agree, at least for observables related to LERWs. We show below that both the $\beta$-function and the fractal dimension of LERWs coincide. This is done using ({\it i}) graph-theoretical arguments valid at all orders in perturbation theory, ({\it ii}) non-perturbative supersymmetry techniques, and ({\it iii}) an explicit 4-loop calculation. This does not mean that the theories are identical: for example, at depinning CDWs exhibit avalanches which are seemingly absent in the $\phi^4$ model. Our statement is that in the {\em sector} in which we can compare the two theories, they agree. This is illustrated in figure \ref{f:sector-illustration}. It does not exclude that one of the theories can handle observables the other cannot. A classical example of this behavior is the solution of the 2d Ising model via the conformal bootstrap, as proposed by Belavin, Polyakov and Zamolodchikov (BPZ) \cite{BelavinPolyakovZamolodchikov1984}. Here a theory with three operators, the energy $\epsilon$, the spin $\sigma$, and the identity $1\!\!1$, closes under OPE. However, other observables can be constructed from the Ising model on a lattice. A case in point are domain walls, which at criticality are described by SLE. On the other hand, SLE does not (at least obviously) describe the operators of the original BPZ construction. Thus, SLE and BPZ describe different {\em sectors} of the same theory. 
\begin{figure} \fboxsep0mm {\setlength{\unitlength}{1cm} \begin{picture}(8.6,3.5) \definecolor{kyellow}{rgb}{0.95,1,0.6} \put(0,0){{\colorbox{kyellow}{\rule{86mm}{0mm}\rule{0mm}{35mm}}}} \put(0.5,0.25){{\colorbox{red!30}{\rule{46mm}{0mm}\rule{0mm}{30mm}}}} \put(1,1.9){{\colorbox{cyan!20}{\rule{36mm}{0mm}\rule{0mm}{10mm}}}} \put(1.5,1.4){$\phi^{4}$-theory at $n=-2$} \put(1.3,0.8){1 boson and 2 fermions} \put(2,0.5){(complex)} \put(1.9,2.5){one family of} \put(1.6,2.15){complex fermions} \put(5.5,1.75){\begin{minipage}{3cm} CDWs\\ at\\ depinning \end{minipage}} \end{picture}} \caption{The different field theories for LERWs.} \label{f:sector-illustration} \end{figure} \kaysubsection{Mapping of {LERWs} onto $O(n)$ $\phi^{4}$-theory at $n=-2$} We now map LERWs onto the $n$-component $\phi^{4}$ theory at $n=-2$. The latter is defined by the action \begin{equation}\label{theory1} {\cal S}[\vec \phi] := \int_x \frac12 [\nabla \vec \phi(x)]^2 + \frac{m^2}{2} \vec \phi(x)^2 + \frac{g}{ 8} [\vec \phi(x)^2]^2\ . \end{equation} One checks that, for $n=-2$, the full 2-point correlation function is given in Fourier space by the free-theory result \begin{equation}\label{cor-func} \left< \phi_i(k) \phi_j(k)\right> = \left< \phi_i(k) \phi_j(k)\right>_{0} = \frac{\delta_{{ij}}}{k^{2}+m^{2}}\ , \end{equation} independent of $g$. This fact is well known perturbatively \cite{Zinn-JustinBook2,KleinertSchulte-FrohlindeBook,KleinertNeuSchulte-FrohlindeLarin1991,KompanietsPanzer2017}. A non-perturbative derivation is given below, by mapping onto complex fermions. Eq.~(\ref{cor-func}) is the Laplace transform of the $k$-dependent Green function for a random walk (RW), \begin{equation}\label{phi-prop} \left< \phi_i(k) \phi_j(k)\right> = \int_{0}^{\infty}\mathrm{d} t\, \mathrm{e}^{-m^{2}t } \times \mathrm{e}^{-k^{2 } t }\ . 
\end{equation} Here $t\ge \ell$ is the time of the RW used to construct the LERW of length $\ell$, which scales as $\ell\sim t^{z/2}\sim m^{-z}$, and $z$ is the fractal dimension of the LERW. Let us convene that we draw the trajectory of a random walk in blue, and when it hits itself we do not erase the emerging loop, but color it in red. We claim that we can reconstruct these {\em colored} trajectories from $\phi^{4}$ theory. To this aim, we first reformulate the theory \eq{theory1} in terms of $N=n/2$ complex bosons $\Phi$ and $\Phi^{*}$, with $\Phi_{i}(x):=\frac1{\sqrt {2}}[\phi_{2i-1}(x)+ i \phi_{2i}(x)]$, $i=1,...,N=n/2$. Its action reads \begin{equation}\label{theory2} {\cal S}[\vec \Phi] := \!\! \int_x \! \nabla \vec \Phi^*(x) {\nabla \vec\Phi}(x) + {m^2} \vec \Phi^{*}(x) \vec \Phi(x) + \frac{g}{ 2} [\vec \Phi^{*}(x) \vec \Phi(x)]^2. \end{equation} Consider a specific path with $s$ intersections in the path-integral representation. We start with $s=1$: {\setlength{\unitlength}{1cm} \begin{eqnarray} && {\parbox{2.25\unitlength}{\begin{picture}(2.15,1)\put(0.15,0){\ffig{2\unitlength}{trace1}} \put(0.,0){$\scriptstyle x$} \put(1.0,0.7){$\scriptstyle y$} \put(0.45,0){\scriptsize 1} \put(1.9,0.5){\scriptsize 2} \put(0.45,0.925){\scriptsize 3} \put(0.03,0.95){$\scriptstyle z$} \end{picture}}}\\ &&\longrightarrow {\parbox{1.1cm}{{\begin{tikzpicture} \coordinate (v1) at (0,1.25) ; \coordinate (v2) at (0,-.25) ; \node (x) at (0,0) {$\!\!\!\parbox{0mm}{$\raisebox{-3mm}[0mm][0mm]{$\scriptstyle x$}$}$}; \coordinate (x1) at (0.5,0);\coordinate (y) at (1.5,0.5); \coordinate (y1) at (0.5,1) ;\node (z) at (0,1) {$\!\!\!\parbox{0mm}{$\raisebox{1mm}[0mm][0mm]{$\scriptstyle z$}$}$}; \fill (x) circle (1.5pt); \fill (z) circle (1.5pt); \draw [blue] (x) -- (x1); \draw [blue] (y1) -- (z); \draw [blue,directed](0.5,0) arc (-90:90:0.5); \end{tikzpicture}}}}~ -g {\parbox{1.6cm}{{\begin{tikzpicture} \node (v1) at (0,1.25){} ; \node (v2) at (0,-.25){} ; \node (x) at (0,0) 
{$\!\!\!\parbox{0mm}{$\raisebox{-3mm}[0mm][0mm]{$\scriptstyle x$}$}$}; \coordinate (x1) at (1,0) ;\coordinate (y) at (1.5,0.5); \coordinate (y1) at (1,1);\node (z) at (0,1) {$\!\!\!\parbox{0mm}{$\raisebox{1mm}[0mm][0mm]{$\scriptstyle z$}$}$}; \fill (x) circle (1.5pt); \fill (z) circle (1.5pt); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \draw [blue,directed] (x) -- (x1); \draw [blue,directed] (y1) -- (z); \draw [blue,directed](1,0) arc (-90:90:0.5); \draw [dashed] (x1) -- (y1); \end{tikzpicture}}}}~ -g N {\parbox{2.6cm}{{\begin{tikzpicture} \node (v1) at (0,1.25){} ; \node (v2) at (0,-.25){} ; \node (x) at (0,0) {$\!\!\!\parbox{0mm}{$\raisebox{-3mm}[0mm][0mm]{$\scriptstyle x$}$}$}; \coordinate (x1) at (0.5,0) ;\coordinate (y) at (2.5,0.5);\coordinate (y1) at (0.5,1);\coordinate (y2) at (1.5,1) ; \node (z) at (0,1) {$\!\!\!\parbox{0mm}{$\raisebox{1mm}[0mm][0mm]{$\scriptstyle z$}$}$}; \coordinate (h1) at (1,0.5) ; \coordinate (h2) at (1.5,0.5) ; \fill (x) circle (1.5pt); \fill (z) circle (1.5pt); \fill (h1) circle (1.5pt); \fill (h2) circle (1.5pt); \draw [blue,directed] (x) -- (x1); \draw [blue,directed] (y1) -- (z); \draw [blue](0.5,0) arc (-90:90:0.5); \draw [red,directed](1.5,0.5) arc (-180:180:0.5); \draw [dashed] (h1) -- (h2); \end{tikzpicture}}}}\!\!\nonumber \end{eqnarray}}The first line is a drawing of a LERW starting at $x$, ending in $z$, and passing through the segments numbered 1 to 3. Due to the crossing at $y$, the loop labeled 2 is erased; we draw it in red. The second line gives all diagrams up to order $g^{s}$. The first term is the free-theory result, proportional to $g^{0}$. The second term $\sim g$ cancels the first term if one puts $g\to 1$. The third term is proportional to $N$, due to the loop, indicated in red. Setting $N\to -1$ compensates for the subtracted second term. Thus setting $g\to 1$ and $N\to -1$, the probability to go from $x$ to $z$ remains unchanged as compared to the free theory. 
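The cancelations above can be condensed into a compact weight bookkeeping (our shorthand; each term multiplies the common RW measure of the drawn trajectory):
\begin{equation}
\underbrace{1}_{\mbox{\scriptsize free}} \;-\; \underbrace{g}_{\mbox{\scriptsize vertex, no loop}} \;-\; \underbrace{g N}_{\mbox{\scriptsize vertex, closed loop}} \;=\; 1-1+1 \;=\; 1 \qquad \mbox{at}~g=1,\ N=-1\ .
\end{equation}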
This is a necessary condition to be satisfied. Since the first two terms cancel, what remains is the last diagram, corresponding to the drawing for the trajectory of the LERW we started with. Let us consider how this continues for $s=2$ intersections. Once a first loop has been formed, there are two possibilities: the walk can either hit a blue or a red part of its own trace. Let us first assume it hits a blue part. Then a second loop is formed, and the mapping at $g=1$ and $N=-1$ reads \begin{equation} \!\!\!\!{\setlength{\unitlength}{1cm}\parbox{3.15\unitlength}{\begin{picture}(3.25,1)\put(0.1,0){\ffig{3\unitlength}{trace2}} \put(0.,0.05){$\scriptstyle x$} \put(0.4,0.2){\scriptsize 1} \put(1.45,0.3){\scriptsize 2} \put(2.99,0.6){\scriptsize 3} \put(1.3,0.83){\scriptsize 4} \put(0.45,1.05){\scriptsize 5} \put(1.81,0.35){$\scriptstyle y$} \put(0.05,0.95){$\scriptstyle z$} \end{picture}}} \longrightarrow {\parbox{3.5cm}{{\begin{tikzpicture} \node (v1) at (0,1.25){} ; \node (v2) at (0,-.25){} ; \coordinate (x) at (0,0) ;\coordinate (x1) at (0.5,0) ;\coordinate (y) at (2.5,0.5) ;\coordinate (k1) at (3,0.5) ;\coordinate (k2) at (4,0.5) ;\coordinate (y1) at (0.5,1);\coordinate (y2) at (1.5,1) ; \coordinate (z) at (0,1) ; \coordinate (h1) at (1,0.5) ; \coordinate (h2) at (1.5,0.5) ; \fill (x) circle (1.5pt); \fill (z) circle (1.5pt); \fill (y) circle (1.5pt); \fill (k1) circle (1.5pt); \fill (h1) circle (1.5pt); \fill (h2) circle (1.5pt); \draw [blue,directed] (x) -- (x1); \draw [blue,directed] (y1) -- (z); \draw [blue](0.5,0) arc (-90:90:0.5); \draw [red,directed](2.5,0.5) arc (0:180:0.5); \draw [red,directed](1.5,0.5) arc (180:360:0.5); \draw [red,directed](3,0.5) arc (-180:180:0.5); \draw [dashed] (h1) -- (h2); \draw [dashed] (k1) -- (y); \end{tikzpicture}}}}\hspace{0.5cm} \end{equation} This is a result of multiple cancelations, which we can analyze vertex by vertex. 
The tricky part is what happens at point $y$: we can either not use any interaction, use the interaction following the lines of the original walk, or reconnect the lines of the walk to form a loop. Here we use the cancelation of the first two terms, retaining the last one, which results in the given drawing. (Note that the drawn red trace has the statistics of Brownian motion, as the two possible interaction terms mutually cancel.) The other possibility is to hit a red part of the trace, say at point $y$ \begin{equation} {\setlength{\unitlength}{1cm}\parbox{3.25\unitlength}{\begin{picture}(3.25,1.2)\put(0.25,0){\ffig{3\unitlength}{trace3}} \put(0.1,0){$\scriptstyle x$} \put(0.45,0){\scriptsize 1} \put(1.9,0.2){\scriptsize 2} \put(1.8,0.7){\scriptsize 3} \put(0.65,1){\scriptsize 4} \put(2.5,1.45){\scriptsize 5} \put(2.09,.78){$\scriptstyle y$} \put(3.25,1.4){$\scriptstyle z$} \end{picture}}}\longrightarrow {\parbox{2.6cm}{{\begin{tikzpicture} \node (v1) at (0,1.25){} ; \node (v2) at (0,-.25){} ; \coordinate (x) at (0,0) ;\coordinate (x1) at (0.5,0) ;\coordinate (k1) at (1,1) ; \coordinate (k2) at (1.5,1) ; \coordinate (y) at (2.5,0.5) ;\coordinate (y1) at (0.5,1.5);\coordinate (y2) at (1.5,1.5) ; \coordinate (z) at (0,1.5) ; \coordinate (h1) at (1,0.5) ; \coordinate (h2) at (1.5,0.5) ; \fill (x) circle (1.5pt); \fill (k1) circle (1.5pt); \fill (k2) circle (1.5pt); \fill (z) circle (1.5pt); \fill (h1) circle (1.5pt); \fill (h2) circle (1.5pt); \draw [blue,directed] (x) -- (x1); \draw [blue,directed] (y1) -- (z); \draw [blue](0.5,0) arc (-90:0:0.5); \draw [blue](1.,1.0) arc (0:90:0.5); \draw [red](2.5,1) arc (0:180:0.5); \draw [red](1.5,0.5) arc (180:360:0.5); \draw [dashed] (k2) -- (k1); \draw [dashed] (h1) -- (h2); \draw [blue,directed] (h1) -- (k1); \draw [red,directed] (k2) -- (h2); \draw [red,directed] (2.5,0.5)-- (2.5,1); \end{tikzpicture}}}}~~. \end{equation} Here nothing should happen, since the walk does not see the erased part of its trace. 
The appropriate cancelation is between ``no interaction'' and ``reconnecting'', since the latter would result in the erased loop appearing again in blue in perturbation theory. Thus also in this case we map onto the appropriate diagram of $\phi^4$-theory. Continuing these arguments inductively allows us to prove this for any number of intersections $s$. We have thus established a one-to-one correspondence between traces of LERWs and diagrams in perturbation theory. We still need an observable which is $1$ when inserted into a blue part of the trace, and $0$ within a red part. This can be achieved by modifying the probability to diffuse from $x$ to $z$, given by the expectation of $\Phi_{1}^{*}(x)\Phi_{1}(z) $, to \begin{equation}\label{calO} {\cal O}(x,y,z) := \Phi_{1}^{*}(x)\Phi_{1}(z) \left[ \Phi^{*} _{1}(y) \Phi _{1}(y) - \Phi^{*} _{2}(y) \Phi _{2}(y)\right] \ . \end{equation} The second factor checks whether point $y$ is part of the {\em blue} trace, as it vanishes in a {\em red} loop. An alternative representation (in the literature \cite{Amit,Kirkham1981,ShimadaHikami2016} associated to the operator which gives the {\em crossover exponent}) is \begin{equation}\label{calObis} {\cal O}(x,y,z) = \Phi_{1}^{*}(x) \Phi _{1}(y) \Phi^{*} _{2}(y) \Phi_{2}(z)\ . \end{equation} This equivalence is proven in appendix \ref{app:calO}. The fractal dimension $z$ of a LERW is extracted from the length of the walk after erasure (blue part) \begin{equation}\label{10} \frac{\left< \int_{{y,z}} {\cal O}(x,y,z) \right>}{\left< \int_{{z}} \Phi_1^{*}(x)\Phi_1(z) \right>} \equiv m^2 {\left< \int_{{y,z}} {\cal O}(x,y,z) \right>} \sim m^{-z}\ . \end{equation} As a consistency test, let us apply this procedure to self-avoiding walks: There all red loops carry a factor of $N=0$, and only configurations with no self-intersections survive. In this case the ratio \eq{10} equals 1, and the fractal dimension can be inferred from the 2-point function alone. 
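The identity in \eq{10} deserves a brief unpacking: integrating the propagator \eq{cor-func} over the endpoint gives its zero-momentum value,
\begin{equation}
\left< \int_{z} \Phi_1^{*}(x)\Phi_1(z) \right> = \frac1{m^{2}}\ ,
\end{equation}
so the factor $m^{2}$ cancels the denominator. The remaining integral over $y$ counts the expected number of points on the blue (non-erased) part of the trace, i.e.\ the mean length $\left<\ell\right>$ of the LERW, which scales as $m^{-z}$ since $R\sim m^{-1}$ and $R\sim \ell^{1/z}$.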
It is crucial to note that while both LERWs and SAWs are non-intersecting, they do not have the same fractal dimension, due to the statistical weight contributed by the red loops. \kaysubsection{$\phi^{4}$-theory at $N=-1$, and fermions} Up to now, we worked with $N=-1$ families of complex bosons. We show below that instead one can consider the limit of $N_{\rm f}\to 1$ in a theory with $N_{{\rm f}}$ complex fermions, or more generally with $N_{\rm b}$ bosons and $N_{\rm f}$ fermions, where $N=N_{\rm b}-N_{{\rm f}}$. Among other things, this provides a non-perturbative proof that the propagator at $N=-1$ is independent of $g$. The correspondence is based on the observation that $N_{\rm b}$-component bosons $\vec \Phi$ carry a factor of $N_{\rm b}$ per loop, while $N_{\rm f}$-component fermions $\vec \Psi$ yield a factor of $-N_{{\rm f}}$. On a more formal level, this can be inferred from the path integrals for both theories. Setting $ \mathbb{H}_{0}:= -\nabla^{2}+ \tau(x) $, the free partition functions read \begin{eqnarray} \!\!\!{\cal Z}_{0}^{\rm b}= \int{\cal D}[\Phi^{*}] {\cal D}[\Phi] \,\mathrm{e}^{-\int_{x}\vec\Phi^{*}(x)\mathbb H_{0} \vec\Phi(x)} &=& \mathrm{e}^{-N_{\rm b} \mathrm{tr} \ln( \mathbb H_{0})} ,~~~~\\ \!\!\!{\cal Z}_{0}^{\rm f}= \int{\cal D}[\Psi^{*}]{\cal D}[\Psi] \,\mathrm{e}^{-\int_{x}\vec\Psi^{*}(x)\mathbb H_{0}\vec\Psi(x)} &=& \mathrm{e}^{N_{{\rm f}} \mathrm{tr} \ln( \mathbb H_{0})} . \end{eqnarray} The bosonic correlation function is given by \begin{eqnarray} &&\!\!\!\left< \Phi_{i}(x)\Phi_{j}^{*}(y)\right> = \delta _{{ij}}\left( \mathbb{H}_{0}^{-1}\right) _{x,y} \\ &&\qquad= \frac1{{\cal Z}_{0}^{\rm b}}\!\int{\cal D}[\Phi^{*}\Phi]\, \Phi_{i}(x)\Phi_{j}^{*}(y)\,\mathrm{e}^{-\int_{x}\vec\Phi^{*}(x)\mathbb H_{0} \vec\Phi(x)} \ . \nonumber \end{eqnarray} For fermions, an equivalent expression holds, and $ \left< \Psi_{i}(x) \Psi^{*}_{j}(y)\right> = \left<\Phi_{i}(x) \Phi^{*}_{j}(y)\right> $. 
Setting $\tau (x)= m^{2} + i \rho(x)$, a Hubbard-Stratonovich transformation allows us to decouple the quartic interaction of an interacting theory of $N_{\rm b}$ bosons $\Phi_i$ and $N_{\rm f}$ fermions $\Psi_i$, \begin{eqnarray}\label{Phi-Psi-interaction} \lefteqn{\mathrm{e}^{- \frac{g}{2}\int_x [\vec \Phi^{*}(x) \vec \Phi(x) + \vec \Psi^{*}(x) \vec \Psi(x)]^2} } \\ &&= \int{\cal D} [\rho] \,\mathrm{e}^{-\frac1{2g}\int_{x } \rho(x)^{2} - i \rho(x)[ \vec\Phi^{*}(x)\vec\Phi(x)+\vec\Psi^{*}(x)\vec\Psi(x)]}\ .~~~\nonumber \end{eqnarray} As a consequence, a system of $N_{\rm b}$ bosons and $N_{\rm f}$ fermions with the interaction \eq{Phi-Psi-interaction} has the partition function \begin{eqnarray}\label{15} {\cal Z}^{\rm b+f} &=& \int{\cal D}[\Phi^{*}] {\cal D}[\Phi] {\cal D}[\Psi^{*}] {\cal D}[\Psi]\nonumber\\ && ~~~ \mathrm{e}^{-\int_x \vec \Phi^*(x)(-\nabla^2+m^2) \vec \Phi(x)+ \vec \Psi^*(x)(-\nabla^2+m^2)\vec \Psi(x) }\nonumber\\ && ~~~ \times{\mathrm{e}^{- \frac{g}{2}\int_x [\vec \Phi^{*}(x) \vec \Phi(x) + \vec \Psi^{*}(x) \vec \Psi(x)]^2} } \nonumber\\ &=& \int{\cal D} [\rho] \,\mathrm{e}^{ (N_{\rm f}-N_{\rm b})\, \mathrm{tr} \ln( \mathbb H_{0}) -\frac1{2g}\int_{x } \rho(x)^{2} }\ . \end{eqnarray} More generally, the correlations $\left< \Phi^{*}_1(x)\Phi_{1}(y)\right>$ in complex $N$-component $\Phi^{4}$ theory can be calculated from a theory with $N_{\rm b}$ bosons and $N_{\rm f}$ fermions, where $N=N_{\rm b}-N_{{\rm f}}$. For the case $N_{\rm f}= -N =1$, the interaction is $[ \Psi^{*}(x) \Psi(x)]^2$, and vanishes since squares of Grassmann variables vanish: this theory of {\em complex fermions}, \begin{equation}\label{16} {\cal S} ^{{N_{\rm f}=1}}[\vec \Psi] = \int_x \nabla \Psi^*(x) {\nabla \Psi}(x) + {m^2} \Psi^{*}(x) \Psi(x)\ , \end{equation} is a free theory. It provides a non-perturbative proof that correlation functions of complex $\Phi^{4}$-theory at $N=-1$ ($n=-2$ real fields) are equivalent to those of the free theory. 
In $d=2$ this is also known from lattice models \cite{Nienhuis1982}. However, it does not yield the renormalization of the coupling $g$ at $N=-1$. To obtain the latter, one has to study $N_{\rm f}\neq 1$ and take the limit of $N_{\rm f}\to 1$ at the end. Alternatively, one uses one family of complex bosons $N_{\rm b}=1$ and two families of complex fermions $N_{\rm f}=2$, a formulation onto which we will map CDWs at depinning later. Finally, care has to be taken in identifying observables in both theories: while the 2-point functions of bosonic fields are symmetric under their exchange, those of the fermionic theory are antisymmetric. As a consequence $\left<\phi_{1}\phi_{1}\right>\neq 0$, whereas $\left<\psi_{1}\psi_{1}\right>= 0$. {\kaysubsection{Equivalence between $\phi^{4}$-theory at $N=-1$ and CDWs at depinning} Charge-density waves are ground states of solids in which the charge density varies spatially, with a period set to 1. Coupling these elastic objects to quenched disorder leads, after averaging over disorder, to the field theory \cite{FukuyamaLee1978,LeeRice1979,DSFisher1985,NarayanDSFisher1992b,NarayanDSFisher1992a,NattermannScheidl2000} \begin{eqnarray}\label{CDW-action} {\cal S}^{\rm CDW} &=& \int_{x,t} \tilde u (x,t) (\partial_{t}-\nabla^{2}+m^2) u (x,t) \\ && -\frac12 \int_{x,t,t'} \tilde u (x,t)\tilde u (x,t') \Delta \big(u (x,t)-u (x,t')\big) .~~~~~~\nonumber \end{eqnarray} The function $\Delta(u)$ is an even function with period 1. Its renormalization can be studied using the FRG \cite{DSFisher1985,NarayanDSFisher1992a,NarayanDSFisher1992b,LeschhornNattermannStepanowTang1997,NattermannStepanowTangLeschhorn1992,LeDoussalWieseChauve2003,LeDoussalWieseChauve2002,ChauveLeDoussalWiese2000a}. The analysis of the FRG flow for the function $\Delta(u)$ shows that the fixed point has the form \begin{equation}\label{18} \Delta(u) = \Delta(0) - \frac g{ 2} u(1-u)\ . 
\end{equation} In the absence of higher-order terms in $u$, the RG flow closes in the space of polynomials of degree 2, and for the quadratic term one is left with the renormalization of a single coupling constant $g$. Thus, as long as one is not interested in 2-point correlation functions or avalanches, the fixed-point function can be replaced by $ \Delta (u) \to \frac g{ 2} u^2\ , $ which generates the simpler field theory, \begin{eqnarray}\label{SCDWsimp} {\cal S}^{\rm CDW}_{\rm simp} &:=& \int_{x,t} \tilde u (x,t) (\partial_{t}-\nabla^{2}+m^2) u (x,t) \\ && -\frac g { 4} \int_{x,t,t'} \tilde u (x,t)\tilde u (x,t') \big[u (x,t)-u (x,t')\big]^2\ .~~~~~~\nonumber \end{eqnarray} Let us define a further variant which retains from $\Delta \big(u (x,t')-u (x,t)\big) $ only the term $ u (x,t) u (x,t')$, \begin{eqnarray}\label{SSAW} {\cal S}^{\rm SAW} &:=& \int_{x,t} \tilde u (x,t) (\partial_{t}-\nabla^{2}+m^2) u (x,t) \nonumber\\ &&+ { \frac g2} \int_{x,t,t'}\tilde u (x,t) u(x,t) \tilde u (x,t') u (x,t') \ . \qquad \end{eqnarray} The perturbation expansion in this theory looks exactly like the one in \Eq{theory2}, with a different propagator, to be compared with \Eq{phi-prop}, \begin{equation}\label{Rdef} R(k,t):=\left< \tilde u (k,0) u(-k,t ) \right> = \Theta(t ) \,\mathrm{e}^{-t(k^2+m^2)}\ . \end{equation} In this theory closed loops have weight zero, as they are acausal in the It\^o discretization. If one can integrate freely over all times, diagrams in the dynamic theory reduce to those in the complex scalar theory. Thus the theory defined by \Eq{SSAW} can be mapped onto the action \eq{theory2} with $ n=N=0 $, i.e.\ a self-avoiding walk. This has been well known since de~Gennes \cite{DeGennes1972}. We now show that the action \eq{SCDWsimp} has the same effective coupling as the action \eq{theory2} at $N=-1$. We first remark that the renormalized coupling is extracted from diagrams whose time arguments are taken far apart. 
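The statement that time integration reduces dynamic diagrams to static ones can be checked at the level of a single propagator (an illustrative numerical sketch; the values of $k$ and $m$ are arbitrary): integrating \eq{Rdef} over all times must reproduce the static propagator \eq{cor-func}.

```python
import numpy as np

# Illustrative check (arbitrary k, m): integrating the response
# propagator R(k,t) = Theta(t) * exp(-t (k^2 + m^2)) over all t
# reproduces the static propagator 1 / (k^2 + m^2).
k, m = 1.3, 0.7
t = np.linspace(0.0, 40.0, 400_001)   # exp(-40*(k**2+m**2)) is negligible
R = np.exp(-t * (k**2 + m**2))        # Theta(t) = 1 on this grid
# trapezoidal rule, written out to avoid version-specific numpy helpers
time_integrated = float(np.sum(0.5 * (R[1:] + R[:-1]) * np.diff(t)))
static = 1.0 / (k**2 + m**2)
```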
An example is given by the first diagram in \Eq{21} below. To show the equivalence, we start by drawing all diagrams present in the SAW-theory \eq{SSAW}, complementing them by the missing diagrams originating from the additional vertices of \eq{SCDWsimp} as compared to \eq{SSAW}. These missing diagrams, a.k.a.\ {\em children}, can be generated from the diagrams for SAWs by moving one arrow from one side of the vertex into which it enters to the other side, \begin{equation}\label{21} {\parbox{2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (0.5,0); \coordinate (x2) at (1.5,0); \coordinate (y1) at (0.5,1); \coordinate (y2) at (1.5,1); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \fill (x2) circle (1.5pt); \fill (y2) circle (1.5pt); \draw [black] (i1) -- (o1); \draw [black] (i2) -- (o2); \draw [black,directed] (x1) -- (x2); \draw [black,directed] (y1) -- (y2); \draw [black,enddirected] (x2) -- (o1); \draw [black,enddirected] (y2) -- (o2); \draw [dashed] (x1) -- (y1); \draw [dashed] (x2) -- (y2); \end{tikzpicture}}}} \longrightarrow - {\parbox{2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (0.5,0); \coordinate (x2) at (1.5,0); \coordinate (y1) at (0.5,1); \coordinate (y2) at (1.5,1); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \fill (x2) circle (1.5pt); \fill (y2) circle (1.5pt); \draw [black] (i1) -- (x1); \draw [black] (i2) -- (y1); \draw [black,directed] (x1) -- (x2); \draw [red,directed] (y1) -- (x2); \draw [black,enddirected] (x2) -- (o1); \draw [black,enddirected] (y2) -- (o2); \draw [dashed] (x1) -- (y1); \draw [dashed] (x2) -- (y2); \end{tikzpicture}}}}\ . 
\end{equation} We then extract contributions to the effective coupling; this is cleverly done by remarking that {({\it i})} the form of the effective interaction is proportional to the second line of \Eq{SCDWsimp}, and ({\it ii}) it is extracted by retaining only terms of the form present in \Eq{SSAW}. This implies that the diagram generated in \Eq{21}, does not contribute. Indeed it comes with two other ones, \begin{equation} -{\parbox{2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (0.5,0); \coordinate (x2) at (1.5,0); \coordinate (y1) at (0.5,1); \coordinate (y2) at (1.5,1); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \fill (x2) circle (1.5pt); \fill (y2) circle (1.5pt); \draw [black] (i1) -- (x1); \draw [black] (i2) -- (y1); \draw [black,directed] (x1) -- (x2); \draw [black,directed] (y1) -- (x2); \draw [black,enddirected] (x2) -- (o1); \draw [black,enddirected] (y2) -- (o2); \draw [dashed] (x1) -- (y1); \draw [dashed] (x2) -- (y2); \end{tikzpicture}}}}+\frac12 {\parbox{2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (i2bis) at (0.,0.7) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (0.5,0); \coordinate (x2) at (1.5,0); \coordinate (y1) at (0.5,1); \coordinate (y2) at (1.5,1); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \fill (x2) circle (1.5pt); \fill (y2) circle (1.5pt); \draw [black] (i2) -- (y1); \draw [black] (i2bis) -- (y1); \draw [black,directed] (x1) -- (x2); \draw [black,directed] (y1) -- (x2); \draw [black,enddirected] (x2) -- (o1); \draw [black,enddirected] (y2) -- (o2); \draw [dashed] (x1) -- (y1); \draw [dashed] (x2) -- (y2); \end{tikzpicture}}}} +\frac12 {\parbox{2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i1bis) at (0.,0.3) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; 
\coordinate (x1) at (0.5,0); \coordinate (x2) at (1.5,0); \coordinate (y1) at (0.5,1); \coordinate (y2) at (1.5,1); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \fill (x2) circle (1.5pt); \fill (y2) circle (1.5pt); \draw [black] (i1) -- (x1); \draw [black] (i1bis) -- (x1); \draw [black,directed] (x1) -- (x2); \draw [black,directed] (y1) -- (x2); \draw [black,enddirected] (x2) -- (o1); \draw [black,enddirected] (y2) -- (o2); \draw [dashed] (x1) -- (y1); \draw [dashed] (x2) -- (y2); \end{tikzpicture}}}}\ . \end{equation} After time-integration, the two fields at the left-most vertex cancel, thus the above sum vanishes. The next diagram \begin{equation} {\parbox{2.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (1.5,0) ; \coordinate (x2) at (2.,0) ; \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \draw [black] (y2) -- (y1); \draw [black,enddirected] (x1) -- (x2); \draw [black,enddirected](0.,0) arc (-90:90:0.5); \draw [black,directed](1.5,1) arc (90:180:0.5); \draw [black,directed](1,0.5) arc (180:270:0.5); \draw [dashed] (x1) -- (y1); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}} \end{equation} has two children, \begin{equation} -{\parbox{2.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (1.5,0) ; \coordinate (x2) at (2.,0) ; \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \draw [black] (y2) -- (y1); \draw [black,enddirected] (x1) -- (x2); \draw [black,enddirected](0.,0) arc 
(-90:90:0.5); \draw [black,directed](1.5,1) arc (90:180:0.5); \draw [red,directed](1,0.5) arc (-90:0:0.5); \draw [dashed] (x1) -- (y1); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}} -{\parbox{2.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.,0) ; \coordinate (o2) at (2.,1) ; \coordinate (x1) at (1.5,0) ; \coordinate (x2) at (2.,0) ; \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (x1) circle (1.5pt); \fill (y1) circle (1.5pt); \draw [black] (y2) -- (y1); \draw [black,enddirected] (x1) -- (x2); \draw [black,enddirected](0.,0) arc (-90:90:0.5); \draw [red,directed] (y1) -- (m1); \draw [black,directed](1,0.5) arc (180:270:0.5); \draw [dashed] (x1) -- (y1); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}}\to 0\ . \end{equation} They both vanish, since the first is an acausal loop, thus zero, and the second has only one connected component, thus does not contribute to the renormalization of $g$. Now consider \begin{equation} {\parbox{3.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.5,0.5) ; \coordinate (o2) at (2,0.5); \coordinate (x1) at (1.5,0); \coordinate (x2) at (2.,0); \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (o1) circle (1.5pt); \fill (o2) circle (1.5pt); \draw [black,enddirected](0.,0) arc (-90:90:0.5); \draw [black,directed](2,0.5) arc (0:180:0.5); \draw [black,directed](1,0.5) arc (180:360:0.5); \draw [black,enddirected](3.,1) arc (90:270:0.5); \draw [dashed] (o1) -- (o2); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}} \end{equation} This diagram contains an acausal loop, thus does not contribute to the SAW theory \eq{SSAW} where it vanishes due to a factor of $N=0$. 
The diagram however has {\em children}; together they are \begin{eqnarray}\label{28} {\parbox{3.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.5,0.5) ; \coordinate (o2) at (2,0.5); \coordinate (x1) at (1.5,0); \coordinate (x2) at (2.,0); \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (o1) circle (1.5pt); \fill (o2) circle (1.5pt); \draw [black,enddirected](0.,0) arc (-90:90:0.5); \draw [black,directed](2,0.5) arc (0:180:0.5); \draw [black,directed](1,0.5) arc (180:360:0.5); \draw [black,enddirected](3.,1) arc (90:270:0.5); \draw [dashed] (o1) -- (o2); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}} &-&{\parbox{3.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.5,0.5) ; \coordinate (o2) at (2,0.5); \coordinate (x1) at (1.5,0); \coordinate (x2) at (2.,0); \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (o1) circle (1.5pt); \fill (o2) circle (1.5pt); \draw [black,enddirected](0.,0) arc (-90:90:0.5); \draw [red,directed](2,0.5) arc (0:180:0.75); \draw [black,directed](1,0.5) arc (180:360:0.5); \draw [black,enddirected](3.,1) arc (90:270:0.5); \draw [dashed] (o1) -- (o2); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}} \nonumber\\ \;\;-{\parbox{3.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.5,0.5) ; \coordinate (o2) at (2,0.5); \coordinate (x1) at (1.5,0); \coordinate (x2) at (2.,0); \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (o1) circle (1.5pt); \fill (o2) circle (1.5pt); \draw 
[black,enddirected](0.,0) arc (-90:90:0.5); \draw [black,directed](2,0.5) arc (0:180:0.5); \draw [red,directed](1,0.5) arc (180:360:0.75); \draw [black,enddirected](3.,1) arc (90:270:0.5); \draw [dashed] (o1) -- (o2); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}} &+&{\parbox{3.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.5,0.5) ; \coordinate (o2) at (2,0.5); \coordinate (x1) at (1.5,0); \coordinate (x2) at (2.,0); \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (o1) circle (1.5pt); \fill (o2) circle (1.5pt); \draw [black,enddirected](0.,0) arc (-90:90:0.5); \draw [red,directed](2,0.5) arc (0:180:0.75); \draw [red,directed](1,0.5) arc (180:360:0.75); \draw [black,enddirected](3.,1) arc (90:270:0.5); \draw [dashed] (o1) -- (o2); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}}\ .\qquad \end{eqnarray} The modified lines are in red. We first remark that none of them restricts the time-difference between the left and right-most vertices, and they all contribute to the effective coupling. Their coefficients are $0\times 1-1-1+1 = -1$. 
Integrating over times, the result is the same as in $\phi^4$-theory at $N=-1$, graphically represented as \begin{equation} {\parbox{3.2cm}{{\begin{tikzpicture} \coordinate (i1) at (0.,0) ; \coordinate (i2) at (0.,1) ; \coordinate (o1) at (2.5,0.5) ; \coordinate (o2) at (2,0.5); \coordinate (x1) at (1.5,0); \coordinate (x2) at (2.,0); \coordinate (y1) at (1.5,1); \coordinate (y2) at (2.,1) ; \coordinate (m1) at (0.5,0.5) ; \coordinate (m2) at (1.,0.5) ; \fill (m1) circle (1.5pt); \fill (m2) circle (1.5pt); \fill (o1) circle (1.5pt); \fill (o2) circle (1.5pt); \draw [blue,enddirected](0.,0) arc (-90:90:0.5); \draw [red,directed](2,0.5) arc (0:180:0.5); \draw [red,directed](1,0.5) arc (180:360:0.5); \draw [blue,enddirected](3.,1) arc (90:270:0.5); \draw [dashed] (o1) -- (o2); \draw [dashed] (m1) -- (m2); \end{tikzpicture}}}}\ . \end{equation} Noting the pairwise cancelations in loops of the form \eq{28}, this can be interpreted as the {\em missing} contribution of the first (acausal) diagram. To summarize: We just showed that at 1-loop order the action \eq{SCDWsimp} has the same effective coupling as the action \eq{theory2}, diagram by diagram (after time integration). These considerations can be formalized to higher orders, and we checked them explicitly up to 4 loops. The theory \eq{SCDWsimp} has a second renormalization, namely of friction, or time, which shows up in a renormalization of $ \int_{x,t} \tilde u(x,t) \dot u(x,t). $ The standard route to study this is to write down all diagrams constructed from \eq{SCDWsimp}, in which one field $\tilde u$ and one field $u$ remain \cite{LeDoussalWieseChauve2002}. Due to the structure of the action, the latter has the form $u(x,t)-u(x,t')$ and can be expanded as $\dot u(x,t) (t-t')$. 
The additional time difference, when appearing together with a response function, acts by inserting an additional point into the latter, as can be seen from the definition \eq{Rdef} and the relation \begin{equation} t R(k,t) = \int_0^t \mathrm{d}{t'}\, R(k,t') R(k,t-t')\ . \end{equation} Following this strategy, we checked that up to 4-loop order all diagrams appearing after time-integration are equivalent to those encountered in expectations of ${\cal O} $, defined in \Eq{calO}. Graphically, this is proven by first realizing that one can alternatively study the renormalization of friction by considering insertions of $\int_{x,t}\tilde u(x,t) \dot u(x,t)$. Doing this, the time derivative on $\dot u$ can be passed through a closed string of response functions either to the earliest time in the diagram, where it acts on the remaining uncontracted field $u$, or to a vertex with no further $u$ field, where it vanishes. The same argument can be made by moving the time derivative to the field $\tilde u$. These operations restrict the class of diagrams. Graphically, inserting $\int_{x,t}\tilde u(x,t) \dot u(x,t)$ corresponds again to inserting a point into diagrams correcting expectations of $\tilde u(x,t) u(x',t')$. The final step of the proof is to realize that this is equivalent to insertions of the crossover operator $\Phi _{1}(y) \Phi^{*} _{2}(y)$ in the theory \eq{theory2}. } Finally, note that the absence of a renormalization of $-\nabla^2+m^2$ in Eq.~\eq{SCDWsimp}, implied by the statistical tilt symmetry, is equivalent to the absence of a renormalization of the theory \eq{16}.
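This relation is elementary to verify numerically. The sketch below (ours, with arbitrary parameter values) implements the bare response function of \Eq{Rdef}, using the It\^o convention $\Theta(0)=0$ that gives closed loops weight zero, and checks the point-insertion identity:

```python
import math

def R(a, t):
    """Bare response function of Eq. (Rdef): R(k,t) = Theta(t) exp(-t a), a = k^2 + m^2.
    Ito convention: Theta(0) = 0, so equal-time (closed) loops carry weight zero."""
    return math.exp(-a * t) if t > 0 else 0.0

def convolution(a, t, n=10000):
    """Midpoint rule for int_0^t dt' R(a,t') R(a,t-t')."""
    h = t / n
    return h * sum(R(a, (i + 0.5) * h) * R(a, t - (i + 0.5) * h) for i in range(n))

a, t = 1.3, 2.0        # arbitrary k^2 + m^2 and time
lhs = t * R(a, t)      # extra factor of t from the expanded time difference
rhs = convolution(a, t)  # same propagator with one additional point inserted
```

The two sides agree to quadrature accuracy; this is what allows a factor $(t-t')$ to be traded for an extra point on a response-function line.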
\kaysubsection{A non-perturbative proof for the equivalence of $\phi^4$-theory at $N=-1$ and CDWs} The method introduced in \cite{ParisiSourlas1979,ParisiSourlas1982} allows one to write the disorder average of any observable ${\cal O}[u_i]$ as \begin{align}\label{su5} &\overline {{\cal O}[u_i]} = \int\prod_{a=1}^r {\cal D}[{\tilde u_{a} }] {\cal D}[{ u_{a} }] {\cal D}[{\bar \psi_{a} }] {\cal D}[{\psi}_{a}]\, {{\cal O}[u_i]} \times \\ & ~~~\times \!\overline{\,\exp\!\left[- \int_{x}\tilde u_{a} (x)\frac{\delta{\cal H}[{u_{a} }]} {\delta {u_{a} } (x) } - \int_{x,y}\bar \psi_{a}(x)\frac{\delta^{2} {\cal H}[{u_{a} } ] }{\delta {u_{a} } (x)\delta {u_{a} } (y) } \psi_{a} (y) \right]} .\nonumber \end{align} The integral over $\tilde u_a$ ensures that $u_a$ is at a minimum. $\bar \psi_a$ and $\psi_a$ are fermionic degrees of freedom (Grassmann variables), which compensate for the functional determinant appearing in the integration over $u$, yielding a partition function ${\cal Z}=1$. Averaging over disorder gives an effective action \begin{eqnarray}\label{su6a} \label{su6b} &&{\cal S}[\tilde u_a, u_{a},\bar \psi_{a}, \psi_{a}] \\ && = \!\sum_{a}\! \int_{x} \!\tilde u_{a} (x) (-\nabla^{2}{+}m^2) u_{a} (x) + \bar \psi_{a} (x) (-\nabla^{2}{+}m^2)\psi_{a} (x) \nonumber \end{eqnarray} \begin{eqnarray} &&- \sum_{a,b} \int_x\Big[ \frac12 \tilde u_{a} (x)\Delta \big(u_{a} (x)-u_{b} (x)\big)\tilde u_{b} (x) \nonumber\\ && \qquad~~~- \tilde u_{a} (x) \Delta' \big(u_{a} (x)-u_{b} (x)\big) \bar \psi_{b} (x)\psi_{b} (x)\nonumber \\ && \qquad~~~ -\frac12 \bar \psi_{a} (x)\psi _{a} (x)\Delta'' \big(u_{a} (x)-u_{b} (x)\big)\bar \psi_{b} (x)\psi_{b} (x) \Big]\nonumber \ . \end{eqnarray} The function $\Delta(u)$ is the same as in \Eq{CDW-action}. Note that we allow for an arbitrary number of replicas $r$. In the work \cite{ParisiSourlas1979} the focus was on $r=1$, which does not allow one to extract the second cumulant of the disorder, i.e.\ its correlations.
To do the latter, one needs at least $r=2$ copies, to which we now specialize. As in the derivation of the action \eq{SCDWsimp}, we replace $\Delta(u)\to \frac{g}2 u^2$, and introduce center-of-mass coordinates, \begin{align} u_1(x) &= u(x) + \frac12 \phi(x)\ , & u_2(x)&= u(x) - \frac12 \phi(x)\ ,\\ \tilde u_1(x) &= \frac12 \tilde u(x) + \tilde \phi(x)\ , & \tilde u_2(x)&= \frac12 \tilde u(x) - \tilde \phi (x)\ . \end{align} The action \eq{su6b} can then be written as \begin{align} \label{CDW=phi4} {\cal S} &= \int_{x} \tilde \phi(x) (-\nabla^2 +m^2)\phi(x)+ \tilde u(x) (-\nabla^2 +m^2) u(x) \nonumber\\ &+ \sum_{a=1}^2\bar \psi_{a} (x) (-\nabla^{2}+m^2)\psi_{a} (x) \nonumber \\ &+ \frac g 2\tilde u(x) \phi(x)\!\left[\bar \psi_2(x)\psi_2(x) -\bar \psi_1(x)\psi_1(x) -\frac14 \tilde u(x) \phi(x)\right] \nonumber \\ &+ \frac g 2\left[ \tilde \phi(x)\phi(x)+\bar \psi_1(x)\psi_1(x) +\bar \psi_2(x)\psi_2(x) \right]^2 \ . \end{align} Note that only $\tilde u(x)$, but not the center-of-mass $u(x)$, appears in the interaction. While $u(x)$ may have non-trivial expectations, it does not contribute to the renormalization of $g$, and the latter can be obtained by dropping the third line of \Eq{CDW=phi4}. What remains is a $\phi^4$-type theory as in \Eq{15}, with one complex boson and two complex fermions. It can equivalently be viewed as complex $\phi^4$-theory at $N=-1$, or real $\phi^4$-theory at $n=-2$. What is yet missing is information about the exponent $z$. One can use the operator $\cal O$ defined in Eqs.~\eq{calO} or \eq{calObis}, replacing $\Phi_i$ by $\psi_i$, and $\Phi_i^*$ by $\bar \psi_i$. Another possibility is to introduce time, adding a time argument to all fields, and replacing $-\nabla^2 +m^2$ by $\partial_t -\nabla^2 +m^2$. The interaction part, i.e.\ the last line of \Eq{CDW=phi4}, then becomes bilocal in time, i.e.\ the time integral appears inside the square bracket.
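As a consistency check of the change of variables above (ours, not part of the original derivation), the quadratic form is diagonal in the new fields:

```latex
\begin{align}
\tilde u_1 u_1 + \tilde u_2 u_2
&= \Big(\tfrac12 \tilde u + \tilde \phi\Big)\Big(u + \tfrac12 \phi\Big)
 + \Big(\tfrac12 \tilde u - \tilde \phi\Big)\Big(u - \tfrac12 \phi\Big) \nonumber\\
&= \tilde u\, u + \tilde \phi\, \phi\ .
\end{align}
```

Inserting $(-\nabla^{2}+m^2)$ between the fields gives the first line of \Eq{CDW=phi4}; the purely bosonic interaction terms follow from $u_1-u_2=\phi$ and $\tilde u_1 \tilde u_2 = \tfrac14 \tilde u^2 - \tilde \phi^2$.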
The tricky part is to ensure that time-integrated vacuum bubbles retain their static expectations. This can be done by specifying an initial condition, once again adding the action \eq{CDW=phi4} where all fields are evaluated at some initial time $t_0$. This implies that \begin{eqnarray} R(k;t',t)&=&\left< \phi(-k,t') \tilde \phi(k,t) \right> = \left< \psi_i(-k,t') \bar \psi_i(k,t) \right>\nonumber\\ &=& \Theta(t'-t) \mathrm{e}^{-(k^2+m^2)(t'-t)}+ \frac{\delta_{t,t_0}\delta_{t',t_0}}{k^2+m^2}\ . \qquad \end{eqnarray} The $\delta$-functions are to be understood such that \begin{equation} \int_t R(k_1,t,t) \cdots R(k_n,t,t) = \frac1{(k_1^2+m^2)\cdots (k_n^2+m^2)}\ . \end{equation} We explicitly checked to 3-loop order that terms proportional to $\partial_t $ receive the same renormalization as at depinning. Furthermore, we can analyze the renormalization of $\tilde \phi(x,t)\partial_t \phi(x,t)$ as an insertion. Contributing diagrams carry two external fields, one $\tilde \phi$ and one $\phi$. Passing the time derivative $\partial_t \phi(x,t)$ of the insertion to this external field, what remains is the insertion of a single point in the line connecting the external fields $\tilde \phi$ and $\phi$, but no contribution from insertions into loops. After integration over times, this is equivalent to the insertion of $\cal O$ defined above in Eqs.~\eq{calO} or \eq{calObis}. \kaysubsection{Fractal dimension of LERWs at 5-loop order} We generated all diagrams entering into ${\cal O}(x,y,z)$ at 5-loop order, and into the renormalization of the coupling constant at $4$-loop order, supplemented by the diagrams of \cite{KompanietsPanzer2017} at 5-loop order. At 3-loop order we obtain the fractal dimension $z$ of LERWs using the massive diagrams of Ref.~\cite{WieseHusemannLeDoussal2018}. To 4- and 5-loop order, we use diagrams in a massless minimal subtraction scheme obtained in \cite{KleinertNeuSchulte-FrohlindeLarin1991,KompanietsPanzer2017}.
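For orientation, the walks whose fractal dimension is being computed are easy to generate directly: a LERW is obtained from a simple random walk by chronological loop erasure. A minimal sketch (ours; not the method used for any computation in the text):

```python
import random

def loop_erased_walk(steps, dim=3, seed=0):
    """Chronological loop erasure of a simple random walk on Z^dim: whenever
    the walk revisits a site of its current (self-avoiding) trace, the
    intervening loop is erased."""
    rng = random.Random(seed)
    moves = [tuple(s if j == i else 0 for j in range(dim))
             for i in range(dim) for s in (1, -1)]
    path = [(0,) * dim]
    index = {path[0]: 0}                  # site -> position in current path
    for _ in range(steps):
        step = rng.choice(moves)
        site = tuple(a + b for a, b in zip(path[-1], step))
        if site in index:                 # a loop closed: erase it
            for erased in path[index[site] + 1:]:
                del index[erased]
            del path[index[site] + 1:]
        else:
            index[site] = len(path)
            path.append(site)
    return path

lerw = loop_erased_walk(50000, dim=3)
# the erased path is self-avoiding by construction
```

The exponent $z$ can then be estimated from the scaling of the erased path length with its end-to-end distance, $\ell \sim r^{z}$.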
The result reproduces at 4-loop order the one given for the crossover exponent in Ref.~\cite{Kirkham1981}, setting there $n=-2$. This yields for the fractal dimension $z$ of LERWs in dimension $d=4-\varepsilon$, equivalent to the dynamical exponent of CDWs at depinning: \begin{eqnarray} z &=& 2-\frac{\varepsilon }{3}- \frac{\varepsilon^2}{9} + \bigg[\frac{2 \zeta (3)}{9}-\frac{1}{18}\bigg]\varepsilon^3 \nonumber\\ && - \bigg[\frac{70 \zeta (5)}{81} -\frac{\zeta(4)}{6} -\frac{17 \zeta (3) }{162} +\frac{7}{324} \bigg]\varepsilon^4 \nonumber\\ && - \bigg[ \frac{541 \zeta (3)^2}{162} +\frac{37 \zeta (3)}{36}+\frac{29 \zeta (4)}{648}+ \frac{703 \zeta (5)}{243}\nonumber\\ && ~~~~+\frac{175 \zeta (6)}{162}-\frac{833 \zeta (7)}{216}+\frac{17}{1944}\bigg]\varepsilon^5+ {\cal O}(\varepsilon^6). \label{eq:On-24} \end{eqnarray} Using Borel resummation and $z=5/4$ in $d=2$ \cite{Schramm2000,LawlerSchrammWerner2004} to estimate the location of the branch cut yields \begin{equation} z(d=3)= 1.624 \pm 0.002 \ . \end{equation} This can be compared to the most precise numerical simulations to date by David Wilson \cite{Wilson2010}, \begin{equation} z(d=3) = 1.62400 \pm 0.00005\ . \end{equation} \medskip \kaysubsection{Summary and Perspectives} We presented evidence that both scalar $\phi^4$-theory at $n=-2$ and the field theory for CDWs at depinning describe loop-erased random walks. We sketched a proof of this equivalence based on a diagrammatic expansion, and gave a non-perturbative algebraic proof. All claims were explicitly checked to 4-loop order. This equivalence gives strong support for the Narayan-Middleton conjecture \cite{NarayanMiddleton1994} that CDWs pinned by disorder can be mapped onto the Abelian sandpile model, and thus for the conjecture of \cite{FedorenkoLeDoussalWiese2008a}.
Remarkably, while CDWs at depinning map onto Abelian sandpiles, disordered elastic interfaces at depinning map onto Manna sandpiles \cite{LeDoussalWiese2014a,Wiese2015}. Thus each main universality class at depinning corresponds to a specific sandpile model. The result is surprising, since a simple $\phi^4$-type theory contains all necessary information to obtain the FRG fixed point of CDWs, a disordered system. It does not directly yield the renormalized 2-point function, or the physics of avalanches. As sketched in Fig.~\ref{f:sector-illustration}, our understanding is that the different field theories are not equivalent, but when restricted to the same {\em physical sector} make the same predictions. This opens a path to eventually tackle other systems that necessitate the FRG via a simpler scalar field theory. Finally, the mapping of $\phi^4$-theory at $n=-2$ onto LERWs was done at a microscopic coupling $g=1$. Changing the latter to $p<1$ can be interpreted as a random walk where loops are erased with probability $p$. Since the RG fixed point is reached for any $0<p<1$, we conjecture that these {\em partially} loop-erased random walks are in the same universality class as LERWs. We propose taking $p$ close to 1 to measure the correction-to-scaling exponent $\omega$ precisely. While its $\epsilon$ expansion is known to 6-loop order \cite{KompanietsPanzer2017}, it converges only slowly, and we estimate $\omega = 0.89 \pm 0.02$. \kaysubsection{Acknowledgements} It is a pleasure to thank E.~Br\'ezin, J.~Cardy, F.~David, K.~Gawedzki, P.~Grassberger, J.\ Jacobsen, A.~Nahum, S.~Rychkov, D.~Wilson and J.~Zinn-Justin for valuable discussions. \renewcommand{\doi}[2]{\href{http://dx.doi.org/#1}{#2}} \newcommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{#1}}
\subsection*{Author Summary} Determining how mutations impact protein stability and function is instrumental in understanding how proteins carry out their biological tasks, how they evolve, and how to engineer novel proteins. However, measuring differences in function between mutated variants does not distinguish whether mutations directly affect function or destabilize the protein. Here, we fit a thermodynamic model to data describing how thousands of variants of a protein bind to an antibody fragment. We accurately infer separate folding and binding energies in physical units, providing a detailed energy landscape describing how mutations affect binding and stability across the protein, and our non-linear model reproduces many of the observed interactions between sites. \section*{Introduction} Deep mutational scanning (DMS) studies have produced detailed maps of how proteins and regulatory sequences are related to function by assaying up to millions of mutated variants, and have had many applications, from identifying viral epitopes to protein engineering \cite{fowler_deep_2014,wrenbeck_deep_2017}. While these studies aim to understand molecular function and evolution by collecting large numbers of sequence-function pairs, the full sequence-function map is very difficult to determine due to the vastness of sequence space. Different sites in a sequence may not contribute to molecular function independently, and the effect of a substitution at one site may depend on the genetic background. This non-additivity, or epistasis, means that the entire space of possible sequences may have to be explored to understand molecular function. Given the limited data, mathematical modeling is necessary to make any progress. However, purely statistical inferences are difficult to interpret in terms of known biology, and can be too flexible to make reliable predictions \cite{otwinowski_inferring_2014,plessis_how_2016}.
On the other hand, biophysical systems are not arbitrarily complex, as they follow physical laws and structural constraints. In other words, there is hope that biophysical knowledge can help explain sequence-function relationships. A powerful assumption is that in between sequence and function lie relevant intermediate phenotypes for which we can derive relatively simple relations to sequence and function \cite{bershtein_bridging_2017}. The stability of a protein's fold is a fundamental molecular phenotype under selection. Studying how mutations affect stability is a central challenge in protein science \cite{magliery_protein_2011}, and is the aim of some DMS studies \cite{araya_fundamental_2012,traxlmayr_construction_2012,kim_high-throughput_2013,olson_comprehensive_2014}. However, assaying molecular function, such as binding to a ligand, is a necessary but insufficient measure of stability, in that most proteins must be folded to function, but do not necessarily function if folded. In addition, high-throughput techniques, such as proteolysis assays \cite{rocklin_global_2017}, do not measure free energy, but rather scores that are correlated with stability. Similarly, scores from high-throughput binding assays do not measure binding energy in physical units, and do not distinguish whether changes seen in variants are due to changes in overall fold stability or in the stability of the binding interface. While high-throughput assays often confound function and stability, these can be separated with a thermodynamic approach. Thermodynamic approaches have been at the heart of biophysical models applied to data to quantify the evolution of regulatory sequences \cite{mustonen_evolutionary_2005,mustonen_energy-dependent_2008,kinney_using_2010,lagator_mechanistic_2017}, and proteins \cite{bloom_thermodynamic_2005,wylie_biophysical_2011,echave_biophysical_2017}.
In the context of proteins, there are typically a few relevant conformational states, and the kinetics are fast enough to reach thermal equilibrium, with a free energy determining the probability of each state. At a minimum, a protein has folded and unfolded states, and other states may be due to binding, mis-folding, or some other conformational changes. Two-state models have been important in understanding observed patterns of substitutions in protein evolution \cite{starr_epistasis_2016,bastolla_what_2017,liberles_interface_2012}, and in general, the ensemble of protein conformations generates epistasis that makes protein evolution difficult to predict \cite{sailer_molecular_2017}. A powerful simplification is to approximate the total energy by a sum over site-specific energies (additivity), which has been observed for most changes in fold stability \cite{wells_additivity_1990,sandberg_engineering_1993}. However, even with additivity in energy, the probability of a protein being in a particular state is non-linear with respect to energy and therefore epistatic with regard to sequence. In this work, we infer a thermodynamic model that separates folding and binding in a small bacterial protein, protein G domain B1 (GB1), a model system of folding and stability, where a recent high-throughput assay of functional binding to an immunoglobulin fragment (IgG-Fc) described the epistasis between nearly all pairs of residues \cite{olson_comprehensive_2014}. We infer a thermodynamic model with two states, bound and unbound, and another model with three states: bound-folded, unbound-folded, and unfolded. The approximation of additivity in energy allows us to separate how mutations destabilize the binding interface and how they destabilize the overall fold. We validate these approximations by predicting independently measured changes in fold stability.
We describe the folding and stability landscape of the protein, identify which sites contribute most to binding, and explain much of the observed epistasis without assuming any energetic interactions. \section*{Results} \subsection*{\textit{In vitro} selection of protein variants} Olson et al.~\cite{olson_comprehensive_2014} mutagenized GB1 to create a library of protein variants which contained all single amino acid substitutions (1045 variants) and nearly all double substitutions (536k variants) of a reference or wild-type sequence. The library was sequenced before and after an \textit{in vitro} selection assay, and the fraction of protein bound to IgG-Fc for a variant with sequence $\sigma$ is $p^{\prime}(\sigma)=\frac{n_{1}(\sigma)}{n_{0}(\sigma)r}$, where $n_{0}(\sigma)$ and $n_{1}(\sigma)$ are the sequence counts before and after selection, and $r$ is a global factor that accounts for systematic differences between initial and final sequencing (see Methods for a maximum likelihood derivation of $p^{\prime}$). For convenience, we define \emph{fitness} as the logarithm of the binding fraction normalized by the wild-type $\sigma^{W}$ \begin{equation} f(\sigma)=\log\left(\frac{n_{1}(\sigma)}{n_{0}(\sigma)}\frac{n_{0}(\sigma^{W})}{n_{1}(\sigma^{W})}\right),\label{eq:fitness} \end{equation} as an analogy to the growth rate of an exponentially growing population, although we do not imply that this is the (relative) growth rate of an organism with this variant. With nearly every possible double substitution it is possible to study interactions between sites, or epistasis.
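In code, eq.~\ref{eq:fitness} is a log-ratio of count ratios normalized to the wild type. A minimal sketch (ours; the counts and variant names are invented for illustration):

```python
import math

# hypothetical counts (n0: before selection, n1: after); variant names are made up
counts = {
    "WT":   (10000, 8000),
    "A24Y": (9000, 1800),
    "G41L": (11000, 2200),
}

def fitness(variant, counts, wt="WT"):
    """f = log[(n1/n0) / (n1_wt/n0_wt)], eq. (eq:fitness) of the text."""
    n0, n1 = counts[variant]
    n0w, n1w = counts[wt]
    return math.log((n1 / n0) * (n0w / n1w))

f_A = fitness("A24Y", counts)    # log(0.2 / 0.8) = log(1/4)
```

The global sequencing factor $r$ cancels in this ratio, which is why the normalization to the wild type is convenient.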
We define pairwise epistasis in fitness as the difference between the fitness of a double mutant relative to the wild-type and the expectation from additivity, i.e., the sum of the fitnesses of the two single mutants: \begin{equation} J_{ij}^{ab}=f(\sigma_{/(i,a)/(j,b)}^{W})-f(\sigma_{/(i,a)}^{W})-f(\sigma_{/(j,b)}^{W}),\label{eq:epistasis} \end{equation} where $/(i,a)$ and $/(j,b)$ indicate substitutions at positions $i,j$ with amino acids $a,b$. While changes in fitness across all single and double mutants show where a protein is sensitive to binding, such changes are not informative of whether mutations are destabilizing the binding interface or the overall fold, as changes in either one influence the fraction of bound protein. A thermodynamic model is necessary to separate these effects. \subsection*{Thermodynamic models} Proteins fold into complicated structures and interact with other molecules depending on the free energy of their different states or conformations. Under natural conditions, protein states reach thermodynamic equilibrium very quickly and the Boltzmann distribution relates the probability of state $i$ to the free energy $E_{i}$: $p_{i}(\sigma)=\frac{1}{Z}e^{-E_{i}(\sigma)}$, where $E_{i}(\sigma)$ are in dimensionless units and $Z$ is the normalization factor over states \cite{phillips_physical_2012}. For a two-state bound/unbound model, the fraction of bound protein is $\frac{1}{1+e^{E(\sigma)}}$, with energy $E(\sigma)$. In order to separate binding and stability, we define three states: unfolded and unbound, folded and unbound, and folded and bound, and therefore the fraction of bound protein is \begin{equation} p(\sigma)=\frac{e^{-E_{f}(\sigma)-E_{b}(\sigma)}}{1+e^{-E_{f}(\sigma)}+e^{-E_{f}(\sigma)-E_{b}(\sigma)}}=\frac{1}{1+e^{E_{b}(\sigma)}(1+e^{E_{f}(\sigma)})}\label{eq:p} \end{equation} with folding energy $E_{f}(\sigma)$ and binding energy $E_{b}(\sigma)$.
The folding energy is relative to the unfolded state, whereas the binding energy is relative to the folded-unbound state, up to a constant that depends on the concentration of ligand (the chemical potential). Importantly, this binding energy measures only the destabilization of the binding interface, and is distinct from dissociation constants that are related to our model by $K_{d}\propto e^{E_{b}}(1+e^{E_{f}})$ (neglecting the chemical potential). Intuitively, low binding and folding energy leads to large $p$, and the shaded areas in Fig.~\ref{fig:ped} show regions in energy space where each label indicates the most likely state. Given an experimentally measured $p$ of a single variant, there are many values $E_{b}$ and $E_{f}$ that match $p$, and it is not possible to identify these energies, as shown by the contour lines of equal $p$ in Fig.~\ref{fig:ped}. However, with the approximation of additivity in energy over sites, it is possible to combine information from many sequences to estimate energies. Additivity means that the energy is a sum over energies specific to each substitution $\epsilon(i,a)$: \begin{align} E_{f}(\sigma) & =\epsilon_{f}^{W}+\sum_{i}\epsilon_{f}(i,\sigma_{i})\label{eq:ef}\\ E_{b}(\sigma) & =\epsilon_{b}^{W}+\sum_{i}\epsilon_{b}(i,\sigma_{i})\label{eq:eb} \end{align} where subscripts $f$ and $b$ indicate folding and binding respectively, $\sigma_{i}$ is the amino acid at position $i$, and $\epsilon(i,\sigma_{i}^{W})=0$ so that the wild-type energy is $\epsilon^{W}$. While additivity has been observed in many experimental measurements of changes in fold stability \cite{wells_additivity_1990,sandberg_engineering_1993}, it is a local approximation that is likely to be violated for highly mutated sequences. 
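Eqs.~\ref{eq:p}--\ref{eq:eb} are straightforward to evaluate. The sketch below (ours, with invented energies) also illustrates the point just made: strictly additive energies still produce non-zero epistasis in fitness, because $\log p$ is non-linear in the energies:

```python
import math

def bound_fraction(E_f, E_b):
    """Three-state Boltzmann weight, eq. (eq:p): unfolded / folded-unbound / folded-bound."""
    return 1.0 / (1.0 + math.exp(E_b) * (1.0 + math.exp(E_f)))

# made-up wild-type energies and additive shifts for two hypothetical substitutions
Ef_wt, Eb_wt = -4.0, -1.0
eps_f = {"mutA": 2.0, "mutB": 1.5}   # both destabilize the fold ...
eps_b = {"mutA": 0.0, "mutB": 0.0}   # ... and leave the binding interface untouched

def fitness_from_energies(muts):
    """Fitness (log binding fraction relative to wild type) with additive energies."""
    E_f = Ef_wt + sum(eps_f[m] for m in muts)
    E_b = Eb_wt + sum(eps_b[m] for m in muts)
    return math.log(bound_fraction(E_f, E_b) / bound_fraction(Ef_wt, Eb_wt))

J = (fitness_from_energies(["mutA", "mutB"])
     - fitness_from_energies(["mutA"]) - fitness_from_energies(["mutB"]))
# J < 0 here: epistasis in fitness despite strictly additive energies
```

Here both substitutions only destabilize the fold, yet their effects on fitness are not additive.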
To illustrate how multiple data points constrain this non-linear model, consider a hypothetical quartet of sequences and their measured binding fraction $p$: the wild-type, two sequences with single substitutions, and a sequence with both of those substitutions. The lines in Fig.~\ref{fig:ped} are the energy coordinates consistent with the given $p$, and the dashed lines are the additive energies that connect the wild-type (red line) to the single mutants (black lines), and to the double mutant (blue line). The parameters are not constrained given a wild-type $p$ and single mutant $p$, as a rectangle can be placed anywhere between two lines as long as the opposing corners land on them. However, when considering all four sequences, the largest rectangle must have lengths that are the sum of the smaller rectangle lengths due to additivity, and the non-linearity of the curves constrains the parameters (additive energies) that can fit the data, with more data providing more constraints. \begin{figure} \begin{centering} \includegraphics[scale=0.6]{pedagogy} \par\end{centering} \caption{\label{fig:ped}Thermodynamic model of three protein states: unfolded and unbound, folded and unbound, and folded and bound, described by eq.~\ref{eq:p}. Shaded areas correspond to regions in energy space where the labeled state is dominant. Given sequences and binding fractions $p$, the non-linear Boltzmann form (eq.~\ref{eq:p}) imposes constraints on the possible parameters (additive energies). Solid lines are the energies compatible with $p$ for four hypothetical sequences: wild-type (red), two single mutants (black) and a double mutant (blue).
Dashed lines represent additive energies, and connect the wild-type to the single mutants (black) and the double mutant (blue), which have lengths equal to the sum of additive energies.} \end{figure} We use all sequences and associated counts in a maximum likelihood framework to infer all additive and wild-type folding and binding energies, converted to kcal/mol (see Methods). For comparison, we also infer energies of the two state model. In Methods, we modify the likelihood to account for non-specific background binding, and describe a procedure to overcome local optima via bootstrapping. \subsection*{Inferred energy landscape} We compare the inferred additive folding energies to independent low-throughput measurements of 81 single substitutions in Fig.~\ref{fig:prediction}A (collected from different sources in \cite{olson_comprehensive_2014}). The three state model (bound/unbound/unfolded) accurately predicts $\epsilon_{f}$ in physical units with a root mean squared error of 0.39 kcal/mol and a correlation of $\rho=0.91$, which is better than computational methods (\textasciitilde{}0.6 to \textasciitilde{}0.7) and close to the amount of correlation between replicates of low-throughput methods (\textasciitilde{}0.86) \cite{potapov_assessing_2009}. Six variants with 2-6 mutations are also predicted (Fig.~\ref{fig:prediction}A red) with comparable accuracy. However, two highly stable variants (G\textgreek{b}1-c3b4 with 7 mutations, and M2 with 4, not shown) are underestimated by 2.1 and 5.3 kcal/mol respectively, suggesting the presence of significant synergistic epistasis in the folding energy for these variants. \begin{figure} \begin{centering} \includegraphics[scale=0.6]{pb_predict} \par\end{centering} \caption{\label{fig:prediction} A) Accurate prediction of changes in folding energy $\epsilon_{f}$ (eq.~\ref{eq:ef}, commonly referred to as $\Delta\Delta G$) by fitting a three state thermodynamic model to deep mutational scanning data.
Predicted energies have a root mean square error of 0.39 kcal/mol and $\rho=0.91$ compared to independent measurements of $\epsilon_{f}$ for 81 single substitutions \cite{olson_comprehensive_2014}. Six variants with 2-6 mutations are shown in red. The line has a slope of unity. B) Folding energies (teal) have a stronger relation to residue depth than binding energies (red). Root mean square energy changes at each position are shown, and a plus sign indicates sites at the protein-protein interface \cite{sauer-eriksson_crystal_1995,sloan_dissection_1999}. } \end{figure} The two state model (bound/unbound) fits the data similarly well to the three state model, with a correlation between predicted fitness $\hat{f}(\sigma)=\log(p(\sigma)/p(\sigma^{W}))$ and measured fitness of 96.4\% and 97.1\% for two and three state models respectively (see Fig.~\ref{fig:yyhat}). However, the additive energies inferred by the two state model have no relation to the independently measured $\epsilon_{f}$ (Fig.~\ref{fig:ddg1}). Clearly, a three state model is necessary to predict folding energy, and has the added benefit of estimating binding energy. To assess how much sampling noise influences our results, we calculate 95\% confidence intervals of $\epsilon_{f}$ and $\epsilon_{b}$ from the bootstrapped estimates (Fig.~\ref{fig:bootstrap}), and find that they are very narrow compared to the range of effect sizes for most of the 2092 energy parameters. Examining $\epsilon_{f}$ and $\epsilon_{b}$ across sites and amino acids (Fig.~\ref{fig:efiebi}) reveals a detailed picture of how folding and binding are sensitive to substitutions. The energies have striking differences in their patterns, and $\epsilon_{f}$ and $\epsilon_{b}$ are uncorrelated (Fig.~\ref{fig:folding-vs-binding}A, $\rho=0.03$, $p_{\mathrm{value}}=0.28$). Some substitutions have strong antagonistic effects, such as at positions 23 and 41, neither of which are at the binding interface.
The substitutions 41L and 54G have particularly strong antagonism, although the double mutant is known to be strongly epistatic, and there may be systematic errors in these parameters from effects not captured by our model. With relatively few exceptions, amino acid substitutions in GB1 do not produce trade-offs between binding and fold stability. \begin{figure} \begin{centering} \includegraphics[scale=0.5]{submatrix} \par\end{centering} \caption{\label{fig:efiebi}Inferred additive binding $\epsilon_{b}$ and folding $\epsilon_{f}$ energies show strikingly different patterns. Three of the binding sites (27, 31, 43) have strong effects on binding. Many substitutions at positions 23 and 41 are beneficial for folding and deleterious for binding, although overall substitutions are uncorrelated. } \end{figure} The energy landscape of a protein is determined by its structure, so we expect that the inferred energies are related to structural features. Both $\epsilon_{f}$ and $\epsilon_{b}$ correlate with residue depth (Fig.~\ref{fig:prediction}B), though $\epsilon_{b}$ has a weaker relation to depth, except for a few sensitive shallow residues. The three sites most sensitive to binding are at the interface of the two proteins \cite{sauer-eriksson_crystal_1995,sloan_dissection_1999} (plus signs in Fig.~\ref{fig:prediction}B), but many other sites at the interface are not sensitive. A top-down view of folding vs.~binding energy of single and double mutants depicts how the variants fall into each of the three states. In this phenotypic space, the wild-type is better than most of the observed sequences, and in terms of binding fraction, it is in the 72nd and 85th percentile of the single and double substitutions respectively. Most variants fall in the region of excess stability ($E_{f}\ll0$), whereas binding energies are distributed around the wild-type, implying that binding fraction is more sensitive to changes in binding energy than folding energy for the majority of variants.
This is consistent with the lack of correlation between additive energies from the two state model and independently measured folding energies (Fig.~\ref{fig:ddg1}), as well as the lack of correlation between changes in fitness and folding energies \cite{olson_comprehensive_2014}. \begin{figure} \begin{centering} \includegraphics[width=1\textwidth]{pb_EbEf} \par\end{centering} \caption{\label{fig:folding-vs-binding} Folding vs.~binding energy for single (A) and double (B) mutants. Red dot is the wild-type energy, the red line is where binding fraction is the same as wild-type $p=p^{W}$, and below the red line variants have $p>p^{W}$. The three equilibrium states are labeled, with $E_{f}=0$ demarcating the folded and unfolded states.} \end{figure} \subsection*{Patterns of epistasis} While the energies in our models are non-epistatic, the binding fraction and fitness are epistatic due to the non-linearity between binding fraction and energy (eq.~\ref{eq:p}), and our inferred pairwise epistasis $\hat{J}_{ij}^{ab}$, analogous to eq.~\ref{eq:epistasis}, can be compared to the observed epistasis. We filter out non-biological epistasis due to experimental limits on measured fitness, i.e., non-specific background binding (see Methods), and average over amino acids in each pair of sites. The predictions from the three-state model reproduce the biological epistasis better than the two-state model, which vastly underestimates the magnitude of epistasis across the protein (Fig.~\ref{fig:epistasis}). Notably, the three state model predicts much of the negative epistasis, but misses clusters of positive epistasis. To quantify these deviations, the difference between the predicted and observed epistasis can be normalized by the noise in the observed epistasis, $z_{ij}=\frac{\hat{J}_{ij}-J_{ij}}{\sqrt{v_{J_{ij}}}}$ (see Methods).
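The comparison in this paragraph (eq.~\ref{eq:epistasis} applied to observed and predicted fitness, with the deviation normalized by sampling noise) can be written compactly; a Python sketch with function names of our own:

```python
import math

def pairwise_epistasis(f_double, f_a, f_b):
    # Epistasis (eq. epistasis): double-mutant fitness minus the
    # additive expectation from the two single mutants.
    return f_double - f_a - f_b

def epistasis_z(J_pred, J_obs, v_J):
    # Deviation of the model prediction from the observation, in
    # units of the standard error of the observed epistasis.
    return (J_pred - J_obs) / math.sqrt(v_J)
```

Strongly negative $z$ then flags pairs whose positive epistasis the model underestimates.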
The clusters of positive epistasis are more clearly visible after filtering out all but the most underestimated epistasis (bottom 5\% of $z_{ij}$, Fig.~\ref{fig:epi2}). As noted in \cite{olson_comprehensive_2014}, the residues in these positions had correlated conformational dynamics in NMR and molecular dynamics studies (positions 7, 9, 11, 12, 14, 16, 33, 37, 38, 40, 54, 56, \cite{clore_amplitudes_2004,lange_molecular_2005,markwick_exploring_2007}). This suggests that this unexplained epistasis is due to systematic errors not accounted for in the model, such as epistasis in energy or some alternative conformations, and therefore large prediction errors can identify sites that should be studied in more detail. \begin{figure} \begin{centering} \includegraphics[scale=0.6]{epi} \par\end{centering} \caption{\label{fig:epistasis}Patterns of pairwise epistasis observed in the data and predicted by the three state and two state thermodynamic models. Shown are observed pairwise fitness epistasis (eq.~\ref{eq:epistasis}), and inferred epistasis from the three state model and the two state model. A network of residues that undergo correlated conformational dynamics (positions 7, 9, 11, 12, 14, 16, 33, 37, 38, 40, 54, 56) have significant observed positive epistasis that the thermodynamic models fail to estimate. Epistasis is averaged for each pair of sites over the relevant amino acid substitutions. We filter out non-biological epistasis that is a consequence of the experimental limits on measured fitness due to non-specific background binding (see Methods).} \end{figure} We also made predictions for a follow-up study by Wu et al.~\cite{wu_adaptation_2016}, which targeted 4 highly epistatic sites in GB1, and assayed all combinations of mutations, i.e.~$20^{4}$ variants.
Most of the mutants with 3 or 4 mutations have very weak binding, and the fitness predictions from the model trained on the Olson et al.~data can roughly predict their functionality (Fig.~\ref{fig:wupredict}). Our model predicts functional quadruple mutants with a true positive rate of 86\% and a true negative rate of 95\% (defining functional as $f>-2.5$, Tab.~\ref{tab:wu}). At the same time, the fitness is underestimated for many variants much more than expected from measurement variability (Fig\@.~\ref{fig:wuz}), suggesting that, in this more mutated data, some unaccounted-for epistasis is restoring binding or stability for approximately 20\% of variants. \section*{Discussion} We have shown that a few biophysical assumptions, i.e., a small number of thermodynamic states and additivity of energy, are sufficient to extract a detailed folding and binding energy landscape of a protein. Many DMS studies use \textit{in vitro} and \textit{in vivo} selection assays, and quantify the results with enrichment ratios, similar to fitness in eq.~\ref{eq:fitness}. Most of these studies focus on single substitutions from a wild-type; however, fitness changes of single substitutions confound changes in stability and binding. We have shown how combining information from multiple sequences with many mutations provides the constraints to separate these phenotypes in a thermodynamic model, and therefore highly mutagenized sequence-function experiments can provide a rich description of a protein's energy landscape. The inferred energies from the three state model very accurately predict the independent low-throughput measurements, and show that it is feasible to infer folding and binding energy in physical units accurately from simple high-throughput functional assays. Several DMS studies have indirectly measured stability.
Araya et al.~\cite{araya_fundamental_2012} applied a metric, based on the rescue effect of double mutations, to identify stabilizing mutations, and Rocklin et al.~\cite{rocklin_global_2017} used proteolysis assays that correlate with stability. Olson et al.~\cite{olson_comprehensive_2014} extracted stability measures from this dataset by searching for single mutation fitness changes in different genetic backgrounds that correlated with the literature set of stability measurements. Further refinements were developed with clustering methods that use structural information and physicochemical properties \cite{wu_high-throughput_2015}. However, these ad-hoc methods can suffer from over-fitting and require extraneous knowledge to work effectively. In contrast, the thermodynamic model directly infers stability in physical units and does not require any benchmark stability measurements or structural information. Our method also infers the binding energies that are marginal to the folded\textendash unbound state and measure the destabilization of the binding interface. In contrast, dissociation constants, as measured by titration curves, do not account for differences in fold stability. The recently developed tite-seq method \cite{adams_measuring_2016} infers a saturation constant to account for sequence-dependent differences in stability and expression, but does not compensate for stability effects in the dissociation constant itself. We have shown that the three state model shows good agreement with patterns of epistasis, and deviations from our model identify a network of residues that have correlated conformational dynamics. Since the deviations and measurement noise itself can be rather large, sign epistasis, path accessibility, and other geometric features of the inferred genotype-phenotype map are likely to be distorted.
It is possible that more complex models, such as energetic interaction terms or more conformational states, can describe the remaining epistasis in double mutants and in variants with more than two mutations. The three state model works well for the relatively small GB1, but larger proteins may need additional states, such as mis-folded conformations, to accurately model their properties. Inferred energy landscapes from DMS may also be useful in understanding protein structure. Inferred energy parameters may be useful for calibrating potential functions used in structure prediction \cite{lee_ab_2017}. Refined thermodynamic models with pairwise epistatic energy may be able to infer protein contacts directly, similar to how multiple sequence alignments of homologous proteins can infer contacts \cite{morcos_direct_2014}, providing a way to predict structure from DMS studies. Thermodynamic models coupled with DMS also provide a way to study intrinsically disordered proteins, which fold and bind simultaneously, but have no persistent structure while unbound \cite{wright_linking_2009}. Since many conformations can correspond to these states, free energy differences are a natural way to quantify the properties of disordered proteins. While we have inferred a detailed genotype-phenotype map of GB1, we do not yet know the consequences for evolution, which depend on how the binding fraction affects the organismal fitness. Manhart and Morozov \cite{manhart_protein_2015} explored the evolutionary dynamics of a fitness function that is a linear combination of the three protein states. This leads to an evolutionary coupling between binding and folding, where selection on folding can drive changes in binding, and vice versa. With appropriate data it may be possible to infer selection coefficients associated with each state, as well as evolutionary trajectories in energy space, from multiple sequence alignments.
\section*{Methods} \subsection*{Poisson likelihood for an \textit{in vitro} selection assay} In an \textit{in vitro} selection assay with one round, the library of protein variants is sequenced before and after binding, and therefore the count or multiplicity of each sequence carries information on the binding. For each variant $\sigma$, with initial and final counts $n_{0}$ and $n_{1}$, we define a Poisson log-likelihood with intensities $\lambda_{0}=N_{0}$ and $\lambda_{1}=N_{0}pr$, where $p$ represents the fraction of bound protein and $r$ is the systematic difference between initial and final sequencing. We omit the dependence on $\sigma$ for brevity (note that $r$ does not depend on $\sigma$). The per-variant joint likelihood over the two time points is \begin{equation} LL=-N_{0}(1+pr)+n_{0}\log(N_{0})+n_{1}\log(N_{0}pr), \end{equation} omitting terms that do not depend on parameters. This has a nuisance parameter $N_{0}$ per sequence, whose ML estimate, given the other parameters, is \[ N_{0}^{*}=\frac{n_{0}+n_{1}}{1+pr}. \] Plugging it into the likelihood and dropping terms which do not depend on the parameters results in \begin{equation} LL=-(n_{0}+n_{1})\log(1+pr)+n_{1}\log(pr).\label{eq:LL} \end{equation} The maximum likelihood estimate of the binding fraction is $p^{\prime}=\frac{n_{1}}{n_{0}r}$. Since some of the counts can be very small, we add a pseudo-count of $\frac{1}{2}$ to $n_{0}$ and $n_{1}$ to slightly regularize the estimate. \subsection*{Thermodynamic model inference} We use the likelihood in eq.~\ref{eq:LL}, and parameterize $p$ as a thermodynamic model following the Boltzmann distribution. We modify $p$ to account for non-specific background binding $p_{0}$: \begin{equation} p(\sigma)=\frac{1}{1+e^{E_{b}(\sigma)}(1+e^{E_{f}(\sigma)})}(1-p_{0})+p_{0}, \end{equation} and the energies have an additive relation to sequence defined by eqs.~\ref{eq:ef} and \ref{eq:eb}.
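The profiled likelihood of eq.~\ref{eq:LL} and its closed-form maximizer translate directly into code; a minimal sketch with the pseudo-count regularization (function names are ours):

```python
import math

def profile_loglik(n0, n1, p, r):
    # Per-variant log-likelihood (eq. LL) after profiling out the
    # nuisance read-depth parameter N0; additive constants dropped.
    return -(n0 + n1) * math.log(1.0 + p * r) + n1 * math.log(p * r)

def p_hat(n0, n1, r, pseudo=0.5):
    # Maximum-likelihood bound fraction p' = n1 / (n0 * r), with a
    # pseudo-count added to both counts to regularize small counts.
    return (n1 + pseudo) / ((n0 + pseudo) * r)
```

For $n_0=10$, $n_1=5$, $r=1$ the unregularized estimate is $0.5$, and the profiled likelihood is indeed maximized there.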
The total likelihood is the sum of the per-variant likelihoods $\textbf{LL}=\sum_{\sigma}LL(\sigma)$. This log-likelihood is non-convex, and is optimized using the NLopt library \cite{steven_g._johnson_nlopt_nodate}, which implements the method of moving asymptotes algorithm \cite{svanberg_class_nodate}, and uses the log-likelihood gradients with respect to $r$, $p_{0}$, and the energies. The initial parameters were $r=1$, $p_{0}=0.01$, and all energies set to zero. $r$ and $p_{0}$ were reparameterized as $e^{r^{\prime}}$ and $e^{p_{0}^{\prime}}$ inside the optimization function, so that the original parameters are non-negative. In the optimization algorithm, upper and lower bounds on $\varepsilon$ are set to limit very small gradients which stop the optimization prematurely. The value of $\pm15$ was chosen by optimizing with different bounds, $\pm10$, $\pm15$, $\pm20$, and $\pm25$, and choosing the result with the highest likelihood. Dimensionless energies are converted to kcal/mol with $T=297$~K, and therefore the bounds are $\pm8.85$ kcal/mol. \subsection*{Bootstrap} Since the optimization algorithm can get stuck in local optima, we use a bootstrap restarting procedure to overcome local optima related to sampling noise \cite{wood_minimizing_2001}, and to generate a bootstrap distribution of parameters to quantify their uncertainty due to sampling noise. The maximum likelihood parameters from the fit described above, $\theta$, are the initial parameters in an iterative procedure that alternates optimizing on the original and bootstrapped data. Each iteration consists of: 1) Creation of bootstrapped data with counts drawn from a Poisson distribution with means $n_{0}$ and $n_{1}$. 2) Searching for the maximum likelihood parameters $\theta^{\prime}$ on the bootstrapped data with initial parameters $\theta$. 3) Searching for the maximum likelihood parameters $\theta^{\prime\prime}$ on the original data with initial parameters $\theta^{\prime}$.
4) If the likelihood is no better than the best optimization within a small threshold $LL(\theta^{\prime\prime})\le LL(\theta)+\eta$ ($\eta=0.0001$), then add the bootstrapped parameters $\theta^{\prime}$ to a list. 5) If the likelihood is better than the previous best $LL(\theta^{\prime\prime})>LL(\theta)+\eta$, then update the best parameters, $\theta\leftarrow\theta^{\prime\prime}$, and delete the list of bootstrapped parameters. 6) Terminate the procedure once 100 bootstraps have been accumulated in the list. \subsection*{Pairwise epistasis} A minimal amount of non-specific background binding, $p_{0}$, imposes a lower bound on measured binding fraction in the experiment, estimated to be $f_{0}=\log(p_{0}/p(\sigma^{W}))=-5.69$ by our three state model. This threshold effect produces large amounts of non-biological positive pairwise epistasis, e.g.~when the double mutant has the same level of binding as one of the single mutants at the background level. Therefore, the data shown in Fig.~\ref{fig:epistasis} excludes $\hat{J}_{ij}^{ab}$ with sequences near this threshold, i.e., $\min\left(f(\sigma_{/(i,a)/(j,b)}^{W}),f(\sigma_{/(i,a)}^{W}),f(\sigma_{/(j,b)}^{W})\right)<-4.5$. \subsection*{Sample variance of fitness and epistasis} Since estimated fitness is asymptotically Gaussian, the sample variance of fitness is given by the inverse curvature of the log-likelihood surface. Replacing $p^{\prime}$ with $e^{f^{\prime}}$ in eq.~\ref{eq:LL}, the asymptotic variance of $f^{\prime}$ is $-\left(\frac{\partial^{2}LL}{\partial f^{\prime2}}\right)^{-1}$.
The variance of the fitness estimate, as defined in eq.~\ref{eq:fitness}, is the sum of the focal and wild-type variances \begin{equation} v_{f}(\sigma)=\frac{n_{0}(\sigma)+n_{1}(\sigma)}{n_{0}(\sigma)n_{1}(\sigma)}+\frac{n_{0}(\sigma^{W})+n_{1}(\sigma^{W})}{n_{0}(\sigma^{W})n_{1}(\sigma^{W})}, \end{equation} and the variance in epistasis is the sum of the single and double mutant variances \[ v_{J_{ij}^{ab}}=v_{f}(\sigma_{/(i,a)/(j,b)}^{W})+v_{f}(\sigma_{/(i,a)}^{W})+v_{f}(\sigma_{/(j,b)}^{W}). \] The variance is then averaged over amino acids $a,b$ for each position $i,j$.
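The variance propagation above is a short computation; as a Python sketch (function names are ours):

```python
def fitness_variance(n0, n1, n0_wt, n1_wt):
    # Asymptotic sampling variance of fitness: focal-variant term
    # plus wild-type term, each of the form (n0 + n1) / (n0 * n1).
    return (n0 + n1) / (n0 * n1) + (n0_wt + n1_wt) / (n0_wt * n1_wt)

def epistasis_variance(v_double, v_a, v_b):
    # Variance of pairwise epistasis: sum of the double-mutant and
    # the two single-mutant fitness variances.
    return v_double + v_a + v_b
```

These per-pair variances are what normalize the deviation $z_{ij}$ between predicted and observed epistasis.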
\section{Extended Abstract} Each abstract should be submitted on white A4 paper. The fully justified text should be formatted in two parallel columns, each 8.25 cm wide, and separated by a space of 0.63 cm. Left, right, and bottom margins should be 1.9 cm and the top margin 2.5 cm. The font for the main body of the text should be Times New Roman 10 with interlinear spacing of 11 pt. \subsection{General Instructions for the Submitted Abstract} Each submitted abstract should be between \ul{a minimum of three and a maximum of four pages including figures}. \section{Paper} Each manuscript should be submitted on white A4 paper. The fully justified text should be formatted in two parallel columns, each 8.25 cm wide, and separated by a space of 0.63 cm. Left, right, and bottom margins should be 1.9 cm and the top margin 2.5 cm. The font for the main body of the text should be Times New Roman 10 with interlinear spacing of 12 pt. Articles must be between 4 and 8 pages in length, regardless of the mode of presentation (oral or poster). \subsection{General Instructions for the Final Paper} Each paper is allocated between \ul{a minimum of four and a maximum of eight pages including figures}. The unprotected PDF files will appear in the on-line proceedings directly as received. Do not print the page number. \section{Page Numbering} \textbf{Please do not include page numbers in your article.} The definitive page numbering of articles published in the proceedings will be decided by the organising committee. \section{Headings / Level 1 Headings} Headings should be capitalised in the same way as the main title, and centred within the column. The font used is Times New Roman 12 bold. There should also be a space of 12 pt between the title and the preceding section, and a space of 3 pt between the title and the text following it.
\subsection{Level 2 Headings} The format for level 2 headings is the same as for level 1 Headings, with the font Times New Roman 11, and the heading is justified to the left of the column. There should also be a space of 6 pt between the title and the preceding section, and a space of 3 pt between the title and the text following it. \subsubsection{Level 3 Headings} The format for level 3 headings is the same as for level 2 headings, except that the font is Times New Roman 10, and there should be no space left between the heading and the text. There should also be a space of 6 pt between the title and the preceding section, and a space of 3 pt between the title and the text following it. \section{Citing References in the Text} \subsection{Bibliographical References} All bibliographical references within the text should be put in between parentheses with the author's surname followed by a comma before the date of publication \cite{Martin-90}. If the sentence already includes the author's name, then it is only necessary to put the date in parentheses: \newcite{Martin-90}. When several authors are cited, those references should be separated with a semicolon: \cite{Martin-90,CastorPollux-92}. When the reference has more than three authors, only cite the name of the first author followed by ``et al.'' (e.g. \cite{Superman-Batman-Catwoman-Spiderman-00}). \subsection{Language Resource References} \subsubsection{When Citing Language Resources} When citing language resources, we recommend to proceed in the same way to bibliographical references, except that, in order to make them appear in a separate section, you need to use the \texttt{\\citelanguageresource} tag. Thus, a language resource should be cited as \citelanguageresource{speecon}. 
\subsubsection{When Not Citing Any Language Resource} When no language resource needs to be cited in the paper, you need to comment out a few lines in the \texttt{.tex} file: \begin{verbatim} \end{verbatim} \section{Figures \& Tables}\label{sec:figures} \subsection{Figures} All figures should be centred and clearly distinguishable. They should never be drawn by hand, and the lines must be very dark in order to ensure a high-quality printed version. Figures should be numbered in the text, and have a caption in Times New Roman 10 pt underneath. A space must be left between each figure and its respective caption. \ref{fig.1} \ref{lr:ref} \ref{main:ref} Section \ref{sec:figures} is not good Example of a figure enclosed in a box: \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{image1.eps} \caption{The caption of the figure.} \label{fig.1} \end{center} \end{figure} Figure and caption should always appear together on the same page. Large figures can be centred, using a full page. \subsection{Tables} The instructions for tables are the same as for figures. \begin{table}[!h] \begin{center} \begin{tabularx}{\columnwidth}{|l|X|} \hline Level&Tools\\ \hline Morphology & Pitrat Analyser\\ \hline Syntax & LFG Analyser (C-Structure)\\ \hline Semantics & LFG F-Structures + Sowa's\\ & Conceptual Graphs\\ \hline \end{tabularx} \caption{The caption of the table} \end{center} \end{table} \section{Footnotes} Footnotes are indicated within the text by a number in superscript\footnote{Footnotes should be in Times New Roman 9 pt, and appear at the bottom of the same page as their corresponding number. Footnotes should also be separated from the rest of the text by a horizontal line 5 cm long.}. \section{Copyrights} The Lan\-gua\-ge Re\-sour\-ce and Evalua\-tion Con\-fe\-rence (LREC) proceedings are published by the European Language Resources Association (ELRA). They are available online from the conference website. 
ELRA's policy is to acquire copyright for all LREC contributions. In assigning your copyright, you are not forfeiting your right to use your contribution elsewhere. This you may do without seeking permission and is subject only to normal acknowledgement to the LREC proceedings. The LREC 2018 Proceedings are licensed under CC-BY-NC, the Creative Commons Attribution-Non-Commercial 4.0 International License. \section{Conclusion} Your submission of a finalised contribution for inclusion in the LREC proceedings automatically assigns the above-mentioned copyright to ELRA. \section{Acknowledgements} Place all acknowledgements (including those concerning research grants and funding) in a separate section at the end of the article. \section{Providing References} \subsection{Bibliographical References} Bibliographical references should be listed in alphabetical order at the end of the article. The title of the section, ``Bibliographical References'', should be a level 1 heading. The first line of each bibliographical reference should be justified to the left of the column, and the rest of the entry should be indented by 0.35 cm. The examples provided in Section \secref{main:ref} (some of which are fictitious references) illustrate the basic format required for articles in conference proceedings, books, journal articles, PhD theses, and chapters of books. \subsection{Language Resource References} Language resource references should be listed in alphabetical order at the end of the article, in the \textbf{Language Resource References} section, placed after the \textbf{Bibliographical References} section. The title of the ``Language Resource References'' section, should be a level 1 heading. The first line of each language resource reference should be justified to the left of the column, and the rest of the entry should be indented by 0.35 cm. The example in Section \secref{lr:ref} illustrates the basic format required for language resources. 
In order to be able to cite a language resource, it must be added to the \texttt{.bib} file first, as a \texttt{@LanguageResource} item type, which contains the following fields: \begin{itemize} \item{\texttt{author}: the builder of the resource} \item{\texttt{title}: the name of the resource} \item{\texttt{publisher}: the publisher of the resource (project, organisation etc)} \item{\texttt{year}: year of the resource release} \item{\texttt{series}: more general resource set this language resource belongs to} \item{\texttt{edition}: version of the resource} \item{\texttt{islrn}: the International Standard Language Resource Number (ISLRN) of the resource\footnote{The ISLRN number is available from \texttt{http://islrn.org}}} \end{itemize} If you want the full resource author name to appear in the citation, the language resource author name should be protected by enclosing it between \texttt{\{...\}}, as shown in the model \texttt{.bib} file. \vspace{.3\baselineskip} \section*{Appendix: How to Produce the \texttt{.pdf} Version} In order to generate a PDF file out of the LaTeX file herein, when citing language resources, the following steps need to be performed: \begin{itemize} \item{Compile the \texttt{.tex} file once} \item{Invoke \texttt{bibtex} on the eponymous \texttt{.aux} file} \item{Invoke \texttt{bibtex} on the \texttt{languageresources.aux} file} \item{Compile the \texttt{.tex} file twice} \end{itemize} \section{Bibliographical References} \label{main:ref} \bibliographystyle{lrec} \section{Introduction} Semantic parsing is the task of assigning meaning representations to natural language expressions. Informally speaking, a meaning representation describes \emph{who did what to whom, when, and where, and to what extent this is the case or not}. The availability of open-domain, wide coverage semantic parsers has the potential to add new functionality, such as detecting contradictions, verifying translations, and getting more accurate search results. 
Current research on open-domain semantic parsing focuses on supervised learning methods, using large semantically annotated corpora as training data. However, few such annotated corpora are available. We present a parallel corpus annotated with formal meaning representations for English, Dutch, German, and Italian, and a way to evaluate the quality of machine-generated meaning representations by comparing them to gold standard annotations. Our work shows many similarities with recent annotation and parsing efforts around Abstract Meaning Representations (AMR; Banarescu et al., 2013\nocite{amr:13}) in that we abstract away from syntax, use first-order meaning representations, and use an adapted version of \textsc{smatch}{} \cite{smatch:13} for evaluation. However, we deviate from AMR on several points: meanings are represented by scoped meaning representations (arriving at a more linguistically motivated treatment of modals, negation, presupposition, and quantification), and the non-logical symbols that we use are grounded in WordNet (concepts) and VerbNet (thematic roles), rather than PropBank \cite{propbank:05}. We also provide a syntactic analysis in the annotated corpus, in order to derive the semantic analyses in a compositional way. We make the following contributions: \begin{itemize}\itemsep0mm \item A meaning representation with explicit scopes that combines WordNet and VerbNet with elements of formal logic (Section~\ref{sec:smr}). \item A gold standard annotated parallel corpus of formal meaning representations for four languages (Section~\ref{sec:PMB}). \item A tool that compares two scoped meaning representations for the purpose of evaluation (Section~\ref{sec:dmatch} and Section~\ref{sec:usingdmatch}).
\end{itemize} \begin{figure*} \hspace*{-3mm} \begin{tabular}{c@{\kern3.5mm}c@{\kern-1.5mm}c} \textsmaller[1]{24/3221:~~\sym{No~one~can~resist.\phantom{y}}} & \textsmaller[1]{00/2302:~~\sym{\grave{E}~tutto~nuovo.}} & \textsmaller[1]{00/3008:~~\sym{Hij~speelde~piano~en~zij~zong.}} \\[-1mm] \scalebox{1}{ \renewcommand\arraystretch{1.3} \drs[t]{}{\textlarger[2]{$\neg$}~\drs{$x_1$}{ person$\sym{.n.01}$$(x_1)$\\ \textlarger[2]{$\Diamond$}~\drs{$e_1$}{ resist$\sym{.v.02}$$(e_1)$\\ \hspace{3mm}$\sym{Agent}(e_1, x_1)$ }} }} & \scalebox{1}{ \renewcommand\arraystretch{1.3} \drs[t]{}{\drs{$x_1$}{thing$\sym{.n.12}(x_1)$} $\Rightarrow$ \drs{$s_1$~~$t_1$}{ new$\sym{.a.01}(s_1)$\\ \hspace{3mm}$\sym{Time}(s_1, t_1)$\\ \hspace{3mm}$\sym{Theme}(s_1, x_1)$\\ $\sym{time.n.08}(t_1)$\\ \hspace{3mm}$t_1 = \sym{now}$ } }} & \scalebox{.9}{ \renewcommand\arraystretch{1.15} \sdrs[t]{}{$k_1$ :: \drs{$x_1$~~$x_2$~~$e_1$~~$t_1$}{ male$\sym{.n.02}(x_1)$\\ play$\sym{.v.03}(e_1)$\\ \hspace{3mm}$\sym{Time}(e_1, t_1)$\\ \hspace{3mm}$\sym{Theme}(e_1, x_2)$\\ \hspace{3mm}$\sym{Agent}(e_1, x_1)$\\ $\sym{time.n.08}(t_1)$\\ \hspace{3mm}$t_1 \prec \sym{now}$\\ piano$\sym{.n.01}(x_2)$ } ~ $k_2$ :: \drs{$x_3$~~$e_2$~~$t_2$}{ female$\sym{.n.02}(x_3)$\\ $\sym{time.n.08}(t_2)$\\ \hspace{3mm}$t_2 \prec \sym{now}$\\ sing$\sym{.v.01}(e_2)$\\ \hspace{3mm}$\sym{Time}(e_2, t_2)$\\ \hspace{3mm}$\sym{Agent}(e_2, x_3)$ } }{$\sym{CONTINUATION}(k_1,k_2)$} } \\ \textsmaller[1]{\texttt{ \begin{tabular}[t]{l}\toprule \drgvar{k0} NOT \drgvar{b2}\\ \drgvar{b2} REF \drgvar{x1}\\ \drgvar{b2} person \posn{n}{01} \drgvar{x1}\\ \drgvar{b2} POS \drgvar{b3}\\ \drgvar{b3} Agent \drgvar{e1} \drgvar{x1}\\ \drgvar{b3} REF \drgvar{e1}\\ \drgvar{b3} resist \posn{v}{02} \drgvar{e1}\\ \bottomrule \end{tabular} }} & \textsmaller[1]{\texttt{ \begin{tabular}[t]{l}\toprule \drgvar{k0} IMP \drgvar{b2} \drgvar{b3}\\ \drgvar{b2} REF \drgvar{x1}\\ \drgvar{b2} thing \posn{n}{12} \drgvar{x1}\\ \drgvar{b3} REF \drgvar{s1}\\ \drgvar{b3} Theme 
\drgvar{s1} \drgvar{x1}\\ \drgvar{b3} new \posn{a}{01} \drgvar{s1}\\ \drgvar{b3} Time \drgvar{s1} \drgvar{t1}\\ \drgvar{b4} REF \drgvar{t1}\\ \drgvar{b4} time \posn{n}{08} \drgvar{t1}\\ \drgvar{b4} EQU \drgvar{t1} "now"\\ \bottomrule \end{tabular} }} & \textsmaller[1]{\texttt{ \begin{tabular}[t]{l@{\kern5mm}l}\toprule \drgvar{k0} DRS \drgvar{k1} & \drgvar{k0} DRS \drgvar{k2}\\ \drgvar{b1} REF \drgvar{x1} & \drgvar{b4} REF \drgvar{x3}\\ \drgvar{b1} male \posn{n}{02} \drgvar{x1} & \drgvar{b4} female \posn{n}{02} \drgvar{x3}\\ \drgvar{k1} REF \drgvar{e1} & \drgvar{k2} REF \drgvar{e2}\\ \drgvar{k1} play \posn{v}{03} \drgvar{e1} & \drgvar{k2} sing \posn{v}{01} \drgvar{e2}\\ \drgvar{k1} Agent \drgvar{e1} \drgvar{x1} & \drgvar{k2} Agent \drgvar{e2} \drgvar{x3}\\ \drgvar{k1} Theme \drgvar{e1} \drgvar{x2} & \drgvar{b5} REF \drgvar{t2}\\ \drgvar{k1} REF \drgvar{x2} & \drgvar{b5} time \posn{n}{08} \drgvar{t2}\\ \drgvar{k1} piano \posn{n}{01} \drgvar{x2} & \drgvar{b5} TPR \drgvar{t2} "now"\\ \drgvar{b3} REF \drgvar{t1} & \drgvar{k2} Time \drgvar{e2} \drgvar{t2}\\ \drgvar{b3} time \posn{n}{08} \drgvar{t1} & \drgvar{k0} CONTINUATION \drgvar{k1} \drgvar{k2}\\ \drgvar{b3} TPR \drgvar{t1} "now" & \drgvar{k1} Time \drgvar{e1} \drgvar{t1} \\ \bottomrule \end{tabular} }} \end{tabular} \caption{Examples of PMB documents with their scoped meaning representations and the corresponding clausal form. The first two structures are basic DRSs while the last one is a segmented DRS.} \label{fig:drs-scopes} \end{figure*} \section{Scoped Meaning Representations}\label{sec:smr} \subsection{Discourse Representation Structures}\label{ssec:drs} The backbone of the meaning representations in our annotated corpus is formed by the Discourse Representation Structures (DRS) of Discourse Representation Theory \cite{kampreyle:drt}. 
Our version of DRS integrates WordNet senses \cite{wordnet}, adopts a neo-Davidsonian analysis of events employing VerbNet roles \cite{Bonial:11}, and includes an extensive set of comparison operators. More formally, a DRS is an ordered pair of a set of variables (discourse referents) and a set of conditions. There are basic and complex conditions. Terms are either variables or constants, where the latter ones are used to account for indexicals \cite{indexicals:17}. Basic conditions are defined as follows: \begin{itemize}\itemsep0mm \item If W is a symbol denoting a WordNet concept and x is a term, then W(x) is a basic condition; \item If V is a symbol denoting a thematic role and x and y are terms, then V(x,y) is a basic condition; \item If x and y are terms, then x$=$y, x$\neq$y, x$\sim$y, x$<$y, x$\leq$y, x$\prec$y, and x$\bowtie$y are basic conditions formed with comparison operators. \end{itemize} WordNet concepts are represented as word\sym{.POS.SenseNum}, denoting a unique synset within WordNet. Thematic roles, including the VerbNet roles, always have two arguments and start with an uppercase character. Complex conditions introduce scopes in the meaning representation. They are defined using logical operators as follows: \begin{itemize}\itemsep0mm \item If B is a DRS, then $\lnot$B, $\Diamond$B, $\Box$B are complex conditions; \item If x is a variable, and B is a DRS, then x:B is a complex condition; \item If B and B' are DRSs, then B$\Rightarrow$B' and B$\lor$B' are complex conditions. \end{itemize} Besides basic DRSs, we also have segmented DRSs, following \newcite{asher:drt} and \newcite{asherlascarides}. 
Hence, DRSs are formally defined as follows: \begin{itemize}\itemsep0mm \item If D is a (possibly empty) set of discourse referents, and C a (possibly empty) set of DRS-conditions, then $<$D,C$>$ is a (basic) DRS; \item If B is a (basic) DRS, and B' a DRS, then B$\downarrow$B' is a (segmented) DRS; \item If U is a set of labelled DRSs, and R a set of discourse relations, then $<$U,R$>$ is a (segmented) DRS. \end{itemize} DRSs can be visualized in different ways. While the compact linear format saves space, the box notation increases readability. In this paper we use the latter notation. The examples of DRSs in the box notation are presented in Figure\,\ref{fig:drs-scopes}. However, for evaluation and comparison purposes, we convert a DRS into a flat clausal form, i.e. a set of clauses. This is carried out by using the labels for DRSs as introduced in \newcite{venhuizen2015PhDthesis} and \newcite{venhuizen2018discourse}, and breaking down the recursive structure of a DRS by assigning each condition the label of the DRS in which it appears. Let t, t', and t'' be meta-variables ranging over DRSs or terms. Let $\cal{C}$ be a set of WordNet concepts, $\cal{T}$ a set of thematic roles, and $\cal{O}$ the set of DRS operators (REF, NOT, POS, NEC, EQU, NEQ, APX, LES, LEQ, TPR, TAB, IMP, DIS, PRP, DRS). The resulting clauses are then of the form \mbox{t R t'} or \mbox{t R t' t''} where R $\in \cal{C}\cup\cal{T}\cup\cal{O}$. The result of translating DRSs to sets of clauses is shown in Figure\,\ref{fig:drs-scopes}. In a clausal form, it is assumed that different variables have different names and vice versa. Hence, before translating a DRS to a clausal form, its discourse referents must be given distinct variable names. This assumption significantly simplifies the matching process between clausal forms (Section \ref{sec:dmatch}) and makes it possible to recover the original box notation of a DRS from its clausal form.
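As an aside for implementers, a clausal form can be handled programmatically as a set of tuples. The sketch below is our own illustration (the helper name and exact token format are assumptions, not the released PMB tooling); it parses whitespace-separated clause lines like those in Figure\,\ref{fig:drs-scopes} and enforces the three-or-four-component shape:

```python
# Illustrative sketch: a clausal form as a set of tuples.
# The helper name and token format are assumptions, not PMB tooling.

def parse_clausal_form(text):
    """Parse whitespace-separated clause lines into a set of tuples,
    checking that each clause has three or four components."""
    clauses = set()
    for line in text.strip().splitlines():
        parts = tuple(line.split())
        if len(parts) not in (3, 4):
            raise ValueError("clause must have 3 or 4 components: %r" % (line,))
        clauses.add(parts)
    return clauses

# Clausal form of "No one can resist." (document 24/3221 in Figure 1):
example = """
k0 NOT b2
b2 REF x1
b2 person n.01 x1
b2 POS b3
b3 Agent e1 x1
b3 REF e1
b3 resist v.02 e1
"""
clauses = parse_clausal_form(example)
```

Storing clauses as a set mirrors the set-of-clauses definition above and makes the intersection-based comparison used for evaluation straightforward.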
\subsection{Comparing DRSs to AMRs}\label{ssec:drs_vs_amr} Since DRSs in a clausal form come close to the triple notation of AMRs \cite{smatch:13}, and both aim to model the meaning of natural language expressions, it is instructive to compare these two meaning representations. The main difference between AMRs and DRSs is that the latter have explicit scopes (boxes) and scopal operators such as negation. Due to the presence of scope in DRSs, their clauses are more complex than AMR triples. The length of DRS clauses varies from three to four, in contrast to the constant length of AMR triples. Additionally, DRS clauses contain two different types of variables, for scopes and discourse referents, whereas AMR triples have just one type. Unlike AMRs, DRSs model tense. In general, tense-related information is encoded in a clausal form with three additional clauses, expressing a WordNet concept, a semantic role, and a comparison operator. In order to give an intuition about the diversity of clauses in DRSs, Table~\ref{tab:dist} shows the distribution of clause types in a corpus of DRSs (see Section\,\ref{sec:PMB}). Since every logical operator introduces a scope, their count is a lower bound on the number of scopes in the meaning representations. In addition to logical operators, scopes are introduced by presupposition triggers like proper names or pronouns.
\begin{table}[t] \centering \caption{\label{tab:dist}Distribution of clause types for 2,049 gold DRSs.} \scalebox{.92}{ \begin{tabular}{@{\,}l@{~~}l@{~~}l@{~}r@{\,}} \toprule \textbf{Type} & \textbf{Description} & \textbf{Example} & \textbf{Total} \\ \midrule REF & Discourse referent & \ntt{\drgvar{b3} REF \drgvar{x2}} & 7,592 \\ NOT & Negation & \ntt{\drgvar{b1} NOT \drgvar{b2}} & 204 \\ POS & Possibility ($\Diamond$) & \ntt{\drgvar{b4} POS \drgvar{b5}} & 55 \\ NEC & Necessity ($\Box$) & \ntt{\drgvar{b2} NEC \drgvar{b3}} & 14 \\ IMP & Implication ($\Rightarrow$) & \ntt{\drgvar{b1} IMP \drgvar{b2} \drgvar{b3}} & 104 \\ PRP & Proposition ($:$) & \ntt{\drgvar{b1} PRP \drgvar{x6}} & 50 \\ REL & Discourse relation & \ntt{\drgvar{b1} CONTINUATION \drgvar{b2}} & 71 \\ DRS & DRS as a condition & \ntt{\drgvar{b4} DRS \drgvar{b5}} & 84 \\ Compare & Comparison operators & \ntt{\drgvar{x1} APX \drgvar{x2}} & 2,100 \\ Concept & WordNet senses & \ntt{\drgvar{b2} hurt \posn{v}{02} \drgvar{e3}} & 7,54 \\ Role & Semantic roles & \ntt{\drgvar{b2} Agent \drgvar{e3} \drgvar{x4}} & 7,516 \\ \bottomrule \end{tabular} } \end{table} To make a meaningful comparison between AMRs and DRSs in terms of size, we compare the DRSs of 250,000 English sentences from the Parallel Meaning Bank (PMB; Abzianidze et al., 2017\nocite{PMBshort:2017}) to AMRs of the same sentences, produced by the state-of-the-art AMR parser from \newcite{clinAMR:17}. Statistics of the comparison are shown in Figure\,\ref{fig:senlength}. On average, DRSs are about twice as large as AMRs, in terms of the number of clauses as well as the number of unique variables. This is obviously due to the explicit presence of scope in the meaning representation. However, for both meaning representations the numbers of clauses and variables increase linearly with sentence length.
\begin{figure}[h] \centering \includegraphics[scale=0.445]{amr_drs_comparison.pdf} \caption{\label{fig:senlength}Comparison of the number of triples/clauses and variables between AMRs and DRSs for sentences of different lengths.} \end{figure} \begin{figure*} \centering {\fboxsep=0mm \fboxrule=2pt \fcolorbox{pmbBlue}{white}{\includegraphics[scale=0.32]{pmb_IST_ed_cursor}} } \caption{The edit mode of the PMB explorer: semantic tag ($\mathtt{sem}$) and symbol ($\mathtt{sym}$) layers of the document are bronze and therefore editable, while the word sense ($\mathtt{sns}$), semantic role ($\mathtt{rol}$) and CCG category ($\mathtt{cat}$) layers are gold and uneditable.} \label{fig:explorer} \end{figure*} \section{The Parallel Meaning Bank} \label{sec:PMB} The scoped meaning representations, integrating word senses, thematic roles, and the list of operators, form the final product of our semantically annotated corpus: the Parallel Meaning Bank. The PMB is a semantically annotated corpus of English texts aligned with translations in Dutch, German and Italian \cite{PMBshort:2017}. It uses the same framework as the Groningen Meaning Bank \cite{Bos2017GMB}, but aims to abstract away from language-specific annotation models. There are five annotation layers present in the PMB: segmentation of words, multi-word expressions and sentences \cite{elephant}, semantic tagging \cite{Bjervaetal:16,semantic-tagset:17}, syntactic analysis based on CCG \cite{lewisSteedman:14}, word senses based on WordNet \cite{wordnet}, and thematic role labelling \cite{BosEvangNissim2012}. The semantic analysis for English is projected onto the other languages, to save manual annotation effort \cite{evang-thesis,evang2016}. All the information provided by these layers is combined into a single meaning representation using the semantic parser Boxer \cite{boxer}, in the form of Discourse Representation Structures.
Note that the goal is to produce annotations that capture the most probable interpretation of a sentence; no ambiguities or under-specification techniques are employed. \begin{table}[t] \centering \caption{Statistics of the first PMB release.\label{tab:release}} \begin{tabular}{lrrr} \toprule & \textbf{Documents} & \textbf{Sentences} & \textbf{Tokens} \\ \midrule \textbf{English} & 2,049 & 2,057 & 11,664 \\ \textbf{German} & 641 & 642 & 3,430 \\ \textbf{Italian} & 387 & 387 & 1,944 \\ \textbf{Dutch} & 394 & 395 & 2,268 \\ \bottomrule \end{tabular} \end{table} At each step in this pipeline, a single component produces the automatic annotation for all four languages, using language-specific models. Human annotators can correct machine output by adding `Bits of Wisdom' \cite{gmb:eacl}. These corrections serve as data for training better models, and create a gold standard annotated subset of the data. Annotation quality is defined per layer and language, at three levels: bronze (fully automatic), silver (automatic with some manual corrections), and gold (fully manually checked and corrected). If all layers are marked as gold, it follows that the resulting DRS can be considered gold standard, too. The first public release\footnote{\url{http://pmb.let.rug.nl/data.php}} of the PMB contains gold standard scoped meaning representations for over 3,000 sentences in total (see Table~\ref{tab:release}). The release includes mainly relatively short sentences involving several semantic scope phenomena. A detailed distribution of clause types in the dataset is given in Table\,\ref{tab:dist}. A larger amount of texts and more complex linguistic phenomena will be included in future releases. 
In addition to the released data, the PMB documents are publicly accessible through a web interface called the PMB explorer.\footnote{\url{http://pmb.let.rug.nl/explorer}} In the explorer, visitors can view natural language texts with several layers of annotations and compositionally derived meaning representations, and, after registration, edit the annotations. It is also possible to search for particular words, phrases, or constructions and inspect their semantic analyses. Figure~\ref{fig:explorer} shows the PMB explorer with the semantic analysis of a sentence in the edit mode. \section{Matching Scoped Representations} \label{sec:dmatch} \subsection{Evaluation by Matching} \label{ssec:eval-by-match} In the context of the Parallel Meaning Bank there are two main reasons to verify whether two scoped meaning representations capture the same meaning or not: (1) to be able to evaluate semantic parsers that produce scoped meaning representations by comparing gold-standard DRSs to system output; and (2) to check whether translations are meaning-preserving; a discrepancy in meaning between source and target could indicate a mistranslation. The ideal way to compare two meaning representations would be one based on inference. This can be implemented by translating DRSs to first-order formulas and using an off-the-shelf theorem prover to find out whether the two meanings are logically equivalent \cite{blackburnbos:2005}. This method can compare meaning representations that have different syntactic structures but still are equivalent in meaning. The disadvantage of this approach is that it yields just a binary answer: if a proof is found, the meanings are the same; otherwise they are not. An alternative way of comparing meaning representations is comparing the corresponding clausal forms by computing precision and recall over matched clauses \cite{allen:step2008}.
The advantage of this approach is that it returns a score between 0 and 1, preferring meaning representations that better approximate the gold standard over those that are completely different. Since the variables of different clausal forms are independent of each other, the comparison of two clausal forms boils down to finding a (partial) one-to-one variable mapping that maximizes the intersection of the clausal forms. For example, the maximal matching for the clausal forms in Figure \ref{fig:drs-compare} is achieved by the following partial mapping from the variables of the left form into the variables of the right one: \{\ntt[0]{k0}$\mapsto$\ntt[0]{b0}, \ntt[0]{e1}$\mapsto$\ntt[0]{v1}\}. For AMRs, finding a maximal matching is done using a hill-climbing algorithm called \textsc{smatch}{} \cite{smatch:13}. This algorithm is based on a simple principle: it checks whether a single change in the current mapping results in a better match. If this is the case, it continues with the new mapping. Otherwise, the algorithm stops and has arrived at the final mapping. This means that it can easily get stuck in local optima. To avoid this, \textsc{smatch}{} performs a predefined number of restarts of this process, each starting from a new random initial mapping. The first restart always uses a `smart' initial mapping, based on matching concepts.
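To make the procedure concrete, the sketch below is our own simplified illustration of clause matching with greedy improvement and restarts; it is not the actual \textsc{smatch} or evaluation-tool code. Moves are restricted to swapping the images of two variables (which keeps the mapping one-to-one), and the initial mappings are passed in explicitly rather than generated randomly:

```python
import itertools

def count_matched(clauses1, clauses2, mapping):
    """Number of clauses of clauses1 found in clauses2 after renaming
    the variables of clauses1 according to the one-to-one mapping."""
    renamed = {tuple(mapping.get(t, t) for t in c) for c in clauses1}
    return len(renamed & clauses2)

def greedy_improve(clauses1, clauses2, mapping):
    """One hill-climbing run: swap the images of two variables as long
    as a swap increases the number of matched clauses."""
    keys = sorted(mapping)
    score = count_matched(clauses1, clauses2, mapping)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(keys, 2):
            mapping[a], mapping[b] = mapping[b], mapping[a]
            new = count_matched(clauses1, clauses2, mapping)
            if new > score:
                score, improved = new, True
            else:  # undo a non-improving swap
                mapping[a], mapping[b] = mapping[b], mapping[a]
    return score

def best_match(clauses1, clauses2, starts):
    """Keep the best score over several initial mappings (restarts)."""
    return max(greedy_improve(clauses1, clauses2, dict(s)) for s in starts)

def f_score(matched, n_produced, n_gold):
    p = matched / n_produced if n_produced else 0.0
    r = matched / n_gold if n_gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Toy example: the same meaning with different variable names; the
# starting mapping has the images of x1 and e1 swapped, and one greedy
# swap recovers the full match.
gold = {("b1", "male", "n.02", "x1"),
        ("b1", "Agent", "e1", "x1"),
        ("b1", "smile", "v.01", "e1")}
produced = {("k0", "male", "n.02", "v2"),
            ("k0", "Agent", "v1", "v2"),
            ("k0", "smile", "v.01", "v1")}
start = {"b1": "k0", "x1": "v1", "e1": "v2"}
best = best_match(gold, produced, [start])
score = f_score(best, len(gold), len(produced))
```

Precision and recall are then computed from the best matched-clause count, exactly as in the triple-matching scheme described above.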
\begin{figure}[t] \centering \begin{tabular}{@{}c@{}@{}c@{}} \textsmaller[1]{01/3445:~~\sym{He~smiled.}} & \textsmaller[1]{00/3514:~~\sym{She~fled~Australia.}} \\[-3mm] \scalebox{1}{ \begin{tabular}[t]{@{}c@{}} \renewcommand\arraystretch{1.1} \drs[t]{$x_1$ ~$e_1$ ~$t_1$}{ male$\sym{.n.02}(x_1)$\\ smile$\sym{.v.01}(e_1)$\\ \hspace{3mm}$\sym{Time}(e_1, t_1)$\\ \hspace{3mm}$\sym{Agent}(e_1, x_1)$\\ $\sym{time.n.08}(t_1)$\\ \hspace{3mm}$t_1 \prec \sym{now}$ }\\ \raisebox{-3mm}{\fbox{\textsc{Spar}{} DRS}} \end{tabular}} & \scalebox{1}{ \drs[t]{$x_1$ ~$x_2$ ~$v_1$ ~$t_1$}{ female$\sym{.n.02}(x_1)$\\ flee$\sym{.v.01}(v_1)$\\ \hspace{3mm}$\sym{Time}(v_1, t_1)$\\ \hspace{3mm}$\sym{Source}(v_1, x_2)$\\ \hspace{3mm}$\sym{Theme}(v_1, x_1)$\\ $\sym{time.n.08}(t_1)$\\ \hspace{3mm}$t_1 \prec \sym{now}$\\ country$\sym{.n.02}(x_2)$\\ \hspace{3mm}$\sym{Name}(x_2,\text{australia})$ }} \\[-3mm] \textsmaller[1]{\texttt{ \begin{tabular}[t]{@{\,}l@{\,}}\toprule \strout{\matched{\drgvar{b1} REF \drgvar{x1}}}\\ \nonmatched{\drgvar{b1} male \posn{n}{02} \drgvar{x1}}\\ \strout{\matched{\drgvar{b3} REF \drgvar{t1}}}\\ \matched{\drgvar{b3} TPR \drgvar{t1} "now"}\\ \matched{\drgvar{b3} time \posn{n}{08} \drgvar{t1}}\\ \nonmatched{\drgvar{k0} Agent \drgvar{e1} \drgvar{x1}}\\ \strout{\matched{\drgvar{k0} REF \drgvar{e1}}}\\ \matched{\drgvar{k0} Time \drgvar{e1} \drgvar{t1}}\\ \nonmatched{\drgvar{k0} smile \posn{v}{01} \drgvar{e1}}\\ \bottomrule \end{tabular} }} & \textsmaller[1]{\texttt{ \begin{tabular}[t]{@{\,}l@{\,}}\toprule \strout{\matched{\drgvar{b1} REF \drgvar{x1}}}\\ \nonmatched{\drgvar{b1} female \posn{n}{02} \drgvar{x1}}\\ \strout{\matched{\drgvar{b3} REF \drgvar{t1}}}\\ \matched{\drgvar{b3} TPR \drgvar{t1} "now"}\\ \matched{\drgvar{b3} time \posn{n}{08} \drgvar{t1}}\\ \nonmatched{\drgvar{b0} Theme \drgvar{v1} \drgvar{x1}}\\ \nonmatched{\drgvar{b0} Source \drgvar{v1} \drgvar{x2}}\\ \strout{\matched{\drgvar{b0} REF \drgvar{v1}}}\\ \matched{\drgvar{b0} Time \drgvar{v1} \drgvar{t1}}\\ 
\nonmatched{\drgvar{b0} flee \posn{v}{01} \drgvar{v1}}\\ \strout{\nonmatched{\drgvar{b2} REF \drgvar{x2}}}\\ \nonmatched{\drgvar{b2} Name \drgvar{x2} "australia"}\\ \nonmatched{\drgvar{b2} country \posn{n}{02} \drgvar{x2}}\\ \bottomrule \end{tabular} }}\\ \end{tabular} \caption{The \textsc{Spar}{} DRS (Section\,\ref{ssec:sempar}) matches the DRS of 00/3514 PMB document with an F-score of 54.5\%. If redundant REF-clauses are ignored, the F-score drops to 40\%. These results are achieved with the help of the mapping \{\ntt[0]{k0}$\mapsto$\ntt[0]{b0}, \ntt[0]{e1}$\mapsto$\ntt[0]{v1}\}. } \label{fig:drs-compare} \end{figure} Our evaluation system, called \mbox{\textsc{counter}}{}\footnote{\url{http://github.com/RikVN/DRS_parsing/}}, is a modified version of \textsc{smatch}{}. Even though clausal forms do not form a graph and clauses consist of either three or four components, the principle behind the variable matching is the same. The actual implementation differs, mainly because \textsc{smatch}{} was not designed to handle clauses with three variables, e.g. $\langle$\ntt[0]{k0 Agent e1 x1}$\rangle$. In contrast to \textsc{smatch}{}, \mbox{\textsc{counter}}{} takes a set of clauses directly as input. \mbox{\textsc{counter}}{} also uses two smart initial mappings, based on either role-clauses, like $\langle$\ntt[0]{k0 Agent e1 x1}$\rangle$, or concept-clauses, like $\langle$\ntt[0]{k0 smile v.01 e1}$\rangle$. Also specific to this method is the treatment of REF-clauses in the matching process. Before matching two DRSs, redundant REF-clauses are removed. A REF-clause $\langle$\ntt[0]{b1 REF x1}$\rangle$ is redundant if its discourse referent \ntt[0]{x1} occurs in some basic condition of the same DRS \ntt[0]{b1}. Figure~\ref{fig:drs-compare} shows some examples of redundant REF-clauses. Not removing these redundant clauses would lead to inflated matching scores since for each matched variable the corresponding REF-clause will also match. 
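The removal of redundant REF-clauses can be sketched as follows. This is a simplified illustration with our own helper names, and the set of scopal operators is abbreviated:

```python
# Scopal operators whose clauses are not basic conditions (abbreviated
# list; names are ours, for illustration only).
SCOPAL = {"REF", "NOT", "POS", "NEC", "IMP", "DIS", "PRP", "DRS"}

def remove_redundant_refs(clauses):
    """Drop REF-clauses whose discourse referent also occurs in a basic
    condition (concept, role, or comparison clause) of the same box."""
    def is_redundant(box, referent):
        return any(c[0] == box and c[1] not in SCOPAL and referent in c[2:]
                   for c in clauses)
    return {c for c in clauses
            if c[1] != "REF" or not is_redundant(c[0], c[2])}

# The left-hand clausal form of Figure 4 ("He smiled."): all three
# REF-clauses are redundant and removed.
he_smiled = {("b1", "REF", "x1"), ("b1", "male", "n.02", "x1"),
             ("b3", "REF", "t1"), ("b3", "TPR", "t1", "now"),
             ("b3", "time", "n.08", "t1"), ("k0", "Agent", "e1", "x1"),
             ("k0", "REF", "e1"), ("k0", "Time", "e1", "t1"),
             ("k0", "smile", "v.01", "e1")}
reduced = remove_redundant_refs(he_smiled)

# A referent declared outside the scope of a negation is kept:
kept = remove_redundant_refs({("b1", "REF", "x1"), ("b1", "NOT", "b2"),
                              ("b2", "person", "n.01", "x1")})
```

The second call illustrates the non-redundant case: the referent is only used inside an embedded box, so its REF-clause survives the preprocessing.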
Comparison of the clausal forms in Figure\,\ref{fig:drs-compare} demonstrates this fact. Note that not all REF-clauses are redundant: if a discourse referent is declared outside the scope of negation or another scope operator, the REF-clause is kept. This is very infrequent in our data, since only a single REF-clause was preserved across the 2,049 examples. \subsection{Evaluating Matching} \label{ssec:eval-match} As we showed in Figure\,\ref{fig:senlength}, DRSs are about twice as large as AMRs. This increase in size might be problematic, since it increases the average runtime for comparing DRSs. Moreover, if there are more variables, more restarts might be needed to ensure a reliable score, again increasing runtime. Our goal is therefore for \mbox{\textsc{counter}}{} to get close to optimal performance in reasonable time. Since we want to be sure that this also holds for longer sentences, we use a balanced data set. We take 1,000 DRSs produced by the semantic parser Boxer for each sentence length from 2 to 20 (punctuation excluded), resulting in a set of 19,000 DRSs. To test \mbox{\textsc{counter}}{} in a realistic setting, we cannot compare the DRSs to themselves or to a DRS of the translation, since those are too similar. Therefore, the 19,000 English sentences underlying the DRSs are parsed by an existing AMR parser \cite{clinAMR:17} and subsequently converted into a DRS by a rule-based system, \textsc{amr2drs}{}, as motivated by \newcite{bos:16}. An example of translating an AMR to a clausal form of a DRS is shown in Figure\,\ref{fig:amr2drs}. We convert AMR relations to DRS roles by employing a manually created translation dictionary, including rules for semantic roles (e.g. \ntt[0]{:ARG0} $\mapsto$ \ntt[0]{Agent} and \ntt[0]{:ARG1} $\mapsto$ \ntt[0]{Patient}) and pronouns (e.g. \ntt[0]{she} $\mapsto$ \sym{female.n.02}).
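The dictionary-based translation step can be sketched as follows. Only the example mappings mentioned in the text and in Figure\,\ref{fig:amr2drs} are shown (the real dictionary is larger), and the default rules below are our own simplification:

```python
# Illustrative fragment of the amr2drs translation dictionaries; only
# mappings cited in the text and Figure 5 are included, and the
# fallback rules are a simplification, not the actual amr2drs code.
ROLE_MAP = {":ARG0": "Agent", ":ARG1": "Patient", ":ARG2": "Theme"}
CONCEPT_MAP = {"she": "female.n.02", "he": "male.n.02"}

def translate_role(amr_relation):
    """Map an AMR relation to a DRS thematic role; unknown relations
    fall through with the leading colon stripped."""
    return ROLE_MAP.get(amr_relation, amr_relation.lstrip(":"))

def translate_concept(amr_concept):
    """Map an AMR concept to a WordNet sense: concept-specific rules
    first, then PropBank-style frames (lemma-NN) to a first verb
    sense, otherwise a default first noun sense."""
    if amr_concept in CONCEPT_MAP:
        return CONCEPT_MAP[amr_concept]
    lemma, dash, frame = amr_concept.partition("-")
    if dash and frame.isdigit():
        return lemma + ".v.01"
    return amr_concept + ".n.01"
```

For the sentence in Figure\,\ref{fig:amr2drs}, this turns \ntt[0]{remove-01} into \sym{remove.v.01} and \ntt[0]{she} into \sym{female.n.02}.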
Since AMRs do not contain tense information, past tense clauses% \footnote{Past tense was chosen because it is the most frequent tense in the data set.} are produced for the first verb in the AMR (see four tense related clauses in Figure\,\ref{fig:amr2drs}). Also, since AMRs do not use WordNet synsets, all concepts get a default first sense, except for concepts that are added by concept-specific rules, such as \sym{female.n.02} and \sym{time.n.08}. \begin{figure}[t] \begin{tabular}{@{}c@{}c@{}} \begin{minipage}[c]{5.1cm} \textsmaller[1]{\sym{She~removed~the~dishes~from~the~table.}}\\[2mm] \begin{tabular}{c@{}c@{}} \lstset{ moredelim=[is][\bfseries\textcolor{blue}]{/*}{*/}, moredelim=[is][\bfseries]{<}{>}, moredelim=[is][\textcolor{Brown}]{"}{"} } \begin{lstlisting}[basicstyle={\ttfamily\footnotesize\bfseries}] (<r> / "remove-01" :/*ARG0*/ (<s> / "she") :/*ARG1*/ (<d> / "dish") :/*ARG2*/ (<t> / "table")) \end{lstlisting} & \raisebox{-3mm}{\resizebox{10mm}{7mm}{\textcolor{black!30}{\ding{224}}}} \end{tabular} \end{minipage} & \textsmaller[1]{\texttt{ \begin{tabular}[c]{@{\,}l@{\,}}\toprule \textcolor{Brown}{\drgvar{b0} REF \drgvar{x1}}\\ \textcolor{Brown}{\drgvar{b0} remove \posn{v}{01} \drgvar{x1}}\\ \matched{\drgvar{b4} REF \drgvar{x5}}\\ \matched{\drgvar{b4} TPR \drgvar{x5} "now"}\\ \matched{\drgvar{b4} time \posn{n}{08} \drgvar{x5}}\\ \matched{\drgvar{b0} Time \drgvar{x1} \drgvar{x5}}\\ \textcolor{blue}{\drgvar{b0} Agent \drgvar{x1} \drgvar{x2}}\\ \textcolor{Brown}{\drgvar{b1} REF \drgvar{x2}}\\ \textcolor{Brown}{\drgvar{b1} female \posn{n}{02} \drgvar{x2}}\\ \textcolor{blue}{\drgvar{b0} Patient \drgvar{x1} \drgvar{x3}}\\ \textcolor{Brown}{\drgvar{b2} REF \drgvar{x3}}\\ \textcolor{Brown}{\drgvar{b2} dish \posn{n}{01} \drgvar{x3}}\\ \textcolor{blue}{\drgvar{b0} Theme \drgvar{x1} \drgvar{x4}}\\ \textcolor{Brown}{\drgvar{b3} REF \drgvar{x4}}\\ \textcolor{Brown}{\drgvar{b3} table \posn{n}{01} \drgvar{x4}}\\ \bottomrule \end{tabular} }}% \end{tabular} \caption{A 
clausal form obtained from an automatically generated AMR of the document 14/0849.} \label{fig:amr2drs} \end{figure} We compare the sets of DRSs using different numbers of restarts to find the best trade-off between speed and accuracy. The results are shown in Table \ref{tab:results}. The optimal scores are obtained using a Prolog script that performs an exhaustive search for the optimal mapping. As expected, increasing the number of restarts benefits performance. \newcite{smatch:13} consider four restarts the optimal trade-off between accuracy and speed, showing no improvement in F-score when using more than ten restarts.\footnote{However, we found that, in practice, \textsc{smatch}{} still improves when using more restarts. Parsing the development set of the AMR dataset LDC2016E25 with the baseline parser of \newcite{clinAMR:17} yields an F-score of 55.0 for 10 restarts, but 55.4 for 100 restarts.} Contrary to \textsc{smatch}{}, performance for \mbox{\textsc{counter}}{} still increases with more than four restarts. In our case, it is a bit harder to select an optimal number of restarts, since this number depends on the length of the sentence, as shown in Figure~\ref{fig:len_comparison}. We see that for long sentences, 5 and 10 restarts are not sufficient to get close to the optimum, while for short sentences 5 restarts might be considered enough. In general, the best trade-off between speed and accuracy is approximately 20 restarts. \begin{table}[ht!] \centering \caption{\label{tab:results}Results of comparing 19,000 Boxer-produced DRSs to DRSs produced by \textsc{amr2drs}{}, for different numbers of restarts.
For three or more restarts, we always use the smart role and concept mapping.} \scalebox{.9}{ \begin{tabular}{r|ccc|r} \toprule \textbf{Restarts} & \textbf{P}\% & \textbf{R}\% & \textbf{F1}\% & \textbf{Time (h:m:s)} \\ \midrule (random) 1 & 27.20 & 22.71 & 24.75 & 4:19 \\ (smart concepts) 1 & 27.45 & 22.92 & 24.98 & 4:35 \\ (smart roles) 1 & 27.27 & 22.76 & 24.81 & 4:37 \\ 5 & 30.25 & 25.25 & 27.53 & 19:33 \\ 10 & 30.65 & 25.59 & 27.89 & 37:08 \\ 20 & 30.84 & 25.75 & 28.07 & 1:10:13 \\ 30 & 30.90 & 25.80 & 28.12 & 1:41:43 \\ 50 & 30.94 & 25.83 & 28.16 & 2:41:38 \\ 75 & 30.96 & 25.85 & 28.17 & 3:53:01 \\ 100 & 30.97 & 25.85 & 28.18 & 5:01:25 \\\midrule Optimal & 30.98 & 25.86 & 28.19 & \\\bottomrule \end{tabular} } \end{table} \begin{figure}[t] \centering \includegraphics[scale=0.445]{length_comparison.pdf} \caption{\label{fig:len_comparison}Comparison of the differences to the optimal \mbox{F-score} per sentence length for different numbers of restarts.} \end{figure} \section{\mbox{\textsc{counter}}\ in Action} \label{sec:usingdmatch} \subsection{Semantic Parsing}\label{ssec:sempar} The first purpose of \mbox{\textsc{counter}}{} is to evaluate semantic parsers for DRSs. Since this is a new task, there are no existing systems that are able to do this. Therefore, we show the results of three baseline systems: \textsc{pmb\,pipeline}, \textsc{Spar}{}, and \textsc{amr2drs}{} (Subsection\,\ref{ssec:eval-match}).\footnote{\textsc{Spar}{} and \textsc{amr2drs}{} are available at: \url{https://github.com/RikVN/DRS\_parsing/}} The \textsc{pmb\,pipeline} produces a DRS via the pipeline of the tools used for automatic annotation of the PMB.\footnote{\url{http://pmb.let.rug.nl/software.php}} This means that it has no access to manual corrections, and hence it uses the most frequent word senses and default VerbNet roles.
\textsc{Spar}{} is a trivial semantic `parser' which always outputs the DRS that is most similar to all other DRSs in the most recent PMB release (the left-hand DRS in Figure\,\ref{fig:drs-compare}). \begin{table}[t] \centering \caption{\label{tab:baselines}Comparison of three baseline DRS parsers to the gold-standard data set.} \begin{tabular}{lrrr} \toprule & \textbf{Precision\%} & \textbf{Recall\%} & \textbf{F-score\%}\\ \midrule \textsc{Spar}{} & 53.1 & 36.6 & 43.3 \\ \textsc{amr2drs}{} & 46.5 & 48.2 & 47.3 \\ \textsc{Pmb\,Pipeline} & 53.0 & 54.8 & 53.9 \\ \bottomrule \end{tabular} \end{table} The results of the three baseline parsers are shown in Table~\ref{tab:baselines}. The surprisingly high score of \textsc{Spar}{} is explained by the fact that the first PMB release mainly contains relatively short sentences with little structural diversity. The average number of clauses per clausal form (excluding redundant REF-clauses) is 8.7, where a substantial share (approximately 3) comes from tense-related clauses. Due to this, guessing temporal clauses for short sentences has a big impact on the F-score. This is illustrated by the comparison of the clausal forms in Figure\,\ref{fig:drs-compare}, where matching only temporal clauses results in an F-score of 40\%. \textsc{amr2drs} outperforms \textsc{spar} by a considerable margin, but is still far from optimal. This is also the case for \textsc{pmb\,pipeline}, which shows that, within the PMB, manual annotation is still required to obtain gold standard meaning representations. \subsection{Comparing Translations} The second purpose of \mbox{\textsc{counter}}{} is checking whether translations are meaning-preserving. As a pilot study, we compare the gold standard meaning representations of German, Italian and Dutch translations in the release to their English counterparts. The results are shown in Table~\ref{tab:translations}.
The high F-scores indicate that the meaning representations are often syntactically very similar, if not identical. However, there is a considerable subset of meaning representations which are different from the English ones, indicating that there is at least a slight discrepancy in meaning for those translations. \begin{table}[h] \centering \caption{\label{tab:translations}Comparing meaning representations of English texts to those of German, Italian and Dutch translations.} \begin{tabular}{lrrrr} \toprule & \textbf{F-score\%} & \textbf{Docs} & \textbf{F\textless 1.0} & \textbf{\% total} \\ \midrule \textbf{German} & 98.4 & 579 & 61 & 10.5 \\ \textbf{Italian} & 97.6 & 341 & 46 & 13.5 \\ \textbf{Dutch} & 98.3 & 355 & 37 & 10.4 \\ \bottomrule \end{tabular} \end{table} Manual analysis of these discrepancies showed that there are several different causes for a discrepancy to arise. In most of the cases (38\%), a human annotation error was made. In 34\% of cases, a definite description was used in one language but not in the other. Examples are `has long hair' with the Italian translation `ha \underline{i} capelli lunghi', and `escape from prison' with the Dutch translation `vluchtte uit \underline{de} gevangenis'. In 15\% of cases proper names were translated (e.g. `United States' and `Stati Uniti'). This is not accounted for, since we do not currently make use of grounding proper names to a unique identifier, for instance by wikification \cite{cucerzan:2007}, or by using a language-independent transliteration of names. In 13\% of cases the translation was either non-literal or incorrect. Examples are `Tom lacks experience' with the Dutch translation `Tom heeft geen ervaring' (lit. `Tom has no experience'), `can't use chopsticks' with the German `kann nicht mit St\"{a}bchen essen' (lit. `cannot eat with sticks'), and `remove the dishes from the table' with the Dutch translation `ruimde de tafel af' (lit. `uncluttered the table'). 
The mapping of clausal forms involving non-literal translations is illustrated in Figure\,\ref{fig:drs-trans}. This preliminary analysis shows that this comparison of meaning representations provides an additional method for detecting mistakes in annotation. It also shows that there are cases where our semantic analysis needs to be revised and improved. \begin{figure}[t] \hspace*{-3mm} \begin{tabular}{@{}c@{\kern-4mm}c@{}} \textsmaller[1]{\sym{She~removed~the~dishes~from~the~table.}} & \textsmaller[1]{\sym{\kern5mm Ze~ruimde~de~tafel~af.}} \\[-1mm] \scalebox{1}{ \renewcommand\arraystretch{1.1} \drs[t]{$x_1$ ~$x_2$ ~$e_1$ ~$x_3$ ~$t_1$ \hspace{5mm}}{ female$\sym{.n.02}(x_1)$\\ remove$\sym{.v.01}(e_1)$\\ \hspace{3mm}$\sym{Time}(e_1, t_1)$\\ \hspace{3mm}$\sym{Source}(e_1, x_3)$\\ \hspace{3mm}$\sym{Theme}(e_1, x_2)$\\ \hspace{3mm}$\sym{Agent}(e_1, x_1)$\\ $\sym{time.n.08}(t_1)$\\ \hspace{3mm}$t_1 \prec \sym{now}$\\ dish$\sym{.n.01}(x_2)$\\ table$\sym{.n.03}(x_3)$ }} & \scalebox{1}{ \renewcommand\arraystretch{1.1} \drs[t]{$x_1$ ~$x_2$ ~$e_1$ ~$t_1$ \hspace{8mm}}{ female$\sym{.n.02}(x_1)$\\ unclutter$\sym{.v.01}(e_1)$\\ \hspace{3mm}$\sym{Time}(e_1, t_1)$\\ \hspace{3mm}$\sym{Source}(e_1, x_2)$\\ \hspace{3mm}$\sym{Agent}(e_1, x_1)$\\ $\sym{time.n.08}(t_1)$\\ \hspace{3mm}$t_1 \prec \sym{now}$\\ table$\sym{.n.03}(x_2)$ }} \\[-2mm] \textsmaller[1]{\texttt{ \begin{tabular}[t]{@{\,}l@{\,}}\toprule \strout{\matched{\drgvar{b1} REF \drgvar{x1}}}\\ \matched{\drgvar{b1} female \posn{n}{02} \drgvar{x1}}\\ \strout{\matched{\drgvar{b5} REF \drgvar{t1}}}\\ \matched{\drgvar{b5} TPR \drgvar{t1} "now"}\\ \matched{\drgvar{b5} time \posn{n}{08} \drgvar{t1}}\\ \matched{\drgvar{k0} Agent \drgvar{e1} \drgvar{x1}}\\ \strout{\matched{\drgvar{k0} REF \drgvar{e1}}}\\ \nonmatched{\drgvar{k0} Theme \drgvar{e1} \drgvar{x2}}\\ \matched{\drgvar{k0} Time \drgvar{e1} \drgvar{t1}}\\ \nonmatched{\drgvar{k0} remove \posn{v}{01} \drgvar{e1}}\\ \strout{\nonmatched{\drgvar{b2} REF \drgvar{x2}}}\\
\nonmatched{\drgvar{b2} dish \posn{n}{01} \drgvar{x2}}\\ \matched{\drgvar{k0} Source \drgvar{e1} \drgvar{x3}}\\ \strout{\matched{\drgvar{b4} REF \drgvar{x3}}}\\ \matched{\drgvar{b4} table \posn{n}{03} \drgvar{x3}}\\ \bottomrule \end{tabular} }} & \textsmaller[1]{\texttt{ \begin{tabular}[t]{@{\,}l@{\,}}\toprule \strout{\matched{\drgvar{b1} REF \drgvar{x1}}}\\ \matched{\drgvar{b1} female \posn{n}{02} \drgvar{x1}}\\ \strout{\matched{\drgvar{b4} REF \drgvar{t1}}}\\ \matched{\drgvar{b4} TPR \drgvar{t1} "now"}\\ \matched{\drgvar{b4} time \posn{n}{08} \drgvar{t1}}\\ \matched{\drgvar{k0} Agent \drgvar{e1} \drgvar{x1}}\\ \strout{\matched{\drgvar{k0} REF \drgvar{e1}}}\\ \matched{\drgvar{k0} Source \drgvar{e1} \drgvar{x2}}\\ \matched{\drgvar{k0} Time \drgvar{e1} \drgvar{t1}}\\ \nonmatched{\drgvar{k0} unclutter \posn{v}{01} \drgvar{e1}}\\ \strout{\matched{\drgvar{b2} REF \drgvar{x2}}}\\ \matched{\drgvar{b2} table \posn{n}{03} \drgvar{x2}}\\ \bottomrule \end{tabular} }} \end{tabular} \caption{English and Dutch non-literal translations of the document 14/0849. Their clausal forms match each other (excl. redundant REF-clauses) with an F-score of 77.8\%. This matching is achieved by the mapping of variables \{\ntt[0]{b5}$\mapsto$\ntt[0]{b4}, \ntt[0]{b4}$\mapsto$\ntt[0]{b2}\}.} \label{fig:drs-trans} \end{figure} \section{Conclusions and Future Work} Large semantically annotated corpora are rare. Within the Parallel Meaning Bank project, we are creating a large, open-domain corpus annotated with formal meaning representations. We take advantage of parallel corpora, enabling the production of meaning representations for several languages at the same time. Currently, these are languages similar to English, two Germanic languages (Dutch and German) and one Romance language (Italian). Ideally, future work would include more non-Germanic languages. The DRSs that we present are meaning representations with substantial expressive power. 
They deal with negation, universal quantification, modals, tense, and presupposition. As a consequence, semantic parsing for DRSs is a challenging task. Compared to Abstract Meaning Representations, the number of clauses and variables in a DRS is about two times larger on average. Moreover, compared to AMRs, DRSs rarely contain clauses with single variables. All non-logical symbols used in DRSs are grounded in WordNet and VerbNet (with a few extensions). This makes evaluation using matching computationally challenging, in particular for long sentences, but our matching system \mbox{\textsc{counter}}{} achieves a reasonable trade-off between speed and accuracy. Several extensions to the annotation scheme are possible. Currently, the DRSs for the non-English languages contain references to synsets of the English WordNet. Conceptually, there is nothing wrong with this (as synsets can be viewed as identifiers for concepts that are language-independent), but for practical reasons it makes more sense to provide links to synsets of the original language \cite{germanet,OpenDutchWordNet,ItalWordNet,MultiWordNet}. In addition, we consider implementing semantic grounding such as wikification in the Parallel Meaning Bank. As for other future work, we plan to include a more fine-grained matching regarding WordNet synsets, since the current evaluation of concepts is purely string-based, with only identical strings resulting in a matching clause. For many synsets, however, it is possible to refer to them with more than one word\sym{.POS.SenseNum} triple, and this should be accounted for (e.g. fox\sym{.n.02} and dodger\sym{.n.01} both refer to the same synset). In a similar vein, we plan to experiment with including WordNet concept similarity techniques in \mbox{\textsc{counter}}{} to compute semantic distances between synsets, in case they do not fully match. Finally, we would like to stimulate research on semantic parsing with scoped meaning representations. 
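A first step toward such synset-aware matching could replace the pure string test on concept triples by a lookup into a synset table. A minimal illustration follows (the table below is a hand-built toy stand-in for a real WordNet index, and the synset identifier is hypothetical):

```python
# Toy stand-in for a WordNet index: each (hypothetical) synset id lists the
# word.POS.SenseNum triples that can refer to it.
TOY_SYNSETS = {
    "shifty_person_synset": ["fox.n.02", "dodger.n.01"],  # same synset, two lemmas
}
TRIPLE_TO_SYNSET = {t: sid for sid, triples in TOY_SYNSETS.items() for t in triples}

def concepts_match(a, b):
    """Match concept triples by string identity, falling back to synset identity."""
    if a == b:
        return True
    sa, sb = TRIPLE_TO_SYNSET.get(a), TRIPLE_TO_SYNSET.get(b)
    return sa is not None and sa == sb
```

A graded variant could return a WordNet similarity score instead of a boolean, which is the direction suggested above for giving partial credit to non-identical synsets.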
Not only are we planning to extend the coverage of phenomena and the number of texts with gold-standard meaning representations for the four languages, we also aim to organize a shared task on DRS parsing for English, German, Dutch and Italian in the near future. \section{Acknowledgements} This work was funded by the NWO-VICI grant ``Lost in Translation -- Found in Meaning'' (288-89-003). We used a Tesla K40 GPU, which was kindly donated to us by the NVIDIA Corporation. We also want to thank the three anonymous reviewers for their comments. \section{Bibliographical References} \label{main:ref} \bibliographystyle{lrec}
\section{Introduction} \subsection{Presentation of the problem} In this paper, we consider the incompressible inviscid flow in 3-D elastodynamics: \begin{equation}\label{elso} \left\{ \begin{array}{l} \pa_t \rho + \vu\cdot\nabla\rho=0,\\ \pa_t (\rho\vu) + \vu\cdot\nabla(\rho\vu)+ \nabla p= \mbox{div}(\rho\vF\vF^\top), \\ \pa_t \vF + \vu\cdot\nabla \vF = \nabla\vu \vF,\\ \mbox{div} \vu = 0, \\ \end{array} \right. \end{equation} where $\rho$ is the density of the fluid, $\vu(t,x)=(u_1,u_2,u_3)$ denotes the fluid velocity, $p(t,x)$ is the pressure, $\vF(t,x)=(F_{ij})_{3\times 3}$ is the deformation tensor, $\vF^\top=(F_{ji})_{3\times 3}$ denotes the transpose of the matrix $\vF$, $\vF\vF^\top$ is the Cauchy-Green tensor in the case of neo-Hookean elastic materials, $(\nabla\vu)_{ij}=\pa_ju_i$, $(\nabla\vu \vF)_{ij}=\sum^3_{k=1}F_{kj}\partial_k u_i$, $(\mbox{div} \vF^\top)_i=\sum^3_{j=1}\pa_jF_{ji}$, $(\mbox{div}\vF\vF^\top)_i=\sum^3_{j,k=1}\pa_j(F_{ik}F_{jk})$. We study solutions of (\ref{elso}) which are smooth on each side of a smooth interface $\Gamma(t)$ in a domain $\Omega$. Precisely, for simplicity, we let \begin{align*} & \Om=\mathbb{T}^2\times[-1,1]\subset \mathbb{R}^3,\quad \Gamma(t)=\{x\in\Om|x_3=f(t,x'),x'=(x_1,x_2)\in\mathbb{T}^2\},\\ & \Om_t^\pm=\{x\in\Om|x_3\gtrless f(t,x'),x'=(x_1,x_2)\in\mathbb{T}^2\},\qquad Q_T^\pm=\underset{t\in(0,T)}{\bigcup}\{t\}\times\Om^\pm_t.
\end{align*} We consider that $\rho_{\Om_t^\pm}=\rho^\pm$ are two constants, and \begin{align*} \vu^\pm:=\vu|_{\Om^\pm_t},\qquad \vF^\pm:=\vF|_{\Om^\pm_t},\qquad p^\pm:=p|_{\Om^\pm_t}, \end{align*} are smooth in $Q_T^\pm$ and satisfy \begin{equation}\label{els} \left\{ \begin{array}{ll} \rho^\pm\big(\pa_t \vupm + \vupm\cdot\nabla \vupm\big)+ \nabla p^{\pm} = \rho^\pm\sum\limits^3_{j=1}(\vF_j^\pm\cdot\nabla) \vF_j^\pm &\text{ in }\quad Q^{\pm}_T, \\ \mbox{div} \vupm = 0,\,\mbox{div} \vF^{\pm\top} = 0&\text{ in }\quad Q^{\pm}_T, \\ \pa_t \vF_j^\pm + \vu^\pm\cdot\nabla \vF_j^\pm =(\vF_j^\pm\cdot\nabla)\vu^\pm&\text{ in }\quad Q^{\pm}_T, \end{array} \right. \end{equation} with the boundary conditions on the moving interface $\Gamma_t$: \begin{align}\label{elsp} [p]\buildrel\hbox{\footnotesize def}\over = p^+-p^-=0,\quad \vupm\cdot\vn =V(t,x), \quad \vF_j^\pm\cdot\vn = 0. \end{align} Here $\vF^\pm_j=(F^\pm_{1j},F^\pm_{2j},F^\pm_{3j})$, $\vn$ is the outward unit normal to $\pa\Om^-_t$ and $V(t,x)$ is the normal velocity of $\Gamma_t$. On the artificial boundary $\Gamma^\pm=\mathbb{T}^2\times\{\pm1\}$, we impose the following boundary conditions on ($\vu^\pm$, $\vF^\pm$): \begin{align}\label{vorsheet:top-bot-bc} u_3^\pm=0,\qquad F_{3j}^\pm=0\qquad \text{on}~\Gamma^\pm. \end{align} The system (\ref{els}) is supplemented with the initial data \begin{align}\label{elsi} \vupm(0,x)=u _0^\pm(x),\qquad \vF^\pm(0,x)=\vF_0^\pm\qquad \text{in}~\Om_0^\pm, \end{align} where the initial data satisfies \begin{equation}\label{elsii} \left\{ \begin{array}{ll} \mbox{div}\vu^\pm_0=0,~\mbox{div}\vF^\pm_{j,0}=0&\text{in}~\Om_0^\pm,\\ \vu^+_0\cdot\vn_0=\vu^-_0\cdot\vn_0,~\vF^\pm_{j,0}\cdot\vn_0=0&\text{on}~\Gamma_0. \end{array} \right. \end{equation} The system (\ref{els})-(\ref{elsi}) is called the vortex sheet problem for incompressible elastodynamics. 
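For the reader's convenience, we note that the elastic source term in (\ref{els}) follows from the one in (\ref{elso}) by using the constraint $\mbox{div} \vF^{\pm\top}=0$ together with the constancy of $\rho^\pm$ on each side; this is a one-line computation (not spelled out above):

```latex
\begin{align*}
\big(\mbox{div}(\vF\vF^\top)\big)_i=\sum^3_{j,k=1}\pa_k(F_{ij}F_{kj})
=\sum^3_{j=1}\sum^3_{k=1}F_{kj}\pa_kF_{ij}+\sum^3_{j=1}F_{ij}\sum^3_{k=1}\pa_kF_{kj}
=\sum^3_{j=1}\big((\vF_j\cdot\nabla)\vF_j\big)_i,
\end{align*}
since $\sum^3_{k=1}\pa_kF_{kj}=(\mbox{div}\vF^\top)_j=0$.
```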
One of the main goals in this paper is to study the local well-posedness of this system under some suitable stability conditions imposed on the initial data. In our setting, the boundary condition on $\Gamma_t$ in (\ref{els}) is transformed into \begin{align*} [p]=0,\qquad \vu^\pm\cdot \vN=\pa_tf,\qquad \vF_j^\pm\cdot \vN=0\qquad \text{on}~\Gamma_t, \end{align*} where $\vN=(-\pa_1f,-\pa_2f,1)$ and $\vn= {\vN}/{|\vN|}.$ Let us remark that the divergence-free restriction on $\vF_j^\pm$ is automatically satisfied if $\mbox{div}\vF^\pm_{j,0}=0$. Indeed, if we apply the divergence operator to the third equation of (\ref{els}), we will deduce the following transport equation \begin{align*} \pa_t\mbox{div}\vF^\pm_j + \vupm\cdot\nabla\mbox{div}\vF^\pm_j=0. \end{align*} A similar argument can also be applied to yield that $\vF^\pm_j\cdot\vN = 0$ if $\vF^\pm_{j,0}\cdot\vN_0 = 0$. For the special case $\rho^+=0$, the problem reduces to another type of free boundary problem for ideal incompressible elastodynamics, that is, \begin{equation}\label{elsf} \left\{ \begin{array}{ll} \pa_t \vu+ \vu\cdot\nabla \vu+ \nabla p= \sum\limits^3_{j=1}\vF_j\cdot\nabla \vF_j&\text{in}\qquad \Om_t^-, \\ \mbox{div} \vu = 0,\quad\mbox{div} \vF^\top=0&\text{in}\qquad \Om_t^-,\\ \pa_t \vF_j + \vu\cdot\nabla \vF_j = \vF_j\cdot\nabla\vu&\text{in}\qquad \Om_t^-, \end{array} \right. \end{equation} where $\vF_j=(F_{1j},F_{2j},F_{3j})$ and $\Om_t^-$ and $\Gamma_t$ are defined as above. On the free boundary $\Gamma_t$, the boundary conditions are given by \begin{align}\label{freebp:interface-bc} p=0,\qquad \vu\cdot \vN=\pa_tf, \qquad \vN\cdot\vF_j=0\qquad \text{on}\quad \Gamma_t, \end{align} while on the bottom boundary $\Gamma^-$, it holds that \begin{align}\label{freebp:bot-bc} u_3=0,\qquad F_{3j}=0\qquad \text{on}~\Gamma^-. \end{align} This system is supplemented with the initial data \begin{align}\label{elsfi} \vu(0,x)=u_0(x),\qquad \vF(0,x)=\vF_0\qquad \text{in}~\Om_0^-.
\end{align} \subsection{Background} For incompressible inviscid flow, the Kelvin-Helmholtz instability has been known for over a century \cite{Maj}. It is well known that surface tension can stabilize the Kelvin-Helmholtz and Rayleigh-Taylor instabilities \cite{AM, CCS, SZ2}. Syrovatskij \cite{Sy} and Axford \cite{Ax} found that the magnetic field has a stabilization effect on the Kelvin-Helmholtz instability. Recently, many important works have been devoted to confirming this stabilizing mechanism. For the current-vortex sheet problem, we refer to \cite{Tra1, Tra2, Chen, WY} for the compressible case and \cite{MTT1, Tra-in, CMST, SWZ1} for the incompressible case. For the plasma-vacuum problem, we refer to \cite{Tra-JDE, ST} for the compressible case and \cite{MTT2, SWZ2} for the incompressible case. We also refer to some related works \cite{HL, Hao, GW} on the incompressible plasma-vacuum problem. For inviscid elastodynamics, there has been notable recent progress on free boundary problems. For the 2-D compressible vortex sheet problem in elastodynamics, Chen-Hu-Wang \cite{CHW} analyzed the linearized stability and proved the stabilization effect of elasticity on vortex sheets. In \cite{Tra3}, Trakhinin proved the well-posedness of the one-fluid free boundary problem in compressible elastodynamics under the condition that there are two columns of the $3\times 3$ deformation tensor which are non-collinear at each point of the initial surface. For the incompressible case, Hao-Wang \cite{HW} proved a priori estimates for solutions in Sobolev spaces under the Rayleigh-Taylor sign condition. The aim of this paper is to show the local well-posedness for both free boundary problems in incompressible elastodynamics under a natural stability condition by using the method developed in \cite{SWZ1}. The basic idea is to derive an evolution equation describing the free boundary so that it is strictly hyperbolic under a suitable stability condition.
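For completeness, we record the component computation behind the transport equation for $\mbox{div}\vF^\pm_j$ stated earlier. Applying $\sum^3_{i=1}\pa_i$ to the equation $\pa_t F^\pm_{ij}+\sum^3_{k=1}u^\pm_k\pa_kF^\pm_{ij}=\sum^3_{k=1}F^\pm_{kj}\pa_ku^\pm_i$ gives

```latex
\begin{align*}
\pa_t\Big(\sum^3_{i=1}\pa_iF^\pm_{ij}\Big)
+\sum^3_{i,k=1}\Big(u^\pm_k\pa_k\pa_iF^\pm_{ij}+\pa_iu^\pm_k\,\pa_kF^\pm_{ij}\Big)
=\sum^3_{i,k=1}\Big(\pa_iF^\pm_{kj}\,\pa_ku^\pm_i+F^\pm_{kj}\,\pa_k\pa_iu^\pm_i\Big).
\end{align*}
```

Exchanging the labels $i$ and $k$ shows that the last term on the left cancels the first term on the right, while the final term vanishes because $\mbox{div}\vu^\pm=0$; this yields $\pa_t\mbox{div}\vF^\pm_j+\vu^\pm\cdot\nabla\mbox{div}\vF^\pm_j=0$.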
This idea has been very effective in studying free boundary problems for the incompressible Euler equations \cite{Wu1, Wu2, ZZ, SZ1}. \subsection{Main results} To ensure the stability of the system (\ref{els})-(\ref{elsi}) and the system (\ref{elsf})-(\ref{elsfi}), certain stability conditions are required. In this paper, we assume the following stability condition for (\ref{els})-(\ref{elsi}): \begin{align}\label{condition:s1} \inf_{\Gamma_t} \,\,\,\inf_{\xi\in\mathbb{S}^2,\,\xi\cdot\vN=0} \Big\{(\rho^++\rho^-)\Big[\rho^+(\xi\cdot{\vF}^+)^2+\rho^-(\xi\cdot{\vF}^-)^2\Big]-\rho^+\rho^-(\xi\cdot[\vu])^2\Big\}>0, \end{align} where $\xi\cdot\vF=(\sum^3_{i=1}\xi_iF_{ij})_{1\le j\le 3}$ and $[\vu]=\frac{1}{2}(\vu^+-\vu^-)$ on $\Gamma_t$. Condition (\ref{condition:s1}) is equivalent to \begin{align}\label{condition:s2} \Lambda(\vFpm,\vv)\buildrel\hbox{\footnotesize def}\over =&\inf_{x\in\Gamma_t}\inf_{\ph_1^2+\ph_2^2=1}\sum^3_{j=1}\Big(\frac{\rho^+}{\r^++\r^-}(F^+_{1j}\ph_1+F^+_{2j}\ph_2)^2 +\frac{\rho^-}{\r^++\r^-}(F^-_{1j}\ph_1+F^-_{2j}\ph_2)^2\Big)\\ \nonumber &-(v_1\ph_1+v_2\ph_2)^2>0, \end{align} where $(v_1,v_2,v_3)=\frac{\sqrt{\r^+\r^-}}{\r^++\r^-}[\vu]$. Our first main result is stated as follows. \begin{theorem}\label{thm:1} Let $s\ge3$ be an integer and assume that \begin{align*} f_0\in H^{s+ \frac{1}{2}}(\mathbb{T}^2),\quad \vu^\pm_0,\,\vF^\pm_0\in H^s(\Om_0^\pm). \end{align*} Furthermore, we assume that there exists $c_0>0$ so that \begin{itemize} \item[1.] $-(1-2c_0)\le f_0\le (1-2c_0)$; \item[2.] $\Lambda(\vF_0^\pm,\vv_0)\ge 2c_0$. \end{itemize} Then there exists $T>0$ such that the system (\ref{els}) admits a unique solution $(f, \vu, \vF)$ in $[0,T]$ satisfying \begin{itemize} \item[1.] $f\in L^\infty([0,T), H^{s+\frac{1}{2}}(\mathbb{T}^2))$; \item[2.] $\vu^\pm,\,\vF^\pm\in L^\infty\big(0,T;H^{s}(\Omega^\pm_t)\big)$; \item[3.] $-(1-c_0)\le f\le (1-c_0)$; \item[4.] $\Lambda(\vF^\pm,\vv)\ge c_0$.
\end{itemize} \end{theorem} When $\rho^+=0$, the stability condition (\ref{condition:s1}) reduces to $(\xi\cdot{\vF}^-)^2>0$ on $\Gamma_t$ for any $\xi\in\mathbb{S}^2$ with $\xi\cdot\vN=0$, which is equivalent to $\text{rank}(\vF)=2.$ Therefore, as a corollary of Theorem \ref{thm:1}, we have the following result which concerns the well-posedness for the system (\ref{elsf})-(\ref{elsfi}). \begin{theorem}\label{thm:2} Let $s\ge3$ be an integer and assume that \begin{align*} f_0\in H^{s+ \frac{1}{2}}(\mathbb{T}^2),\quad \vu_0,\,\vF_0\in H^s(\Om_0^-). \end{align*} Furthermore, we assume that there exists $c_0>0$ so that \begin{itemize} \item[1.] $-(1-2c_0)\le f_0\le (1-2c_0)$; \item[2.] $\mathrm{rank}(\vF_0)=2$ on $\Gamma_0$. \end{itemize} Then there exists $T>0$ such that the system (\ref{elsf}) admits a unique solution $(f, \vu, \vF)$ in $[0,T]$ satisfying \begin{itemize} \item[1.] $f\in L^\infty([0,T), H^{s+\frac{1}{2}}(\mathbb{T}^2))$; \item[2.] $\vu,\,\vF\in L^\infty\big(0,T;H^{s}(\Omega^-_t)\big)$; \item[3.] $-(1-c_0)\le f\le (1-c_0)$; \item[4.] $\mathrm{rank}(\vF)=2$ on $\Gamma_t$. \end{itemize} \end{theorem} \begin{remark} We remark that the assumption $\mathrm{rank}(\vF_0)=2$ on $\Gamma_0$ is weaker than the assumption proposed by Trakhinin \cite{Tra3}, which says that among the three vectors $\vF_1$, $\vF_2$ and $\vF_3$ there are two which are non-collinear at each point of $\Gamma_0$. There is another type of stability condition which would also ensure the existence of solutions to the system (\ref{elsf})-(\ref{elsfi}): \begin{align} -\vN\cdot\nabla p>0, \quad \text{ on }\Gamma_t, \end{align} see \cite{HW} for a priori estimate results. One would also be interested in studying the well-posedness under the following mixed type of stability condition: \begin{align} \{ x\in \Gamma_t: {\rm{rank}}(\vF(x))=2 \}\cup \{ x\in \Gamma_t: -\vN\cdot\nabla p>0 \} =\Gamma_t. \end{align} These cases can also be handled in this framework, which we leave to a forthcoming work.
\end{remark} \begin{remark} Our method can also be applied to the 2-D case, which in particular means that elasticity has a stabilization effect on the Rayleigh-Taylor instability. Indeed, for the 2-D case, we have that $\vF^\pm$ are $2\times 2$ matrices, and $\vF_j^\pm\cdot\vN=0, [\vu]\cdot\vN=0$, which implies that $\vF_j^\pm$ and $[\vu]$ are collinear with each other. Therefore, the stability condition (\ref{condition:s1}) for (\ref{els})-(\ref{elsi}) reduces to \begin{align} (\rho^++\rho^-)\Big[\rho^+|{\vF}^+|^2+\rho^-|{\vF}^-|^2\Big]-\rho^+\rho^-|[\vu]|^2>0, \end{align} and for the system (\ref{elsf})-(\ref{elsfi}), the stability condition $\text{rank}(\vF)=2$ reduces to \begin{align} |\vF|>0. \end{align} The solutions can be constructed in a similar way as in Theorems \ref{thm:1} and \ref{thm:2}. \end{remark} The rest of this paper is organized as follows. In Section 2, we will introduce the reference domain, the harmonic coordinate, and the Dirichlet-Neumann operator. In Section 3, we reformulate the system into a new formulation. In Section 4, we present the uniform estimates for the linearized system. In Section 5, we prove the existence and uniqueness of the solution. In Section 6, we present a sketch of the proof of Theorem \ref{thm:2}. \section{Reference domain, harmonic coordinate and Dirichlet-Neumann operator} For free boundary problems, as the domain of the fluid changes with time $t$, we always pull the moving domain back to a fixed domain, which is called the reference domain \cite{SWZ1}. Let $\Gamma_*$ be a fixed graph given by \begin{align*} \Gamma_*=\{(y_1,y_2,y_3):y_3=f_*(y_1,y_2)\}. \end{align*} The reference domain $\Om^\pm_*$ is given by \begin{align*} \Om_*=\mathbb{T}^2\times(-1,1),\quad \Om^\pm_*=\{y\in\Om_*|y_3\gtrless f_*(y_1,y_2)\}. \end{align*} We will look for the free boundary which lies in a neighborhood of the reference domain.
As a result, we define \begin{align*} \Upsilon(\delta,k)&\buildrel\hbox{\footnotesize def}\over =\Big\{f\in H^k(\mathbb{T}^2): \|f-f_*\|_{H^k(\mathbb{T}^2)}\le \delta \Big\}. \end{align*} For $f\in \Upsilon(\delta,k)$, we can define the graph $\Gamma_f$ by \begin{align*} \Gamma_f\buildrel\hbox{\footnotesize def}\over =\left\{x\in \Om| x_3=f(t,x'), \int_{\mathbb{T}^2}f(t,x')dx'=0 \right\}. \end{align*} The graph $\Gamma_f$ separates $\Om$ into two parts: \begin{align*} \Om_f^{+}=\Big\{ x \in \Om| x_3 > f(t,x')\Big\}, \quad \Om_f^{-}=\Big\{ x \in \Om| x_3 < f(t,x')\Big\}. \end{align*} Let $\vN_f=(N_1,N_2,N_3)$ be the outward normal vector of $\Om_f^-$ where \begin{align*} \vN_f\triangleq(-\partial_1f, -\partial_2f, 1),\quad \vn_f\triangleq\vN_f/\sqrt{1+|\nabla f|^2}. \end{align*} Then we need to find the pull-back maps. For this purpose, we introduce the harmonic coordinate. Given $f\in \Upsilon(\delta,k)$, we define a map $\Phi_f^\pm:\Omega_*^\pm\to\Omega_f^\pm$ by harmonic extension: \begin{equation} \left\{ \begin{array}{ll} \Delta_y \Phi_f^\pm=0, &y\in \Omega_*^\pm,\\ \Phi_f^\pm(y',f_*(y'))=(y',f(y')), &y'\in\mathbb{T}^2,\\ \Phi_f^\pm(y',\pm1)=(y',\pm1), &y'\in\mathbb{T}^2.\\ \end{array} \right. \end{equation} Given $\Gamma_*$, there exists $\delta_0=\delta_0(\|f_*\|_{W^{1,\infty}})>0$ so that $\Phi_f^\pm$ is a bijection when $\delta\le \delta_0$. Then we can define an inverse map $\Phi_f^{\pm-1}:\Omega_f^\pm\to\Omega_*^\pm$ such that \begin{equation}\nonumber \Phi_f^{\pm-1}\circ\Phi_f^\pm=\Phi_f^\pm\circ\Phi_f^{\pm-1}=\mathrm{Id}. \end{equation} The following properties come from \cite{SWZ1}. \begin{lemma}\label{lem:basic} Let $f\in \Upsilon(\delta_0,s-\f12)$ for $s\ge 3$. Then there exists a constant $C$ depending only on $\delta_0$ and $\|f_*\|_{H^{s-\f12}}$ so that \begin{itemize} \item[1.] If $u\in H^{\sigma}(\Om_f^\pm)$ for $\sigma\in [0,s]$, then \begin{align*} \|u\circ\Phi_f^\pm\|_{H^\sigma(\Om^\pm_*)}\le C\|u\|_{H^\sigma(\Om_f^\pm)}.
\end{align*} \item[2.] If $u\in H^{\sigma}(\Om_*^\pm)$ for $\sigma\in [0,s]$, then \begin{align*} \|u\circ\Phi_f^{\pm-1}\|_{H^{\sigma}(\Om_f^\pm)}\le C\|u\|_{H^\sigma(\Om_*^\pm)}. \end{align*} \item[3.] If $u, v\in H^{\sigma}(\Om_f^\pm)$ for $\sigma\in [2,s]$, then \begin{align*} \|uv\|_{H^\sigma(\Omega_f^\pm)}\le C\|u\|_{H^\sigma(\Omega_f^\pm)}\|v\|_{H^\sigma(\Omega_f^\pm)}. \end{align*} \end{itemize} \end{lemma} We will use the Dirichlet-Neumann operator, which maps the Dirichlet boundary value of a harmonic function to its Neumann boundary value. That is to say, for any $g(x')=g(x_1,x_2)\in H^k(\mathbb{T}^2)$, we denote by $\mathcal{H}_f^\pm g$ the harmonic extension to $\Omega^\pm_f$: \begin{equation} \left\{ \begin{array}{ll} \Delta \mathcal{H}_f^\pm g =0,& x\in \Omega_f^\pm,\\ (\mathcal{H}_f^\pm g)(x',f(x'))=g(x'),& x'\in\mathbb{T}^2,\\ \partial_3\mathcal{H}_f^\pm g(x',\pm1)=0,& x'\in\mathbb{T}^2. \end{array} \right. \end{equation} Then the Dirichlet-Neumann operator is defined by \begin{align} \mathcal{N}^\pm_fg\overset{def}{=}\mp\vN_f\cdot(\nabla\mathcal{H}^\pm_fg)\big|_{\Gamma_f}. \end{align} We will use the following properties from \cite{ABZ, SWZ1}. \begin{lemma}\label{lem:DN} It holds that \begin{itemize} \item[1.] $\mathcal{N}^\pm_f$ is a self-adjoint operator: \begin{align*} (\mathcal{N}^\pm_f\psi,\phi)=(\psi,\mathcal{N}^\pm_f\phi),\quad\forall \phi,\, \psi\in H^\frac{1}{2}(\mathbb{T}^2); \end{align*} \item[2.] $\mathcal{N}^\pm_f$ is a positive operator: \begin{align*} (\mathcal{N}^\pm_f\phi,\phi)=\|\na\mathcal{H}_f^\pm\phi\|_{L^2(\Omega_f^\pm)}^2\ge 0,\quad \forall \phi\in H^\frac{1}{2}(\mathbb{T}^2); \end{align*} In particular, if $\int_{\mathbb{T}^2}\phi(x')dx'=0$, there exists $c>0$ depending on $\|f\|_{W^{1,\infty}}$ such that \begin{align*} (\mathcal{N}^\pm_f\phi,\phi)\ge c\|\mathcal{H}_f^\pm\phi\|_{H^1(\Omega_f^\pm)}^2\ge c\|\phi\|_{H^\frac{1}{2}}^2. \end{align*} \item[3.]
$\mathcal{N}^\pm_f$ is a bijection from $H^{k+1}_0(\mathbb{T}^2)$ to $H^{k}_0(\mathbb{T}^2)$ for $k\ge 0$, where \begin{align*} H^{k}_0(\mathbb{T}^2)\buildrel\hbox{\footnotesize def}\over = H^k(\mathbb{T}^2)\cap\{\phi\in L^2(\mathbb{T}^2):\int_{\mathbb{T}^2}\phi(x')dx'=0\}. \end{align*} \end{itemize} \end{lemma} We will use $x=(x_1,x_2,x_3)$ or $y=(y_1,y_2,y_3)$ to denote the coordinates in the fluid region, and use $x'=(x_1,x_2)$ or $y'=(y_1,y_2)$ to denote the natural coordinates on the interface or on the top/bottom boundary. In addition, we will use the Einstein summation convention, where a summation from 1 to 2 is implied over repeated indices, while a summation from 1 to 3 over repeated indices will be indicated explicitly by the symbol $\sum$ (i.e., $a_ib_i=a_1b_1+a_2b_2, \sum_{i=1}^3a_ib_i=a_1b_1+a_2b_2+a_3b_3$). For a function $g:\Om\to\mathbb R$, we denote $\nabla g=(\pa_1g,\pa_2g,\pa_3g)$, and for a function $\eta:\mathbb{T}^2\to \mathbb R$, $\nabla\eta=(\pa_1\eta,\pa_2\eta)$. For a function $g:\Om^\pm_f\to\mathbb R$, we can define its trace on $\Gamma_f$, which is denoted by $\underline g(x')$. Thus, for $i=1,2$, \begin{align*} \pa_i\underline g(x')=\pa_i g(x',f(x'))+\pa_3g(x',f(x'))\pa_if(x'). \end{align*} We denote by $||\cdot||_{H^s(\Om)}$ the Sobolev norm in $\Om$, and by $||\cdot||_{H^s}$ the Sobolev norm in $\mathbb{T}^2$. \section{Reformulation of the problem} In this section, we derive a new system which is equivalent to the original system (\ref{els})-(\ref{elsi}).
The system consists of the evolution equations of the following quantities: \begin{itemize} \item The height function of the interface: $f$; \item The scaled normal velocity on the interface: $\theta=\vu^\pm\cdot\vN_f$; \item The curl parts of the velocity and deformation tensor in the fluid region: $\vom^\pm=\nabla\times\vu^\pm$, $\vG_j^\pm=\nabla\times\vF_j^\pm$; \item The averages of the tangential parts of the velocity and deformation tensor on the fixed top and bottom boundaries: \begin{align*} \beta_i^\pm(t)=\int_{\mathbb{T}^2}u_i^\pm(t,x',\pm1)dx',\quad \gamma^\pm_{ij}(t)=\int_{\mathbb{T}^2}F_{ij}^\pm(t,x',\pm1)dx'\,\,(i=1,2; j=1,2,3). \end{align*} \end{itemize} \subsection{Evolution of the scaled normal velocity} Let \begin{align} \theta(t,x')\buildrel\hbox{\footnotesize def}\over =\vu^\pm(t,x',f(t,x'))\cdot\vN_f(t,x'). \end{align} Then we have \begin{align}\label{eq:form:f} \partial_tf(t,x')=\theta(t,x'). \end{align} First of all, one can easily obtain the following elementary lemma, which is useful in the derivation of the evolution of $\theta$. \begin{lemma}\cite{SWZ1}\label{rel-uh} For $\vu=\vupm,\vF^\pm_j$, we have \begin{align}\nonumber &({\vu}\cdot{\nabla\vu})\cdot\vN_f-{\partial_3u}_jN_j({\vu}\cdot\vN_f)\big|_{x_3=f(t,x')}\nonumber\\ &=\underline{u}_1\partial_1(\underline{u}_jN_j)+\underline{u}_2\partial_2(\underline{u}_jN_j) +\sum_{i,j=1,2}\underline{u}_i\underline{u}_j\partial_i\partial_jf.
\end{align} \end{lemma} Combining the first equation of (\ref{els}) and Lemma \ref{rel-uh}(recall $\vF^\pm_j\cdot\vN_f=0$ on $\Gamma_t$), one can obtain \begin{align}\nonumber \partial_t\theta=&(\partial_t\vu^++\partial_3\vu^+\partial_tf)\cdot\vN_f+\vu^+\cdot\partial_t\vN_f\big|_{x_3=f(t,x')}\\ =&(-\vup\cdot\nabla\vup+\sum^3_{j=1}(\vF_j\cdot\nabla)\vF_j-\nabla p^++\partial_3\vu^+\partial_tf)\cdot\vN_f\nonumber\\ &- \vu^+\cdot(\partial_1\partial_tf,\partial_1\partial_tf,0)\big|_{x_3=f(t,x')}\nonumber\\\nonumber =&\big(-(\vup\cdot\nabla)\vup+\partial_3\vu^+(\vup\cdot\vN_f)\big)\cdot\vN_f+\sum^3_{j=1}(\vF_j\cdot\nabla)\vF_j\cdot\vN_f\\ &-\vN_f\cdot\nabla p^+-\vu^+\cdot(\partial_1\theta, \partial_2\theta,0)\big|_{x_3=f(t,x')}\nonumber\\ =&-2(\underline{u}^+_1\partial_1\theta+\underline{u}^+_2\partial_2\theta)-\frac{1}{\r^+}\vN\cdot\underline{\nabla p}^+-\sum^2_{s,r=1} \underline{u}_s^+\underline{u}^+_r\partial_s\partial_r +\sum^3_{j=1}\sum^2_{s,r=1}\underline{F}^+_{sj}\underline{F}^+_{rj}\partial_s\partial_rf,\label{eq:theta-d} \end{align} and similarly, \begin{align}\label{eq:theta-d-2} \partial_t\theta =&-2(\underline{u}^-_1\partial_1\theta+\underline{u}^-_2\partial_2\theta)-\frac{1}{\r^-}\vN\cdot\underline{\nabla p}^--\sum^2_{s,r=1} \underline{u}_s^-\underline{u}^-_r\partial_s\partial_r +\sum^3_{j=1}\sum^2_{s,r=1}\underline{F}^-_{sj}\underline{F}^-_{rj}\partial_s\partial_rf.\nonumber \end{align} Taking the divergence to the first equation of (\ref{els}), we get \begin{align} \Delta p^\pm=\rho^\pm\big(\sum\limits^3_{j=1}\mathrm{tr}(\nabla\vF^\pm_j)^2-\mathrm{tr}(\nabla\vu^\pm)^2\big). \end{align} Recall that $\underline{p}^\pm=p^\pm|_{\Gamma_f}$ and $\mathcal{H}^\pm_f$ is the harmonic extension from $\Gamma_f$ to $\Omega^\pm_f$. 
Then for the pressure $p^\pm$, we have the following important representation: \begin{align} p^\pm=\mathcal{H}_f^\pm\underline{p}^\pm+\rho^\pm p_{\vupm, \vupm}-\rho^\pm\sum^3_{j=1}p_{\vF^\pm_j, \vF^\pm_j}, \end{align} where $p_{\vu_1^\pm, \vu_2^\pm}$ is the solution of the elliptic equation \begin{equation}\label{eqp} \left\{ \begin{array}{ll} \Delta p_{\vu_1^\pm, \vu_2^\pm}= -\mathrm{tr}(\nabla\vu_1^\pm\nabla\vu_2^\pm) &\text{in}\quad\Omega^\pm_f,\\ p_{\vu_1^\pm, \vu_2^\pm}=0&\text{on}\quad\Gamma_f,\\ \ve_3\cdot\nabla p_{\vu_1^\pm, \vu_2^\pm}=0&\text{on}\quad\Gamma^\pm. \end{array}\right. \end{equation} Thus, from (\ref{eq:theta-d}) and (\ref{eq:theta-d-2}), we have on $\Gamma_f$ that \begin{align*} &\frac{1}{\r^+}\vN_f\cdot\nabla \mathcal{H}^+_f\underline{p}^+ -\frac{1}{\r^-}\vN_f\cdot\nabla \mathcal{H}^-_f\underline{p}^-\\ &=-\Big[2(\underline{u}^+_1\partial_1\theta+\underline{u}^+_2\partial_2\theta)+\vN_f\cdot\underline{\nabla(p_{\vup, \vup}-\sum^3_{j=1}p_{\vF^+_j, \vF^+_j})} +\sum^2_{s,r=1}(\underline{u}_s^+\underline{u}^+_r-\sum^3_{j=1}\underline{F}^+_{sj}\underline{F}^+_{rj})\partial_s\partial_rf\Big]\\ &\quad+\Big[2(\underline{u}^-_1\partial_1\theta+\underline{u}^-_2\partial_2\theta)+\vN_f\cdot\underline{\nabla(p_{\vum, \vum}-\sum^3_{j=1}p_{\vF^-_j, \vF^-_j})} +\sum^2_{s,r=1}(\underline{u}_s^-\underline{u}^-_r-\sum^3_{j=1}\underline{F}^-_{sj}\underline{F}^-_{rj})\partial_s\partial_rf\Big]\\ &\triangleq -g^++g^-. \end{align*} From the definition of the DN operator, one has \begin{align} -\frac{1}{\r^+}\mathcal{N}^+_f\underline{p}^+-\frac{1}{\r^-}\mathcal{N}^-_f\underline{p}^-=-g^++g^-. \end{align} As $\underline{p}^+-\underline{p}^-=0$ on $\Gamma_f$, we have \begin{align*} \underline{p}^\pm=\widetilde{\mathcal{N}}^{-1}_f(g^+-g^-), \end{align*} where \begin{align*} \widetilde{\mathcal{N}}_f=\frac{1}{\r^+}{\mathcal{N}}_f^++\frac{1}{\r^-}{\mathcal{N}}_f^-.
\end{align*} In addition, we can write \begin{align*} {\mathcal{N}}_f^+=&(\frac{1}{\r^+}+\frac{1}{\r^-})^{-1}(\widetilde{\mathcal{N}}_f+\frac{1}{\r^-}({\mathcal{N}}_f^+-{\mathcal{N}}_f^-)),\\ {\mathcal{N}}_f^-=&(\frac{1}{\r^+}+\frac{1}{\r^-})^{-1}(\widetilde{\mathcal{N}}_f-\frac{1}{\r^+}({\mathcal{N}}_f^+-{\mathcal{N}}_f^-)), \end{align*} which implies \begin{align*} \frac{1}{\r^+}\mathcal{N}^+_f\widetilde{\mathcal{N}}^{-1}_f g^-+\frac{1}{\r^-}\mathcal{N}^-_f\widetilde{\mathcal{N}}^{-1}_fg^+ =\frac{\r^+g^++\r^-g^-}{\r^++\r^-}-\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f(g^+-g^-). \end{align*} Consequently, we can obtain \begin{align} \partial_t\theta\nonumber =&\,\frac{1}{\r^+}\mathcal{N}^+_f\underline{p}^+-g^+=\frac{1}{\r^+}\mathcal{N}^+_f\widetilde{\mathcal{N}}^{-1}_f(g^+-g^-)-g^+\\\nonumber =&-\frac{1}{\r^+}\mathcal{N}^+_f\widetilde{\mathcal{N}}^{-1}_f g^--\frac{1}{\r^-}\mathcal{N}^-_f\widetilde{\mathcal{N}}^{-1}_f g^+\\\nonumber =&-\frac{\r^+g^++\r^-g^-}{\r^++\r^-}+\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f(g^+-g^-)\\\nonumber =&-\frac{2}{\r^++\r^-} \big((\r^+\underline{u}^+_1+\r^-\underline{u}^-_1)\partial_1\theta+(\r^+\underline{u}^+_2+\r^-\underline{u}^-_2)\partial_2\theta\big)\\\nonumber &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^+\underline{u}_s^+\underline{u}^+_r- \rho^+\sum^3_{j=1}\underline{F}_{sj}^+\underline{F}_{rj}^+\big)\partial_s\partial_rf\\\nonumber &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^-\underline{u}^-_s\underline{u}^-_r-\rho^-\sum^3_{j=1} \underline{F}_{sj}^-\underline{F}_{rj}^-\big)\partial_s\partial_rf\\\nonumber &+\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P\big(\sum^2_{s,r=1}\big(\underline{u}_s^+\underline{u}^+_r-\sum^3_{j=1}\underline{F}_{sj}^+\underline{F}_{rj}^+\big)\partial_s\partial_rf\big)\\\nonumber &-\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal
P\big(\sum^2_{s,r=1}\big(\underline{u}^-_s\underline{u}^-_r-\sum^3_{j=1}\underline{F}_{sj}^-\underline{F}_{rj}^-\big)\partial_s\partial_rf\big)\\\nonumber &+\frac{2}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P \big((\underline{u}^+_1-\underline{u}^-_1)\partial_1\theta+(\underline{u}^+_2-\underline{u}^-_2)\partial_2\theta\big)\\\nonumber &+\frac{1}{\r^++\r^-}N_f\cdot\underline{\nabla(\r^+p_{\vup, \vup}-\rho^+\sum^3_{j=1}p_{\vF^+_j, \vF^+_j})}\\\nonumber &+\frac{1}{\r^++\r^-}N_f\cdot\underline{\nabla(\r^-p_{\vum,\vum}-\rho^-\sum^3_{j=1}p_{\vF^-_j, \vF^-_j})}\\\nonumber &-\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P N_f\cdot\underline{\nabla\big(p_{\vup, \vup}-\sum^3_{j=1}p_{\vF^+_j, \vF^+_j}-p_{\vum,\vum}+\sum^3_{j=1}p_{\vF^-_j, \vF^-_j}\big)}.\label{eq:theta} \end{align} Here $\mathcal P:L^2(\mathbb{T}^2)\to L^2(\mathbb{T}^2)$ is a projection operator defined by \begin{align*} \mathcal Pg=g-\langle g\rangle \end{align*} with $\langle g\rangle \buildrel\hbox{\footnotesize def}\over =\int_{\mathbb{T}^2}gdx'$. We can apply the operator $\mathcal P$ to some of the terms in (\ref{eq:theta}) for the same reasons as in \cite{SWZ1}; this does not change the formulation of the system, owing to $\mathcal P g^\pm=g^\pm$. \subsection{Equations for the vorticity and the curl of deformation tensor} We derive the equations for \begin{align*} \vom^\pm=\nabla\times \vu^\pm,\qquad \vG_j^\pm=\nabla\times \vF^\pm_j.
\end{align*} It is straightforward to obtain from (\ref{els}) that $(\vom^\pm,\vG_j^\pm)$ satisfies \begin{equation}\label{eq.vor} \left\{ \begin{array}{ll} \pa_t\vom^\pm+\vu^\pm\cdot\nabla\vom^\pm-\sum\limits^3_{j=1}\vF^\pm_j\cdot\nabla \vG^\pm_j=\vom^\pm\cdot\nabla \vu^\pm-\sum\limits^3_{j=1}\vG^\pm_j\cdot\nabla \vF^\pm_j&\text{in}\quad \Om_t;\\ \pa_t \vG^\pm_j+\vu^\pm\cdot\nabla \vG^\pm_j-\vF^\pm_j\cdot\nabla\vom^\pm=\vG^\pm_j\cdot\nabla \vu^\pm-\vom^\pm\cdot\nabla \vF^\pm_j-2\sum\limits^3_{i=1}\nabla u^\pm_i\times \nabla F^\pm_{ij}&\text{in}\quad \Om_t. \end{array} \right. \end{equation} \subsection{The evolution of tangential parts of $\vu$ and $\vF_j$ on top and bottom boundaries} As in \cite{SWZ1}, we derive the evolution of \begin{align*} \b^\pm_i(t)=\int_{\mathbb{T}^2}u^\pm_i(t,x',\pm1)dx',\quad \gamma^\pm_{ij}(t)=\int_{\mathbb{T}^2}F_{ij}^\pm(t,x',\pm1)dx'\quad \text{for } i=1,2\text{ and } j=1,2,3. \end{align*} As $u^\pm_3(t,x',\pm1)\equiv0$, we deduce that for $i=1,2$ \begin{align*} \pa_t u^\pm_i+u^\pm_s\pa_s u^\pm_i-\sum^3_{j=1}F^\pm_{sj}\pa_s F^\pm_{ij}+\frac{1}{\rho^\pm}\pa_i p^\pm=0 \qquad \text{on} ~ \Gamma^\pm. \end{align*} Consequently, noting that $\int_{\mathbb{T}^2}\pa_i p^\pm dx'=0$, one has \begin{align*} \pa_t\b^\pm_i+\int_{\Gamma^\pm}\big(u^\pm_s\pa_s u^\pm_i-\sum^3_{s,j=1}F^\pm_{sj}\pa_s F^\pm_{ij}\big)dx'=0, \end{align*} or equivalently \begin{align*} \b^\pm_i(t)=\b^\pm_i(0)-\int^t_0\int_{\Gamma^\pm}\big(u^\pm_s\pa_s u^\pm_i-\sum^3_{j=1}F^\pm_{sj}\pa_s F^\pm_{ij}\big)dx'd\tau. \end{align*} Similarly, we have \begin{align*} \gamma^\pm_{ij}(t)=\gamma^\pm_{ij}(0)-\int^t_0\int_{\Gamma^\pm}\big(u^\pm_s\pa_s F^\pm_{ij}-F^\pm_{sj}\pa_s u^\pm_i\big)dx'd\tau.
\end{align*} \subsection{Solvability conditions of the div-curl system} To recover the divergence-free velocity field or deformation tensor field from its curl part, we solve the following div-curl system: \begin{equation}\label{div-curl-temp} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits \vu^\pm=\vom^\pm,\quad \mbox{div}\vu^\pm=g^\pm\quad\text{in}\quad \Omega_f^\pm,\\ \vu^\pm\cdot\vN_f =\theta^\pm\quad\text{on}\quad\Gamma_{f}, \\ \vu^\pm\cdot\ve_3 = 0 \text{ on }\Gamma^{\pm}, \quad \int_{\Gamma^\pm} u_i^\pm dx'=\beta^\pm_i (i=1,2).& \end{array} \right. \end{equation} The solvability of the above system was obtained in \cite{SWZ1} under the following compatibility conditions: \begin{itemize} \item[C1.] $\mbox{div}\vom^\pm=0$\, in $\Omega_f^\pm$, \item[C2.] $\int_{\Gamma^\pm}\om_3^\pm dx'=0$, \item[C3.] $\int_{\mathbb{T}^2}\theta^\pm dx'=\mp\int_{\Omega^\pm_f} g^\pm dx$, \end{itemize} and the main result is stated in Proposition \ref{prop:div-curl}. We remark that condition C3 follows from the divergence theorem: for instance, since $\vu^-\cdot\ve_3=0$ on $\Gamma^-$, \begin{align*} \int_{\Omega^-_f}g^-\,dx=\int_{\partial\Omega^-_f}\vu^-\cdot\vn\,d\sigma=\int_{\mathbb{T}^2}\underline{\vu}^-\cdot\vN_f\,dx'=\int_{\mathbb{T}^2}\theta^-\,dx'. \end{align*} \section{Uniform estimates for the linearized system} In this section, we will present the uniform energy estimates for the linearized system around given functions $(f,\vu^\pm,\vF^\pm)$.
We assume that there exists $T>0$ such that for any $t\in [0,T]$: \begin{align} &\|(\vu^\pm, \vF^\pm)(t)\|_{L^{\infty}(\Gamma_f)}\le L_0,\label{ass:regularity}\\ &\|f(t)\|_{H^{s+\frac{1}{2}}(\mathbb{T}^2)}+\|\pa_tf(t)\|_{H^{s-\f12}(\mathbb{T}^2)}+\|\vu^\pm(t)\|_{H^{s}(\Omega_f^\pm)} +\|\vF^\pm(t)\|_{H^{s}(\Omega_f^\pm)}\le L_1,\\ &\|(\partial_t\vu^\pm, \partial_t\vF^\pm)(t)\|_{L^{\infty}(\Gamma_f)}\le L_2,\\ &\|f(t)-f_*\|_{H^{s- \frac{1}{2}}}\le \delta_0,\\ &-(1-c_0)\le f(t,x')\le (1-c_0),\\ &\Lambda(\vF^\pm, \vv)(t)\ge c_0,\label{ass:stability} \end{align} together with \begin{equation}\label{ass:boun} \left\{ \begin{array}{ll} \mbox{div}\vu^\pm=\mbox{div}\vF_j^\pm=0\quad&\text{in}\quad\Omega_f^\pm,\\ {\vF_j}^\pm\cdot\vN_f=0&\text{on}\quad\Gamma_f,\\ \partial_tf=\underline{\vu}^\pm\cdot\vN_f\quad&\text{on}\quad\Gamma_f,\\ u_3^\pm=F_{3j}^\pm=0\quad&\text{on}\quad \Gamma^\pm. \end{array} \right. \end{equation} Here $L_0, L_1, L_2, c_0, \delta_0$ are positive constants. \subsection{The linearized system for the height function of the interface} For the system (\ref{eq:form:f}) and (\ref{eq:theta}), we introduce the following linearized system: \begin{equation}\label{sys:linear-H} \left\{ \begin{array}{l} \pa_t\bar f=\bar\theta;\\ \partial_t\bar\theta=-\frac{2}{\r^++\r^-} \big((\r^+\underline{u}^+_1+\r^-\underline{u}^-_1)\partial_1\bar\theta+(\r^+\underline{u}^+_2+\r^-\underline{u}^-_2)\partial_2\bar\theta\big)\\ \quad-\frac{1}{\r^++\r^-}\sum\limits^2_{s,r=1}\big(\r^+\underline{u}_s^+\underline{u}^+_r- \rho^+\sum\limits^3_{j=1}\underline{F}_{sj}^+\underline{F}_{rj}^+\big)\partial_s\partial_r\bar f\\ \quad-\frac{1}{\r^++\r^-}\sum\limits^2_{s,r=1}\big(\r^-\underline{u}^-_s\underline{u}^-_r-\rho^-\sum\limits^3_{j=1} \underline{F}_{sj}^-\underline{F}_{rj}^-\big)\partial_s\partial_r\bar f+\mathfrak g,\\ \end{array} \right.
\end{equation} where \begin{align}\label{eq:g-def} \mathfrak g=&\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P\big(\sum^2_{s,r=1}\big(\underline{u}_s^+\underline{u}^+_r-\underline{u}^-_s\underline{u}^-_r+\sum^3_{j=1} (\underline{F}_{sj}^-\underline{F}_{rj}^--\underline{F}_{sj}^+\underline{F}_{rj}^+)\big)\partial_s\partial_rf\big)\\\nonumber &+\frac{2}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P \big((\underline{u}^+_1-\underline{u}^-_1)\partial_1\theta+(\underline{u}^+_2-\underline{u}^-_2)\partial_2\theta\big)\\\nonumber &+\frac{\rho^+}{\r^++\r^-}N_f\cdot\underline{\nabla(p_{\vup, \vup}-\sum^3_{j=1}p_{\vF^+_j, \vF^+_j})}+\frac{\rho^-}{\r^++\r^-}N_f\cdot\underline{\nabla(p_{\vum,\vum}-\sum^3_{j=1}p_{\vF^-_j, \vF^-_j})}\\\nonumber &-\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P N_f\cdot\underline{\nabla\big(p_{\vup, \vup}-\sum^3_{j=1}p_{\vF^+_j, \vF^+_j}-p_{\vum,\vum}+\sum^3_{j=1}p_{\vF^-_j, \vF^-_j}\big)}\\\nonumber \triangleq&g_1+g_2+g_3+g_4. \end{align} Here we need to be careful that $\int_{\mathbb{T}^2}\bar\theta dx'$ may not vanish. \begin{remark} Let $D_t=\pa_t+w_1\pa_1+w_2\pa_2$. Then we have \begin{align*} D_t^2\bar f=\sum\limits^2_{s,r=1}\Big(-v_sv_r+\frac{\rho^+}{\r^++\r^-}\sum\limits^3_{j=1}F^+_{sj}F^+_{rj}+\frac{\rho^-}{\r^++\r^-}\sum\limits^3_{j=1}F^-_{sj}F^-_{rj}\Big)\pa_s\pa_r \bar f+\text{low order terms}. \end{align*} The principal symbol of the operator on the right-hand side is \begin{align} (v_i\xi_i)^2- \Big(\frac{\rho^+}{\r^++\r^-}\sum\limits^3_{j=1}(F^+_{ij}\xi_i)^2+\frac{\rho^-}{\r^++\r^-}\sum\limits^3_{j=1}(F^-_{ij}\xi_i)^2\Big). \end{align} The negativity of this symbol is ensured by the stability condition (\ref{ass:stability}) (see (\ref{condition:s2})). Therefore, $\bar f$ satisfies a strictly hyperbolic equation, and thus the system should be linearly well-posed.
\end{remark} Define the energy functional $E_s$ as \begin{align}\nonumber E_s(\partial_t\bar{f},\bar{f})\buildrel\hbox{\footnotesize def}\over = &\big\|(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\big\|_{L^2}^2 -\big\|v_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\big\|_{L^2}^2\\ &+\frac{\rho^+}{\r^++\r^-}\sum\limits^3_{j=1}\big\|\underline{F} _{ij}^+\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\big\|_{L^2}^2 +\frac{\rho^-}{\r^++\r^-}\sum\limits^3_{j=1}\big\|\underline{F} _{ij}^-\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\big\|_{L^2}^2, \end{align} where $\langle \na\rangle^s f=\mathcal{F}^{-1}((1+|\xi|^2)^{\f s 2}\widehat{f})$ and \begin{align*} w_i=\frac{1}{\r^++\r^-}(\r^+\underline{u} ^+_i+\r^-\underline{u} ^-_i),\qquad v_i=\frac{\sqrt{\r^+\r^-}}{\r^++\r^-}(\underline{u} ^+_i-\underline{u} ^-_i). \end{align*} Obviously, there exists $C(L_0)>0$ so that \begin{align}\label{linear:equi-norm-1} E_s(\partial_t\bar{f},\bar{f})\le C(L_0)\left(\|\partial_t\bar{f}\|_{H^{s-\f12}}^2 +\|\bar{f}\|_{H^{s+\f12}}^2\right). \end{align} In addition, we deduce from the stability condition (\ref{ass:stability}) that there exists $C(c_0,L_0)$ so that \begin{align}\label{linear:equi-norm-2} &\|\partial_t \bar{f}\|_{H^{s-\f12}}^2+\|\bar{f}\|_{H^{s+\f12}}^2 \le C(c_0,L_0)\Big\{E_s(\partial_t\bar{f},\bar{f})+\|\partial_t\bar{f}\|_{L^2}^2+\|\bar{f}\|_{L^2}^2\Big\}. \end{align} Firstly, we have the estimate of $\mathfrak{g}$ defined by (\ref{eq:g-def}). \begin{lemma}\label{lem:non-g} It holds that \begin{align*} \|\mathfrak{g}\|_{H^{s-\f12}}\le C(L_1). \end{align*} \end{lemma} \begin{proof} The proof is similar to Lemma 6.2 in \cite{SWZ1}. 
By using Proposition \ref{prop:DN-Hs} and Proposition \ref{prop:DN-inverse}, we have \begin{align*} \|\mathfrak{g}_1\|_{H^{s-\f12}}\le& C(L_1)\left\|(\underline{u}_i^+\underline{u}^+_j-\underline{F}^+_i\underline{F}^+_j -\underline{u}^-_i\underline{u}^-_j+\underline{F}^-_i\underline{F}^-_j)\partial_i\partial_jf \right\|_{H^{s-\f32}}\\ \le& C(L_1)\|(\underline{\vu}^\pm,\underline{\vF}^\pm)\|_{H^{s-\f32}}\|f\|_{H^{s+\f12}}\le C(L_1),\\ \|\mathfrak{g}_2\|_{H^{s-\f12}} \le& C(L_1)\|\underline{\vu}^\pm\|_{H^{s-\f32}}\|\theta\|_{H^{s-\f12}}\le C(L_1),\\ \|(\mathfrak{g}_3, \mathfrak{g}_4)\|_{H^{s-\f12}} \le& C(L_1)\Big(\|\underline{\nabla(p_{\vum,\vum}-\sum^3_{j=1}p_{\vF^-_j, \vF^-_j})}\|_{H^{s-\f12}}+\|\underline{\nabla(p_{\vup, \vup}-\sum^3_{j=1}p_{\vF^+_j, \vF^+_j})}\|_{H^{s-\f12}}\Big)\\ \le& C(L_1)\Big(\big\|\nabla(p_{\vum,\vum}, \sum^3_{j=1}p_{\vF^-_j, \vF^-_j})\big\|_{H^{s}(\Om_f^-)} +\big\|\nabla(p_{\vup, \vup}, \sum^3_{j=1}p_{\vF^+_j, \vF^+_j})\big\|_{H^{s}(\Om_f^+)}\Big)\\ \le& C(L_1)\|(\vu^\pm,\vF^\pm)\|_{H^s(\Om_f^\pm)}\le C(L_1). \end{align*} The proof is finished. \end{proof} Then we have the following estimate. \begin{proposition}\label{prop:f-L} Assume that $\mathfrak{g}\in L^\infty(0,T;H^{s- \frac{1}{2}}(\mathbb{T}^2))$. Given the initial data $(\bar \theta_0, \bar f_0)\in H^{s- \frac{1}{2}}\times H^{s+\frac{1}{2}}(\mathbb{T}^2)$, there exists a unique solution $(\bar f,\bar \theta)\in C\big([0,T];H^{s+\frac{1}{2}}\times H^{s- \frac{1}{2}}(\mathbb{T}^2)\big)$ to the system (\ref{sys:linear-H}) such that \begin{align*} &\sup_{t\in[0,T]}\left(\|\partial_t\bar{f}(t)\|_{H^{s- \frac{1}{2}}}^2+\|\bar{f}(t)\|_{H^{s+\frac{1}{2}}}^2\right)\\ &\quad\le C(c_0,L_0)\left(\|\bar\theta_0\|_{H^{s- \frac{1}{2}}}^2+\|\bar f_0\|_{H^{s+\frac{1}{2}}}^2+\int_0^T\|\mathfrak{g}(\tau)\|^2_{H^{s- \frac{1}{2}}}d\tau\right)e^{C(c_0, L_1,L_2)T}. \end{align*} \end{proposition} \begin{proof} It suffices to prove the uniform estimates.
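For later use, we record an elementary identity that follows by a direct computation from the definitions of $w_i$ and $v_i$: for $s,r=1,2$,
\begin{align*}
w_sw_r+v_sv_r=\frac{(\r^+\underline{u}^+_s+\r^-\underline{u}^-_s)(\r^+\underline{u}^+_r+\r^-\underline{u}^-_r)+\r^+\r^-(\underline{u}^+_s-\underline{u}^-_s)(\underline{u}^+_r-\underline{u}^-_r)}{(\r^++\r^-)^2}
=\frac{\r^+\underline{u}^+_s\underline{u}^+_r+\r^-\underline{u}^-_s\underline{u}^-_r}{\r^++\r^-}.
\end{align*}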
From the equation (\ref{sys:linear-H}), we obtain \begin{align*} \pa^2_t\bar f=&-2(w_1\partial_1\bar\theta+w_2\partial_2\bar\theta)+\sum_{s,t=1,2}(-w_sw_t-v_sv_t)\partial_s\partial_t\bar f\\ &+\sum^2_{s,r=1}(\frac{\rho^+}{\r^++\r^-}\sum^3_{j=1}\underline{F} _{sj}^+\underline{F} _{rj}^++\frac{\rho^-}{\r^++\r^-}\sum^3_{j=1}\underline{F} _{sj}^-\underline{F} _{rj}^-)\partial_s\partial_r\bar f+\mathfrak g, \end{align*} which yields that \begin{align*} \frac{1}{2}\frac{d}{dt}&\big\|(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\big\|_{L^2(\mathbb{T}^2)}^2\\ =& \Big\langle(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, \langle \nabla\rangle^{s- \frac{1}{2}}\partial_{t}^2\bar{f}+w_i\partial_i(\langle \nabla\rangle^{s- \frac{1}{2}}\partial_t\bar{f}) +\partial_tw_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle\\ =& \Big\langle(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},\langle \nabla\rangle^{s- \frac{1}{2}}(-2w_i\partial_i\pa_t\bar f+\sum^2_{s,r=1}(-w_sw_r-v_sv_r)\partial_s\partial_r\bar f\\ &+\sum^2_{s,r=1}(\frac{\rho^+}{\r^++\r^-}\sum^3_{j=1}\underline{F} _{sj}^+\underline{F} _{rj}^++\frac{\rho^-}{\r^++\r^-}\sum^3_{j=1}\underline{F} _{sj}^-\underline{F} _{rj}^-)\partial_s\partial_r\bar f)\Big\rangle\\ &+\big\langle(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},\langle \nabla\rangle^{s- \frac{1}{2}}\mathfrak g+w_i\pa_i(\langle \nabla\rangle^{s- \frac{1}{2}}\pa_t\bar f)+\pa_tw_i\pa_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar f\big\rangle\\ =&\big\langle(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, -w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\partial_t\bar{f}\big\rangle\\ &+\big\langle(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},\sum^2_{s,r=1}(-w_sw_r-v_sv_r)\partial_s\partial_r\langle \nabla\rangle^{s- \frac{1}{2}}\bar f\\ 
&\qquad+\sum^2_{s,r=1}(\frac{\rho^+}{\r^++\r^-}\sum^3_{j=1}\underline{F}_{sj}^+\underline{F}_{rj}^++\frac{\rho^-}{\r^++\r^-}\sum^3_{j=1}\underline{F}_{sj}^-\underline{F}_{rj}^-)\partial_s\partial_r\langle \nabla\rangle^{s- \frac{1}{2}}\bar f)\big\rangle\\ &+2\big\langle (\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},[w_i,\langle \nabla\rangle^{s- \frac{1}{2}}]\pa_i\pa_t\bar f\big\rangle\\ &+\big\langle (\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},\\ &\qquad \big[w_sw_r+v_sv_r -\frac{\rho^+}{\r^++\r^-}\sum^3_{j=1}\underline{F}_{sj}^+\underline{F}_{rj}^+-\frac{\rho^-}{\r^++\r^-}\sum^3_{j=1}\underline{F}_{sj}^-\underline{F}_{rj}^-,\langle \nabla\rangle^{s- \frac{1}{2}}\big]\pa_s\pa_r\bar f \big\rangle \\ &+\big\langle (\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},\langle \nabla\rangle^{s- \frac{1}{2}}\mathfrak g+\pa_t w_i\pa_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar f \big\rangle\\ \triangleq& I_1+\cdots+I_5. \end{align*} From Lemma \ref{lem:commutator}, one has \begin{align*} I_3\le& 2\|(\partial_t+w_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\|_{L^2}\big\|\big[w_i, \langle \nabla\rangle^{s- \frac{1}{2}}\big]\partial_i\partial_t\bar{f}\big\|_{L^2} \\ \le& CE_s(\partial_t\bar{f},\bar{f})^\f12\|w\|_{H^{s-\f12}}\|\pa_t\bar f\|_{H^{s-\f12}}, \end{align*} and \begin{align*} I_4\le CE_s(\partial_t\bar{f},\bar{f})^{\frac{1}{2}}\Big(\|w\|_{H^{s-\frac{1}{2}}}^2+\|v\|_{H^{s-\frac{1}{2}}}^2+\|\underline{\vF}^\pm\|_{H^{s-\frac{1}{2}}}^2\Big)\|\bar f\|_{H^{s+\frac{1}{2}}}. \end{align*} In addition, it holds that \begin{equation}\nonumber I_5\le E_s(\partial_t\bar{f},\bar{f})^\f12\big(\|\mathfrak{g}\|_{H^{s-\f12}}+\|\partial_tw\|_{L^\infty}\|\bar f\|_{H^{s+\f12}}\big).
\end{equation} It follows from integration by parts that \begin{align*} &\Big\langle\partial_t\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, ~-w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\partial_t\bar{f}\Big\rangle \le \|\partial_iw_i\|_{L^\infty}\|\partial_t\bar{f}\|_{H^{s-\f12}}^2,\\ &\Big\langle w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, ~-w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\partial_t\bar{f}\Big\rangle +\frac12\frac{d}{dt}\|w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\|_{L^2}^2\\ &\quad\qquad=\Big\langle w_i\partial_i \langle \nabla\rangle^{s- \frac{1}{2}}\bar{f},\partial_tw_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle\le \|w\|_{L^\infty} \|\partial_tw\|_{L^\infty}\|\bar{f}\|_{H^{s+\f12}}^2, \end{align*} which implies \begin{equation} I_1\le -\frac12\frac{d}{dt}\|w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\|_{L^2}^2+\big(1+\|w\|_{W^{1,\infty}}+\|\partial_tw\|_{L^\infty}\big)^2 \Big(\|\bar{f}\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}\|_{H^{s-\f12}}^2\Big). 
\end{equation} To estimate $I_2$, we can derive \begin{align*} &\Big\langle\partial_t\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, -w_iw_j\partial_i\partial_j\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle -\frac12\frac{d}{dt}\|w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\|_{L^2}^2\\ &=-\Big\langle w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, ~\partial_tw_i \partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle+\Big\langle\langle \nabla\rangle^{s- \frac{1}{2}}\partial_t\bar{f}, ~\partial_i(w_iw_j)\partial_j\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle\\ &\le\|w\|_{L^\infty}\big(\|\partial_tw\|_{L^\infty}+\|\nabla w\|_{L^\infty}\big) \Big(\|\bar{f}\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}\|_{H^{s-\f12}}^2\Big), \end{align*} and similarly \begin{align*} &\Big\langle w_k\partial_k\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, -w_iw_j\partial_i\partial_j\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle \le C \|w\|^2_{L^\infty}\|\nabla w\|_{L^\infty}\|\bar{f}\|_{H^{s+\f12}}^2, \end{align*} as well as \begin{align*} &\Big\langle\partial_t\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}+w_k\partial_k\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}, \big(\frac{\rho^+}{\r^++\r^-}\sum^3_{j=1}\underline{F}_{sj}^+\underline{F}_{rj}^++\frac{\rho^-}{\r^++\r^-}\sum^3_{j=1}\underline{F}_{sj}^-\underline{F}_{rj}^-\big)\partial_s\partial_r\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\Big\rangle\\ &\le- \frac{1}{2}\frac{\rho^\pm}{\rho^++\rho^-}\frac{d}{dt}\sum^3_{j=1}\|\underline{F}_{ij}^\pm\pa_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar f\|_{L^2}^2\\\nonumber &\quad+C\|\underline{\vF}^\pm\|_{L^\infty}(1+\|\underline{\vF}^\pm\|_{L^\infty}) \big(\|\partial_t\underline{\vF}^\pm\|_{L^\infty}+\|\nabla \underline{\vF}^\pm\|_{L^\infty}\big) \Big(\|\bar{f}\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}\|_{H^{s-\f12}}^2\Big).
\end{align*} Therefore, we obtain \begin{align*} I_2\le& \frac12\frac{d}{dt}\|w_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\|_{L^2}^2+\frac12\frac{d}{dt}\|v_i\partial_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar{f}\|_{L^2}^2\\ &- \frac{1}{2}\frac{\rho^\pm}{\rho^++\rho^-} \frac{d}{dt}\sum^3_{j=1}\|\underline{F} _{ij}^\pm\pa_i\langle \nabla\rangle^{s- \frac{1}{2}}\bar f\|_{L^2}^2\\ &+C\big(1+\|(\underline{\vu}^\pm,\underline{\vF}^\pm)\|_{W^{1,\infty}}+\|\pa_t(\underline{\vu}^\pm,\underline{\vF}^\pm)\|_{L^\infty}\big)^3 \Big(\|\bar{f}\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}\|_{H^{s-\f12}}^2\Big). \end{align*} Combining the estimates of $I_1,\cdots, I_5$ together yields that \begin{align*} \frac{d}{dt}E_s(&\partial_t\bar{f},\bar{f})\le \|\mathfrak{g}\|_{H^{s-\f12}}^2\\ +&C(L_0)\big(1+\|(\underline{\vu}^\pm,\underline{\vF}^\pm)\|_{H^{s-\f12}}+\|\partial_t(\underline{\vu}^\pm,\underline{\vF}^\pm)\|_{L^\infty}\big)^3\Big(\|\bar{f}\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}\|_{H^{s-\f12}}^2\Big). \end{align*} On the other hand, it is easy to show that \begin{align*} \frac{d}{dt}\big(\|\partial_t\bar{f}\|_{L^2}^2+\|\bar{f}\|_{L^2}^2\big)\le C(L_0)\Big(\|\bar{f}\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}\|_{H^{s-\f12}}^2\Big) +\|\mathfrak{g}\|_{L^2}^2. \end{align*} Let $\mathcal{E}(t)\triangleq\|\bar{f}(t)\|_{H^{s+\f12}}^2+\|\partial_t\bar{f}(t)\|_{H^{s-\f12}}^2$. 
It follows from (\ref{linear:equi-norm-2}) that \begin{align*} \mathcal{E}(t)\le C(c_0,L_0)\Big(&\|\bar\theta_0\|_{H^{s-\f12}}^2+\|\bar f_0\|_{H^{s+\f12}}^2+\int_0^t\|\mathfrak{g}(\tau)\|_{H^{s-\f12}}^2d\tau\\ &+\int_0^t\big(1+\|(\underline{\vu}^\pm,\underline{\vF}^\pm)(\tau)\|_{H^{s-\f12}}+\|\partial_t(\underline{\vu}^\pm, \underline{\vF}^\pm)(\tau)\|_{L^\infty}\big)^3\mathcal{E}(\tau)d\tau\Big), \end{align*} which together with Lemma \ref{lem:basic} gives \begin{align*} \mathcal{E}(t)\le C(c_0,L_0)\Big(&\|\bar\theta_0\|_{H^{s-\f12}}^2+\|\bar f_0\|_{H^{s+\f12}}^2+\int_0^t\|\mathfrak{g}(\tau)\|_{H^{s-\f12}}^2d\tau+C(L_1,L_2)\int_0^t\mathcal{E}(\tau)d\tau\Big). \end{align*} By using Gronwall's inequality, we conclude the desired estimate. \end{proof} \subsection{The linearized system of $(\vom^\pm,\vG^\pm)$} For the vorticity system (\ref{eq.vor}), we introduce the following linearized system: \begin{equation}\label{eq:vorticity-w-L} \left\{ \begin{array}{l} \pa_t\bar\vom^\pm+\vu^\pm\cdot\nabla \bar\vom^\pm-\sum\limits^3_{j=1}\vF^\pm_j\cdot\nabla \bar{\vG}^\pm_j=\bar{\vom}^\pm\cdot\nabla \vu^\pm-\sum\limits^3_{j=1}\bar{\vG}^\pm_j\cdot\nabla \vF^\pm_j,\\ \pa_t\bar \vG^\pm_j+\vu^\pm\cdot\nabla \bar \vG^\pm_j-\vF^\pm_j\cdot\nabla\bar\vom^\pm=\bar \vG^\pm_j\cdot\nabla \vu^\pm-\bar\vom^\pm\cdot\nabla \vF^\pm_j-2\sum\limits^3_{s=1}\nabla u^\pm_s\times\nabla F^\pm_{sj},\\ \bar\vom^\pm(0,x)=\bar\vom_0^\pm,\qquad \bar\vG^\pm_j(0,x)= \bar\vG_{j,0}^\pm. \end{array} \right. \end{equation} We first assume the existence of solutions to (\ref{eq:vorticity-w-L}). Then the following estimate holds.
\begin{proposition}\label{prop:vorticity} It holds that \begin{align}\label{eq:linearwg} \sup_{t\in[0,T]}(\|\bar\vom^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}&+\sum^3_{j=1}\|\bar \vG_j^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)})\\ &\le\Big(1+\|\bar\vom_0^\pm\|^2_{H^{s-1}(\Omega^\pm_f)}+\sum^3_{j=1}\|\bar \vG_{j,0}^\pm\|^2_{H^{s-1}(\Omega^\pm_f)}\Big)e^{C(L_1)T}.\nonumber \end{align} \end{proposition} \begin{proof}Using $\pa_t f=\vu^\pm\cdot \vN_f$ and integrating by parts, we obtain \begin{align*} &\frac{1}{2}\frac{d}{dt}\int_{\Om^\pm_f}|\nabla^{s-1}\bar\vom^\pm(t,x)|^2+\sum^3_{j=1}|\nabla^{s-1}\bar \vG^\pm_j(t,x)|^2dx\\ =&\int_{\Om^\pm_f}\nabla^{s-1}\bar\vom^\pm\cdot \nabla^{s-1}\pa_t\bar\vom^\pm +\sum^3_{j=1}\nabla^{s-1}\bar \vG_j^\pm\cdot \nabla^{s-1}\pa_t\bar \vG_j^\pm dx\\ &\mp\frac{1}{2}\int_{\Gamma_f}(|\nabla^{s-1}\bar\vom^\pm|^2+\sum^3_{j=1}|\nabla^{s-1}\bar \vG^\pm_j|^2)(\vu^\pm\cdot\vn)d\sigma. \end{align*} From (\ref{eq:vorticity-w-L}) and the fact that $\vF^\pm_j\cdot \vN_f=0$, we can derive \begin{align*} \frac{1}{2}\frac{d}{dt}&\int_{\Om^\pm_f}|\nabla^{s-1}\bar\vom^\pm(t,x)|^2+\sum^3_{j=1}|\nabla^{s-1}\bar \vG^\pm_j(t,x)|^2dx\\ \le&\int_{\Om^\pm_f}\nabla^{s-1}\bar\vom^\pm\cdot\nabla^{s-1}[-\vu^\pm\cdot\nabla\bar\vom^\pm]+\sum^3_{j=1}\nabla^{s-1}\bar\vG_j^\pm\cdot\nabla^{s-1}[-\vu^\pm\cdot\nabla \bar\vG_j^\pm]dx\\ &+\int_{\Om^\pm_f}\nabla^{s-1}\bar\vom^\pm\cdot\nabla^{s-1}[\sum^3_{j=1} \vF_j^\pm\cdot\nabla \bar\vG_j^\pm]+\sum^3_{j=1}\nabla^{s-1}\bar\vG_j^\pm\cdot\nabla^{s-1}[\vF_j^\pm\cdot\nabla \bar\vom^\pm]dx\\ &\mp\frac{1}{2}\int_{\Gamma_f}(|\nabla^{s-1}\bar\vom^\pm|^2+\sum^3_{j=1}|\nabla^{s-1}\bar \vG^\pm_j|^2)(\vu^\pm\cdot\vn)d\sigma\\ &+C(L_1)\Big(1+\|\bar\vom^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}+\sum^3_{j=1}\|\bar \vG_j^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}\Big)\\ \le&\frac{1}{2}\int_{\Om^\pm_f}-\vu^\pm\cdot\nabla(|\nabla^{s-1}\bar\vom^\pm|^2+\sum^3_{j=1}|\nabla^{s-1}\bar \vG_j^\pm|^2)dx\\
&\mp\frac{1}{2}\int_{\Gamma_f}(|\nabla^{s-1}\bar\vom^\pm|^2+\sum^3_{j=1}|\nabla^{s-1}\bar \vG^\pm_j|^2)(\vu^\pm\cdot\vn)d\sigma\\ &+\sum^3_{j=1}\int_{\Om^\pm_f}\vF_j^\pm\cdot\nabla(\nabla^{s-1}\bar\vom^\pm\cdot\nabla^{s-1}\bar \vG_j^\pm)dx\\ &+C(L_1)\Big(1+\|\bar\vom^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}+\sum^3_{j=1}\|\bar \vG_j^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}\Big)\\ \le&C(L_1)\Big(1+\|\bar\vom^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}+\sum^3_{j=1}\|\bar \vG_j^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}\Big). \end{align*} Then the proposition follows from Gronwall's inequality. \end{proof} Now we turn to the existence of solutions of the linearized system (\ref{eq:vorticity-w-L}).\smallskip First of all, we consider the linear system \begin{equation}\label{eq.linear} \left\{ \begin{array}{ll} \pa_t\vom^\pm +\vu^\pm\cdot\nabla\vom^\pm -\sum\limits^3_{j=1}\vF^\pm_j\cdot\nabla \vG^\pm_j=\vg^\pm_0,&(t,x)\in Q_T^\pm,\\ \pa_t \vG^\pm_j+\vu^\pm\cdot\nabla \vG^\pm_j-\vF^\pm_j\cdot\nabla \vom^\pm=\vg^\pm_j,&(t,x)\in Q_T^\pm,\\ \vom^\pm(0,x)=\vom_0^\pm,\qquad \vG^\pm_j(0,x)=\vG^\pm_{j,0}, &x\in\Om^\pm_{f_0}. \end{array} \right. \end{equation} Here $Q_T^\pm$ is defined by $f$ in the same way as before. \begin{lemma}\label{lem:ex} Assume that $f,\vu^\pm,\vF^\pm$ satisfy (\ref{ass:regularity})-(\ref{ass:stability}). Given the initial data $(\vom_0^\pm, \vG_{j,0}^\pm)=(0,0)$, $(\vg^\pm_0,\vg^\pm_j)\in L^1([0,T];H^{s-1}(\Om^\pm_f))$, there exists a unique solution $(\vom^\pm, \vG^\pm_j)\in C([0,T];H^{s-1}(\Omega^\pm_f)\times H^{s-1}(\Omega^\pm_f))$ to the system (\ref{eq.linear}) satisfying the following estimate \begin{align*} \sup_{t\in[0,T]}\Big(\|\vom^\pm(t)\|_{H^{s-1}(\Omega^\pm_f)}&+\sum^3_{j=1}\|\vG_j^\pm(t)\|_{H^{s-1}(\Omega^\pm_f)}\Big)\le C(L_1,T)\|(\vg^\pm_0,\vg^\pm_j)\|_{L^1([0,T];H^{s-1}(\Omega^\pm_f))}. \end{align*} \end{lemma} \begin{proof} Let $\vW^\pm=(\vom^\pm,\vG^\pm)$. We rewrite the system as \begin{align*} L(\vW^\pm)=\vg^\pm.
\end{align*} We define the flow map $X^\pm(t,\cdot)$ as \begin{align*} \frac{dX^\pm(\tilde t,\tilde x)}{d\tilde t}=\vu^\pm(\tilde t,X^\pm(\tilde t,\tilde x)),\qquad (\tilde t,\tilde x)\in [0,T]\times\Om^\pm_{f_0} , \end{align*} with $X^\pm(0,\tilde x)=\tilde x$ and $t=\tilde t$. Now we write $(t,x)\in Q_T^\pm$ and $(\tilde t,\tilde x)\in [0,T]\times\Om^\pm_{f_0}$. Then we rewrite $L$ in the new coordinate as \begin{align*} \widetilde L(\widetilde\vW^\pm)=\pa_{\tilde t}\widetilde \vW^\pm+\widetilde M(\widetilde \vW^\pm)=\widetilde{\vg}^\pm, \end{align*} where $\widetilde\vW^\pm(\tilde t, \tilde x)=\vW^\pm\big(\tilde t, X^\pm(\tilde t,\tilde x)\big), \widetilde\vg^\pm(\tilde t, \tilde x)=\vg^{\pm}\big(\tilde t, X^\pm(\tilde t,\tilde x)\big)$, and $\widetilde M$ is given by \begin{equation} \widetilde M(\widetilde\vW^\pm)= \left( \begin{array}{l} -\sum\limits^3_{j=1}\big(\widetilde\vF^\pm_j(\tilde t,\tilde x)\cdot \frac{\pa X^{\pm-1}(\tilde t,\cdot)}{\pa\tilde x}\cdot\nabla_{\tilde x}\big)\widetilde\vG^\pm_j(\tilde t,\tilde x)\\ -\big(\widetilde\vF^\pm_1(\tilde t,\tilde x)\cdot \frac{\pa X^{\pm-1}(\tilde t,\cdot)}{\pa\tilde x}\cdot\nabla_{\tilde x}\big)\tilde\vom^\pm(\tilde t,\tilde x)\\ -\big(\widetilde\vF^\pm_2(\tilde t,\tilde x)\cdot \frac{\pa X^{\pm-1}(\tilde t,\cdot)}{\pa\tilde x}\cdot\nabla_{\tilde x}\big)\tilde\vom^\pm(\tilde t,\tilde x)\\ -\big(\widetilde\vF^\pm_3(\tilde t,\tilde x)\cdot \frac{\pa X^{\pm-1}(\tilde t,\cdot)}{\pa\tilde x}\cdot\nabla_{\tilde x}\big)\tilde\vom^\pm(\tilde t,\tilde x)\\ \end{array} \right).\nonumber \end{equation} We define \begin{align*} D\buildrel\hbox{\footnotesize def}\over =\big\{\tilde \vv^\pm=(\tilde\vv^\pm_0,\tilde\vv^\pm_1,\tilde\vv^\pm_2,\tilde\vv^\pm_3)\in C^\infty([0,T]\times\Om^\pm_{f_0})|\tilde\vv^\pm(T,\tilde x)=0\big\}.
\end{align*} Then $\vW^\pm$ solves (\ref{eq.linear}) if and only if for every $\tilde\vv^\pm\in D$, \begin{align*} \int^T_0\int_{\Om_{f_0}^\pm}\widetilde L(\widetilde \vW^\pm)\cdot\tilde\vv^\pm d\tilde xd\tilde t=\int^T_0\int_{\Om_{f_0}^\pm}\tilde \vg^\pm\cdot\tilde \vv^\pm d\tilde xd\tilde t. \end{align*} Thanks to $\mbox{div} \vF^\pm_j=0$ and $\vF^\pm_j\cdot \vN_f=0$ on $\Gamma_f$, using the flow map $X^\pm(t,\cdot)$, it is easy to show that $\vW^\pm$ solves (\ref{eq.linear}) if and only if for every $\tilde \vv^\pm\in D$, \begin{align}\label{eq:linear-integral} \int^T_0\int_{\Om^\pm_{f_0}}\widetilde\vW^\pm\cdot L^*(\tilde \vv^\pm)d\tilde xd\tilde t=\int^T_0\int_{\Om^\pm_{f_0}}\tilde\vg^\pm\cdot\tilde\vv^\pm d\tilde xd\tilde t, \end{align} where $L^*$ denotes the dual of $L$, i.e., \begin{equation} L^*(\tilde \vv)={-}\left( \begin{array}{l} \pa_{\tilde t}\tilde\vv^\pm_0-\sum\limits^3_{j=1}\widetilde\vF^\pm_j\cdot \frac{\pa X^{\pm-1}}{\pa\tilde x}\cdot\nabla_{\tilde x}\tilde\vv^\pm_j\\ \pa_{\tilde t}\tilde\vv^\pm_1-\widetilde\vF^\pm_1\cdot \frac{\pa X^{\pm-1}}{\pa\tilde x}\cdot\nabla_{\tilde x}\tilde\vv^\pm_0\\ \pa_{\tilde t}\tilde\vv^\pm_2-\widetilde\vF^\pm_2\cdot \frac{\pa X^{\pm-1}}{\pa\tilde x}\cdot\nabla_{\tilde x}\tilde\vv^\pm_0\\ \pa_{\tilde t}\tilde\vv^\pm_3-\widetilde\vF^\pm_3\cdot \frac{\pa X^{\pm-1}}{\pa\tilde x}\cdot\nabla_{\tilde x}\tilde\vv^\pm_0\\ \end{array} \right). \nonumber \end{equation} We denote \begin{align*} L^*(\tilde\vv^\pm)=\widetilde \vV^\pm. \end{align*} It is easy to show that \begin{align*} \sup_{\tilde t\in[0,T]} \|\tilde\vv^\pm(\tilde t)\|_{L^2(\Om^\pm_{f_0})} \le C(L_1)\|\widetilde \vV^\pm\|_{L^1([0,T];L^2(\Om^\pm_{f_0}))}. \end{align*} Hence, the operator $L^*$ is a bijection from $D$ to $L^*(D)$. Let $N_0$ be its inverse.
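To see that the Hahn--Banach extension below is applicable, note that the above estimate makes $\widetilde\vV^\pm\mapsto\int^T_0\int_{\Om^\pm_{f_0}}\tilde\vg^\pm\cdot\tilde\vv^\pm d\tilde xd\tilde t$ a bounded linear functional on $L^*(D)$: for $\tilde\vg^\pm\in L^1([0,T];L^2(\Om^\pm_{f_0}))$,
\begin{align*}
\Big|\int^T_0\int_{\Om^\pm_{f_0}}\tilde\vg^\pm\cdot\tilde\vv^\pm d\tilde xd\tilde t\Big|
\le\|\tilde\vg^\pm\|_{L^1([0,T];L^2(\Om^\pm_{f_0}))}\sup_{\tilde t\in[0,T]}\|\tilde\vv^\pm(\tilde t)\|_{L^2(\Om^\pm_{f_0})}
\le C(L_1)\|\tilde\vg^\pm\|_{L^1([0,T];L^2(\Om^\pm_{f_0}))}\|\widetilde\vV^\pm\|_{L^1([0,T];L^2(\Om^\pm_{f_0}))}.
\end{align*}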
By the Hahn--Banach theorem, we can extend $N_0$ (denoting its extension by $N$) to the space $L^1([0,T];L^2(\Om^\pm_{f_0}))$: \begin{align*} N:L^1([0,T];L^2(\Om^\pm_{f_0}))\to C([0,T];L^2(\Om^\pm_{f_0})),\qquad \widetilde\vV^\pm\to\tilde \vv^\pm. \end{align*} We denote by $N^*$ the dual of $N$: \begin{align*} N^*:\mathcal M([0,T];L^2(\Om^\pm_{f_0}))\to L^\infty([0,T];L^2(\Om^\pm_{f_0})),\quad \tilde\vg^\pm\to\widetilde\vW^\pm. \end{align*} Then for $\tilde\vg^\pm\in L^1([0,T];L^2(\Om^\pm_{f_0}))$, $ \widetilde\vW^\pm=N^*(\tilde\vg^\pm)$ satisfies \eqref{eq:linear-integral} and \begin{align*} \|\widetilde\vW^\pm\|_{L^\infty([0,T];L^2(\Om^\pm_{f_0}))}\le C(L_1)\|\tilde\vg^\pm\|_{L^1([0,T];L^2(\Om^\pm_{f_0}))}. \end{align*} This proves the existence of the solution. The regularity of the solution could be proved by using the standard difference quotient method. The uniqueness is obvious. \end{proof} Now we consider the system (\ref{eq.linear}) with nonzero initial data \begin{align*} \vom^\pm(0,x)=\vom^\pm_0,\quad \vG^\pm_j(0,x)=\vG^\pm_{j,0}, \end{align*} where $(\vom^\pm_0,\vG^\pm_{j,0})\in H^{s-1}(\Omega^\pm_{f_0})\times H^{s-1}(\Omega^\pm_{f_0})$. Let $\widehat\vW^\pm=\vW^\pm-(\vom^\pm_0,\vG^\pm_{j,0})$. Then the problem is reduced to the case of zero initial data with $\vg^\pm$ replaced by $\vg^\pm-M(\vom^\pm_0,\vG^\pm_{j,0})$. From Lemma \ref{lem:ex}, we know that the solution $\widehat\vW^\pm$ exists but with a loss of regularity. To recover the desired regularity, we may first mollify the initial data, and then use the following uniform estimate for smooth solutions: \begin{align*} \sup_{t\in[0,T]}\Big(&\|\vom^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}+\sum^3_{j=1}\|\vG_j^\pm(t)\|^2_{H^{s-1}(\Omega^\pm_f)}\Big)\\ &\le C(L_1,T)\Big(\|\vg^\pm\|_{L^2([0,T];H^{s-1}(\Om^\pm_f))}+\|\vom_0^\pm\|^2_{H^{s-1}(\Omega^\pm_f)}+\sum^3_{j=1}\|\vG_{j,0}^\pm\|^2_{H^{s-1}(\Omega^\pm_f)}\Big). \end{align*} Thus, we can conclude the following proposition.
\begin{proposition}\label{prop:ex} Assume that $f,\vu^\pm,\vF^\pm$ satisfy (\ref{ass:regularity})-(\ref{ass:stability}). Given the initial data $(\bar\vom_0^\pm,\bar \vG_{j,0}^\pm)\in H^{s-1}(\Omega^\pm_{f_0})\times H^{s-1}(\Omega^\pm_{f_0})$, there exists a unique solution $(\bar \vom^\pm, \bar \vG^\pm_j)\in C([0,T];H^{s-1}(\Omega^\pm_f)\times H^{s-1}(\Omega^\pm_f))$ to the system (\ref{eq:vorticity-w-L}) satisfying the estimate (\ref{eq:linearwg}). \end{proposition} For the solutions to (\ref{eq:vorticity-w-L}), we also have \begin{lemma} It holds that \begin{align*} \frac{d}{dt}\int_{\Gamma^\pm}\bar\om^\pm_3dx'=0,\qquad\frac{d}{dt}\int_{\Gamma^\pm}\bar G_{3j}^\pm dx'=0. \end{align*} \end{lemma} \begin{proof}These are direct consequences of (\ref{eq:vorticity-w-L}) and (\ref{ass:boun}). From the fact that $\pa_i u^\pm_3=\pa_i F^\pm_{3j}=0$ ($i=1,2$) on $\Gamma^\pm$, we have \begin{align*} \frac{d}{dt}\int_{\Gamma^+}\bar\om^+_3dx'=&\int_{\Gamma^+}(-u^+_1\pa_1\bar\om^+_3-u^+_2\pa_2\bar\om^+_3+\bar\om^+_3\pa_3u^+_3)dx'\\ &+\int_{\Gamma^+}\sum^3_{j=1}(F^+_{1j}\pa_1\bar G^+_{3j}+F^+_{2j}\pa_2\bar G^+_{3j}-\bar G^+_{3j}\pa_3F^+_{3j})dx'\\ =&\int_{\Gamma^+}(\pa_1 u^+_1+\pa_2 u^+_2+\pa_3 u^+_3)\bar\om^+_3dx'\\ &-\int_{\Gamma^+}\sum^3_{j=1}(\pa_1 F^+_{1j}+\pa_2 F^+_{2j}+\pa_3 F^+_{3j})\bar G^+_{3j}dx'\\ =&0. \end{align*} Similarly, it holds that \begin{align*} \frac{d}{dt}\int_{\Gamma^+}\bar G^+_{3j}dx'=&-2\int_{\Gamma^+}\sum_i(\pa_1u_i^+\pa_2F^+_{ij}-\pa_2u^+_i\pa_1F^+_{ij})dx'\\ =&2\int_{\Gamma^+}\sum_i(u_i^+\pa_1\pa_2F^+_{ij}-u^+_i\pa_2\pa_1F^+_{ij})dx'\\ =&0. \end{align*} The proof for $\bar\om^-_3$, $\bar G^-_{3j}$ is similar. \end{proof} \section{Construction and contraction of the iteration map} We assume that \begin{align*} f_0\in H^{s+\f12}(\mathbb{T}^2),\quad \vu_0^\pm,\,\vF_0^\pm\in H^{s}(\Omega_{f_0}^\pm). \end{align*} In addition, we assume that there exists $c_0>0$ such that \begin{itemize} \item[1.] $-(1-2c_0)\le f_0(x')\le (1-2c_0)$; \item[2.]
$\Lambda(\vF_0^\pm,\vv)\ge 2c_0$. \end{itemize} Let $f_*=f_0$, and $\Omega_*^\pm=\Omega_{f_0}^\pm$ be the reference region. We take the initial data $(f_I,(\partial_tf)_I,\vom_{*I}^\pm,$ $\vG_{*I}^\pm,\beta_{Ii}^\pm,\gamma_{Iij}^\pm)$ for the equivalent system as follows \begin{align*} &f_I=f_0,\quad (\partial_tf)_I=\vu_0^\pm(x',f_0(x'))\cdot(-\partial_1f_0,-\partial_2f_0,1),\\ &\vom_{*I}^\pm=\mathop{\rm curl}\nolimits\vu_0^\pm,\quad \vG_{*I}^\pm=\mathop{\rm curl}\nolimits\vF_0^\pm,\\ &\beta_{Ii}^\pm=\int_{\BT}u_{0i}^\pm(x',\pm1)dx',\quad \gamma_{Iij}^\pm=\int_{\BT}F_{0ij}^\pm(x',\pm1)dx', \end{align*} which satisfy \begin{align} \|f_I\|_{H^{s+\f12}}+\|(\vom_{*I}^\pm, \vG_{*I}^\pm)\|_{H^{s-1}(\Omega_*^\pm)} +\|(\partial_tf)_I\|_{H^{s-\f12}}+|\beta^\pm_{Ii}|+|\gamma^\pm_{Iij}| \le M_0 \end{align} for some $M_0>0$. Then we define the following functional space. \begin{definition}\label{def:X} Given two constants $M_1, M_2>0$ with $M_1>2M_0$, we define the space $\mathcal{X}=\mathcal{X}(T, M_1, M_2)$ to be the collection of $(f, \vom_*^\pm, \vG_*^\pm, \beta^\pm_{i},\gamma^\pm_{ij})$ which satisfy \begin{align*} &\left(f(0),\partial_tf(0), \vom_*^\pm(0), \vG_*^\pm(0),\beta^\pm_{i}(0),\gamma^\pm_{ij}(0)\right)=\big(f_I, (\partial_t f)_I, \vom_{*I}^\pm, \vG_{*I}^\pm, \beta^\pm_{Ii},\gamma^\pm_{Iij}\big),\\ & \sup_{t\in[0,T]} \|f(t,\cdot)-f_*\|_{H^{s-\f12}} \le \delta_0,\\ &\sup_{t\in[0,T]}\Big(\|f(t)\|_{H^{s+\f12}}+\|\partial_tf(t)\|_{H^{s-\f12}} +\|(\vom_*^\pm, \vG_*^\pm)(t)\|_{H^{s-1}(\Omega_*^\pm)}+|\beta^\pm_{i}(t)|+|\gamma^\pm_{ij}(t)|\Big)\le M_1,\\ &\sup_{t\in[0,T]}\Big(\|\partial_{t}^2f\|_{H^{s-\f32}} +\|(\partial_t\vom_*, \partial_t\vG_*)\|_{H^{s-2}(\Omega_*^\pm)}+|\partial_t\beta^\pm_{i}|+|\partial_t\gamma^\pm_{ij}|\Big)\le M_2, \end{align*} together with the condition $\int_{\mathbb{T}^2}\pa_tf(t,x')dx'=0$.
\end{definition} The main goal of this section is to construct an iteration map $(\bar{f},\bar{\vom}_*^\pm, \bar{\vG}_*^\pm,\bar{\beta}^\pm_{i},$ $\bar{\gamma}^\pm_{ij})=\mathcal{F}\big(f,\vom_*^\pm, \vG_*^\pm, \beta^\pm_{i},\gamma^\pm_{ij}\big) \in\mathcal{X}(T, M_1, M_2)$ for given $(f,\vom_*^\pm, \vG_*^\pm,\beta^\pm_{i},\gamma^\pm_{ij})\in \mathcal{X} (T, M_1, M_2)$ with suitably chosen constants $M_1, M_2$ and $T$. In addition, we will show that the map $\mathcal{F}$ is a contraction in $\mathcal{X}(T, M_1, M_2)$ for some suitably chosen $T, M_1, M_2$. \subsection{Recovering the bulk region, velocity and deformation tensor field} Recall \begin{align*} \Om_f^{+}=\big\{ x \in \Om| x_3 > f (t,x')\big\}, \quad\Om_f^{-}=\big\{ x \in \Om| x_3 < f (t,x')\big\}, \end{align*} and the harmonic coordinate map $\Phi_f^\pm:\Omega_*^\pm\to\Omega^\pm_f$. Define \begin{align*} \tilde{\vom}^\pm\triangleq P_{f}^{\mbox{div}} (\vom _*^\pm\circ\Phi_{f }^{-1}),\quad \tilde{\vG}^\pm\triangleq P_{f}^{\mbox{div}} (\vG _*^\pm\circ\Phi_{f }^{-1}), \end{align*} where $P_{f }^{\mbox{div}}$ is a projection operator which maps a vector field on $\Omega_{f}^\pm$ to its divergence-free part. More precisely, $P_{f }^{\mbox{div}}\vom^\pm=\vom^\pm-\nabla\phi^\pm$ with \begin{equation}\nonumber \left\{ \begin{array}{ll} \Delta\phi^\pm=\mbox{div}\vom^\pm\quad&\text{in}\quad \Omega_f^\pm,\\ \partial_3\phi^\pm=0\quad&\text{on}\quad \Gamma^\pm,\\ \phi^\pm=0\quad&\text{on}\quad \Gamma_f. \end{array} \right. \end{equation} Obviously, we have $\mbox{div} P_{f }^{\mbox{div}}\vom^\pm=0$ in $\Omega_f^\pm$, and $\ve_3\cdot P_{f}^{\mbox{div}}\vom^\pm=\om_3^\pm$ on $\Gamma^\pm$. Thus, $P_{f }^{\mbox{div}}\vom^\pm$ satisfies conditions (C1) and (C2) on $\Omega_f^\pm$. By the same arguments, so does $P_{f }^{\mbox{div}}\vG^\pm$.
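The projection $P_f^{\mbox{div}}$ above is defined through a mixed boundary-value problem; on the fully periodic torus (a simplification made only for this illustration) it reduces to the classical Leray projection, which is diagonal in Fourier variables. A minimal sketch with a hand-rolled discrete Fourier transform:

```python
import cmath, math

# Leray projection on T^2 (periodic stand-in for P_f^div):
# (P w)^(k) = w^(k) - k (k . w^(k)) / |k|^2 for k != 0.
N = 8
xs = [2 * math.pi * j / N for j in range(N)]

def dft2(f):
    return [[sum(f[j][l] * cmath.exp(-1j * (k1 * xs[j] + k2 * xs[l]))
                 for j in range(N) for l in range(N)) / N**2
             for k2 in range(N)] for k1 in range(N)]

def idft2(F):
    return [[sum(F[k1][k2] * cmath.exp(1j * (k1 * xs[j] + k2 * xs[l]))
                 for k1 in range(N) for k2 in range(N)).real
             for l in range(N)] for j in range(N)]

def freq(k):  # signed frequency associated with DFT index k
    return k if k < N // 2 else k - N

# divergence-free u plus the gradient of phi = sin(x) cos(y)
u1 = [[math.sin(x) * math.cos(y) for y in xs] for x in xs]
u2 = [[-math.cos(x) * math.sin(y) for y in xs] for x in xs]
w1 = [[u1[j][l] + math.cos(xs[j]) * math.cos(xs[l]) for l in range(N)] for j in range(N)]
w2 = [[u2[j][l] - math.sin(xs[j]) * math.sin(xs[l]) for l in range(N)] for j in range(N)]

W1, W2 = dft2(w1), dft2(w2)
for k1 in range(N):
    for k2 in range(N):
        a, b = freq(k1), freq(k2)
        if (a, b) != (0, 0):
            d = (a * W1[k1][k2] + b * W2[k1][k2]) / (a * a + b * b)
            W1[k1][k2] -= a * d
            W2[k1][k2] -= b * d
p1, p2 = idft2(W1), idft2(W2)

# the projection recovers the divergence-free part u exactly (spectrally)
err = max(abs(p1[j][l] - u1[j][l]) + abs(p2[j][l] - u2[j][l])
          for j in range(N) for l in range(N))
print("projection error:", err)
```

The boundary conditions of the actual operator (Neumann on $\Gamma^\pm$, Dirichlet on $\Gamma_f$) are not reproduced here; only the algebraic mechanism of removing the gradient part is.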
Moreover, we have \begin{align} &\|(\tilde{\vom}^\pm, \tilde{\vG}^\pm)\|_{H^{s-1}(\Omega_f^\pm)}\le C(M_1),\label{eq:wh-est1}\\ &\|(\pa_t\tilde{\vom}^\pm, \pa_t\tilde{\vG}^\pm)\|_{H^{s-2}(\Omega_f^\pm)}\le C\big(M_1, M_2\big).\label{eq:wh-est2} \end{align} Then we define $\vu^\pm$ and $\vF^\pm$ as the solution of the following system \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits\vu^\pm=\tilde{\vom}^\pm,\quad\mbox{div}\vu^\pm =0\quad &\text{in}\quad\Om_f^{\pm},\\ \vu^\pm\cdot\vN_f=\partial_tf\quad&\text{on}\quad\Gamma_{f}, \\ \vu^\pm\cdot\ve_3 = 0,\quad\int_{\Gamma^\pm}u_i^\pm dx'=\beta^\pm_i~(i=1,2)\quad&\text{on}\quad\Gamma^{\pm}, \end{array} \right. \end{equation} and \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits \vF_j^\pm=\tilde{\vG}_j^\pm,\quad\mbox{div}\vF_j^\pm=0\quad&\text{in}\quad\Om_f^{\pm},\\ \vF_j^\pm\cdot\vN_f = 0\quad &\text{on}\quad\Gamma_{f}, \\ \vF_j^\pm\cdot\ve_3 = 0, \quad \int_{\Gamma^\pm} F_{ij} dx'=\gamma^\pm_{ij}~(i=1,2)\quad&\text{on}\quad\Gamma^{\pm}. \end{array} \right. \end{equation} From Proposition \ref{prop:div-curl} and (\ref{eq:wh-est1}), we deduce that \begin{align} \|\vu^\pm\|_{H^{s}(\Omega_f^\pm)}\le &C(M_1)\big(\|\tilde{\vom}^\pm\|_{H^{s-1}(\Omega_f^\pm)}+\|\partial_tf \|_{H^{s-\f12}}+|\beta^\pm_1|+|\beta^\pm_2|\big)\le C(M_1),\\ \|\vF_j^\pm\|_{H^{s}(\Omega_f^\pm)}\le &C(M_1)\big(\|\tilde{\vG}_j^\pm\|_{H^{s-1}(\Omega_f^\pm)}+|\gamma^\pm_{1j}|+|\gamma^\pm_{2j}|\big)\le C(M_1). \end{align} Moreover, there holds \begin{align*} \vu^\pm(0)=\vu_0^\pm,\quad \vF^\pm(0)=\vF_0^\pm.
\end{align*} From the fact that \begin{align*} \partial_t(\vu^\pm\cdot\vN_f)=\pa_t\vu^\pm\cdot\vN_f+\vu^\pm\cdot\partial_t\vN_f =(\partial_t\vu^\pm+\partial_3\vu^\pm\partial_tf)\cdot\vN_f+\vu^\pm\cdot\partial_t\vN_f \end{align*} on $\Gamma_f$, one can easily deduce that $\partial_t\vu^\pm$ satisfies \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits\partial_t\vu^\pm=\partial_t\tilde{\vom}^\pm,\quad\mbox{div}\partial_t\vu^\pm=0\quad&\text{in}\quad\Om_f^{\pm},\\ \partial_t\vu^\pm\cdot\vN_f=\partial_{t}^2f-\partial_tf\partial_3\vu^\pm \cdot\vN_f+u_1^\pm\partial_1\partial_tf+u_2^\pm\partial_2\partial_tf \quad&\text{on}\quad\Gamma_{f}, \\ \partial_t\vu^\pm\cdot\ve_3=0, \quad \int_{\Gamma^\pm}\partial_tu_i^\pm dx'=\partial_t\beta^\pm_i~(i=1,2)\quad&\text{on}\quad\Gamma^{\pm}. \end{array} \right. \end{equation} By Proposition \ref{prop:div-curl} again and (\ref{eq:wh-est2}), we get \begin{align*} \|\partial_t\vu^\pm\|_{H^{s-1}(\Omega_f^\pm)}\le {C}(M_1, M_2), \end{align*} which implies \begin{align}\nonumber \|\vu^\pm(t)\|_{L^\infty(\Gamma_f)}\le& \|\vu^\pm_0\|_{L^\infty(\Gamma_{f_0})}+\int_0^t\|\partial_t\vu^\pm\|_{L^\infty(\Gamma_f)}dt\\ \le& \frac{M_0}{2}+T{C}(M_1,M_2).\nonumber \end{align} Applying similar arguments, we can show that \begin{align*} &\|\partial_t\vF^\pm(t)\|_{H^{s-1}(\Omega_f^\pm)}\le {C}(M_1, M_2),\\ &\|\vF^\pm(t)\|_{L^\infty(\Gamma_f)}\le \frac{M_0}2+T{C}(M_1,M_2).\label{h-infty} \end{align*} Moreover, we have \begin{align*} &\|f(t)-f_0\|_{L^\infty}\le \|f(t)-f_0\|_{H^{s-\f12}}\le T\|\partial_tf\|_{H^{s-\f12}}\le TM_1, \\ &|\Lambda(\vF^\pm,\vv)-\Lambda(\vF_0^\pm,\vv_0)|\le TC\big(\|\partial_t\vu^\pm\|_{L^\infty(\Gamma_f)}, \|\partial_t\vF^\pm\|_{L^\infty(\Gamma_f)}\big)\le TC(M_1,M_2). \end{align*} Choose $T$ small enough such that \begin{align*} TM_1\le \min\{\delta_0,c_0\},\quad TC(M_1)+T{C}(M_1,M_2)\le \frac {M_0}2,\quad TC(M_1, M_2)\le c_0, \end{align*} and set $L_0=M_0$, $L_1=M_1$, $L_2={C}(M_1, M_2)$.
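The div-curl systems used above to recover $\vu^\pm$ and $\vF_j^\pm$ from the projected vorticities have a transparent periodic analogue: on $\mathbb{T}^2$, a mean-zero divergence-free field is recovered from its scalar curl through a stream function. A hedged sketch (the flat periodic setting and the test field are assumptions of this illustration only):

```python
import cmath, math

# Given omega = d_x(u2) - d_y(u1) on T^2, recover the mean-zero,
# divergence-free u via Delta(psi) = -omega, u = (d_y(psi), -d_x(psi)).
N = 8
xs = [2 * math.pi * j / N for j in range(N)]

def freq(k):
    return k if k < N // 2 else k - N

def dft2(f):
    return [[sum(f[j][l] * cmath.exp(-1j * (k1 * xs[j] + k2 * xs[l]))
                 for j in range(N) for l in range(N)) / N**2
             for k2 in range(N)] for k1 in range(N)]

def idft2(F):
    return [[sum(F[k1][k2] * cmath.exp(1j * (k1 * xs[j] + k2 * xs[l]))
                 for k1 in range(N) for k2 in range(N)).real
             for l in range(N)] for j in range(N)]

# target field u = (sin x cos y, -cos x sin y); its curl is 2 sin x sin y
u1 = [[math.sin(x) * math.cos(y) for y in xs] for x in xs]
u2 = [[-math.cos(x) * math.sin(y) for y in xs] for x in xs]
om = [[2 * math.sin(x) * math.sin(y) for y in xs] for x in xs]

OM = dft2(om)
U1 = [[0j] * N for _ in range(N)]
U2 = [[0j] * N for _ in range(N)]
for k1 in range(N):
    for k2 in range(N):
        a, b = freq(k1), freq(k2)
        if (a, b) != (0, 0):
            psi = OM[k1][k2] / (a * a + b * b)  # hat(psi) = hat(omega)/|k|^2
            U1[k1][k2] = 1j * b * psi           # d_y(psi)
            U2[k1][k2] = -1j * a * psi          # -d_x(psi)
r1, r2 = idft2(U1), idft2(U2)

err = max(abs(r1[j][l] - u1[j][l]) + abs(r2[j][l] - u2[j][l])
          for j in range(N) for l in range(N))
print("div-curl recovery error:", err)
```

In the paper the boundary data $\partial_tf$, $\beta_i^\pm$, $\gamma_{ij}^\pm$ fix the solution instead of the mean-zero normalization used here.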
Then we can obtain that for any $t\in [0,T]$: \begin{itemize} \item $-(1-c_0)\le f(t,x')\le (1-c_0)$; \item $\Lambda(\vF^\pm,\vv)(t)\ge c_0$; \item $\|(\vu^\pm, \vF^\pm)(t)\|_{L^{\infty}(\Gamma_f)}\le L_0$; \item $\|f(t)-f_*\|_{H^{s-\f12}}\le \delta_0$; \item $\|f(t)\|_{H^{s+\f12}}+\|\pa_tf(t)\|_{H^{s-\f12}}+\|\vu^\pm(t)\|_{H^{s}(\Omega_f^\pm)} +\|\vF^\pm(t)\|_{H^{s}(\Omega_f^\pm)}\le L_1$; \item $\|(\partial_t\vu^\pm, \partial_t\vF^\pm)(t)\|_{L^{\infty}(\Gamma_f)}\le L_2$. \end{itemize} \subsection{Define the iteration map} Given $(f,\vu^\pm,\vF^\pm)$ and define initial data as follows: \begin{align*} &\left(\bar f_1(0),\bar\theta(0), \bar\vom^\pm(0) , \bar\vG^\pm(0)\right) =\big(f_0,(\partial_tf)_I, \vom_{*I}^\pm, \vj_{*I}^\pm\big). \end{align*} We can solve $\bar f_1$ and $(\bar\vom^\pm, \bar\vG^\pm)$ by the linearized system (\ref{sys:linear-H}) and (\ref{eq:vorticity-w-L}). We define \begin{align*} &\bar\vom_{*}^\pm=\bar\vom^\pm\circ \Phi_{f}^\pm,\quad \bar\vG_{*}^\pm=\bar\vG^\pm\circ\Phi_{f}^\pm,\\ &\bar\b^\pm_i(t)=\b^\pm_i(0)-\int^t_0\int_{\Gamma^\pm}u_s^\pm\pa_s u^\pm_i-\sum_{j=1}^3F^\pm_{sj}\pa_s F^\pm_{ij}dx'd\tau,\\ &\bar\gamma^\pm_{ij}(t)=\gamma^\pm_{ij}(0)-\int^t_0\int_{\Gamma^\pm}u^\pm_s\pa_s F^\pm_{ij}-F^\pm_{sj}\pa_s u_i^\pm dx'd\tau. \end{align*} Then we have the iteration map $\mathcal{F}$ as follows \begin{equation} \mathcal{F}\big(f,\vom_*^\pm, \vG_*^\pm, \beta^\pm_{i},\gamma^\pm_{ij}\big) \buildrel\hbox{\footnotesize def}\over = \big(\bar{f},\bar{\vom}_*^\pm, \bar{\vG}_*^\pm,\bar{\beta}^\pm_{i}, \bar{\gamma}^\pm_{ij}\big). \end{equation} To ensure $\langle \bar f\rangle=\langle f_0\rangle$ and $\int_{\mathbb{T}^2}\pa_t \bar f(t,x')dx'=0$ for $t\in [0,T]$, $\bar f$ in the above equation is given by \begin{equation} \bar f(t,x')=\bar f_1(t,x')-\langle \bar f_1\rangle+\langle f_0\rangle. 
\end{equation} \begin{proposition}\label{prop:iteration map} There exist $M_1, M_2, T>0$ depending on $c_0, \delta_0, M_0$ so that $\mathcal{F}$ is a map from $\mathcal{X}(T, M_1,M_2)$ to itself. \end{proposition} \begin{proof} We know that the initial conditions are automatically satisfied according to Definition \ref{def:X}. From Proposition \ref{prop:f-L} and Proposition \ref{prop:vorticity}, we have \begin{align*}\nonumber &\sup_{t\in[0,T]}\left( \|\bar{f}(t)\|_{H^{s+\f12}}+\|\partial_t\bar{f}(t)\|_{H^{s-\f12}}+\|\bar\vom_*^\pm(t)\|_{H^{s-1} (\Omega_*^\pm)}+\|\bar\vG_*^\pm(t)\|_{H^{s-1}(\Omega_*^\pm)}\right)\\ &\le C(c_0,M_0)e^{C(M_1,M_2)T}. \end{align*} From the equations (\ref{sys:linear-H}) and (\ref{eq:vorticity-w-L}), we deduce that \begin{equation} \sup_{t\in[0,T]}\Big(\|\partial_{t}^2\bar f\|_{H^{s-\f32}} +\|(\partial_t\bar\vom_*^\pm, \partial_t\bar\vG_*^\pm)\|_{H^{s-2}(\Omega_*^\pm)}\Big)\le C(M_1). \end{equation} Obviously, we have \begin{align*} &|\bar\beta^\pm_i(t)|+|\bar\gamma^\pm_{ij}(t)|\le M_0+TC(M_1),\\ &|\partial_t\bar\beta^\pm_i(t)|+|\partial_t\bar\gamma^\pm_{ij}(t)|\le C(M_1),\\ &\|\bar f(t)-f_*\|_{H^{s-\f12}}\le \int_0^t\|\pa_t \bar f(\tau)\|_{H^{s-\f12}}d\tau. \end{align*} We first take $M_2=C(M_1)$ and then take $M_1$ large enough so that \begin{align} C(c_0,M_0)<M_1/2. \end{align} Next, we take $T$ small enough, depending only on $c_0, \delta_0, M_0$, so that all other conditions in Definition \ref{def:X} are satisfied. \end{proof} \subsection{Contraction of the iteration map} Now we prove the contraction of the iteration map $\mathcal{F}$.
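The smallness condition on $T$ arranged in this section plays exactly the role of a Lipschitz constant bounded by $\frac12$ in the Banach fixed-point theorem: two orbits of the iteration contract toward the same fixed point. A toy numerical illustration with a hypothetical two-dimensional map (not the actual $\mathcal{F}$):

```python
import math

def F(p):
    # a contraction: each Jacobian entry is bounded by 0.25 < 1/2
    x, y = p
    return (0.25 * math.cos(y) + 0.1, 0.25 * math.sin(x) - 0.2)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def iterate(p, n=60):
    for _ in range(n):
        p = F(p)
    return p

a = iterate((5.0, -3.0))
b = iterate((-2.0, 7.0))
gap = dist(a, b)     # orbits from far-apart starting points coincide
res = dist(a, F(a))  # and the common limit is (numerically) a fixed point
print("gap between orbits:", gap, "  residual:", res)
```

Geometric contraction with ratio $\le\frac12$ per step makes sixty iterations far more than enough for machine precision, mirroring the uniqueness claim behind Proposition \ref{prop:contraction}.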
Let $\big(f^A, \vom_*^{\pm A}, \vG_*^{\pm A},\beta^{\pm A}_{i}, \gamma^{\pm A}_{ij}\big), \big(f^B$, $\vom_*^{\pm B}, \vG_*^{\pm B}, \beta^{\pm B}_{i}, \gamma^{\pm B}_{ij}\big)\in \mathcal{X}(T, M_1,M_2)$, and $\big(\bar f^C,\bar \vom_*^{\pm C}, \bar\vG_*^{\pm C}, \bar\beta^{\pm C}_{i},\bar\gamma^{\pm C}_{ij}\big)=\mathcal{F} \big(f^C$, $\vom_*^{\pm C}, \vG_*^{\pm C},\beta^{\pm C}_{i}, \gamma^{\pm C}_{ij}\big)$ for $C=A,B$. In addition, we use $g^D$ to denote the difference $g^A-g^B$. For instance, $f^D={f}^A-{f}^B, \vom_*^{\pm D}=\vom_*^{\pm A}-\vom_*^{\pm B}$. \begin{proposition}\label{prop:contraction} There exists $T>0$ depending on $c_0, \delta_0, M_0$ so that \begin{align}\nonumber \bar E^D\triangleq&~\sup_{t\in[0,T]}\Big(\|\bar{f}^D(t)\|_{H^{s-\f12}}+\|\partial_t\bar{f}^D(t)\|_{H^{s-\f32}}+\|\bar\vom_*^{\pm D}(t)\|_{H^{s-2}(\Omega_*^\pm)} \\&\qquad+\|\bar\vG_*^{\pm D}(t)\|_{H^{s-2}(\Omega_*^\pm)}+|\bar\beta^{\pm D}_{i}(t)|+|\bar\gamma^{\pm D}_{ij}(t)|\Big)\nonumber\\\nonumber \le&~\frac12\sup_{t\in[0,T]}\Big( \|{f}^D(t)\|_{H^{s-\f12}}+\|\partial_t{f}^D(t)\|_{H^{s-\f32}} +\|\vom_*^{\pm D}(t)\|_{H^{s-2}(\Omega_*^\pm)}\\&\qquad\qquad+\|\vG_*^{\pm D}(t)\|_{H^{s-2}(\Omega_*^\pm)} +|\beta^{\pm D}_{i}(t)|+|\gamma^{\pm D}_{ij}(t)|\Big)\triangleq E^D.\nonumber \end{align} \end{proposition} \begin{proof} First, we have the following elliptic estimate \begin{align*} \|\Phi_{f^A}^\pm-\Phi_{f^B}^\pm\|_{H^{s-1}(\Om_*^\pm)}\le C(M_1)\|f^A-f^B\|_{H^{s-\f12}}\le CE^D. \end{align*} We cannot estimate the difference between $\vu^A$ and $\vu^B$ directly, since they are defined on different regions. To this end, we introduce for $C=A,B$, \begin{align*} \vu^{\pm C}_*=\vu^{\pm C}\circ\Phi_{f^C}^{\pm},\quad \vF_{j*}^{\pm C}=\vF_j^{\pm C}\circ\Phi_{f^C}^{\pm}. \end{align*} Now we show that \begin{align}\label{eq:uh-d} \|\vu^{\pm D}_*\|_{H^{s-1}(\Om^\pm_*)}+\|\vF_{j*}^{\pm D}\|_{H^{s-1}(\Om^\pm_*)}\le CE^D.
\end{align} We introduce \begin{align*} &\mathop{\rm curl}\nolimits_C \vv_*^\pm=\big(\mathop{\rm curl}\nolimits (\vv_*^\pm\circ(\Phi^{\pm}_{f^C})^{-1})\big) \circ\Phi_{f^C}^\pm,\\ &\mbox{div}_C \vv_*^\pm =\big(\mbox{div}(\vv_*^\pm\circ(\Phi^{\pm}_{f^C})^{-1})\big) \circ\Phi_{f^C}^\pm, \end{align*} for a vector field $\vv_*^\pm$ defined on $\Omega_*^\pm$. Then it holds for $C=A,B$ that \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits_C \vu^{\pm C}_*=\tilde{\vom}^{\pm C}_*\quad&\text{in}\quad\Om^{\pm}_*,\\ \mbox{div}_C \vu_*^{\pm C}=0\quad&\text{in}\quad\Om^{\pm}_*,\\ \vu^{\pm C}_*\cdot\vN_{f^C}=\partial_tf^C\quad&\text{on}\quad\Gamma_{*},\\ \vu^{\pm C}\cdot\ve_3 = 0,\quad \int_{\Gamma^\pm} u_i^{\pm C}dx'=\beta_i^{\pm C}\quad&\text{on}\quad\Gamma^{\pm}. \end{array} \right. \end{equation} Thus, we can deduce \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits_A\vu^{\pm D}_*=\tilde{\vom}^{\pm D}_*+(\mathop{\rm curl}\nolimits_B-\mathop{\rm curl}\nolimits_A)\vu^{\pm B}_*\quad&\text{in}\quad \Om^{\pm}_*,\\ \mbox{div}_A\vu^{\pm D}_*=(\mbox{div}_B-\mbox{div}_A)\vu^{\pm B}_*\quad&\text{in}\quad\Om^{\pm}_*,\\ \vu^{\pm D}_*\cdot\vN_{f^A}=\partial_tf^D+\vu^{\pm B}_*\cdot(\vN_{f^B}-\vN_{f^A})\quad&\text{on}\quad\Gamma_{*}, \\ \vu^{\pm D}_*\cdot\ve_3=0,\quad \int_{\Gamma^\pm}u_i^{\pm D}dx'=\beta_i^{\pm D}\quad &\text{on}\quad\Gamma^{\pm}. \end{array} \right. \end{equation} It is direct to obtain \begin{align*} \|(\mathop{\rm curl}\nolimits_B-\mathop{\rm curl}\nolimits_A)\vu^{\pm B}_*\|_{H^{s-2}(\Omega_*^\pm)}\le&~ C\|\Phi_{f^A}^\pm-\Phi_{f^B}^\pm\|_{H^{s-1}(\Omega_*^\pm)}\\ \le& C\|f^D\|_{H^{s-\f12}}\le CE^D, \end{align*} and similarly, \begin{align*} &\|(\mbox{div}_B-\mbox{div}_A)\vu^{\pm B}_*\|_{H^{s-2}(\Omega_*^\pm)}\le CE^D,\\ &\|\vu^{\pm B}_*\cdot(\vN_{f^B}-\vN_{f^A})\|_{H^{s-\f32}}\le CE^D.
\end{align*} Then applying Proposition \ref{prop:div-curl} yields that \begin{align}\nonumber \|\vu^{\pm D}_*\|_{H^{s-1}(\Om_*^\pm)}\le C\left(\|\tilde{\vom}^{\pm D}_*\|_{H^{s-2}(\Omega_*^\pm)} +\|\partial_tf^D\|_{H^{s-1}}+E^D\right)\le CE^D. \end{align} Similarly, we have \begin{align}\nonumber \|\vF^{\pm D}_*\|_{H^{s-1}(\Om_*^\pm)}\le& CE^D. \end{align} Recall that \begin{align*} \partial_t\bar{f}_1^D=&~\bar\theta^D,\\\nonumber \partial_t\bar{\theta}^D=&-\frac{2}{\r^++\r^-} \left((\r^+\underline{u} ^{+A}_1+\r^-\underline{u} ^{-A}_1)\partial_1\bar\theta^D+(\r^+\underline{u} ^{+A}_2+\r^-\underline{u} ^{-A}_2)\partial_2\bar\theta^D\right)\\ &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^+\underline{u} _s^{+A}\underline{u} ^{+A}_r- \rho^+\sum^3_{j=1}\underline{F} _{sj}^{+A}\underline{F} _{rj}^{+A}\big)\partial_s\partial_r\bar f^D_1\\ &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^-\underline{u} ^{-A}_s\underline{u} ^{-A}_r-\rho^-\sum^3_{j=1} \underline{F} _{sj}^{-A}\underline{F} _{rj}^{-A}\big)\partial_s\partial_r\bar f^D_1+\mathfrak R,\label{linear:cauchy:eq-2} \end{align*} where \begin{align}\nonumber \mathfrak{R}=&-\frac{2}{\r^++\r^-} \left((\r^+\underline{u} ^{+D}_1+\r^-\underline{u} ^{-D}_1)\partial_1\bar\theta^B+(\r^+\underline{u} ^{+D}_2+\r^-\underline{u} ^{-D}_2)\partial_2\bar\theta^B\right)\\ \nonumber &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^+\underline{u} _s^{+A}\underline{u} ^{+A}_r- \rho^+\sum^3_{j=1}\underline{F} _{sj}^{+A}\underline{F} _{rj}^{+A}\big)\partial_s\partial_r\bar f^B_1\\\nonumber &+\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^+\underline{u} _s^{+B}\underline{u} ^{+B}_r- \rho^+\sum^3_{j=1}\underline{F} _{sj}^{+B}\underline{F} _{rj}^{+B}\big)\partial_s\partial_r\bar f^B_1\\\nonumber &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^-\underline{u} ^{-A}_s\underline{u} ^{-A}_r-\rho^-\sum^3_{j=1} \underline{F} _{sj}^{-A}\underline{F} _{rj}^{-A}\big)\partial_s\partial_r\bar f^B_1\\\nonumber &+\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^-\underline{u} 
^{-B}_s\underline{u} ^{-B}_r-\rho^-\sum^3_{j=1} \underline{F} _{sj}^{-B}\underline{F} _{rj}^{-B}\big)\partial_s\partial_r\bar f^B_1\\\nonumber &+\mathfrak{g}^A-\mathfrak{g}^B,\nonumber \end{align} and for $C=A,B$, \begin{align} \nonumber \mathfrak g^C=&\frac{1}{\r^++\r^-}(\mathcal{N}^+_{f^C}-\mathcal{N}^-_{f^C})\widetilde{\mathcal{N}}^{-1}_{f^C}\Big(\sum^2_{s,r=1}\big(\underline{u} _s^{+C}\underline{u} ^{+C}_r-\sum^3_{j=1}\underline{F} ^{+C}_{sj}\underline{F} _{rj}^{+C}\\\nonumber &\quad-\underline{u} ^{-C}_s\underline{u} ^{-C}_r+\sum^3_{j=1}\underline{F} _{sj}^{-C}\underline{F} _{rj}^{-C}\big)\partial_s\partial_r{f^C}\Big)\\\nonumber &+\frac{1}{\r^++\r^-}(\mathcal{N}^+_{f^C}-\mathcal{N}^-_{f^C})\widetilde{\mathcal{N}}^{-1}_{f^C} \big((\underline{u} ^{+C}_1-\underline{u} ^{-C}_1)\partial_1\theta^C+(\underline{u} ^{+C}_2-\underline{u} ^{-C}_2)\partial_2\theta^C\big)\\\nonumber &+\frac{1}{\r^++\r^-}N_{f^C}\cdot\underline{\nabla(\r^+p_{\vu^{+C}, \vu^{+C}}-\rho^+\sum^3_{j=1}p_{\vF^{+C}_j, \vF^{+C}_j})}\\\nonumber &+\frac{1}{\r^++\r^-}N_{f^C}\cdot\underline{\nabla(\r^-p_{\vu^{-C},\vu^{-C}}-\rho^-\sum^3_{j=1}p_{\vF^{-C}_j, \vF^{-C}_j})}\\\nonumber &-\frac{1}{\r^++\r^-}(\mathcal{N}^+_{f^C}-\mathcal{N}^-_{f^C})\widetilde{\mathcal{N}}^{-1}_{f^C} N_{f^C}\cdot\underline{\nabla\big(p_{\vu^{+C}, \vu^{+C}}-\sum^3_{j=1}p_{\vF^{+C}_j, \vF^{+C}_j}-p_{\vu^{-C},\vu^{-C}}+\sum^3_{j=1}p_{\vF^{-C}_j, \vF^{-C}_j}\big)}.\\\nonumber \end{align} Here $\underline{v}^C(x_1,x_2)$ is the trace of $v$ on $\Gamma_{f^C}$, which is interpreted as $v(x_1,x_2, f^C(x_1,x_2))$. Similar to the proof of Lemma \ref{lem:non-g}, we can show that \begin{equation*} \|\mathfrak{R}\|_{H^{s-\f32}}\le CE^D.
\end{equation*} We denote \begin{align}\nonumber \bar F_{s}^D(\partial_t\bar{f}_1^D,\bar{f}_1^D)\triangleq&\big\|(\partial_t+w^A_i\partial_i)\langle \na\rangle^{s-\f32}\bar{f}_1^D\big\|_{L^2}^2 -\big\|v_i^A\partial_i\langle \na\rangle^{s-\f32}\bar{f}_1^D\big\|_{L^2}^2\\ &+\frac{\rho^+}{\r^++\r^-}\sum\limits^3_{j=1}\big\|F^{+A}_{ij}\partial_i\langle \na\rangle^{s-\f32}\bar{f}_1^D\big\|_{L^2}^2 +\frac{\rho^-}{\r^++\r^-}\sum\limits^3_{j=1}\big\|F^{-A}_{ij}\partial_i\langle \na\rangle^{s-\f32}\bar{f}_1^D\big\|_{L^2}^2.\nonumber \end{align} Following the proof of Proposition \ref{prop:f-L}, one can deduce that \begin{align*} \frac{d}{dt}\Big(\bar F_{s}^D(\partial_t\bar{f}_1^D,\bar{f}_1^D)+\|\bar{f}_1^D\|_{L^2}^2+\|\partial_t\bar{f}_1^D\|_{L^2}^2\Big) \le C\big(E^D+\bar E_1^D\big), \end{align*} where \begin{align*} \bar E_1^D&~=\sup_{t\in[0,T]}\Big(\|\bar{f}_1^D(t)\|_{H^{s-\f12}}+\|\partial_t\bar{f}_1^D(t)\|_{H^{s-\f32}}\Big). \end{align*} Recalling the fact that \begin{align*} \|\bar{f}_1^D\|_{H^{s-\f12}}^2+\|\partial_t\bar{f}_1^D\|_{H^{s-\f32}}^2\le C \Big(\bar F_{s}^D(\partial_t\bar{f}_1^D,\bar{f}_1^D)+\|\bar{f}_1^D\|_{L^2}^2 +\|\partial_t\bar{f}_1^D\|_{L^2}^2\Big), \end{align*} we obtain \begin{align*} \sup_{t\in[0,T]}\left(\|\bar{f}_1^D(t)\|_{H^{s-\f12}}+\|\partial_t\bar{f}_1^D (t)\|_{H^{s-\f32}}\right)\le C(e^{CT}-1)E^D, \end{align*} which implies \begin{align}\label{eq:f-d} \sup_{t\in[0,T]}\left(\|\bar{f}^D(t)\|_{H^{s-\f12}}+\|\partial_t\bar{f}^D (t)\|_{H^{s-\f32}}\right)\le C(e^{CT}-1)E^D. \end{align} Similar to the proof of Proposition \ref{prop:vorticity}, it can be verified that \begin{align}\label{eq:wj-d} \sup_{t\in[0,T]}\left(\|\bar\vom_*^D(t)\|_{H^{s-2}(\Omega_*^\pm)}+\|\bar\vG_*^D\|_{H^{s-2}(\Omega_*^\pm)}\right)\le C(e^{CT}-1)E^D.
\end{align} From the equation \begin{align*} \bar\b^{\pm C}_i(t)=\bar\b^{\pm C}_i(0)-\int^t_0\int_{\Gamma^\pm}u_s^{\pm C}\pa_s u^{\pm C}_i-\sum^3_{j=1}F^{\pm C}_{sj}\pa_s F^{\pm C}_{ij}dx'd\tau, \end{align*} we have \begin{align}\label{eq:be-d} |\bar\beta^{\pm D}_i(t)|\le|\beta^{\pm D}_{iI}|+TCE^D. \end{align} Similarly, one can show that \begin{align}\label{eq:ga-d} |\bar\gamma^{\pm D}_{ij}(t)|\le|\gamma^{\pm D}_{ijI}|+TCE^D. \end{align} Thus, thanks to (\ref{eq:uh-d}) and (\ref{eq:f-d})--(\ref{eq:ga-d}), we can conclude that \begin{align*} \bar{E}^D&\le C(e^{CT}-1+T)E^D. \end{align*} Taking $T$ small enough depending on $c_0, \delta_0, M_0$, we complete the proof of the proposition. \end{proof} \subsection{The limit system} It follows from Proposition \ref{prop:iteration map} and Proposition \ref{prop:contraction} that there exists a unique fixed point $(f,\vom_*^\pm, \vG_*^\pm,\beta^\pm_i, \gamma^\pm_{ij})$ of the map $\mathcal{F}$ in $\mathcal{X}(T,M_1,M_2)$. In addition, from the construction of $\mathcal{F}$, we have that $(f,\vom^\pm, \vG^\pm,\beta^\pm_i, \gamma^\pm_{ij}) =(f,\vom_*^\pm\circ\Phi_f^{-1}, \vG_*^\pm\circ\Phi_f^{-1},\beta^\pm_i, \gamma^\pm_{ij})$ satisfies \begin{align} \partial_tf=&\mathcal P\theta\label{eq:limit-theta},\\\nonumber \partial_t\theta=&-\frac{2}{\r^++\r^-} \big((\r^+\underline{u}^+_1+\r^-\underline{u} ^-_1)\partial_1\theta+(\r^+\underline{u} ^+_2+\r^-\underline{u} ^-_2)\partial_2\theta\big)\\\nonumber &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^+\underline{u} _s^+\underline{u} ^+_r- \rho^+\sum^3_{j=1}\underline{F} _{sj}^+\underline{F} _{rj}^+\big)\partial_s\partial_rf\\\nonumber &-\frac{1}{\r^++\r^-}\sum^2_{s,r=1}\big(\r^-\underline{u} ^-_s\underline{u} ^-_r-\rho^-\sum^3_{j=1} \underline{F} _{sj}^-\underline{F} _{rj}^-\big)\partial_s\partial_rf\\\nonumber &+\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P\big(\sum^2_{s,r=1}\big(\underline{u} _s^+\underline{u} ^+_r-\sum^3_{j=1}\underline{F}
_{sj}^+\underline{F} _{rj}^+\big)\partial_s\partial_rf\big)\\\nonumber &-\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P\big(\sum^2_{s,r=1}\big(\underline{u} ^-_s\underline{u} ^-_r-\sum^3_{j=1}\underline{F} _{sj}^-\underline{F} _{rj}^-\big)\partial_s\partial_rf\big)\\\nonumber &+\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P \big((\underline{u}^+_1-\underline{u} ^-_1)\partial_1\theta+(\underline{u} ^+_2-\underline{u} ^-_2)\partial_2\theta\big)\\\nonumber &+\frac{1}{\r^++\r^-}N_f\cdot\underline{\nabla(\r^+p_{\vup, \vup}-\rho^+\sum^3_{j=1}p_{\vF^+_j, \vF^+_j})}\\\nonumber &+\frac{1}{\r^++\r^-}N_f\cdot\underline{\nabla(\r^-p_{\vum,\vum}-\rho^-\sum^3_{j=1}p_{\vF^-_j, \vF^-_j})}\\ \label{limit-f} &-\frac{1}{\r^++\r^-}(\mathcal{N}^+_f-\mathcal{N}^-_f)\widetilde{\mathcal{N}}^{-1}_f\mathcal P N_f\cdot\underline{\nabla\big(p_{\vup, \vup}-\sum^3_{j=1}p_{\vF^+_j, \vF^+_j}-p_{\vum,\vum}+\sum^3_{j=1}p_{\vF^-_j, \vF^-_j}\big)}, \end{align} where $(\vu^\pm,\vF^\pm)$ solves the div-curl system \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits \vu^\pm=P_f^{div}\vom^\pm,\quad \mbox{div}\vu^\pm=0\quad &\text{in} \quad \Om_f^\pm,\\ \vu^\pm\cdot\vN_f=\partial_tf\quad&\text{on}\quad\Gamma_f,\\ u_3^\pm=0\quad&\text{on}\quad\Gamma^\pm,\\ \int_{\Gamma^\pm}u_i^\pm dx'=\beta_i^\pm,&\\ \partial_t\beta_i^\pm~=-\int_{\Gamma^\pm}(u_s^\pm\pa_su_i^\pm-\sum\limits _{j=1}^3F^\pm_{sj}\pa_s F^\pm_{ij})dx',& \end{array} \right. \end{equation} and \begin{equation} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits\vF_j^\pm=P_f^{div}\vG_j^\pm,\quad \mbox{div} \vF_j^\pm=0 &\text{in} \quad \Om_f^\pm,\\ \vF_j^\pm\cdot\vN_f=0 &\text{on}\quad\Gamma_f,\\ F_{3j}^\pm=0&\text{on}\quad\Gamma^\pm,\\ \int_{\Gamma^\pm}F_{ij}^\pm dx'=\gamma_{ij}^\pm,\\ \partial_t\gamma_{ij}^\pm=-\int_{\Gamma^\pm}(u^\pm_s\pa_s F^\pm_{ij}-F^\pm_{sj}\pa_s u^\pm_i)dx'. \end{array} \right.
\end{equation} and \begin{align} & \pa_t\vom^\pm+\vu^\pm\cdot\nabla \vom^\pm-\sum^3_{i=1}\vF^\pm_i\cdot\nabla {\vG^\pm_i}=\vom^\pm\cdot\nabla \vu^\pm-\sum^3_{i=1}{\vG^\pm_i}\cdot\nabla \vF^\pm_i,\\ \label{limvor} & \pa_t \vG^\pm_j+\vu^\pm\cdot\nabla \vG^\pm_j-\vF^\pm_j\cdot\nabla\vom^\pm= \vG^\pm_j\cdot\nabla \vu^\pm-\vom^\pm\cdot\nabla \vF^\pm_j-2\sum^3_{s=1}\nabla u^\pm_s\times\nabla F^\pm_{sj}. \end{align} Here we recall that $p_{\vu_1^\pm,\vu_2^\pm}$ in (\ref{limit-f}) is defined by \begin{eqnarray}\nonumber \left\{ \begin{array}{l} \Delta p_{\vu_1^\pm,\vu_2^\pm}= -\mathrm{tr}(\nabla\vu_1^\pm\nabla\vu_2^\pm) \quad \text{in}\quad\Omega^\pm_f,\\ p_{\vu_1^\pm, \vu_2^\pm}=0\quad\text{on}\quad\Gamma_f, \quad\\ \ve_3\cdot\nabla p_{\vu_1^\pm, \vu_2^\pm}=0\quad\text{on}\quad\Gamma^\pm. \end{array}\right. \end{eqnarray} To finish the proof of Theorem \ref{thm:1}, we need to show that the limit system (\ref{eq:limit-theta})-(\ref{limvor}) is equivalent to the original system (\ref{els})-(\ref{elsi}). We introduce the pressure $p^\pm$ of the fluid by \begin{align*} p^\pm=\mathcal{H}_f^\pm\underline{p}^\pm+\rho^\pm p_{\vupm, \vupm}-\rho^\pm\sum^3_{j=1}p_{\vF^\pm_j, \vF^\pm_j}, \end{align*} where \begin{align*} \underline{p}^+=\underline{p}^-=\widetilde{\mathcal{N}}^{-1}_f\mathcal{P}(g^+-g^-) \end{align*} with \begin{align*} g^\pm=&~2(\underline{u}^\pm_1\partial_1\theta+\underline{u}^\pm_2\partial_2\theta)+\vN\cdot\underline{\nabla(p_{\vupm, \vupm}-p_{\vhpm, \vhpm})} +\sum_{i,j=1}^2\big(\underline{u}_i^\pm \underline{u}^\pm_j-\sum^3_{l=1}\underline{F} ^\pm_{il}\underline{F} ^\pm_{jl}\big)\partial_i\partial_jf. \end{align*} The key idea to prove the consistency is to show that \begin{eqnarray}\label{eq:limit-u} \left\{ \begin{array}{l} \mbox{div}\vw^\pm=0,\quad \mathop{\rm curl}\nolimits\vw^\pm=0\quad\text{in}\quad\Omega^\pm_f,\\ \vw^\pm\cdot\vN_f=0\quad\text{on}\quad\Gamma_f,\\ w_3^\pm=0\quad\text{on}\quad \Gamma^\pm,\quad\int_{\Gamma^\pm}w_i^\pm dx'=0~(i=1,2).
\end{array}\right. \end{eqnarray} for \begin{align*} \vw^\pm=\partial_t\vu^\pm+\vu^\pm\cdot\nabla\vu^\pm-\sum^3_{j=1}\vF^\pm_j\cdot\nabla\vF^\pm_j+\nabla p^\pm, \end{align*} or \begin{align*} \vw^\pm=\pa_t \vF_j^\pm + \vu^\pm\cdot\nabla \vF_j^\pm - \vF_j^\pm\cdot\nabla\vu^\pm, \quad j=1,2,3. \end{align*} The proof of (\ref{eq:limit-u}) can be accomplished by following \cite[Section 9]{SWZ1} line by line, so we omit the details here. \section{Proof of Theorem \ref{thm:2}} In this section, we consider the system (\ref{elsf})-(\ref{elsfi}). Since the proof of Theorem \ref{thm:2} is quite analogous to the proof of Theorem \ref{thm:1}, we only present the main steps that differ from those for the problem (\ref{els})-(\ref{elsi}). We first note that the stability condition ${\rm{rank}}(\vF)=2$ is equivalent to (\ref{condition:s1}) with $\rho^+=0$ and $\vF^-=\vF$, which further implies that there exists $c_0>0$ such that \begin{align}\label{condition:2} \Lambda(\vF)\buildrel\hbox{\footnotesize def}\over =&\inf_{x\in\Gamma_t}\inf_{\ph_1^2+\ph_2^2=1}\sum^3_{j=1}(F_{1j}\ph_1+F_{2j}\ph_2)^2\ge c_0. \end{align} Following the derivation of (\ref{eq:theta-d}), one can deduce that \begin{align} \partial_tf=&\theta,\\ \partial_t\theta =&-2(\underline{u}_1\partial_1\theta+\underline{u}_2\partial_2\theta)-\frac{1}{\r}\vN\cdot\underline{\nabla p}-\sum^2_{s,r=1} \underline{u}_s\underline{u}_r\partial_s\partial_rf+\sum^3_{j=1}\sum^2_{s,r=1}\underline{F}_{sj}\underline{F}_{rj}\partial_s\partial_rf.
\end{align} with $p=\sum\limits^3_{j=1}p_{\vF_j,\vF_j}-p_{\vu,\vu}.$ By the stability condition (\ref{condition:2}), we obtain \begin{align}\nonumber E_s(\partial_tf,f)\buildrel\hbox{\footnotesize def}\over = &~\big\|(\partial_t+u_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}} f\big\|_{L^2}^2+\sum\limits^3_{j=1}\big\|\underline{F}_{ij}\partial_i\langle \nabla\rangle^{s- \frac{1}{2}} f\big\|_{L^2}^2\\ \ge &~\big\|(\partial_t+u_i\partial_i)\langle \nabla\rangle^{s- \frac{1}{2}} f\big\|_{L^2}^2+c_0\sum_{i=1}^2\big\|\partial_i\langle \nabla\rangle^{s- \frac{1}{2}} f\big\|_{L^2}^2. \nonumber \end{align} Consequently, it holds that \begin{align*} &\|\partial_t f\|_{H^{s-\f12}}^2+\|f\|_{H^{s+\f12}}^2 \le C(c_0,L_0)\Big\{E_s(\partial_t f,f)+\|\partial_t f\|_{L^2}^2+\|f\|_{L^2}^2\Big\}, \end{align*} which is actually (\ref{linear:equi-norm-2}). Then the remaining part of the proof follows the proof of Theorem \ref{thm:1} step by step. \begin{appendix} \section{} \subsection{Div-Curl system} From Section 5 of \cite{SWZ1}, we know that the div-curl system \begin{equation}\label{eq:div-curl} \left\{ \begin{array}{ll} \mathop{\rm curl}\nolimits \vu =\vom,\quad\mbox{div} \vu=g& \text{ in }\quad\Om_f^+,\\ \vu\cdot\vN_f =\vartheta& \text{ on}\quad \Gamma_f, \\ \vu\cdot\ve_3 = 0,\quad \int_{\mathbb{T}^2} u_i dx'=\alpha_i (i=1,2)& \text{ on}\quad\Gamma^{+}, \end{array} \right. \end{equation} with $f\in H^{s+\frac{1}{2}}(\mathbb{T}^2)$ for $s\ge 2$ satisfying \begin{align*} -(1-c_0)\le f\le(1-c_0), \end{align*} has a unique solution. \begin{proposition}\label{prop:div-curl} Let $\sigma \in [2,s]$ be an integer.
Given $\vom, g\in H^{\sigma-1}(\Omega_f^+)$, $\vartheta\in H^{\sigma-\frac12}(\Gamma_f)$ with the compatibility condition \begin{align*} \int_{\Om_f^+} g dx=\int_{\Gamma_f} \vartheta ds, \end{align*} assume that $\vom$ satisfies \begin{align*} &\mbox{div}\vom=0\quad \text{in}\quad \Omega_f^+,\quad\int_{\Gamma^+}\om_3dx'=0. \end{align*} Then there exists a unique solution $\vu\in H^{\sigma}(\Omp)$ of the div-curl system (\ref{eq:div-curl}) so that \begin{align*} \|\vu\|_{H^{\sigma}(\Omega_f^+)}\le C\big(c_0,\|f\|_{H^{s+\f12}}\big)\Big(\|\vom\|_{H^{\sigma-1}(\Omega_f^+)}+\|g\|_{H^{\sigma-1}(\Omega_f^+)} +\|\vartheta\|_{H^{\sigma-\frac12}(\Gamma_f)}+|\alpha_1|+|\alpha_2|\Big). \end{align*} \end{proposition} \subsection{Commutator estimate} \begin{lemma}\label{lem:commutator} If $s>1+\frac{d}{2}$, then we have \begin{equation} \big\|[a, \langle\na\rangle^s]u\big\|_{L^2}\le C\|a\|_{H^{s}}\|u\|_{H^{s-1}}. \end{equation} \end{lemma} \subsection{Sobolev estimates of the DN operator} \begin{proposition}\label{prop:DN-Hs} If $f\in H^{s+\frac{1}{2}}(\mathbb{T}^2)$ for $s>\frac{5}{2}$, then it holds that for any $\sigma\in \big[-\frac{1}{2},s- \frac{1}{2} \big]$, \begin{equation} \|\mathcal{N}^\pm_f\psi\|_{H^{\sigma}}\le K_{s+\frac{1}{2},f}\|\psi\|_{H^{\sigma+1}}. \end{equation} Moreover, it holds that for any $\sigma\in \big[\frac{1}{2},s- \frac{1}{2} \big]$, \begin{equation} \|\big(\mathcal{N}^+_f-\mathcal{N}^-_f\big)\psi\|_{H^\sigma}\le K_{s+\frac{1}{2},f}\|\psi\|_{H^\sigma}, \end{equation} where $K_{s+\frac{1}{2},f}$ is a constant depending on $c_0$ and $||f||_{H^s}$. \end{proposition} \begin{proposition}\label{prop:DN-inverse} If $f\in H^{s+\frac{1}{2}}(\mathbb{T}^2)$ for $s>\frac{5}{2}$, then it holds that for any $\sigma\in \big[-\frac{1}{2},s- \frac{1}{2}\big]$, \begin{equation} \|\mathcal{G}^\pm_f\psi\|_{H^{\sigma+1}}\le K_{s+\frac{1}{2},f}\|\psi\|_{H^{\sigma}}, \end{equation} where $\mathcal{G}^\pm_f\triangleq\big(\mathcal{N}^\pm_f\big)^{-1}$. \end{proposition} \end{appendix}
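The commutator bound of Lemma \ref{lem:commutator} can be sanity-checked numerically on the one-dimensional torus ($d=1$, $s=2$, so $s>1+\frac d2$ holds); the fields and the tolerance below are illustrative choices, not taken from the text:

```python
import cmath, math

# Check ||[a, <nabla>^s] u||_{L^2} <= C ||a||_{H^s} ||u||_{H^{s-1}} on T^1,
# where <nabla>^s is the Fourier multiplier (1 + k^2)^{s/2}.
N, s = 16, 2.0
xs = [2 * math.pi * j / N for j in range(N)]

def freq(k):
    return k if k < N // 2 else k - N

def dft(f):
    return [sum(f[j] * cmath.exp(-1j * k * xs[j]) for j in range(N)) / N
            for k in range(N)]

def idft(F):
    return [sum(F[k] * cmath.exp(1j * k * xs[j]) for k in range(N))
            for j in range(N)]

def bessel(F, t):  # apply <nabla>^t in Fourier variables
    return [F[k] * (1 + freq(k) ** 2) ** (t / 2) for k in range(N)]

def h_norm(F, t):  # H^t norm via Parseval (measure dx / (2 pi))
    return math.sqrt(sum((1 + freq(k) ** 2) ** t * abs(F[k]) ** 2
                         for k in range(N)))

a = [math.sin(x) for x in xs]
u = [math.cos(2 * x) for x in xs]

lam_u = idft(bessel(dft(u), s))                    # <nabla>^s u
term1 = [a[j] * lam_u[j].real for j in range(N)]   # a <nabla>^s u
term2 = idft(bessel(dft([a[j] * u[j] for j in range(N)]), s))  # <nabla>^s(au)
comm = dft([term1[j] - term2[j].real for j in range(N)])

lhs = h_norm(comm, 0.0)
rhs = h_norm(dft(a), s) * h_norm(dft(u), s - 1)
print("commutator norm:", lhs, "  H^s x H^{s-1} product:", rhs)
```

With these band-limited fields there is no aliasing, so the computed ratio is exact; of course a single example only illustrates, and cannot prove, the estimate.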
\section{Introduction} The two-mode (TM) model applied to double-well atomic Bose-Einstein condensates has been extensively studied in recent years \cite{smerzi97,ragh99,anan06,jia08,albiez05,mele11,abad11,doublewell_a, doublewell_b,doublewell_c,doublewell_d,doublewell_e}. Such a model assumes that the condensate order parameter can be described as a superposition of wave functions localized in each well with time-dependent coefficients \cite{smerzi97,ragh99}. The localized wave functions are straightforwardly obtained in terms of the stationary symmetric and antisymmetric states, which in turn determine the parameters involved in the TM equations of motion \cite{smerzi97,ragh99,anan06,jia08}. The corresponding dynamics exhibits Josephson and self-trapping regimes \cite{smerzi97,ragh99}, which have been experimentally observed by Albiez {\it et al.} \cite{albiez05}. The self-trapping (ST) phenomenon, which is also present in extended optical lattices \cite{optlat_a,optlat_b,optlat_c,optlat_d, Anker2005,Wang2006}, is a nonlinear effect in which the population difference between neighbouring sites does not change sign during the whole time evolution. There is nowadays active research on the ST effect involving different types of systems, including mixtures of atomic species \cite{stlastoplat,mele11}. Research on condensates trapped in ring-shaped optical lattices is also a promising area, given that successful efforts have been made towards their experimental realization \cite{hen09}. The dynamics of systems with three \cite{trespozos2011} and four wells \cite{cuatropozos06} was initially investigated through multimode models that utilized {\em ad-hoc} values for the hopping and on-site energy parameters. In \cite{jezek13b}, by contrast, such parameters were extracted for a ring-shaped optical lattice with an arbitrary number of wells by constructing two-dimensional localized Wannier-like (WL) functions in terms of stationary Gross-Pitaevskii (GP) states.
In recent works it has been shown that a correction to the TM model involving the interaction energy should be taken into account in order to properly describe the exact dynamics \cite{jezek13a,jezek13b}. In particular, in \cite{jezek13a} an effective two-mode (ETM) model has been developed with an interaction parameter, obtained analytically in the Thomas-Fermi (TF) limit, that completely heals this disagreement. In the present work, we extend these studies to lower numbers of particles by numerically calculating the effective parameter that enters the model. Here we analyze the double-well system under the experimental conditions of \cite{albiez05}, where the number of particles is $1150$, and increase this number to show that the correction to the on-site interaction energy parameter approaches the one predicted in the TF regime \cite{jezek13a}. The main goal of this work is to assess the accuracy of the ETM model by calculating the time periods as functions of the initial imbalance and analyzing the role of the different parameters. To this end, we compare the orbit periods predicted by this model with those obtained by numerically solving the three-dimensional Gross-Pitaevskii equation. In particular, within the effective two-mode model framework we derive closed expressions for the periods valid for any imbalance value. We then develop a simple analytical approximation to the ST period and improve the calculation of the Josephson period for small imbalances by taking into account the parameter that involves the density overlap between the localized states in neighbouring sites \cite{anan06}. This correction is of importance for the experimental configuration of the Heidelberg group \cite{albiez05}. We will show that the critical imbalance for the transition between the Josephson and ST regimes predicted by our model is, for the first time, in good agreement with the experimental finding of Ref.\ \cite{AlbiezThesis05}. This paper is organized as follows. In Sect.\ \ref{twomode} we describe the double-well system and find the effective on-site interaction energy parameter for several particle numbers. Such a parameter is obtained from a linear approximation of the on-site interaction energy as a function of the imbalance. We will show that the corresponding second-order term in the approximation turns out to be much smaller and gives rise to a third-order correction in the equations of motion which can be safely disregarded. In Sect.\ \ref{sec:timeperiods} we derive a closed integral form for the period of the orbits with an arbitrary initial imbalance and obtain explicit analytical approximations within the ST and Josephson regimes, while the numerical results and comparisons with the GP calculations are included in Sect.\ \ref{sec:NumRes}. To conclude, a summary of our work is presented in Sect.\ \ref{sum}, including a perspective on the application of these methods to multiple-well systems in configurations with high symmetries. Finally, the definitions of the parameters employed in the equations of motion are gathered in the Appendix. \section{\label{twomode} Two-mode model} We consider a Bose-Einstein condensate of rubidium atoms confined by the external potential $V_{\mathrm{trap}}$ used in the experiment of the Heidelberg group \cite{albiez05}, \begin{equation} V_{\mathrm{trap}}({\bf r} ) = \frac{ 1 }{2 } \, m \, ( \omega_{x}^2 x^2 + \omega_{y}^2 y^2 + \omega_{z}^2 z^2 ) + V_0 \, \cos^2(\pi x/q_0), \end{equation} where $m$ is the atomic mass, $ \omega_{x}= 2 \pi \times 66 $ Hz, $ \omega_{y}= 2 \pi \times 78 $ Hz, and $ \omega_{z}= 2 \pi \times 90$ Hz. The lattice parameters are given by $V_0= 2\pi\times 412 \, \hbar $ Hz and $ q_0=5.2 \,\mu$m. The number of particles used in the experiment is $N= 1150$, but we will also consider particle numbers up to $N=10^5$.
\subsection{Inclusion of effective on-site interaction energy effects} In previous works \cite{jezek13a,jezek13b} we have shown that the linear dependence on the imbalance of the interaction energy integrated in each well gives rise to a lower effective on-site interaction parameter. Here we evaluate such a parameter by combining the procedures described in \cite{jezek13a,jezek13b}, and also by expanding to a higher-order approximation in the imbalance. In doing so, we first rewrite the TM equations of motion by assuming that the on-site interaction energy $U$ can be different in the left ($U_L$) and right ($U_R$) wells. As described in \cite{jezek13a}, $U_R$ and $U_L$ arise from introducing into the mean-field interaction term of the GP equation a more realistic density distribution that depends on the imbalance. Projecting the GP equation onto two localized modes at the left and right wells then yields \cite{anan06} \begin{equation} \hbar \frac{dZ}{dt} = - {2K} \sqrt{1-Z^2}\,\sin\varphi + I(1-Z^2)\sin 2\varphi \label{imb1} \end{equation} \begin{align} \hbar \frac{d\varphi}{dt} &= {U_R(Z)} N_R - U_L(Z) N_L + {2K} \left[ \frac{Z}{\sqrt{1-Z^2}}\right]\cos\varphi \nonumber \\& - I Z (2+\cos 2\varphi). \label{phase1} \end{align} The dynamical variables are the standard imbalance $ Z = (N_R - N_L)/N $ and phase difference $ \varphi= \varphi_L- \varphi_R$, where $N_R$ and $N_L$ are the numbers of particles in the right and left wells, respectively. As derived in \cite{jezek13a} we have \begin{equation} U_k (\Delta N) = g\int d^3{\bf r}\,\, \rho^k_N({\bf r}) \, \rho^k_{N+\Delta N}({\bf r}), \label{ur} \end{equation} where $ k=R,L$, and $\rho^k_{N}$ and $\rho^k_{N+\Delta N}$ are the localized densities at the $k$-site for systems with total particle numbers $N$ and $N+\Delta N$, respectively.
The remaining parameters $J$, and the interaction-driven $F$ and $I$, are defined as usual \cite{smerzi97,ragh99,anan06} in terms of the localized wave functions (see the Appendix), with $K= J+F$. Aiming at reproducing the experimental conditions of \cite{albiez05}, where the number of particles $N= 1150$ is not large enough to be in the Thomas-Fermi regime, so that the dependence on the imbalance of the on-site interaction energies $U_R$ and $U_L$ cannot be calculated analytically, we have to evaluate Eq.\ (\ref{ur}) numerically. To simplify the numerical calculation, given that the wells are equal, instead of using the localized densities in Eq.\ (\ref{ur}) we can use the alternative method proposed in \cite{jezek13b}, where only GP ground-state densities are involved. In that work it has been shown that $U_k(\Delta N_k)$ with $\Delta N_k=N_k-N/2$ can be evaluated as \begin{equation} \frac{U_k (\Delta N_k)}{U} = \frac{ \int d^3{\bf r}\,\, \rho_N({\bf r}) \, \rho_{N+\Delta N}({\bf r})} { \int d^3{\bf r}\,\, \rho^2_N({\bf r}) }, \label{coc} \end{equation} where $\rho_N$ and $\rho_{N+\Delta N}$ are the GP ground-state densities for systems with $N$ and $N + \Delta N $ particles in total, respectively, with $\Delta N=2\Delta N_k$. The numerical result for $U_k/U$ as a function of $ \Delta N /N $ is depicted in Fig.\ \ref{fig:URZ}, where it can be seen to exhibit an almost linear behaviour. A second-order approximation of $U_k$, \begin{equation} \frac{U_{k}(\Delta N_k)}{U} \simeq \ 1 - \alpha \frac{ 2 \Delta N_k} {N} + \beta \left( \frac{ 2 \Delta N_k} {N} \right)^2 \label{Uek} \end{equation} can be obtained by a polynomial fit with parameters $\alpha$ and $\beta$. These parameters are listed in Table \ref{tab:1} for different numbers of particles and trapping parameters.
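The quadratic fit of Eq.\ (\ref{Uek}) is simple enough to sketch in code. The snippet below is illustrative only: it generates sample ratios from the Table \ref{tab:1} coefficients for $N=1150$ (standing in for the GP-computed ratios of Eq.\ (\ref{coc})) and recovers $\alpha$ and $\beta$ by exact quadratic interpolation.

```python
# Illustrative sketch of the fit in Eq. (Uek):
#   U_k/U ~ 1 - alpha*x + beta*x^2,  with x = 2*DeltaN_k/N.
# The sample ratios are generated from the Table 1 coefficients for
# N = 1150 (alpha = 0.21, beta = 0.06); they stand in for the
# GP-computed ratios of Eq. (coc).

def fit_quadratic(pts):
    """Coefficients (c0, c1, c2) of y = c0 + c1*x + c2*x^2 through 3 points,
    via Lagrange interpolation expanded into monomials."""
    (x0, y0), (x1, y1), (x2, y2) = pts

    def basis(xa, xb, xc, ya):
        d = (xa - xb) * (xa - xc)
        return (ya * xb * xc / d, -ya * (xb + xc) / d, ya / d)

    cs = [basis(x0, x1, x2, y0), basis(x1, x0, x2, y1), basis(x2, x0, x1, y2)]
    return tuple(sum(c[i] for c in cs) for i in range(3))

alpha, beta = 0.21, 0.06
ratio = lambda x: 1.0 - alpha * x + beta * x ** 2
pts = [(x, ratio(x)) for x in (-0.4, 0.0, 0.4)]
c0, c1, c2 = fit_quadratic(pts)
print(c0, -c1, c2)  # recovers 1, alpha, beta
```

In the paper the points come from Eq.\ (\ref{coc}); with noisy data a least-squares fit over more points would replace the three-point interpolation used here.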
It is worthwhile mentioning that for the largest number of particles considered in this work, we have taken a larger $q_0$ value than that of the Heidelberg experiment and modified the depth of the wells, since the size of the condensate increases with the number of particles. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{fig1n.eps} \caption{\label{fig:URZ}(color online) On-site interaction energy ratio $U_k/U$ as a function of $\Delta N/N$, for $N= 1150 $, $N=10^4 $, and $N=10^5 $. } \end{figure} \begin{table} \caption{\label{tab:1} Coefficients $\alpha$ and $\beta$ of the quadratic fit of $U_k/U$ as a function of $\Delta N/N$ and $\gamma$ for several values of the system parameters. In the 6$^{\mathrm{th}}$ column the factor $f_{3D}$ that reduces the interaction energy parameter is also given. } \tabcolsep=3pt \begin{tabular}{lccccccc} \hline $N$ & $q_0 (\mu\mathrm{m})$ & $V_0 (2\pi \hbar\mathrm{Hz})$ & $\alpha$ & $\beta$ & $f_{3D} = 1-\alpha $ & $\gamma$ & \\[2pt] \hline $ 1150 $ & 5.2 & 412 & 0.21 & 0.06 & 0.79 & 0.064 \\[3pt] $ 10^4 $ & 5.2 & 858 & 0.28 & 0.08 & 0.72 & 0.010 \\[3pt] $ 10^5 $ & 8.0 & 1980 & 0.30 & $\;\,$0.08$^{\rm a}$ & $\;\,$0.70$^{\rm b}$ & 0.005 \\[3pt] \hline \end{tabular} $^{\rm a}$ The Thomas-Fermi limit is 0.077.\\ $^{\rm b}$ The Thomas-Fermi limit is 0.70. \end{table} Introducing the expansions of $U_R$ and $U_L$ into the equation of motion (\ref{phase1}) for the phase difference, we obtain the on-site interaction-driven correction, \begin{equation} \frac{U_R(\Delta N_R)}{U} N_R - \frac{ U_L (\Delta N_L)}{U} N_L = \left[(1-\alpha) Z + \beta Z^3 \right] N , \label{resta} \end{equation} which yields \begin{align} \hbar \frac{d\varphi}{dt} &=\left[(1-\alpha) Z + \beta Z^3 \right] U N + {2K} \left[ \frac{Z}{\sqrt{1-Z^2}}\right]\cos\varphi \nonumber \\ &-I Z (2 + \cos 2\varphi).
\label{phase2A} \end{align} We note that for all numbers of particles in Table \ref{tab:1} we have $\beta Z^3 \ll (1-\alpha)Z$, and hence one can safely disregard the third-order term in $Z$ in Eq.\ (\ref{phase2A}) in all cases. We thus conclude that the effective TM model can be obtained simply by replacing the on-site interaction energy parameter $U$ by $U_{\mathrm{eff}} = (1-\alpha) U = f_{3D} U $. For the largest number of particles considered here, we have $f_{3D}= 0.70$, in accordance with the analytic result obtained in the Thomas-Fermi approximation \cite{jezek13a}, whereas for the lowest value, $N=1150$, we obtain $f_{3D}=0.79 $. Such a value does not seem to depend on the ratio of the trap frequencies: in \cite{abad15} the harmonic trap frequencies are equal in the three directions and the same value of $ f_{3D} $ was obtained. \subsection{Two-mode model using the effective interaction parameter} We now focus on the experimentally relevant case of $N=1150$, for which we have obtained the following TM model \cite{anan06} parameters: $U=2.47 \times 10^{-3}\, \hbar \omega_x $, $J=1.89 \times 10^{-2} \, \hbar \omega_x $, $ F = 2.51 \times 10^{-2} \, \hbar \omega_x $, and $I= 5.62\times 10^{-3}\, \hbar\omega_x$. Using the results of the previous section, we obtain $U_{\mathrm{eff}}= f_{3D} U = 1.95 \times 10^{-3} \, \hbar \omega_x $, with $ f_{3D}=1-\alpha =0.79 $. In terms of the conjugate coordinates, imbalance $Z$ and phase difference $ \varphi$, one can define the following ETM model Hamiltonian \cite{jezek13a}: \begin{align} H_{\mathrm{ETM}} (Z,\varphi) &= \frac{1}{2} \Lambda_{\mathrm{eff}} Z^2 - \sqrt{1-Z^2}\cos\varphi \nonumber \\ &+ \frac{\gamma}{2}(1-Z^2)(2 + \cos 2\varphi) , \label{eq:HETM} \end{align} with $ \Lambda_{\mathrm{eff}} = {U_{\mathrm{eff}} N}/{(2 K) }$ and $\gamma=I/(2K)$.
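The quoted parameter values can be cross-checked directly; the small sketch below (a consistency check, not part of the original calculation) uses only the numbers listed above, with all energies in units of $\hbar\omega_x$.

```python
# Consistency check of the N = 1150 ETM parameters quoted in the text;
# all energies are in units of hbar*omega_x.
U, J, F, I = 2.47e-3, 1.89e-2, 2.51e-2, 5.62e-3
K = J + F                      # K = J + F
f3D = 0.79                     # f_3D = 1 - alpha (Table 1)
U_eff = f3D * U                # effective on-site interaction
N = 1150
Lam_eff = U_eff * N / (2 * K)  # Lambda_eff = U_eff N / (2K)
gamma = I / (2 * K)            # gamma = I / (2K)
print(U_eff, Lam_eff, gamma)   # approx 1.95e-3, 25.5, 0.064
```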
The corresponding equations of motion are given in Hamiltonian form by \begin{equation} \dot{Z} = - \frac{\partial}{\partial \varphi}H_{\mathrm{ETM}}\quad \mathrm{and} \quad \dot{\varphi} = \frac{\partial}{\partial Z}H_{\mathrm{ETM}}, \end{equation} which yield \begin{equation} \frac{dZ}{dt} = - \sqrt{1-Z^2}\,\sin\varphi + \gamma(1-Z^2)\sin 2\varphi \label{imb} \end{equation} \begin{equation} \frac{d\varphi}{dt} = \Lambda_{\mathrm{eff}} Z + \left[ \frac{Z}{\sqrt{1-Z^2}}\right]\cos\varphi -\gamma Z (2+\cos 2\varphi), \label{phase2B} \end{equation} where the time $t$ is given in units of $\hbar/2 K $. The separatrix between Josephson and ST orbits in the phase portrait $( Z, \varphi )$ has a critical imbalance $Z_c$ determined by the condition $H(Z_c,0)=H(0,\pi)$, which yields \begin{equation} Z^{\mathrm{ETM}}_c = 2\frac{\sqrt{\Lambda_{\mathrm{eff}}-3\gamma - 1 }}{\Lambda_{\mathrm{eff}} -3\gamma} \, . \label{eq:ZcETM} \end{equation} Using $ \Lambda_{\mathrm{eff}} = 25.5$ we obtain a critical imbalance $ Z_c^{\mathrm{ETM}} = 0.389$, which is much closer to the numerically found value, $ Z_c^{\mathrm{GP}} = 0.39$, than the value $ Z_c^{\mathrm{TM}} = 0.347 $ obtained with the bare $ \Lambda = 32.27 $ from the TM-model version improved by Ananikian \textit{et al.} \cite{anan06}. We also note that the effect of $\gamma$ is negligible in the $Z_c$ calculation. The numerical value of $Z_c^{\mathrm{GP}}$ was obtained by analyzing time evolutions of the GP equation with different initial conditions, as done in \cite{mele11}. In contrast to previous approximations, the value of $Z_c^{\mathrm{ETM}}$ compares very well with the experimental finding of the Heidelberg group, as indicated in \cite{AlbiezThesis05}. We can estimate the relative deviation between the ETM and TM models as \begin{equation} \frac{\Delta Z_c}{Z_c^{\mathrm{TM}}} \simeq \frac{1}{ \sqrt{f_{3D}}} - 1\,, \label{error} \end{equation} which goes from $0.13$ for $N=1150$ to $0.2$ for the largest $N$ considered.
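As an illustrative check (not part of the original calculation), one can evaluate Eq.\ (\ref{eq:ZcETM}) and integrate Eqs.\ (\ref{imb}) and (\ref{phase2B}) with a standard fourth-order Runge-Kutta step; initial imbalances on either side of $Z_c$ then display the Josephson (sign-changing) and ST (sign-preserving) behaviour, respectively.

```python
import math

Lam_eff, gamma = 25.5, 0.064

def Zc(Lam, g):
    # Critical imbalance of Eq. (eq:ZcETM)
    A = Lam - 3 * g
    return 2 * math.sqrt(A - 1) / A

def rhs(Z, phi, Lam, g):
    # Eqs. (imb) and (phase2B); time in units of hbar/(2K)
    s = math.sqrt(1 - Z * Z)
    dZ = -s * math.sin(phi) + g * (1 - Z * Z) * math.sin(2 * phi)
    dphi = Lam * Z + (Z / s) * math.cos(phi) - g * Z * (2 + math.cos(2 * phi))
    return dZ, dphi

def evolve(Zi, Lam, g, T=6.0, dt=1e-4):
    # Classical RK4 integration starting from (Z, phi) = (Zi, 0)
    Z, phi, traj = Zi, 0.0, []
    for _ in range(int(T / dt)):
        k1 = rhs(Z, phi, Lam, g)
        k2 = rhs(Z + 0.5 * dt * k1[0], phi + 0.5 * dt * k1[1], Lam, g)
        k3 = rhs(Z + 0.5 * dt * k2[0], phi + 0.5 * dt * k2[1], Lam, g)
        k4 = rhs(Z + dt * k3[0], phi + dt * k3[1], Lam, g)
        Z += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        phi += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        traj.append(Z)
    return traj

print(Zc(Lam_eff, gamma))                 # approx 0.39
josephson = evolve(0.10, Lam_eff, gamma)  # below Zc: Z changes sign
trapped = evolve(0.45, Lam_eff, gamma)    # above Zc: Z keeps its sign
print(min(josephson) < 0, min(trapped) > 0)
```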
\section{\label{sec:timeperiods} Two-mode model periods} \subsection{Exact determination} The time periods of orbits in both the TM and ETM models can be obtained for any initial imbalance $Z_i$ and phase difference $\varphi_i$. For a classical Hamiltonian system such as that described by Eq.\ (\ref{eq:HETM}), we can obtain the period $\tau$ from the line integral over a given trajectory, $\tau=-\oint 1/(\partial H/\partial \varphi) dZ$ \cite{Huang2013}. Following this approach, an expression which does not include the parameter $\gamma$ was previously obtained in \cite{Fu2006,Huang2013} for the TM model. Here we extend that result and show that an expression incorporating $\gamma$ can also be obtained, demonstrating that this correction may be important in the Josephson regime. The period $\tau$ of a given trajectory can be calculated from the integral $\tau=\oint (1/{\dot{Z}})dZ$, where $\dot{Z}$ is given by Eq.\ (\ref{imb}). The relation between $Z$ and $\varphi$ is obtained for a given energy $E$ by setting $H(Z,\varphi)=E$, yielding a quadratic equation for $\cos{\varphi}$ with the solution $\cos\varphi=\frac{1}{2{\gamma}\sqrt{1-Z^2}}(1-\sqrt{Y})$, where $Y=1-2\gamma[(\Lambda-\gamma)Z^2-2E+\gamma]$. Taking this into account, the time period is given by \begin{equation} {\tau}(Z_i,{\varphi}_i)=2\int_{Z_m}^{Z_M}dZ\frac{1}{\sqrt{Y}}\frac{1}{\sqrt{1-Z^2-\frac{1}{4{\gamma}^2}(1-\sqrt{Y})^2}} \label{eq:tau} \end{equation} where $Z_m$ ($Z_M$) is the minimum (maximum) imbalance reached by the system. The values of $Z_m$ and $Z_M$ are obtained from the phase diagram that emerges by setting $H=E$, and have different expressions depending on the regime. In the Josephson regime ($Z_M < Z_c$) the conditions are $H(Z_i,\varphi_i)=H(Z_M,0)=H(Z_m,0)$ with $Z_M>0$ and $Z_m=-Z_M$, which give \begin{equation} Z_{m\atop M}=\mp\sqrt{\frac{2}{A^2}\left[AB-1+\sqrt{C}\right]}, \end{equation} where $A=\Lambda-3\gamma$, $B=E-{3\gamma}/{2}$ and $C=(AB-1)^2-A^2(B^2-1)$.
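The turning-point formula can be verified numerically: with $\varphi_i=0$ the initial point is itself a turning point, so it must return $Z_M=Z_i$, and both roots must lie on the energy surface $H(Z,0)=E$. A short illustrative check:

```python
import math

def H(Z, phi, Lam, g):
    # ETM Hamiltonian of Eq. (eq:HETM)
    return (0.5 * Lam * Z ** 2 - math.sqrt(1 - Z ** 2) * math.cos(phi)
            + 0.5 * g * (1 - Z ** 2) * (2 + math.cos(2 * phi)))

def josephson_turning_points(E, Lam, g):
    # Z_m = -Z_M with Z_M^2 = (2/A^2)(A*B - 1 + sqrt(C))
    A = Lam - 3 * g
    B = E - 1.5 * g
    C = (A * B - 1) ** 2 - A ** 2 * (B ** 2 - 1)
    ZM = math.sqrt(2 * (A * B - 1 + math.sqrt(C)) / A ** 2)
    return -ZM, ZM

Lam, g, Zi = 25.5, 0.064, 0.10      # illustrative Josephson orbit
E = H(Zi, 0.0, Lam, g)
Zm, ZM = josephson_turning_points(E, Lam, g)
print(ZM, H(ZM, 0.0, Lam, g) - E)   # ZM = Zi, energy conserved
```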
On the other hand, in the ST regime, taking into account that the phase diagram is symmetric under the inversion of $Z_i$, we restrict the domain of $Z_i$ to $Z_i>0$. In this case the conditions read $H(Z_i,\varphi_i)=H(Z_M,0)=H(Z_m,\pi)$, valid for $Z_M > Z_c$, which yield \begin{equation} Z_{m\atop M}=\sqrt{\frac{2}{A^2}\left[AB-1\mp\sqrt{C}\right]}. \end{equation} This formulation can also be used for the ETM model by replacing $\Lambda$ by $\Lambda_{\mathrm{eff}}$. It is worthwhile mentioning that for $\gamma=0$ the expression (\ref{eq:tau}) can be written in terms of the complete elliptic integral of the first kind $\mathcal{K}(k)$, as shown previously in \cite{ragh99} by directly integrating the equations of motion for $Z(t)$ and $\varphi(t)$. \subsection{Approximate expressions} Even though the above formalism provides a closed integral form for the time periods amenable to numerical calculation, both in the Josephson and ST regimes, it is also useful to derive analytical expressions in specific limits. In the case of small oscillations, by retaining only quadratic terms in the Hamiltonian, Eq.\ (\ref{eq:tau}) can be straightforwardly integrated and we recover the expression given by the standard formula in \cite{smerzi97,mele11} with the inclusion of $\gamma$ \cite{anan06}. Replacing $U$ by $U_{\mathrm{eff}}$, one thus obtains the ETM model period, \begin{equation} T^{\mathrm{ETM}}_{so}= \frac{\pi \hbar }{ K \sqrt{(\Lambda_{\mathrm{eff}} + 1-3\gamma)(1-2\gamma)}} \,, \label{eq:tpeqosc} \end{equation} which yields $T^{\mathrm{ETM}}_{so}= 14.91 \, \omega_x^{-1} $, in contrast to $ T_{so}^{\mathrm{TM}}= 13.29 \, \omega_x^{-1} $ obtained using the bare $\Lambda$ value. We remark that an important correction is also provided by the parameter $\gamma$. This correction diminishes with increasing $Z_i$, and it does not sizeably affect either the critical imbalance $Z_c$ or the time periods in the ST regime.
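The two quoted small-oscillation periods follow directly from Eq.\ (\ref{eq:tpeqosc}) with the $N=1150$ parameters of Sect.\ \ref{twomode}; the snippet below is a numerical consistency sketch, not part of the original analysis.

```python
import math

def T_so(Lam, g, K):
    # Small-oscillation period of Eq. (eq:tpeqosc), in units of 1/omega_x
    # (hbar = 1, K in units of hbar*omega_x).
    return math.pi / (K * math.sqrt((Lam + 1 - 3 * g) * (1 - 2 * g)))

K = 1.89e-2 + 2.51e-2     # K = J + F for N = 1150
g = 0.064                 # gamma = I/(2K)
print(T_so(25.5, g, K))   # ETM value, approx 14.91
print(T_so(32.27, g, K))  # TM value, approx 13.29
```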
In the ST regime one can also derive a limiting approximation for the time period, valid for large $\Lambda$. In this case, we can neglect $\gamma$ and take the first-order approximation of the function $\mathcal{K}(k)$ around $k=0$. This yields the analytical expression for the time period $\tau_0$, \begin{equation} \tau_0 = \frac{\hbar}{2K}\frac{2\pi}{\Lambda Z_i}. \label{eq:tau0} \end{equation} A higher-order approximation of $\mathcal{K}(k)$ could also be employed to increase the accuracy of $\tau_0$, but since one should retain a large number of terms to achieve a noticeable improvement, this procedure becomes quite cumbersome, spoiling the simplicity of Eq.\ (\ref{eq:tau0}). However, a simple analytical formula that improves on $\tau_0$ can be derived in the ST regime by performing some approximations directly in the equations of motion. We keep assuming a large interaction parameter $ \Lambda $ and neglect the parameter $\gamma$, as it does not contribute any significant change to the predictions in this regime. Considering that the imbalance oscillates around a positive mean value and using $ \Lambda \gg 1$, Eq.\ (\ref{phase2B}) can be approximated by \begin{equation} \frac{d\varphi}{dt} = \Lambda Z + \left[ \frac{Z}{\sqrt{1-Z^2}}\right]\cos\varphi \simeq \Lambda Z \simeq \Lambda Z_0, \label{Iphaseap} \end{equation} where $ Z_0 = \overline{Z(t)} $ denotes the mean value of the time-dependent imbalance, and we have used that the second term of Eq.\ (\ref{Iphaseap}) averages approximately to zero. Then, assuming $\varphi(0)=0$, we obtain \begin{equation} \varphi(t) = \Lambda Z_0 t, \label{Iphaseapt} \end{equation} which, replaced in Eq.\ (\ref{imb}) with the further assumption that $\sqrt{1-Z^2}\simeq \sqrt{1-Z_0^2}$, yields \begin{equation} \frac{dZ}{dt} = - \sqrt{1-Z_0^2}\,\sin( \Lambda Z_0 t).
\label{dimbst} \end{equation} Integrating the last expression with respect to time and using the initial value $Z(0)= Z_i$, we finally obtain, for small $Z_0^2$, \begin{equation} Z(t) = \left(1-\frac{Z_0^2}{2}\right)\frac{\cos( \Lambda Z_0 t)}{\Lambda Z_0}- \left(1-\frac{Z_0^2}{2}\right)\frac{1}{\Lambda Z_0} + Z_i . \label{imbst} \end{equation} Furthermore, for consistency with $Z_0$ being the mean value of $Z(t)$, we impose \begin{equation} Z_0 = - \left(1-\frac{Z_0^2}{2}\right)\frac{1}{\Lambda Z_0} + Z_i , \label{conz0} \end{equation} which yields a quadratic equation for $Z_0$ with the following solution for $\Lambda\gg 1$, \begin{equation} Z_0 = \frac{Z_i }{ 2 } \left[ 1 \pm \sqrt{1- \frac{4}{\Lambda Z_i^2}} \, \right]\, . \label{z0} \end{equation} Given that we are assuming the ST regime, which implies that $Z(t)$ in Eq.\ (\ref{imbst}) should not change sign during the evolution, we discard the minus sign in front of the square root in Eq.\ (\ref{z0}). Therefore, using Eq.\ (\ref{z0}) we can estimate the ST period $T_{st}=2\pi/(\Lambda Z_0)$ as \begin{equation} T_{st}= \frac{ Z_i \pi \hbar }{ 2 K }\left( 1 - \sqrt{1- \frac{4}{\Lambda Z_i^2}}\right) , \label{eq:tstbuenaa} \end{equation} which will be expressed in units of $\omega_x^{-1}$. The above equation can also be used with $\Lambda$ replaced by $\Lambda_{\mathrm{eff}}$ to better take into account the effective interaction effects. For example, for an initial imbalance $Z_i=0.45$ it yields $ T^{\mathrm{ETM}}_{st}= 8.42 \, \omega_x^{-1} $ and $ T_{st}^{\mathrm{TM}}= 6.05 \, \omega_x^{-1} $, in comparison with the value obtained from the GP simulation, $T_{st}^{\mathrm{GP}}=8.54\,\omega_x^{-1}$.
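Both the ST estimate of Eq.\ (\ref{eq:tstbuenaa}) and its relation to the large-$\Lambda$ limit $\tau_0$ of Eq.\ (\ref{eq:tau0}) can be checked in a few lines (an illustrative sketch with the $N=1150$ parameters quoted above):

```python
import math

K = 1.89e-2 + 2.51e-2   # K = J + F for N = 1150, in units of hbar*omega_x
hbar = 1.0

def T_st(Zi, Lam):
    # ST period estimate of Eq. (eq:tstbuenaa); requires Lam*Zi^2 > 4
    return Zi * math.pi * hbar / (2 * K) * (1 - math.sqrt(1 - 4 / (Lam * Zi ** 2)))

def tau0(Zi, Lam):
    # Leading large-Lam approximation, Eq. (eq:tau0)
    return hbar / (2 * K) * 2 * math.pi / (Lam * Zi)

print(T_st(0.45, 25.5), T_st(0.45, 32.27))  # approx 8.42 and 6.05 (1/omega_x)
# For Lam*Zi^2 >> 4 the two estimates coincide:
print(T_st(0.45, 1e4) / tau0(0.45, 1e4))
```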
\section{\label{sec:NumRes}Numerical results} Aiming at testing the validity of the model equations, we have numerically solved the GP equation using a second order in time, split-step spatial Fourier operator \cite{NR,bao2003} with up to $512\times 512\times 256$ grid points and time steps down to $\Delta t=5\times10^{-5}\omega_x^{-1}$. In Figs. \ref{fig:gpeTM_01} and \ref{fig:gpeTM_045} we show the GP time evolutions for initial imbalances in the Josephson and ST regimes, respectively, as compared to those given by TM models using the bare $ \Lambda $ and the effective $ \Lambda_{\mathrm{eff}}$ values. It becomes clear that the effective approach reproduces the GP results much better than the bare TM model in both regimes. We also notice that the small-oscillation period (\ref{eq:tpeqosc}) calculated from the effective interaction parameter is a much better estimate and the same holds for the period estimates given by Eq.\ (\ref{eq:tstbuenaa}) in the ST regime. \begin{figure} \includegraphics[width=\columnwidth,clip=true]{fig2n.eps} \caption{\label{fig:gpeTM_01}(color online) Time evolution of an initial imbalance in the Josephson regime using the GP equation, the TM and ETM models for the initial condition $Z_i=0.1$ and $\varphi_i=0$. The vertical arrows indicate the small-oscillation period estimates for both models, Eq. (\ref{eq:tpeqosc}). } \end{figure} \begin{figure} \includegraphics[width=\columnwidth,clip=true]{fig3n.eps} \caption{\label{fig:gpeTM_045}(color online) Time evolution of an initial imbalance in the ST regime using the GP equation, the TM and ETM models for the initial condition $Z_i=0.45$ and $\varphi_i=0$. The vertical arrows indicate the ST period estimates arising from Eq.\ (\ref{eq:tstbuenaa}) for both models. } \end{figure} In Fig. \ref{fig:periodos} we compare the time periods as a function of the imbalance using the TM and ETM models together with several periods obtained from GP simulations. 
We also plot with empty circles $T_{st}(Z_i)$ from Eq.\ (\ref{eq:tstbuenaa}) and with horizontal lines $T_{so}$ given by Eq.\ (\ref{eq:tpeqosc}), both for the TM and ETM models. We notice that the predictions for both the ST and the small-oscillation periods are highly accurate within both two-mode models, and that the ETM results agree well with the full GP calculation. We have also included in Fig. \ref{fig:periodos} calculations neglecting $\gamma$ (depicted in thinner lines), so as to emphasize that for the experimentally relevant case of $N=1150$ the inclusion of the parameter $\gamma$ also yields a sizable correction to the Josephson periods. On the other hand, for smaller overlaps between the densities of the localized states, the factor $\gamma$ is substantially reduced (cf. Table \ref{tab:1}) and thus it does not play any significant role in determining these periods. \begin{figure}[ht] \includegraphics[width=\columnwidth,clip=true]{fig4n.eps} \caption{\label{fig:periodos} (color online) Trajectory periods as functions of the initial imbalance $Z_i$ for the TM (dashed blue lines) and ETM (dash-dotted red lines) models according to Eq.\ (\ref{eq:tau}) with $\varphi_i=0$. Thinner lines correspond to calculations neglecting $\gamma$. The vertical lines mark the critical imbalance $Z_c$, while the circles correspond to Eq.\ (\ref{eq:tstbuenaa}) for the TM and ETM models and the horizontal solid lines correspond to the small-oscillation approximations. The stars indicate the periods obtained from the full 3D GP simulation.} \end{figure} We also compare in Fig. \ref{fig:periCompa} the exact results for $\gamma=0$ in the ST regime with the value of $\tau_0$ given by Eq.\ (\ref{eq:tau0}), and with $T_{st}$, Eq.\ (\ref{eq:tstbuenaa}). In particular, we show the results for $\Lambda=16, 25.5$, and $64$, where it may be seen that our estimate, $T_{st}$, provides a simple and improved overall approximation around an extended region in $Z_i$. 
For lower values of $\Lambda$ the assumption $\Lambda \gg 1$ breaks down and hence both approximations become less accurate. For larger values both estimates get closer to the exact result, while our prediction is able to quantitatively describe the exact curve close to $Z_c$ much better than $\tau_0$. For values above $\Lambda\simeq 10^3$, although the error is substantially reduced in both approximations, $T_{st}$ still improves the period calculation over $\tau_0$. \begin{figure}[ht] \includegraphics[width=\linewidth,clip=true]{fig5n.eps} \caption{\label{fig:periCompa}(color online) Comparison of the time periods $\tau$ (in units of $\hbar/(2K)$) in different approximations for $\Lambda=16, 25.5$, and $64$. The solid, dashed, and dash-dotted lines correspond to the exact results (\ref{eq:tau}), the approximation $\tau_0$ (\ref{eq:tau0}), and $T_{st}$ given by Eq.\ (\ref{eq:tstbuenaa}), respectively. The vertical dotted lines mark the critical imbalance $Z_c$ for each value of $\Lambda$.} \end{figure} \section{\label{sum} Summary and concluding remarks} We have studied the dynamics of three-dimensional Bose-Einstein condensates using a two-mode model with an effective on-site interaction parameter and compared it to full 3D Gross-Pitaevskii simulations. We have demonstrated that the periods of the orbits for two-mode models with arbitrary initial conditions can be written in a closed integral form which takes into account the effect of the overlap between the localized densities through the parameter $\gamma$. We have shown that this interaction-driven effect is especially important in the Josephson regime for the experimental conditions of \cite{albiez05}.
Furthermore, based on the dynamical equations for the populations and phase differences in each well, we have derived a simple analytical formula for the period in the self-trapping regime, which accurately reproduces the exact integral expression of the two-mode model and correctly describes Gross-Pitaevskii simulation results for large on-site interaction energy parameters. The three-dimensional numerical simulations prove that the precise determination of the effective on-site interaction energy parameter is essential to correctly reproduce the GP results and thus to obtain accurate estimates of the time periods. The present study opens the possibility of applying the effective two-mode model and the time-period expressions to multiple-well systems with symmetric initial populations. In such cases, the dynamics can be characterized by a single imbalance and a single phase difference, in terms of which the two-mode Hamiltonian can easily be constructed. Studies in this direction are currently underway for a four-well system. \begin{acknowledgement} This work was supported by CONICET and Universidad de Buenos Aires through grants PIP 11220150100442CO and UBACyT 20020150100157, respectively. \end{acknowledgement} \subsection*{Author contribution statement} All authors contributed equally to the paper.
\section{Introduction} \label{section:intro} This paper is a contribution to the study of truth constants in {\L}ukasiewicz logic. Previous works on the subject, often in the wider context of other t-norm-based logics, include \cite{Pavelka:Fuzzy,Hajek:1998,Hajek-Paris-Shepherdson:Pavelka,Esteva-GGN:AddingTruthConstants,Cintula:noteAxiomatizationsPavelka}. Predominantly, this paper is about constants for rational elements of $[0,1]$, but an analogous development for irrational constants in the infinitary variant of {\L}ukasiewicz logic is suggested. Since truth constants are native to propositional logic, the entire paper stays on the propositional level. Two logics are juxtaposed in this paper: (infinite-valued) {\L}ukasiewicz logic ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$ and Rational Pavelka logic RPL. The former was introduced by Jan {\L}ukasiewicz, with later simplifications by himself and others (see \cite{Lukasiewicz-Tarski:Untersuchungen}). The latter, RPL, was introduced by H\'ajek in \cite{Hajek:1998} as a considerable simplification of earlier systems for reasoning with partially true statements as envisaged by Goguen \cite{Goguen:LogicInexactConcepts} and proposed by Pavelka \cite{Pavelka:Fuzzy}. Many resources discuss one or both of these logics; two excellent sources are \cite{Hajek:1998} and \cite{Hajek-Paris-Shepherdson:Pavelka}. In Section \ref{section:definability}, we start by providing a semantic notion of implicit definability, referring to the standard MV-algebra $\standardL$. This is in accord with the ontology of truth constants (rational or other) in {\L}ukasiewicz logic, which arise from the real-valued semantics (intended, and presumably therefore called ``standard'') and permeate the syntax.
Using the fact that each rational is definable in $\standardL$ under this semantic definability notion, we faithfully interpret RPL in a particular theory over ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$; this interpretation result was first presented in \cite{Hajek:1998}, where different evidence for implicit definability is provided. Moreover, \cite[Theorem 2.6]{Hajek-Paris-Shepherdson:Pavelka} offers what is presumably the next best thing to rational constants being term definable: for each formula $\varphi$ of $\RPL$ and positive integer $n$ divisible by the denominators of the truth constants in $\varphi$, there is an ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$-formula $\psi$ s.t.~$\RPL\vdash\varphi^n\equiv\psi$. Along the way, we also discuss other methods of obtaining implicit definability of rationals; e.g., one can rely on the bookkeeping formulas, which do the job very adequately, or one can refer to formulas representing certain McNaughton functions. Mimicking the finitary definition in (finitary) {\L}ukasiewicz logic, a non-finitary definability notion for irrationals in infinitary {\L}ukasiewicz logic $\infL$ is outlined, with some analogous properties (e.g., under a theory containing the defining formulas, valid bookkeeping statements for the irrationals can be derived in the logic). Implicit definability is evocative of the Beth property. We briefly consider this (somewhat broader) context, and discuss how our technique provides another way of demonstrating the lack of the (deductive) Beth property in {\L}ukasiewicz logic, proved in \cite{Hoogland:Thesis}. The status of RPL with respect to the Beth property seems to be unknown. We then weigh the message of this interpretability result, and exploit it somewhat.
It seems to be often overlooked that a considerable part of the metamathematics of {\L}ukasiewicz logic with constants is already addressed by (a special case of) the metamathematics of theories over {\L}ukasiewicz logic. As an example of this phenomenon and an application of the interpretation result, in Section \ref{section:complexity} we inspect the result on complexity of propositional RPL, that is, the recognition of theoremhood and finite consequence relation. Both are $\coNP$-complete; we show that this result is implied by (the same) complexity results for ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. We also show that the notion of definability as such, taken as a decision problem, is hard. Finally, apparently the defining formulas allow for speaking about arbitrary rational truth values, which brings to mind previous discussions of the artificial precision problem in fuzzy logic. We briefly touch on this topic in Section \ref{section:vagueness}. \section{Preliminaries} This paper deals with propositional {\L}ukasiewicz logic and its algebraic semantics. It makes no distinction between propositional formulas (in the language of {\L}ukasiewicz or other logics, possibly expanding the language) and terms of the language of the algebraic semantics; thus \emph{formulas} and \emph{terms} are the same objects. Similarly, we conflate propositional connectives with function symbols. Although {\L}ukasiewicz logic can be presented in several very succinct sets of basic connectives, we make use of a variety of connectives: the language of $\FLew$, namely, the set $\{\cdot,\to,\land,\lor, \overline{0},\overline{1} \}$, the negation $\neg$, the strong disjunction $\oplus$, and the equivalence $\equiv$. The expression $x^n$ stands for $\underbrace{x\cdot x\cdot \dots \cdot x}_{n \text{ times}}$ and $nx$ stands for $\underbrace{x\oplus x\oplus \dots \oplus x}_{n \text{ times}}$.
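The connectives just listed have well-known truth functions on $[0,1]$; the following Python sketch (an illustration only, not part of the formal development) implements them with exact rational arithmetic and checks the abbreviations $x^n$ and $nx$ on sample values:

```python
from fractions import Fraction as F

# Truth functions of the standard MV-algebra [0,1] (exact rationals).
def conj(x, y):   # strong conjunction  x . y
    return max(F(0), x + y - 1)

def imp(x, y):    # implication  x -> y
    return min(F(1), 1 - x + y)

def neg(x):       # negation, definable as  x -> 0
    return 1 - x

def oplus(x, y):  # strong disjunction  x (+) y
    return min(F(1), x + y)

def power(x, n):  # x^n = x . x . ... . x  (n times)
    r = F(1)
    for _ in range(n):
        r = conj(r, x)
    return r

def times(n, x):  # nx = x (+) x (+) ... (+) x  (n times)
    r = F(0)
    for _ in range(n):
        r = oplus(r, x)
    return r

assert power(F(2, 3), 2) == F(1, 3)   # (2/3)^2 = max(0, 4/3 - 1)
assert times(3, F(1, 4)) == F(3, 4)   # 3(1/4) = min(1, 3/4)
```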
{\L}ukasiewicz logic is amply presented and discussed in the literature (see \cite{Cignoli-Ottaviano-Mundici:AlgebraicFoundations,DiNola-Leustean:HandbookMValgebras,Mundici:Advanced}); this paper takes for granted the reader's familiarity with the basics. A general semantics of {\L}ukasiewicz logic is given by the variety of MV-algebras. The \emph{standard} MV-algebra $\standardL$ is given by the {\L}ukasiewicz t-norm on the domain of the real unit interval $[0,1]$. While ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$ is strongly complete w.r.t.~MV-algebras, it enjoys finite strong standard completeness, but not strong standard completeness (see \cite{Hajek:1998, Cintula-EGGMN:DistinguishedSemantics}). The finite MV-chain with $n+1$ elements is denoted $\finiteMV{n+1}$. Rational Pavelka logic $\RPL$ (see \cite{Hajek:1998}) expands the language of {\L}ukasiewicz logic with constants for rationals in $[0,1]$. Extending the notational convention for the two constants $\overline{0}$ and $\overline{1}$, we use horizontal bars to distinguish the constant $\overline{r}$ from its intended interpretation $r$ (this convention extends also to irrationals used here).\footnote{We also use a horizontal bar to denote the complement of a set. A vector of variables is denoted with a tilde (such as $\tilde{x}$).} The intended interpretation, i.e., $\overline{r}$ being interpreted by $r$ for each $r\in \mathds{Q}$, is referred to as the \emph{canonical} interpretation of constants, to distinguish it from other interpretations, given by algebras that contain an isomorphic copy of (a subalgebra of) the standard MV-algebra on the rationals. A \emph{bookkeeping formula} (see \cite{Hajek:1998}) is a formula recording the behaviour of an algebraic operation on elements of the domain interpreting the constants, predominantly, the rationals in $[0,1]$; this domain is closed under the operations of $\standardL$.
For example, $\overline{6/13} \to \overline{5/13} \equiv \overline{12/13}$ is a bookkeeping formula. It is quite usual to include \emph{all} such bookkeeping formulas that are valid under the canonical interpretation in $\standardL$ as \emph{axioms} of a logic featuring constants; the bookkeeping axioms are then the only new axioms that are added. This is indeed the case for $\RPL$ as an axiomatic expansion of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. We do not consider a general semantics for $\RPL$, only the standard one, whose {\L}-reduct is the algebra $\standardL$. By slight abuse of language, we use the notation $\standardL$ also for the $\RPL$-algebra that expands the standard MV-algebra with a canonical interpretation of each rational constant, or even for an algebra that further expands the language with constants whose intended interpretation is irrational (Subsection \ref{subsection:irrationals}). $\RPL$ enjoys finite strong standard canonical completeness (see \cite{Esteva-GGN:AddingTruthConstants}): that is, for a finite $T,\varphi$ in the language of $\RPL$, we have $T\vdash_{\RPL}\varphi$ if and only if $T$ entails $\varphi$ in $\standardL$ under the canonical interpretation of constants.\footnote{In fact, the canonical interpretation of constants is the only one that consistently expands $\standardL$ under the usual bookkeeping axioms; see Section \ref{section:definability}.} Finite strong canonical completeness of $\RPL$ (and finite strong completeness of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$) entails a conservativity result for $\RPL$ over ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$ for the propositional case; the conservativity proof for the two first-order logics is more laborious (see \cite{Hajek-Paris-Shepherdson:Pavelka}).
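The sample bookkeeping formula above is easily checked under the canonical interpretation; a minimal Python sketch, assuming the usual {\L}ukasiewicz implication $x\to y = \min(1, 1-x+y)$:

```python
from fractions import Fraction as F

def imp(x, y):  # Łukasiewicz implication on [0,1]
    return min(F(1), 1 - x + y)

# the sample bookkeeping formula: 6/13 -> 5/13 is equivalent to 12/13
assert imp(F(6, 13), F(5, 13)) == F(12, 13)
```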
Although the proof is based on \emph{finite} strong standard completeness, we do have that if $T$ is infinite and $T,\varphi$ are without constants, then $T\vdash_{\RPL}\varphi$ if and only if $T\vdash_{\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}} \varphi$. \section{Definability} \label{section:definability} A truth constant is specified within the language of a propositional calculus: it is a nullary connective, its behaviour determined by the axioms and rules of the calculus. Providing a semantics for the calculus involves suggesting an interpretation of its terms, including the constants. Nevertheless, in addition to constants specified by the language, some other terms in the language may behave as constants too. As a ready example, many terms of either classical or fuzzy propositional logic behave as the constants $\overline{0}$ and $\overline{1}$: we refer to them as \emph{contradictions} and \emph{tautologies} respectively. Here ``behave as'' can have either of two meanings: the constant $\overline{0}$ is in the language anyway and the term in question is provably equivalent to it, or it is not in the language and one relies on a semantics where the term in question always evaluates to the intended evaluation of the (possibly absent) constant.\footnote{The discourse in mathematical fuzzy logic so far confirms that fuzzy logics are semantics-based; accordingly, \emph{constants} are tied to the intended (real-valued) semantics, standing for the rational or the real numbers thereof. Indeed, where all rational constants from the interval $[0,1]$ are present, one may claim that semantics has leaked into syntax in a substantial way; in particular, such a process narrows down the range of algebraic interpretations of the calculus noticeably.} In either case, one would speak about term definability. We return to this kind of definability in Subsection \ref{subsection:beth}.
Recalling that the set of contradictory terms and the set of tautologous terms of classical propositional logic are both $\coNP$-complete, the simple example above also hints that it is nontrivial to determine if a given term behaves as a particular constant. \begin{definition} Let $a\in [0,1]$. The value $a$ is term-definable in $\standardL$ if and only if there is an MV-term $\varphi$ such that $v(\varphi) =a$ for each valuation $v$ in $\standardL$. \end{definition} In the standard MV-algebra $\standardL$, no constants beyond the classical ones are term-definable: constant non-integer functions are not McNaughton functions (\cite{McNaughton:FunctionalRep}). So the decision problem ``Does $\varphi$ term-define $x$?'' is limited to $x\in\{0,1\}$; we are asking about unsatisfiable or tautologous terms in the real-valued semantics of propositional {\L}ukasiewicz logic. \subsection{Defining the rationals} This section aims to expose a phenomenon that is one step more nuanced. Namely, some variables may behave as constants in the framework of a particular theory: in each model of the theory, the value of such variables is fixed. We will discuss (propositional) \emph{implicit definability}. In fact we shall present two different concepts: implicit definability by a term (i.e., a finite theory) and a variant thereof where the defining theory is necessarily infinite and the logic considered is not finitary. The focus of this paper is on the former concept, which employs the more frequently considered \emph{finitary} {\L}ukasiewicz logic ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. \begin{definition} \label{def_definability} Let $\alg{A}$ be an MV-algebra, $a$ an element of its domain, $\varphi(x_1,\dots,x_n)$ an MV-term, and $1\leq i\leq n$.
The term $\varphi$ implicitly defines the element $a$ in variable $x_i$ in $\alg{A}$ if and only if \begin{itemize} \item $\varphi$ is satisfiable in $\alg{A}$, and \item $v(\varphi)=1^\alg{A}$ implies $v(x_i)=a$ for each $\alg{A}$-valuation $v$. \end{itemize} An element $a\in \alg{A}$ is definable in $\alg{A}$ if and only if there is an MV-term that defines it there. \end{definition} It is not difficult to come up with a suitable theory that implicitly defines rationals in $\standardL$. Indeed, a theory $T$ in variables indexed by the rationals (say, $x_{m/n}$ for $m,n$ coprime), and with just the usual bookkeeping formulas for these variables, obtained from the corresponding MV-operations on the indices of the variables (e.g., $x_{6/13} \to x_{5/13} \equiv x_{12/13}$), as the axioms of $T$, makes all the variables $x_{m/n}$ behave just as the constant $\overline{m/n}$ would behave in RPL. $T$ is infinite, so it does not meet the requirement of implicit definability (of each of the rationals) by a term; however, one quickly observes that to implicitly define $m/n$, only constants with denominator $n$ are needed. This allows for a much narrower theory $T$ that is finite and has a unique interpretation (namely, the canonical one) in $\standardL$. We shall return to this claim at the end of Subsection \ref{subsection:theory} by providing a sketchy example, and we also discuss a redundancy issue there. Another way to implicitly define a rational $a$ is to rely on McNaughton functions: there is a plethora of one-variable McNaughton functions that attain the value 1 exactly on the singleton $\{a\}$, each of them corresponds to an MV-term, and each such term implicitly defines $a$. One construction, using yet another term to achieve implicit definability, was already given in \cite{Hajek:1998}. The book also presents a faithful interpretation of theories in RPL in theories in {\L}, which we reproduce below.
We use the various versions of proving implicit definability for a general point to be made in a subsequent section. We offer a simple variant of the definability axioms. They are only marginally different from bookkeeping, one of the differences being that $1/n$ is defined by a single term without additional variables. \subsection{A theory of constants} \label{subsection:theory} From now on, \emph{definability} and \emph{implicit definability} may be conflated to the latter, for the sake of brevity. The following technical lemma is a useful tool for MV-algebras (cf.~ \cite{Torrens:CyclicElements,Gispert:UniversalClassesMV,Hanikova-Savicky:SAT}). \begin{lemma} \label{lemma_rat_l1} Let $\alg{A}$ be an MV-chain and $n\geq 2$ an integer. The equation $x = (\neg x)^{n-1}$ has a solution in $\alg{A}$ if and only if $\alg{A}$ has a subalgebra isomorphic to ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}_{n+1}$. If the solution exists, it is unique: the smallest nonzero element of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}_{n+1}$. \end{lemma} \begin{corollary} \label{lemma1n} Let $n\geq 2$. The formula $x \equiv (\neg x)^{n-1}$ defines $1/n$ in $\standardL$. \end{corollary} \begin{corollary} \label{corollarymn} Any rational number in the unit interval is definable in $\standardL$. \end{corollary} \begin{proof} In $\standardL$, the formulas $x\equiv \overline{0}$ and $x\equiv \overline{1}$ define rationals $0$ and $1$. Moreover $1/n$ is definable for $n\geq 2$ by a formula in one variable. The two-variable formula $\varphi(x,y)$, defined as $(y\equiv(\neg y)^{n-1}) \cdot (x\equiv m y)$, defines $m/n$ in the variable $x$. \end{proof} We now establish that, under suitable axioms, variables can play the role of rational truth constants. Let $Q = \{q_{m,n} \mid m,n\in N, m\leq n, n>0\}$ be a set of variables. We write $q_{m/n}$ instead of $q_{m,n}$. 
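Lemma \ref{lemma_rat_l1} and Corollary \ref{corollarymn} lend themselves to a direct numerical check in $\standardL$. The following Python sketch (illustrative only; exact rational arithmetic) verifies that $1/7$ solves $x=(\neg x)^{6}$, that it is the only solution on a sample rational grid, and that the second conjunct of the two-variable formula then yields $m/n$:

```python
from fractions import Fraction as F

# Łukasiewicz truth functions on [0,1], exact rationals
def conj(x, y):   # strong conjunction
    return max(F(0), x + y - 1)

def oplus(x, y):  # strong disjunction
    return min(F(1), x + y)

def neg(x):
    return 1 - x

def power(x, m):  # x^m w.r.t. conj
    r = F(1)
    for _ in range(m):
        r = conj(r, x)
    return r

def times(m, x):  # mx w.r.t. oplus
    r = F(0)
    for _ in range(m):
        r = oplus(r, x)
    return r

n = 7
# 1/7 solves x = (neg x)^(n-1) ...
assert power(neg(F(1, n)), n - 1) == F(1, n)
# ... and is the only solution among the grid points k/70
sols = [F(k, 70) for k in range(71)
        if power(neg(F(k, 70)), n - 1) == F(k, 70)]
assert sols == [F(1, 7)]
# the second conjunct of the defining formula then gives m/n, e.g. 3/7
assert times(3, F(1, 7)) == F(3, 7)
```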
Define a theory $T_Q$ in {\L}ukasiewicz logic ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$, using (only) the variables $Q$. The theory $T_Q$ consists of: \begin{itemize} \item axiom $q_{0/n}\equiv \overline{0}$ for each $n > 0$; \item axiom $q_{1/1} \equiv \overline{1}$; \item axiom $q_{1/n} \equiv (\neg q_{1/n})^{n-1}$ for each $n\geq 2$; \item axiom $q_{m/n} \equiv m q_{1/n}$ for each $m\leq n,n\geq 2$. \end{itemize} It follows from Corollary \ref{corollarymn} that $T_Q$ has exactly one model over $\standardL$, namely, the one where each $q_{m/n}$ is interpreted by $m/n$. Under the axiom defining $1/n$, in the language containing the corresponding $q$-variable, the value $m/n$ becomes \emph{term definable} as $m q_{1/n}$; we might use this term directly, instead of introducing a new variable, which is introduced only for uniformity of presentation. \begin{lemma} (Over {\L}ukasiewicz logic,) $T_Q$ proves the following: \begin{itemize} \item each valid bookkeeping formula rendered in terms of the $q$-variables; e.g., \\ $ q_{m/n} \cdot q_{k/l} \equiv q_{(m/n)\cdot(k/l)}$, and analogously for any other bookkeeping formula usually considered; \item $q_{m/n} \equiv q_{m'/n'}$ for $m/n = m'/n'$; \item $q_{m/n}\equiv \overline{1}$ for $m=n$. \end{itemize} \end{lemma} \begin{proof} We show the first statement; the rest is analogous. Consider a slightly stronger statement: let $\varphi$ be a particular bookkeeping statement and let $T_0\subseteq T_Q$ consist of definitions of only those rationals that are used in $\varphi$ and any auxiliary rationals needed to define them; then $T_0$ is finite. We have $T_0 \models_\standardL q_{m/n} \cdot q_{k/l} \equiv q_{(m/n)\cdot(k/l)}$ by definition of the $q$-variables. The provability claim follows from finite strong standard completeness of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$.
\end{proof} On an analogous note (and with an analogous proof), one might substitute any other family of terms that have been proved to enable implicit definability of the rationals (with a suitable indexing of variables): they will be provable from our axioms (and vice versa, each of our axioms will be provable from them) as a consequence of finite strong standard completeness. The axioms just presented are redundant. It is in fact sufficient to define $1/n$ for \emph{each prime} $n$. Then not only $m/n$ becomes term definable for each $m\leq n$, but also, for $n,n'$ two primes, $1/{(k n n')}$ is term definable for each $k$, because the subalgebra generated by $1/n$ and $1/n'$ in $\standardL$ contains the value $1/{(k n n')}$ and the algebra $\finiteMV{k n n'}$ generated by it; this entails that all of the elements are term definable (see \cite{DiNola-Leustean:HandbookMValgebras} and \cite{Torrens:CyclicElements}). We define a translation $\star$ of formulas of RPL into formulas of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$; note that we assume RPL uses a countably infinite set $X$ of variables (we refer to them as $x$-variables). For each variable $x_i$ of RPL, $x_i^\star$ is $x_i$; for each constant $\overline{m/n}$ (other than $\overline{0}$ and $\overline{1}$), $(\overline{m/n})^\star$ is $q_{m/n}$. Extend $\star$ to formulas as commuting with all connectives. If $T$ is a set of RPL-formulas, then $T^\star$ is the set of $\star$-translations thereof. Observe that $\star$ is a bijection between rational constants of RPL (different from $\overline{0}$ and $\overline{1}$) and $q$-variables of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. \begin{corollary}\footnote{Cf. \cite{Hajek:1998}, Lemma 3.3.13(2)} \label{interpret1} Let $T$ be a theory and $\varphi$ a formula in the language of $\logic{RPL}$.
We have $T\vdash_{\logic{RPL}}\varphi$ if and only if $T^\star \cup T_Q \vdash_{\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}} \varphi^\star$. \end{corollary} \begin{proof} $\star$-translations of bookkeeping axioms are provable in $T_Q$. On the other hand, the $T_Q$-axioms become theorems of RPL under de-starring. \end{proof} As a special case ($T$ empty), the theory $T_Q$ proves (translations of) all theorems of RPL. Coming back to bookkeeping formulas, let us see how they might provide an alternative implicit definition of the rationals. We limit the presentation to a sketchy example, leaving the rest to the interested reader. Working in a language with $\cdot$, $\to$ and $\overline{0}$ as basic connectives, we shall show how to define rationals with denominator $3$. The following are some (not all) bookkeeping formulas for variables indexed by such rationals: \begin{align*} x_{2/3} \cdot x_{2/3} &\equiv x_{1/3}\\ x_{1/3} \cdot x_{2/3} &\equiv x_0\\ x_{1/3} \cdot x_{1/3} &\equiv x_0\\ x_{1/3} \to x_0 &\equiv x_{2/3} \end{align*} Under the additional information that $x_0\equiv \overline{0}$, the last axiom reads $\neg x_{1/3} \equiv x_{2/3}$, which, combined with the first one, gives $(\neg x_{1/3})^2 \equiv x_{1/3}$; the latter implies $v(x_{1/3})=1/3$ in $\standardL$ for all models $v$ of the four bookkeeping formulas above, appealing to Corollary \ref{lemma1n}. Observations to be made are (a) the four formulas entail (semantically, and hence also syntactically) an instance of $x\equiv(\neg x)^{n-1}$ for $n=3$, and hence define $1/3$ in the variable $x_{1/3}$; (b) the second and third axioms are redundant; they follow (semantically, and hence syntactically) from the first and the fourth axiom. Analogously, one might find redundancies in bookkeeping for other denominators, and hence also for combining different prime denominators, for example. We do not go into detail.
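The denominator-$3$ example can also be verified mechanically. The following Python sketch (illustrative only) checks that, with $x_0$ interpreted as $0$, the four bookkeeping formulas above are satisfied by the canonical values and force them on a sample rational grid:

```python
from fractions import Fraction as F

def conj(x, y):  # Łukasiewicz strong conjunction
    return max(F(0), x + y - 1)

def imp(x, y):   # Łukasiewicz implication
    return min(F(1), 1 - x + y)

def model(a, b, zero=F(0)):
    # the four bookkeeping formulas, with x_{1/3} = a, x_{2/3} = b,
    # x_0 = 0, read as equalities of truth values
    return (conj(b, b) == a and conj(a, b) == zero
            and conj(a, a) == zero and imp(a, zero) == b)

# the canonical values satisfy all four formulas ...
assert model(F(1, 3), F(2, 3))

# ... and are the only solution among the grid points k/30
grid = [F(k, 30) for k in range(31)]
sols = [(a, b) for a in grid for b in grid if model(a, b)]
assert sols == [(F(1, 3), F(2, 3))]
```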
Thus, bookkeeping formulas are sufficient (and more than that) for implicit definition of all the rationals in $\standardL$. \subsection{Deductive Beth property} \label{subsection:beth} To formulate the Beth property, we recall the definition of the two properties (of explicit and implicit definability) as usually considered, i.e., referring to \emph{variables} rather than elements of a specific semantics. Within the scope of this subsection we shall use the notions as given in Definition \ref{beth_definitions}. There are at least two ways to render the property in {\L}ukasiewicz logic (see the discussion in \cite{Montagna:Interpolation}); what is studied here is the \emph{deductive} version of the Beth property. \begin{definition} \label{beth_definitions} Let $\logic{L}$ be a logic and $\varphi(x,\tilde{z})$ a term in the language of\/ $\logic{L}$. \begin{itemize} \item $\varphi$ explicitly defines $x$ in $\logic{L}$ if and only if there is a term $\delta(\tilde{z})$ (in the $\tilde{z}$-variables only) such that $\varphi(x,\tilde{z}) \vdash_{\logic{L}} x \equiv \delta(\tilde{z})$ for each $x$ and $\tilde{z}$; \item $\varphi$ implicitly defines $x$ in $\logic{L}$ if and only if $\varphi(x,\tilde{z}), \varphi(x',\tilde{z}) \vdash_{\logic{L}} x \equiv x'$ for each $x$, $x'$, and $\tilde{z}$. \end{itemize} \end{definition} Our earlier (semantic) definitions of term- and implicit definability in $\standardL$ nod to the definitions (for variables) above. In particular, any equation, or finite system thereof, that has a unique solution in a variable $x$ in $\standardL$, translates in the obvious way into a propositional equivalence (a term) that implicitly defines $x$ in the logic ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. This is a consequence of the finite strong standard completeness theorem of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$.
\begin{lemma} The term $x\equiv(\neg x)^{n-1}$ implicitly defines $x$ in ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. \end{lemma} We say that a logic has the deductive Beth property if, whenever a term $\varphi$ implicitly defines $x$, the term also explicitly defines $x$. \begin{corollary} {\L}ukasiewicz propositional logic does not have the deductive Beth property. \end{corollary} \begin{proof} We have that $x\equiv(\neg x)^{n-1}$ implicitly defines $x$. In order for this formula to explicitly define $x$, an MV-term would need to exist that is provably equivalent to $x$ under $x\equiv(\neg x)^{n-1}$; so in particular, in $\standardL$, the term would have to take the constant value $1/n$. There is no such MV-term; therefore, $x\equiv(\neg x)^{n-1}$ does not explicitly define $x$ in ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. \end{proof} This result for {\L}ukasiewicz logic was given in \cite{Hoogland:Thesis}, addressing the topic in a comprehensive way over a large landscape of logics. See \cite{Montagna:Interpolation} for the more specific area of logics extending H\'ajek's Basic Fuzzy Logic $\BL$. \subsection{Defining the irrationals} \label{subsection:irrationals} \begin{lemma} An irrational number $a\in [0,1]$ is not definable by an MV-term in $\standardL$. \end{lemma} \begin{proof} If an MV-term $\varphi(x_1,\dots,x_n)$ is satisfiable in $\standardL$, then there is a rational $n$-tuple $\langle r_1,\dots,r_n\rangle$ that satisfies $\varphi$. Hence, for an irrational $a$ and any $1\leq i\leq n$, there is an evaluation $v$ in $\standardL$ s.t.~$v(\varphi)=1$ and $v(x_i)=r_i\not=a$.
\end{proof} Nevertheless, if one compromises on the requirement that implicit definitions are finite objects (i.e., given by a term) and if one shifts from finitary to infinitary {\L}ukasiewicz logic $\infL$, one can implicitly define each irrational, as we now show. Our simple construction illustrates the technique, working with only two values. We leave any further elaboration to the reader. We do not discuss or advocate here the necessity of introducing \emph{all} irrational values as constants. We shall extend the theory $T_Q$. Consider irrationals $a,b\in [0,1]$ s.t. $a\cdot a=b$. Irrationals are cuts on the rationals, and the latter have already been defined by the axioms of $T_Q$. Let \begin{align*} A_1 &= \{q\in \mathds{Q}\cap [0,1] \mid q < a \} \\ A_2 &= \{q\in \mathds{Q}\cap [0,1] \mid a < q \} \end{align*} and analogously $B_1,B_2$ for $b$. Introduce fresh variables (i.e., not occurring in $T_Q$) $i_a, i_b$. Let $$T_{Q,a} = T_Q \cup \{q_{m/n} \to i_a \mid m/n \in A_1\} \cup \{ i_a \to q_{m/n} \mid m/n \in A_2\}$$ Analogously, define $T_{Q,b}$ for $b$. \begin{lemma} In any standard model of $T_{Q,a}$, the variable $i_a$ has the value $a$. \end{lemma} \begin{proof} Since $T_Q$ only admits canonical interpretations of the $q$-variables in the standard MV-algebra, the cut $(A_1,A_2)$, captured by the axioms of $T_{Q,a}$, determines the valuation of $i_a$. \end{proof} Let $T_{Q,a,b}$ be the theory whose axioms are the union of axioms for $T_{Q,a}$ and $T_{Q,b}$. \begin{corollary} $T_{Q,a,b} \vdash_{\infL} i_a \cdot i_a \equiv i_b$. \end{corollary} \begin{proof} Since the values of $i_a$ and $i_b$ are fixed in $\standardL$ by $T_{Q,a}$ and $T_{Q,b}$, we have \\ $T_{Q,a,b} \models_{\standardL} i_a \cdot i_a \equiv i_b$.
Then \\ $ T_{Q,a,b} \vdash_{{\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}_\infty} i_a \cdot i_a \equiv i_b$ by strong standard completeness of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}_\infty$. \end{proof} As in the case of constants for rationals, one can introduce irrational constants incrementally (as above). It is known that, if $a$ is an irrational in $\standardL$, then the subalgebra generated by $a$ is dense (\cite{Gaitan:SimpleOneGeneratedBCK}); as in the case of rationals, when just one value is defined by suitable axioms, other values become term definable within such a theory with an expanded language. \subsection{Completeness theorems} Finally, let us remark on completeness theorems in this new setting. There is nothing to add for general or standard finite strong completeness. However, one might wonder how to spell out the Pavelka completeness result. Let $T$ be a set of formulas and $\varphi$ a formula of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$ (possibly with some $q$-variables). We define $$ ||\varphi||_{T\cup T_Q} = \inf \{ v(\varphi) \mid v \mbox{ model of }T\cup T_Q\}$$ and\footnote{Cf.~also \cite{Hajek-Paris-Shepherdson:Pavelka}, Theorem 3.1 and above.} $$|\varphi|_{T\cup T_Q} = \sup\{m/n \mid T\cup T_Q\vdash_{\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}} q_{m/n}\to \varphi \}$$ The definition rests on a very convenient indexing of the $q$-variables. The same convention occurs in the usual spelling of the provability degree in RPL: in particular, one can perform computations and take suprema on the names of the constants. Both definitions rest on the link between the name of the constant and its intended value; usually the language is chosen in such a way that this link is immediate.
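Returning briefly to the cut construction of Subsection \ref{subsection:irrationals}: in $\standardL$, an axiom $q\to x$ evaluates to $1$ exactly when $x\geq q$, and $x\to q$ exactly when $x\leq q$, so each finite fragment of the cut axioms confines the value of $i_a$ to a rational interval around $a$. A minimal Python sketch (illustrative only; the full theory is infinite, and $1/\sqrt{2}$ here is just a sample irrational):

```python
from fractions import Fraction as F

def imp(x, y):  # Łukasiewicz implication on [0,1]
    return min(F(1), 1 - x + y)

# q -> x holds (value 1) iff x >= q; x -> q holds iff x <= q
def satisfies(x, below, above):
    return (all(imp(q, x) == F(1) for q in below)
            and all(imp(x, q) == F(1) for q in above))

# a few cut axioms for a = 1/sqrt(2) = 0.70710678...
below = [F(7, 10), F(70, 100), F(707, 1000)]   # rationals below a
above = [F(8, 10), F(71, 100), F(708, 1000)]   # rationals above a

assert satisfies(F(7071, 10000), below, above)  # inside [0.707, 0.708]
assert not satisfies(F(1, 2), below, above)     # ruled out by the cut
```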
\section{Recognition and complexity} \label{section:complexity} To provide a sample application of the interpretation result, let us look at the complexity of provability from a finite theory in propositional $\RPL$; in view of the finite strong standard (canonical) completeness theorem for RPL, this is just the finite consequence relation in $\standardL$ (for the propositional language with rational constants). Recall the problem: given a finite theory $T$ and a formula $\varphi$ of $\RPL$, does $T\vdash_{\RPL}\varphi$? For an empty $T$, this problem is just theoremhood in $\RPL$. Without wishing to discuss implementation details, we remark that each rational constant is represented as a pair of natural numbers given in binary. A dual form of the conservativity result gives that provability without constants is a language fragment of provability with constants; for finite theories, the former is $\coNP$-complete, so the latter is $\coNP$-hard. In \cite{Hajek:ComplexityRational}, H\'ajek argues $\coNP$-containment for (theoremhood and) provability from finite theories in $\RPL$ by looking at the standard semantics and asking the reader to verify that the decision method used in \cite{Hajek:1998} for {\L}ukasiewicz logic, which is obtained as a polynomial-time reduction to mixed integer programming, works even in the presence of rational constants in the formulas. Of course, the extension with rational constants is quite natural for the mixed integer programming problem. It is also unnecessary. The statement of Corollary \ref{interpret1} provides a method to argue the result without re-examining the reduction to mixed integer programming: one simply uses the $\star$-translation and appeals to a finite subtheory of $T_Q$ to provide a canonical interpretation for the new variables; in particular, for each constant $m/n$, it is necessary to include the defining axioms for $q_{1/n}$ and $q_{m/n}$ in the finite subtheory.
Next to that, the only complexity result needed is that of $\coNP$-completeness of the finite consequence relation in $\logic{\L}$. A small glitch is that the interpretation provided by $\star$ and (the finite subtheory of) $T_Q$ cannot be used in the form given above because they do not yield a polynomial translation: for a given $n$, the innocent-looking abbreviation $\varphi^{n-1}$ in fact stands for a term of exponential size in $|n|$ (for $n$ represented in binary). This problem can be rectified in the usual manner, using new variables for intermediate powers; in the lemma below, powers of MV-terms and the product of MV-terms denoted by $\Pi$ pertain to the multiplication symbol $\cdot$ of the language of MV-algebras. \begin{lemma} \label{lm_poly_traslation} For $n\in N$, $n\geq 2$, take the binary representation of $n-1$, i.e., let $n-1=\sum_{i=0}^m p_i 2^i$ with $p_i\in\{0,1\}$ and $p_m=1$. Let $I=\{i\mid p_i=1\}$. In $\standardL$, the system of equations \begin{align*} y_0 &= \neg x \\ y_1 &= y_0^2 \\ y_2 &= y_1^2 \\ &\mathrel{\makebox[\widthof{=}]{\vdots}} \\ y_m &= y_{m-1}^2\\ x &= \Pi_{i \in I} y_i \end{align*} has a unique solution, assigning the value $1/n$ to $x$. \end{lemma} \begin{proof} The system implies $x=(\neg x)^{n-1}$. This equation has a unique solution $x=1/n$ in $\standardL$ (\cite{Hanikova-Savicky:SAT}, Lemma 6.3). On the other hand, the assignment $x=1/n$ determines the values assigned to all the $y$-variables. \end{proof} In what follows, the $y$-variables from the above lemma are referred to as ``auxiliary variables''. \begin{lemma} \label{lm_poly_2} \begin{enumerate} \item For $n,m,I$ as above, the system of formulas $$\{ y_0\equiv \neg q_{1/n}, \bigwedge_{i=1}^m (y_i \equiv y_{i-1}^2),\, q_{1/n} \equiv \Pi_{i \in I} y_i\}$$ defines $1/n$ in the variable $q_{1/n}$ in $\standardL$. \item The size of the system is polynomial in the size of $n$. \item Using an analogous argument, one can polynomially define $m/n$.
\end{enumerate} \end{lemma} \begin{theorem} Provability from finite theories in $\RPL$ is polynomially reducible to provability from finite theories in ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. \end{theorem} \begin{proof} Let a finite consecution $(T,\varphi)$ in the language of $\RPL$ be given. Let $\{ m_1/n_1,\dots, m_k/n_k \}$ be all the rational constants in $T,\varphi$, and let all variables therein be among $\{x_1,\dots, x_l\}$. For $i=1,\dots,k$ let $T_i$ be the set of formulas from Lemma \ref{lm_poly_2} (1) and (3), using a fresh pool of auxiliary variables to define each of $1/n_i$ and $m_i/n_i$. Let $T_Q^{fin} = \bigcup_{i=1}^k T_i$, and let $T^\star$ and $\varphi^\star$ result from $T$ and $\varphi$ by replacing their constants with the respective $q$-variables. Then $T\vdash_{\logic{RPL}}\varphi$ if and only if $T^\star \cup T_Q^{fin} \vdash_{\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}} \varphi^\star$ as in Corollary \ref{interpret1}; moreover, the size of $ T_Q^{fin}$ (and also $T^\star$ and $\varphi^\star$) is polynomial in $|T|+|\varphi|$. \end{proof} \begin{corollary} Provability from finite theories in $\RPL$ is in $\coNP$. \end{corollary} The construction of the polynomial translation is no less laborious than the inclusion of the rational constants in the mixed integer programming problem; one cannot argue that the (coNP-containment) result became simpler by omitting the constants. However, what the construction shows is that problems such as the computational complexity of RPL are in fact problems about $\logic{\L}$ (using the translation). In other words, there was no question one could ask about complexity of RPL in the first place, other than those already settled by complexity results for ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$.
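The repeated-squaring idea behind Lemma \ref{lm_poly_traslation} can be illustrated directly: since the MV-product is associative, $x^{m}$ may be computed with $O(\log m)$ multiplications, the successive squares playing the role of the auxiliary $y$-variables. A Python sketch (illustrative only):

```python
from fractions import Fraction as F

def conj(x, y):  # Łukasiewicz strong conjunction (associative)
    return max(F(0), x + y - 1)

def neg(x):
    return 1 - x

def power_sq(x, m):
    # x^m via the binary expansion of m: the successive squares play
    # the role of the auxiliary y-variables, so only O(log m) products
    # are needed instead of m - 1
    r, y = F(1), x
    while m:
        if m & 1:
            r = conj(r, y)
        y = conj(y, y)
        m >>= 1
    return r

# even for a large denominator n, checking x = (neg x)^(n-1) at x = 1/n
# takes only about 20 squarings
n = 1000003
assert power_sq(neg(F(1, n)), n - 1) == F(1, n)
```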
\bigbreak Corollary \ref{corollarymn} provides a way of defining each of the rationals in {\L}ukasiewicz logic in the sense of Definition \ref{def_definability}. This is just one among many variants of the implicit definability result. One can observe that the construction rests on a parametrized formula where by varying the parameter one easily gets the definition of the intended constant for each $m/n$. On the other hand, consider a randomly chosen formula. What, if anything, does it define? We formulate this question as a decision problem, and show that it is hard. \begin{problem} Given an MV-term of $n$ variables $\varphi(x_1,\dots,x_n)$, an integer $1\leq i\leq n$, and a rational number $a$, determine whether $\varphi$ defines $a$ in $x_i$ (in $\standardL$). \end{problem} The decision problem on the domain of triples $\langle \varphi,i,a\rangle$ as above, consisting of exactly those where $\varphi$ defines $a$ in $x_i$ in the standard MV-algebra, is referred to as the $\DEF$ problem. We seek to estimate the complexity of the problem. The estimate will in part be an artefact of our definition. Namely, we have stipulated that each of our defining formulas be satisfiable (in $\standardL$); otherwise, if only the second item in Definition \ref{def_definability} were used, each unsatisfiable formula would implicitly define every rational (and irrational) in a trivial way. The following lemma reflects our choice. \begin{lemma} \label{lemma_def_NPhard} $\DEF$ is $\NP$-hard. \end{lemma} \begin{proof} We reduce $\SAT$ to $\DEF$ (both notions relate to $\standardL$). The reduction function $f$ assigns, to a given formula $\varphi(x_1,\dots,x_n)$, the triple $$\langle \varphi \cdot (x_{n+1}\equiv\neg x_{n+1}), n+1, 1/2\rangle$$ We show that $\varphi \in \SAT$ if and only if $f(\varphi)\in \DEF$. 
Let $\varphi$ be satisfiable; then so is $\varphi \cdot (x_{n+1}\equiv\neg x_{n+1})$, and moreover, if the latter is satisfied by a $v$, then $v(x_{n+1})=1/2$, because $1/2$ is the only solution to the equation $x=\neg x$ (note $x_{n+1}$ does not occur in $\varphi$). On the other hand, if $\varphi$ is unsatisfiable, then so is $\varphi \cdot (x_{n+1}\equiv\neg x_{n+1})$. \end{proof} \begin{theorem} \label{th_def_coNPhard} $\DEF$ is $\coNP$-hard. \end{theorem} \begin{proof} For a given $\langle \varphi,i,a\rangle$ as above, let us consider the condition \begin{equation}\exists v_{\standardL} ( v(\varphi)=1 \,\&\, v(x_i)\not= a)\tag{D}\end{equation} This condition negates the second condition in Definition \ref{def_definability}. It is in $\NP$ (because of small witnesses); to show it is $\NP$-hard, it is enough to consider a variant of the reduction from Lemma \ref{lemma_def_NPhard}: to find out whether $\varphi(x_1,\dots,x_n) \in \SAT$, ask the algorithm for D about $\langle \varphi \cdot (x_{n+1}\equiv\neg x_{n+1}), n+1, 1/4\rangle$. If $\varphi$ is satisfiable, then D is satisfied on $f(\varphi)$; if not, then neither is $\varphi \cdot (x_{n+1}\equiv\neg x_{n+1})$, so D fails. So D is $\NP$-complete. Now we reduce $\bar D$ (the complement of D), i.e., the set $\langle \varphi,i,a\rangle$ s.t. $v(\varphi)=1$ implies $v(x_i)=a$, to $\DEF$ (i.e., to $\bar D$ with the additional condition of satisfiability for $\varphi$). Let $\langle \varphi(\tilde{x}),i,a\rangle$ be an instance. Let $\psi(x_i,\tilde{z})$ be a formula defining $a$ in $x_i$ (as earlier in this paper). Note that \\ (i) $\psi$ is satisfiable, and for any satisfying evaluation $v$, we have $v(x_i)=a$;\\ (ii) $\psi$ can be chosen so that its size is polynomial in the size of the given instance.\\ Let $f(\langle \varphi,i,a\rangle) = \langle (\varphi \vee \psi) (\tilde{x},\tilde{z}) ,i,a\rangle$. 
We claim that $\langle \varphi,i,a\rangle$ satisfies $\bar D$ if and only if $f(\langle \varphi,i,a\rangle) \in \DEF$. If $\langle \varphi,i,a\rangle$ satisfies $\bar D$, then $\varphi\lor\psi$ is satisfiable (because $\psi$ is), and satisfies the condition of $\bar D$, because both $\varphi$ and $\psi$ satisfy it. So $f(\langle \varphi,i,a\rangle)\in \DEF$. If $\langle \varphi,i,a\rangle$ satisfies (D), then there is a $v$ s.t. $v(\varphi)=1$ and $v(x_i)\not=a$. This is also true about $\varphi\lor\psi$, whence $f(\langle \varphi,i,a\rangle)\notin \DEF$. \end{proof} (The proof of) Theorem \ref{th_def_coNPhard} answers the complexity question for a modification of Definition \ref{def_definability} omitting the satisfiability requirement. Such a problem is $\coNP$-complete. \section{More on RPL and {\L}} As stated already, ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$ is a syntactic fragment of $\RPL$: well-formed formulas of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$ are exactly those well-formed formulas of $\RPL$ that do not contain any constants other than $\overline{0}$ and $\overline{1}$. Conversely, $\RPL$ is a conservative extension of ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. The conservative extension statement, however, does not quite capture the tight relation between $\RPL$ and ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$.\footnote{This is also reflected in \cite[{\textsection}3]{Hajek-Paris-Shepherdson:Pavelka}: ``Lemma 2.3 shows that $\RPLpred$ is a very conservative extension indeed of $\Lpred$. There is a sense in which even its new formulae don't express anything which can't be expressed by old formulae.''} To appreciate that, consider for example that Peano arithmetic is a conservative extension of Presburger arithmetic.
But Peano arithmetic does \emph{not} stand to Presburger arithmetic as $\RPL$ stands to ${\mathchoice{\mbox{\rm\L}}{\mbox{\rm\L}}{\mbox{\rm\scriptsize\L}}{\mbox{\rm\tiny\L}}}$. Rather, what we have in $\RPL$ is an axiomatic expansion of the language, as exemplified by extending the theory of groups, presented with the binary group operation only, with a new constant for the neutral element. This is an extension by definition (so it is conservative), and because we can prove existence and uniqueness of the solution to $x\circ x=x$, the new constant for the neutral element can be eliminated. In the case of $\RPL$, \cite[Theorem 2.6]{Hajek-Paris-Shepherdson:Pavelka} gives the exact sense in which constants can be eliminated, as mentioned in Section \ref{section:intro} above. It is not so much the conservativity result as the faithful interpretation result that suggests that the logics stand too close to each other to be considered two genuinely different systems, except for some very theoretical considerations (such as the set of term-definable functions in the standard semantics, for example); for practical applications, however, it is a matter of convenience which of the two logics is used. \section{On precision and vagueness} \label{section:vagueness} Within the framework of {\L}ukasiewicz logic, with or without constants in the language, one can be surgically precise about truth values ascribed to particular propositions. We have seen that, under a simple theory such as $T_Q$, one can pinpoint the truth value of a proposition $p$ to any rational number by positing $p\equiv q_{m/n}$. There is no talk about valuations here: $T_Q$ and $p\equiv q_{m/n}$ are MV-terms or sets thereof.\footnote{Admittedly, $T_Q$ imposes its own semantics: it does not have a model over algebras that do not contain a copy of the rationals.} Given that the rationals are dense within $[0,1]$, this level of precision seems sufficient for practical purposes, which presuppose finiteness.
Should irrational values be needed, however (thus forfeiting finiteness), we have shown how to obtain them by switching the underlying logic to the infinitary logic $\infL$. In recent papers concerning the modelling of vague predicates with fuzzy logic, the predicate \emph{tall} is considered as an example amenable to modelling by the apparatus of formal fuzzy logic. It is so amenable because it is a vague way of referring to an easily measurable quantity, namely \emph{height}, which is not vague, but is measured on a linear scale. The claim here is not that {\L}ukasiewicz logic or its unspecified theories \emph{prove} $p$ to have the truth value $m/n$ or any other value for that matter (unless $p$ turns out to be a theorem or a contradiction of {\L}ukasiewicz logic, of course). Specific theories may indeed prove $p\equiv q_{m/n}$, but then (trivially) the axioms of such theories need to be \emph{at least as strong} as $p\equiv q_{m/n}$. {\L}ukasiewicz logic, however, as a formal system, does not commit to such ad hoc stipulations, but merely makes it possible to \emph{express} them, and to enable deduction on them. One possible rephrasing of the above is that, to the right of the turnstile, we know rather little of how our ``truth constants'' came to be; bar some notational conventions (helping us to distinguish between constants and variables), we are presented with a set of syntactic objects that obey some bookkeeping statements. Detailed accounts \cite{Marra:PrecisionVagueness} and \cite{Behounek:WhichSenseFuzzy}, also echoing earlier works of H\'ajek, explain that ({\L}ukasiewicz or other) fuzzy logic does not insist on assigning specific real values to propositions, but rather, broadly speaking, studies consequence on, and reasoning with, propositions that in principle admit these truth values.
These accounts are offered in response to some attempts at refutation of fuzzy logic on the grounds of the artificial precision problem, which points out the lack of incentive for preferring one intermediate truth value over another for particular propositions. We have argued that precision can be neatly captured by the syntax of {\L}ukasiewicz logic. All of that is happening \emph{left of the turnstile}. We have also reminded ourselves of the simple fact that it is nontrivial to recognize what precisely an arbitrary theory implicitly defines. One might, therefore, understand the account presented here as a supportive argument for the thesis that if precision happens in fuzzy logic at all, it must happen within the assumptions before it can happen within the conclusions, which seems to be one of the main points of the above papers. \section{Conclusion} Thanks to finite strong standard completeness of {\L}ukasiewicz logic, our semantic rendering of implicit definability blends naturally with the more commonly given implicit definability notions that refer to variables (as when introducing the Beth property, see \cite{Hoogland:Thesis,Montagna:Interpolation}) or connectives (as in \cite{Caicedo:ImplicitConnectivesAlgebraizable,Caicedo:ImplicitOperationsMV}), i.e., syntactic objects. The findings of Sections \ref{section:definability} and \ref{section:complexity} seem to confirm the ambivalent status of constants in {\L}ukasiewicz logic: they are useful but dispensable abbreviations, with statements of {\L}ukasiewicz logic with constants being expressible in {\L}ukasiewicz logic without any constants but those that are inherent to it. It may well be that keeping the constants is ``more elegant'', as asserted by H\'ajek in \cite{Hajek:1998}, or that ``\dots even for partial truth, Rational Pavelka logic deals with exactly the same logic as {\L}ukasiewicz logic---but in a very much more convenient way'', as \cite{Hajek-Paris-Shepherdson:Pavelka} remarks.
We do not attempt to argue for or against these views. What is argued here, however, is that metamathematical statements about Rational Pavelka logic naturally translate to statements about (certain theories over) {\L}ukasiewicz logic without any added constants. In particular, it is not the case that {\L}ukasiewicz logic with truth constants is richer, more expressive, or more general than {\L}ukasiewicz logic without constants. \newpage \noindent {\bf Acknowledgements.} The author was supported by CE-ITI and GA\v CR under the grant number GBP202/12/G061, and by the long-term strategic development financing of the Institute of Computer Science RVO:67985807. \bibliographystyle{plain}
\section{Introduction} Backward Raman amplifiers (BRA) provide a promising path to the next generation of short-pulse, high-intensity lasers that may circumvent the damage limit of conventional materials. The main idea is to couple a short seed and a long counter-propagating pump through an electron plasma wave (EPW) in such a way that the pump energy is transferred to the seed, which is amplified and compressed via Raman backscattering \cite{Malkin_PRL_99}. This mechanism was extensively studied with respect to various physical effects, including wave breaking \cite{Toroker_PoP_14,Edwards_PRE_15,Farmer_PRE_15}, longitudinal and transverse nonlinearities \cite{Malkin_PRE_14,Barth_PRE_16,Malkin_PRL_16}, precursors \cite{Tsidulko_PRL_02}, group velocity dispersion (GVD) \cite{Toroker_PRL_12}, inverse bremsstrahlung \cite{Berger_PoP_04}, Landau damping \cite{Malkin_PRE_09,Balakin_PoP_11}, and premature parametric backscattering of the pump \cite{Malkin_PRL_00}. The mechanism has also enjoyed experimental implementation \cite{Suckewer03,Suckewer04,Suckewer05}. In particular, what emerged from these studies of physical effects were techniques, exploiting the laser bandwidth, designed to improve the operation of Raman plasma amplifiers in different regimes. Exploiting the bandwidth, and in particular chirping the frequency, underlies a number of other studies as well \cite{Hur07,Vieux11,Nuter13,Yang15,Balakin16,Mahdian17,Wu15}. Chirping the pump laser, together with exploiting a density gradient, can suppress noise and precursors \cite{Malkin_PRL_00}. It can also overcome relativistic saturation \cite{Barth_PRE_16}. Chirping the seed pulse, and exploiting GVD, can accommodate a shorter plasma for the same amplification as for an unchirped seed \cite{Toroker_PRL_12}.
An alternative method to suppress backscattering from noise envisions splitting the pump energy over a few frequencies \cite{Balakin_PoP_03}, where, in order to preserve the amplification efficiency, the allowed frequency spacing is limited by the spectral width of the single-frequency amplified seed. Noise suppression by multifrequency pulses has also been suggested for inertial confinement fusion systems, where the extent of penetration without backscattering depends on the frequency spacing \cite{Barth_PoP_16,Liu_PoP_17}. Incoherent pump lasers, with not too small a correlation time, can amplify coherent seeds similarly to coherent pump lasers, but with the advantage of less backscattering due to noise \cite{Edward_PoP_17}. Experimental studies demonstrated that, in fact, chirping the pump could compensate for detuning due to density gradients, thereby facilitating the Raman amplification \cite{RenPOP08,YampolskyPOP08,Yampolsky_PoP_11}. Here we show that a wave packet comprising two or more well-separated carrier frequencies can be amplified with an efficiency similar to that of a single-frequency pulse even in the nonlinear regime. We call this regime multifrequency BRA (MFBRA) to distinguish it from the usual single-frequency BRA (SFBRA). Importantly, in addition to mitigating the premature backscattering of the pump, a benefit it shares with other methods that require bandwidth \cite{Malkin_PRL_00,Balakin_PoP_03,Edward_PoP_17}, MFBRA is advantageous because of its beat-wave waveform. Here the width of each spike in the beat-wave waveform is smaller than the envelope width, a feature that can be used advantageously. By a proper preparation of the initial phases that takes into account GVD, the peak intensity of the beat-wave can be engineered to be located at the center of the amplified pulse envelope when leaving the plasma, thereby producing an output pulse with the same total fluence, but with a peak intensity higher than would be possible using SFBRA.
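The beat-wave advantage can be previewed with a simple numerical sketch (illustrative values only; the envelope width, carrier, and beat frequencies below are stand-ins, not the simulation parameters of \Sec{sec3}): for a two-frequency pulse $b(t)\propto e^{-t^2/2\sigma^2}\cos(\tilde{\omega} t)\cos(\Omega t+\varphi)$, the intensity spikes are narrower than the envelope, and the relative phase $\varphi$ controls where the tallest spike sits.

```python
import numpy as np

# Two-frequency beat wave with a Gaussian envelope (illustrative numbers).
sigma = 34e-15                      # envelope width, s
w_c   = 2 * np.pi * 330e12          # mean carrier frequency, rad/s
Omega = 2 * np.pi * 15e12           # beat (half-spacing) frequency, rad/s
t = np.linspace(-200e-15, 200e-15, 200001)

def intensity(phase):
    b = np.exp(-t**2 / (2 * sigma**2)) * np.cos(w_c * t) * np.cos(Omega * t + phase)
    return b**2

# The beat spikes repeat every pi/Omega ~ 33 fs, well inside the envelope;
# the relative phase shifts the tallest spike away from t = 0.
for phase in (0.0, np.pi / 2):
    t_pk = t[np.argmax(intensity(phase))]
    print(f"relative phase {phase:4.2f}: tallest spike at {t_pk * 1e15:+6.1f} fs")
```

For $\varphi=0$ the tallest spike coincides with the envelope maximum at $t=0$; for $\varphi=\pi/2$ it moves to roughly $\pm 15$\,fs. This is the freedom that phase preparation exploits to place the peak at the envelope center on exit.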
This paper is organized as follows. In \Sec{sec2} we employ the fluid model to show that double frequency BRA (DFBRA) is possible and to analyze the conditions for such amplifiers. In \Sec{sec3}, PIC simulations are presented, confirming the effect. These simulations are used to optimize the amplification of seeds with two or more carrier frequencies. We summarize our conclusions in \Sec{sec4}. \section{Doubly three-wave interaction} \label{sec2} Consider the wave equations for the Raman-scattered electromagnetic (EM) wave and the electron plasma wave (EPW) within the linearized fluid model for unmagnetized homogeneous plasma \cite{Kruer}, \begin{eqnarray} \hat{D}_{\rm em} \mathbf{b} &=& - \omega_e^2 n_e \mathbf{a} \label{D_b} \\ \hat{D}_{\rm epw} n_e &=& \frac{c^2}{2}\partial_x^2\left(\mathbf{a}\cdot \mathbf{b}\right) \label{D_ne} \end{eqnarray} where, \begin{eqnarray} \hat{D}_{\rm em} &=& \partial_t^2+\omega_e^2-c^2\partial_x^2 \\ \hat{D}_{\rm epw} &=& \partial_t^2+\omega_e^2-3 v_e ^2\partial_x^2 \end{eqnarray} are differential operators for the EM wave and EPW respectively. In Eqs.~(\ref{D_b})-(\ref{D_ne}), the total EM vector potential is decomposed into a large and stationary pump wave, $\mathbf{a}$, and a small counterpropagating amplified seed, $\mathbf{b}$. The electromagnetic vector potentials, $\mathbf{a}$ and $\mathbf{b}$, are in the units of $m_e c^2/e$ and the electron density perturbation, $n_e$, is rescaled by the unperturbed density, $n_0$. Also, $\omega_e^2=4\pi e^2 n_0/m_e$ is the plasma frequency squared; $e$ and $m_e$ are the electron charge and mass respectively; $c$ is the speed of light; and $v_{e}=\sqrt{T_e/m_e}$ is the electron thermal velocity, with $T_e$ being the electron temperature. In addition, we neglect the ion motion and, as a result, neglect stimulated Brillouin scattering (SBS), notwithstanding that SBS can itself produce a laser compression effect \cite{Lehmann13,Weber13,Chiaramello16,Schluck16,Edwards_PoP_16}. 
This assumption is justified because, for the parameters of interest, stimulated Raman scattering (SRS) is dominant over SBS \cite{Forslund,Edwards_PoP_16}. For simplicity, we consider only two carrier frequencies, but the generalization to more than two frequencies is straightforward. We decompose both the pump and the seed into two spectral components, \begin{eqnarray} \mathbf{a} &=& \Re \left[a_1 e^{i(\omega_{a_1}t+k_{a_1} x)}+a_2 e^{i(\omega_{a_2}t+k_{a_2} x)}\right]\hat{y} \label{a} \\ \mathbf{b} &=& \Re \left[b_1 e^{i(\omega_{b_1}t-k_{b_1} x)}+b_2 e^{i(\omega_{b_2}t-k_{b_2} x)}\right]\hat{y}, \label{b} \end{eqnarray} where $\hat{y}$ is a unit vector in the transverse direction and $\Re$ denotes the real part. The pump carrier frequency spacing is defined as \begin{eqnarray} \label{delta} \delta=\omega_{a_2}-\omega_{a_1}. \end{eqnarray} The seed carrier frequencies, $\omega_{b_1,b_2}$, are downshifted with respect to the pump carrier frequencies, $\omega_{a_1,a_2}$, according to the Raman resonance conditions, \begin{eqnarray} \label{resonance_w} \omega_{b_1,b_2}=\omega_{a_1,a_2}-\omega_{f_1,f_2}, \end{eqnarray} where, $\omega_{f_1,f_2}$ are the EPW frequencies that are determined by the dispersion relation, \begin{eqnarray} \label{dispersion_EPW} \omega_{f_1,f_2}^2=\omega_e^2+3v_e^2 k_{f_1,f_2}^2. \end{eqnarray} Practically, for small $T_e$, we can approximate $\omega_{f_1,f_2}\approx \omega_e$ and choose the seed frequencies in \Eq{resonance_w} accordingly, such that their frequency spacing is the same as the pump frequency spacings, {i.e.,\/}\xspace $\omega_{b_2}-\omega_{b_1}=\delta$. The laser wave numbers, $k_{a_1,a_2}$ and $k_{b_1,b_2}$ are determined by the EM dispersion relations, \begin{eqnarray} \label{dispersion_EM} k_{a,b}=\frac{\omega_{a,b}}{c}\sqrt{1-\frac{\omega_e^2}{\omega_{a,b}^2}}. 
\end{eqnarray} The wave number of the EPW is then set by the resonance condition, \begin{eqnarray} \label{resonance_k} k_{f_1,f_2}=k_{a_1,a_2} + k_{b_1,b_2}, \end{eqnarray} where, for our definitions, $k_{a,b}>0$ [see Eqs.(\ref{a})-(\ref{b})]. The underlying assumption here is that the EPW contains two well-separated carrier frequencies, {i.e.,\/}\xspace the EPW can be decomposed similarly to the EM waves, \begin{equation} n_e = \Re \left[n_1\, e^{i(\omega_{f_1}t+k_{f_1} x)}+n_2\, e^{i(\omega_{f_2}t+k_{f_2} x)}\right] \label{n}, \end{equation} where $n_{1,2}$ are slow complex envelopes. In \Fig{fig1}, we illustrate an example of two Raman resonance conditions and the dispersion relations in $(\omega - k)$ space, where each set of three waves fulfills both the temporal and spatial resonance conditions of \Eq{resonance_w} and \Eq{resonance_k}, respectively. \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth,trim=0cm 0cm 0cm 0cm,clip]{fig1.eps} \caption{(color online). Illustration of the doubly three-wave interaction in ($\omega-k$) space. The frequencies and wave numbers of the pumps (triangles), the seeds (squares), and the EPW obey the resonance conditions of Eqs. (\ref{resonance_w}) and (\ref{resonance_k}) for both indices, 1 (filled points) and 2 (empty points). The solid and dashed lines are the dispersion curves of the EM wave and the EPW, respectively.} \label{fig1} \end{figure} We continue by substituting Eqs.~(\ref{a}-\ref{b}) into the wave equations (\ref{D_b}-\ref{D_ne}) and use the envelope approximation, {i.e.,\/}\xspace neglect the second-order derivatives of the wave amplitudes $b_{1,2}$ and $n_{1,2}$. In the RHS of Eqs.~(\ref{D_b}-\ref{D_ne}) we keep only the resonant terms that obey both the temporal [\Eq{resonance_w}] and spatial [\Eq{resonance_k}] resonance conditions. Furthermore, we neglect all amplitude derivatives in the RHS of (\ref{D_ne}) since these nonlinear terms are considered small.
After applying the dispersion relations (\ref{dispersion_EPW}) and (\ref{dispersion_EM}) in the left-hand sides (LHS) one gets \begin{eqnarray} &\partial_t b_{1,2} -c_{b_{1,2}} \partial_x b_{1,2} = - \frac{\omega_e^2}{2i\omega_{b_{1,2}}}\, n_{1,2}^{*} \, a_{1,2} \label{bbb} \\ & \partial_t n_{1,2} +c_{n_{1,2}} \partial_x n_{1,2} = - \frac{c^2 \left(k_{a_{1,2}}+k_{b_{1,2}} \right)^2}{4i\omega_e}\,a_{1,2}\, b_{1,2}^{*}. \label{nnn} \end{eqnarray} Here, $c_{b_{1,2}} = c^2 k_{b_{1,2}} /\omega_{b_{1,2}}$ are the group velocities of the EM waves. Similarly, $c_{n_{1,2}} = v_e^2 k_{f_{1,2}}/ \omega_e $ are the EPW group velocities, but they are usually negligible in BRA since the EPW is effectively localized relative to the amplified pulse, which propagates at nearly the speed of light. The nonresonant terms that were neglected in the RHS of Eqs.~(\ref{bbb}-\ref{nnn}) contain exponents of the form $\exp[i(k_{a_\alpha}+k_{b_\beta}-k_{f_\gamma}) x]$, where the subindices $\{\alpha,\beta,\gamma \}$ are not all the same. These terms contain fast phases and, therefore, do not contribute to the averaged Raman resonant amplification dynamics. This assumption holds as long as the spacing between the wave numbers of the two EPWs is larger than their width. By analysis of the linearized three-wave system [Eqs.~(\ref{bbb})-(\ref{nnn})] one can show that the resonance width of each spectral component is equal to $2\gamma$, where, for linear polarization, $\gamma=a_0 \sqrt{\omega_a \omega_e}/2$ is the linear Raman growth rate for pump frequency $\omega_a=\omega_{a_1}$ and initial pump amplitude $a_0$. Therefore, to avoid overlap between neighboring resonances, the spectral separation condition is \begin{eqnarray} \label{condition1} \delta > 4\gamma. \end{eqnarray} Note that this condition is analogous to the Chirikov criterion for resonance overlap in nonlinear oscillators \cite{Chirikov}. However, this is not the only condition on the spacing, $\delta$.
Since we consider a short seed, we must guarantee that the beat frequency is large enough such that the seed envelope contains at least one oscillation. The period of the beat oscillation is $2\pi/\Omega$, where $\Omega=\delta/2$ is the beat frequency. Let us define $\tau$ to be the typical duration of the seed, {e.g.,\/}\xspace the full width at half maximum (FWHM). The resulting condition is thus, \begin{eqnarray} \label{condition2} \delta > \frac{4\pi}{\tau}. \end{eqnarray} In the next section, we confirm our analysis through PIC simulations that capture the kinetic effects that were neglected in the fluid model and that treat the laser pump and seed more generally than the envelope approximation utilized above. \section{PIC simulations} \label{sec3} \begin{figure}[tb] \centering \includegraphics[width=1.0\linewidth,trim=0cm 0cm 0.cm 0cm,clip]{fig2.eps} \caption{(color online). PIC simulation of a SFBRA. The intensity of the amplified pulse (a) is in the nonlinear regime. The spectra (b) of the amplified seed (blue), pump (dashed red) and EPW (dotted yellow) obey the Raman resonance condition, $k_{f} =k_a+k_b$, where $k_a=k_0$, $k_b=0.89 k_0$, and $k_f=1.89 k_0$. A secondary Raman backscattering of the seed is also observed as smaller spikes at $k=0.76 k_0$ in the pump spectrum and $k=1.65 k_0$ in the EPW spectrum. Some backscattering of the seed is not unexpected, since the seed reaches an intensity far greater, in fact, than the pump intensity.} \label{fig2} \end{figure} We employ the PIC code EPOCH \cite{epoch} to compare SFBRA (\Fig{fig2}) to DFBRA (\Fig{fig3}). We consider a uniform unperturbed electron density of $n_0=2.5\times10^{19} {\rm cm}^{-3}$, electron temperature of $T_e=30$eV, and immobile ions. In the simulations we use $32$ cells per $\mu$m and $16$ particles per cell. To reduce the simulation time, we employ a window of $0.3$mm in width, moving with the seed. 
In the first example, the (single-frequency) pump wavelength is $\lambda_0=800$\,nm, {i.e.,\/}\xspace $\omega_{0}=2\pi \times 375$ THz, and the plasma is underdense with $n_0/n_{\rm cr}=0.0145$, where $n_{\rm cr}=1.1 \times 10^{21}/(\lambda_0 [\mu m])^2$ is the critical density. The pump intensity is $I_0=10^{14} {\rm W/cm}^2$, so the pump dimensionless amplitude, $a_0= 8.5 \times 10^{-10}\lambda_{0}[\mu m] \sqrt{I_0[{\rm W/cm}^2]}$ (for linear polarization), is $a_0=0.0068$. In terms of \Eq{a}, we choose $a_1=a_0$, $a_2=0$, and $\omega_{a_{1}}=\omega_0$. Due to the resonance condition (\ref{resonance_w}), we downshift the seed frequency by the plasma frequency, $\omega_{b}=\omega_{a}-\omega_{e}= 2 \pi \times 330$ THz, where we neglected the thermal correction. The seed has a Gaussian profile with a FWHM of $80$\,fs. In terms of \Eq{b}, we initiate the seed envelopes by the time-dependent boundary conditions at $x=0$ via \begin{eqnarray} b_{1}&=&\bar{b}_{1} e^{-\frac{(t-t_0)^2}{2\sigma^2}}\\ b_{2}&=&0 \end{eqnarray} where $\sigma=34$\,fs and $t_0=100$\,fs. The seed amplitude is the same as the pump amplitude, $\bar{b}_{1}=a_0=0.0068$. As shown in \Fig{fig2}, at time $t=20$\,ps ({i.e.,\/}\xspace when the seed front is at about $5.8$\,mm inside the plasma) the seed intensity is amplified by a factor of $600$. The efficiency, in this case, is $\eta=0.75$, {i.e.,\/}\xspace the pump transferred $75\%$ of its energy to the amplified pulse.
The spectra (b) of the amplified seed (blue), pump (dashed red) and EPW (dotted yellow) obey the doubly Raman resonance condition, $k_{f_{1,2}} =k_{a_{1,2}}+k_{b_{1,2}}$, where $k_{a_{1}}=k_0$, $k_{a_{2}}=k_0+\delta/c=1.08 k_{0}$, $k_{b_{1,2}}=k_{a_{1,2}}-\omega_e/c=[0.88, 0.96]\times k_0$, and $k_{f_{1,2}}=[1.89, 2.06]\times k_0$. A secondary Raman backscattering is also observed with smaller spikes. } \label{fig3} \end{figure} To illustrate the multifrequency BRA, we introduce, in \Fig{fig3}, a pump that has two frequencies with spacing $\delta$. Explicitly, $\omega_{a_1}=\omega_0$ and $\omega_{a_2}=\omega_{0}+\delta$, such that the beat frequency is $\Omega=\delta/2 \ll\omega_0$. The total pump fluence (energy per unit cross-sectional area) is kept the same as in the single-frequency example, but now it is equally split over the two frequencies, $I_{a_1}=I_{a_2}=I_0/2=5\times 10^{13}\,{\rm W/cm}^2$, so the dimensionless amplitudes are $a_1=a_2=a_0/\sqrt{2}=0.0048$. Then, \Eq{a} at the plasma boundary, $x=0$, becomes \begin{equation} \mathbf{a}(x=0,t)= \sqrt{2}\, a_0 \cos\left(\Omega t\right) \, \cos\left(\tilde{\omega}_0 t \right) \hat{y}, \end{equation} where $\tilde{\omega}_0=\omega_0+\Omega$ is the fast $(\tilde{\omega}_0 \gg \Omega)$ carrier frequency. Note that now, for the same pump fluence, the maximum pump intensity is twice that of the previous example. The seed also comprises two carrier frequencies, $\omega_{b_{1,2}}=\omega_{a_{1,2}}-\omega_{e}$, where, as before, we neglected the thermal correction. Therefore, the seed spacing is the same as the pump spacing, {i.e.,\/}\xspace $\omega_{b_2}=\omega_{b_1}+\delta$. For simplicity, we choose both initial envelopes in a Gaussian form, \begin{eqnarray} b_{1,2} = \bar{b}_{1,2} e^{-\frac{(t-t_0)^2}{2\sigma^2}}e^{i\phi_{1,2}}, \end{eqnarray} where $\bar{b}_{1,2}$ are real amplitudes, and $\phi_{1,2}$ are the phases of each spectral component.
To keep the total seed fluence as in the previous example, we choose $\bar{b}_{1,2}=a_0/\sqrt{2}=0.0048$, so the initial seed at the plasma boundary, $x=0$, reads \begin{eqnarray} \notag \mathbf{b} = \sqrt{2} \, a_0 e^{-\frac{(t-t_0)^2}{2\sigma^2}}\, \cos\left(\tilde{\omega}_b t+ \tilde{\phi} \right) \, \cos\left(\Omega t+\varphi \right), \end{eqnarray} where $\tilde{\omega}_b=(\omega_{b_1}+\omega_{b_2})/2=\tilde{\omega}_0-\omega_e$, $\varphi=(\phi_2-\phi_1)/2$, $\tilde{\phi}= (\phi_1+\phi_2)/2$, and, as before, the beat frequency is $\Omega=(\omega_{b_2}-\omega_{b_1})/2=\delta/2$. In the example shown in \Fig{fig3}, the spacing is $\delta=2\pi \times 30$ THz, which is about $8\%$ of the pump frequency, and no initial phase difference is introduced, {i.e.,\/}\xspace $\phi_1=\phi_2=0$. All other parameters are kept the same as in the SFBRA example [\Fig{fig2}]. In this example, the linear growth rate is $\gamma=2\pi \times 0.31$\,THz, so the separation condition in \Eq{condition1} is met. Also, the seed FWHM is $\tau=80$\,fs, {i.e.,\/}\xspace $4\pi/\tau=2\pi\times 25 \,{\rm THz}<\delta$, as required in \Eq{condition2}. \Fig{fig3} shows that a seed comprising two carrier frequencies can be Raman amplified if the pump also consists of two frequencies that are upshifted by the plasma frequency. This amplification, which begins in the linear regime, continues in the pump depletion regime despite the nonlinear interaction between the waves. By connecting the local maxima, we can define the beat-wave envelope. It is notable that, for the same simulation time, the beat-wave envelope is wider but with a higher peak than that of the single-frequency pulse in \Fig{fig2}. Nevertheless, the efficiency is $\eta=0.64$, which is similar to the efficiency of the SFBRA ($\eta=0.75$). This difference results from the slower linear stage because of the smaller pump amplitude, $a_{1,2}<a_0$. However, in the nonlinear (pump depletion) stage, both examples have the same growth rates (slopes).
This means that the rates of the energy transfer from the pump to the seed are equal, {i.e.,\/}\xspace the efficiency in the nonlinear stage is the same, while the linear stage lasts a different time in each case, resulting in a delayed entry into the nonlinear stage for DFBRA compared to SFBRA [see \Fig{fig5}]. The spectra of the amplified seed, the pump, and the EPW are shown in \Fig{fig3}b. All of them comprise two dominant frequencies, and each triplet fulfills the three-wave interaction resonance condition. This example demonstrates the mechanism of the two-frequency BRA that was introduced in \Sec{sec2} and in \Fig{fig1}. It is clear that, in this example, both seed frequencies, $\omega_{b_{1,2}}$, are independently amplified via the three-wave interaction of \Eq{resonance_w}. Importantly, we can conclude that these two resonances remain well separated and the resonance overlap is insignificant even in the nonlinear regime, where the linear analysis behind \Eq{condition1} is no longer valid. \begin{figure}[tb] \centering \includegraphics[width=1.0\linewidth,trim=0cm 0cm 0.cm 0cm,clip]{fig4.eps} \caption{(color online). PIC simulation of a DFBRA as in \Fig{fig3} but with an initial phase difference of $\pi$. The spectra (b) of the amplified seed (solid blue), pump (dashed red) and EPW (dotted yellow) are similar to those without an initial phase difference [\Fig{fig3}], but the intensity of the amplified pulse (a) has a maximum at the center of the beat-wave envelope.} \label{fig4} \end{figure} To optimize the peak intensity of the amplified pulse, one can manipulate the phases of the seed pulse such that one of the local maxima of the beat-wave coincides with the maximum of the beat-wave envelope just when it exits the plasma. However, this is not the case in the example shown in \Fig{fig3}, where the two highest local maxima are located at the shoulders of the beat-wave envelope and are about $20$ percent lower than the envelope maximum.
Fortunately, such manipulation can be accomplished by taking advantage of GVD \cite{Toroker_PRL_12}, which differentiates between the seed spectral components, {i.e.,\/}\xspace $c_{b_1}\ne c_{b_2}$ [see \Eq{bbb}]. As a result, the relative phase between the two frequencies changes during the passage of the amplified pulse through the plasma, leading to a migration of the location of the highest local maximum inside the beat-wave envelope. This relative phase can be neutralized by an initial phase difference between the two seed spectral components. In \Fig{fig4}, we present an example of such manipulation, in which we consider an initial phase difference of $\phi_2-\phi_1=\pi$ between the seed frequencies. Although it is not the optimal phase difference, the central peak, at $t=20$\,ps, is located almost at the maximum of the beat-wave envelope, resulting in an increase of about $20\%$ in the maximum intensity. Notably, both the spectrum and the efficiency are almost the same as in the previous example [\Fig{fig3}], where the initial phase difference was zero. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth,trim=0cm 0cm 0cm 0cm,clip]{fig5.eps} \caption{(color online) A comparison between the dynamics of the SFBRA (solid blue) and of the DFBRA with initial relative phase differences of zero (dashed red) and $\pi$ (dotted yellow). Presented are the maximum intensities (a) and total fluences (b) versus the propagation time of the amplified pulse in the plasma. } \label{fig5} \end{figure} In \Fig{fig5}, we study the dynamics of the SFBRA and DFBRA by comparing the maximum intensities (\Fig{fig5}a) and the total fluences (\Fig{fig5}b) of the amplified pulses in both cases. The dynamics of the three examples of Figs.~\ref{fig2}--\ref{fig4} are presented versus the propagation time of the amplified pulse.
Notably, the linear growth rate of the SFBRA (blue) is higher than that of the DFBRA (red and yellow), but the efficiencies in the nonlinear (pump depletion) regime are almost the same. This can be seen in \Fig{fig5}b, where the total fluence of the SFBRA grows faster than that of the DFBRA at $t<5$\,ps, but, at later times, both systems have similar slopes of fluence versus time. It is notable that, although the maximum intensity of the SFBRA is higher than that of the DFBRA at earlier times, the beat-wave waveform of the DFBRA has a higher peak intensity at later stages, $t>17$\,ps. Moreover, as shown in \Fig{fig4}, by optimizing the phase between the two carrier frequencies, the peak intensity can be made even higher at the time when the amplified pulse exits the plasma ({e.g.,\/}\xspace $t=20$\,ps). It is clear that the efficiency does not depend on the phase difference between the two seed frequencies, but the locations of the local maxima in the envelope change in time due to GVD. GVD also causes the oscillations superimposed on the monotonically increasing maximum intensity. Notably, the difference between the two examples is a result of the phase difference between the seed frequencies, which is zero in the first case and $\pi$ in the second one. Usefully, one can design the initial seed phases according to the plasma length and density, which determine the phase accumulation of the spectral components. In particular, it can be arranged that, at the plasma edge, a local maximum coincides with the global maximum of the beat-wave envelope. \begin{figure}[tb] \centering \includegraphics[width=1.0\linewidth,trim=0cm 0cm 0cm 0cm,clip]{fig6.eps} \caption{(color online) PIC simulation of a MFBRA comprising three carrier frequencies. The intensity of the amplified pulse (a) is in the nonlinear regime, with a beat-wave form of three frequencies. The total fluence versus the propagation time is plotted in the inset of panel (a).
The spectra (b) of the amplified seed (blue), pump (dashed red) and EPW (dotted yellow) obey the Raman resonance condition, $k_{f_{1,2,3}} =k_{a_{1,2,3}}+k_{b_{1,2,3}}$, where $k_{a_{1,2,3}}=[1,1.08,1.16]\times k_0$, $k_{b_{1,2,3}}=[0.88, 0.96,1.04]\times k_0$, and $k_{f_{1,2,3}}=[1.88, 2.04,2.2]\times k_0$. A secondary Raman backscattering is also observed, with smaller spikes. } \label{fig6} \end{figure} Finally, we note that pulses with more than two frequencies can also be Raman amplified in a similar way. In this case, more care should be taken to avoid resonance overlap when many frequencies are involved, and we leave this to future work. Nevertheless, we present, in \Fig{fig6}, an example of a seed and pump that comprise three evenly spaced carrier frequencies, $\omega_{a_3}=\omega_{a_2}+\delta=\omega_{a_1}+2\delta$. Similarly, for the seed frequencies, $\omega_{b_3}=\omega_{b_2}+\delta=\omega_{b_1}+2\delta$, where each pair fulfills the Raman three-wave resonance, $\omega_{b_i}=\omega_{a_i}-\omega_e$ for $i=1,2,3$. In this example, we used the same laser and plasma parameters as in the previous examples, {e.g.,\/}\xspace $\omega_{a_1}=\omega_0$ and $\delta=2\pi \times 30$ THz. However, to maintain the same total fluence, we used a smaller intensity per spectral component, $I_{a_{1,2,3}}=I_0/3 = 3.33 \times 10^{13}\,{\rm W/cm}^2$. Additionally, in analogy to \Eq{condition2}, we chose a longer seed, $\tau_{\rm FWHM}=120$\,fs (instead of $80$\,fs previously), that can contain the triple-frequency beat-wave waveform of the seed. \section{Conclusions} \label{sec4} In summary, we show that a multifrequency seed can be amplified and compressed by using a multifrequency pump with the same frequency spacing. In the linear regime, a simple fluid model suggests that multifrequency BRA is possible, provided that a sufficiently large spacing is employed, {i.e.,\/}\xspace $\delta>\gamma$, where $\gamma$ is the growth rate.
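As a quick numerical sanity check, the separation conditions and the wavenumber matching quoted above and in the caption of \Fig{fig6} can be verified directly (a sketch; the symbols follow the text):

```python
import numpy as np

THz = 2 * np.pi * 1e12   # angular frequency corresponding to 1 THz
fs = 1e-15

gamma = 3.1 * THz        # linear growth rate quoted in the text
delta = 30.0 * THz       # frequency spacing of the carriers
tau = 80 * fs            # seed FWHM

# Resonance-separation and bandwidth conditions of the linear fluid model
assert delta > gamma                 # Eq. (condition1)
assert 4 * np.pi / tau < delta       # Eq. (condition2): 2pi x 25 THz < 2pi x 30 THz

# Three-colour Raman matching of Fig. 6, in units of k_0
k_a = np.array([1.00, 1.08, 1.16])   # pump
k_b = np.array([0.88, 0.96, 1.04])   # seed
k_f = np.array([1.88, 2.04, 2.20])   # plasma wave
assert np.allclose(k_a + k_b, k_f)   # k_f = k_a + k_b for each triplet
```

Each triplet satisfies the matching condition separately, consistent with the independent three-wave resonances described above.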
Moreover, PIC simulations show that the multifrequency amplification continues in the nonlinear regime, with efficiencies similar to those of the SFBRA. This could not have been predicted from the linearized fluid model. The advantages of amplifying multifrequency pulses are as follows. First, similar to other spectral approaches, such pulses experience reduced premature reflectivity of the pump due to the smaller linear growth rate of each spectral component. However, uniquely to MFBRA, the secondary backscattering of the amplified seed is also reduced, since the seed also comprises multiple carrier frequencies. Therefore, the total (unwanted) reflectivity is reduced, and the amplification efficiency increases. Second, the duration of each spike in the beat-wave envelope is smaller than that of the envelope, and thus one can obtain a much shorter pulse without additional compression. Third, by engineering the initial phases of the seed components, the maximum intensity can be optimized for the same efficiency. Additionally, following the recent study that found that the total critical intensity for self-focusing might be higher for multicolor beams \cite{Sukhinin_PRA_17}, we expect to find a similar delay in the transverse filamentation instability, since the same nonlinear Kerr term is responsible for both effects. Such a delay might enable longer amplification before encountering this transverse instability, making MFBRA even more favorable over SFBRA. Although a relatively large bandwidth is required in MFBRA due to the spacing conditions in Eq.~(\ref{condition1}) and Eq.~(\ref{condition2}), there are considerable benefits in having a shorter spike in the beat-wave form, a higher peak intensity, noise suppression, and the possibly reduced transverse instability. We also note that similar multifrequency amplification might also be realized for Brillouin amplifiers, but further work is required to verify the separation conditions between possible resonances.
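The phase-engineering point above can be illustrated with a toy model of the beat-wave form: the Gaussian envelope is sampled at the lobes of $|\cos(\Omega t+\varphi)|$, so the height of the tallest spike depends on the relative phase of the two colours (a sketch with illustrative values, not the PIC result):

```python
import numpy as np

fs = 1e-15
sigma = 80 * fs / (2 * np.sqrt(2 * np.log(2)))   # Gaussian with 80 fs FWHM
Omega = 2 * np.pi * 15e12                        # beat frequency delta/2
t = np.linspace(-200 * fs, 200 * fs, 400001)

def highest_spike(phi):
    # tallest lobe of the Gaussian-modulated beat |cos(Omega t + phi)|
    b = np.exp(-t**2 / (2 * sigma**2)) * np.abs(np.cos(Omega * t + phi))
    return b.max()

h_aligned = highest_spike(0.0)        # a beat lobe sits on the envelope maximum
h_offset = highest_spike(np.pi / 2)   # lobes straddle the envelope maximum
print(1 - h_offset / h_aligned)       # fractional height loss, roughly 0.1 here
```

The tens-of-percent sensitivity of the peak to the phase in this simplified model is of the same order as the $\sim 20\%$ effect seen in the simulations.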
Also, although we consider here linear polarizations, similar results are predicted for circularly polarized waves, or for linearly polarized waves with mutually perpendicular polarizations. These types of waves have the property of reduced parasitic backscattering \cite{Barth_PoP_16}, but since they do not form a beat-wave waveform, no improvement in the maximum intensity is expected. These results should also carry over to using a multifrequency plasma seed instead of a seed laser \cite{Qu_PRL_17}, or to pulses with nonzero orbital angular momentum \cite{Arteaga_PoP_2017}. \acknowledgments{ This work was supported by NNSA Grant No.~DE-NA0002948 and AFOSR Grant No.~FA9550-15-1-0391. }
\section{Introduction} There are several systems with more than one Lagrangian for given equations of motion which, when quantised, lead to inequivalent quantum systems \cite{Book_1}. These Lagrangians do not differ merely by total derivatives of functions. We call such Lagrangians {\em weakly} inequivalent. Simple examples of weakly inequivalent Lagrangians on $\mathbb{R}^N$ are: \begin{equation} L_M = \frac{1}{2} \dot{x}^i M_{ij} \dot{x}^j , \quad \dot{x}^i \equiv \frac{dx^i}{dt}, \quad M \in \mathrm{Mat}_{N\times N} ( \mathbb{R}),\quad \det(M) \neq 0 , \qquad M^T = M \label{eem} \end{equation} for different choices of the non-singular symmetric matrix $M$. Here $x = ( x^1, x^2, \cdots, x^N)$ are the coordinates. The matrix $M$ is not varied in finding the equations of motion from (\ref{eem}). The latter are independent of $M$, \begin{equation} M_{ij} \frac{d^2 x^j}{dt^2 } = 0 \Rightarrow \frac{d^2 x^i}{dt^2}=0 , \end{equation} $M$ being non-singular. For $M = m \mathbb{I}_{N \times N}$, $m\neq 0$, $L_M$ has the symmetry $x \rightarrow R x$, $R \in SO(N)$, but that is not the case for generic $M$. Also, the canonical momentum $p_i = M_{ij} \dot{x}^j$ depends on $M$. For such reasons, the quantum theories for different $M$ mutually differ, although that is not the case with the equations of motion. But there are also familiar Lagrangians in field theories which share the same symmetries and which are regarded as equivalent. Examples are the Einstein-Hilbert, the Einstein-Palatini and the self-dual formulations of Einstein gravity. All of these actions are diffeomorphism invariant, but the last two actions have redundant variables in the form of vierbein variables. The Lagrangian formalism gives rise to first-class Gauss law constraints which help us in the elimination of these redundant variables. Thus the Gauss law is central to establishing some sort of equivalence between these Lagrangians. A naive application of the Gauss law does suggest this equivalence.
But the Gauss law operator is also a distribution, and therefore must be smeared with test functions. When this aspect is also considered, one finds leftover quantum state vectors, associated with spatial infinity, carrying new quantum numbers. We have discussed them in various contexts before \cite{Bal_2}, and we describe them again here, in section 2. In the framework of local quantum physics, they would be associated with superselection rules. We apply the methods described in this section to self-dual gravity and show how edge states on the sphere at spatial infinity, $S^2_\infty$, arise from the Gauss law. Similar remarks can be made about the Einstein-Palatini action. The conclusion is that actions based on vierbeins are not equivalent to the Einstein-Hilbert action, as they contain additional edge states. The discussion brings out that edge states are to be expected at every spatial boundary, such as the black hole horizon \cite{Bal_Chandar}. Section 3 briefly reviews the known material on the self-dual action. It has a Gauss law based on the non-compact group $SL(2, \mathbb{C})$. Its edge states, emergent from this Gauss law, are discussed in section 4. In sections 5 and 6, it is argued that if the observables of this action are to coincide with those of the Einstein-Hilbert action, the above state vectors must lead to {\em mixed} states. This is analogous to the results of Balachandran, de Queiroz and Vaidya \cite{Mix} that the colored states in QCD are all mixed. These remarks are also readily adapted to the Einstein-Palatini action. {\it Important Remark:} There is recent extensive important work on edge states and low energy theorems by Strominger et al.\ \cite{Strominger}. We believe that the present work has minimal overlap with theirs. \section{The Gauss Law and Edge States in QED} Some of the important points of interest in the treatment of the Gauss law are already brought out by QED.
At a fixed time, if $E^i$ is the electric field and $J_0$ the charge density, the Gauss law classically is \begin{equation} \partial_i E^i (x) + J_0 (x) =0. \label{gauss} \end{equation} In quantum theory, following Dirac, (\ref{gauss}) is regarded as a condition on allowed state vectors $|\cdot \rangle$: \begin{equation} (\partial_i E^i + J_0) (x) | \cdot \rangle =0. \label{gauss_2} \end{equation} We can regard (\ref{gauss_2}) as a condition which picks out the domain of the observables. But $E^i$ in quantum theory are distributions. Thus, we must integrate (\ref{gauss_2}) after multiplying by suitable test functions and rewrite it so that derivatives appear only on the test functions. That is because derivatives of distributions are understood dually, in terms of test functions. So let $C^\infty$ denote infinitely differentiable functions, and $C_0^\infty \subset C^\infty$ those functions which also vanish fast at spatial infinity. Then we can rewrite (\ref{gauss_2}) as \begin{equation} G(\Lambda) |\cdot \rangle := \int d^3x~ \left[ -E^i \partial_i \Lambda + \Lambda (x) J_0(x)\right] |\cdot \rangle =0 , \label{gauss_3} \end{equation} with $\Lambda \in C_0^\infty$. Classically, (\ref{gauss_3}) implies (\ref{gauss_2}) by partial integration, since $\Lambda$ vanishes fast at infinity. The exponentiation of $G(\Lambda)$, namely $e^{i G(\Lambda)}$, generates a group. The electron field $\Psi_0$ under its action acquires a local phase $e^{i \Lambda(x)}$ at $x$, \begin{equation} e^{iG(\Lambda)} \Psi_0(x) e^{-iG(\Lambda)} = e^{i \Lambda(x)} \Psi_0(x) , \label{phase1} \end{equation} and the phase becomes $\mathbb{I}$ at $\infty$, since $\Lambda(x) \rightarrow 0$ as $|\mathbf{x} | \rightarrow \infty$.
Thus the group associated with $G(\Lambda)$ can be thought of as the group $\mathcal{G}_0^\infty$ of maps from $\mathbb{R}^3$ to $U(1)$ which become the identity at $\infty$, the superscript $\infty$ denoting this fact, while the subscript $0$ denotes that the group is connected to the identity. We note that all ``observables'' are required to commute only with these ``small'' gauge transformations $e^{i G(\Lambda)}$. The electron field $\Psi_0$ is not invariant under small gauge transformations. In any case, $\Psi_0$ changes sign under $2\pi$-rotations, and thus it is not an observable. Reverting to (\ref{gauss_3}), we can next consider \begin{equation} Q(\mu) = \int d^3x~ \left[ -E^i ~\partial_i \mu + \mu (x) J_0(x)\right] , \end{equation} where $\mu$ approaches a constant $\mu_0$ at $\infty$. Let us call the group generated by such $Q(\mu)$ $\mathcal{G}_0$. For $\mu \neq \Lambda$, $Q(\mu) | \cdot \rangle$ need not be zero. If $\mu = \Lambda$, one has $Q(\mu) = G( \mu)$, and thus $Q(\Lambda) | \cdot \rangle =0$. This gives the subgroup $\mathcal{G}^\infty_0 = \left\{ e^{i Q(\Lambda)}\right\}$ of $\mathcal{G}_0$ which acts trivially on $|\cdot \rangle$. Thus the effective group acting on $|\cdot \rangle$ is $ \mathcal{G}_0 / \mathcal{G}_0^\infty$. The group $\mathcal{G}_0 / \mathcal{G}_0^\infty$ is isomorphic to $U(1)$, for any element of this quotient is entirely characterized by the value of $\mu$ at spatial infinity. As an example, choose for $\mu$ a globally constant $\mu = \mu_0$. In that case, \begin{equation} Q(\mu) = \mu_0 \int d^3x~ J_0(x) = \mu_0 Q( \mathbb{I}) \equiv \mu_0 Q_0 , \label{charge_q} \end{equation} where $\mathbb{I}$ is the constant function with the value $1$ and $Q_0$ is the canonically normalised charge. \\ An important question now is: how do we create sectors with $Q(\mu)\neq 0$ from the vacuum?
For this purpose, we can consider \begin{equation} W(x) = \exp{ i \int_x^\infty dx^\lambda ~ A_\lambda (x)} , \label{Wil_line} \end{equation} where the integral is at a fixed time along the spacelike straight line \begin{equation} \{ x + \hat{n} l , 0 \leq l < \infty\} \end{equation} in the direction of a unit vector, say $\hat{n}$. Under a gauge transformation, \begin{equation} A_\lambda \rightarrow A_\lambda + \partial_\lambda \mu , \end{equation} one has \begin{equation} W(x) \rightarrow e^{i \mu_\infty ( \hat{n} )} W(x) e^{-i \mu(x)}, \end{equation} where \begin{equation} \mu_\infty(\hat{n}) = \lim_{l \rightarrow \infty } \mu( x + \hat{n} l ) . \label{angle_inf} \end{equation} Thus, in view of (\ref{phase1}) and as Dirac observed \cite{Dirac,Mandelstam}, \begin{equation} W(x) \Psi_0(x) = \Psi(x) \end{equation} is invariant under small gauge transformations, whereas under large ones, \begin{equation} e^{i Q(\mu)} \Psi(x) e^{-iQ(\mu)} = e^{ i \mu_\infty ( \hat{n} ) } \Psi(x) . \label{sky} \end{equation} Therefore, if $\mu_\infty$ is equal to a constant $\mu_0$, the state $\Psi(x)| 0 \rangle$, where $| 0 \rangle$ is the vacuum, is the vector with $Q_0=1$. But at the same time, it is invariant under small gauge transformations. In $\mathcal{G}_0$, we can let $\mu_\infty$ be an angle-dependent function. We can generate such elements of $\mathcal{G}_0$ by letting $\mu(x)$ approach an angle-dependent limit $\mu_\infty( \hat{n})$, as indicated by (\ref{angle_inf}). Then, on exponentiation, and restricting $\hat{n}$ to a Cauchy slice, we get the Sky group of \cite{Mix}, isomorphic to the group of maps from the celestial sphere $S^2_\infty$ at infinity to $U(1)$. As $\mu$ runs over its possibilities, we get a collection of Sky charges $\mu_\infty (\hat{n})$, where $\hat{n}$ now fixes the representation of $\mathcal{G}_0 / \mathcal{G}_0^\infty$.
We have in this case, if $Q(\mu) | 0 \rangle=0$, \begin{equation} Q(\mu) \Psi(x) |0 \rangle = \mu_\infty ( \hat{n}) \Psi(x) | 0 \rangle . \label{rep_sky} \end{equation} Since the representation given by (\ref{rep_sky}) depends only on the asymptotic data $\mu_\infty(\hat{n})$, the corresponding quantum states are {\it edge states}. In the discussion above, we have assumed that the charge of the electron is unity in suitable units. For a field of charge $q$, $A_\lambda$ must be multiplied by $q$, and we can also easily restore $q$ appropriately in the above equations. {\it Important Remarks}: In previous work \cite{Bal_QCD} (in particular, see the discussion in the appendix), the in-state was constructed using $\hat{n}$ as a time-like unit vector. We can do that here also. But then the commutators of $A_\lambda$ with other fields are known only after solving the field equations, whereas by choosing $x+ \hat{n}l$ to lie on a Cauchy slice, we know all the equal-time commutators, including those of $W$ and $\Psi$. For this reason, in what follows, we choose $\hat{n}$ to be spacelike, a unit tangent to a spacelike surface. \subsection{ On Gauge Redundancies in QED} The fields in the QED Lagrangian are the connection $A_\mu$ and the electron field $\Psi_0(x)$. Gauge invariance implies that there are redundant degrees of freedom in the QED Lagrangian. The new quantum state vectors which may arise from these redundant degrees of freedom get restricted by the Gauss law. They do not get entirely eliminated, however. The charged vectors with charges $q$ and the Sky group vectors characterised by $\mu_\infty( \hat{n})$ are examples. Thus the redundant degrees of freedom in the Lagrangian leave an imprint on the state vectors. It is important that the redundant degrees of freedom implicit in the Lagrangian do not appear at the level of local observables \cite{Bal_2, Mix}.
They must by definition commute with $G(\Lambda)$, but, because of locality, they then also commute with $Q(\mu)$. The reasoning is as follows. Let $K$ be a compact spatial region and let $\mathcal{O}_K$ be a local observable localised in $K$. Then by (\ref{gauss_3}), and its generalisation to local observables, \begin{equation} \left[ Q(\mu), \mathcal{O}_K \right] = \left[ Q(\mu|_K), \mathcal{O}_K \right] , \end{equation} where $\mu|_K$ = $\mu$ restricted to $K$. But we can find a $\Lambda \in C^\infty_0$ such that $\Lambda|_K = \mu|_K$. In fact there are infinitely many such $\Lambda$. Hence, \begin{equation} \left[ Q(\mu), \mathcal{O}_K \right] = \left[ Q(\Lambda|_K), \mathcal{O}_K \right] = \left[ G(\Lambda|_K), \mathcal{O}_K \right] = \left[ G(\Lambda), \mathcal{O}_K \right] , \label{trick} \end{equation} where the last step follows from the fact that the commutator depends only on $\Lambda|_K$. Hence, \begin{equation} \left[ Q(\mu), \mathcal{O}_K \right] = 0. \end{equation} As the local observables commute with $Q(\mu)$, they are all constructed from electric and magnetic fields. Thus $W(x)$ is {\em not} a local observable. The important result explained in the last few paragraphs is that observables, being local, commute with both small and large gauge transformations. \section{The Self-Dual Action for Gravity} The material in this section is rather well-known \cite{Ashtekar}. It is added here for completeness. The self-dual gravity action in four dimensions is \begin{equation} S = \int d^4x~ (\det e)\, e_C^a~ e_D^b ~\mathcal{F}_{ab}^{CD} ( A) , \label{sdual} \end{equation} where the upper case $C,D\in \{1,2,3,4\}$ are Lorentz group indices and $a,b$ are spacetime indices. The connection $A$ and its curvature $\mathcal{F}$ are Lorentz Lie algebra-valued, in the self-dual $(1,0)$ representation of the Lie algebra. They are complex. (The antisymmetric product of two four-vector representations contains $(1,0)$, of course).
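The necessity of complex fields in the $(1,0)$ projection can be made concrete: on antisymmetric rank-2 tensors in Minkowski signature, the Hodge dual squares to $-1$, so the self-dual projector is unavoidably complex. A small numerical check (a sketch; the conventions $\epsilon_{0123}=+1$ and $\eta=\mathrm{diag}(1,-1,-1,-1)$ are ours):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

# Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    mat = np.zeros((4, 4))
    mat[range(4), perm] = 1.0
    eps[perm] = np.linalg.det(mat)       # sign of the permutation

def dual(F):
    # (*F)_{ab} = (1/2) eps_{abcd} F^{cd}, indices raised with eta
    F_up = eta @ F @ eta
    return 0.5 * np.einsum('abcd,cd->ab', eps, F_up)

F = rng.normal(size=(4, 4))
F = F - F.T                              # random antisymmetric tensor

assert np.allclose(dual(dual(F)), -F)    # ** = -1 in Lorentzian signature
P = lambda G: 0.5 * (G - 1j * dual(G))   # self-dual projector, necessarily complex
assert np.allclose(P(P(F)), P(F))        # idempotent
assert np.allclose(dual(P(F)), 1j * P(F))  # self-dual part has eigenvalue +i
```

Since the dual has eigenvalues $\pm i$ rather than $\pm 1$, real tensors cannot be split into self-dual and anti-self-dual parts, which is why $A$ and $\mathcal{F}$ above are complex.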
For the Palatini action in its real form, $A$ and $\mathcal{F}$ are real in (\ref{sdual}). The connection and the curvature are valued in the four-vector representation of the $SL(2,\mathbb{C})$ Lie algebra. As mentioned earlier, (\ref{sdual}) has redundant degrees of freedom associated with $e^a_A$. This is accompanied by $SL(2, \mathbb{C})$ gauge invariance. After a Legendre transformation and associated manipulations, one finds the canonically conjugate variables $A^C_i, \tilde{E}^j_D$ \cite{AshJo}: \begin{eqnarray} \left[ A^C_i(x), \tilde{E}^j_D(y)\right] = \delta^C_D \delta^j_i \delta^3(\mathbf{x} - \mathbf{y}) , \label{sd2} \\ \tilde{E}^A _i= \frac{1}{2} \epsilon^{ABC} \epsilon_{ijk} e^j_B e^k_C \Leftrightarrow e^i_A = \frac{1}{2} \frac{1}{ | \det \tilde{E}|^\frac{1}{2} } \epsilon_{ABC} \epsilon^{ijk} \tilde{E}_j^B \tilde{E}_k^C \label{sd3} \end{eqnarray} at a fixed time slice. All other commutators involving $A$ and $\tilde{E}$ vanish on this slice. Let $T(C)$ be the $SO(3)$ Lie algebra generators in the spin 1 representation: \begin{equation} T(C)_{AB} = - \epsilon_{CAB} , \end{equation} and set \begin{equation} A_i = A_i^C T(C), \qquad \tilde{E}^j = \tilde{E}^j_D T(D). \end{equation} Then $\mathcal{F}_{ij}(A) = {F}_{ij}^C (A) T(C)$ is also Lie algebra valued, $F_{ij}^C(A) T(C)$ being the curvature of the connection $A_i$. For the self-dual Lagrangian, there are also the following first class constraints in the absence of matter fields: \begin{enumerate}[(i)] \item The Gauss law: \begin{equation} \mathcal{D}_{i} \tilde{E}^{i} | \cdot \rangle = 0 \end{equation} on allowed state vectors $|\cdot \rangle$. Here we have used a standard notation: \begin{equation} \mathcal{D}_i \tilde{E}^i = \partial_i \tilde{E}^i + A_i^\alpha \tilde{E}^{i \beta} [ T(\alpha) , T(\beta) ]. \end{equation} \item The vector constraint: \begin{equation} \Tr \tilde{E}^i F_{ij} | \cdot \rangle = 0 .
\end{equation} \item The scalar constraint: \begin{equation} \Tr \tilde{E}^i \tilde{E}^j F_{ij} | \cdot \rangle = 0. \end{equation} \end{enumerate} \section{ The Self-Dual Action: Edge States} We focus on the Gauss law. We must smear it with test functions as before. For this purpose, we let $\Lambda$ and $\mu$ henceforth be Lie algebra valued, $\Lambda \equiv \Lambda^C T(C)$, $\mu = \mu^C T(C)$, where the functions $\Lambda^C \in C^\infty_0$ while $\mu^C \in C^\infty$, i.e.\ the $\mu^C$ can approach angle-dependent limits at infinity. Then with \begin{equation} G(\Lambda) = - \int d^3x ~ \Tr ( \mathcal{D}_i \Lambda) \tilde{E}^i , \end{equation} one has \begin{equation} G(\Lambda) | \cdot \rangle = 0 , \end{equation} which are the Gauss law constraints. We can also define $Q(\mu)$: \begin{equation} Q(\mu) = - \int d^3x ~ \Tr ( \mathcal{D}_i \mu) \tilde{E}^i . \label{new} \end{equation} As mentioned, we now allow the possibility that the $\mu^C(x)$ approach angle-dependent limits $\mu^C_\infty(\hat{n})$ as infinity is approached, as in the abelian case. In what follows, $\mu_\infty(\hat{n})$ denotes $\mu^C_\infty(\hat{n}) T(C)$. The group that the $G(\Lambda)$'s generate is identified with $\mathcal{G}^\infty_0$, the group of maps from $\mathbb{R}^3$ to the complexified $SO(3)$ which become identity maps at $\infty$ and are also connected to the identity. The group that the $Q(\mu)$'s generate is instead $\mathcal{G}_0$, which consists of maps from $\mathbb{R}^3$ to the complexified $SO(3)$ whose elements may not approach the identity at $\infty$, but are connected to the identity. If $g$ is an element of $\mathcal{G}_0$ or $\mathcal{G}^\infty_0$, we denote its representative in quantum theory by $U(g)$. On quantum states, the effective group is $\mathcal{G}_0 / \mathcal{G}^\infty_0$. Its representations give the edge states of the self-dual action. These are edge states since their action on quantum fields depends only on the asymptotic data on $S^2_\infty$.
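The generators $T(C)_{AB}=-\epsilon_{CAB}$ entering the smeared Gauss law above satisfy the $so(3)$ commutation relations $[T(A),T(B)]=\epsilon_{ABC}T(C)$, as a few lines of numerics confirm (a sketch):

```python
import numpy as np

# 3D Levi-Civita symbol, eps[0, 1, 2] = +1
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0

# spin-1 (adjoint) generators: T[C][A, B] = -eps_{CAB}
T = -eps

for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        # [T(a), T(b)] = eps_{abc} T(c)
        assert np.allclose(comm, np.einsum('c,cAB->AB', eps[a, b], T))
```

Replacing the real parameters $\Lambda^C$, $\mu^C$ by complex ones complexifies this algebra without changing the structure constants checked here.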
In particular, in analogy to the $U(1)$ case, if $\mu_\infty$ is restricted to constant functions, the $Q(\mu)$ generate the complex $SO(3)$ Lie algebra, while if $\mu_\infty$ is allowed to have $\hat{n}$-dependence, we get the Sky group or a subgroup thereof. This Sky group is a natural generalisation of the one formulated for gauge theories \cite{Bal_2} to the self-dual Palatini action. To understand this material better, we generalise (\ref{sky}) and (\ref{trick}). Instead of $\Psi_0$ we use $\tilde{E}^i$. Then (\ref{phase1}) becomes \begin{equation} U(e^{i\Lambda}) \tilde{E}^i (x) U(e^{-i\Lambda}) = e^{ i \Lambda(x)} \tilde{E}^i (x) , \label{Pal_Sky} \end{equation} where now $\Lambda(x)$ is a matrix like $T(\alpha)$, while $\tilde{E}^{i}$ is a column vector ($\tilde{E}^{i}_C$) in the adjoint representation, and not its contraction with $T(C)$ as in section 3. Thus we are changing notation for convenience. The Wilson line $W(x)$ looks the same as (\ref{Wil_line}), except that it now includes a path-ordering $\mathcal{P}$: \begin{equation} W(x) = \mathcal{P} \exp{i \int_x^\infty dx^\lambda A_\lambda(x)} , \label{Wil_2} \end{equation} \begin{equation} U(e^{ i\mu}) W(x) U(e^{-i\mu}) = e^{ i \mu_\infty ( \hat{n} ) } W(x) e^{-i \mu(x)} . \label{Wil_3} \end{equation} In (\ref{sky}), we change $\Psi(x)$ to \begin{equation} W(x) \tilde{E}^i(x) , \label{change} \end{equation} so that \begin{equation} U(e^{i \mu}) W(x) \tilde{E}^i (x) U( e^{-i\mu}) = e^{ i \mu_\infty ( \hat{n} ) } W(x) \tilde{E}^i(x) . \end{equation} Note that $\mu_\infty (\hat{n})$ is a matrix. We can create vector states characterised by $\mu_\infty(\hat{n})$ in their transformation law by starting with a vector $| \cdot \rangle$ invariant under $U(e^{i\mu})$: \begin{equation} U(e^{i\mu}) | \cdot \rangle = | \cdot \rangle.
\end{equation} Then, applying (\ref{change}) on the vacuum, we get such a state, since \begin{equation} U(e^{i\mu}) W(x) \tilde{E}^i(x) | 0 \rangle = U(e^{i\mu_\infty( \hat{n})}) W(x) \tilde{E}^i(x) | 0 \rangle . \label{mixing} \end{equation} From this equation, we read off that if $\mu_\infty(\hat{n})$ runs over all constant values, we get the complexified $SO(3)$, whereas if $\mu_\infty$ runs over all functions on $S^2_\infty$, we get the complexified Sky group. \subsection*{Remarks on Unitarity} The transformation property (\ref{Wil_3}) of $W(x)$ depends only on $A$ transforming as a connection and not on its reality. The loop quantum gravity program \cite{Ashtekar} extensively employs this fact. But $W(x)$ is not even formally unitary, since the $A_i$ are complex. The implications of this observation are not clear. \section{ On Observables and Superselection Rules} Locality is a concept which is difficult to formulate in quantum gravity, as the concept has to be diffeomorphism invariant. Because of this requirement, no useful diffeomorphism invariant {\em local} observables have been found on $\mathbb{R}^3$. Besides this issue, we have the requirement that, whatever be the definition of observables, they must commute with $G(\Lambda)$. Now $\tilde{E}^a_i$ and $e_a^i$ undergo $SL(2, \mathbb{C})$ gauge transformations under the action of $U(e^{i\Lambda})$. For example, \begin{equation} U(e^{i\Lambda}) \tilde{E}_i^a(x) U(e^{-i\Lambda}) = \left(e^{i\Lambda(x)}\right)^a_{~~b} \tilde{E}^b_i (x) , \label{trans} \end{equation} where $e^{i\Lambda}$ becomes the identity as $x \rightarrow \infty$. Invariance of observables under (\ref{trans}) means that no observable can depend on $\tilde{E}^a_i$ or $e^i_a$ for finite $x$ except in $SL(2, \mathbb{C})$ invariant combinations. But they can depend on the frames ``at infinity'', the asymptotic forms of $e^i_a(x)$ as $x \rightarrow \infty$.
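That only invariant combinations of $\tilde{E}$ survive can be seen concretely in the vector representation: a complexified rotation changes the components of $\tilde{E}^a_i$ but preserves $\delta_{ab}\tilde{E}^a_i \tilde{E}^b_j$. A numerical sketch (numpy only; the truncated-series matrix exponential is our own helper):

```python
import numpy as np

# adjoint so(3) generators, T[c][a, b] = -eps_{cab}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
T = -eps

def expm(A, terms=40):
    # truncated matrix exponential, adequate for small matrices
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(1)
Lam = rng.normal(size=3) + 1j * rng.normal(size=3)  # complex gauge parameter
g = expm(np.einsum('c,cab->ab', Lam, T))            # element of SO(3, C)
assert np.allclose(g.T @ g, np.eye(3))              # complex-orthogonal

E = rng.normal(size=(3, 3))                         # stand-in for E^a_i at one point
E2 = g @ E                                          # gauge-transformed components
assert not np.allclose(E2, E)                       # components do change
assert np.allclose(E2.T @ E2, E.T @ E)              # delta_ab E^a_i E^b_j is invariant
```

The exponential of an antisymmetric complex matrix is complex-orthogonal, so the quadratic combination survives even the non-compact (complexified) gauge transformations.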
In QED, local observables commuted with $Q(\mu)$ because of locality \cite{Bal_2}, since they did so with $G(\Lambda)$. That argument is not available in the case of gravity. But suppose that we want the self-dual theory to be equivalent, at the level of observables, to the Einstein-Hilbert action. Then we must assume that all its observables $\mathcal{O}$ commute with $Q(\mu)$. To proceed, let us agree on this suggestion. Then if $\mathcal{O}$ is any observable, (\ref{mixing}) shows that \begin{equation} U(e^{i\mu}) \mathcal{O} W(x) \tilde{E}^i(x) | 0 \rangle = U(e^{i\mu_\infty( \hat{n})}) \mathcal{O}W(x) \tilde{E}^i(x) | 0 \rangle . \label{tw} \end{equation} (Here $\mathcal{O}$ is not a matrix, while $\tilde{E}^{i}$ is a column vector ($\tilde{E}^{i}_b$) in the adjoint representation.) It follows that the collection of numbers $\{ \mu_\infty (\hat{n})\}$ is characteristic of the representation of the algebra of observables $\mathcal{A}$. It is thus superselected, as $\mathcal{O}$ does not affect this set. [We note that if $U(e^{i\mu}) \rightarrow U(e^{i \mu_\infty})$ in (\ref{tw}), and similarly $U(e^{i\mu' })\rightarrow U(e^{i \mu'_\infty})$, then $U(e^{i\mu})U( e^{i \mu'}) \rightarrow U(e^{i \mu'_\infty})U(e^{i \mu_\infty})$, which shows an inversion of the usual ordering. (As with $U(e^{i\mu})$, $U(e^{i\mu_\infty})$ and similar expressions denote the quantum operators representing their arguments.)] We can change $\{ \mu_\infty ( \hat{n} ) \}$ to $\{ \mu_\infty ( \hat{n}' ) \}$ by changing the direction of the Wilson line and choosing the latter to be \begin{equation} \{ x + \hat{n}' l , \quad 0 \leq l < \infty \} . \label{direc2} \end{equation} Since, by (\ref{trans}), no observable can change $\{ \mu_\infty ( \hat{n} ) \}$ to $\{ \mu_\infty ( \hat{n'} ) \}$, they give different representations of $\mathcal{A}$.
Thus we have an infinity of representations characterised by $\hat{n}$, that is, by points on the celestial sphere $S^2_\infty$. They are absent for the observables coming from the Einstein-Hilbert action, which does not use the frames. \section{Mixed States in Self-Dual Gravity?} Let $A_i, \tilde{E}^i$ represent a particular representation of these fields as conjugate pairs. Then a large gauge transformation $e^{iQ(\mu)}$ transforms them to another equivalent representation (which, however, may not be unitarily equivalent!), namely $A_i, \tilde{E}^i$ to $e^{iQ(\mu)}A_i e^{-iQ(\mu)}$ and $e^{iQ(\mu)}\tilde{E}^i e^{-iQ(\mu)}$. Hence also $W$ is changed to $e^{iQ(\mu)}W e^{-iQ(\mu)}$. Thus the new state vector, presumably equivalent in some sense to $W(x)\tilde{E}^i(x) | \cdot \rangle$, is \begin{equation} U(e^ {i\mu}) W(x) \tilde{E}^i | \cdot \rangle = e^{i \mu_\infty(\hat{n}) } W(x) \tilde{E}^i (x) |\cdot \rangle , \label{6.1} \end{equation} since $| \cdot \rangle$ is invariant under $U(e^{i\mu})$ by the assumption in Sec.~5 (cf.\ (4.9)). Observables commute with $U(e^{i\mu})$. But still, matrix elements of observables between vectors like (\ref{6.1}) can depend on $U(e^{i\mu})$ if it is not unitary. We shall henceforth avoid this problem by considering only $SO(3)$- (and not $SL(2,\mathbb{C})$-) valued gauge transformations. In that case, $U(e^{i \mu })$ being unitary, matrix elements of $\mathcal{O}$ between vectors (\ref{6.1}) are not expected to depend on $e^{i \mu_\infty(\hat{n})}$. By considering $U(e^{i\nu })U( e^{i\mu})$ etc.\ in (\ref{6.1}), where also $e^{i\nu_\infty( \hat{n}) } \in SO(3)$, we get a family of vectors in (\ref{6.1}) giving equal matrix elements for observables $\mathcal{O}$, parametrised by maps of $S^2_\infty$ to $SO(3)$. Consider the density matrices \begin{equation} \rho (\mu ) = U(e^{i\mu}) W_a^{~b}(x) \tilde{E}^i_{~b}(x)| \cdot \rangle \langle \cdot | (\tilde{E}_{~c}^i)^\dagger {W^{-1}}^c_{~a}(x) U(e^{-i\mu}).
\label{6.2} \end{equation} for fixed $a$. By the preceding considerations, for this $\rho$, the expectation value of $\mathcal{O} \in \mathcal{A}$ is independent of $\mu$. That is also the case for convex linear combinations \begin{equation} \sum_i \lambda_i \rho(\hat{\mu}^{(i)} ) , \qquad \lambda_i \geq 0 , \quad \sum_i \lambda_i = 1 \label{6.3} \end{equation} of such $\rho(\hat{\mu}^{(i)})$. That means that $\rho(\hat{\mu})$ is a mixed state, having many decompositions (\ref{6.3}). We can generalise (\ref{6.3}) by replacing $U(e^{\pm i\mu})$ in (\ref{6.2}) by $U(e^{\pm i \nu} ) U( e^{\pm i\mu})$. In this way, we get the decomposition of $\rho(\mu )$ as a convex sum of many states. The vector $| \cdot \rangle$ can be any vector invariant under the action of $U(e^{i \Lambda})$, that is, under a small gauge transformation. It can in particular be an $SO(3)$ singlet. Still, (\ref{6.2}) depends on $\mu$, since the remaining factors do not commute with $U(e^{ i \mu})$. Thus (\ref{6.3}) depends on the $\lambda_i$. We thus find that the above $SO(3)$ non-singlet states are mixed. There is a similar result for {\em all} $SO(3)$ non-singlet states. For this purpose, consider a generic $SO(3)$ non-singlet state $| \psi \rangle$ normalised to unity and fulfilling $G( \Lambda) | \psi \rangle=0$. The transformations of $SO(3)$ are implemented by $U(e^{i\mu })$, where $\mu_\infty$ is a constant on $S^2_\infty$. Since $SO(3)$ is compact, we can reduce its action to a direct sum of irreducible representations. Hence we can write \begin{equation} | \psi \rangle = \sum_{I, I_3, \lambda} C_{I I_3 \lambda } | I, I_3, \lambda \rangle , \qquad I \in \{0,1,2, \cdots\} , \qquad I_3 \in [ -I, +I], \label{6.4} \end{equation} where $\lambda$ accounts for labels not covered by $I, I_3$, and the $| I, I_3, \lambda \rangle$ are orthonormal. The observables do not mix $I$ and $I_3$, as they commute with $Q(\mu)$. Assume for a moment that they do not mix $\lambda$ as well. 
Then the density matrix $|\psi \rangle \langle \psi|$ gives the same expectation values as \begin{equation} \sum_{I, I_3, \lambda} C_{II_3 \lambda} |I, I_3, \lambda \rangle \langle I, I_3, \lambda| \bar{C}_{II_3 \lambda}. \label{6.4a} \end{equation} Each term in (\ref{6.4a}) with $I \neq 0$ leads to a mixed state. That is, the density matrix \begin{equation} \rho_{I I_3 \lambda} = | I, I_3, \lambda \rangle \langle I, I_3, \lambda | \label{6.5} \end{equation} despite appearances, is mixed \cite{C2H4}! Thus, for example, $\Tr \rho_{I I_3 \lambda} \mathcal{O} = \Tr \rho_{I I'_3 \lambda} \mathcal{O}$ if the two $\rho$'s are related by an $SO(3)$-transformation. That is because the latter are generated by $Q(\mu)$ with constant $\mu_\infty$, and $Q(\mu)$ commutes with observables by the assumption in Sec. 5. More generally, we see that as states on observables, $\rho_{I I_3 \lambda}$ and $\xi \rho_{I I_3 \lambda} + ( 1 - \xi) \rho_{I I'_3 \lambda}$ ($0 \leq \xi \leq 1$) all define the same state (give the same expectation values). Hence (\ref{6.4a}) is mixed. In general, $\mathcal{O}$ will mix different $\lambda$'s, but the argument is easily adapted to the case of several $\lambda$'s in the density matrix (\ref{6.4a}), its term for fixed $I, I_3$ then being \begin{equation} \sum_{\lambda, \lambda'} | I, I_3, \lambda \rangle C_{I, I_3, \lambda} \bar{C}_{I, I_3, \lambda'} \langle I, I_3, \lambda' | \label{6.6} \end{equation} where the $C_{I,I_3, \lambda}$ are complex numbers. One sees this from (\ref{6.4}). The mixed states implied by the existence of the edge states in the self-dual model are quite interesting. They may give clues for the black hole entropy problem. \section*{Acknowledgements} A.P.B. thanks Rakesh Tibrewala for kindly reminding him about the equation (\ref{sd2}). We are deeply grateful to Giorgio Immirzi for very helpful suggestions and for correcting errors. 
After submitting this article to the arXiv, our attention was drawn to the reference \cite{marc_g}, where edge states in the context of three-dimensional gravity are discussed.
\section{Introduction} Let $X$ be a nonsingular toric variety over $\mathbb C$ under the action of the algebraic torus $T$. A vector bundle $\pi : \mathcal{E} \rightarrow X$ is said to be a toric vector bundle on $X$ if it has an equivariant $T$-structure, i.e. an action of the torus on $\mathcal{E}$ which is linear on fibres and for which the morphism $\pi$ is $T$-equivariant. Toric vector bundles were first classified by Kaneyama in \cite{kane}. In \cite{kly}, Klyachko gave a description of toric vector bundles in terms of finite dimensional vector spaces with a family of decreasing $\mathbb Z$-graded filtrations satisfying a compatibility condition. Tangent bundles of nonsingular toric varieties arise as natural examples of toric vector bundles. Following \cite{ehlers}, the tangent bundle of any nonsingular toric variety has a natural topological decomposition into a sum of line bundles, each of which arises from a one-dimensional cone of the associated fan. This result does not hold for equivariant algebraic splitting. In Proposition \ref{18} we characterize all nonsingular toric varieties whose tangent bundle splits equivariantly into a direct sum of line bundles. A Bott tower of height $n$ $$M_n \rightarrow M_{n-1} \rightarrow \cdots \rightarrow M_2 \rightarrow M_1 \rightarrow M_0=\{ \text{point} \}$$ is defined inductively as an iterated projective bundle, so that each stage $M_k$ of the tower is of the form $\mathbb{P}(\mathcal{O}_{M_{k-1}} \oplus \mathcal{L})$ for an arbitrarily chosen line bundle $\mathcal{L}$ over the previous stage $M_{k-1}$. Bott towers were shown to be deformations of Bott-Samelson varieties by Grossberg and Karshon in \cite{grossbergkarshon}. The Bott tower of height $n$ is completely determined by $\frac{n(n-1)}{2}$ integers $\{c_{i, j}\}_{\{1\leq i < j \leq n \}}$, known as the Bott numbers; these are arranged in an $n \times n$ upper triangular matrix with $1$'s on the diagonal, called the Bott matrix. 
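For instance, for $n=2$ the only Bott number is $c_{1,2}$, the Bott matrix is $$\begin{pmatrix} 1 & c_{1,2} \\ 0 & 1 \end{pmatrix},$$ and the corresponding Bott tower $M_2$ is the Hirzebruch surface $\mathcal{H}_{c_{1,2}}$.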
Without loss of generality, the entries of the Bott matrix can be assumed to be non-negative (Theorem \ref{12}). In Proposition \ref{15} we show that the tangent bundle of $M_k$ splits equivariantly if and only if the Bott matrix is the identity matrix. Let $\mathcal{L}$ be a line bundle on a projective variety $X$. Then $\mathcal{L}$ is said to be very ample if the rational map $X \dashrightarrow \mathbb{P}H^0(X, \mathcal{L})$, determined by its complete linear system $|\mathcal{L}|$, is an everywhere defined morphism which is an embedding. Generalizing this, the notion of higher-order embeddings was introduced by Beltrametti and Sommese in \cite{rider}, among which the notion of $s$-jet ampleness is the strongest. In Theorem \ref{16}, we give a criterion for $s$-jet ampleness of line bundles on Bott towers in terms of the Bott numbers. A nonsingular projective algebraic variety is said to be Fano (respectively, weak Fano) if its anticanonical divisor is ample (respectively, nef and big). We give ampleness and nefness criteria for line bundles on Bott towers\footnote{While working on this project, it came to our notice that Narasimha B. Chary has obtained a similar criterion for ample and nef line bundles on Bott towers in \cite{bonala2017mori} using a different method.} in Corollary \ref{2} and characterize nef and big line bundles on Bott towers in Theorem \ref{5}; in particular, this gives a characterization of Fano (respectively, weak Fano) Bott towers. If a vector bundle $\mathcal{E}$ is ample, then it is known that the determinant bundle $\text{det}(\mathcal{E})$ is also ample; the converse need not be true in general. But if $\mathcal{E}$ is a semi-stable vector bundle on a smooth projective curve over an algebraically closed field, the converse holds (see \cite[Theorem $3.2.7$]{huybrechts2010geometry}). 
In this paper we prove that the converse holds for discriminant-zero semi-stable toric vector bundles on a nonsingular projective toric variety in Proposition \ref{7}, using the results of Hering, Musta{\c{t}}{\u{a}} and Payne \cite{hering2010positivity}. We characterize semi-stable bundles on Hirzebruch surfaces in Proposition \ref{9}. \noindent {\bf Acknowledgements:} We thank our advisors Arijit Dey and V. Uma for their valuable guidance throughout this project. We also thank D.S. Nagaraj for his valuable comments and suggestions on an earlier version of the manuscript. Finally, we thank the Council of Scientific and Industrial Research (CSIR) for their financial support. \section{Some basic results} \subsection{Toric vector bundle} Let $T \cong \left(\mathbb C^*\right)^n$ be an algebraic torus. Let $M=\text{Hom}(T, \mathbb C^*) \cong \mathbb Z^n$ be its character lattice and $N=\text{Hom}_{\mathbb Z}(M, \mathbb Z)$ be the dual lattice. Let $\Delta$ be a fan in $N_{\mathbb R}:=N \otimes_{\mathbb Z} \mathbb R$ which defines a toric variety $X=X(\Delta)$ under the action of the torus $T$. Let $\Delta(1)$ denote the edges of $\Delta$ and $\sigma(1)$ denote the edges of a cone $\sigma$ in $\Delta$. For each $\rho \in \Delta(1)$, let $D_{\rho}$ denote the $T$-invariant prime divisor corresponding to $\rho$. \\ A vector bundle $\pi : \mathcal{E} \rightarrow X$ is a toric or $T$-equivariant vector bundle on $X$ if $\mathcal{E}$ has a lift of the action of $T$ such that: \begin{enumerate} \item The action of $T$ on $\mathcal{E}$ is linear on the fibres of $\pi$. \item $\pi$ is $T$-equivariant, i.e., for all $e \in \mathcal{E}$ and $t \in T$, $\pi(t \cdot e) = t \cdot \pi(e)$. \end{enumerate} \noindent It is known that any line bundle on a toric variety is isomorphic to a toric line bundle. In fact, any line bundle on $X$ is isomorphic to $\mathcal{O}_X(D)$ for some $T$-invariant Cartier divisor $D$, which determines the equivariant structure of the line bundle. 
The tangent bundle and cotangent bundle of a nonsingular toric variety are natural examples of toric vector bundles. \noindent Toric vector bundles were first classified by Kaneyama in \cite{kane}. Later Klyachko gave a classification in terms of filtrations of a certain vector space as follows: \begin{thm}\label{1} \cite[Theorem $2.2.1$]{kly} The category of toric vector bundles over the toric variety $X=X(\Delta)$ is equivalent to the category of vector spaces $E$, with a family of decreasing $\mathbb Z$-filtrations $E^{\rho}(i)$, for $\rho \in \Delta(1)$, which satisfy the following compatibility condition:\\ $({\bf C})$ for any $\sigma \in \Delta$, there exists a $\widehat{T}_{\sigma}$-grading $E=\oplus_{\chi \in \widehat{T}_{\sigma}}E^{[\sigma]}(\chi)$, for which $$E^{\rho}(i)=\sum_{\langle \chi, v_{\rho} \rangle \geqslant i}E^{[\sigma]}(\chi)$$ for all $\rho \in \sigma(1)$, where $v_{\rho}$ denotes the primitive ray generator of the ray $\rho$. \end{thm} Recall that a family of linear subspaces $\{V_\lambda \}_{\lambda \in \Lambda}$ of a finite dimensional vector space $V$ is said to form a distributive lattice if there exists a basis $B$ of $V$ such that $B \cap V_\lambda$ is a basis of $V_\lambda$ for every $\lambda \in \Lambda$. If $X$ is nonsingular, then condition $({\bf C})$ of Theorem \ref{1} is equivalent to the following: for each $\sigma \in \Delta$, the collection of subspaces $\{E^{\rho}(i) \}_{\rho \in \sigma(1), i \in \mathbb Z}$ of $E$ forms a distributive lattice (see \cite[Remark $2.2.2$]{kly}). To see this, note that if condition $({\bf C})$ is satisfied, then for each $\sigma \in \Delta$ the eigenbasis of the $\widehat{T}_{\sigma}$-grading $E=\oplus_{\chi \in \widehat{T}_{\sigma}}E^{[\sigma]}(\chi)$ serves as the required basis. Conversely, for $\sigma \in \Delta$, let $B_{\sigma}=\{e_1, \ldots, e_n \}$ be a basis of $E$ such that $E^{\rho}(i) \cap B_{\sigma}$ is a basis for $E^{\rho}(i)$. 
Then for each $e_j$ there exists an integer $n^j_{\rho}$ such that $e_j \in E^{\rho}(i) \cap B_{\sigma}$ for $i \leq n^j_{\rho}$ and $e_j \notin E^{\rho}(i) \cap B_{\sigma}$ for $i > n^j_{\rho}$. Since $\sigma$ is a nonsingular cone, we can define characters of $T_{\sigma}$ by $\chi_j:N_{\sigma} \rightarrow \mathbb Z$, $v_{\rho} \mapsto n^j_{\rho}$ for $\rho \in \sigma(1)$, which yields the required $\widehat{T}_{\sigma}$-grading $E=\oplus_{j=1}^n \text{Span}(e_j)$. \begin{ex}[Filtrations for line bundles]{\rm Let $\mathcal{L}=\mathcal{O}_X(D)$ be a toric line bundle on $X$ for some $T$-invariant Cartier divisor $D=\sum_{\rho \in \Delta(1)}a_{\rho}D_{\rho}$, $a_{\rho} \in \mathbb Z$. Then the associated filtrations $(L, \{L^{\rho}(i)\}_{\rho \in \Delta(1)})$ are given by: \[ L^{\rho}(i) = \left\{ \begin{array} {r@{\quad \quad}l} \mathbb C & i \leqslant a_{\rho} \\ 0 & i > a_{\rho} \end{array} \right. \] \noindent Note that the filtrations define a function $f: \Delta(1) \rightarrow \mathbb Z$, given by $\rho \mapsto a_{\rho}$. Conversely, when $X$ is nonsingular, any such function $f$ defines a toric line bundle whose associated filtrations are given by: \[ L^{\rho}(i) = \left\{ \begin{array} {r@{\quad \quad}l} \mathbb C & i \leqslant f(\rho) \\ 0 & i > f(\rho) \end{array} \right. \] } \end{ex} Note that for any two toric vector bundles $\mathcal{E}$ and $\mathcal{F}$ with associated filtrations $(E, \{ E^{\rho}(i) \}_{\rho \in \Delta(1)})$ and $(F, \{ F^{\rho}(i) \}_{\rho \in \Delta(1)})$ respectively, the associated filtrations for $\mathcal{E \oplus F}$ and $\mathcal{E \otimes F}$ are given by $(E \oplus F, \{E^{\rho}(i) \oplus F^{\rho}(i)\}_{\rho \in \Delta(1)})$ and $(E \otimes F, \{\sum_{s+ t=i} E^{\rho}(s) \otimes F^{\rho}(t) \}_{\rho \in \Delta(1)})$ respectively. 
In particular, if $\mathcal{F}$ is a toric line bundle $\mathcal{O}_X(D)$, for some $T$-invariant Cartier divisor $D=\sum_{\rho \in \Delta(1)}a_{\rho}D_{\rho}$, with the associated filtrations $(L, \{ L^{\rho}(i) \}_{\rho \in \Delta(1)})$, then the associated filtrations for $\mathcal{E} \otimes \mathcal{O}_X(D)$ are given by $(E , \{ E^{\rho}(i-a_{\rho}) \}_{\rho \in \Delta(1)} )$. We say that a toric vector bundle splits if it is equivariantly isomorphic to a direct sum of toric line bundles. \begin{ex}[Filtrations for tangent bundles]\label{10}{\rm Let ${T}_X$ be the tangent bundle of the nonsingular toric variety $X=X(\Delta)$. Then the associated filtrations $(E, \{E^{\rho}(i)\}_{\rho \in \Delta(1)})$ are given by:\\ \[ E^{\rho}(i) = \left\{ \begin{array} {r@{\quad \quad}l} N_{\mathbb C} & i \leqslant 0 \\ \text{Span }( v_{\rho} ) & i=1 \\ 0 & i > 1 \end{array} \right. \] } \end{ex} Let $X=X(\Delta)$ be a nonsingular toric variety and let $\pi : \mathcal{E} \rightarrow X$ be a toric vector bundle on $X$ with associated filtrations $\left( E, \{E^{\rho}(i) \}_{\rho \in \Delta(1)} \right)$. Let $\pi \mid_{\mathcal{F}} : \mathcal{F} \rightarrow X$ be a toric subbundle with associated filtrations $\left( F, \{F^{\rho}(i) \}_{\rho \in \Delta(1)} \right)$. We characterize the filtration data for $\mathcal{F}$ as follows: \begin{prop}\label{6} $\mathcal{F}$ is a toric subbundle of $\mathcal{E}$ if and only if $F \subset E, \ F^{\rho}(i)= E^{\rho}(i) \cap F$ and for each $\sigma \in \Delta$, there exists a basis $B^F_{\sigma}$ of $F$ which extends to a basis $B^E_{\sigma}$ of $E$ such that $F^{\rho}(i) \cap B^F_{\sigma}$ forms a basis of $F^{\rho}(i)$ and $E^{\rho}(i) \cap B^E_{\sigma}$ forms a basis of $E^{\rho}(i)$ for all $\rho \in \sigma(1)$. \end{prop} \noindent {\bf Proof:} Fix $\sigma \in \Delta$; then $F$ is a $T_{\sigma}$-stable subspace of $E$. 
Since $T_{\sigma}$ is diagonalizable, we can write $E=F \oplus F'$, where $F'$ is a $T_{\sigma}$-stable complement of $F$ in $E$. Let $\{ f_1, \ldots, f_r \}$ be an eigenbasis of $F$ (i.e., $t \cdot f_i =\chi_i(t) f_i$ for all $t \in T_{\sigma}$) which extends to an eigenbasis of $E$. Hence we can write $E=\text{Span}\{ f_1, \ldots, f_r, f'_1, \ldots, f'_s \}\left(=\bigoplus_{\chi \in T_{\sigma} }E_{\chi}, \text{ say} \right)$, such that $t \cdot f'_j =\chi'_j(t) f'_j$ for all $t \in T_{\sigma}$. Note that the $\chi_i$'s and $\chi'_j$'s need not be distinct. Also, by compatibility, $E^{\rho}(i)=\bigoplus_{\langle \chi , v_{\rho} \rangle \geq i} E_{\chi}$. Here $E_{\chi}=\{f \in E : t \cdot f = \chi(t) f \ \forall \ t \in T_{\sigma} \}=\text{Span}\{f_i : t \cdot f_i =\chi(t) f_i \ \forall \ t \in T_{\sigma} \}+ \text{Span}\{f'_j : t \cdot f'_j =\chi(t) f'_j \ \forall \ t \in T_{\sigma} \}$. We see that $F^{\rho}(i)=E^{\rho}(i) \cap F$. Then the choice of the bases $B^F_{\sigma}=\{ f_1, \ldots, f_r \}$ and $B^E_{\sigma}=\{ f_1, \ldots, f_r, f'_1, \ldots, f'_s \}$ works. For the other direction, we only need to show that for each cone $\sigma$, the $T_{\sigma}$ action on $\mathcal{F}$ is induced from the $T_{\sigma}$ action on $\mathcal{E}$. Write $B^F_{\sigma}=\{ f_1, \ldots, f_r \}$ and $B^E_{\sigma}=\{ f_1, \ldots, f_r, f'_1, \ldots, f'_s \}$, where $B^F_{\sigma}$ and $B^E_{\sigma}$ are as in the hypothesis. Fix $\rho \in \sigma(1)$; since $E^{\rho}(i)$ is a decreasing full filtration, for each $f_j$ there is an integer $n^j_{\rho}$ such that $f_j \in E^{\rho}(i) \cap B^E_{\sigma}$ for $i \leq n^j_{\rho}$ and $f_j \notin E^{\rho}(i) \cap B^E_{\sigma}$ for $i > n^j_{\rho}$. Similarly, for each $f'_j$, we have $m_{\rho}^j$. Now define characters of $T_{\sigma}$ by $\chi_j: N_{\sigma} \rightarrow \mathbb Z$, sending $v_{\rho}$ to $n^j_{\rho}$, and $\chi'_j: N_{\sigma} \rightarrow \mathbb Z$, sending $v_{\rho}$ to $m^j_{\rho}$. 
Then we can define the $T_{\sigma}$ action on $E$ by $$t \cdot f_j=\chi_j(t)f_j \text{ and } t \cdot f'_j=\chi'_j(t)f'_j .$$ Set $E_{{\chi}_j}=\text{Span}(f_j)$ and $E_{{\chi'}_j}=\text{Span}(f'_j)$. We see that with respect to this action, $E$ has a $\widehat{T}_{\sigma}$-grading $E=\sum E_{{\chi}_j} + \sum E_{{\chi'}_j}$ such that $E^{\rho}(i)=\bigoplus_{\langle \chi , v_{\rho} \rangle \geq i} E_{\chi}$. Now, by construction, we see that $F$ has a $\widehat{T}_{\sigma}$-grading $F=\sum E_{\chi_j}$ such that $F^{\rho}(i)=\bigoplus_{\langle \chi , v_{\rho} \rangle \geq i} F_{\chi}$, which concludes the proof. $\hfill\square$ \begin{rmk}{\rm Let $X$, $\mathcal{E}$ and $\mathcal{F}$ be as above, with $\mathcal{F}$ a toric subbundle of $\mathcal{E}$. Define a filtration on $\frac{E}{F}$ by setting $\left( \frac{E}{F} \right)^{\rho}(i):=\frac{E^{\rho}(i)}{F^{\rho}(i)}$; then $B_{\sigma}=B^E_{\sigma} \setminus B^F_{\sigma}$ serves as the required basis of $\frac{E}{F}$ satisfying $({\bf C})$ of Theorem \ref{1}. Then we have short exact sequences in the category of filtered vector spaces with compatible filtrations: \begin{center} $\ses{F}{E}{\frac{E}{F}}{\imath}{\pi}$ and \end{center} \begin{center} $\ses{F^{\rho}(i)}{E^{\rho}(i)}{\frac{E^{\rho}(i)}{F^{\rho}(i)}}{\imath}{\pi}$ for all $\rho \in \sigma(1)$ and $i \in \mathbb Z$. \end{center} \noindent These induce the following short exact sequence in the category of toric vector bundles: \begin{center} $0 \rightarrow \mathcal{F} \rightarrow \mathcal{E} \rightarrow \mathcal{Q} \rightarrow 0$. \end{center} Hence the quotient of $\mathcal{E}$ by $\mathcal{F}$ exists in the category of toric vector bundles, which is surprising, since this fact does not hold for general varieties. 
} \end{rmk} It is known that the tangent bundle of any nonsingular toric variety splits topologically into a direct sum of line bundles, although this does not hold for equivariant algebraic splitting; for example, consider the tangent bundle of $\mathbb{P}^2$. Recall that a toric vector bundle $\mathcal{E}$ over a nonsingular toric variety $X=X(\Delta)$ splits if and only if the filtrations $E^{\rho}(i)$, $\rho \in \Delta(1)$, generate a distributive lattice (see \cite[Corollary $2.2.3$]{kly}). Using this, we obtain a splitting criterion for the tangent bundle of a nonsingular toric variety: \begin{prop}\label{18} Let $\text{dim}(X)=n$. Then ${T}_X$ splits if and only if for every $\sigma \in \Delta(n)$ and every ray $\rho \in \Delta(1) \setminus \sigma(1)$, the primitive ray generator $v_\rho$ of $\rho$ is the negative of some primitive ray generator of the cone $\sigma$. \end{prop} \noindent {\bf Proof: } Suppose that ${T}_X$ splits. Then there exists a basis $B$ of $E$ such that $B \cap E^\rho(1)$ is a basis of $E^\rho(1)=\text{Span}(v_\rho)$ for all $\rho \in \Delta(1)$. So for all $\rho \in \Delta(1)$, either $v_\rho$ or $-v_\rho \in B$. Since $B$ has $n$ elements, for any fixed $\sigma=\text{Cone }(v_{\rho_1}, \ldots, v_{\rho_n}) \in \Delta(n)$ we get $B=\{\pm v_{\rho_1}, \ldots, \pm v_{\rho_n}\}$ for some choice of signs; hence any $v_\rho$ with $\rho \in \Delta(1) \setminus \sigma(1)$ equals $-v_{\rho_i}$ for some $i$, which concludes the forward direction. Conversely, fix a cone $\sigma=\text{Cone }(v_{\rho_1}, \ldots, v_{\rho_n}) \in \Delta(n)$; then $\{v_{\rho_1}, \ldots, v_{\rho_n}\} \cap E^\rho(1)$ is a basis of $E^\rho(1)$ for all $\rho \in \Delta(1)$. Hence ${T}_X$ splits. $\hfill\square$ \begin{cor} If $|\Delta(1)| > 2n$, then ${T}_X$ does not split. \end{cor} \subsection{Bott tower} \noindent We now briefly recall the construction of Bott towers and some basic results related to them. For more details see \cite{grossbergkarshon} and \cite{civanyusuf}. Let $e_1, \ldots, e_r$ be the standard basis of $\mathbb{R}^r$, for any $r \in \mathbb N$. 
Given $\frac{n(n-1)}{2}$ integers $\{c_{i, j}\}_{\{1\leq i < j \leq n \}}$, the Bott tower of height $n$ $$M_n \rightarrow M_{n-1} \rightarrow \cdots \rightarrow M_2 \rightarrow M_1 \rightarrow M_0=\{ \text{point} \}$$ is constructed recursively as follows: Let $M_0$ be a point and $\xi_0$ be the trivial line bundle on $M_0$. Define $M_1$ to be $\mathbb{P}(\mathcal{O}_{M_0} \oplus \xi_0)=\mathbb{P}^1$. Let $\Delta_1 \subset \mathbb R$ be the fan of $M_1$ consisting of rays $v^1_1=e_1, v^1_2=-e_1$. Note that $\text{Pic}(M_1)=\mathbb Z [D_2]$, where $D_2$ is the invariant prime divisor corresponding to $v^1_2$. Let $\xi_1$ be the toric line bundle on $M_1$ whose associated filtrations are given by: \begin{center} $ \xi_1^{v^1_1}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant 0 \\ 0 & l > 0 \end{array} \right. \ \ \ \ \text{and \ \ \ \ } $ $ \xi_1^{v^1_2}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant c_{1, 2} \\ 0 & l > c_{1, 2} \end{array} \right. $ \end{center} \noindent Let $M_2:=\mathbb{P}(\mathcal{O}_{M_1} \oplus \xi_1)$ be the $\mathbb{P}^1$-bundle on $M_1$, where the fibre $\mathbb{P}^1$ has ray generators $e'_1, e'_0=-e'_1$. Consider the map $p : \mathbb R \rightarrow \mathbb R \oplus \mathbb R$, given by $v^1_1 \mapsto (v^1_1, 0)$ and $v^1_2 \mapsto (v^1_2, c_{1, 2}e'_1)$. Then the ray generators of $M_2$ are $v^2_1=e_1,v^2_2=e_2,v^2_3=-e_1 + c_{1, 2}e_2, v^2_4=-e_2$. The four maximal cones are $\text{Cone }(v^2_1, v^2_2), \text{Cone }(v^2_1, v^2_4), \text{Cone }(v^2_2, v^2_3)$ and $\text{Cone }(v^2_3, v^2_4)$. 
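The combinatorics of $M_2$ above can be checked mechanically. The following sketch (Python, purely illustrative and not part of the construction; all function names are ours) lists the four ray generators for a given $c_{1,2}$ and verifies that each of the four maximal cones is nonsingular, i.e. that its pair of generators has determinant $\pm 1$:

```python
# Ray generators of M_2 = P(O + xi_1) over P^1, for a given Bott number c12:
# v1 = e1, v2 = e2, v3 = -e1 + c12*e2, v4 = -e2 (as in the text).
def m2_rays(c12):
    return [(1, 0), (0, 1), (-1, c12), (0, -1)]

def det2(u, v):
    # determinant of the 2x2 matrix with columns u and v
    return u[0] * v[1] - u[1] * v[0]

def m2_is_smooth(c12):
    v = m2_rays(c12)
    # maximal cones: (v1,v2), (v1,v4), (v2,v3), (v3,v4)
    cones = [(0, 1), (0, 3), (1, 2), (2, 3)]
    return all(abs(det2(v[i], v[j])) == 1 for i, j in cones)
```

Every integer value of $c_{1,2}$ passes the check, in accordance with $M_2$ being the nonsingular Hirzebruch surface $\mathcal{H}_{c_{1,2}}$.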
Repeating this process, at the $k$-th stage we have a $k$-dimensional nonsingular projective toric variety $M_k$, whose fan structure is as follows: Let $v_i=e_i$ for $i=1, \ldots, k$, $v_{k+i}=-e_i+c_{i, i+1}e_{i+1}+\cdots +c_{i, k}e_k \text{ for }i=1, \ldots, k-1$ and $v_{2k}=-e_k$.\\ The fan $\Delta_k$ of $M_k$ is complete and consists of these $2k$ edges and the $2^k$ maximal cones of dimension $k$ generated by these edges, such that no cone contains both of the edges $v_i$ and $v_{k+i}$ for $i=1, \ldots, k$.\\ Consider a line bundle $\xi_k$ on $M_k$ whose associated filtrations are given by: \begin{center} $ \xi_k^{v_i}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant 0 \\ 0 & l > 0 \end{array} \right. \ \ $ and \ \ \ \ $ \xi_k^{v_{k+i}}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant c_{i, k+1} \\ 0 & l > c_{i, k+1} \end{array} \right. $ for $i=1, \ldots, k$. \end{center} \noindent Then $M_{k+1}:=\mathbb{P}(\mathcal{O}_{M_k} \oplus \xi_k)$. The integers $c_{i, j}$ are called the Bott numbers, and they are arranged in an $n \times n$ upper triangular matrix with $1$'s on the diagonal, called the Bott matrix. \noindent Let $D_i:=D_{v_i}$ denote the invariant prime divisor corresponding to an edge $v_i$. The Picard group of the Bott tower is given by $\text{Pic}(M_k)=\mathbb Z [D_{k+1}] \oplus \ldots \oplus \mathbb Z [D_{2k}]$, as \begin{center} $D_i \sim_{\text{lin}} D_{k+i}-c_{1, i}D_{k+1}-\cdots-c_{i-1, i}D_{k+i-1}$ for $i=1, \ldots, k$. \end{center} Hence any divisor $D$ on $M_k$ can be written as $D=\sum_{i=1}^{k}a_iD_{k+i}$, $a_i \in \mathbb Z$ for $i=1, \ldots, k$.\\ \noindent The key observation regarding the Bott numbers is the following: \begin{thm}\label{12} The integers $\{c_{i, j}\}_{\{1\leq i < j \leq n \}}$ can be assumed to be non-negative. 
\end{thm} \noindent {\bf Proof:} We prove this by induction on $n$.\\ For $n=2$, the corresponding integer is $c_{1, 2}$ and $M_2=\mathcal{H}_{c_{1, 2}}$, the Hirzebruch surface, and it is well known that $c_{1, 2}$ can be assumed to be non-negative. By the induction hypothesis, assume that the $c_{i, j}$'s are non-negative for $1\leq i < j \leq k $.\\ Consider a line bundle $\eta_k$ on $M_k$ with associated filtrations: \begin{center} $ \eta_k^{v_i}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant 0 \\ 0 & l > 0 \end{array} \right. \ \ \ $ and \ \ \ $ \eta_k^{v_{k+i}}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant |c_{i, k+1}| \\ 0 & l > |c_{i, k+1}| \end{array} \right. $ for $i=1, \ldots, k$. \end{center} \noindent We show that there is a line bundle $L_k$ on $M_k$ such that $(\mathcal{O}_{M_k} \oplus \xi_k)\otimes L_k \cong \mathcal{O}_{M_k} \oplus \eta_k$, where $\xi_k$ is as before. Let $S=\{v_{k+i} \mid c_{i, k+1} < 0 \}$ and let the filtrations associated to $L_k$ be as follows:\\ If $\alpha \notin S$, $ L_k^{\alpha}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant 0 \\ 0 & l > 0 \end{array} \right. \ \ $ and if $\alpha \in S$, say $\alpha=v_{k+i}$, $L_k^{\alpha}(l) = \left\{ \begin{array}{ccc} \mathbb C & l \leqslant -c_{i, k+1} \\ 0 & l > -c_{i, k+1} \end{array} \right. $ \\ \noindent Note that the associated filtrations for $\mathcal{O}_{M_k} \oplus \eta_k$ are given by: \begin{center} $ (\mathcal{O}_{M_k} \oplus \eta_k)^{v_i}(l) = \left\{ \begin{array}{ccc} \mathbb C \oplus \mathbb C & l \leqslant 0 \\ 0 & l > 0 \end{array} \right. $ \end{center} \begin{center} when $\alpha=v_{k+i} \notin S $, $(\mathcal{O}_{M_k} \oplus \eta_k)^{\alpha}(l) = \left\{ \begin{array}{ccc} \mathbb C \oplus \mathbb C & l \leqslant 0 \\ \mathbb C & 0 < l \leq c_{i, k+1} \\ 0 & l > c_{i, k+1} \end{array} \right. 
$ \end{center} \begin{center} when $\alpha \in S$, say $\alpha=v_{k+i}$, $ (\mathcal{O}_{M_k} \oplus \eta_k)^{\alpha}(l) = \left\{ \begin{array}{ccc} \mathbb C \oplus \mathbb C & l \leqslant 0 \\ \mathbb C & 0 < l \leq -c_{i, k+1} \\ 0 & l > -c_{i, k+1} \end{array} \right. $ \end{center} \noindent By similar computations, the filtrations associated to the vector bundle $( \mathcal{O}_{M_k} \oplus \xi_k)\otimes L_k$ are the same. Hence by Theorem \ref{1} these two bundles are $T$-equivariantly isomorphic. Thus $M_{k+1}:=\mathbb{P}(\mathcal{O}_{M_k} \oplus \xi_k)$ and $\mathbb{P}(\mathcal{O}_{M_k} \oplus \eta_k)$ are isomorphic as abstract varieties, hence also as toric varieties (see \cite[Theorem $4.1$]{berchtold2003lifting}). $\hfill\square$\\ \noindent Let $T_{M_k}$ be the tangent bundle on $M_k$ with associated filtrations $(E, \{ E^\rho(i)\}_{\rho \in \Delta_k, i \in \mathbb Z})$ (see Example \ref{10}). \begin{prop}\label{15} $T_{M_k}$ splits if and only if $c_{i, j}=0$ for all $i$ and $j$. \end{prop} \noindent {\bf Proof:} Suppose that $T_{M_k}$ splits. Then there exists a basis $B$ of $\mathbb C^k$ such that $E^{v_i}(1) \cap B$ is a basis of $E^{v_i}(1)=\text{Span }(e_i)$ for all $i=1, \ldots, k$, which implies that $B=\{ c_1e_1, \ldots, c_k e_k \}$, where each $c_i$ is $1$ or $-1$. But $E^{v_{k+i}}(1) \cap B$ has to be a basis of $E^{v_{k+i}}(1)=\text{Span }(-e_i+c_{i, i+1}e_{i+1}+ \ldots + c_{i, k}e_k)$. This forces $c_{i, j}=0$. Conversely, let $c_{i, j}=0$ for all $i$ and $j$. Then the choice of the basis $B= \{ e_1, \ldots, e_k\}$ of $\mathbb C^k$ works. $\hfill\square$ \begin{rmk}{\rm In general, consider toric line bundles $\mathcal{L}_0, \mathcal{L}_1, \ldots, \mathcal{L}_m$ over a nonsingular toric variety $X=X(\Delta)$. 
Then $X'= \mathbb{P}(\mathcal{L}_0 \oplus \mathcal{L}_1 \oplus \ldots \oplus \mathcal{L}_m)$ is also a nonsingular toric variety, with fan $\Delta'$ given as follows: Let $\mathcal{L}_0, \mathcal{L}_1, \ldots, \mathcal{L}_m$ correspond to $\Delta$-linear support functions $h_0, h_1, \ldots, h_m$ respectively. Choose a $\mathbb Z$-basis $\{e_1, \ldots, e_m\}$ of $\mathbb R^m$ and let $e_0=-e_1- \ldots - e_m$. Consider the $\mathbb R$-linear map $\Phi: \mathbb R^n \rightarrow \mathbb R^n \oplus \mathbb R^m$ given by $y \mapsto (y, -\sum_{j=0}^m h_j(y)e_j)$. Now let $\tilde{\sigma}_i = \text{Cone}(e_0, \ldots, \widehat{e_i}, \ldots, e_m)$ for each $0\leq i \leq m$ (here $\widehat{e}_i$ means that $e_i$ is omitted from the relevant collection). Let $\tilde{\Delta}$ be the fan in $\mathbb R^m$ generated by the $\tilde{\sigma}_i$ for $0\leq i \leq m$. Then $\Delta'=\{\Phi(\sigma) + \tilde{\sigma} : \sigma \in \Delta, \tilde{\sigma}\in \tilde{\Delta}\}$. \begin{enumerate} \item For $m \geq 2$, $T_{X'}$ does not split. To see this, consider the maximal cone $\sigma'= \Phi(\sigma) + \text{Cone}(e_0, \ldots, \widehat{e_i}, \ldots, e_m)$, where $\sigma \in \Delta$ is a maximal cone. The ray generated by $(0, e_i)$ is not in $\sigma'$, and $(0, e_i)$ is not the negative of any of the primitive ray generators of $\sigma'$. \item For $m=1$, if $T_{X'}$ splits then $T_X$ also splits. \end{enumerate} } \end{rmk} \subsection{s-jet ampleness} \noindent Let us recall the notion of $s$-jet ampleness; see \cite{rider} for details. Let $X$ be a nonsingular algebraic variety. For a point $x \in X$, let $\mathfrak{m}_x$ be the maximal ideal sheaf of $x$ in $X$. 
\begin{defn} A line bundle $\mathcal{L}$ on $X$ is said to be $s$-jet ample if, for every finite collection of distinct points $x_1, \ldots, x_r$, the restriction map $$H^0(X; \mathcal{L}) \longrightarrow H^0\left(X; \mathcal{L} \otimes \frac{\mathcal{O}_X}{\mathfrak{m}_{x_1}^{k_1} \otimes \cdots \otimes \mathfrak{m}_{x_r}^{k_r}} \right)$$ is surjective, where $k_1, \ldots, k_r$ are positive integers with $\sum_{i=1}^r k_i = s + 1$. \end{defn} Note that $\mathcal{L}$ is $0$-jet ample if and only if $\mathcal{L}$ is globally generated, and $\mathcal{L}$ is $1$-jet ample if and only if $\mathcal{L}$ is very ample (see \cite[Section $2.3$]{jetample}). On nonsingular complete toric varieties the notions of ample and very ample line bundles coincide (see \cite[Theorem $6.1.16$]{Cox}). Let us recall the following generalization of the toric Nakai criterion to $s$-jet ampleness. \begin{thm} \cite[Proposition $3.5$, Theorem $4.2$]{DiRocco}\label{11} Let $\mathcal{L}$ be a line bundle on a nonsingular toric variety $X=X(\Delta)$. Then the following are equivalent: \begin{itemize} \item[(1)] $\mathcal{L}$ is $s$-jet ample. \item[(2)] $\mathcal{L} \cdot C \geq s$ for any $T$-invariant curve $C$. \end{itemize} \end{thm} \section{Positivity of line bundles on Bott towers} \subsection{s-jet ampleness on Bott tower} \noindent We use Theorem \ref{11} to give a necessary and sufficient criterion for $s$-jet ampleness of line bundles on Bott towers in terms of the Bott numbers. \begin{thm}\label{16} Let $D=\sum_{i=1}^{k}a_iD_{k+i}$ be a $T$-invariant Cartier divisor on $M_k$. Then $D$ is $s$-jet ample if and only if $a_i \geq s$ for all $i=1, \ldots, k$. \end{thm} \noindent {\bf Proof:} Let $D$ be $s$-jet ample. For $i=1, \ldots, k$, consider the cones $\tau_i=\text{Cone}(v_1, \ldots, \widehat{v}_i, \ldots, v_k)$. Then $D \cdot V(\tau_i) \geq s$ for $i=1, \ldots, k$ by Theorem \ref{11}. 
Note that $D \cdot V(\tau_i)=a_i$, since $D_{k+l} \cdot V(\tau_i)=0$ for $l \neq i$ and $D_{k+i} \cdot V(\tau_i)=1$ (\cite[Corollary $6.3.3$]{Cox}). Conversely, let $a_i \geq s$ for all $i=1, \ldots, k$. Again by Theorem \ref{11}, it suffices to show that $D \cdot V(\tau) \geq s$ for any $(k-1)$-dimensional cone $\tau \in \Delta_k$. Note that any such cone is of the form \begin{center} $\tau=\text{Cone}(v_1, \ldots, \widehat{v}_{i_1}, \ldots, \widehat{v}_{i_r}, \ldots, v_k,v_{k+i_1}, v_{k+i_2}, \ldots, \widehat{v}_{k+i_j}, \ldots, v_{k+i_r})$ \end{center} for some $j=1, \ldots, r$. We have\\ $ D_{k+m} \cdot V(\tau) = \left\{ \begin{array}{cl} 1 & m=i_j \\ 0 & m=i_l, \ 1 \leq l < j \\ b_{i_l} & m=i_l, \ j < l \leq r \\ 0 & m \neq i_1, \ldots, i_r \end{array} \right. $\\ \noindent where the $b_{i_l}$'s are such that the following wall relation is satisfied: \begin{center} $v_{i_j}+d_1v_1+\cdots+\widehat{v}_{i_1}+\cdots+\widehat{v}_{i_r}+\cdots+d_kv_k+v_{k+i_j}+\sum_{t=j+1}^r b_{i_t}v_{k+i_t}=0$ \end{center} where $d_1, \ldots, d_k$ are integers (\cite[Corollary $6.3.3$ and Lemma $6.3.4$]{Cox}). Then we have \begin{center} $b_{i_{j+1}}=c_{i_j, i_{j+1}}$; $b_{i_{j+2}}=c_{i_{j+1}, i_{j+2}} b_{i_{j+1}}+c_{i_{j}, i_{j+2}}$;\\ $\vdots$\\ $b_{i_r}=c_{i_{j+1}, i_r} b_{i_{j+1}} + c_{i_{j+2}, i_r} b_{i_{j+2}}+ \cdots+c_{i_{r-1}, i_r} b_{i_{r-1}} + c_{i_j, i_r} $. \end{center} Hence $D \cdot V(\tau)=a_{i_j}+\sum_{l=j+1}^r a_{i_l} b_{i_l} \geq a_{i_j} \geq s$, since the $a_{i_l}$'s are non-negative and, by Theorem \ref{12}, so are the $b_{i_l}$'s. $\hfill\square$\\ \noindent We have the following applications of the above theorem. \begin{cor}\label{2} The divisor $D=\sum_{i=1}^{k}a_iD_{k+i}$ on $M_k$ is ample (respectively, nef) if and only if $a_i > (\text{respectively, } \geq) \ 0$ for all $i=1, \ldots, k$. The anticanonical divisor $-K_{M_k}= D_1 + \ldots + D_{2k}$ is at most $2$-jet ample. 
Moreover, it is $2$-jet ample if and only if $c_{i, j}=0$ for all $i$ and $j$. Further, $M_k$ is Fano if and only if $\sum_{j=i+1}^k c_{i, j} \leq 1$ for $i=1, \ldots, k-1$. Finally, the cotangent bundle on $M_k$ is never ample since $K_{M_k}$ is not ample. \end{cor} \begin{rmk}{\rm Let $D=\sum_{i=1}^{k}a_iD_{k+i}$ be an ample $T$-invariant Cartier divisor on $M_k$. Then by the above ampleness criterion, $D-D_j$ is ample if and only if \begin{itemize} \item $a_j >1$ if $1 \leq j \leq k$ and \item $a_{j-k}>1$ if $k+1 \leq j \leq 2k$. \end{itemize} For $j <j'$, $D-D_j-D_{j'}$ is not nef if and only if $j'=k+j$ and $a_j=1$. In fact, $D-D_j-D_{j'}$ is ample if and only if \begin{itemize} \item $a_j > 2$, when $j'=k+j$; \item $b_j > 1$ and $b_{j'} > 1$, otherwise, where \[ b_l = \left\{ \begin{array} {r@{\quad \quad}l} a_l & l \leqslant k \\ a_{l-k} & l > k \end{array} \right. \] \end{itemize} This generalizes Mustata's criterion (see \cite{van}) in the case of Bott towers. } \end{rmk} \noindent {\bf Generalization of the ampleness criterion for toric vector bundles on Bott towers:} Let us recall the notion of semi-stability of a nonzero torsion-free coherent sheaf $\mathcal{E}$ on $X$, where $X$ is a normal projective variety of dimension $n$ with an ample divisor $H$ on $X$. The slope of $\mathcal{E}$ is defined to be $\mu(\mathcal{E}):=\frac{\text{deg }(\mathcal{E})}{\text{rank }\mathcal{E}}$, where $\text{deg }(\mathcal{E})=c_1(\mathcal{E}) \cdot H^{n-1}$. The sheaf $\mathcal{E}$ is then said to be semi-stable if $\mu(\mathcal{F}) \leq \mu(\mathcal{E})$ for all proper nonzero subsheaves $\mathcal{F}\subsetneq \mathcal{E}$. If a vector bundle $\mathcal{E}$ is ample, then it is known that the determinant bundle $\text{det}(\mathcal{E})$ is ample; the converse need not be true in general. But if $\mathcal{E}$ is a semi-stable vector bundle on a smooth projective curve over an algebraically closed field, the converse holds (see \cite[Theorem $3.2.7$]{huybrechts2010geometry}).
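Both the $s$-jet criterion of Theorem \ref{16} and the wall-degree recursion in its proof are purely arithmetic in the Bott numbers, so they can be checked mechanically. The following sketch is not part of the paper, just an illustration; the helper names, the $0$-based indexing, and the nonnegativity $c_{i,j}\ge 0$ from Theorem \ref{12} are our assumptions.

```python
def s_jet_ample(a, s):
    # Theorem: D = sum_i a_i D_{k+i} is s-jet ample iff a_i >= s for every i.
    return all(a_i >= s for a_i in a)

def wall_degree(a, c, idx, j):
    # D . V(tau) for the wall tau determined by idx = [i_1, ..., i_r]
    # (0-based) with distinguished position j; c[i][l] is the Bott number c_{i,l}.
    b = {}
    i_j = idx[j]
    for l in range(j + 1, len(idx)):
        i_l = idx[l]
        # b_{i_l} = c_{i_j, i_l} + sum_{t=j+1}^{l-1} c_{i_t, i_l} b_{i_t}
        b[i_l] = c[i_j][i_l] + sum(c[idx[t]][i_l] * b[idx[t]] for t in range(j + 1, l))
    return a[i_j] + sum(a[i_l] * b_l for i_l, b_l in b.items())
```

With $a_i\ge s$ and $c_{i,j}\ge 0$, every $b_{i_l}$ produced by the recursion is nonnegative, so each wall degree is at least $s$, matching the last step of the proof.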
The following theorem generalizes this result to any nonsingular projective toric variety, under some additional assumptions on $\mathcal{E}$. \begin{prop}\label{7} Let $\mathcal{E}$ be a toric bundle on a nonsingular projective toric variety $X=X(\Delta)$. Assume that $\mathcal{E}$ is semi-stable and that the discriminant $\Delta(\mathcal{E})=0$, where the discriminant of $\mathcal{E}$ is defined to be the characteristic class $\Delta(\mathcal{E}):=c_2(\mathcal{E}) - \frac{r-1}{2r} c_1(\mathcal{E})^2$. Then $\mathcal{E}$ is ample (respectively, nef) if and only if $\text{det}(\mathcal{E})$ is ample (respectively, nef). \end{prop} \noindent {\bf Proof:} By \cite[Theorem $2.1$]{hering2010positivity}, $\mathcal{E}$ is ample (respectively, nef) if and only if $\mathcal{E}|_{V(\tau)}$ is ample (respectively, nef) for all walls $\tau$ (here a wall means a codimension $1$ cone). This is equivalent to saying that $\text{deg}(\mathcal{E}|_{V(\tau)})> (\text{respectively, } \geq) 0$ for all walls $\tau$, by \cite[Theorem $3.2.7$]{huybrechts2010geometry} and \cite[Theorem $2.5$]{bruzzo2013restricting}. Now $\text{deg}(\mathcal{E}|_{V(\tau)})=\text{det}(\mathcal{E}) \cdot V(\tau)$. Hence $\mathcal{E}$ is ample (respectively, nef) if and only if $\text{det}(\mathcal{E}) \cdot V(\tau) > (\text{respectively, }\geq) 0$ for all walls $\tau$, if and only if $\text{det}(\mathcal{E})$ is ample (respectively, nef) by the toric Nakai criterion \cite[Theorem $2.18$]{Oda}. $\hfill\square$ \subsection{Criterion for nef and big line bundles on Bott towers} In this subsection we give a criterion for nef and big line bundles on a Bott tower. A characterization of big divisors on an irreducible projective variety is given in \cite[Corollary $2.2.7$]{lazarsfeld2004positivity}. In the case of toric varieties, we have the following: \begin{prop}[Characterization of $T$-invariant big divisors]\label{3} Let $X=X(\Delta)$ be a nonsingular projective toric variety.
Let $D$ be a $T$-invariant Cartier divisor on $X$. Then the following are equivalent: \begin{itemize} \item[(i)] $D$ is big. \item[(ii)] For any $T$-invariant ample divisor $A$ on $X$, there exist a positive integer $n$ and a $T$-invariant effective divisor $N$ on $X$ such that $nD \sim_{\text{lin}} A+N$. \item[(iii)] Same as in $(ii)$ for some $T$-invariant ample divisor $A$ on $X$. \item[(iv)] There exist a $T$-invariant ample divisor $A$ on $X$, a positive integer $n$ and a $T$-invariant effective divisor $N$ on $X$ such that $nD \equiv_{\text{num}} A+N$. \end{itemize} \end{prop} \noindent {\bf Proof:} Assume $D$ is big. Since $A$ is $T$-invariant ample, there exists a positive integer $r$ such that both $rA$ and $(r+1)A$ are globally generated. So we can choose nonzero global sections $\chi^{m_1} \in H^0(X, \mathcal{O}_X(rA))$ and $\chi^{m_2} \in H^0(X, \mathcal{O}_X((r+1)A))$. Then $rA \sim_{\text{lin}}(\chi^{m_1})_0$ and $(r+1)A \sim_{\text{lin}}(\chi^{m_2})_0$, where $H_r:=(\chi^{m_1})_0, \ H_{r+1}:=(\chi^{m_2})_0$ are effective divisors (\cite[Proposition II $7.7$]{hartshorne2013algebraic}). In fact, both the divisors $H_r$ and $H_{r+1}$ are also $T$-invariant. To see this, let $\{m_\sigma\}_{\sigma \in \Delta}$ be the Cartier data for $rA$. We have the isomorphism $\phi_\sigma: \mathcal{O}_X(rA)\mid_{U_\sigma} \simeq \mathcal{O}_{U_\sigma}$ given by multiplication by $\chi^{-m_\sigma}$. Then the Cartier data for $H_r$ is given by $\{m_1-m_\sigma\}_{\sigma \in \Delta}$, which shows that $H_r$ is $T$-invariant; similarly $H_{r+1}$ is also $T$-invariant. Now by Kodaira's Lemma (\cite[Proposition $2.2.6$]{lazarsfeld14500positivity}) we have $H^0(X, \mathcal{O}_X(nD-H_{r+1})) \neq 0$ for some large positive integer $n$. Since $nD-H_{r+1}$ is $T$-invariant, arguing as above we can choose a nonzero global section $\chi^{m_3} \in H^0(X, \mathcal{O}_X(nD-H_{r+1}))$. Then $nD-H_{r+1} \sim_{\text{lin}} nD-H_{r+1}+\text{div}(\chi^{m_3} ) \geq 0$.
Take $N'= nD-H_{r+1}+\text{div}(\chi^{m_3} )$, which is effective and $T$-invariant. Now $nD-H_{r+1} \sim_{\text{lin}} N'$, which implies $nD \sim_{\text{lin}} H_{r+1}+N' \sim_{\text{lin}} (r+1)A +N' \sim_{\text{lin}} A +H_r+N'$. Taking $N=H_r+N'$, which is $T$-invariant and effective, we get $nD \sim_{\text{lin}} A+N$. \\ \noindent $(ii)\Rightarrow (iii)\Rightarrow (iv)$ Obvious.\\ \noindent $(iv)\Rightarrow (i)$ Follows from the same argument as in \cite[Corollary $2.2.7$]{lazarsfeld2004positivity}. $\hfill\square$ \begin{cor}\label{4} Let $X=X(\Delta)$ be a nonsingular projective toric variety. Let $D$ be a $T$-invariant Cartier divisor on $X$. Then $D$ is nef and big if and only if there exists a $T$-invariant effective divisor $N$ on $X$ such that $D-\frac{1}{n} N$ is ample for $n\gg 0$. \end{cor} \noindent {\bf Proof:} Let $D$ be nef and big. Then by Proposition \ref{3} there exist a positive integer $k$, a $T$-invariant ample Cartier divisor $A$ and a $T$-invariant effective Cartier divisor $N$ on $X$ such that $kD \equiv_{\text{num}} A+N$. Take $n > k$; then $nD\equiv_{\text{num}} (n-k)D+A+N$. Since $D$ is nef and $A$ is ample, by \cite[Corollary $1.4.10$, Theorem $1.2.23$, Theorem $1.4.9$]{lazarsfeld14500positivity} $(n-k)D+A$ is ample. So $nD-N $ is ample, which implies $D-\frac{1}{n} N$ is ample for $n\gg 0$. The converse direction follows by the same argument as in \cite[Example 2.2.19]{lazarsfeld14500positivity}. $\hfill\square$ \begin{rmk}\label{19}\rm Using the above criterion for bigness, for a $T$-invariant divisor $D=\sum_{i=1}^{k}a_iD_{k+i}$ on $M_k$ to be big one must have $a_k>0$. Hence the canonical divisor $K_{M_k}$ is never big. \end{rmk} \noindent We now give a necessary and sufficient criterion for nef and big divisors on a Bott tower. \begin{thm}\label{5} Let $D=\sum_{i=1}^{k}a_iD_{k+i}$ be a $T$-invariant divisor on $M_k$.
Then $D$ is nef and big if and only if $a_k>0$, $a_i \geq 0$ for $1 \leq i < k$, and whenever $a_i=0$ for some $i=1, \ldots, k-1$, at least one of $\{c_{i, j} : j>i\}$ is nonzero. \end{thm} \noindent {\bf Proof:} Let $D$ be nef and big. Then by Corollary \ref{2} and Remark \ref{19} we have $a_i \geq 0$ for $1 \leq i \leq k-1$ and $a_k>0$. Again Corollary \ref{4} implies that there exists a $T$-invariant effective divisor $N=\sum_{i=1}^{2k}b_iD_{i}$ on $M_k$ $(b_i \geq 0 \text{ for all } i)$ such that $D-\frac{1}{n} N$ is ample for $n\gg 0$. Write $N \sim_{\text{lin}} \sum_{i=1}^{k}c_iD_{k+i} $, where $c_k=b_k + b_{2k}$ and $c_i=b_i+b_{k+i}- \sum_{j=i+1}^kc_{i, j}b_j$, for $i=1, \ldots, k-1$. Take $d_i=a_i-\frac{1}{n} c_i$ and write $D-\frac{1}{n} N=\sum_{i=1}^{k}d_iD_{k+i}$. Now assume $a_i=0$ for some $i=1, \ldots, k-1$. By Corollary \ref{2}, $d_i > 0$, i.e., $(b_i+b_{k+i}- \sum_{j=i+1}^k c_{i, j}b_j) < 0$, which implies that at least one of $\{c_{i, j} : j>i\}$ must be nonzero. Conversely, by Corollary \ref{2} we have that $D$ is nef. Consider the $T$-invariant effective divisor $N=\sum_{i=1}^{2k}b_iD_i$ on $M_k$, where $\{ b_1 , b_2 , \ldots , b_k\}$ is a strictly increasing sequence of positive integers and $b_{k+i}=0$ for $i=1, \ldots, k$. Write $N \sim_{\text{lin}} \sum_{i=1}^{k}c_iD_{k+i} $, with $c_k=b_k $ and $c_i=b_i- \sum_{j=i+1}^kc_{i, j}b_j$, for $i=1, \ldots, k-1$. By Corollary \ref{4} it suffices to show that $D-\frac{1}{n} N=\sum_{i=1}^k(a_i- \frac{1}{n}c_i)D_{k+i}$ is ample for $n\gg 0$. Since $a_k \geq 1$, for some positive integer $n_0$ we have $a_k- \frac{1}{n}c_k>0$ whenever $n > n_0$. If $a_i > 0$ for some $i \in \{1, \ldots, k-1\}$, then $ a_i - \frac{1}{n}c_i=a_i-\frac{1}{n}(b_i-\sum_{j=i+1}^k c_{i, j}b_j) > 0$ whenever $n > n_i$ for some large positive integer $n_i$. Now if $a_i =0$ for some $i \in \{1, \ldots, k-1\}$, by hypothesis $c_{i, j'} \neq 0$ for some $j'>i$.
Then $a_i - \frac{1}{n}c_i=- \frac{1}{n}(b_i- \sum_{j=i+1}^k c_{i, j}b_j )=- \frac{1}{n}(b_i-c_{i, j'}b_{j'})+ \frac{1}{n} \left( \sum_{j=i+1,j \neq j'}^k c_{i, j}b_j \right)> 0$, since $b_i-c_{i, j'}b_{j'} < 0$ and $\sum_{j=i+1,j \neq j'}^k c_{i, j}b_j \geq 0$ by the choice of the $b_i$'s. This completes the proof. $\hfill\square$ Recall that a nonsingular projective algebraic variety is said to be weak Fano if its anticanonical divisor is nef and big. As an immediate corollary we obtain the following. \begin{cor}\label{17} $M_k$ is weak Fano if and only if $\sum_{j=i+1}^k c_{i, j} \leq 2$ for $i=1, \ldots, k-1$. \end{cor} \begin{cor} Every nef and big divisor on $M_k$ is ample if and only if $c_{i, j}=0$ for all $i$ and $j$. \end{cor} \noindent {\bf Proof:} Let $c_{i, j} \neq 0$ for some $i, \ j$. Then consider $D= D_{k+1}+ \ldots + D_{k+i-1}+D_{k+i+1}+\ldots +D_{2k}$. This $D$ is nef and big by Theorem \ref{5}, but not ample by Corollary \ref{2}. The converse is immediate from Theorem \ref{5} and Corollary \ref{2}. $\hfill\square$ \subsection{Semi-stability of toric vector bundles on Hirzebruch surfaces} \noindent Let $\mathcal{E}$ be a toric bundle on the Hirzebruch surface $M_2=\mathcal{H}_r$ with associated filtrations $\{E, E^{\rho}(i)\}_{\rho \in \Delta(1)}$. Then $c_1 ( \mathcal{E})= \sum_{i\in \mathbb Z, \rho \in \Delta(1)} i \text{ dim }E^{[\rho]}(i)D_\rho$, where $E^{[\rho]}(i)= E^{\rho}(i)/E^{\rho}(i+1)$ (see \cite[Remark $3.2.4$]{kly}). Fix an ample divisor $H=aD_3+bD_4$ $(a, b >0)$. Then $\text{deg }(\mathcal{E})= \text{deg }c_1 ( \mathcal{E})=c_1 ( \mathcal{E}) \cdot H= \sum_{i\in \mathbb Z} i \left( b \text{ dim }E^{[1]}(i) +a \text{ dim }E^{[2]}(i)+b \text{ dim }E^{[3]}(i)+(a+br) \text{ dim }E^{[4]}(i) \right)$, since $D_1 \cdot D_3=D_3 \cdot D_3=D_2 \cdot D_4=0$, $D_2 \cdot D_3=D_4 \cdot D_3=D_1 \cdot D_4=1$ and $D_4 \cdot D_4=r$.
Thus, $$\mu (\mathcal{E})= \frac{1}{\text{dim }E} \left( \sum_{i\in \mathbb Z} i \left( b \text{ dim }E^{[1]}(i) +a \text{ dim }E^{[2]}(i)+b \text{ dim }E^{[3]}(i)+(a+br) \text{ dim }E^{[4]}(i) \right) \right).$$ \begin{prop}\label{9} Let $\mathcal{E}$ be a toric bundle on $M_2=\mathcal{H}_r$. Then $\mathcal{E}$ is semi-stable with respect to $H=aD_3+bD_4$ $(a, b >0)$ if and only if $\frac{1}{\text{dim }F} \left( \sum_{i\in \mathbb Z} i \left( b \text{ dim }F^{[1]}(i) + a \text{ dim }F^{[2]}(i)+b \text{ dim }F^{[3]}(i)+(a+br) \text{ dim }F^{[4]}(i) \right) \right) $ \\ $\leq \frac{1}{\text{dim }E} \left( \sum_{i\in \mathbb Z} i \left( b \text{ dim }E^{[1]}(i) +a \text{ dim }E^{[2]}(i)+b \text{ dim }E^{[3]}(i)+(a+br) \text{ dim }E^{[4]}(i) \right) \right)$\\ \noindent for every proper subspace $F$ of $E$, where $F^{\rho}(i)=F \cap E^{\rho}(i)$ for any ray $\rho$. \end{prop} \noindent {\bf Proof:} By \cite[Proposition $4.13$]{kool}, to check semi-stability of a toric vector bundle it suffices to consider only toric subbundles of smaller rank, which have been characterized in Proposition \ref{6}. Since for any nonsingular toric surface the compatibility condition is automatically satisfied, this completes the proof. $\hfill\square$ \noindent \begin{cor}\label{14} For $X=\mathcal{H}_r$, the tangent bundle $T_X$ is semi-stable with respect to $H=aD_3+bD_4$ if and only if $r=0, 1$. In particular, the semi-stability of $T_X$ does not depend on the choice of the polarization $H$. \end{cor} \noindent {\bf Proof:} We see that $$\mu (T_X)= \frac{\text{deg }(T_X)}{\text{rank }(T_X)}= \frac{2a+b(r+2)}{2}.$$ Note that the only toric line subbundles of $T_X$ are $\mathcal{O}_X(D_1), \mathcal{O}_X(D_2), \mathcal{O}_X(D_3), \mathcal{O}_X(D_4)$ and $\mathcal{O}_X(D_2+D_4)$, with $\mu=$ $b, \ a, \ b, \ a+br$ and $2a+br$, respectively. Then by Proposition \ref{9}, $T_X$ is semi-stable if and only if $br \leq 2(b-a)$ and $r \leq 2$, which happens if and only if $r$ is $0$ or $1$.
$\hfill\square$ Using Corollary \ref{14}, we give an example of a vector bundle satisfying the hypothesis of Proposition \ref{7}. \begin{ex}{\rm Note that $T_X$ is semi-stable for $r=1$, $\Delta(T_X)=2$, and $\text{det}(T_X)=-K_X$ is ample, but $T_X$ is not ample by Mori's theorem \cite[Remark $6.3.2$]{lazarsfeld14500positivity}. Consider the line bundle $\mathcal{L}=\mathcal{O}_{X}(-D_1)$ on $X=\mathcal{H}_r$. Note that $c(T_X)=\prod_{i=1}^4(1+D_i)$ (see \cite[Example $3.2.3$]{kly}), so $c_1(T_X)=D_1+D_2+D_3+D_4$ and $c_2(T_X)=4$. Hence $\Delta(T_X)=c_2(T_X)-\frac{r-1}{2r} \ c_1^2(T_X)=2$ and we get that $\Delta(T_X \otimes \mathcal{L})=\Delta(T_X)+c_1(T_X) c_1(\mathcal{L})=0$. Note that $T_X \otimes \mathcal{L}$ is semi-stable if and only if $r=0$ or $1$. Let $\left( F, \{F^{\rho}(i) \}_{\rho \in \Delta(1)} \right)$ be the associated filtrations for the bundle $T_X \otimes \mathcal{L}$. For simplicity we write $F^j(i)$ to denote $F^{v_j}(i)$. Then $F^{1}(i)=E^{1}(i-1)$ and $F^{j}(i)=E^{j}(i)$ for $j =2, 3, 4$. \begin{center} $ \text{dim}(F^{[1]}(i)) = \left\{ \begin{array}{ll} 1 & i = 1\\ 1 & i=2 \\ 0 & \text{otherwise} \end{array} \right. $ and $ \text{dim}(F^{[j]}(i)) = \left\{ \begin{array}{ll} 1 & i = 0\\ 1 & i=1 \\ 0 & \text{otherwise} \end{array} \right. $ for $j =2, 3, 4$. \end{center} \noindent $\text{det}(T_X \otimes \mathcal{L})=\sum_{i \in \mathbb Z, \ j=1}^4 i \text{ dim}(F^{[j]}(i)) D_j=3D_1+D_2+D_3+D_4$. Now $\text{det}(T_X \otimes \mathcal{L})$ is ample if and only if $r<4$ by Corollary \ref{2}. Hence $T_X \otimes \mathcal{L}$ is ample by Proposition \ref{7}. } \end{ex} \bibliographystyle{plain}
\section{Introduction} \medskip There are several relatively recent works on comparison theorems on K\"ahler manifolds. In \cite{LW}, for a K\"ahler manifold $(M^m, g)$, Li and Wang introduced the condition ``bisectional curvature bounded from below by a constant $\lambda$", defined as \begin{equation}\label{eq:lw-1} R(Z, \overline{Z}, W, \overline{W})\ge \lambda \left( |Z|^2 |W|^2+\left|\langle Z, \overline{W}\rangle\right|^2\right) \end{equation} for any $(1, 0)$ vectors $Z, W\in T'M$ satisfying either $\langle Z, \overline{W}\rangle =0$ or $Z=W$, where the complexified tangent space $T_\mathbb{C} M=T'M \oplus T''M$ is decomposed (with respect to the almost complex structure $C$) into the holomorphic subspace $T'M$ and the antiholomorphic subspace $T''M$. Let $m$ be the complex dimension of $M$ and $n=2m$ be the real dimension. Here $\langle \cdot, \cdot\rangle$ is the bilinear extension of the Riemannian product and the curvature $R$ follows the convention of \cite{Tian}. Under this condition the authors derived the complex Hessian comparison theorem for the distance function $\rho_{p}(x)$ to a fixed point $p$ of any K\"ahler manifold $M$ with the bisectional curvature bounded from below, comparing with the corresponding distance function of a complex space form with constant holomorphic sectional curvature $2\lambda$. (For the case $\lambda=0$, the result can also be derived from the Li-Yau-Hamilton type estimate for the heat kernel \cite{Cao-Ni}.) The authors of \cite{LW} also derived a diameter estimate (for $\lambda>0$) as well as a volume comparison result. More recently, the volume comparison result was generalized to the distance function to a complex submanifold by Tam and Yu in \cite{TamYu}. The reformulation in \cite{TamYu} seems stronger than the original one stated above by demanding (\ref{eq:lw-1}) on all $Z, W\in T'M$.
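Condition (\ref{eq:lw-1}) is modeled on the complex space form of constant holomorphic sectional curvature $2\lambda$, whose curvature in a unitary frame at a point is $R_{i\bar{j}k\bar{l}}=\lambda(\delta_{ij}\delta_{kl}+\delta_{il}\delta_{kj})$ and which attains equality in (\ref{eq:lw-1}). A quick numerical sanity check of this (a sketch; the frame and the value of $\lambda$ are illustrative choices of ours, not from the paper):

```python
import numpy as np

m, lam = 3, 0.7
rng = np.random.default_rng(0)

def R(Z, Zb, W, Wb):
    # model curvature R_{i jbar k lbar} = lam (delta_ij delta_kl + delta_il delta_kj),
    # contracted against the components of Z, Zbar, W, Wbar
    return lam * ((Z @ Zb) * (W @ Wb) + (Z @ Wb) * (W @ Zb))

Z = rng.normal(size=m) + 1j * rng.normal(size=m)
W = rng.normal(size=m) + 1j * rng.normal(size=m)
lhs = R(Z, Z.conj(), W, W.conj()).real
rhs = lam * (abs(Z @ Z.conj()) * abs(W @ W.conj()) + abs(Z @ W.conj()) ** 2)
```

Here `lhs` agrees with `rhs` up to rounding, i.e. the model saturates (\ref{eq:lw-1}) for all pairs, not only those with $\langle Z, \overline{W}\rangle=0$ or $Z=W$.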
In \cite{Liu}, the partial complex Hessian comparison theorem (only in the complex plane spanned by $\{\nabla \rho, C( \nabla \rho)\}$, with $C$ being the almost complex structure) was proved under the assumption that the holomorphic sectional curvature is nonnegative. This result plays a crucial role in \cite{Liu} in establishing the three-circle property for holomorphic functions on such K\"ahler manifolds. More recently, in \cite{XYang}, projectivity was proved for compact K\"ahler manifolds with positive holomorphic sectional curvature. The common theme of the papers involving the comparison theorems is that the results were derived by applying the Bochner formula to the length of the gradient $\|\nabla \rho\| (=1)$, in a similar spirit as the proofs of the Hessian and Laplacian comparison theorems in \cite{BZ,HK} (cf. also \cite{petersen}, where the Hessian comparison was made almost trivial for the case that the curvature is bounded from above) and \cite{GHL}, respectively. Namely, they are based on a Riccati-type inequality on the Hessian of $\rho$, or the Bochner formula applied to $\|\nabla \rho\|$, instead of the more classical approach of Rauch via the comparison of the index forms and Jacobi fields. On the other hand, the consideration via the second variation and the index forms has had a lot of success in understanding the geometry and topology of Riemannian manifolds. Even for K\"ahler manifolds with positive holomorphic sectional curvature there exists the work of Tsukamoto on the diameter estimate and simple connectedness \cite{Tsu}. Note that the diameter estimate of Li-Wang is a special case of Tsukamoto's, since the lower bound on the bisectional curvature posed by (\ref{eq:lw-1}) implies that the holomorphic sectional curvature $H(Z)=R(Z, \overline{Z}, Z, \overline{Z})\ge 2\lambda |Z|^4$.
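For completeness, the one-line computation behind this last claim (a spelled-out step of ours: take $W=Z$ in (\ref{eq:lw-1}), which is allowed since the condition covers $Z=W$, and use $\langle Z, \overline{Z}\rangle=|Z|^2$):

```latex
H(Z)=R(Z, \overline{Z}, Z, \overline{Z})
\ge \lambda \left( |Z|^2 |Z|^2+\left|\langle Z, \overline{Z}\rangle\right|^2\right)
= \lambda \left( |Z|^4 + |Z|^4 \right) = 2\lambda |Z|^4 .
```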
There are also Lefschetz type theorems for complex or Levi flat real submanifolds in a nonnegatively curved K\"ahler manifold utilizing the index estimates of the energy functional, such as \cite{Schoen-Wolfson} and \cite{Ni-Wolfson}. Despite the work and the effort mentioned above, the K\"ahler analogues of the sharp volume comparison (Bishop type) and the sharp diameter estimate (Bonnet-Myers type) are still elusive. One goal of this paper is to apply the second variation/index form consideration to the K\"ahler setting and prove several comparison and rigidity results generalizing some of the results mentioned above, with the hope of bridging the gap between the Riemannian and K\"ahler settings. Another goal is to study a condition which is complementary to the holomorphic sectional curvature. Namely, we shall study the comparison and vanishing theorems under conditions on the {\it orthogonal Ricci curvature}. The vanishing theorem proved in this paper implies the projective embedding (namely the projectivity of the underlying K\"ahler manifold) related to this curvature condition. This suggests that an algebraic geometric characterization of such K\"ahler manifolds is perhaps an interesting problem, in view of the Fano varieties characterized via Yau's solution to the Calabi conjecture. Before getting into the statement of the results, we first recall various notions of curvature for K\"ahler manifolds. If $Z=\frac{1}{\sqrt2} \left(X-\sqrt{-1} C(X)\right), W=\frac{1}{\sqrt2} \left(Y-\sqrt{-1} C(Y)\right)$, the first Bianchi identity gives the following expansion in terms of real vectors $$ R(Z, \overline{Z}, W, \overline{W})= R(X, C(X), C(Y), Y)=R(X, Y, Y, X)+R(X, C(Y), C(Y), X). $$ Besides the bisectional curvature, there exists the notion of the orthogonal bisectional curvature $R(Z, \overline{Z}, W, \overline{W})$ for any pair $Z, W$ with $\langle Z, \overline{W}\rangle=0$.
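In the constant-curvature model used above, the orthogonal bisectional curvature takes the value $\lambda |Z|^2|W|^2$: the term $|\langle Z, \overline{W}\rangle|^2$ drops out exactly when $\langle Z, \overline{W}\rangle=0$. A small numerical illustration (a sketch; the Hermitian-projection step and the sample values are our own choices):

```python
import numpy as np

m, lam = 3, 0.6
rng = np.random.default_rng(2)

def R(Z, W):
    # R(Z, Zbar, W, Wbar) for the model R_{i jbar k lbar} = lam (d_ij d_kl + d_il d_kj)
    return (lam * ((Z @ Z.conj()) * (W @ W.conj()) + abs(Z @ W.conj()) ** 2)).real

Z = rng.normal(size=m) + 1j * rng.normal(size=m)
W = rng.normal(size=m) + 1j * rng.normal(size=m)
W = W - (W @ Z.conj()) / (Z @ Z.conj()) * Z   # enforce <Z, Wbar> = 0
pairing = abs(Z @ W.conj())
orth_bisec = R(Z, W)
expected = lam * (Z @ Z.conj()).real * (W @ W.conj()).real
```

After the projection `pairing` vanishes and `orth_bisec` reduces to `expected`, i.e. to $\lambda|Z|^2|W|^2$.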
Note that $\langle Z, \overline{W}\rangle =0$ means that $\langle X, Y\rangle=\langle X, C(Y)\rangle=0$. The holomorphic sectional curvature can be expressed as $$R(Z, \overline{Z}, Z, \overline{Z})=R(X, C(X), C(X), X).$$ For the sake of convenience in writing, we will sometimes use $H$ to denote the holomorphic sectional curvature and $B^{\perp}$ to denote the orthogonal bisectional curvature. Clearly the lower bound on the holomorphic sectional curvature, on the orthogonal bisectional curvature, or on the bisectional curvature are quite different assumptions. There are unitary symmetric metrics on $\mathbb{C}^m$ with nonnegative orthogonal bisectional curvature (abbreviated as (NOB)) but not with nonnegative bisectional curvature. There even exists an algebraic K\"ahler curvature tensor $R$ with nonnegative holomorphic sectional curvature and nonnegative orthogonal bisectional curvature, but not nonnegative bisectional curvature \cite{Tam}. There is also a weaker notion called {\it quadratic orthogonal bisectional curvature,} or {\it quadratic bisectional curvature} for short, denoted as $QB$, which is defined for any real vector $\{a_i\}_{i=1}^m$ and any unitary frame $\{E_i\}$ of $T'M$ by $QB(a)=\sum_{i, j}R_{i\bar{i}j\bar{j}}(a_i-a_j)^2$. Its nonnegativity, abbreviated as (NQOB), means that $QB(a)\geq 0$ for any $a$ and any unitary frame $\{ E_i\}$. This curvature condition was formally introduced by Wu-Yau-Zheng in 2009 in \cite{WuYauZheng}, although it appeared implicitly already in the 1965 work of Bishop and Goldberg \cite{BishopGoldberg}, where they showed that a compact K\"ahler manifold with positive bisectional curvature must have its second Betti number equal to $1$. The first example of a compact K\"ahler manifold with (NQOB) but not (NOB) was established by Li-Wu-Zheng \cite{LiWuZheng} in 2013, and shortly after, Chau-Tam \cite{Chau-Tam} fully classified all (NQOB) K\"ahler C-spaces of classical types.
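In the constant-curvature model, $R_{i\bar i j\bar j}=\lambda(1+\delta_{ij})$ in any fixed unitary frame at a point, so $QB(a)=\lambda\sum_{i\neq j}(a_i-a_j)^2\ge 0$ whenever $\lambda\ge 0$. The following sketch (the function name and the sample $\lambda$ are ours) simply evaluates the defining sum:

```python
import numpy as np

def QB(Rdiag, a):
    # quadratic orthogonal bisectional curvature:
    # QB(a) = sum_{i,j} R_{i ibar j jbar} (a_i - a_j)^2
    a = np.asarray(a, dtype=float)
    return float(np.sum(Rdiag * (a[:, None] - a[None, :]) ** 2))

m, lam = 4, 0.5
Rdiag = lam * (np.ones((m, m)) + np.eye(m))  # model with hol. sec. curvature 2*lam
rng = np.random.default_rng(1)
vals = [QB(Rdiag, rng.normal(size=m)) for _ in range(50)]
```

All sampled values are nonnegative, illustrating (NQOB) for this model; a constant vector $a$ gives $QB(a)=0$, which is why only the differences $a_i-a_j$ matter.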
See also \cite{NT2} for the role of (NQOB) in solving the Poincar\'e-Lelong equation. More recently, in \cite{Ni-Niu2}, a gap theorem and a Liouville theorem were proved under (NOB) and nonnegative Ricci curvature. Note that nonnegative holomorphic sectional curvature does not imply nonnegative Ricci curvature, as shown by Hitchin's examples \cite{Hitchin}. On the other hand, there exists the notion of {\it antiholomorphic Ricci curvature $Ric^{\perp}(X, X)$ }(for any real vector $X$; coined for example in \cite{Mi-Pal}, but no geometric implication of it was given there), which is defined as $$ Ric^{\perp}(X, X)=\sum R(X, e_i, e_i, X)=Ric(X, X)-\frac{1}{|X|^2}R(X, C(X), C(X), X), $$ where $\{e_i\}$ is any orthonormal frame of $\{X, C(X)\}^{\perp}$. In view of the above notions of (NOB) and (NQOB) it seems more sensible to call it the {\it orthogonal Ricci curvature}. Let $E_i=\frac{1}{\sqrt2}(e_i-\sqrt{-1} C(e_i))$ be a unitary frame such that $e_1=\frac{X}{|X|}=\tilde{X}$. Following the convention $e_{n+i}=C(e_i)$, a direct calculation shows that \begin{eqnarray*} \frac{1}{|X|^2}Ric^{\perp}(X, X)&=&Ric^{\perp}(\tilde{X}, \tilde{X})=Ric(\tilde{X}, \tilde{X})-R(\tilde{X}, C(\tilde{X}), C(\tilde{X}), \tilde{X})\\ &=& Ric(E_1, \overline{E}_1)-R(E_1, \overline{E_1}, E_1, \overline{E}_1)=\sum_{j=2}^{m}R(E_1, \overline{E}_1, E_j, \overline{E}_j). \end{eqnarray*} Hence $Ric^{\perp}(\tilde{X}, \tilde{X})=Ric(E_1, \overline{E}_1)-R_{1\bar{1}1\bar{1}}$. Here we have also used that $Ric(E_i,\overline{E}_i)=Ric(e_i, e_i)$. By Proposition 3.1 of \cite{HT} (see also \cite{Niu}), the nonnegativity of the quadratic orthogonal bisectional curvature implies the nonnegativity of the orthogonal Ricci curvature.
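The identity $Ric^{\perp}(\tilde X, \tilde X)=Ric(E_1, \overline{E}_1)-R_{1\bar{1}1\bar{1}}$ can be sanity-checked on the constant-curvature model, where $R_{i\bar i j\bar j}=\lambda(1+\delta_{ij})$ gives $Ric=(m+1)\lambda$, $R_{1\bar11\bar1}=2\lambda$ and hence $Ric^{\perp}=(m-1)\lambda$ — the normalization appearing in the comparison theorems below. A short check (the sample $m$ and $\lambda$ are our own illustrative choices):

```python
import numpy as np

m, lam = 4, 0.8
# model components in a unitary frame: R_{i ibar j jbar} = lam (1 + delta_ij)
Rdiag = lam * (np.ones((m, m)) + np.eye(m))
Ric_11 = Rdiag[0].sum()        # Ric(E_1, E_1bar) = sum_j R_{1 1bar j jbar}
H_term = Rdiag[0, 0]           # R_{1 1bar 1 1bar}, the holomorphic sectional part
Ric_perp = Ric_11 - H_term     # orthogonal Ricci in the direction E_1
```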
On the other hand, the example constructed there shows that there exist some unitary symmetric metrics on $\mathbb{C}^m$ such that the curvature has nonnegative quadratic orthogonal bisectional curvature (hence the orthogonal Ricci curvature is nonnegative), but the Ricci curvature is negative somewhere. In a later section of this paper we exhibit examples of metrics with even nonnegative orthogonal bisectional curvature (which is stronger than (NQOB)), but whose Ricci curvature, as well as holomorphic sectional curvature, can be negative somewhere. This shows that $Ric^{\perp}(\cdot, \cdot)$ is a sensible notion for K\"ahler manifolds, and is different from the Ricci tensor. Nevertheless (NQOB) does imply the nonnegativity of the scalar curvature, as shown in \cite{Chau-Tam, Niu}. In fact the nonnegativity of $Ric^{\perp}$ also implies the nonnegativity of the scalar curvature, via the following estimate. \begin{lemma}\label{lemma-easy} Nonnegative orthogonal Ricci curvature implies the nonnegativity of the scalar curvature $S$. In fact there exists the following pointwise estimate for $m\ge 2$: $$ S(y)\ge \frac{2m(m+1)}{m-1} \min_{Z\in \mathbb{S}^{2m-1}_y\subset T'M } Ric^{\perp}(Z, \overline{Z}). $$ \end{lemma} Since the nonnegativity of the quadratic orthogonal bisectional curvature (NQOB) implies the nonnegativity of the orthogonal Ricci curvature (abbreviated as $Ric^{\perp}\ge 0$), Lemma \ref{lemma-easy} implies the result on the nonnegativity of the scalar curvature of \cite{Chau-Tam, Niu}. Given any fixed point $p$, let $\rho(x)$ be the distance function to $p$. The Hessian $\nabla^2 \rho(\cdot, \cdot)$ can be extended bilinearly to $T_p^\mathbb{C} M$. A direct calculation shows that $$ \nabla^2 \rho(E_i, \overline{E}_i)=\frac{1}{2}\left(\nabla^2 \rho( e_i, e_i)+\nabla^2 \rho(C(e_i), C(e_i))\right).
$$ This shows that $\Delta \rho=\sum_{i=1}^m \nabla^2 \rho(E_i, \overline{E}_i)=\frac{1}{2}\sum_{i=1}^m \left(\nabla^2 \rho( e_i, e_i)+\nabla^2 \rho(C(e_i), C(e_i))\right)$. Here $\{E_i\}$ is a unitary frame. We define the orthogonal Laplacian $\Delta^{\perp}$ by $$ \Delta^{\perp} \rho=\Delta \rho-\nabla^2 \rho(Z, \overline{Z}) $$ where $Z=\frac{1}{\sqrt2}(\nabla \rho-\sqrt{-1}C(\nabla \rho))$. We call the last term the {\it holomorphic Hessian} of $\rho$. The first comparison theorem we prove is on the orthogonal Laplacian, assuming the orthogonal Ricci curvature comparison, and on the holomorphic Hessian, assuming the holomorphic sectional curvature comparison. \begin{theorem}\label{thm-com1}(i) Let $(M^m, g)$ be a K\"ahler manifold with $Ric^{\perp}\ge (m-1) \lambda $. Let $(\tilde{M}, \tilde{g})$ be the complex space form with constant holomorphic sectional curvature $2\lambda$. Let $\rho(x)$ be the distance function to a point $p$ (and $\tilde{\rho}$ be the corresponding distance function to a point $\tilde{p}$). Then for a point $x$ not in the cut locus of $p$, $$ \Delta^{\perp} \rho (x) \le \Delta^{\perp}\tilde{\rho}\left.\right|_{\tilde{\rho}=\rho(x)}=(m-1)\cot_{\frac{\lambda}{2}}(\rho). $$ (ii) Let $(M^m, g)$ be a K\"ahler manifold with holomorphic sectional curvature $H\ge 2\lambda $. Let $(\tilde{M}, \tilde{g})$ be the complex space form with constant holomorphic sectional curvature $2\lambda$. Let $\rho(x)$ ($\tilde{\rho}$) be the distance function to a complex submanifold $P$ in $M$ ($\tilde{P}$ in $\tilde{M}$). Then for $x$ not in the focal locus of $P$, $$ \left.\nabla^2\rho (Z, \overline{Z})\right|_{x} \le \left. \nabla^2\tilde{\rho}(\tilde{Z}, \overline{\tilde{Z}})\right|_{ \tilde{\rho}=\rho(x)}. $$ Here $Z=\frac{1}{\sqrt{2}}(\nabla \rho-\sqrt{-1} C(\nabla \rho))$, $\tilde{Z}=\frac{1}{\sqrt{2}}(\nabla \tilde{\rho}-\sqrt{-1} C(\nabla \tilde{\rho}))$. In particular, if $\lambda=0$ and $\tilde{P}$ is a point, $$ \left.
\nabla^2\rho (Z, \overline{Z})\right|_{x} \le \frac{1}{2\rho(x)} \iff \nabla^2 \log \rho (Z, \overline{Z})\le 0. $$ \end{theorem} \begin{remark} Part (ii) was proved by G. Liu in \cite{Liu} for the case that both $P$ and $\tilde{P}$ are points. The proof of \cite{Liu} follows the argument in \cite{LW}. The results provide a generalization of the comparison theorem proved in \cite{LW}. Besides the point that the proof here uses a different argument, more importantly, the results signify the geometric implications of the orthogonal Ricci curvature and the holomorphic sectional curvature. \end{remark} If both assumptions in (i) and (ii) are satisfied, the estimates in Theorem \ref{thm-com1} imply the volume comparison as in \cite{LW}. \begin{corollary}\label{coro-wang} Assume that $(M^m, g)$ satisfies $Ric^{\perp}\ge (m-1) \lambda $ and $H\ge 2\lambda $. Then for any points $x\in M$, $\tilde{x}\in \tilde{M}$, $\Delta \rho (x) \le \left.\Delta \tilde{\rho}\right|_{\rho(x)}$, and for any $0<r\le R$, $$ \frac{Vol(B(x, R))}{Vol(B(x, r))}\le \frac{Vol(\tilde{B}(\tilde{x}, R))}{Vol(\tilde{B}(\tilde{x}, r))} $$ where $\tilde{B}(\tilde{x}, R)$ is the ball in the complex space form. Equality holds if and only if $B(x, R)$ is holomorphic-isometric to the ball in the complex space form. \end{corollary} Note that the lower bounds on the orthogonal Ricci and the holomorphic sectional curvature imply the Ricci lower bound $Ric(X, X)\ge (m+1)\lambda |X|^2$. But the comparison in the K\"ahler case is sharper than in the Riemannian setting. The first part of Theorem \ref{thm-com1} can be generalized to the case of complex hypersurfaces, which can be viewed as the K\"ahler version of the Heintze-Karcher theorem \cite{HK}, with the assumption on the Ricci curvature replaced by one on the orthogonal Ricci curvature. \begin{theorem}\label{thm-com11} Let $(M^m, g)$ be a K\"ahler manifold with $Ric^{\perp}\ge (m-1) \lambda $.
Let $(\tilde{M}, \tilde{g})$ be the complex space form with constant holomorphic sectional curvature $2\lambda$. Let $\rho(x)$ be the distance function to a complex hypersurface $P$ (and $\tilde{\rho}$ be the corresponding distance function to a totally geodesic complex hypersurface $\tilde{P}$). Then for a point $x$ not in the focal locus of $P$, $$ \Delta^{\perp} \rho (x) \le \Delta^{\perp}\tilde{\rho}\left.\right|_{\tilde{\rho}=\rho(x)}=(m-1)\tan_{\frac{\lambda}{2}}(\rho). $$ \end{theorem} Note that $\tan_{\frac{\lambda}{2}}(t)$ is a little different from the conventional trigonometric function. In fact, for $\lambda>0$, $\tan_{\frac{\lambda}{2}}(t)=-\sqrt{\frac{\lambda}{2}}\cdot \frac{\sin(\sqrt{\frac{\lambda}{2}}t)}{\cos (\sqrt{\frac{\lambda}{2}}t)}$. The above result reinforces the point that the notion $Ric^{\perp}$ is naturally paired with the orthogonal Laplacian $\Delta^{\perp}$. If one assumes additionally a bound on the holomorphic sectional curvature, then one has a level-hypersurface area comparison result similar to that of \cite{HK}, but sharper than in the Riemannian setting due to the K\"ahlerity. Similarly one can consider the orthogonal Hessian of a real function $u$ to be $\nabla^2 u(Z, \overline{Z})$ restricted to the space consisting of all $Z\perp \{ \nabla u, C(\nabla u)\}$. By now it is natural to infer that the orthogonal bisectional curvature gives a comparison theorem for the orthogonal Hessian. \begin{theorem}\label{thm-com2} Let $(M^m, g)$ be a K\"ahler manifold with $R(Z, \overline{Z}, W, \overline{W})\ge \lambda |Z|^2|W|^2$ for any $Z\perp W$ (namely the orthogonal bisectional curvature is bounded from below by $\lambda$, which we abbreviate as $B^{\perp}\ge \lambda$). Let $(\tilde{M}, \tilde{g})$ be the complex space form with constant holomorphic sectional curvature $2\lambda$. Let $\rho(x)$ be the distance function to a point $p$ (and $\tilde{\rho}$ be the corresponding distance function to a point $\tilde{p}$).
Then for any point $x$, which is not in the cut locus of $p$, restricted to the spaces of vectors $Z$ which are perpendicular to $\{ \nabla \rho, C(\nabla \rho)\}$ (as well as to $\{ \nabla \tilde{\rho}, C(\nabla \tilde{\rho})\}$) $$ \nabla^2 \rho (x) \le \nabla^2\tilde{\rho}\left.\right|_{\tilde{\rho}=\rho(x)}. $$ \end{theorem} An argument similar to that in the classical Bonnet-Myers theorem implies that any complete K\"ahler manifold whose $Ric^{\perp}$ is bounded from below by a positive constant must be compact. This implies that any compact K\"ahler manifold with positive orthogonal Ricci curvature must have finite fundamental group. For compact K\"ahler manifolds, in the following we will focus on the relation between the holomorphic sectional curvature $H$, the Ricci curvature $Ric$, and the orthogonal Ricci curvature $Ric^{\perp }$. In terms of their strength, all three notions of curvature sit between the bisectional curvature and the scalar curvature, in the sense that when the bisectional curvature is positive, all three are positive, while when any one of them is positive, the scalar curvature is positive. However, the relationship between these three curvature conditions is quite subtle, except for the fact that $Ric= H + Ric^{\perp}$. By Yau's solution to the Calabi conjecture \cite{Yau}, compact K\"ahler manifolds with positive Ricci curvature are exactly the projective manifolds with positive first Chern class, namely the Fano manifolds. For compact K\"ahler manifolds with positive $H$, it was conjectured by Yau (cf. Problem 47, \cite{YauP}), and recently proved by X. Yang \cite{XYang}, that such manifolds are all projective. Hence by the recent work of Heier and Wong \cite{HeierWong} (see also \cite{XYang} for an alternative proof) they are all rationally-connected, meaning that any two points on the manifold can be joined by a rational curve.
On the other hand, it was also conjectured by Yau that any rational or unirational manifold admits K\"ahler metrics with positive $H$. But this is far from being settled, as even on the surface ${\mathbb P}^2\# 2\overline{{\mathbb P}^2}$, the blowing up of ${\mathbb P}^2$ at two points, it is still an open question whether there exists such a metric. It is certainly a natural question to understand the class of compact K\"ahler manifolds with positive $Ric^{\perp}$. We propose the following: \begin{conjecture} Let $M^m$ ($m\geq 2$) be a compact K\"ahler manifold with $Ric^{\perp} >0$ everywhere. Then for any $1\leq p\leq m$, there is no non-trivial global holomorphic $p$-form, namely, the Hodge number $h^{p,0}=0$. In particular, $M^m$ is projective and simply-connected. \end{conjecture} Let us first explain the ``in particular" part in the above conjecture. Note that once we have the vanishing of $h^{p,0}$ for all $1\leq p\leq m$, the vanishing of $h^{2,0}$ implies that $M^m$ is projective. Also, we now have $$ \chi({\mathcal O}_M) = 1 - h^{1,0} + h^{2,0} - \cdots + (-1)^m h^{m,0} =1,$$ where ${\mathcal O}_M$ is the structure sheaf. Hence such a manifold $M^m$ must be simply-connected, since $\pi_1(M)$ is finite and, by the Riemann-Roch theorem, the arithmetic genus $\chi$ is given as the integral over $M$ of a polynomial in Chern classes (hence is multiplicative under finite unramified covers), as in \cite{Ko}. We remark that for $M^m$ in the conjecture, $h^{1,0}=0$ since $\pi_1(M)$ is finite, and $h^{m,0}=0$ since $M^m$ has positive scalar curvature, thus the canonical line bundle cannot admit any non-trivial global holomorphic section. In fact, its Kodaira dimension must be $-\infty $ as it has positive total scalar curvature. So the conjecture is really about the cases $2\leq p\leq m-1$. We also remark that, when $m=2$, the only compact K\"ahler surface with positive $Ric^{\perp}$ is (biholomorphic to) ${\mathbb P}^2$.
This is because $Ric^{\perp }$ is equivalent to the orthogonal bisectional curvature $B^{\perp}$ when $m=2$. By a result of Gu and Zhang \cite{GuZhang}, any compact, simply-connected K\"ahler manifold $M^m$ with positive $B^{\perp}$ is biholomorphic to ${\mathbb P}^m$, since the K\"ahler-Ricci flow takes any such metric into a metric with positive bisectional curvature (see also an alternate argument by Wilking in \cite{Wilking}). It would certainly be an interesting question to understand the class of threefolds or fourfolds with the $Ric^{\perp } >0$ condition. In this direction we prove the following partial result. \begin{theorem}\label{thm-1connect} Let $M^m$ ($m\geq 2$) be a compact K\"ahler manifold with $Ric^{\perp} >0$ everywhere. Then its Hodge numbers $h^{m-1,0}=h^{2,0}=0$. In particular, $M^m$ is always projective. Also, it is simply-connected when $m\leq 4$. \end{theorem} In fact a stronger result is shown in Section 4: namely, $h^{2, 0}=0$ (hence $M$ is projective) if the average of $Ric^\perp$ over two-planes is positive. An analogous result for the $2$-scalar curvature was proved recently by the authors \cite{Ni-Zheng2}. For compact manifolds with $Ric^{\perp}<0$, one can obtain the following analogue of a result of Bochner \cite{wu}, which implies the finiteness of the automorphism group of such manifolds. \begin{proposition}\label{bochner} Let $M^m$ be a compact K\"ahler manifold with $Ric^{\perp}<0$. Then there does not exist any nonzero holomorphic vector field. \end{proposition} It is an interesting question to find out whether or not such a manifold always admits a metric of negative Ricci curvature. That is, whether its first Chern class is negative, or equivalently, whether its canonical line bundle is ample. Examples of K\"ahler metrics concerning various curvatures mentioned above and their relations can be found in Sections 4-8.
Among them we construct complete $U(m)$-invariant K\"ahler metrics on $\mathbb{C}^m$ which have (NOB), positive Ricci curvature, but negative holomorphic sectional curvature somewhere. This answers affirmatively a question raised recently in \cite{Ni-Niu2}. \section{Proof of comparisons} We first prove Lemma \ref{lemma-easy}. It is an easy consequence of a result of Berger. \begin{proof} By a formula due to Berger, at any point $p$ of a K\"ahler manifold $M$, $$ S(p)= \frac{m(m+1)}{Vol(\mathbb{S}^{2m-1})}\int_{|Z|=1, Z\in T'_pM} H(Z)\, d\theta(Z). $$ On the other hand it is easy to check that $$ S(p)= \frac{2m}{Vol(\mathbb{S}^{2m-1})}\int_{|Z|=1, Z\in T_p'M} Ric(Z, \overline{Z})\, d\theta(Z). $$ They imply that \begin{equation}\label{eq:scalar-ricciperp} \frac{m-1}{2m(m+1)}S(p)=\frac{1}{Vol(\mathbb{S}^{2m-1})}\int_{|Z|=1, Z\in T_p'M} Ric^{\perp}(Z, \overline{Z})\, d\theta(Z). \end{equation} The claimed result follows from (\ref{eq:scalar-ricciperp}) easily. \end{proof} One can also prove the following estimate on the holomorphic sectional curvature in terms of the orthogonal Ricci curvature. \begin{corollary} \label{Gold} When $m\ge 2$, at any point $p$, for unitary $Z\in T_p'M$ with $H_p(Z)=\max_{|W|=1} H_p(W)$, $$ H_p(Z)\ge \frac{2}{m-1} Ric^{\perp}(Z, \overline{Z}). $$ In fact for any $W$ which is perpendicular to $Z$, $H_p(Z)\ge 2 R(Z, \overline{Z}, W, \overline{W}).$ Similarly if a unitary $Z'\in T'_pM$ satisfies $H_p(Z')=\min_{|W|=1} H_p(W)$, then for unitary $W\perp Z'$ $$ H_p(Z')\le 2 R(Z', \overline{Z'}, W, \overline{W});\quad H_p(Z')\le \frac{2}{m-1} Ric^{\perp}(Z', \overline{Z'}). $$ \end{corollary} \begin{proof} For any complex numbers $a, b$ and $Z, W\in T'_pM$, it is easy to check that \begin{eqnarray*}&\,& H(aZ\!+\!bW)+H(aZ\!-\!bW)+H(aZ\!+\!\sqrt{\!-\!1} bW)+H(aZ\!-\!\sqrt{\!-\!1}bW)\\ &\,= & 4|a|^4 H(Z) +4 |b|^4 H(W)+16 |a|^2 |b|^2 R(Z, \overline{Z}, W, \overline{W}). \end{eqnarray*} For the unitary vectors $Z$ and $W$ we choose $a, b$ such that $|a|^2+|b|^2=1$.
Then if $Z$ attains the maximum of the holomorphic sectional curvature, for $W\perp Z$, $$ 4H(Z)\ge 4|a|^4 H(Z) +4 |b|^4 H(W)+16 |a|^2 |b|^2 R(Z, \overline{Z}, W, \overline{W}). $$ The estimate $H(Z)\ge 2 R(Z, \overline{Z}, W, \overline{W})$ follows from the above. The claim on the orthogonal Ricci curvature follows easily. For the minimal holomorphic sectional curvature, one can simply flip the above argument. A more direct approach is to consider the function $ f(\theta)=H(\cos \theta Z'+\sin \theta W) $. The second derivative test applied to $f(\theta)$, and to the analogous function with $W$ replaced by $\sqrt{\!-\!1}W$, implies the claimed estimate. \end{proof} Before we prove the comparison theorem, let us recall some basics regarding normal geodesics, Jacobi fields with respect to a submanifold, the distance function and the tubular hypersurface with respect to a Riemannian submanifold $P$ (only later will we assume that $P$ is a complex submanifold). Let $\mathcal{N}(P)$ denote the normal bundle of $P$. For any section $\nu(x)$ of the normal bundle the exponential map $\exp_P$ can be defined as $\exp_x(\nu(x))$. First recall the concept of the {\it $P$-Jacobi field} along a {\it normal} geodesic $\gamma_u(\eta)$ with $u=\gamma'(0)\perp P$ at $p=\gamma(0)$. A Jacobi field $J(\eta)$ is called a $P$-Jacobi field along $\gamma$ if it satisfies $J(0)\in T_p P$ and $J'(0)-A_{\gamma'(0)} J(0)\perp T_p P$, where $A_{u}(\cdot)$ is the shape operator in the normal direction $u$. It is easy to check that if $\gamma(\eta, t)$ is a family of normal geodesics, with $\gamma(0, t)\in P$ and $\frac{D \gamma}{\partial \eta} (0, t)\in T^{\perp}_{\gamma(0,t)}P$, then $J(\eta)=\frac{D}{\partial t }\gamma(\eta, 0)$ is a $P$-Jacobi field. An elementary fact is that $\left. d\exp_P\right|_{\ell u}$ is degenerate if and only if there exists a non-zero $P$-Jacobi field $J(\eta)$ such that $J(\ell)=0$. The point $\gamma_u(\ell)$ is called a {\it focal point} (with respect to $P$).
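To see why the variation field $J(\eta)=\frac{D}{\partial t}\gamma(\eta, 0)$ above satisfies the stated boundary condition, here is a sketch, using the shape-operator sign convention $\langle A_u X, Y\rangle=-\langle u, \nabla_X Y\rangle$ for $X, Y$ tangent to $P$ (the convention matching the condition above). Differentiating the relation $\langle \frac{D \gamma}{\partial \eta}(0, t), V(t)\rangle=0$ for a tangent field $V$ of $P$ along $t\mapsto \gamma(0, t)$ gives $$ 0=\left.\frac{d}{dt}\right|_{t=0}\Big\langle \frac{D \gamma}{\partial \eta}(0, t), V(t)\Big\rangle =\Big\langle \frac{D}{\partial t}\frac{D \gamma}{\partial \eta}(0,0), V\Big\rangle+\Big\langle \gamma'(0), \nabla_{J(0)} V\Big\rangle =\langle J'(0), V\rangle-\langle A_{\gamma'(0)} J(0), V\rangle, $$ where we used the symmetry $\frac{D}{\partial t}\frac{D \gamma}{\partial \eta}=\frac{D}{\partial \eta}\frac{D \gamma}{\partial t}$. Since $V\in T_pP$ is arbitrary, $J'(0)-A_{\gamma'(0)}J(0)\perp T_pP$.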
The boundary operator $\frac{D J}{\partial \eta}-A_{\gamma'(0)} J(0)$ also arises from the second variation of the energy for a variation of paths $\gamma(\eta, t)$ with the initial points in $P$ and a fixed end point: $$ \left. \frac{d^2}{dt^2}\right|_{t=0} \mathcal{E}(\gamma)=\int_0^{\ell} |\nabla X|^2-\langle R(X, \gamma')\gamma', X\rangle \, d\eta +\langle A_{\gamma'(0)}(X(0)), X(0)\rangle $$ with $X=\frac{D \gamma}{\partial t} (\eta, 0)$ being the tangent vector. Here $\mathcal{E}(\gamma)=\int_0^\ell |\frac{D \gamma}{\partial \eta}|^2\, d\eta$. The polarization of the right hand side is called the index form. Namely the index form $I(X, Y)$ is given by $$ I(X, Y)=\int_0^{\ell}\langle \nabla X, \nabla Y\rangle -\langle R(X, \gamma')\gamma', Y\rangle \, d\eta +\langle A_{\gamma'(0)}(X(0)), Y(0)\rangle-\langle A_{\gamma'(\ell)}(X(\ell)), Y(\ell)\rangle. $$ Here the second boundary term enters only in the more general case where the end points $\gamma(\ell, t)$ lie inside another submanifold $P'$. Allowing this flexibility is useful in \cite{Ni-Wolfson, Schoen-Wolfson}, but is not needed when considering the distance function $\rho(x)$. We denote the index form (along $\gamma$) with $P'$ being a point as $I^{P}_{\gamma}(\cdot, \cdot)$ (otherwise we denote it as $I^{P, P'}_{\gamma}(\cdot, \cdot)$). An easy but useful observation is the following relation between the Hessian of the distance function and the index form. Namely \begin{equation}\label{eq-trans} \left.\operatorname{Hessian}(\rho)\right|_{\rho(x)=\ell} (X, Y)= II_{\nabla \rho}(X, Y)=I^P_{\gamma [0, \ell]}(J_1, J_2) \end{equation} if $J_i(\eta)$ are $P$-Jacobi fields (in the case that $P=\{p\}$ is a point the assumption is equivalent to $J_i(0)=0$) and $J_1(\ell)=X$, $J_2(\ell)=Y$. Here $II$ denotes the second fundamental form of the hypersurface $\{x\, |\, \rho(x)=\ell\}$.
In short {\it the Hessian of $\rho$, restricted to the subspace perpendicular to $\nabla \rho$, is the same as the index form, which in turn is the same as the second fundamental form of the tubular hypersurface (of $P$) with respect to the unit exterior normal $\nabla \rho$.} Another useful result is the index comparison lemma. \begin{lemma}\label{lmm-index} Assume that $\gamma:[0, \ell]$ is a normal geodesic originating from $P$. Assume that there exists no focal point along $\gamma$. Let $X$ and $Y$ be two vector fields along $\gamma$ with $X$ being a $P$-Jacobi field, such that $ Y(0)\in T_{\gamma(0)}P$ and $X(\ell)=Y(\ell)$. Then $$ I^P_{\gamma}(X, X)\le I^P_{\gamma}(Y, Y). $$ The equality holds if and only if $Y(\eta)=X(\eta)$ for $\eta\in [0, \ell]$. \end{lemma} One can refer to \cite{Sakai} (cf. Chapter III, Lemma 2.10). In fact for any such $Y$, there exists a $P$-Jacobi field $X$ such that $Y(\ell)=X(\ell)$. An alternate proof is the following. First the index form can be used (replacing the Dirichlet energy) to define a Rayleigh quotient on the vector fields which are perpendicular to $\gamma'(\eta)$ and are tangent to the submanifolds at both ends (in the case $P'=\{x_0\}$, requiring vanishing boundary value at $\gamma(\ell)$). Then clearly the associated infimum, namely the associated eigenvalue (which satisfies a Robin boundary condition at $\eta=0$ and a Dirichlet condition at $\eta=\ell$), is positive for small $\ell$. The positivity persists until a zero eigenvalue is reached, namely a conjugate point, which occurs when a non-zero eigenvector satisfying the Euler-Lagrange equation of the index form with the suitable boundary conditions, namely a $P$-Jacobi field, can be obtained. For the complex space form the following lemma is useful. \begin{lemma} \label{lmm-csf} Let $(\tilde{M}^m, g)$ be a K\"ahler manifold with constant holomorphic sectional curvature $2\lambda$.
Let $n=2m$ be the real dimension and $\{\tilde{e}_i\}$ be an orthonormal frame. Then $$ R_{\tilde{e}_n, \tilde{e}_k} \tilde{e}_n =\frac{\lambda}{2} \tilde{e}_k, \mbox { if } \tilde{e}_k\perp \tilde{e}_n, C(\tilde{e}_n); R_{\tilde{e}_n, \tilde{e}_k} \tilde{e}_n=2\lambda \tilde{e}_k, \mbox{ if } \tilde{e}_k=C(\tilde{e}_n). $$ \end{lemma} If one only wants a formula on the right hand side of the comparison, and does not care about the geometric meaning of the right hand side (such as in \cite{Liu}), one does not need the above lemma. Now we can prove Theorem \ref{thm-com1}. Assume that $\gamma(\eta)$ and $\tilde{\gamma}(\eta)$ are two minimizing geodesics in $M$ and $\tilde{M}$. At $\gamma(\ell)$, let $\{e_i\}_{i=1}^{n=2m}$ be an orthonormal frame with $e_{2k}=C(e_{2k-1})$, and $e_n=\nabla \rho$ and $e_{n-1}=-C(e_n)$ (namely $e_n=C(e_{n-1})$). By the definition, $\Delta^{\perp} \rho=\frac{1}{2}\sum_{i=1}^{2m-2} \nabla^2 \rho( e_i, e_i)$. Let $\{\tilde{e}_i\}$ be the corresponding frame at $\tilde{\gamma}(\ell)$. Parallel transport them along $\gamma$ and $\tilde{\gamma}$. By Lemma \ref{lmm-csf}, the Jacobi fields are given by $\tilde{J}_i(\eta)=\frac{S_{\frac12 \lambda} (\eta)}{S_{\frac12 \lambda}(\ell)} \tilde{e}_i(\eta)$, for $1\le i\le 2m-2$, and $\tilde{J}_i(\eta)=\frac{S_{2 \lambda} (\eta)}{S_{2 \lambda}(\ell)} \tilde{e}_i(\eta)$ for $i=2m-1$. Here $$ S_\kappa(t) \doteqdot \left\{ \begin{matrix} \frac{1}{\sqrt{\kappa}}\sin \sqrt{\kappa} t, & \, \kappa>0,\cr t, &\, \kappa=0, \cr \frac{1}{\sqrt{|\kappa|}}\sinh \sqrt{|\kappa|}t, &\, \kappa<0;\end{matrix}\right.\quad S'_\kappa(t)\doteqdot \frac{d}{dt} S_\kappa(t); \quad \cot_\kappa(t)=\frac{S'_\kappa(t)}{S_\kappa(t)}. $$ Transplant $\{\tilde{J}_i(\eta)\}_{i=1}^{2m-2}$ along $\gamma(\eta)$ by letting $\overline{J}_i(\eta)= \frac{S_{\frac12 \lambda} (\eta)}{S_{\frac12 \lambda}(\ell)} e_i(\eta)$; we obtain $2m-2$ orthogonal vector fields along $\gamma(\eta)$ with $\overline{J}_i(\ell)=e_i(\ell)$ and $\overline{J}_i(0)=0$.
Let $J_i(\eta)$ be the Jacobi fields with $J_i(\ell)=e_i$. Then \begin{eqnarray*} \left. 2\Delta^{\perp} \rho\right|_{\rho(x)=\ell}&=& \sum_{i=1}^{2m-2} \langle J'_i(\ell), J_i(\ell)\rangle= \sum_{i=1}^{2m-2}I_{\gamma[0, \ell]}(J_i, J_i);\\ \left. 2\Delta^{\perp} \tilde{\rho}\right|_{\tilde{\rho}(x)=\ell}&=& \sum_{i=1}^{2m-2} \langle \tilde{J}'_i(\ell), \tilde{J}_i(\ell)\rangle = \sum_{i=1}^{2m-2}I_{\tilde{\gamma}[0, \ell]}(\tilde{J}_i, \tilde{J}_i). \end{eqnarray*} The curvature assumption, together with the initial conditions $J_i(0)=\tilde{J}_i(0)=0$, implies that $$ \sum_{i=1}^{2m-2}I_{\gamma[0, \ell]}(\overline{J}_i, \overline{J}_i)\le \sum_{i=1}^{2m-2}I_{\tilde{\gamma}[0, \ell]}(\tilde{J}_i, \tilde{J}_i). $$ The result then follows from the index form comparison Lemma \ref{lmm-index}. This completes the proof of the comparison of $\Delta^{\perp} \rho$. To prove the comparison on the complex Hessian, note that $\nabla^2 \rho(Z, \overline{Z}) =\frac{1}{2}\nabla^2 \rho (e_{n-1}, e_{n-1})$, where $Z=\frac{1}{\sqrt{2}}\left(\nabla \rho-\sqrt{-1}C(\nabla \rho)\right)$. Now let $\overline{J}_{n-1}(\eta)=\frac{S_{2\lambda}(\eta)}{S_{2\lambda}(\ell)} e_{n-1}$ as before. It is easy to check that $ \overline{J}_{n-1}(0)=0$ and $\overline{J}'_{n-1}(0)\perp T_{\gamma(0)}P$ (there is no need to check this for the previous case since $P=\{x_0\}$ is a point). Now the assumption on the holomorphic sectional curvature implies that $$ I_{\gamma[0, \ell]}(\overline{J}_{n-1}, \overline{J}_{n-1})\le I_{\tilde{\gamma}[0, \ell]}(\tilde{J}_{n-1}, \tilde{J}_{n-1}). $$ The claimed result again follows from the index form comparison Lemma \ref{lmm-index}. \section{Extensions} First we prove Theorems \ref{thm-com11} and \ref{thm-com2}. The proof of Theorem \ref{thm-com2} follows verbatim the proof of Theorem \ref{thm-com1}. For Theorem \ref{thm-com11}, we construct the vector fields $\{\overline{J}_i\}$ satisfying different boundary conditions at $\eta=0$.
First we define $$ C_\kappa(t) \doteqdot \left\{ \begin{matrix} \frac{1}{\sqrt{\kappa}}\cos \sqrt{\kappa} t, & \, \kappa>0,\cr 1, &\, \kappa=0, \cr \frac{1}{\sqrt{|\kappa|}}\cosh \sqrt{|\kappa|}t, &\, \kappa<0;\end{matrix}\right.\quad C'_\kappa(t)\doteqdot \frac{d}{dt} C_\kappa(t); \quad \tan_\kappa(t)=\frac{C'_\kappa(t)}{C_\kappa(t)}. $$ Now we let $\overline{J}_i(\eta)=\frac{C_{\frac{\lambda}{2}}(\eta)}{C_{\frac{\lambda}{2}}(\ell)} e_i(\eta)$. Since at $\eta=0$, $e_n(0)=\gamma'(0)$ and $e_{n-1}(0)=C(\gamma'(0))$ are perpendicular to $P$, $\{e_{i}(0)\}_{i=1}^{2m-2}$ are tangent to $P$. Since $P$ is minimal, $$ \sum_{i=1}^{2m-2} \langle A_{\gamma'(0)}(\overline{J}_i(0)), \overline{J}_i(0)\rangle =0. $$ Hence (if we adopt the Einstein summation convention) $$ \sum_{i=1}^{2m-2}I_{\gamma[0, \ell]}(\overline{J}_{i}, \overline{J}_{i})= \int_0^\ell \|\overline{J}'_{i}\|^2-\langle R_{\overline{J}_i, \gamma'}\gamma', \overline{J}_i\rangle =\frac{1}{C^2_{\frac{\lambda}{2}}(\ell)} \int_0^\ell (2m-2)(C'_{\frac{\lambda}{2}})^2-C^2_{\frac{\lambda}{2}}Ric^{\perp}(\gamma', \gamma') . $$ Then Theorem \ref{thm-com11} follows from the index comparison Lemma \ref{lmm-index} and a direct calculation of the right hand side above (for $\sum_{i=1}^{2m-2}I_{\tilde{\gamma}[0, \ell]}(\tilde{J}_{i}, \tilde{J}_{i})$). The argument above can be extended to the case that $P$ is a Levi-flat real hypersurface, observing that the boundary term vanishes due to the Levi-flatness (cf. \cite{Ni-Wolfson}). \begin{corollary}\label{coro-levi}Let $(M^m, g)$ be a K\"ahler manifold with $Ric^{\perp}(X, X)\ge (m-1) \lambda |X|^2$. Let $(\tilde{M}, \tilde{g})$ be the complex space form with constant holomorphic sectional curvature $2\lambda$. Let $\rho(x)$ be the distance function to a real Levi-flat hypersurface $P$ (and $\tilde{\rho}$ be the corresponding distance function to a totally geodesic complex hypersurface $\tilde{P}$).
Then for any point $x$, which is not in the focal locus of $P$, $$ \Delta^{\perp} \rho (x) \le \Delta^{\perp}\tilde{\rho}\left.\right|_{\tilde{\rho}=\rho(x)}=(m-1)\tan_{\frac{\lambda}{2}}(\rho). $$ \end{corollary} In \cite{Tsu}, it was proved that if a K\"ahler manifold $(M^m, g)$ has a positive lower bound $2\lambda$ on its holomorphic sectional curvature, then it must be compact with diameter bounded from above by $\frac{\pi}{\sqrt{2\lambda}}$. The following generalizes this slightly. \begin{proposition}\label{prop-index} Let $(M^m, g)$ be a compact K\"ahler manifold with holomorphic sectional curvature bounded from below by $2\lambda>0$. Then for any geodesic $\gamma(\eta): [0, \ell]\to M$ with length $\ell>\frac{\pi}{\sqrt{2\lambda}}$, the index $i(\gamma)\ge 1$. \end{proposition} \begin{proof} Let $e_{n-1}(\eta)=C(\gamma'(\eta))$. Let $X(\eta)=\sin\left(\frac{\pi}{\ell}\eta\right)e_{n-1}(\eta)$. Then \begin{eqnarray*} I(X, X)&=&\int_0^\ell \left(\frac{\pi}{\ell}\right)^2 \cos^2\left(\frac{\pi}{\ell}\eta\right)-\sin^2 \left(\frac{\pi}{\ell}\eta\right)\langle R_{C(\gamma'), \gamma'}\gamma', C(\gamma')\rangle\\ &\le &\left(\frac{\pi}{\ell}\right)^2\int_0^\ell \cos^2\left(\frac{\pi}{\ell}\eta\right)-2\lambda \int_0^\ell \sin^2 \left(\frac{\pi}{\ell}\eta\right)<0. \end{eqnarray*} This proves the claim. \end{proof} Moreover it was also proved in \cite{Tsu} that $M$ must be simply-connected. The following is a generalization of the simply-connectedness. \begin{proposition}\label{prop-tran} Let $(M^m, g)$ be a compact K\"ahler manifold with positive holomorphic sectional curvature. Then any holomorphic isometry of $M$ must have at least one fixed point. \end{proposition} \begin{proof} Assume that there exists such a map $\phi: M\to M$ with no fixed point. Then there exists $p$ such that $d(p, \phi(p))=\min_{q\in M} d(q, \phi(q))$. Let $\gamma$ be the minimal geodesic joining $p$ to $\phi(p)$ with $\ell$ being the length. First observe that $d\phi(\gamma'(0))=\gamma'(\ell)$.
This follows from the triangle inequality and the estimate: $$ d(\gamma(\eta), \phi(\gamma(\eta)))\le d(\gamma(\eta), \phi(p))+d(\phi(p), \phi(\gamma(\eta)))=d(\gamma(\eta), \phi(p))+d(p, \gamma(\eta))=d(p, \phi(p)). $$ Now let $e_n=\gamma'(\eta)$. Let $e_{n-1}=C(e_n)$. Clearly $e_{n-1}(\eta)$ is parallel. On the other hand, $e_{n-1}(0)=C(\gamma'(0))$, $e_{n-1}(\ell)=C(\gamma'(\ell))=C(d\phi(\gamma'(0)))=d\phi(e_{n-1})$. This shows that if $\beta(s)$ is a geodesic starting from $p$ with $\beta'(0)=e_{n-1}$, then $\tilde{\beta}(s)=\phi(\beta(s))$ will be a geodesic starting from $\gamma(\ell)$ with $\tilde{\beta}'(0)=e_{n-1}(\ell)$. Consider the variation $\gamma(\eta, s)=\exp_{\gamma(\eta)}(se_{n-1}(\eta))$. The second variation formula on the energy $\mathcal{E}(s)=\frac{1}{2}\int_0^\ell |\frac{\partial \gamma}{\partial \eta}|^2$ gives that $$ \frac{d^2}{ds^2} \mathcal{E}(0)=-\int_0^\ell \langle R_{e_{n-1}, \gamma'}\gamma', e_{n-1}\rangle <0. $$ This contradicts the fact that $\gamma_0(\eta)=\gamma(\eta, 0)$ is length minimizing (hence also energy minimizing) among all $\gamma_s(\eta)=\gamma(\eta, s)$, which join $\beta(s)$ to $\tilde{\beta}(s)=\phi(\beta(s))$. \end{proof} Regarding the diameter estimate, we have the following result under the assumption of the orthogonal Ricci lower bound, whose proof is the same as that of Myers' theorem. \begin{theorem}\label{thm-diameter} Let $(M^m, g)$ be a K\"ahler manifold with $Ric^{\perp}(X, X)\ge (m-1) \lambda |X|^2$ with $\lambda>0$. Then $M$ is compact with diameter bounded from above by $\sqrt{\frac{2}{\lambda}}\cdot \pi$. Moreover, for any geodesic $\gamma(\eta): [0, \ell]\to M$ with length $\ell>\sqrt{\frac{2}{\lambda}}\cdot \pi$, the index $i(\gamma)\ge 1$. \end{theorem} Note that this estimate is not sharp for the Fubini-Study metrics. It is an interesting question {\it whether or not a compact K\"ahler manifold with positive orthogonal Ricci curvature is simply-connected.} The case for Ricci curvature is a theorem of S.
Kobayashi \cite{Ko}. The following result provides a generalization of a result of Tam and Yu \cite{TamYu}. \begin{corollary}Assume that $(M^m, g)$ satisfies $Ric^{\perp}(X, X)\ge (m-1) \lambda |X|^2$ and $H(X)\ge 2\lambda |X|^4$ with $\lambda>0$. Assume that there exists a complex hypersurface $P$ and a point $Q\in M$ such that $d(P, Q)=\frac{\pi}{\sqrt{2\lambda}}$. Then $(M^m, g)$ is holomorphic-isometric to a complex projective space with the Fubini-Study metric. \end{corollary} \begin{proof} Without loss of generality we let $\lambda=2$. Under the assumption, it is known that $d(P, Q)\le \frac{\pi}{2}$. The assumption and the comparison theorems proved above imply that the ratio of the area element of the level hypersurfaces of a complex hypersurface over the area element of the level hypersurfaces of $\mathbb{CP}^{m-1}\subset \mathbb{CP}^m$, and the ratio of the area element of the level spheres (centered at a point) over that of the spheres in $\mathbb{CP}^m$, are all monotone decreasing. This shows that for any $\ell \in (0, \frac{\pi}{2})$, $B(P, \ell)\cap B(Q, \frac{\pi}{2}-\ell)=\emptyset$ and \begin{eqnarray*} 1 &\ge& \frac{Vol (B(P,\ell))}{Vol (M)}+\frac{Vol(B(Q, \frac{\pi}{2} -\ell))}{Vol(M)}\\ &\ge& \frac{1}{Vol (\mathbb{CP}^m)}\left(\int_{\mathbb{CP}^{m-1}}\int_0^\ell 2\pi \cos ^{2m-1} t \cdot \sin t\, dt+ \int_{\mathbb{S}^{2m-1}}\int_0^{\frac{\pi}{2}-\ell} \sin^{2m-1} t \cdot \cos t\, dt\right)\\ &=&1. \end{eqnarray*} The claimed rigidity follows from the equality case in the volume/area comparison as in the classical case.\end{proof} \section{Proof of the vanishing theorem} In this section we shall prove Theorem \ref{thm-1connect}. In a recent paper \cite{XYang}, X. Yang proved that any compact K\"ahler manifold $M^m$ with positive holomorphic sectional curvature $H$ satisfies $h^{p,0}=0$ for all $1\leq p\leq m$, using the form version of the Bochner identity. By employing this method we prove that, under the $Ric^{\perp } >0$ condition, $h^{m-1,0}=h^{2,0}=0$.
Let $s$ be a global holomorphic $p$-form on $M^m$. The Bochner identity (cf. Ch III, Proposition 1.5 of \cite{Ko2}, as well as Proposition 2.1 of \cite{Ni-JDG}) gives $$ \partial \overline{\partial } |s|^2 = \langle \nabla s , \overline{\nabla s} \rangle - \widetilde{R}(s, \overline{s}, \cdot , \cdot ) $$ where $\widetilde{R}$ stands for the curvature of the Hermitian bundle $\bigwedge^p\Omega$, and $\Omega=(T'M)^*$ is the holomorphic cotangent bundle of $M$. The metric on $\bigwedge^p\Omega$ is derived from the metric of $M^m$. It is useful to note that $\tilde{R}$ acts on $(p, 0)$-forms as a special case of the curvature action on tensors. Precisely, for any holomorphic $(p, 0)$-form $s$ and any given tangent direction $v$ at a point $x_0$, there is a local coframe $\{dz_i\}$, unitary at $x_0$, such that \begin{equation}\label{eq:40} \langle \sqrt{-1}\partial\bar{\partial} |s|^2, \frac{1}{\sqrt{-1}}v\wedge \bar{v}\rangle =\langle \nabla_v s, \bar{\nabla}_{\bar{v}} \bar{s}\rangle +\frac{1}{p!}\sum_{I_p} \sum_{k=1}^p R_{v\bar{v}i_k \bar{i}_k}|a_{I_p}|^2, \end{equation} where $s=\frac{1}{p!}\sum_{I_p} a_{I_p}dz^{i_1}\wedge \cdots \wedge dz^{i_p}$ and $I_p=(i_1, \cdots, i_p)$. The $\langle\cdot , \cdot\rangle$ on the left hand side is the scalar product between the $(1, 1)$-forms and their dual, instead of the bilinear extension of the Hermitian product. If $M$ admits a metric of positive holomorphic sectional curvature, the second variation argument as in the proof of Corollary \ref{Gold} implies that $R_{v\bar{v}i_k \bar{i}_k}>0$ for $v$, a unit vector which attains the minimum of the holomorphic sectional curvature among all unit vectors $w\in T_{x_0}'M$ at the given $x_0$. This is the argument of \cite{XYang} proving the vanishing of $h^{p, 0}$ under the positivity of the holomorphic sectional curvature. Now we adapt this to prove Theorem \ref{thm-1connect}.
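Let us record the maximum principle step that will be invoked repeatedly (a sketch, simply restating (\ref{eq:40}) at a maximum point). If $|s|^2$ attains its maximum at $x_0$, then the $(1,1)$-form $\sqrt{-1}\partial\bar{\partial} |s|^2$ is nonpositive at $x_0$, so (\ref{eq:40}) gives, for every type $(1, 0)$ direction $v$, $$ 0\ge \langle \nabla_v s, \bar{\nabla}_{\bar{v}} \bar{s}\rangle +\frac{1}{p!}\sum_{I_p} \sum_{k=1}^p R_{v\bar{v}i_k \bar{i}_k}|a_{I_p}|^2, \quad \mbox{and hence} \quad \frac{1}{p!}\sum_{I_p} \sum_{k=1}^p R_{v\bar{v}i_k \bar{i}_k}|a_{I_p}|^2\le 0. $$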
If $s$ is not identically zero, then $|s|^2$ will attain its nonzero maximum somewhere, say $x_0$, and at this point we have \begin{equation*} \widetilde{R}(s, \overline{s},v, \overline{v}) \geq 0 \end{equation*} for any type $(1,0)$ tangent vector $v\in T_{x_0}M$. We want to show that this will contradict the assumption $Ric^{\perp} >0$ when either $p=m-1$ or $p=2$. The $p=m-1$ case is easy. In a small neighborhood of $x_0$, we can write $s= f \varphi_2\wedge \cdots \wedge \varphi_m$, where $f\neq 0$ is a function and $\{ \varphi_1, \varphi_2, \ldots , \varphi_m\} $ are local $(1,0)$-forms forming a coframe dual to a local tangent frame $\{E_1, \ldots , E_m\}$, which is unitary at $x_0$. We have $$ \widetilde{R} (s, \overline{s},v, \overline{v}) = - |f|^2 \sum_{i=2}^m R_{i\overline{i}v\overline{v}} \geq 0 $$ for any tangent direction $v$, where $R$ is the curvature tensor of $M$. If we take $v=E_1$, we get $Ric^{\perp} (E_1, \overline{E}_1) \leq 0$, a contradiction. Now consider the $p=2$ case. Suppose that $s$ is a non-trivial global holomorphic $2$-form on $M^m$. Let $r\geq 1$ be the largest positive integer such that the wedge product $s^r$ is not identically zero. Since we already have $h^{m,0}=h^{m-1,0}=0$, we know that $2r\leq m-2$. We will apply the $\partial\bar{\partial}$-Bochner formula to the $2r$-form $\sigma = s^r$. Let $x_0$ be a maximum point of $|\sigma |^2$. At $x_0$, let us write $s=\sum_{i,j} f_{ij}\varphi_i\wedge \varphi_j$ under any unitary coframe $\{\varphi_j\}$ which is dual to a local unitary tangent frame $\{E_j\}$. The $m\times m$ matrix $A= (f_{ij})$ is skew-symmetric. As is well-known (cf. \cite{Hua}), there exists a unitary matrix $U$ such that $\ ^t\!U AU$ is in the block diagonal form where each non-zero diagonal block is a constant multiple of $E$, where $$ E=\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right].
$$ In other words, we can choose a unitary coframe $\varphi$ at $x_0$ such that $$ s = \lambda_1 \varphi_1\wedge \varphi_2 + \lambda_2 \varphi_3 \wedge \varphi_4 + \cdots + \lambda_k \varphi_{2k-1}\wedge \varphi_{2k}, $$ where $k$ is a positive integer and each $\lambda_i\neq 0$. Clearly, $k\leq r$ since $s^k\neq 0$ at $x_0$. If $k<r$, then $\sigma = s^r =0$ at $x_0$, which is a maximum point for $|\sigma|^2$, implying $\sigma \equiv 0$, a contradiction. So we must have $k=r$. Thus $\sigma = \lambda \varphi_1\wedge \cdots \wedge \varphi_{2r}$, where $\lambda = \lambda_1 \cdots \lambda_k\neq 0$. From the Bochner formula, we get that \begin{equation}\label{eq:41} \sum_{i=1}^{2r} R_{i\overline{i}v\overline{v}} \leq 0 \end{equation} for any tangent direction $v$ of type $(1,0)$ at $x_0$. From this we shall derive a contradiction to our assumption that $Ric^{\perp}>0$. Denote by $W\cong {\mathbb C}^{2r}$ the subspace in $T_{x_0}'M$ spanned by $E_1, \ldots , E_{2r}$. By letting $v\in W$, we see that the `Ricci' of the restriction $R|_W$ of the curvature tensor $R$ on $W$ is nonpositive, thus the `scalar' curvature of $R|_W$ is also nonpositive: \begin{equation}\label{eq:42} S|_W = \sum_{i,j=1}^{2r} R_{i\overline{i}j\overline{j}} \leq 0. \end{equation} On the other hand, for each $1\leq j\leq 2r$, $Ric^{\perp}(E_j, \overline{E}_j)>0$. By adding them up, we get \begin{equation}\label{eq:43} 0< \sum_{j=1}^{2r} Ric^{\perp}(E_j, \overline{E}_j) = \sum_{1\leq i\neq j \leq 2r} R_{i\overline{i}j\overline{j}} + \sum_{j=1}^{2r} \sum_{\ell =2r+1}^{m} R_{j\overline{j}\ell \overline{\ell }}. \end{equation} By applying (\ref{eq:41}) to $v=E_{\ell}$ for each $\ell$, we know that the second term on the right hand side of (\ref{eq:43}) is nonpositive, therefore we get \begin{equation}\label{eq:44} \sum_{1\leq i\neq j \leq 2r} R_{i\overline{i}j\overline{j}} = S|_W - \sum_{i=1}^{2r} H(E_i) > 0. 
\end{equation} Note that for any $P\in U(2r)$, if we replace $\{ E_1, \ldots , E_{2r} \}$ by $\{ \tilde{E}_1, \ldots , \tilde{E}_{2r} \}$ where $\tilde{E}_i = P_{ij} E_j$, then the above inequality still holds. Taking the average integral $\aint \ $ over $U(2r)$, and using Berger's lemma, we get $$ 0< S|_W - 2r \aint H(PE_1) = S|_W - 2r \frac{2}{2r(2r+1)} S|_W = \frac{2r-1}{2r+1} S|_W, $$ so $S|_W>0$, contradicting (\ref{eq:42}). This proves that $h^{2,0}=0$ for any compact K\"ahler manifold $M^m$ with $Ric^{\perp}>0$ everywhere, and we have completed the proof of Theorem \ref{thm-1connect}. The proof in fact yields the following more general result, which is in the same spirit as the result in \cite{Ni-Zheng2}. \begin{corollary}\label{coro:31} The vanishing of the Hodge number $h^{2,0}(M)$ follows if $(M, g)$ is compact and for any unitary pair $\{E_i\}_{i=1, 2}$ with $E_1\perp E_2$ $$ Ric^{\perp}(E_1, \overline{E}_1)+Ric^{\perp}(E_2, \overline{E}_2)>0. $$ In particular, $M$ is projective. \end{corollary} Modifying the argument also proves the following result, which is in fact different from the above corollary since $Ric^\perp(Z, \overline{Z})$ does not come from a Hermitian symmetric sesquilinear form. Similar to \cite{Ni-Zheng2}, for any $k$-subspace $\Sigma\subset T_x'M$, we define $$ Ric^{\perp}_k(x, \Sigma)=\aint_{Z\in \Sigma, |Z|=1} Ric^\perp(Z, \overline{Z})\, d\theta(Z) $$ where $\aint f (Z)\, d\theta(Z)$ denotes $\frac{1}{Vol(\mathbb{S}^{2k-1})}\int_{\mathbb{S}^{2k-1}}f(Z)\, d\theta(Z)$. We say $Ric_k^{\perp}(x)>0$ if for any $k$-subspace $\Sigma\subset T_x'M$, $Ric^{\perp}_k(x, \Sigma)>0$. \begin{theorem} Let $(M, g)$ be a compact K\"ahler manifold such that $Ric^{\perp}_2(x)>0$ for any $x\in M$. Then $h^{2, 0}=0$. In particular, $M$ is projective. \end{theorem} \begin{proof} First it is easy to see that $Ric_l^{\perp}(x)>0$ implies that $Ric_k^{\perp}(x)>0$ for any $k\ge l$.
We observe that, if $\Sigma=\operatorname{span}\{E_1, E_2, \cdots, E_l\}$, \begin{eqnarray*} Ric^{\perp}_l(x, \Sigma)&=&\aint_{Z\in \Sigma, |Z|=1} Ric^\perp(Z, \overline{Z})\, d\theta(Z)=\aint_{Z\in \Sigma, |Z|=1} Ric(Z, \overline{Z})-H(Z)\, d\theta(Z)\\ &=&\aint \frac{1}{Vol(\mathbb{S}^{2m-1})}\int_{\mathbb{S}^{2m-1}} mR(Z, \overline{Z}, W, \overline{W})-H(Z)\, d\theta(W)\, d\theta(Z)\\ &=&\frac{1}{Vol(\mathbb{S}^{2m-1})}\int_{\mathbb{S}^{2m-1}}\left( \aint mR(Z, \overline{Z}, W, \overline{W})-H(Z)\, d\theta(Z)\right)d\theta(W)\\ &=& \frac{1}{l}\left(Ric(E_1, \overline{E}_1)+Ric(E_2, \overline{E}_2)+\cdots +Ric(E_l, \overline{E}_l) \right)-\frac{2}{l(l+1)}S_l(x, \Sigma) \end{eqnarray*} where $S_l(x, \Sigma)$ is the scalar curvature of $R$ restricted to $\Sigma$ (cf. \cite{Ni-Zheng2}). Now we adapt the proof of Theorem \ref{thm-1connect} above: for $W=\operatorname{span}\{E_1, \cdots, E_{2r}\}$, the $\partial\bar{\partial}$-Bochner formula implies that $S_{2r}(x_0, W)\le 0$. On the other hand the above calculation and (\ref{eq:41}) imply that $$ \frac{2r-1}{2r(2r+1)}S_{2r}(x_0, W)\ge Ric^{\perp}_{2r}(x_0,W)>0. $$ The contradiction implies the theorem. \end{proof} Note that it is well known that (cf. \cite{Ko2}, Theorem 3.4 of Ch. 3) if $$Ric_k(x)=\min_{\{E_i\}} \left(Ric(E_1, \overline{E}_1)+Ric(E_2, \overline{E}_2)+\cdots +Ric(E_k, \overline{E}_k) \right)>0$$ everywhere, then $h^{p, 0}=0$ for any $p\ge k$. It was recently proved in \cite{Ni-Zheng2} that the same result holds if $S_k>0$. Given the above relation between $Ric^\perp_k(x)$, $Ric_k(x)$, and $S_k(x)$, it is natural to conjecture that $h^{p, 0}=0$ if $Ric^\perp_k(x)>0$. Clearly an affirmative answer to this question would imply the main conjecture in the introduction.
To prove Proposition \ref{bochner}, observe that for any holomorphic vector field $s$ the $\partial\bar{\partial}$-Bochner formula can be applied to obtain that $$ \langle \sqrt{-1}\partial\bar{\partial} |s|^2, \frac{1}{\sqrt{-1}}v\wedge \bar{w}\rangle \ = \ \langle \nabla_v s, \bar{\nabla}_{\bar{w}} \bar{s}\rangle- R_{v\bar{w}s\bar{s}}. $$ If $s$ is nonzero, as before at the point $x_0$, where $|s|^2$ attains its maximum we have that $$ R_{v\bar{v} s\bar{s}}\ge 0 $$ for any direction $v$. Summing over a unitary basis of $\{s\}^{\perp}$ we have a contradiction with $Ric^{\perp}<0$. \vspace{0.2in} Next we examine the relationship between the positivity of the three curvatures: $Ric$, $Ric^{\perp}$, and $H$. First of all, we observe that the positivity of two of them does not imply that of the third one, except the obvious case caused by $Ric = Ric^{\perp }+H$. $ \bullet$ Examples with $Ric>0$, $H>0$ but $Ric^{\perp}\ngeq 0$. To see such an example, let us consider the surface $M^2={\mathbb P}^2 \# \overline{{\mathbb P}^2}$, the blow-up of ${\mathbb P}^2$ at one point. We have $$ M^2= \{ ([z_0:z_1:z_2], [w_1:w_2]) \in {\mathbb P}^2 \times {\mathbb P}^1 \mid z_1w_2=z_2w_1\} .$$ For $\lambda >0$, let $\omega_{\lambda}$ be the metric on $M^2$ which is the restriction of $$ \sqrt{-1} \partial \overline{\partial } \log (|z_0|^2+ |z_1|^2+|z_2|^2) + \lambda \sqrt{-1} \partial \overline{\partial } \log (|w_1|^2+|w_2|^2) $$ on the product manifold ${\mathbb P}^2 \times {\mathbb P}^1$. By a straightforward computation, which is included in the appendix, we show that $Ric>0$ everywhere if and only if $\lambda>\frac{1}{2}$, and $H>0$ everywhere if and only if $\lambda >1$. So for any $\lambda >1$, we get an example with the desired curvature condition. Note that the metric has $Ric^{\perp} \ngeq 0$. In fact, $M^2$ does not admit any K\"ahler metric with $Ric^{\perp }\geq 0$ everywhere by the result of Gu-Zhang \cite{GuZhang}.
$ \bullet$ Examples with $Ric>0$, $Ric^{\perp}>0$ but $H\ngeq 0$. In later sections, we will construct examples of complete $\mathsf{U}(m)$-invariant K\"ahler metrics on ${\mathbb C}^m$ such that the Ricci curvature and the orthogonal bisectional curvature $B^{\perp}$ are both everywhere positive, yet the holomorphic sectional curvature $H\ngeq 0$. In fact, there are such examples where $H$ is negative in some tangent directions at every point outside a compact subset. Note that, as we mentioned before, $B^{\perp}>0\Rightarrow QB>0\Rightarrow Ric^{\perp} >0$. We would also point out that there are examples of K\"ahler metrics where one of these three curvatures is positive while the other two are not. $ \bullet$ Examples with $H>0$ but $Ric\ngeq 0$, $Ric^{\perp}\ngeq 0$. For instance, consider the Hirzebruch surface $F_n={\mathbb P}( {\mathcal O}_{{\mathbb P}^1} \oplus {\mathcal O}_{{\mathbb P}^1}(-n))$ with $n>2$. By a well-known result of Hitchin \cite{Hitchin}, all $F_n$ admit K\"ahler metrics with $H>0$ everywhere. On the other hand, when $n> 2$, the first Chern class $c_1(F_n) \ngeq 0$, so there is no K\"ahler metric with $Ric\geq 0$. There is no K\"ahler metric with $Ric^{\perp} \geq 0$ either, by the result of Gu-Zhang. $ \bullet$ Examples with $Ric>0$ but $Ric^{\perp} \ngeq 0$, $H\ngeq 0$. To see such an example, we can simply take the previous example (see the Appendix for details) of the metric on ${\mathbb P}^2 \# \overline{{\mathbb P}^2}$ with the parameter $\lambda $ in $(\frac{1}{2}, 1)$. In this case one has $Ric >0$ everywhere, but $H$ is negative somewhere in some directions. The surface does not admit any metric with $Ric^{\perp}\geq 0$ for the reason given above. Note that on this surface, there are metrics with $H>0$. In fact, it is conjectured by Yau that any rational surface (or any rational manifold in higher dimensions) admits a K\"ahler metric with $H>0$ everywhere, although this is still open for most of the rational surfaces.
$ \bullet$ Examples with $Ric^{\perp } >0$ but $Ric \ngeq 0$, $H\ngeq 0$. Examples of complete $\mathsf{U}(m)$-invariant K\"ahler metrics on ${\mathbb C}^m$ with the above curvature properties will be constructed in a later section. In fact the metric constructed will have $B^\perp>0$, but $Ric<0$ and $H<0$ for some directions at every point outside a compact subset. \section{Examples--preliminary} We will follow the notations of \cite{WZ}, \cite{HT}, and \cite{Ni-Niu2}. Let $g$ be a $\mathsf{U}(m)$-invariant K\"ahler metric on ${\mathbb C}^m$ with K\"ahler form $\omega_g$. Denote by $(z_1, \ldots , z_m)$ the standard holomorphic coordinates of ${\mathbb C}^m$ and write $r=|z_1|^2+ \cdots + |z_m|^2$. Since $g$ is $\mathsf{U}(m)$-invariant, one can write $\omega_g = \sqrt{\!-\!1}\partial \overline{\partial }P(r)$ for some smooth function $P$ on $[0,\infty )$. Note that $\omega_g >0$ means that the smooth functions $f=P'>0$ and $h=(rf)'>0$, and the metric is complete if and only if $$ \int_0^{\infty } \sqrt{ \frac{h}{r} } \ dr = \infty . $$ Here we adapt the constructions in \cite{WZ, HT, Ni-Niu2} to illustrate $\mathsf{U}(m)$-invariant metrics on $\mathbb{C}^m$ with the various properties promised in the last section. The basis is the ansatz and computation laid out in \cite{WZ}; below is a summary. In \cite{WZ}, Wu and Zheng considered the $\mathsf{U}(m)$-invariant K\"ahler metrics on $\mathbb{C}^m$ and obtained necessary and sufficient conditions for the nonnegativity of the curvature operator, the nonnegativity of the sectional curvature, and the nonnegativity of the bisectional curvature, respectively. In \cite{YZ}, Yang and Zheng later proved that the necessary and sufficient condition in \cite{WZ} for the nonnegativity of the sectional curvature also characterizes the nonnegativity of the complex sectional curvature under the unitary symmetry. In \cite{HT}, Huang and Tam obtained the necessary and sufficient conditions for (NOB) and (NQOB) respectively.
Moreover they constructed a $\mathsf{U}(m)$-invariant K\"ahler metric on $\mathbb{C}^m$ which is of (NQOB), but does not have (NOB) nor nonnegative Ricci curvature. In \cite{Ni-Niu2}, the construction was modified to illustrate an example with (NOB), but whose holomorphic sectional curvature is negative somewhere. In later sections we will construct $\mathsf{U}(m)$-invariant K\"ahler metrics on $\mathbb{C}^m$ which have (NOB) but whose Ricci curvature is negative somewhere (this of course implies that the holomorphic sectional curvature must be negative somewhere). We will also construct examples which have (NOB) and positive Ricci curvature, but whose holomorphic sectional curvature is negative somewhere. We follow the same notations as in \cite{WZ, YZ}. Let $(z_1, \cdots, z_m)$ be the standard coordinates on $\mathbb{C}^m$ and $r=|z|^2$. A $\mathsf{U}(m)$-invariant metric on $\mathbb{C}^m$ has the K\"ahler form \begin{equation} \omega=\frac{\sqrt{\!-\!1}}{2}\p\bar{\p} P(r) \end{equation} where $P\in C^\infty\left([0, +\infty)\right)$. Under the local coordinates, the metric has the components: \begin{equation}\label{eq:g} g_{i\bar{j}}=f(r)\delta_{ij}+f'(r)\bar{z}_i z_j. \end{equation} We further denote: \begin{equation}\label{eq:g1} f(r)=P'(r), \quad h(r)=(rf)'. \end{equation} It is easy to check that $\omega$ gives a complete K\"ahler metric on $\mathbb{C}^m$ if and only if \begin{equation}\label{eq:10} f>0, \, h>0, \, \int_0^\infty \frac{\sqrt{h}}{\sqrt{r}}dr=+\infty. \end{equation} If $h>0$, then $\xi=-\frac{rh'}{h}$ is a smooth function on $[0, \infty)$ with $\xi(0)=0$. On the other hand, if $\xi$ is a smooth function on $[0, \infty)$ with $\xi(0)=0$, one can define $h(r)=\exp(-\int_0^r \frac{\xi(s)}{s}ds)$, so that $h(0)=1$, and $f(r)=\frac{1}{r}\int_0^r h(s)\, ds$. It is easy to see that $\xi(r)=-\frac{rh'}{h}$. Then (\ref{eq:g}) defines a $\mathsf{U}(m)$-invariant K\"ahler metric on $\mathbb{C}^m$.
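The correspondence $\xi \leftrightarrow (h,f)$ above can be made concrete with a quick numerical sanity check. The following sketch is purely illustrative and not part of the construction below: the profile $\xi(r)=\frac{r}{2(1+r)}$ is our own sample choice, for which $h(r)=\exp(-\int_0^r \frac{\xi(s)}{s}\,ds)=(1+r)^{-1/2}$ and $f(r)=\frac{1}{r}\int_0^r h(s)\,ds=\frac{2(\sqrt{1+r}-1)}{r}$ in closed form; we verify numerically that $\xi=-\frac{rh'}{h}$ is recovered and that $f,h>0$.

```python
import math

# Sample profile (our own illustration, not from the paper):
# xi(r) = r/(2(1+r)), so xi(s)/s = 1/(2(1+s)) and
# h(r) = exp(-int_0^r xi(s)/s ds) = (1+r)^(-1/2) in closed form.
def xi(r):
    return r / (2.0 * (1.0 + r))

def h_closed(r):
    return (1.0 + r) ** -0.5

def h_quadrature(r, n=20000):
    # trapezoidal rule for int_0^r xi(s)/s ds; the integrand
    # xi(s)/s = 1/(2(1+s)) extends smoothly to s = 0
    integrand = lambda s: 1.0 / (2.0 * (1.0 + s))
    step = r / n
    total = 0.5 * (integrand(0.0) + integrand(r))
    total += sum(integrand(i * step) for i in range(1, n))
    return math.exp(-step * total)

def f(r):
    # f(r) = (1/r) int_0^r h(s) ds = 2(sqrt(1+r)-1)/r in closed form
    return 2.0 * (math.sqrt(1.0 + r) - 1.0) / r

def xi_recovered(r, eps=1e-6):
    # xi = -r h'/h via a central difference
    hp = (h_closed(r + eps) - h_closed(r - eps)) / (2.0 * eps)
    return -r * hp / h_closed(r)

for r in [0.1, 1.0, 5.0, 25.0]:
    assert abs(h_quadrature(r) - h_closed(r)) < 1e-4   # h matches its defining integral
    assert abs(xi_recovered(r) - xi(r)) < 1e-5         # xi = -r h'/h is recovered
    assert h_closed(r) > 0 and f(r) > 0                # omega > 0: f, h positive
```

Since $0<\xi<\frac{1}{2}<1$ on $(0,\infty)$ here, this sample profile also falls in the completeness regime discussed below.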
The components of the curvature operator of a $\mathsf{U}(m)$-invariant K\"ahler metric under the orthonormal frame $\{E_1=\frac{1}{\sqrt{h}}\p_{z_1}, E_2=\frac{1}{\sqrt{f}}\p_{z_2}, \cdots, E_m=\frac{1}{\sqrt{f}} \p_{z_m}\}$ at $(z_1, 0, \cdots, 0)$ are given as follows, see \cite{WZ}: \begin{eqnarray} A&=& R_{1\bar{1}1\bar{1}}=-\frac{1}{h}\left(\frac{rh'}{h}\right)'=\frac{\xi'}{h}; \label{eq:A}\\ B&=& R_{1\bar{1}i\bar{i}}=\frac{f'}{f^2}-\frac{h'}{hf}=\frac{1}{(rf)^2}\left[rh-(1-\xi)\int_0^r h(s)\, ds\right],\, i\ge 2;\label{eq:B}\\ C&=& R_{i\bar{i}i\bar{i}}=2R_{i\bar{i}j\bar{j}}=-\frac{2f'}{f^2}=\frac{2}{(rf)^2}\left(\int_0^r h(s)\, ds-rh\right),\, i\neq j, i, j\ge 2.\label{eq:C} \end{eqnarray} The other components of the curvature tensor are zero, except those obtained by the symmetric properties of curvature tensor. The following result was proved in \cite{WZ}, which plays an important role in the construction. \begin{theorem}[Wu-Zheng] (1) If $0<\xi<1$ on $(0, \infty)$, then $g$ is complete. (2) $g$ is complete and has positive bisectional curvature if and only if $\xi'>0$ and $0<\xi<1$ on $(0, \infty)$, where $\xi'>0$ is equivalent to $A>0, B>0$ and $C>0$. (3) Every complete $\mathsf{U}(m)$-invariant K\"ahler metric on $\mathbb{C}^m$ with positive bisectional curvature is given by a smooth function $\xi$ in (2). \end{theorem} It was proved in \cite{WZ}, \cite{HT}, and \cite{Ni-Niu2} that the following result holds. \begin{proposition}\label{prop-51} Let $g$ be a $\mathsf{U}(m)$-invariant K\"ahler metric on ${\mathbb C}^m$, with positive functions $f$, $h$ on $[0, \infty )$ described as above. Then \newline (i)\quad $g$ has positive bisectional curvature $\iff$ $A>0$, $B>0$, $C>0$ $\iff$ $A>0$. \newline (ii) \quad If $m\geq 3$, then $g$ has positive orthogonal bisectional curvature $\iff$ $B>0$, $C>0$, $A+C>0$. \newline (iii)\quad If $m\geq 3$, then $g$ has positive orthogonal bisectional and positive Ricci curvature $\iff$ $B>0$, $C>0$, $A+C>0$, $A+(m-1)B>0$. 
\end{proposition} Note that when $m=2$, the positivity of the orthogonal bisectional curvature no longer guarantees $C>0$, and the curvature condition for (ii) actually becomes $B>0$ and $A+C>0$; while the condition for (iii) becomes $B>0$, $A+B>0$, $C+B>0$, and $A+C>0$. In particular, the ``$\Longleftarrow $" part of (ii) and (iii) are still valid when $m =2$. As noted in \cite{WZ}, there are plenty of metrics satisfying (i). In \cite{HT}, the authors perturbed metrics in (i) to obtain metrics in (ii) that are not in (i). For case (iii), as well as the comparison theorem proved earlier, the following questions are natural (the first question was raised in \cite{Ni-Niu2}): \noindent {\bf Questions.} 1) Does there exist a complete $\mathsf{U}(m)$-invariant K\"ahler metric on ${\mathbb C}^m$ with positive orthogonal bisectional curvature and positive Ricci curvature, whose holomorphic sectional curvature is not everywhere nonnegative? Namely, a metric $g$ such that $B$, $C$, $A+C$, $A+(m-1)B$ are positive functions on $[0,\infty )$, while $A$ is negative somewhere. 2) Does there exist a complete $ \mathsf{U}(m)$-invariant K\"ahler metric on ${\mathbb C}^m$ with positive orthogonal bisectional curvature and negative Ricci curvature somewhere? \vspace{0.2cm} In \cite{WZ}, the authors used the $\xi$ function to describe $\mathsf{U}(m)$-invariant K\"ahler metrics on ${\mathbb C}^m$, which is defined by $ \xi = - \frac{rh'}{h}$. Clearly, $\xi$ is smooth on $[0,\infty )$ with $\xi (0)=0$, and is determined by $g$. Conversely, $\xi$ determines $h$ and $f$ up to a positive constant multiple, and as proved in \cite{WZ}, if $0<\xi <1$ in $(0,\infty )$, then the metric $g$ determined by $\xi$ is complete. In terms of $\xi$, question 1) above can be rephrased (see the last paragraph of \cite{Ni-Niu2}) as finding a smooth function $\xi$ on $[0,\infty )$ with $\xi(0)=0$ and $0< \xi <1$ on $(0,\infty )$, such that $\xi'<0$ somewhere, yet \begin{eqnarray*} & & rh - (1-\xi )\int_0^r \!
h(s)ds > 0; \\ & & \int_0^r \! h(s)ds - rh \ > 0;\\ & & \xi' + \frac{2h}{(rf)^2} \ \big( \int_0^r\!h(s)ds - rh \big) \ > 0; \\ & & \xi' + \frac{(m-1)h}{(rf)^2} \ \big( rh - (1-\xi )\int_0^r \! h(s)ds \big) \ > 0 \end{eqnarray*} everywhere on $(0,\infty )$. It is not obvious why such a function must exist. So we will resort to another characterization of $\mathsf{U}(m)$-invariant metrics in \S 5 of \cite{WZ} by the generating surface of revolution. \section{Examples--a characterization} Let us first recall the characterization of $\mathsf{U}(m)$-invariant metrics by surfaces of revolution given in \S 5 of \cite{WZ}. Let $g$ be a complete $\mathsf{U}(m)$-invariant K\"ahler metric on ${\mathbb C}^m$, with $h$, $f$ defined as before. Let us assume that $h'<0$ everywhere. Write $\xi = - \frac{rh'}{h}$, then we have $0<\xi < 1$ on $(0,\infty )$ by the assumption $h'<0$ and the completeness of $g$. Write $x=\sqrt{rh}$. On $(0,\infty )$, we have $x'= \frac{\sqrt{h} (1-\xi )} {2\sqrt{r}} >0$, so $x$ is a strictly increasing function and $x'^2< \frac{h}{4r}$. Define a positive, strictly increasing function $y$ on $(0,\infty )$ so that $y(0^+)=0$ and $$ x'^2 + y'^2 = \frac{h}{4r}.$$ The metric $g$ is determined by the smooth function $y=F(x)$ on $(0,x_0)$, where $x_0=\lim_{r\rightarrow \infty }\sqrt{rh}\leq \infty $. It is easy to see that $F$ is actually smooth on $[0,x_0)$ and $F(0)=0$. From the definition, we have the relationship $$ 1+ \big(\frac{dF}{dx}\big)^2 = \frac{1}{(1-\xi )^2}.$$ As computed in \cite{WZ}, in terms of this generating function $F(x)$, the curvature component functions are $$ A = \frac{F'F''}{2x(1+F'^2)^2} , \ \ \ \ B = \frac{1}{v^2} \big( x^2\!- \!\frac{v}{\sqrt{1+F'^2}} \big) , \ \ \ \ C = \frac{2}{v^2} (v-x^2), $$ where $$ v (x)= rf = \int_0^x 2\tau \sqrt{1+F'^2(\tau )} d\tau.$$ To simplify these expressions, let us use the trick in \cite{WZ} by letting $$ F(x) = \frac{1}{2}p(x^2), \ \ \ \ p(t)=\int_0^t\!
\sqrt{q(\tau )}\ d\tau , \ \ \ \ q(t) = \frac{(k(t))^2-1} {t} $$ where $k(t)$ is a smooth function on $[0,\infty )$ such that $k(0)=1$ and $k(t)>1$ when $t>0$. We have $$ F'(x) = xp'(x^2)=x \sqrt{q(x^2)},$$ therefore $$ 1+F'^2(x) = 1+x^2q(x^2)=(k(x^2))^2. $$ Now let us denote by $t=x^2$, and $u(t)=\int_0^tk(\sigma )d\sigma $, then by a straightforward computation, we get $$ A=\frac{k'}{k^3}, \ \ \ \ B = \frac{1}{ku^2}(tk-u), \ \ \ \ C=\frac{2}{u^2}(u-t). $$ Write $u(t)=t+t\alpha (t)$. Then $k=u'=1+\alpha +t\alpha'$, and \begin{equation}\label{eq:ch1} A=\frac{t\alpha''+2\alpha'}{(1+\alpha +t\alpha')^3}, \ \ B = \frac{\alpha'}{(1+\alpha +t\alpha')(1+\alpha)^2}, \ \ C=\frac{2\alpha }{t(1+\alpha )^2}. \end{equation} \section{Examples with (NOB), positive Ricci, but negative holomorphic sectional curvature somewhere} The goal here is to prove the following result, which affirmatively answers a question in \cite{Ni-Niu2}. \begin{theorem}\label{thm-example1} For any $m\geq 2$, there are complete $\mathsf{U}(m)$-invariant K\"ahler metrics on ${\mathbb C}^m$ with positive Ricci curvature and positive orthogonal bisectional curvature everywhere, yet the holomorphic sectional curvature is negative somewhere. \end{theorem} Now that the expressions of the curvature components are reasonably simple, we could try to find functions $\alpha$ so that the desired curvature conditions are satisfied. For instance, let us consider the smooth function $\alpha (t)$ given by \begin{equation}\label{eq-71} \alpha (t) = \lambda \big(1 - \frac{1}{(1+t^2)^a}\big) , \end{equation} where $a$, $\lambda$ are positive constants with $a\in (\frac{1}{2},1)$. We have $\alpha (0)=0$, and $$\alpha '=\frac{2a\lambda t}{(1+t^2)^{a+1}}.$$ So $\alpha $ and $\alpha'$ are positive on $(0,\infty )$. Note that the function $\alpha'$ and $\frac{\alpha }{t}$ are actually also positive at $t=0$. By formula (6.1) in the previous section, we have $B>0$, $C>0$ everywhere.
Note that $A(0)>0$ as well, so the bisectional curvature of the metric $g$ is positive at the origin. Let us examine the situation away from the origin. For a constant $b>0$, we compute $$ (t^b \alpha')' = \frac{2a\lambda t^b}{(1+t^2)^{a+2}} \big( (b+1)+(b-1-2a)t^2\big). $$ For $b=2$, the right hand side factor becomes $3-(2a-1)t^2$, so the sign of $A$, or equivalently the sign of $t\alpha''+2\alpha'$, is the same as that of $(t_0-t)$, where $t_0=\sqrt{\frac{3}{2a-1}}$. That is, we have $$ A>0 \ \ \mbox{on} \ [0, t_0) ,\ \ \ \mbox{and} \ \ A<0 \ \ \mbox{on} \ (t_0, \infty ). $$ For $b=3$, $b-1-2a=2-2a>0$, so $(t^3\alpha')'>0$, thus by formula (6.1) $$ k^3(A+(m-1)B) \geq k^3(A+B) \geq t\alpha'' + 3\alpha' > 0.$$ It remains to check the condition $A+C>0$. We have $$k^3C\geq (1+\alpha +t\alpha' )\frac{2\alpha}{t} .$$ So when $2\alpha \geq 1$, we have $k^3C\geq \alpha '$, hence $k^3(A+C) \geq t\alpha '' +3\alpha ' >0$. Let us fix $a\in (\frac{1}{2}, 1)$, and choose $\lambda $ sufficiently large so that $$ \frac{1}{2\lambda -1} < \left(1+\frac{3}{2a-1}\right)^a - 1,$$ in which case we have $$ \left( \frac{2\lambda }{2\lambda -1} \right)^{\frac{1}{a}} -1 < \frac{3}{2a-1}. $$ Note that $$ 2\alpha <1 \iff 1-\frac{1}{(1+t^2)^a} <\frac{1}{2\lambda} \iff t^2 < \left(\frac{2\lambda }{2\lambda -1}\right)^{\frac{1}{a}} -1 .$$ So by our choice of $\lambda$, whenever $2\alpha<1$ we have $t< t_0=\sqrt{\frac{3}{2a-1}}$. But in this case $A>0$, thus $A+C>0$ as well. This completes the proof of Theorem \ref{thm-example1}. Note that the metric $g$ given by $\alpha$ in (\ref{eq-71}) has positive bisectional curvature in a ball $B_c$, while outside the ball, at every point the holomorphic sectional curvature is negative in some direction. One can also construct examples satisfying Theorem \ref{thm-example1} while the bisectional curvature is positive outside an annulus, in particular, outside a compact subset.
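As a cross-check of the proof just completed, here is a small numerical verification; it is our own illustration and not part of the argument. With the hypothetical sample values $a=\frac{3}{4}$ and $\lambda=1$, which satisfy the constraint above since $\frac{1}{2\lambda-1}=1<7^{3/4}-1\approx 3.30$, we evaluate $A$, $B$, $C$ from formula (6.1) on a grid of $t>0$ and confirm that $B>0$, $C>0$, $A+C>0$ and $A+(m-1)B>0$ for $m=2$, while $A<0$ for $t>t_0=\sqrt{6}$.

```python
import math

# Our own illustrative parameters: a = 3/4, lambda = 1 satisfy the
# smallness constraint 1/(2*lam - 1) < (1 + 3/(2a-1))**a - 1.
a, lam, m = 0.75, 1.0, 2
t0 = math.sqrt(3.0 / (2.0 * a - 1.0))  # sign change of A, here t0 = sqrt(6)

def alpha(t):  return lam * (1.0 - (1.0 + t * t) ** -a)
def alpha1(t): return 2.0 * a * lam * t * (1.0 + t * t) ** -(a + 1.0)
def alpha2(t):  # alpha'' obtained by differentiating alpha' once more
    return 2.0 * a * lam * (1.0 + t * t) ** -(a + 2.0) * (1.0 - (2.0 * a + 1.0) * t * t)

def ABC(t):
    # curvature components from formula (6.1), with k = 1 + alpha + t*alpha'
    k = 1.0 + alpha(t) + t * alpha1(t)
    A = (t * alpha2(t) + 2.0 * alpha1(t)) / k ** 3
    B = alpha1(t) / (k * (1.0 + alpha(t)) ** 2)
    C = 2.0 * alpha(t) / (t * (1.0 + alpha(t)) ** 2)
    return A, B, C

for t in [0.05 * j for j in range(1, 1001)]:  # grid t in (0, 50]
    A, B, C = ABC(t)
    assert B > 0 and C > 0            # positive orthogonal bisectional pieces
    assert A + C > 0                  # the condition checked in the proof
    assert A + (m - 1) * B > 0        # positive Ricci for m = 2
    if t > t0:
        assert A < 0                  # H < 0 in some direction beyond t0
```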
To construct such an example, let us consider \begin{equation} \alpha = t-2at^2+t^3, \end{equation} where $a>0$ is a constant to be determined. We have \begin{eqnarray*} && \frac{\alpha }{t} = 1 - 2at + t^2 \\ & & \alpha ' = 1 - 4at +3t^2\\ & & t\alpha'' +2\alpha '= 2(1-6at+6t^2)\\ & & t\alpha'' +3\alpha '=3-16at+15t^2\\ && t\alpha'' +2\alpha '+\frac{2\alpha}{t} = 2(2-8at+7t^2) \end{eqnarray*} We want to choose $a$ so that the middle line is negative somewhere, while the other four are positive everywhere in $(0,\infty )$. The first two guarantee that $B>0$, $C>0$, while the last two imply that $A+B>0$, $A+C>0$. The middle line has the same sign as $A$. Note that for positive constants $a$, $b$, $c$, the polynomial $a-bt+ct^2$ will be everywhere positive on $[0,\infty )$ if and only if $b^2 < 4ac$, and when $b^2>4ac$, the polynomial will be negative in the interval $(t_1, t_2)$, where $0<t_1<t_2$ are the two roots. Applying this criterion to the five quadratic polynomials above, we know that we want respectively $$ a^2<1, \ \ \ a^2<\frac{3}{4}, \ \ \ a^2> \frac{2}{3}, \ \ \ a^2 < \frac{45}{64}, \ \ \ a^2<\frac{7}{8} .$$ Since $\frac{2}{3} < \frac{45}{64} < \frac{3}{4}$, if we choose $a>0$ so that $\frac{2}{3}<a^2<\frac{45}{64}$, then the corresponding metric $g$ will have positive orthogonal bisectional and positive Ricci curvature everywhere, while the holomorphic sectional curvature is negative in some directions at every point in an annulus. The bisectional curvature is positive outside the annulus. \section{The examples with (NOB) but negative Ricci curvature and negative holomorphic sectional curvature somewhere} We present here two constructions. The first one is along the lines of \cite{HT} (see also \cite{Ni-Niu2}). Let $\xi$ be a smooth function on $[0, \infty)$ with $\xi(0)=0, \xi'(r)>0$ and $0<\xi(r)<1$ for $0<r<\infty$. Let $a=\lim_{r\rightarrow \infty} \xi(r)$. Then $0<a\le 1$.
By the discussion in the previous sections, this gives a complete $\mathsf{U}(m)$-invariant metric on $\mathbb{C}^m$ with positive bisectional curvature. The strategy of \cite{HT} is to perturb this metric by adding a perturbation term to $\xi$. This then yields a metric with the needed properties. It starts with some estimates for the metrics with positive bisectional curvature. In \cite{HT, WZ} the following estimates (cf. Lemma 4.1 of \cite{HT}) were obtained. \begin{lemma}\label{lm:HT} Let $\xi$ be as above with $\lim_{r\to \infty}\xi =a \, (\in (0, 1))$. The following hold:\\ (1) $\lim_{r\to \infty} h(r)=0$ and $\lim_{r\to\infty} \frac{h(r+r_0)}{h(r)}=1$ for any $r_0>0$.\\ (2) For any $r>0$, $\left( r h-(1-\xi)\int_0^r h\right)' >0$, and $$ \lim_{r\to \infty} \int_0^r h=\infty, \quad \lim_{r\to \infty} h=0, \quad \lim_{r\to \infty} \frac{rh}{\int_0^r h}=1-a. $$ (3) For any $\epsilon>0$, and for any $r_0>0$, there is $R>r_0$ such that $$ \xi'(R)-\epsilon h(R)C(R)<0. $$ (4) $\lim_{r\rightarrow \infty} h(r)C(r)=0$.\\ (5) For all $\epsilon>0$, there exists $\delta>0$ such that if $R\ge 3$ and $\delta\ge \eta\ge 0$ is a smooth function with support in $[R-1, R+1]$, then for all $r\ge 0$, $$ h(r)\le \bar{h}(r)\le (1+\epsilon)h(r), \quad\mbox{and}\, \int_0^r h\le \int_0^r \bar{h}\le (1+\epsilon)\int_0^r h, $$ where $\bar{h}(r)=\exp(-\int_0^r \frac{\bar{\xi}}{t}dt)$ and $\bar{\xi}=\xi-\eta$. \end{lemma} Let $\phi$ be a cutoff function on $\mathbb{R}$ as in \cite{HT} such that (i) $0\le \phi\le c_0$ with $c_0$ being an absolute constant; (ii) $\mbox{supp} (\phi) \subset [-1, 1]$; (iii) $\phi'(0)=1$ and $|\phi'|\le 1$. The construction is to perturb $\xi$ into $\bar{\xi}(r)=\xi(r)-\alpha h(R)C(R)\phi(r-R)$ for suitable choices of $R$ and $\alpha$. Note that this only changes the value of $\xi$ on a compact set. Once $\bar{h}$ is defined, equations (\ref{eq:A})--(\ref{eq:C}) define the corresponding curvature components $\bar{A}, \bar{B}, \bar{C}$ of the perturbed metric.
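To illustrate how mild the perturbation in part (5) of Lemma \ref{lm:HT} is, the following numerical sketch uses our own sample choices throughout: the profile $\xi(r)=\frac{r}{2(1+r)}$ (so $h(r)=(1+r)^{-1/2}$ in closed form), the center $R=5$, the bump height $\delta=0.01$, and a standard smooth bump for $\eta$. It checks that $h\le \bar h\le (1+\epsilon)h$ once $\delta$ is small, using $\bar h(r)/h(r)=\exp\big(\int_0^r \eta(t)/t\,dt\big)$.

```python
import math

R, delta, eps = 5.0, 0.01, 0.01   # sample choices: bump center, height, target bound

def h(r):
    # closed form for xi(r) = r/(2(1+r)): h(r) = (1+r)^(-1/2)
    return (1.0 + r) ** -0.5

def eta(r):
    # smooth bump, 0 <= eta <= delta, supported in [R-1, R+1]
    x = r - R
    if abs(x) >= 1.0:
        return 0.0
    return delta * math.exp(1.0 - 1.0 / (1.0 - x * x))

def hbar_over_h(r, n=4000):
    # hbar(r)/h(r) = exp( int_0^r eta(t)/t dt ); the integrand vanishes
    # outside [R-1, R+1], so integrate only there (up to r)
    lo, hi = R - 1.0, min(r, R + 1.0)
    if hi <= lo:
        return 1.0
    step = (hi - lo) / n
    total = 0.5 * (eta(lo) / lo + eta(hi) / hi)
    total += sum(eta(lo + i * step) / (lo + i * step) for i in range(1, n))
    return math.exp(step * total)

for r in [1.0, 4.5, 5.0, 6.0, 20.0]:
    ratio = hbar_over_h(r)
    assert 1.0 <= ratio <= 1.0 + eps   # h <= hbar <= (1+eps) h
```

Since the bump is supported in $[R-1,R+1]$, the ratio $\bar h/h$ is identically $1$ before the bump and constant after it, matching the statement that the perturbation only changes $\xi$ on a compact set.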
\begin{theorem}\label{thm-example} There is $1>\alpha>0$ such that for any $r_0>0$ there is $R>r_0$ satisfying the following: If $\bar{\xi}(r)=\xi(r)-\alpha h(R)C(R)\phi(r-R)$, then $\bar{\xi}$ determines a complete $\mathsf{U}(m)$-invariant K\"ahler metric on $\mathbb{C}^m$ such that \begin{enumerate} \item $\bar{A}+\bar{C}>0$ on $[R-1, R+1]$; \item $\bar{B}(r)>0$ for all $r$; \item $\bar{C}(r)>0$ for all $r$; and \item $\bar{A}(R)+(m-1)\bar{B}(R)<0$. \end{enumerate} In particular, $\bar{\xi}$ gives a complete $\mathsf{U}(m)$-invariant K\"ahler metric which satisfies (NOB) but has neither nonnegative Ricci curvature nor nonnegative holomorphic sectional curvature. \end{theorem} \begin{proof} Note that (1)--(3) imply (NOB). The estimate (4) shows the negativity of the Ricci curvature somewhere. First, for any $\alpha>0$, by choosing $R$ large, $\bar{\xi}$ (along with the $\bar{h}$ and $\bar{f}$) defines a complete K\"ahler metric on $\mathbb{C}^m$. Recall that $a\in (0, 1)$ is the limit $\lim_{r\to \infty}\xi(r)$ and $c_0$ is the bound of $|\phi|$. The proof of (2) and (3) is exactly the same as in \cite{Ni-Niu2}, which does not involve the careful picking of $\alpha>0$. We need to choose the constant $\alpha>0$ a bit more carefully here to achieve both (1) and (4) simultaneously. Note that in \cite{HT}, metrics were constructed with both $\bar{A}(R)+\bar{C}(R)$ and $\bar{A}(R)+(m-1)\bar{B}(R)$ being negative. By (\ref{eq:A}) and (\ref{eq:C}), for (1) we only need to prove it for $r\in [R-1, R+1]$. By the formula (\ref{eq:C}) and the proof of Lemma 4.2 in \cite{HT} (precisely (4.6) of \cite{HT}), we may choose a large $r_1$ so that if $R>r_1$, then for $r\in [R-1, R+1]$, $$ \bar{C}(r)\ge \frac{2}{(1+\epsilon)^2\int_0^R h}(a-2\epsilon+a\epsilon-\epsilon^2) $$ provided $a-2\epsilon+a\epsilon-\epsilon^2>0$; this condition clearly holds for $\epsilon>0$ sufficiently small. Here $a>0$ is the constant from Lemma \ref{lm:HT}.
On the other hand, $$ C(R)\le \frac{2}{\int_0^R h}(a+\epsilon) $$ if $r_1$ is large enough depending only on $\epsilon$ and $R>r_1$. Hence, if $\epsilon$ and $r_1$ satisfy the above conditions, then for $r\in [R-1, R+1]$, $$ \bar{C}(r)\ge \frac{a-2\epsilon+a\epsilon-\epsilon^2}{(a+\epsilon)(1+\epsilon)^2}C(R). $$ Therefore, if $\epsilon>0$ satisfies $a>\epsilon$ and $a-2\epsilon+a\epsilon-\epsilon^2>0$, we can find $r_1>r_0$ such that if $R>r_1$, then for $r\in [R-1, R+1]$, \begin{eqnarray} \bar{A}(r)+\bar{C}(r)&\ge& \frac{\xi'(r)-\beta}{\bar{h}}+\bar{C}(r)\nonumber\\ &\ge& \frac{-\beta}{\bar{h}(r)}+ \frac{a-2\epsilon+a\epsilon-\epsilon^2}{(a+\epsilon)(1+\epsilon)^2}C(R)\\ &\ge & -\frac{\beta}{(1-\epsilon)h(R)}+\frac{a-2\epsilon+a\epsilon-\epsilon^2}{(a+\epsilon)(1+\epsilon)^2}C(R)\nonumber\\ &= &\frac{1}{(1-\epsilon)h(R)}[-\beta+(1-\epsilon)\frac{a-2\epsilon+a\epsilon-\epsilon^2}{(a+\epsilon)(1+\epsilon)^2}h(R)C(R)]\nonumber. \end{eqnarray} Here $\beta=\alpha h(R)C(R)$, so that $\bar{\xi}'\ge \xi'-\beta$ since $|\phi'|\le 1$. In the third line we have used the fact that $-\frac{\beta}{\bar{h}(r)}\ge -\frac{\beta}{h(r)}\ge -\frac{\beta}{h(R+1)}$ and $\lim_{r\to \infty} \frac{h(r)}{h(r+r_0)}=1$. Hence for $r\in [R-1, R+1]$, $$ \bar{A}(r)+\bar{C}(r)\ge \frac{1}{(1-\epsilon)h(R)}[-\alpha +(1-\epsilon)\frac{a-2\epsilon+a\epsilon-\epsilon^2}{(a+\epsilon)(1+\epsilon)^2}]h(R)C(R). $$ Hence if we pick $\alpha=\frac{1}{2}$, for sufficiently small $\epsilon$ we can be sure that $\bar{A}(r)+\bar{C}(r)>0$. This proves (1). On the other hand, as in \cite{HT}, for $r_1\ge r_0$ sufficiently large and $R\ge r_1$, \begin{eqnarray*} \bar{C}(r)&=&\frac{2}{\int_0^r \bar{h}}\left(1-\frac{r\bar{h}}{\int_0^r \bar{h}}\right)\\ &\le& \frac{2}{\int_0^r \bar{h}}\left(1-\frac{r h}{(1+\epsilon)\int_0^r h}\right)\\ &\le& \frac{2}{\int_0^r h}\frac{a+2\epsilon}{1+\epsilon}. \end{eqnarray*} Here we have used part (2) of Lemma \ref{lm:HT}.
But $$ C(r)= \frac{2}{\int_0^r h}\left(1-\frac{rh}{\int_0^r h}\right)\ge \frac{2}{\int_0^r h}(a-\epsilon).$$ Hence for small $\epsilon$, $$ \bar{C}(r)\le \frac{a+2\epsilon}{(a-\epsilon)(1+\epsilon)} C(r). $$ This implies that \begin{eqnarray*} \bar{A}(R)+\frac{1}{3}\bar{C}(R)&=&\frac{\xi'(R)-\alpha C(R)h(R)}{\bar{h}(R)}+\frac13\bar{C}(R)\\ &\le&\frac{\xi'(R)-\alpha C(R)h(R)}{(1+\epsilon)h(R)}+\frac13 \frac{a+2\epsilon}{(a-\epsilon)(1+\epsilon)} C(R)\\ &=&\frac{1}{(1+\epsilon) h(R)}\left(\xi'(R)-\alpha C(R)h(R)+\frac13 \frac{a+2\epsilon}{a-\epsilon} C(R)h(R)\right). \end{eqnarray*} Noting part (3) of Lemma \ref{lm:HT}, and that we picked $\alpha=\frac{1}{2}$, for sufficiently small $\epsilon$ we have that \begin{equation}\label{eq:ric-hp1} \bar{A}(R)+\frac{1}{3}\bar{C}(R)\le 0. \end{equation} On the other hand, a similar calculation to the above shows that \begin{eqnarray*} \bar{C}(r)&\ge& \frac{2}{(1+\epsilon)\int_0^r h}\left(1-\frac{(1+\epsilon) rh}{\int_0^r h}\right)\\ &\ge& \frac{2}{(1+\epsilon)\int_0^r h} \left(1-(1+\epsilon)(1-a+\epsilon)\right)\\ &\ge&\frac{2(a-\epsilon)}{\int_0^r h}. \end{eqnarray*} Here $\epsilon$ is small and we may choose a different one in the last line. Thus together with (\ref{eq:ric-hp1}) we have $$ \bar{A}(R)\le -\frac13 \frac{2(a-\epsilon)}{\int_0^R h}. $$ On the other hand, as in Lemma 4.2 of \cite{HT}, for $R$ sufficiently large, $$ \bar{B}(R)\le \frac{\epsilon}{\int_0^R h}. $$ Combining these we conclude that $\bar{A}(R)+(m-1)\bar{B}(R)<0$ for $R\ge r_1$. This proves (4). \end{proof} One could also construct $\mathsf{U}(m)$-invariant complete K\"ahler metrics on ${\mathbb C}^m$ with $B^{\perp }>0$ but $Ric\ngeq 0$ and $H\ngeq 0$, using the notations and the construction in the previous section. Below are the details. For the sake of simplicity, we will work with the $m=2$ case. In this case, $B^{\perp}$ coincides with $Ric^{\perp}$, and by Proposition \ref{prop-51} and the remark afterwards, its positivity means $B>0$ and $A+C>0$.
So to ensure that the Ricci and the holomorphic sectional curvature $H$ are not everywhere nonnegative, we need $A\ngeq 0$ and $A+B\ngeq 0$. That is, it suffices to find such a metric satisfying $$B>0, \ \ C>0, \ \ A+C>0, \ \ A\ngeq 0, \ \ A+B\ngeq 0,$$ where the functions $A$, $B$, $C$ are expressed in terms of the $\alpha $ function by the formulae in $(\ref{eq:ch1})$. As in the previous sections, we may start with the function \begin{equation*} \alpha (t) = \lambda \big(1 - \frac{1}{(1+t^2)^a}\big) , \end{equation*} where $a$, $\lambda$ are positive constants with $a> \frac{1}{2}$. We will specify the range of $a$ and $\lambda $ later. As before, we have $\alpha (0)=0$, and $$\alpha '=\frac{2a\lambda t}{(1+t^2)^{a+1}}.$$ So $\alpha $ and $\alpha'$ are positive on $(0,\infty )$. Also, the function $\alpha'$ and $\frac{\alpha }{t}$ are positive at $t=0$, so we have $B>0$, $C>0$ everywhere, while $A$ has the same sign as $t_0-t$, where $t_0=\sqrt{\frac{3}{2a-1}}$. In particular, $A\ngeq 0$. As noticed before, for any $b>0$, we have $$ (t^b \alpha')' = \frac{2a\lambda t^b}{(1+t^2)^{a+2}} \big( (b+1)+(b-1-2a)t^2\big). $$ This function will be positive on $(0,\infty )$ if $b\geq 1+2a$, and negative for large $t$ if $b<1+2a$. In the following, we will take $a=6$. So $t_0=\sqrt{\frac{3}{11}}$. Clearly, we can choose $\lambda >0$ large enough so that $\alpha(t_0)>6$. Since $\alpha$ is strictly increasing, when $\alpha<6$, we must have $t<t_0$, thus $A>0$ hence $A+C>0$. When $\alpha \geq 6$, we have \begin{eqnarray*} (A+C) (1+\alpha +t\alpha')^3 & = & t\alpha'' +2\alpha ' + \frac{2\alpha }{t} (1+\alpha +t\alpha') \frac{(1+\alpha +t\alpha')^2}{(1+\alpha )^2} \\ & \geq & t\alpha'' +2\alpha ' + \frac{2\alpha }{t} (1+\alpha +t\alpha') \\ & \geq & t\alpha'' +2\alpha ' + 12 \alpha' \ = \ t^{-13} (t^{14}\alpha')' \ > \ 0 \end{eqnarray*} since $14>2a+1=13$. This demonstrates that $A+C>0$ everywhere.
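The inequalities established so far can also be screened numerically. In the sketch below (our own check; the value $\lambda=10$ is a hypothetical choice making $\alpha(t_0)=10\,(1-(14/11)^{-6})\approx 7.6>6$), we evaluate $A$, $B$, $C$ from $(\ref{eq:ch1})$ with $a=6$ on a grid of $t>0$ and confirm that $B>0$, $C>0$ and $A+C>0$ at every sampled point, while $A<0$ for $t>t_0=\sqrt{3/11}$.

```python
import math

# Our own illustrative parameters: a = 6 as in the text, lambda = 10
# (hypothetical; any lambda with alpha(t0) > 6 works).
a, lam = 6.0, 10.0
t0 = math.sqrt(3.0 / (2.0 * a - 1.0))  # = sqrt(3/11)

def alpha(t):  return lam * (1.0 - (1.0 + t * t) ** -a)
def alpha1(t): return 2.0 * a * lam * t * (1.0 + t * t) ** -(a + 1.0)
def alpha2(t):  # alpha'' obtained by differentiating alpha'
    return 2.0 * a * lam * (1.0 + t * t) ** -(a + 2.0) * (1.0 - (2.0 * a + 1.0) * t * t)

def ABC(t):
    # curvature components from the formulae in (eq:ch1)
    k = 1.0 + alpha(t) + t * alpha1(t)
    A = (t * alpha2(t) + 2.0 * alpha1(t)) / k ** 3
    B = alpha1(t) / (k * (1.0 + alpha(t)) ** 2)
    C = 2.0 * alpha(t) / (t * (1.0 + alpha(t)) ** 2)
    return A, B, C

assert alpha(t0) > 6.0                       # the largeness condition on lambda
for t in [0.01 * j for j in range(1, 5001)]:  # grid t in (0, 50]
    A, B, C = ABC(t)
    assert B > 0 and C > 0
    assert A + C > 0                         # the inequality demonstrated above
    if t > t0:
        assert A < 0                         # H < 0 in some direction
```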
To see that $A+B\ngeq 0$, let us observe that the inequality $$ t\alpha' \leq 2\alpha $$ always holds, as it is implied by $$ \frac{at^2}{(1+t^2)^{a+1}} \leq 1 - \frac{1}{(1+t^2)^{a}}, $$ which is true since $1+at^2 \leq (1+t^2)^a$ for any $t$. So we now have $$1+\alpha +t\alpha' \leq 3(1+\alpha).$$ Thus the quantity $(A+B) (1+\alpha +t\alpha')^3 $ can be estimated as \begin{eqnarray*} (A+B) (1+\alpha +t\alpha')^3 & = & t\alpha'' +2\alpha' + \big(\frac{1+\alpha +t\alpha'}{1+\alpha }\big)^2 \alpha' \\ & \leq & t\alpha'' +2\alpha' + 3^2\alpha' \\ & = & t^{-10} (t^{11}\alpha')'. \end{eqnarray*} Since $11<2a+1=13$, $(t^{11}\alpha')' < 0$ when $t$ is large. So we have $A+B\ngeq 0$ as desired. This construction gives the metric which has $B^{\perp}>0$, but has negative holomorphic sectional curvature and negative Ricci curvature outside a compact subset. \section{Appendix} In this appendix, we will give the calculation of the curvature for the surface $M^2= {\mathbb P}^2 \# \overline{{\mathbb P}^2 }$, when the metric is the restriction of the product metric. Consider $$ M^2 = \{ ([u_0\!:\!u_1\!:\!u_2], \ [v_1\!:\!v_2]) \in {\mathbb P}^2 \times {\mathbb P}^1 \mid u_1v_2 = u_2v_1 \}, $$ and let $\omega_g$ be the restriction on $M$ of the product metric $$ \omega_g = \sqrt{\!-\!1} \partial \overline{\partial} \log (|u_0|^2 \!+\!|u_1|^2 \!+\! |u_2|^2) + \lambda \sqrt{\!-\!1} \partial \overline{\partial} \log (|v_1|^2 \!+\!|v_2|^2 ) $$ where $\lambda >0$ is a constant. We will prove the following \begin{proposition} The surface $(M^2,\omega_g)$ will have its Ricci curvature positive everywhere if and only if $\lambda >\frac{1}{2}$, and it will have its holomorphic sectional curvature positive everywhere if and only if $\lambda >1$. \end{proposition} To see this, let us fix an arbitrary point $p\in M$. First let us consider the case when $u_0(p)\neq 0$. 
By a unitary change of coordinate in $(u_1,u_2)$ and $(v_1,v_2)$, we may assume that $p=([1\!:\!a\!:\!0], \ [1\!:\!0])$, where $a\in [0,\infty )$. So in a neighborhood of $p$, we have local holomorphic coordinate $(z_1,z_2)$ which corresponds to the point $([1\!:\!z_1\!:\! z_1z_2], [1\!:\!z_2])$, and $p=(a,0)$. In this neighborhood, the metric $\omega_g$ becomes $$ \omega_g = \sqrt{\!-\!1}\partial \overline{\partial} \log \eta + \lambda \sqrt{\!-\!1}\partial \overline{\partial} \log \sigma $$ where $\sigma = 1+|z_2|^2$ and $\eta = 1+|z_1|^2\sigma = 1+ |z_1|^2 + |z_1z_2|^2$. We compute that $$ g_{1\overline{1}} = \frac{\sigma }{\eta^2}, \ \ g_{1\overline{2}} = \frac{\overline{z}_1z_2}{\eta^2}, \ \ g_{2\overline{2}} = \frac{|z_1|^2(|z_1|^2\!+\!1)}{\eta^2} + \frac{\lambda }{\sigma^2} .$$ From this, we get \begin{eqnarray*} g_{1\overline{1},1} & = & -\frac{2}{\eta^3} \sigma^2 \overline{z}_1, \ \ \ \ \ \ \ g_{2\overline{2},2} \ = \ -\frac{2}{\eta^3}|z_1|^4(|z_1|^2\!+\!1)\overline{z}_2- \lambda\frac{2}{\sigma^3} \overline{z}_2, \\ g_{1\overline{2},1} & = & -\frac{2}{\eta^3} \sigma \overline{z}_1^2z_2, \ \ \ \ \ \ \ \ \ \ \ g_{1\overline{2},2} \ = \ \frac{1}{\eta^2} \overline{z}_1 -\frac{2}{\eta^3}|z_1|^2|z_2|^2\overline{z}_1, \\ g_{1\overline{2},\overline{1}} & = & \frac{1}{\eta^2} z_2 - \frac{2}{\eta^3} \sigma |z_1|^2 z_2, \ \ \ \ \ \ \ \ \ g_{1\overline{2},\overline{2}} \ = \ -\frac{2}{\eta^3}|z_1|^2\overline{z}_1z_2^2. 
\end{eqnarray*} Under the local coordinate $(z_1,z_2)$, the curvature components are given by $$ R_{i\overline{j}k\overline{\ell}} = - g_{i\overline{j},k\overline{\ell}} + \sum_{p,q=1}^2 g_{i\overline{p},k} \ \overline{g_{j\overline{q},\ell }} \ g^{\overline{p}q}.$$ At the point $p=(a,0)$, we have $\eta=1+a^2$, $\sigma=1$, and \begin{eqnarray*} && g_{1\overline{1}}=\frac{1}{\eta^2}, \ \ \ g_{1\overline{2}}=0, \ \ \ g_{2\overline{2}}=\frac{a^2\!+\!\lambda \eta}{\eta}, \\ && g_{1\overline{2},1} = g_{2\overline{1},1} = g_{2\overline{1},2} = g_{1\overline{1},2} = g_{2\overline{2},2}=0. \end{eqnarray*} From these, we get that at $p$ $$ R_{1\overline{2}1\overline{2}} = - g_{1\overline{2},1\overline{2}} + \frac{1}{g_{1\overline{1}}} g_{1\overline{1},1} \ \overline{g_{2\overline{1},2}} + \frac{1}{g_{2\overline{2}}} g_{1\overline{2},1} \ \overline{g_{2\overline{2},2}} \ = \ 0 .$$ Similarly, we also get $ R_{1\overline{1}1\overline{2}}= R_{1\overline{2}2\overline{2}} =0$ at $p$. Next, we compute at $p$ \begin{eqnarray*} R_{1\overline{1}2\overline{2}} & = & - g_{1\overline{2},2\overline{1}} + \frac{1}{ g_{1\overline{1}} } |g_{1\overline{1},2}|^2 + \frac{1}{ g_{2\overline{2}} } |g_{1\overline{2},2}|^2 \\ & = & -\frac{1}{\eta^2} + \frac{2a^2}{\eta^3} + 0 + \frac{\eta}{a^2+\lambda \eta } \ \frac{a^2}{\eta^4} \ = \ \frac{a^2-1}{\eta^3} + \frac{a^2}{\eta^3(a^2+\lambda \eta )} , \\ R_{1\overline{1}1\overline{1}} & = & - g_{1\overline{1},1\overline{1}} + \frac{1}{ g_{1\overline{1}} } |g_{1\overline{1},1}|^2 + \frac{1}{ g_{2\overline{2}} } |g_{1\overline{2},1}|^2 \\ & = & \frac{2}{\eta^3} -\frac{6a^2}{\eta^4} + \eta^2 \ \frac{4a^2}{\eta^6} + 0 \ = \ \frac{2}{\eta^4}, \\ R_{2\overline{2}2\overline{2}} & = & - g_{2\overline{2},2\overline{2}} + \frac{1}{ g_{1\overline{1}} } |g_{2\overline{1},2}|^2 + \frac{1}{ g_{2\overline{2}} } |g_{2\overline{2},2}|^2 \\ & = & - g_{2\overline{2},2\overline{2}} \ = \ \frac{2a^4}{\eta^2} + 2\lambda. 
\end{eqnarray*} Now let us compute the components of the Ricci curvature at $p$. We have $R_{1\overline{2}}=0$, and \begin{eqnarray*} R_{1\overline{1}} & = & \eta^2 R_{1\overline{1}1\overline{1}} + \frac{\eta}{a^2\!+\!\lambda\eta} R_{1\overline{1}2\overline{2}} \\ & = & \frac{2}{\eta^2} + \frac{a^2-1}{\eta^2(a^2\!+\!\lambda \eta ) } + \frac{a^2}{\eta^2(a^2\!+\!\lambda \eta )^2 },\\ R_{2\overline{2}} & = & \eta^2 R_{1\overline{1}2\overline{2}} + \frac{\eta}{a^2\!+\!\lambda \eta} R_{2\overline{2}2\overline{2}} \\ & = & \frac{a^2-1}{\eta} + \frac{a^2+2a^4+2\lambda \eta^2 }{\eta (a^2+\lambda \eta )}. \end{eqnarray*} Since $2\lambda \eta^2 > \lambda \eta$, we know that $R_{2\overline{2}} >0$ for all $a\geq 0$. For $R_{1\overline{1}}$, if we let $f(t)$ be the function of $t=a^2$ which represents the quantity $\eta^2(a^2+\lambda \eta )^2R_{1\overline{1}}$, then $$ f(t) = (\lambda +1)(2\lambda +3)t^2 + 4\lambda (\lambda +1)t +\lambda (2\lambda -1). $$ Hence $R_{1\overline{1}}>0$ for all $a\geq 0$ if and only if $\lambda > \frac{1}{2}$. Next let us examine the holomorphic sectional curvature $H$ at the point $p$. For any tangent direction $X=x_1 \frac{\partial}{\partial z_1}+x_2\frac{\partial}{\partial z_2}$ at $p$, we have \begin{eqnarray} R_{X\overline{X}X\overline{X}} & = & |x_1|^4R_{1\overline{1}1\overline{1}} + |x_2|^4R_{2\overline{2}2\overline{2}} +4|x_1x_2|^2R_{1\overline{1}2\overline{2}}\label{eq:a1}\\ & = & \frac{2}{\eta^4}|x_1|^4 + \frac{2}{\eta^2} (a^4\!+\!\lambda \eta^2) |x_2|^4 + \frac{4}{\eta^3} \big(a^2\!-\!1\!+\!\frac{a^2}{a^2\!+\!\lambda \eta } \big) |x_1x_2|^2. \nonumber \end{eqnarray} In particular, when $a=0$, we have $$ R_{X\overline{X}X\overline{X}} = 2|x_1|^4+2\lambda |x_2|^4 - 4|x_1x_2|^2, $$ so if $\lambda < 1$, then there are $X\neq 0$ with $R_{X\overline{X}X\overline{X}}<0$, while when $\lambda =1$, we have $R_{X\overline{X}X\overline{X}}\geq 0$ but it attains the value $0$. Now suppose that $\lambda >1$. If $x_2=0$, then $R_{X\overline{X}X\overline{X}}> 0$.
If $x_2\neq 0$, then by (\ref{eq:a1}) we have \begin{eqnarray*} R_{X\overline{X}X\overline{X}} &\geq & \frac{2}{\eta^4} |x_1|^4 + 2\lambda |x_2|^4 - \frac{4}{\eta^3} |x_1x_2|^2 \\ & > & \frac{2}{\eta^4} |x_1|^4 + 2 |x_2|^4 - \frac{4}{\eta^3} |x_1x_2|^2 \\ &\geq & 2\sqrt{ \frac{2}{\eta^4}|x_1|^4 \ 2|x_2|^4} - \frac{4}{\eta^3} |x_1x_2|^2\\ & = & \frac{4}{\eta^2} |x_1x_2|^2 - \frac{4}{\eta^3} |x_1x_2|^2 \ = \ \frac{4a^2}{\eta^3} |x_1x_2|^2 \ \geq \ 0. \end{eqnarray*} So when $\lambda >1$, the holomorphic sectional curvature at $p$ is positive. Now let us assume that $u_0(p)=0$, namely, $p$ lies in the line at infinity with respect to the point of blowing up. Again by a simultaneous unitary coordinate change on $(u_1,u_2)$ and $(v_1,v_2)$ if necessary, we may assume that $p=([0\!:\!1\!:\!0], [1\!:\!0])$. Let us choose holomorphic coordinates $(z_1,z_2)$ near $p$ by letting them correspond to the point $([z_1\!:\!1\!:\!z_2], [1\!:\!z_2])$. Then $p=(0,0)$, and the metric in this case is given by $$ \omega_g = \sqrt{\!-\!1}\partial \overline{\partial} \log (1\!+\!|z_1|^2\!+\!|z_2|^2) + \lambda \sqrt{\!-\!1}\partial \overline{\partial} \log (1\!+\!|z_2|^2). $$ Again if we denote by $\eta = 1\!+\!|z_1|^2\!+\!|z_2|^2$ and $\sigma = 1\!+\!|z_2|^2$, then we have \begin{eqnarray*} g_{i\overline{j}} & = & \frac{1}{\eta} \delta_{ij} - \frac{1}{\eta^2} \overline{z}_iz_j +\frac{\lambda }{\sigma^2} \delta_{i2}\delta_{j2}, \\ g_{i\overline{j},k} & = & - \frac{1}{\eta^2} ( \delta_{ij}\overline{z}_k +\delta_{kj}\overline{z}_i)+\frac{2}{\eta^3}\overline{z}_i\overline{z}_kz_j -\frac{2\lambda}{\sigma^3}\overline{z}_2\delta_{i2}\delta_{j2}\delta_{k2}. \end{eqnarray*} At $p=(0,0)$, we have $g_{1\overline{1}}=1$, $g_{1\overline{2}}=0$, $g_{2\overline{2}}=1+\lambda$, and $g_{i\overline{j},k}=0$.
So the curvature components at $p$ are given by $$ R_{i\overline{j}k\overline{\ell} } = - g_{ i\overline{j},k\overline{\ell} } = \delta_{ij} \delta_{k\ell } + \delta_{i\ell } \delta_{jk} + 2\lambda \delta_{i2} \delta_{j2} \delta_{k2} \delta_{\ell 2}. $$ So for any tangent direction $X$ at $p$, the holomorphic sectional curvature $$ R_{X\overline{X}X\overline{X}} = 2 (|x_1|^2+|x_2|^2)^2 + 2\lambda |x_2|^4 ,$$ which is always positive. For the Ricci curvature, one has $R_{1\overline{2}}=0$, and \begin{eqnarray*} R_{1\overline{1}} & = & R_{1\overline{1}1\overline{1}} + \frac{1}{1+\lambda } R_{1\overline{1}2\overline{2}} \ = \ 2 + \frac{1}{1+\lambda }, \\ R_{2\overline{2}} & = & R_{1\overline{1}2\overline{2}} + \frac{1}{1+\lambda } R_{2\overline{2}2\overline{2}} \ = \ 1 + \frac{1}{1+\lambda } (2+2\lambda ) \ = \ 3. \end{eqnarray*} So the Ricci curvature at $p$ is also always positive. This completes the proof of the proposition.
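The quadratic $f(t)$ in the proof above, and the resulting threshold $\lambda=\frac12$ for the Ricci curvature, can be rechecked symbolically; a small sketch in the variable $t=a^2$ (so $\eta=1+t$):

```python
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)

eta = 1 + t          # eta = 1 + a^2, writing t = a^2
u = t + lam*eta      # u = a^2 + lambda*eta
# R_{1 bar 1} as computed in the proof
R11 = 2/eta**2 + (t - 1)/(eta**2*u) + t/(eta**2*u**2)

f_claimed = (lam + 1)*(2*lam + 3)*t**2 + 4*lam*(lam + 1)*t + lam*(2*lam - 1)
assert sp.simplify(sp.expand(R11*eta**2*u**2) - f_claimed) == 0

# f(0) = lam*(2*lam - 1) changes sign exactly at lam = 1/2
assert f_claimed.subs({t: 0, lam: sp.Rational(3, 4)}) > 0
assert f_claimed.subs({t: 0, lam: sp.Rational(1, 4)}) < 0
```

For $\lambda>\frac12$ all three coefficients of $f$ are positive, so $f>0$ on $[0,\infty)$, in accordance with the proposition.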
\section{Introduction} \label{s1} Given a real (presumably irrational!) number $\gamma$, how can one prove that it is irrational? In certain cases (like for square roots of rationals) this is an easy task. A more general strategy proceeds by the construction of a sequence of rational approximations $r_n=q_n\gamma-p_n\ne0$ such that $\delta_nq_n$, $\delta_np_n$ are integers for some positive integers $\delta_n$ and $\delta_nr_n\to0$ as $n\to\infty$. This indeed guarantees that $\gamma$ is not rational. Usually, as a bonus, such a construction also allows one to estimate the irrationality of~$\gamma$ in a quantitative form. Producing such a sequence of rational Diophantine approximations, even with a weaker requirement on the growth, like $r_n\to0$ as $n\to\infty$, is a difficult problem. For certain specific `interesting' numbers $\gamma\in\mathbb R$ such sequences are constructed as values of so-called hypergeometric functions; for related definitions of the latter in the ordinary and basic ($q$-) situations we refer the reader to the books \cite{Ba35,Sl66,GR04}. One of the underlying mechanisms behind the hypergeometric settings is the existence of numerous transformations of hypergeometric functions, that is, identities that represent the same numerical (or $q$-) quantity in different looking ways. An arithmetic significance of such transformations is the production of identities of the form $r_n=\wt r_n$ say, where $r_n=q_n\gamma-p_n$ and $\wt r_n=\wt q_n\gamma-\wt p_n$ for $n=0,1,2,\dots$, while an analysis of the asymptotic behaviour of $r_n$ or $\wt r_n$, and of the corresponding (\emph{a priori} different) denominators $\delta_n$ or $\wt\delta_n$ are simpler for one of them than for the other. In several situations, the machinery can be inverted: the equality $r_n=\wt r_n$ is predicted by computing a number of first approximations, and then established by demonstrating that both sides satisfy the same linear recursion. 
Such instances naturally call for finding purely hypergeometric proofs, which in turn may offer more general forms of the approximations. It comes as no surprise that our computations below have been carried out using the \textsl{Mathematica} packages HYP and HYPQ \cite{Kr95}. The symbiosis of arithmetic and hypergeometry is the main objective of the present note, with special emphasis on (hypergeometric) rational approximations to the following three mathematical constants (in order of their appearance below): \begin{itemize} \item Catalan's constant $G=\displaystyle\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^2}$, \qquad \raisebox{-4mm}{\hbox{\includegraphics[scale=0.012]{catalan}}} \item $\displaystyle\log2=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}$, \qquad \raisebox{-4mm}{\hbox{\includegraphics[scale=0.012]{log2}}} \qquad\qquad and \item $\displaystyle\frac{\pi^2}6=\zeta(2)=\sum_{k=1}^\infty\frac1{k^2}$, \qquad \raisebox{-4mm}{\hbox{\includegraphics[scale=0.012]{pi-squared}}} \end{itemize} which are discussed in Sections \ref{s2}, \ref{s3}, and \ref{s4}, respectively. We intentionally personify these mathematical constants here, to stress their significance in the arithmetic-hypergeometric context. The construction in Section~\ref{s4} indicates a certain cancellation phenomenon, which we record in Lemma~\ref{lem1}. Application of this new ingredient to a general construction of linear forms in the values of Riemann's zeta function $\zeta(s)$ at positive odd integers leads to the following result. \begin{theorem} \label{th-zeta} For any $\lambda\in\mathbb R$, each of the two collections $$ \biggl\{\zeta(2m+1)-\lambda\,\frac{2^{2m}(2^{2m+2}-1)|B_{2m+2}|}{(2^{2m+1}-1)(m+1)(2m)!}\,\pi^{2m+1}:m=1,2,\dots,19\biggr\} $$ and $$ \biggl\{\zeta(2m+1)-\lambda\,\frac{2^{2m}(2^{2m}-1)|B_{2m}|}{(2^{2m+1}-1)m(2m)!}\,\pi^{2m+1}:m=1,2,\dots,21\biggr\} $$ contains at least one irrational number. Here $B_{2m}$ denotes the $2m$-th Bernoulli number. 
\end{theorem} We prove this theorem in Section~\ref{s5}. Notice that $$ \frac{2^{2m-1}|B_{2m}|}{(2m)!} =\frac{\zeta(2m)}{\pi^{2m}}\in\mathbb Q \quad\text{for}\; m=1,2,\dotsc. $$ The only result in the literature we can compare our Theorem~\ref{th-zeta} with is the one given in \cite[Theorems~3 and 4]{HP06}, which implies the irrationality of at least one number in each collection $$ \biggl\{\zeta(2m+1)-\lambda\,\frac{2^{2m}|B_{2m}|}{m(2m)!}\,\pi^{2m+1}:m=1,2,\dots,169\biggr\} $$ and $$ \biggl\{\zeta(2m+1)-\lambda\,\frac{2^{2m}|B_{2m+2}|}{(m+1)(2m)!}\,\pi^{2m+1}:m=1,2,\dots,169\biggr\}, $$ where $\lambda\in\mathbb R$ is arbitrary. \section*{Acknowledgement} We thank Victor Zudilin for beautifully portraying the mathematical constants involved here. We kindly acknowledge the referee's very attentive reading of the original version. \section{Catalan's constant} \label{s2} \null\hfill\smash{\raisebox{-12mm}{\includegraphics[scale=0.03]{catalan}}}\strut \\[-15mm] \parshape 5 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt \textwidth A long time ago, in joint work with T.~Rivoal \cite{RZ03}, the second author considered very-well-poised hypergeometric series that represent linear forms in Catalan's and related constants. The approximations to Catalan's constant itself were given by \begin{align*} r_n &=\sum_{t=0}^\infty(2t+n+1)\frac{n!\prod_{j=1}^n(t+1-j)\prod_{j=1}^n(t+n+j)}{\prod_{j=0}^n(t+j+\frac12)^3}\,(-1)^{n+t} \\ &=\frac{\sqrt\pi\,\Gamma(3n+2)\,\Gamma(n+\frac12)^2\Gamma(n+1)}{4^n\,\Gamma(2n+\frac32)^3} \\ &\qquad\times {}_6F_5\biggl[\begin{matrix} 3n+1, \, \frac{3n}2+\frac32, \, n+\frac12, \, n+\frac12, \, n+\frac12, \, n+1 \\[2pt] \frac{3n}2+\frac12, \, 2n+\frac32, \, 2n+\frac32, \, 2n+\frac32, \, 2n+1 \end{matrix}; -1\biggr]. \end{align*} The approximations possess different hypergeometric forms, for example, as a $_3F_2(1)$-series and as a Barnes-type integral as discussed in \cite{Zu02b} and \cite{Zu03}.
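As a numerical sanity check (an illustrative aside, not part of the argument), the $n=0$ member of this family collapses to $\sum_{t\ge0}8(-1)^t/(2t+1)^2=8G$, which direct summation of the series confirms:

```python
from mpmath import mp, nsum, catalan, inf, mpf

mp.dps = 30

def r(n):
    """Termwise summation of the very-well-poised series for r_n."""
    def term(t):
        t = int(t)
        num = mp.factorial(n)
        for j in range(1, n + 1):
            num *= (t + 1 - j)*(t + n + j)
        den = mpf(1)
        for j in range(n + 1):
            den *= (t + j + mpf('0.5'))**3
        sign = -1 if (n + t) % 2 else 1
        return sign*(2*t + n + 1)*num/den
    return nsum(term, [0, inf])

# n = 0: the series reduces to 8 * sum of (-1)^t/(2t+1)^2 = 8G
assert abs(r(0) - 8*catalan) < mpf('1e-20')
```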
The use of partial-fraction decomposition in \cite{Zu02b} suggests considering a different family of approximations: \begin{align*} \wt r_n &=2^{2(n+1)}\sum_{t=1}^\infty(2t-1)\frac{(2n+1)!\prod_{j=0}^{2n-1}(t-n+j)}{\prod_{j=0}^{2n+1}(2t-n-\frac32+j)^2} \\ &=\frac{2^{2(n+1)}\Gamma(2n+2)^2\Gamma(n+\frac12)^2}{\Gamma(3n+\frac52)^2} \\ &\qquad\times {}_6F_5\biggl[\begin{matrix} 2n+1, \, n+\frac32, \, \frac n2+\frac14, \, \frac n2+\frac14, \, \frac n2+\frac34, \, \frac n2+\frac34 \\[2pt] n+\frac12, \, \frac{3n}2+\frac74, \, \frac{3n}2+\frac74, \, \frac{3n}2+\frac54, \, \frac{3n}2+\frac54 \end{matrix}; 1\biggr]. \end{align*} This is again a very-well-poised ${}_6F_5$-series, but this time evaluated at 1. In addition, it is reasonably easy to show that $2^{4n}d_{2n-1}^2\wt r_n\in\mathbb Z+\mathbb Z\,G$, where $d_N$ denotes the least common multiple of $1,\dots,N$, using an argument similar to the one in \cite{Zu02b}. Amazingly, we have $r_n=\wt r_n$, which accidentally came out of the recursion satisfied by $\wt r_n$. Our first result is a general identity, of which the equality is a special case (namely, $c=d=n+\frac12$). \begin{theorem} \label{th-cat} We have \begin{align} & \,{}_6F_5\biggl[\begin{matrix} 3n+1, \, \frac{3n}2+\frac32, \, n+\frac12, \, n+1, \, c, \, d \\[2pt] \frac{3n}2+\frac12, \, 2n+\frac32, \, 2n+1, \, 3n+2-c, \, 3n+2-d \end{matrix}\,; -1\biggr] \nonumber\\ &\quad =\frac{\Gamma(4n+3)\,\Gamma(3n+2-c)\,\Gamma(3n+2-d)\,\Gamma(4n+3-c-d)}{\Gamma(3n+2)\,\Gamma(4n+3-c)\,\Gamma(4n+3-d)\,\Gamma(3n+2-c-d)} \nonumber\\ &\quad\qquad\times {}_6F_5\biggl[\begin{matrix} 2n+1, \, n+\frac32, \, \frac c2, \, \frac c2+\frac12, \, \frac d2, \, \frac d2+\frac12 \\[2pt] n+\frac12, \, 2n+2-\frac c2, \, 2n+\frac32-\frac c2, \, 2n+2-\frac d2, \, 2n+\frac32-\frac d2 \end{matrix}; 1\biggr]. 
\label{id-cat} \end{align} \end{theorem} \begin{proof} We start with Rahman's quadratic transformation \cite[Eq.~(7.8), $q\to 1$, reversed]{RV93} \begin{align*} & {}_8F_7\biggl[\begin{matrix} 2 a - e, \, 1 + a - \frac{e}{2}, \, \frac{1}{2} + a - e, \, c, \, d, \, e, \\[1.5pt] a - \frac{e}{2}, \, \frac{1}{2} + a, \, 1 + 2 a - c - e, \, 1 + 2 a - d - e, \, 1 + 2 a - 2 e, \end{matrix} \\ &\qquad\qquad\qquad\qquad\qquad\qquad \begin{matrix} 1 + 4 a - c - d - e + n, \, -n \\[1.5pt] -2 a + c + d - n, \, 1 + 2 a - e + n \end{matrix}\, ; 1\biggr] \displaybreak[2]\\ &\quad =\frac{(1 + 2 a - c)_n\, (1 + 2 a - d)_n\, (1 + 2 a - e)_n\, (1 + 2 a - c - d - e)_n} {(1 + 2 a)_n\, (1 + 2 a - c - d)_n\, (1 + 2 a - c - e)_n\, (1 + 2 a - d - e)_n} \\ &\quad\qquad \times {}_{11}F_{10}\biggl[\begin{matrix} a, \, 1 + \frac{a}{2}, \, e, \, \frac{c}{2}, \, \frac{1}{2} + \frac{c}{2}, \, \frac{d}{2}, \, \frac{1}{2} + \frac{d}{2}, \\[1.5pt] \frac{a}{2}, \, 1 + a - e, \, 1 + a - \frac{c}{2}, \, \frac{1}{2} + a - \frac{c}{2}, \, 1 + a - \frac{d}{2}, \, \frac{1}{2} + a - \frac{d}{2}, \end{matrix} \\ &\qquad\qquad\qquad \begin{matrix} \frac{1}{2} + 2 a - \frac{c}{2} - \frac{d}{2} - \frac{e}{2} + \frac{n}{2}, \, 1 + 2 a - \frac{c}{2} - \frac{d}{2} - \frac{e}{2} + \frac{n}{2}, \, \frac{1}{2} - \frac{n}{2}, \, -\frac{n}{2} \\[1.5pt] \frac{1}{2} - a + \frac{c}{2} + \frac{d}{2} + \frac{e}{2} - \frac{n}{2}, \, - a + \frac{c}{2} + \frac{d}{2} + \frac{e}{2} - \frac{n}{2}, \, \frac{1}{2} + a + \frac{n}{2}, \, 1 + a + \frac{n}{2} \end{matrix} ; 1\biggr], \end{align*} in which we let $n$ tend to $\infty$: \begin{align*} & {}_6F_5\biggl[\begin{matrix} 2 a - e, \, 1 + a - \frac{e}{2}, \, \frac{1}{2} + a - e, \, c, \, d, \, e \\[1.5pt] a - \frac{e}{2}, \, \frac{1}{2} + a, \, 1 + 2 a - c - e, \, 1 + 2 a - d - e, \, 1 + 2 a - 2 e \end{matrix}\, ; -1\biggr] \\ &\quad =\frac{\Ga(1 + 2 a) \, \Ga(1 + 2 a - c - d) \, \Ga(1 + 2 a - c - e) \, \Ga(1 + 2 a - d - e)} {\Ga(1 + 2 a - c) \, \Ga(1 + 2 a - d) \, \Ga(1 + 2 a - 
e) \, \Ga(1 + 2 a - c - d - e)} \\ &\quad\qquad \times {}_7F_6\biggl[\begin{matrix} a, \, 1 + \frac{a}{2}, \, e, \, \frac{c}{2}, \, \frac{1}{2} + \frac{c}{2}, \, \frac{d}{2}, \, \frac{1}{2} + \frac{d}{2} \\[1.5pt] \frac{a}{2}, \, 1 + a - e, \, 1 + a - \frac{c}{2}, \, \frac{1}{2} + a - \frac{c}{2}, \, 1 + a - \frac{d}{2}, \, \frac{1}{2} + a - \frac{d}{2} \end{matrix} ; 1\biggr]. \end{align*} Now set $a=2n+1$ and $e=n+1$ to deduce \eqref{id-cat}. \end{proof} The corresponding $q$-version, which we record here for completeness, reads \begin{align*} & \,{}_8\phi_7\biggl[\begin{matrix} q^{3n+1}, \, q^{\frac{3n}2+\frac32}, \, -q^{\frac{3n}2+\frac32}, \, q^{n+\frac12}, \, -q^{n+\frac12}, \, q^{n+1}, \, c, \, d \\[2pt] q^{\frac{3n}2+\frac12}, \, -q^{\frac{3n}2+\frac12}, \, q^{2n+\frac32}, \, -q^{2n+\frac32}, \, q^{2n+1}, \, q^{3n+2}/c, \, q^{3n+2}/d \end{matrix}\,; q,-\frac{q^{3n+2}}{cd}\biggr] \\ &\quad =\frac{(q^{3n+2},q^{4n+3}/c,q^{4n+3}/d,q^{3n+2}/cd;q)_\infty}{(q^{4n+3},q^{3n+2}/c,q^{3n+2}/d,q^{4n+3}/cd;q)_\infty} \\ &\quad\qquad\times {}_7\phi_6\biggl[\begin{matrix} q^{4n+2}, \, q^{2n+3}, \, -q^{2n+3}, \, c, \, cq, \, d, \, dq \\[2pt] q^{2n+1}, \, -q^{2n+1}, \, q^{4n+4}/c, \, q^{4n+3}/c, \, q^{4n+4}/d, \, q^{4n+3}/d \end{matrix}\,; q^2,\frac{q^{6n+4}}{c^2d^2}\biggr]. \end{align*} \section{Logarithm of $2$} \label{s3} \null\hfill\smash{\raisebox{-12mm}{\includegraphics[scale=0.03]{log2}}}\strut \\[-15mm] \parshape 5 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt \textwidth Another strange identity is related to the classical rational approximations to $\log2$: \begin{align*} r_n &=(-1)^{n+1}\sum_{t=1}^\infty\frac{\prod_{j=1}^n(t-j)}{\prod_{j=0}^n(t+j)}\,(-1)^t \\ &=\frac{\Gamma(n+1)^2}{\Gamma(2n+2)}\,{}_2F_1\biggl[\begin{matrix} n+1, \, n+1 \\ 2n+2 \end{matrix}; -1\biggr] \\ &=\int_0^1\frac{x^n(1-x)^n}{(1+x)^{n+1}}\,\d x.
\end{align*} The sequence satisfies the recurrence equation $(n+1)r_{n+1}-3(2n+1)r_n+nr_{n-1}=0$, and with the help of the latter we find that $r_n=\wt r_n$ for \begin{align*} \wt r_n &=\sum_{t=n+1}^\infty\frac{(2n+1)!\,\prod_{j=1}^n(t-j)}{n!\,\prod_{j=0}^{2n+1}(2t-n-1+j)} \\ &=\frac{\Gamma(n+1)\,\Gamma(2n+2)}{\Gamma(3n+3)}\,{}_3F_2\biggl[\begin{matrix} n+1, \, \frac n2+\frac12, \, \frac n2+1 \\[2pt] \frac{3n}2+2, \, \frac{3n}2+\frac32 \end{matrix}; 1\biggr]. \end{align*} This finding is a particular case of another general identity. \begin{theorem} \label{th-ln2} We have \begin{equation} {}_2F_1\biggl[\begin{matrix} x, \, 2a \\ 2b-x \end{matrix}; -1\biggr] =\frac{\Gamma(2b-x)\,\Gamma(2b-2a)}{\Gamma(2b)\,\Gamma(2b-2a-x)} \,{}_3F_2\biggl[\begin{matrix} x, \, a, \, a+\frac12 \\[2pt] b, \, b+\frac12 \end{matrix}; 1\biggr]. \label{id-ln2} \end{equation} \end{theorem} \begin{proof} This is a specialisation of a transformation of Whipple \cite[Sec.~4.6, Eq.~(3)]{Ba35}: set $b=\kappa-a$ there and reparametrise. \end{proof} A companion $q$-version is \begin{multline*} {}_6\phi_7\biggl[\begin{matrix} -b/q, \, \sqrt{-bq}, \, -\sqrt{-bq}, \, x, \, -x, \, a \\ \sqrt{-b/q}, \, -\sqrt{-b/q}, \, -b/x, \, b/x, \, -b/a, \, 0, \, 0 \end{matrix}\,; q,-\frac{b^2}{ax^2}\biggr] \\ =\frac{(b^2,b^2/(ax)^2;q^2)_\infty}{(b^2/a^2,b^2/x^2;q^2)_\infty} \,{}_3\phi_2\biggl[\begin{matrix} x^2, \, a, \, aq \\ b, \, bq \end{matrix}; q^2,\frac{b^2}{a^2x^2}\biggr], \end{multline*} which follows from \cite[Eq.~(3.10.4)]{GR04}. To clarify the arithmetic situation behind the right-hand side of \eqref{id-ln2}, we notice that there is a permutation group for it, used to produce a sharp irrationality measure of $\zeta(2)$ in \cite{RV96}.
As explained in \cite[Section~6]{Zu04}, a realisation of the group for a generic hypergeometric function \begin{equation} \frac{\Gamma(a_2)\,\Gamma(b_2-a_2)\,\Gamma(a_3)\,\Gamma(b_3-a_3)}{\Gamma(b_2)\,\Gamma(b_3)} \,{}_3F_2\biggl[\begin{matrix} a_1, \, a_2, \, a_3 \\ b_2, \, b_3 \end{matrix}; 1\biggr] \label{3F2} \end{equation} can be given by means of the ten parameters \begin{align*} c_{00}&=(b_2+b_3)-(a_1+a_2+a_3)-1, \\ c_{jk}&=\begin{cases} a_j-1, & \text{for }k=1, \\ b_k-a_j-1, & \text{for } k=2,3, \end{cases} \end{align*} as follows. If the set of parameters is represented in the matrix form \begin{equation} \bc=\begin{pmatrix} c_{00} \\ & c_{11} & c_{12} & c_{13} \\ & c_{21} & c_{22} & c_{23} \\ & c_{31} & c_{32} & c_{33} \end{pmatrix}, \label{bc} \end{equation} and $H(\bc)$ denotes the corresponding hypergeometric function in \eqref{3F2}, then the quantity \begin{equation} \frac{H(\bc)}{\Gamma(c_{00}+1)\,\Gamma(c_{21}+1)\,\Gamma(c_{31}+1)\,\Gamma(c_{22}+1)\,\Gamma(c_{33}+1)} \label{inv-H} \end{equation} is invariant under the group $\fG$ (of order 120) generated by the four involutions \begin{alignat*}{2} \fa_1&=(c_{11} \; c_{21})\,(c_{12} \; c_{22})\,(c_{13} \; c_{23}), & \fa_2&=(c_{21} \; c_{31})\,(c_{22} \; c_{32})\,(c_{23} \; c_{33}), \\ \fb&=(c_{12} \; c_{13})\,(c_{22} \; c_{23})\,(c_{32} \; c_{33}), &\quad\text{and}\quad \fh&=(c_{00} \; c_{22})\,(c_{11} \; c_{33})\,(c_{13} \; c_{31}). \end{alignat*} Notice that the permutations $\fa_1$, $\fa_2$, and $\fb$ correspond to the rearrangements $a_1\leftrightarrow a_2$, $a_2\leftrightarrow a_3$, and $b_2\leftrightarrow b_3$, respectively, of the function \eqref{3F2}, so that the invariance of \eqref{inv-H} under their action is trivial. It is only the permutation $\fh$, underlying Thomae's transformation \cite[Sec.~3.2, Eq.~(1)]{Ba35} and Whipple's transformation \cite[Sec.~4.4, Eq.~(2)]{Ba35}, that makes the action of the group on \eqref{inv-H} non-trivial. 
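The order of $\fG$ can be verified by brute force, treating the four involutions as permutations of the ten labels $c_{jk}$ and closing under composition (a quick combinatorial check, independent of the hypergeometric content):

```python
labels = ['00', '11', '12', '13', '21', '22', '23', '31', '32', '33']

def transpositions(pairs):
    """Permutation of `labels` given as a product of disjoint transpositions."""
    p = {l: l for l in labels}
    for x, y in pairs:
        p[x], p[y] = y, x
    return p

a1 = transpositions([('11', '21'), ('12', '22'), ('13', '23')])
a2 = transpositions([('21', '31'), ('22', '32'), ('23', '33')])
b  = transpositions([('12', '13'), ('22', '23'), ('32', '33')])
h  = transpositions([('00', '22'), ('11', '33'), ('13', '31')])

def key(p):
    return tuple(p[l] for l in labels)

# breadth-first closure of the group generated by the four involutions
group = {key(transpositions([]))}
frontier = [transpositions([])]
while frontier:
    nxt = []
    for p in frontier:
        for g in (a1, a2, b, h):
            q = {l: g[p[l]] for l in labels}
            if key(q) not in group:
                group.add(key(q))
                nxt.append(q)
    frontier = nxt

assert len(group) == 120  # the group G has order 120, as stated
```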
With the method in \cite[Section 3.3]{FZ10}, if \begin{equation} a_1,a_2,b_2\in\mathbb Z \quad\text{and}\quad a_3,b_3\in\mathbb Z+\tfrac12 \label{cond} \end{equation} are chosen such that $c_{jk}\ge-\frac12$ for all~$j$ and~$k$, then the quantity $H(\bc)$ representing~\eqref{3F2} satisfies $$ H(\bc)\in\mathbb Q\log2+\mathbb Q. $$ It is a tough task to produce a sharp integer $D(\bc)$ such that $D(\bc)H(\bc)\in\mathbb Z\log2+\mathbb Z$ in the general case; it can be given in the particular situation where $a_3-a_2=b_3-b_2=\pm\frac12$ with the help of \eqref{id-ln2} and the known information for the corresponding ${}_2F_1(1)$-series. Observe that the group $\fG=\langle\fa_1,\,\fa_2,\,\fb,\,\fh\rangle$ cannot be arithmetically used in its full force when the parameters of \eqref{3F2} are subject to \eqref{cond}. However, apart from the initial representative \eqref{bc}, there are five more with the constraint that entries $1\,3$, $2\,3$, $3\,1$ and $3\,2$ are from $\mathbb Z+\frac12$, namely \begin{equation} \begin{gathered} \begin{pmatrix} c_{22} \\ & c_{33} & c_{12} & c_{31} \\ & c_{21} & c_{00} & c_{23} \\ & c_{13} & c_{32} & c_{11} \end{pmatrix}, \quad \begin{pmatrix} c_{12} \\ & c_{11} & c_{00} & c_{13} \\ & c_{33} & c_{22} & c_{31} \\ & c_{23} & c_{32} & c_{21} \end{pmatrix}, \quad \begin{pmatrix} c_{33} \\ & c_{22} & c_{21} & c_{13} \\ & c_{12} & c_{11} & c_{23} \\ & c_{31} & c_{32} & c_{00} \end{pmatrix}, \\ \begin{pmatrix} c_{11} \\ & c_{00} & c_{21} & c_{31} \\ & c_{12} & c_{33} & c_{23} \\ & c_{13} & c_{32} & c_{22} \end{pmatrix}, \quad\text{and}\quad \begin{pmatrix} c_{21} \\ & c_{22} & c_{33} & c_{13} \\ & c_{00} & c_{11} & c_{31} \\ & c_{23} & c_{32} & c_{12} \end{pmatrix}, \end{gathered} \label{5bc} \end{equation} and another six which are obtained from \eqref{bc} and \eqref{5bc} by further action of $\fa_1$. 
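Identity \eqref{id-ln2} itself is easy to spot-check numerically; taking $x$ a negative integer makes both sides terminating sums, so no convergence subtleties arise (the values of $a$ and $b$ below are arbitrary test choices):

```python
from mpmath import mp, hyp2f1, hyp3f2, gamma, mpf

mp.dps = 30

# x a negative integer (terminating case); a, b generic test values
x, a, b = -3, mpf('0.7'), mpf('1.3')

lhs = hyp2f1(x, 2*a, 2*b - x, -1)
rhs = (gamma(2*b - x)*gamma(2*b - 2*a)/(gamma(2*b)*gamma(2*b - 2*a - x))
       * hyp3f2(x, a, a + mpf('0.5'), b, b + mpf('0.5'), 1))
assert abs(lhs - rhs) < mpf('1e-25')
```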
Remarkably enough, the choices $x=12n+1$, $a=14n+1$, $b=28n+2$ and $x=14n+1$, $a=12n+1$, $b=28n+2$ in \eqref{id-ln2}, which originate from the trivial transformation of the $_2F_1(-1)$-side and which correspond to an early (`pre-Raffaele'~\cite{Ma09}) irrationality measure record \cite{Ha90,Ru87,Vi97,Zu04b}, produce $\fG$-disjoint collections $$ {\left( \begin{smallmatrix} 14n+1 \\ & 12n+1 & 8n+1 & 8n+\frac12 \\[1pt] & 7n+1 & 13n+1 & 13n+\frac12 \\[1pt] & 7n+\frac12 & 13n+\frac12 & 13n+1 \end{smallmatrix}\right) } \quad\text{and}\quad {\left( \begin{smallmatrix} 16n+1 \\ & 14n+1 & 7n+1 & 7n+\frac12 \\[1pt] & 6n+1 & 15n+1 & 15n+\frac12 \\[1pt] & 6n+\frac12 & 15n+\frac12 & 15n+1 \end{smallmatrix}\right) } $$ on the $_3F_2(1)$-side. \section{$\pi$ squared} \label{s4} \null\hfill\smash{\raisebox{-12mm}{\includegraphics[scale=0.03]{pi-squared}}}\strut \\[-15mm] \parshape 5 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt 0.8\textwidth 0pt \textwidth Our next hypergeometric entry \emph{a priori} produces linear forms not only in $1$ and $\zeta(2)=\pi^2/6$ but also in $\zeta(4)=\pi^4/90$, with rational coefficients. It originates from the well-poised hypergeometric series \begin{align*} r_n &=\sum_{t=1}^\infty\frac{2^{8n}n!^4(2n)!^2\prod_{j=0}^{4n-1}(t-n+j)}{(4n)!\,\prod_{j=0}^{2n}(t-\frac12+j)^4} \\ &=\frac{\pi^2\Gamma(2n+1)^6}{\Gamma(3n+\frac32)^4} \,{}_5F_4\biggl[\begin{matrix} 4n+1, \, n+\frac12, \, n+\frac12, \, n+\frac12, \, n+\frac12 \\[2pt] 3n+\frac32, \, 3n+\frac32, \, 3n+\frac32, \, 3n+\frac32 \end{matrix}; 1\biggr]. \end{align*} It is standard to sum the rational function $$ R_n(t)=\frac{2^{8n}n!^4(2n)!^2\prod_{j=0}^{4n-1}(t-n+j)}{(4n)!\,\prod_{j=0}^{2n}(t-\frac12+j)^4} $$ by expanding it into a sum of partial fractions; the well-poised symmetry $R_n(t)=R_n(-2n+1-t)$ (and the residue sum theorem) then imply that $$ r_n\in\mathbb Q\pi^4+\mathbb Q\pi^2+\mathbb Q $$ for $n=0,1,2,\dots$\,.
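The well-poised symmetry of $R_n(t)$ — reflection about $t=-n+\frac12$, the common centre of the numerator and denominator products, i.e.\ $R_n(t)=R_n(-2n+1-t)$ — can be confirmed symbolically for small $n$:

```python
import sympy as sp

t = sp.symbols('t')

def R(n, tt):
    # the rational function R_n(t) underlying the pi^2 approximations
    c = sp.Integer(2)**(8*n)*sp.factorial(n)**4*sp.factorial(2*n)**2/sp.factorial(4*n)
    num = sp.prod([tt - n + j for j in range(4*n)])
    den = sp.prod([(tt - sp.Rational(1, 2) + j)**4 for j in range(2*n + 1)])
    return c*num/den

for n in (1, 2, 3):
    assert sp.simplify(R(n, t) - R(n, -2*n + 1 - t)) == 0
```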
At the same time, $$ r_0=\frac16\,\pi^4, \quad r_1=\frac{19}{6}\,\pi^4-\frac{125}{4}\,\pi^2, $$ and the sequence $r_n$ satisfies a second order recurrence equation, so that $r_n=a_n\pi^4-b_n\pi^2\in\mathbb Q\pi^4+\mathbb Q\pi^2$ for all $n$. This happens because the function $R_n(t)$ vanishes at $t=1,0,-1,\dots,-n+2$ so that $$ r_n =\sum_{t=-n+1}^\infty R_n(t), $$ and in view of the following result. \begin{lemma} \label{lem1} Assume that a rational function $$ R(t)=\sum_{i=1}^s\sum_{k=0}^n\frac{a_{i,k}}{(t+k)^i} $$ satisfies $R(t)=R(-n-t)$. Put $m=\lfloor(n-1)/2\rfloor$. Then $a_{i,n-k}=(-1)^ia_{i,k}$ and \begin{align*} \sum_{t=-m}^\infty R(t-\tfrac12) &=\sum_{\substack{i=2\\i\;\text{\em even}}}^sa_i\sum_{\ell=1}^\infty\frac1{(\ell-\frac12)^i}+a_0 \\ &=\sum_{\substack{i=2\\i\;\text{\em even}}}^sa_i(2^i-1)\zeta(i)+a_0 \in\mathbb Q+\mathbb Q\,\pi^2+\mathbb Q\,\pi^4+\dots+\mathbb Q\,\pi^{2\lfloor s/2\rfloor}, \end{align*} where $$ a_i=\sum_{k=0}^na_{i,k}, \quad\text{for }i=2,\dots,s, \quad\text{and}\quad a_0=\begin{cases} 0, &\text{for $n$ even}, \\ \tfrac12\,R(-m-\tfrac12), &\text{for $n$ odd}. \end{cases} $$ \end{lemma} \begin{proof} The property $a_{i,n-k}=(-1)^ia_{i,k}$ is straightforward to see from $R(t)=R(-n-\nobreak t)$. 
Furthermore, we have \begin{align*} \sum_{t=-m}^\infty R(t-\tfrac12) &=\sum_{i=1}^s\sum_{k=0}^na_{i,k}\sum_{t=-m}^\infty\frac1{(t+k-\frac12)^i} =\sum_{i=1}^s\sum_{k=0}^na_{i,k}\sum_{\ell=k-m}^\infty\frac1{(\ell-\frac12)^i} \\ &=\sum_{i=1}^s\sum_{k=0}^na_{i,k}\cdot\sum_{\ell=1}^\infty\frac1{(\ell-\frac12)^i} \\ &\qquad +\sum_{i=1}^s\biggl(\sum_{k=0}^ma_{i,k}\sum_{\ell=k-m}^0\frac1{(\ell-\frac12)^i} -\sum_{k=m+1}^na_{i,k}\sum_{\ell=1}^{k-m-1}\frac1{(\ell-\frac12)^i}\biggr) \\ &=\sum_{\substack{i=2\\i\;\text{even}}}^sa_i\sum_{\ell=1}^\infty\frac1{(\ell-\frac12)^i} +a_0, \end{align*} with the constant term equal to \begin{align*} a_0 &=\sum_{i=1}^s\biggl(\sum_{k=0}^ma_{i,k}\sum_{\ell=k-m}^0\frac1{(\ell-\frac12)^i} -\sum_{k=m+1}^na_{i,k}\sum_{\ell=1}^{k-m-1}\frac1{(\ell-\frac12)^i}\biggr) \\ &=\sum_{i=1}^s(-1)^i\biggl(\sum_{k=0}^ma_{i,n-k}\sum_{\ell=k-m}^0\frac1{(\ell-\frac12)^i} -\sum_{k=m+1}^na_{i,n-k}\sum_{\ell=1}^{k-m-1}\frac1{(\ell-\frac12)^i}\biggr) \\ \intertext{(take $k'=n-k$)} &=\sum_{i=1}^s(-1)^i\biggl(\sum_{k'=n-m}^na_{i,k'}\sum_{\ell=n-m-k'}^0\frac1{(\ell-\frac12)^i} -\sum_{k'=0}^{n-m-1}a_{i,k'}\sum_{\ell=1}^{n-m-1-k'}\frac1{(\ell-\frac12)^i}\biggr). \end{align*} If $n$ is odd, then $n-m=m+1$ and \begin{align*} a_0 &=\sum_{i=1}^s(-1)^i\biggl(\sum_{k'=m+1}^na_{i,k'}\sum_{\ell=m+1-k'}^0\frac1{(\ell-\frac12)^i} -\sum_{k'=0}^ma_{i,k'}\sum_{\ell=1}^{m-k'}\frac1{(\ell-\frac12)^i}\biggr) \\ &=\sum_{i=1}^s\biggl(\sum_{k'=m+1}^na_{i,k'}\sum_{\ell=1}^{k'-m}\frac1{(\ell-\frac12)^i} -\sum_{k'=0}^ma_{i,k'}\sum_{\ell=k'-m+1}^0\frac1{(\ell-\frac12)^i}\biggr) \\ &=\sum_{i=1}^s\biggl(\sum_{k'=m+1}^na_{i,k'} \sum_{\ell=1}^{k'-m-1}\frac1{(\ell-\frac12)^i} +\sum_{k'=m+1}^na_{i,k'}\frac1{(k'-m-\frac12)^i} \\ &\qquad -\sum_{k'=0}^ma_{i,k'}\sum_{\ell=k'-m}^0\frac1{(\ell-\frac12)^i} +\sum_{k'=0}^ma_{i,k'}\frac1{(k'-m-\frac12)^i}\biggr) \\ &=-a_0+\sum_{i=1}^s\sum_{k'=0}^n\frac{a_{i,k'}}{(k'-m-\frac12)^i} =-a_0+R(-m-\tfrac12). 
\end{align*} Similarly, if $n$ is even, then $n-m=m+2$ and \begin{align*} a_0 &=\sum_{i=1}^s(-1)^i\biggl(\sum_{k'=m+2}^na_{i,k'}\sum_{\ell=m+2-k'}^0\frac1{(\ell-\frac12)^i} -\sum_{k'=0}^{m+1}a_{i,k'}\sum_{\ell=1}^{m+1-k'}\frac1{(\ell-\frac12)^i}\biggr) \\ &=\sum_{i=1}^s\biggl(\sum_{k'=m+2}^na_{i,k'}\sum_{\ell=1}^{k'-m-1}\frac1{(\ell-\frac12)^i} -\sum_{k'=0}^{m+1}a_{i,k'}\sum_{\ell=k'-m}^0\frac1{(\ell-\frac12)^i}\biggr) \\ &=\sum_{i=1}^s\biggl(\sum_{k'=m+1}^na_{i,k'}\sum_{\ell=1}^{k'-m-1}\frac1{(\ell-\frac12)^i} -\sum_{k'=0}^ma_{i,k'}\sum_{\ell=k'-m}^0\frac1{(\ell-\frac12)^i}\biggr) \\ &=-a_0. \end{align*} This implies the required formula for $a_0$. \end{proof} The characteristic polynomial of the recursion for $r_n$ as above is $\lambda^2-123\lambda+1$, and its zeroes are quite recognisable: $((1\pm\sqrt5)/2)^{10}$. After performing some experiments, it turns out that $$ r_n =\frac{\pi^2(2n)!^4}{(4n+1)!^2} \,{}_3F_2\biggl[\begin{matrix} 2n+1, \, 2n+1, \, 2n+1 \\ 4n+2, \, 4n+2 \end{matrix}; 1\biggr], $$ where the latter is a `rarified' sequence of the Ap\'ery approximations to $\zeta(2)$. This follows as a consequence of the hypergeometric identity \begin{multline} \label{eq:2n+1/4n+2} \frac{\Gamma(4n+2)^2\Gamma(2n+1)^2}{\Gamma(3n+\frac32)^4} \,{}_5F_4\biggl[\begin{matrix} 4n+1, \, n+\frac12, \, n+\frac12, \, n+\frac12, \, n+\frac12 \\[2pt] 3n+\frac32, \, 3n+\frac32, \, 3n+\frac32, \, 3n+\frac32 \end{matrix};1\biggr] \\ ={}_3F_2\biggl[\begin{matrix} 2n+1, \, 2n+1, \, 2n+1 \\ 4n+2, \, 4n+2 \end{matrix};1\biggr], \end{multline} which is in turn the particular case where $a=b=c=2n+1$ of the following general transformation. 
\begin{theorem} \label{th-pi2} We have \begin{align} & {} _{3} F _{2} \biggl[ \begin{matrix} a, \, b, \, c \\ \, -a+2 b+c, \, -a+b+2 c \end{matrix}\,; 1\biggr] =\frac {\Ga(-\frac{a}{2}+b+c+\frac{1}{2})\, \Ga(-\frac{3 a}{2}+2 b+c+\frac{1}{2})} {\Ga(-a+b+c+\frac{1}{2})\, \Ga(-a+2 b+c+\frac{1}{2})} \nonumber\\ &\qquad\times \frac { \Ga(-a+b+2 c)\, \Ga(-3 a+2 b+2 c) \,\Ga(-2 a+2 b+2 c) \,\Ga(-2 a+4 b+2 c)} {\Ga(-\frac{3 a}{2}+b+2 c)\, \Ga(-\frac{5 a}{2}+2 b+2 c)\, \Ga(-a+2 b+2 c)\, \Ga(-3 a+4 b+2 c)} \nonumber\\ &\qquad\times {}_5{F}_4 \biggl[ \begin{matrix} -2 a+2 b+2 c-1, \, c-\frac{a}{2}, \, -\frac{3 a}{2}+b+c, \, \frac{a}{2}, \, b-\frac{a}{2} \\[1.5pt] -\frac{3 a}{2}+2 b+c, \, -\frac{a}{2}+b+c, \, -\frac{5a}{2}+2 b+2 c, \, -\frac{3 a}{2}+b+2 c \end{matrix}\, ; 1\biggr]. \label{eq:3F2-5F4} \end{align} \end{theorem} \begin{proof} We start with the transformation formula (cf.\ \cite[Eq.~(3.5.10), $q\to 1$, reversed]{GR04}) \begin{multline} \label{eq:T3240} {} _{3} F _{2} \biggl[ \begin{matrix} a, \, b, \, c \\ d, \, d - b + c \end{matrix}\, ; 1\biggr] = \frac {\Gamma( 2 d)\,\Gamma( 2 d - 2 b - a)\,\Gamma( d - b + c)\,\Gamma( d - a + c)} {\Gamma( 2 d - 2 b)\,\Gamma( 2 d - a)\,\Gamma( d + c)\, \Gamma( d - b - a + c)} \\ \times {}_7{F}_6 \biggl[ \begin{matrix} d-\frac{1}{2}, \, \frac{d}{2}+\frac{3}{4}, \, \frac{d}{2}-\frac{c}{2}, \, b, \, \frac{a}{2}, \, \frac{a}{2}+\frac{1}{2}, \, -\frac{c}{2}+\frac{d}{2}+\frac{1}{2} \\[1.5pt] \frac{d}{2}-\frac{1}{4}, \,\frac{c}{2}+\frac{d}{2}+\frac{1}{2}, \, -b+d+\frac{1}{2}, \, -\frac{a}{2}+d+\frac{1}{2}, \, d-\frac{a}{2}, \, \frac{c}{2}+\frac{d}{2} \end{matrix} ; 1\biggr]. 
\end{multline} To the very-well-poised $_7F_6$-series on the right-hand side we apply the transformation formula (cf.\ \cite[Sec.~7.5, Eq.~(2)]{Ba35}) \begin{align} & {} _{7} F _{6} \biggl[\begin{matrix} a, \, \frac{a}{2}+1, \, b, \, c, \, d, \, e, \, f \\[1.5pt] \frac{a}{2}, \, a-b+1, \, a-c+1, \, a-d+1, \, a-e+1, \, a-f+1\end{matrix};1 \biggr] \nonumber\displaybreak[2]\\ &\quad =\frac{\Ga(a-c+1) \,\Ga(a-d+1)\, \Ga(a-e+1)\, \Ga(a-f+1)} {\Ga(a+1)\, \Ga(b)\, \Ga(2 a-b-c-d-e+2)\, \Ga(2 a-b-c-d-f+2)} \nonumber\\ &\quad\qquad\kern-8pt \times \frac {\Ga(3 a-2 b-c-d-e-f+3) \,\Ga(2 a-b-c-d-e-f+2)} { \Ga(2 a-b-c-e-f+2)\, \Ga(2 a-b-d-e-f+2)} \nonumber\displaybreak[2]\\ &\quad\qquad\kern-8pt \times {}_7F_6 \biggl[\begin{matrix} 3 a-2 b-c-d-e-f+2, \, \frac{3a}{2}-b-\frac{c}{2}-\frac{d}{2}-\frac{e}{2}-\frac{f}{2}+2, \, a-b-c+1, \\[1.5pt] \frac{3a}{2}-b-\frac{c}{2}-\frac{d}{2}-\frac{e}{2}-\frac{f}{2}+1, \, 2 a-b-d-e-f+2, \end{matrix} \nonumber\\ &\quad\qquad \qquad\qquad\qquad \begin{matrix} a-b-d+1, \, a-b-e+1, \\ 2 a-b-c-e-f+2, \, 2 a-b-c-d-f+2, \end{matrix} \nonumber\\ &\quad\qquad \qquad\qquad\qquad\qquad\qquad \begin{matrix} a-b-f+1, \, 2 a-b-c-d-e-f+2\\ 2 a-b-c-d-e+2, \, a-b+1 \end{matrix};1 \biggr]. 
\label{eq:T7635} \end{align} Thus, we obtain \begin{align*} & {} _{3} F _{2} \biggl[ \begin{matrix} a, \, b, \, c \\ d, \, d - b + c \end{matrix} \,; 1\biggr] = \frac {\Ga(2 d)\, \Ga(d-\frac{a}{2})\, \Ga(-\frac{a}{2}+d+\frac{1}{2})\, \Ga(-b+d+\frac{1}{2})\, \Ga(\frac{c}{2}+\frac{d}{2})} {\Ga(d+\frac{1}{2}) \,\Ga(2 d-a) \,\Ga(2 d-2 b)\, \Ga(\frac{d}{2}-\frac{c}{2})\, \Ga(c+d)} \\ &\qquad\times \frac {\Ga(-a-2 b+2 d)\, \Ga(-b+c+d) \, \Ga(-a-b+\frac{3 c}{2}+\frac{3 d}{2}+\frac{1}{2})} {\Ga(-\frac{a}{2}-b+c+d)\, \Ga(-\frac{a}{2}-b+c+d+\frac{1}{2})\, \Ga(-a-b+\frac{c}{2}+\frac{3 d}{2}+\frac{1}{2})} \displaybreak[2]\\ &\qquad\times {} _{7} F _{6} \biggl[ \begin{matrix} -a-b+\frac{3 c}{2}+\frac{3 d}{2}-\frac{1}{2}, \, -\frac{a}{2}-\frac{b}{2}+\frac{3 c}{4}+\frac{3 d}{4}+\frac{3}{4}, \, -\frac{a}{2}+\frac{c}{2}+\frac{d}{2}, \, c, \\[1.5pt] -\frac{a}{2}-\frac{b}{2}+\frac{3 c}{4}+\frac{3 d}{4}-\frac{1}{4}, \, -\frac{a}{2}-b+c+d+\frac{1}{2}, \, -a-b+\frac{c}{2}+\frac{3 d}{2}+\frac{1}{2}, \end{matrix} \\ &\qquad\qquad\qquad\qquad \begin{matrix} -a-b+c+d, \, -b+\frac{c}{2}+\frac{d}{2}+\frac{1}{2}, \, -\frac{a}{2}+\frac{c}{2}+\frac{d}{2}+\frac{1}{2} \\[1.5pt] \frac{c}{2}+\frac{d}{2}+\frac{1}{2}, \, -a+c+d, \, -\frac{a}{2}-b+c+d \end{matrix} ; 1 \biggr] . 
\end{align*} Next we apply the transformation formula (cf.\ \cite[Sec.~7.5, Eq.~(1)]{Ba35}) \begin{align} & {} _{7} F _{6} \biggl[\begin{matrix} a, \, \frac{a}{2}+1, \, b, \, c, \, d, \, e, \, f \\[1.5pt] \frac{a}{2}, \, a-b+1, \, a-c+1, \, a-d+1, \, a-e+1, \, a-f+1\end{matrix};1 \biggr] \nonumber\displaybreak[2]\\ &\quad =\frac {\Ga(a-e+1) \, \Ga(a-f+1) \, \Ga(2 a-b-c-d+2) \, \Ga(2 a-b-c-d-e-f+2) } {\Ga(a+1) \, \Ga(a-e-f+1) \, \Ga(2 a-b-c-d-e+2) \, \Ga(2 a-b-c-d-f+2)} \nonumber\displaybreak[2]\\ &\quad\qquad \times {} _{7} F _{6} \biggl[ \begin{matrix} 2 a-b-c-d+1, \, a-\frac{b}{2}-\frac{c}{2}-\frac{d}{2}+\frac{3}{2}, \, a-c-d+1, \, a-b-d+1, \\[1.5pt] a-\frac{b}{2}-\frac{c}{2}-\frac{d}{2}+\frac{1}{2}, \, a-b+1,\, a-c+1, \end{matrix} \nonumber\\ &\qquad\qquad\qquad\qquad \begin{matrix} a-b-c+1, \, e, \, f \\ a-d+1, \, 2 a-b-c-d-e+2, \, 2 a-b-c-d-f+2 \end{matrix}\,; 1\biggr] . \label{eq:T7634} \end{align} We arrive at \begin{align*} & {} _{3} F _{2} \biggl[ \begin{matrix} a, \, b, \, c \\ d, \, d - b + c \end{matrix} \,; 1\biggr] =\frac {\Ga(2 d)\, \Ga(d-\frac{a}{2})\, \Ga(\frac{c}{2}+\frac{d}{2})\, \Ga(-a-2 b+2 d)} {\Ga(d+\frac{1}{2}) \,\Ga(2 d-a)\, \Ga(2 d-2 b)\, \Ga(c+d)} \\ &\qquad\times \frac{\Ga(-a+c+d)\, \Ga(-b+c+d)\, \Ga(-\frac{a}{2}-b+\frac{c}{2}+\frac{3 d}{2}+1)} {\Ga(-\frac{a}{2}+\frac{c}{2}+\frac{d}{2}-\frac{1}{2})\, \Ga(-\frac{a}{2}-b+c+d+\frac{1}{2})\, \Ga(-a-b+\frac{c}{2}+\frac{3 d}{2}+\frac{1}{2})} \displaybreak[2]\\ &\qquad\times {} _{7} F _{6} \biggl[ \begin{matrix} -\frac{a}{2}-b+\frac{c}{2}+\frac{3 d}{2}, \, -\frac{a}{4}-\frac{b}{2}+\frac{c}{4}+\frac{3 d}{4}+1, \, -\frac{a}{2}+\frac{c}{2}+\frac{d}{2}+\frac{1}{2}, \, -\frac{c}{2}+\frac{d}{2}+\frac{1}{2}, \\[1.5pt] -\frac{a}{4}-\frac{b}{2}+\frac{c}{4}+\frac{3 d}{4}, \, -b+d+\frac{1}{2}, \, -\frac{a}{2}-b+c+d+\frac{1}{2}, \end{matrix} \\ &\qquad\qquad\qquad\qquad\qquad\qquad \begin{matrix} \frac{a}{2}+\frac{1}{2}, \, -\frac{a}{2}-b+d+\frac{1}{2}, \, -b+\frac{c}{2}+\frac{d}{2}+\frac{1}{2} \\[1.5pt] 
-a-b+\frac{c}{2}+\frac{3 d}{2}+\frac{1}{2}, \, \frac{c}{2}+\frac{d}{2}+\frac{1}{2}, \, -\frac{a}{2}+d+\frac{1}{2} \end{matrix} ; 1\biggr] . \end{align*} Now we apply again \eqref{eq:T7635}. As a result, we obtain \begin{align} & {} _{3} F _{2} \biggl[ \begin{matrix} a, \, b, \, c \\ d, \, d - b + c \end{matrix} \,; 1\biggr] =\frac {\Ga(2 d)\, \Ga(-\frac{a}{2}+d+\frac{1}{2})\, \Ga(\frac{c}{2}+\frac{d}{2}+\frac{1}{2})\, \Ga(-a-2 b+2 d)} {\Ga(d+\frac{1}{2}) \,\Ga(2 d-a)\, \Ga(2 d-2 b) \,\Ga(c+d) } \nonumber\\ &\qquad\times \frac {\Ga(-a+c+d) \,\Ga(-b+c+d)\, \Ga(-\frac{a}{2}-b+\frac{c}{2}+\frac{3 d}{2})} {\Ga(-\frac{a}{2}+\frac{c}{2}+\frac{d}{2}+\frac{1}{2})\, \Ga(-\frac{a}{2}-b+c+d)\, \Ga(-a-b+\frac{c}{2}+\frac{3 d}{2})} \nonumber\displaybreak[2]\\ &\qquad\times {} _{7} F _{6} \biggl[ \begin{matrix} -\frac{a}{2}-b+\frac{c}{2}+\frac{3 d}{2}-1, \, -\frac{a}{4}-\frac{b}{2}+\frac{c}{4}+\frac{3 d}{4}+\frac{1}{2}, \, -b+\frac{c}{2}+\frac{d}{2}, \, -\frac{a}{2}-b+d, \\[1.5pt] -\frac{a}{4}-\frac{b}{2}+\frac{c}{4}+\frac{3 d}{4}-\frac{1}{2}, \, d-\frac{a}{2}, \, \frac{c}{2}+\frac{d}{2}, \end{matrix} \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad \begin{matrix} \frac{a}{2}, \, \frac{d}{2}-\frac{c}{2}, \, -\frac{a}{2}+\frac{c}{2}+\frac{d}{2}-\frac{1}{2} \\[1.5pt] -a-b+\frac{c}{2}+\frac{3 d}{2}, \, -\frac{a}{2}-b+c+d, \, -b+d+\frac{1}{2} \end{matrix} ; 1\biggr]. \label{eq:3F2-7F6} \end{align} Here, we equate the second upper parameter and the last lower parameter in the $_7F_6$-series, that is, $$ \textstyle -\frac{a}{4}-\frac{b}{2}+\frac{c}{4}+\frac{3 d}{4}+\frac{1}{2} =-b+d+\frac{1}{2} , $$ or, equivalently, $d=c+2b-a$. If we make this substitution in \eqref{eq:3F2-7F6}, then the $_7F_6$-series reduces to a $_5F_4$-series. The corresponding transformation formula is \eqref{eq:3F2-5F4}. 
\end{proof} $q$-Analogues of \eqref{eq:2n+1/4n+2}, \eqref{eq:3F2-5F4}, and \eqref{eq:3F2-7F6} can be obtained by going through the analogous computations when using the $_8\phi_7$-transformation formula \cite[Eq.~(3.5.10)]{GR04} instead of \eqref{eq:T3240}, the $_8\phi_7$-transformation formula \cite[Appendix (III.24)]{GR04} instead of \eqref{eq:T7635}, and the $_8\phi_7$-transformation formula \cite[Eq.~(2.10.1)]{GR04} instead of \eqref{eq:T7634}. The $q$-analogue of \eqref{eq:3F2-7F6} obtained in this way is \begin{multline} \label{eq:3F2-7F6Q} {} _{8} \phi _{7} \biggl[ \begin{matrix} \def\frac#1#2{#1 / #2} -cd/q, i \sqrt{c d q}, -i \sqrt{c d q}, b, -b, c, -c, a \\ i\sqrt{c d/q}, -i\sqrt{c d/q}, -c d/b, c d/b, -d, d, -c d/a \end{matrix} ; q, \frac {d^2} {ab^2}\biggr] \\ =\frac {\def\frac#1#2{#1 / #2} (d^2 q, \frac{c d q}{a}, \frac{c d^3}{a^2 b^2}, \frac{c^2 d^2}{a b^2}; q^2)_\infty\, (\frac{d^2}{b^2}, \frac{d^2}{a}, c d, -c d, \frac{c d}{a b}, -\frac{c d}{a b}; q)_\infty} {\def\frac#1#2{#1 / #2} (\frac{c^2 d^2}{a^2 b^2}, c d q, \frac{d^2 q}{a}, \frac{c d^3}{a b^2}; q^2)_\infty \, (d^2, \frac{d^2}{a b^2}, \frac{c d}{a}, -\frac{c d}{a}, \frac{c d}{b}, -\frac{c d}{b}; q)_\infty} \\ \times {} _{8} \phi _{7} \biggl[ \begin{matrix} \def\frac#1#2{#1 / #2} \frac{c d^3}{a b^2 q^2}, \sqrt{\frac{c d^3 q^2}{a b^2 }}, - \sqrt{\frac{c d^3 q^2}{a b^2 }}, \frac{c d}{b^2}, \frac{d^2}{a b^2}, a, \frac{d}{c}, \frac{c d}{a q}\\ \def\frac#1#2{#1 / #2} \sqrt{\frac{c d^3 }{a b^2 q^2}}, - \sqrt{\frac{c d^3 }{a b^2 q^2}}, \frac{d^2}{a}, c d, \frac{c d^3}{a^2 b^2}, \frac{c^2 d^2}{a b^2}, \frac{d^2 q}{b^2} \end{matrix} ; q^2, \frac {cdq} {a}\biggr] . \end{multline} Similarly to before, we equate the second upper parameter and the last lower parameter in the $_8\phi_7$-series on the right-hand side, that is, $$ \sqrt{\frac{c d^3 q^2}{a b^2 }}=\frac{d^2 q}{b^2}, $$ or, equivalently, $d=cb^2/a$. 
If we substitute this in \eqref{eq:3F2-7F6Q}, then we obtain \begin{align*} & {} _{8} \phi _{7} \biggr[ \begin{matrix} \def\frac#1#2{#1 / #2} -\frac{b^2 c^2}{a q}, \frac{i b c \sqrt{q}}{\sqrt{a}}, -\frac{i b c \sqrt{q}}{\sqrt{a}}, b, -b, c, -c, a \\ \def\frac#1#2{#1 / #2} \frac{i b c}{\sqrt{a} \sqrt{q}}, -\frac{i b c}{\sqrt{a} \sqrt{q}}, -\frac{b c^2}{a}, \frac{b c^2}{a}, -\frac{b^2 c}{a}, \frac{b^2 c}{a}, -\frac{b^2 c^2}{a^2} \end{matrix} ; q, \frac {b^2 c^2} {a^3}\biggr] \displaybreak[2]\\ &\quad = \frac {\def\frac#1#2{#1 / #2} (\frac{b^4 c^2 q}{a^2}, \frac{b^2 c^2 q}{a^2}, \frac{b^4 c^4}{a^5}, \frac{b^2 c^4}{a^3}; q^2)_\infty} {\def\frac#1#2{#1 / #2} (\frac{b^2 c^4}{a^4}, \frac{b^2 c^2 q}{a}, \frac{b^4 c^2 q}{a^3}, \frac{b^4 c^4}{a^4}; q^2)_\infty} \\ &\quad\qquad \times \frac {\def\frac#1#2{#1 / #2} (\frac{b^4 c^2}{a^3}, \frac{b^2 c^2}{a}, -\frac{b^2 c^2}{a}, \frac{b c^2}{a^2}, -\frac{b c^2}{a^2}; q)_\infty} {\def\frac#1#2{#1 / #2} (\frac{b^4 c^2}{a^2}, \frac{b^2 c^2}{a^3}, -\frac{b^2 c^2}{a^2}, \frac{b c^2}{a}, -\frac{b c^2}{a}; q)_\infty} \\ &\quad\qquad \times {} _{6} \phi _{5} \biggl[ \begin{matrix} \def\frac#1#2{#1 / #2} \frac{b^4 c^4}{a^4 q^2}, -\frac{b^2 c^2 q}{a^2}, \frac{c^2}{a}, \frac{b^2 c^2}{a^3}, a, \frac{b^2}{a}\\ \def\frac#1#2{#1 / #2} -\frac{b^2 c^2}{a^2 q}, \frac{b^4 c^2}{a^3}, \frac{b^2 c^2}{a}, \frac{b^4 c^4}{a^5}, \frac{b^2 c^4}{a^3} \end{matrix} ; q^2, \frac {b^2 c^2 q} {a^2}\biggr] , \end{align*} a $q$-analogue of \eqref{eq:3F2-5F4}. 
Setting all of $a,b,c$ equal to $q^{2n+1}$, we arrive at a $q$-analogue of \eqref{eq:2n+1/4n+2}, namely \begin{align*} & {} _{8} \phi _{7} \biggl[ \begin{matrix} \def\frac#1#2{#1 / #2} -q^{6 n+2}, i q^{3 n+2}, -i q^{3 n+2}, q^{2 n+1}, -q^{2 n+1}, q^{2 n+1}, -q^{2 n+1}, q^{2 n+1} \\ \def\frac#1#2{#1 / #2} i q^{3 n+1}, -i q^{3 n+1}, -q^{4 n+2}, q^{4 n+2}, -q^{4 n+2}, q^{4 n+2}, -q^{4 n+2} \end{matrix} ; q, q^{2n+1}\biggr] \\ &\quad = \frac {(q^{6 n+3}, q^{6 n+3}, -q^{6 n+3}, -q^{2 n+1}; q)_\infty\, (q^{8 n+5}, q^{4 n+3}, q^{6 n+3}, q^{6 n+3}; q^2)_\infty } {(q^{8 n+4}, -q^{4 n+2}, q^{4 n+2}, -q^{4 n+2}; q)_\infty\, (q^{4 n+2}, q^{6 n+4}, q^{6 n+4}, q^{8 n+4}; q^2)_\infty} \\ &\quad\qquad \times {} _{6} \phi _{5} \biggl [ \begin{matrix} \def\frac#1#2{#1 / #2} q^{8 n+2}, -q^{4 n+3}, q^{2 n+1}, q^{2 n+1}, q^{2 n+1}, q^{2 n+1} \\ -q^{4 n+1}, q^{6 n+3}, q^{6 n+3}, q^{6 n+3}, q^{6 n+3} \end{matrix} ; q^2, q^{4n+3}\biggr] . \end{align*} \section{Zeta values} \label{s5} In this section we prove Theorem~\ref{th-zeta}. \begin{proof}[Proof of Theorem~\textup{\ref{th-zeta}}] Fix an even integer $s\ge8$ and define the rational functions \begin{align*} R(t)=R_n(t) &=\frac{n!^{s-6}\cdot2^{12n+1}(t+\frac n2)\prod_{j=1}^{3n}(t-n-\frac12+j)^2}{\prod_{j=0}^n(t+j)^s}, \\ \wh R(t)=\wh R_n(t) &=\frac{n!^{s-6}\cdot2^{12n}\prod_{j=1}^{3n}(t-n-\frac12+j)^2}{\prod_{j=0}^n(t+j)^s}, \end{align*} both vanishing together with their derivatives at $t=\nu-2n+\frac12$ for $\nu=0,1,\dots,3n-1$.
Then Lemma~\ref{lem1} and the results from \cite[Section 2]{Zu18} apply, and we obtain the linear forms \begin{align*} r_n&=\sum_{\nu=1}^\infty R_n(\nu-\tfrac12) =\sum_{\substack{i=2\\i\;\text{odd}}}^sa_i(2^i-1)\zeta(i)+a_0, \\ r_n'&=-\sum_{\nu=1}^\infty\frac{\d R_n}{\d t}(\nu-\tfrac12) =\sum_{\substack{i=2\\i\;\text{odd}}}^sa_ii(2^{i+1}-1)\zeta(i+1), \displaybreak[2]\\ \wh r_n&=\sum_{\nu=1}^\infty\wh R_n(\nu-\tfrac12) =\sum_{\substack{i=2\\i\;\text{even}}}^s\hat a_i(2^i-1)\zeta(i), \\ \wh r_n'&=-\sum_{\nu=1}^\infty\frac{\d\wh R_n}{\d t}(\nu-\tfrac12) =\sum_{\substack{i=2\\i\;\text{even}}}^s\hat a_ii(2^{i+1}-1)\zeta(i+1)+\hat a_0, \end{align*} with the following inclusions available: $$ d_n^{s-i}a_i,\; d_n^{s-i}\hat a_i\in\mathbb Z \quad\text{for}\; i=2,3,\dots,s, \quad\text{and}\quad d_n^sa_0,\; d_n^{s+1}\hat a_0\in\mathbb Z. $$ Here $d_n$ denotes the least common multiple of $1,\dots,n$. Its asymptotic behaviour $d_n^{1/n}\to e$ as $n\to\infty$ follows from the prime number theorem. The standard asymptotic machinery \cite[Section 2]{Zu02a} implies that $$ \lim_{n\to\infty}|r_n|^{1/n} =\lim_{n\to\infty}|\wh r_n|^{1/n} =g(x_0) $$ and $$ \lim_{n\to\infty}|r_n'|^{1/n} =\lim_{n\to\infty}|\wh r_n'|^{1/n} =g(x_0'), $$ where $$ g(x)=\frac{2^{12}(x+3)^6(x+1)^s}{(x+2)^{2s}}, $$ and $x_0$, $x_0'$ are the real zeroes of the polynomial $$ x^2(x+2)^s-(x+3)^2(x+1)^s $$ on the intervals $x>0$ and $-1<x<0$, respectively. It can also be observed numerically for each choice of even $s$ that $0<g(x_0')<g(x_0)$, so that $$ \lim_{n\to\infty}|r_n-\mu r_n'|^{1/n} =\lim_{n\to\infty}|\wh r_n-\hat\mu\wh r_n'|^{1/n} =g(x_0) $$ for any real $\mu$ and $\wh\mu$. 
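The real zeroes $x_0$, $x_0'$ and the corresponding values $g(x_0)$, $g(x_0')$ are easy to compute numerically; a sketch in Python (the function name is ours) locates both zeroes by bisection on their respective intervals:

```python
import math

def g_limits(s):
    """Return (g(x0), g(x0')) for the zeroes x0 > 0 and -1 < x0' < 0
    of x^2 (x+2)^s - (x+3)^2 (x+1)^s."""
    def f(x):
        return x * x * (x + 2.0)**s - (x + 3.0)**2 * (x + 1.0)**s
    def g(x):
        return 2.0**12 * (x + 3.0)**6 * (x + 1.0)**s / (x + 2.0)**(2 * s)
    def bisect(lo, hi):
        # f changes sign exactly once on each of the brackets used below
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    return g(bisect(1e-15, 1.0)), g(bisect(-0.9, -1e-15))
```

For $s=40$ and $s=42$ this reproduces the values of $g(x_0)$ and $g(x_0')$ recorded below.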
Theorem~\ref{th-zeta} follows from taking $\mu=\lambda/\pi$ for the first collection, $\hat\mu=4\lambda\pi$ for the second one, and noticing that, when $s=40$, we obtain $$ g(x_0)=\exp(-40.54232882\dots) \quad\text{and}\quad g(x_0')=\exp(-40.54234026\dots), $$ while for $s=42$ we get \begin{equation*} g(x_0)=\exp(-43.31492040\dots) \quad\text{and}\quad g(x_0')=\exp(-43.31492612\dots). \qedhere \end{equation*} \end{proof} Finally, we remark that further variations on Theorem~\ref{th-zeta} are possible by combining the two hypergeometric constructions from this section and \cite{Zu18} (see also related applications in \cite{FSZ18} and \cite{RZ18}). As the corresponding results remain similar in spirit to the theorem, we do not pursue this line here.
\section{Introduction} \label{s:intro} \noindent Let $G$ be a group given by a free presentation $G=F/R$. Then the {\it Schur multiplier} of $G$ can be defined via Hopf's formula as $\M (G)=([F,F]\cap R)/[F,R]$. It is known that if $G$ is a finite group, then $\M (G)\cong \HH_2(G,\mathbb{Z})\cong \HH^2(G,\mathbb{Q}/\mathbb{Z})$; see, for example, Beyl and Tappe's book \cite{Bey82} for details and applications of Schur multipliers. A significant part of the theory of Schur multipliers consists of estimating their size, rank, or exponent. Focusing on the latter, it was Schur \cite{Sch07} himself who showed that if $G$ is a finite group, then $(\exp \M (G))^2$ divides $|G|$. On the other hand, it was shown in \cite{Mor07} that, for every finite group $G$, the exponent of $\M(G)$ can be bounded in terms of $\exp G$ only. Good bounds of this kind are, however, still out of reach. In practice it often happens that $\exp \M(G)$ divides $\exp G$. One may conjecture that this is always the case, yet a counterexample of exponent 4 with Schur multiplier of exponent 8 was constructed by Bayes, Kautsky and Wamsley \cite{Bay73}. We mention that there are no known examples of odd-order groups $G$ with $\exp \M (G)>\exp G$, though it seems plausible that they exist. On the other hand, $\exp \M (G)$ always divides $\exp G$ at least when $G$ is of one of the following types: nilpotent of class $\le 3$ \cite{Jon74,Mor08a}, a powerful $p$-group \cite{Lub87}, or a $p$-group of maximal class \cite{Mor11}. In addition to that, we have the following. \begin{theorem}[\cite{Mor07,Mor08a,Mor08}] \label{t:expm} Let $G$ be a group of finite exponent. \begin{enumerate} \item If $G$ is nilpotent of class $\le 4$, then $\exp \M(G)$ divides $2\cdot \exp G$. \item If $G$ is a $4$-Engel group, then $\exp \M(G)$ divides $10\cdot \exp G$. \item If $\exp G=4$, then $\exp \M(G)$ divides 8. \item If $G$ is metabelian, then $\exp \M(G)$ divides $(\exp G)^2$.
\end{enumerate} \end{theorem} There are numerous other estimates for $\exp \M(G)$, but we do not go further into this here; see, for example, Sambonet's paper \cite{Sam15} for a short review of such results. Given a free presentation $G=F/R$, define \begin{align*} \B_0(G) &= ([F,F]\cap R) / (\langle \K(F)\cap R\rangle),\\ G\curlywedge G &= [F,F] / (\langle \K(F)\cap R\rangle), \end{align*} where $\K(F)$ is the set of commutators in $F$. It is shown in \cite{Mor12} that if $G$ is a finite group and $V$ a faithful representation of $G$ over $\mathbb{C}$, then the dual of $\B_0(G)$ is naturally isomorphic to the unramified Brauer group $\HH^2_{\rm nr}(\mathbb{C}(V)^G,\mathbb{Q}/\mathbb{Z} )$ introduced by Artin and Mumford \cite{Art72}. This invariant represents an obstruction to {\it Noether's problem} \cite{Noe16} asking whether the field $\mathbb{C}(V)^G$ is purely transcendental over $\mathbb{C}$. The above-mentioned result of \cite{Mor12} is based on a result of Bogomolov \cite{Bog88} who showed that $\HH^2_{\rm nr}(\mathbb{C}(V)^G,\mathbb{Q}/\mathbb{Z} )$ is naturally isomorphic to the intersection of the kernels of restriction maps $\HH^2(G,\mathbb{Q}/\mathbb{Z})\to \HH^2(A,\mathbb{Q}/\mathbb{Z})$, where $A$ runs through all (two-generator) abelian subgroups of $G$. The latter group is also known as the {\it Bogomolov multiplier} of $G$. Here we use the same name for $\B_0(G)$. Our main result is the following. \begin{theorem} \label{t:expb0} Let $G$ be a group of finite exponent. If $G$ satisfies one of the following properties, then $\exp (G\curlywedge G)$ divides $\exp G$: \begin{enumerate} \item Nilpotent of class $\le 5$, \item Metabelian, \item 4-Engel, \item $\exp G=4$. \end{enumerate} \end{theorem} This result thus complements Theorem \ref{t:expm}. It also implies that, in the cases listed in Theorem \ref{t:expb0}, it always happens that $\exp \B_0(G)$ divides $\exp G$.
Note that the question of determining $\exp \B_0(G)$ was recently addressed by Garc\'{\i}a-Rodr\'{\i}guez, Jaikin-Zapirain, and Jezernik \cite[Theorem 6]{Gar17}. In view of the above result and further extensive computational evidence, we pose the following conjecture: \begin{conjecture} \label{conj:expb0} Let $G$ be a finite group. Then $\exp\B_0(G)$ divides $\exp G$, and thus, in particular, $\exp \M(G)$ divides $(\exp G)^2$. \end{conjecture} In order to justify this conjecture, we first mention that Saltman \cite{Sal84} showed that for every $p>2$ there exists a $p$-group of exponent $p$ with non-trivial Bogomolov multiplier of exponent $p$; thus the bound given in the conjecture is sharp. On the other hand, if $\exp G$ is not prime, then $\B_0(G)$ is usually small (or of small exponent), compared to $G$. Fern\'andez-Alcober and Jezernik \cite{Fer18} recently constructed examples of finite $p$-groups $G$ of maximal class with $\exp\B_0(G)\approx\sqrt{\exp G}$, and it appears that this might be close to the worst case. In fact, we have not been able to find a group $G$ with $\exp (G\curlywedge G)>\exp G$, so it may well be that even the stronger conjecture, with $\B_0(G)$ replaced by $G\curlywedge G$, holds true. The proof of Theorem \ref{t:expb0} goes roughly as follows. First, one may assume without loss of generality that $G$ is a finite group. By \cite{Jez15a}, there exists a so-called CP cover $H$ of $G$ (see Section \ref{s:prelim} for the details) whose main feature is that $G\curlywedge G$ is isomorphic to $[H,H]$, and all commutator relations of $G$ lift to commutator relations in $H$. The calculations are then performed in $[H,H]$. In the metabelian case, the proof is fairly straightforward. In the remaining cases, it relies on a careful examination of the power structure of the lower central series of $H$, which uses information on the free groups in a given variety. For class $\le 5$ groups, this can be obtained using elementary commutator calculus.
In the exponent 4 and 4-Engel cases, we mainly use M. Hall's description of the free 3-generator group of exponent 4 \cite{Hal73}, and Nickel's computations of free 4-Engel groups of low ranks \cite{Nic99}, along with Havas and Vaughan-Lee's proof of local nilpotency of 4-Engel groups \cite{Hav05}. \section{Preliminaries} \label{s:prelim} \subsection{CP covers} \label{ss:cp} Let $G$ be a group and $Z$ a $G$-module. Denote by $e=(\chi , H,\pi )$ the extension \begin{equation*} \xymatrix{ 1\ar[r] & Z\ar[r]^\chi & H\ar[r]^\pi & G\ar[r] & 1} \end{equation*} of $Z$ by $G$. Following \cite{Mor12}, we say that $e$ is a {\it CP extension} if commuting pairs of elements of $G$ have commuting lifts in $H$. A stem central CP extension of $Z$ by $G$, where $|Z|=|\B_0(G)|$, is called a {\it CP cover} of $G$. CP covers are analogs of the usual covers in the theory of Schur multipliers. It is proved in \cite{Jez15a} that every finite group has a CP cover. It also follows from \cite{Jez15a} that if $e=(\chi , H,\pi )$ as above is a CP cover of $G$, then $Z^\chi\cap \K (H)=1$. This in particular implies that any commutator law satisfied by $G$ is also satisfied by $H$. \subsection{Collection process} \label{ss:collect} \noindent We recall that Hall's Basis Theorem \cite[Theorem 11.2.4]{Hal59} implies that if $F$ is a free nilpotent group of class $c$ and $a,b\in F$, then the word $(ab)^n$, where $n$ is a non-negative integer, can be written uniquely as a product $c_1^{n_1}c_2^{n_2}\cdots c_t^{n_t}$, where $c_i$ are basic commutators in $\{a,b\}$ of weights $1,2,\ldots ,c$, and $n_i=b_1{n\choose 1}+b_2{n\choose 2}+\cdots +b_r{n\choose r}$, where $r$ is the weight of $c_i$ and $b_j$ are non-negative integers not depending on $n$. Specifically, we will be interested in the case when $F$ is free nilpotent of class 6. We need to determine the coefficients $b_i$ explicitly, and this can be done using the collection process described in \cite[Section 12.3]{Hal59}.
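As a concrete illustration (not needed for the proofs), truncating the expansion at class $3$ gives $(ab)^n=a^nb^n\,[b,a]^{\binom n2}[b,a,a]^{\binom n3}[b,a,b]^{\binom n2+2\binom n3}$, and this can be checked in the unitriangular group $\mathrm{UT}(4,\mathbb{Z})$, which is nilpotent of class $3$. A sketch in Python, with the convention $[x,y]=x^{-1}y^{-1}xy$:

```python
from math import comb

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity():
    return [[int(i == j) for j in range(4)] for i in range(4)]

def mat_inv(A):
    # A = I + N with N strictly upper triangular, so N^4 = 0 and
    # A^{-1} = I - N + N^2 - N^3 (finite Neumann series).
    I = identity()
    N = [[A[i][j] - I[i][j] for j in range(4)] for i in range(4)]
    N2 = mat_mul(N, N)
    N3 = mat_mul(N2, N)
    return [[I[i][j] - N[i][j] + N2[i][j] - N3[i][j] for j in range(4)]
            for i in range(4)]

def mat_pow(A, n):
    R = identity()
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def comm(x, y):  # [x, y] = x^{-1} y^{-1} x y
    return mat_mul(mat_mul(mat_inv(x), mat_inv(y)), mat_mul(x, y))

def hall_class3(a, b, n):
    # a^n b^n [b,a]^C(n,2) [b,a,a]^C(n,3) [b,a,b]^(C(n,2)+2C(n,3))
    c = comm(b, a)
    R = identity()
    for t in (mat_pow(a, n), mat_pow(b, n),
              mat_pow(c, comb(n, 2)),
              mat_pow(comm(c, a), comb(n, 3)),
              mat_pow(comm(c, b), comb(n, 2) + 2 * comb(n, 3))):
        R = mat_mul(R, t)
    return R
```

Since $\mathrm{UT}(4,\mathbb{Z})$ has class $3$, all basic commutators of weight $\ge 4$ vanish there and the truncated expansion holds exactly, for any choice of the two matrices.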
We omit the details of the calculations and only record the values of $b_i$ for all basic commutators of weight $\le 6$ in $\{ a,b\}$ in Table \ref{tablebi}. \begin{table}[h!tb] \begin{tabular}{|l|c|c|c|c|c|c|} \hline Commutator $c_i$ & $b_1$ & $b_2$ & $b_3$ & $b_4$ & $b_5$ & $b_6$\\ \hline $a$ & 1 & & & & & \\ $b$ & 1 & & & & & \\ $[b,a]$ & & 1 & & & & \\ $[b,a,a]$ & & & 1 & & & \\ $[b,a,b]$ & & 1 & 2 & & & \\ $[b,a,a,a]$ & & & & 1 & & \\ $[b,a,a,b]$ & & & 2 & 3 & & \\ $[b,a,b,b]$ & & & 2 & 3 & & \\ $[b,a,a,[b,a]]$ & & & 1 & 7 & 6 & \\ $[b,a,b,[b,a]]$ & & & 6 & 18 & 12 & \\ $[b,a,a,a,a]$ & & & & & 1 & \\ $[b,a,a,a,b]$ & & & & 3 & 4 & \\ $[b,a,a,b,b]$ & & & 1 & 6 & 6 & \\ $[b,a,b,b,b]$ & & & & 3 & 4 & \\ $[b,a,b,[b,a,a]]$ & & & 4 & 21 & 36 & 20\\ $[b,a,a,a,[b,a]]$ & & & & 3 & 13 & 10\\ $[b,a,a,b,[b,a]]$ & & & 2 & 24 & 52 & 30\\ $[b,a,b,b,[b,a]]$ & & & 3 & 27 & 54 & 30\\ $[b,a,a,a,a,a]$ & & & & & & 1\\ $[b,a,a,a,a,b]$ & & & & & 4 & 5\\ $[b,a,a,a,b,b]$ & & & & 3 & 12 & 10\\ $[b,a,a,b,b,b]$ & & & & 3 & 12 & 10\\ $[b,a,b,b,b,b]$ & & & & & 4 & 5\\ \hline \end{tabular} \caption{Coefficients in exponents of $c_i$.} \label{tablebi} \end{table} \section{Proof of Theorem \ref{t:expb0}} \label{s:proof} \noindent In what follows, $G$ will be a group of finite exponent satisfying one of the properties listed in Theorem \ref{t:expb0}. In each of those cases, $G$ is then locally finite. As $\B_0$ commutes with direct limits \cite{Mor12}, one may assume without loss of generality that $G$ is a finite group; furthermore, Bogomolov's results \cite{Bog88} imply that we can restrict ourselves to the case when $G$ is a finite $p$-group. Let $$\xymatrix{ 1\ar[r] & Z\ar[r] & H \ar[r]^{\pi} & G\ar[r] & 1}$$ be a CP cover of $G$, where $Z$ is a central subgroup of $H$ with the property that $Z\cong \B_0(G)$ and $Z\cap \K(H)=1$. From here on, the proof proceeds by considering each case separately.
\subsection{Metabelian groups} \label{s:meta} The case of metabelian groups is easy: \begin{theorem} \label{t:metabel} Let $G$ be a metabelian group of finite exponent. Then the exponent of $G\curlywedge G$ divides $\exp G$. \end{theorem} \proof Put $\exp G=e$. Note that $H$ is also metabelian, hence it suffices to prove that $[x,y]^e=1$ for all $x,y\in H$. We expand $1=[x,y^e]=[x,y]^e\prod _{k=2}^e[x,{}_ky]^{{e\choose k}}$. Observe that $\prod _{k=2}^e[x,{}_ky]^{{e\choose k}}\in Z$. Furthermore, $$\prod _{k=2}^e[x,{}_ky]^{{e\choose k}}=\left [ \prod _{k=2}^e[x,{}_{k-1}y]^{{e\choose k}},y\right ]\in\K(H),$$ therefore $[x,y]^e=1$, as required. \endproof \subsection{Exponent 4} \label{s:exp4} We first list some properties of groups of exponent 4 that will be used in the proof of this case. \begin{lemma} \label{l:exp4} Let $G$ be a group of exponent 4 and $a,b,c\in G$. \begin{enumerate}[(a)] \item The group $\langle a,b\rangle$ is nilpotent of class $\le 5$, $\langle a,b,c\rangle$ is nilpotent of class $\le 7$, and $\langle [a,b],c\rangle$ is nilpotent of class $\le 4$, \item $[[a,b]^2,a]=1$, \item $[a,b,a,a^2[a,b]]=1$, \item $[c,[a,b],[a,b],[a,b]]=1$. \end{enumerate} \end{lemma} \proof All the above properties can be deduced immediately from a polycyclic presentation of $B(3,4)$; see also \cite{Hal73}. \endproof \begin{theorem} \label{t:exp4} Let $G$ be a group of exponent 4. Then the exponent of $G\curlywedge G$ divides 4. \end{theorem} \proof As noted above, we may assume without loss of generality that $G$ is a finite group. Choose $x,y,z\in H$. First note that $[[x,y]^2,x]\in Z\cap \K(H)=1$ by Lemma \ref{l:exp4}, therefore \begin{equation} \label{eq:exp4_1} 1=[x,y,x]^2[x,y,x,[x,y]]. \end{equation} Take $w\in \{ x,y\}$. As $\langle x,y\rangle$ is nilpotent of class $\le 5$, we get $1=[[x,y]^2,x,w]=[x,y,x,w]^2$. From here it follows that \begin{equation} \label{eq:exp4_2} \gamma _4(\langle x,y\rangle )^2=1.
\end{equation} We also have that $[x,y,z]^4=1$ by \cite[Proof of Theorem 2.6]{Mor07}. Now we expand $1=[x^4,y]$ using \cite[Lemma 9]{Mor08a}: \begin{align*} 1 &= [x^4,y]\\ &= [x,y]^4[x,y,x]^6[x,y,x,x]^4[x,y,x,x,x][x,y,x,[x,y]]^{14}\\ &= [x,y]^4[x,y,x]^2[x,y,x,x,x]. \end{align*} Lemma \ref{l:exp4} implies that $[x,y,x,x^2[x,y]]=1$. Expanding this using the class restriction, we obtain $1=[x,y,x,x]^2[x,y,x,x,x][x,y,x,[x,y]]$, and this implies $[x,y,x,x,x]=[x,y,x,[x,y]]$. From \eqref{eq:exp4_1} and the above expansion we get that $[x,y]^4=1$. By Lemma \ref{l:exp4}, the group $\langle [x,y],z\rangle$ is metabelian and nilpotent of class $\le 4$, and we also have that $[z,[x,y],[x,y],[x,y]]=1$. We now expand $([x,y]z)^4$ using Subsection \ref{ss:collect} and \eqref{eq:exp4_2}: $$([x,y]z)^4=z^4[z,[x,y]]^2[z,[x,y],[x,y],z][z,[x,y],z,z].$$ Denote $w=[z,[x,y]]^2[z,[x,y],[x,y],z][z,[x,y],z,z]$ and consider the following words: \begin{align*} w_1 &= [x,z,z,x^2z^2[z,y,x][z,x,y,y]],\\ w_2 &= [y^2z^2,[z,y,z]],\\ w_3 &= [y^2z^2,[y,x,x,x][z,x,x,x]],\\ w_4 &= [y^2z^2,[z,x,z,z][z,y,y,x,x]],\\ w_5 &= [[z,y][z,x]z^2,z^2[z,x][z,y][z,x,x][z,x,y][z,y,x][z,y,y][z,x,x,x][z,y,y,x,x]],\\ w_6 &= [[z,y][z,x]z^2, [z,y,z,x][z,y,z,y][z,y,z,x,x]],\\ w_7 &= [[z,y][z,x]z^2, [z,x,z,z]]. \end{align*} The subgroup $\langle x,y,z\rangle$ is an image of $$K=\langle a,b,c \mid \hbox{class } 7, \hbox{laws } [x_1^4,x_2]=[x_1,x_2]^4=[[x_1,x_2]^2,x_1]=[x_1,{}_{3}[x_2,x_3]]=1\rangle.$$ Expanding the words defined above in $K$ into products of basic commutators reveals that $w=w_1w_2\cdots w_7$. On the other hand, inspection of the presentation of $B(3,4)$ shows that $w_i\in \K(H)\cap Z=1$ for all $i=1,2,\ldots ,7$, therefore $w=1$. This immediately implies $([x,y]z)^4=z^4$ for all $x,y,z\in H$. From here it is not difficult to conclude that $\exp \gamma _2(H)$ divides 4, and this finishes the proof.
\endproof \subsection{4-Engel groups} \label{s:4engel} \noindent The aim of this section is to prove \begin{theorem} \label{t:4eng} Let $G$ be a $4$-Engel group of finite exponent. Then the exponent of $G\curlywedge G$ divides $\exp G$. \end{theorem} As $4$-Engel groups are locally nilpotent \cite{Hav05}, the situation can be easily reduced to the case when $G$ is a finite $p$-group. If $p\neq 2,5$, then it follows from \cite{Mor08} that even $\exp (G\wedge G)$ divides $\exp G$. This is no longer true when $p=2$ or $p=5$. In the case when $p=5$, there is a short proof of Theorem \ref{t:4eng}. Let $H$ be a CP cover of $G$ and denote $\exp G=5^e$. Note that $H$ is a 4-Engel 5-group, hence it is regular \cite{Hav07}. It follows from \cite{Mor08} that if $x,y\in H$, then $[x,y]^{5^e}=1$. Regularity now implies that $\gamma_2(H)$ has exponent dividing $5^e$. We are thus left with $4$-Engel $2$-groups. The argument here is more involved. We start with some preliminaries. \begin{lemma} \label{l:4engexp} Let $G$ be a $4$-Engel group of exponent $2^e$ and $a,b,c\in G$. \begin{enumerate}[(a)] \item $\gamma _7(\langle a,b\rangle )=\gamma _8(\langle a,b,c\rangle )^2=\gamma _9(\langle a,b,c\rangle )=1$, \item $[a,b,a]^{2^{e-1}}=[a,b,b]^{2^{e-1}}=1$, \item $\gamma _4(\langle a,b,c\rangle )^{2^{e-1}}=1$, \end{enumerate} \end{lemma} \proof It follows from \cite{Nic99} that if $\langle a,b,c\rangle$ is a $4$-Engel group, then $\gamma _7(\langle a,b\rangle )=(\gamma _8(\langle a,b,c\rangle )/\gamma _9(\langle a,b,c\rangle ))^{30}=\gamma _9(\langle a,b,c\rangle )^3=1$. This proves (a). The fact that the exponent of $\gamma _4(\langle a,b,c\rangle )$ divides $2^{e-1}$ is proved in \cite[Lemma 4.6]{Mor08}, whereas the proof of that Lemma also yields (b). \endproof We will also use the following: \begin{lemma}[cf Lemma 4.4 of \cite{Mor08}] \label{l:xny} Let $G$ be a 4-Engel group, $a,b\in G$ and $n$ a non-negative integer. 
Then \begin{equation*} [a^n,b]= [a,b]^n[a,b,a]^{{n\choose 2}}[a,b,a,a]^{{n\choose 3}}[a,b,a,[a,b]]^{{n\choose 2}+2{n\choose 3}} \end{equation*} \end{lemma} Referring to a polycyclic presentation of the free 4-Engel group with two or three generators obtained in \cite{Nic99}, we have: \begin{lemma} \label{l:e24} Let $G$ be a $4$-Engel group and $a,b\in G$. \begin{enumerate}[(a)] \item $[b,a,a,[b,a],a]=[b,a,a,a,b,a]=1$, \item $[b,a,a,b,a]^3[b,a,b,b,a]=[a,b,a,[a,b]][b,a,b,a,b,a]^3$, \item If $G$ has no elements of order 3, then $[c,[a,b],[a,b],[a,b]]\in\gamma _7(\langle a,b,c\rangle )^2\gamma _8(\langle a,b,c\rangle )$. \end{enumerate} \end{lemma} \begin{proposition} \label{p:4engid} Let $G$ be 4-Engel group of exponent $2^e$. Then \begin{enumerate}[(a)] \item $[[a,b]^{2^{e-1}},a]=1$, \item $[c,[a,b],[a,b],[a,b]]^{2^{e-2}}=1$. \end{enumerate} \end{proposition} \proof We may assume that $e>2$. Let us expand $(ab)^{2^e}=1$ using Subsection \ref{ss:collect} and Lemma \ref{l:4engexp}. We obtain \begin{equation} \label{eq:4engid_0} \begin{split} [a,b]^{{2^e \choose 2}} ={} & ( [b,a,a,a][b,a,a,b][b,a,b,b][b,a,a,[b,a]][b,a,a,a,b]\\ & \times [b,a,b,[b,a,a]][b,a,a,a,b,b] ) ^{{2^e\choose 4}}. \end{split} \end{equation} We commute this with $a$ and apply class restriction and Lemma \ref{l:e24} (a): \begin{equation} \label{eq:4engid_1} [[a,b]^{{2^e\choose 2}},a]=\left ( [b,a,a,b,a][b,a,b,b,a]\right ) ^{{2^e \choose 4}}. \end{equation} Using Lemma \ref{l:4engexp} and Lemma \ref{l:xny}, we obtain after a short calculation that \begin{equation} \label{eq:4engid_2} [[a,b]^{2^{e-1}},a]=[a,b,a,[a,b]]^{{2^{e-1}\choose 2}}. \end{equation} The equations \eqref{eq:4engid_1} and \eqref{eq:4engid_2}, together with Lemma \ref{l:e24} (b), give $ [b,a,b,a,b,a]^{2^{e-2}}=1. $ This immediately yields $\gamma _6(\langle a,b\rangle )^{2^{e-2}}=1$. Now replace $b$ by $ab$ in \eqref{eq:4engid_0} and use \eqref{eq:4engid_0}. 
Expansion under the given class restriction gives \begin{equation} \label{eq:4engid_3} 1=\left ( [b,a,b,a][b,a,a,b]\right )^{2^{e-2}}. \end{equation} If we further replace $b$ by $ab$ in \eqref{eq:4engid_3} and apply \eqref{eq:4engid_3}, we obtain $[b,a,a,b,a]^{2^{e-2}}=1$. Replacing $a$ by $ba$ in this identity, we conclude that also $[b,a,b,b,a]^{2^{e-2}}=1$. Equation \eqref{eq:4engid_1} now gives $[[a,b]^{{2^e\choose 2}},a]=1$. This proves (a), whereas (b) follows directly from Lemma \ref{l:e24} (c) and Lemma \ref{l:4engexp} (c), as $e>2$. \endproof \begin{theorem} \label{t:4eng2e} Let $G$ be a $4$-Engel group of exponent $2^e$. Then the exponent of $G\curlywedge G$ divides $2^e$. \end{theorem} \proof Let $H$ be a CP cover of $G$, and note that $H$ is a 4-Engel group. Take $x,y,z\in H$ and let $a=x^\pi$, $b=y^\pi$, $c=z^\pi$. Proposition \ref{p:4engid} implies \begin{align*} 1 &= [[x,y]^{2^{e-1}},x]\\ &= [x,y,x]^{2^{e-1}}[x,y,x,[x,y]]^{2^{e-1}\choose 2}. \end{align*} Equation \eqref{eq:4engid_2} implies $[a,b,a,[a,b]]^{2^{e-1}\choose 2}=1$, therefore $[x,y,x,[x,y]]^{2^{e-1}\choose 2}=[[x,y,x]^{2^{e-1}\choose 2},[x,y]]\in \K (H)\cap Z=1$. This gives $[x,y,x]^{2^{e-1}}=1$. From Lemma \ref{l:xny} we get $$1=[x^{2^e},y]=[x,y]^{2^e}[x,y,x]^{2^e\choose 2}[x,y,x,x]^{2^e \choose 3}[x,y,x,[x,y]]^{{2^e\choose 2}+2{2^e\choose 3}}.$$ Using the above equations, we see that this identity implies $[x,y]^{2^e}=1$. Now note that the subgroup $\langle [x,y],z\rangle$ is nilpotent of class $\le 5$, since $H$ is 4-Engel. We expand $([x,y]z)^{2^e}$ using the collection process (see Subsection \ref{ss:collect}): \begin{equation} \label{eq:4engH_1} ([x,y]z)^{2^e}=z^{2^e}([z,[x,y],[x,y],[x,y]][z,[x,y],[x,y],z][z,[x,y],z,z])^{2^e\choose 4}. \end{equation} Note that $[z,[x,y],[x,y],[x,y]]^{2^e\choose 4}=[[z,[x,y]]^{2^e\choose 4},[x,y],[x,y]]\in \K(H)$, and Proposition \ref{p:4engid} implies that $[z,[x,y],[x,y],[x,y]]^{2^e\choose 4}\in Z$. This immediately shows that $[z,[x,y],[x,y],[x,y]]^{2^e\choose 4}=1$.
Thus $([z,[x,y],[x,y],z][z,[x,y],z,z])^{2^e\choose 4}\in Z$. Furthermore, the class restriction yields that $([z,[x,y],[x,y],z][z,[x,y],z,z])^{2^e\choose 4}=[[z,[x,y],[x,y]][z,[x,y],z],z]^{2^e\choose 4}=[([z,[x,y],[x,y]][z,[x,y],z])^{2^e\choose 4},z]\in \K(H)$, therefore we conclude that $([z,[x,y],[x,y],z][z,[x,y],z,z])^{2^e\choose 4}=1$. Equation \eqref{eq:4engH_1} thus gives $([x,y]z)^{2^e}=z^{2^e}$, and induction on the commutator length shows that $\exp H'$ divides $2^e$. \endproof \subsection{Groups of nilpotency class $\le 5$} \label{s:nilpotent} We will prove the following: \begin{theorem} \label{t:class5} Let $G$ be a group of finite exponent and class $\le 5$. Then the exponent of $G\curlywedge G$ divides $\exp G$. \end{theorem} Again, we may assume that $G$ is a finite $p$-group of class $\le 5$ and exponent $p^e$. The CP cover $H$ of $G$ is then also nilpotent of class $\le 5$. Let $x,y\in H$. Assume first that $p>2$. Then it follows from the proof of \cite[Theorem 13]{Mor08a} that $[x,y]^{p^e}=1$. As $[H,H]$ is nilpotent of class $\le 2$, it is regular, hence $\exp [H,H]$ divides $p^e$. Thus we are left with the case when $G$ is a $2$-group. Without loss of generality, we may assume that $e>2$. Take $g,h,k\in G$. Then the expansion of $(g[h,k])^{2^e}=1$ yields $$1=[h,k,g]^{{2^e\choose 2}}[h,k,g,[h,k]]^{{2^e\choose 2}+2{2^e\choose 3}} [h,k,g,g,g]^{{2^e\choose 4}}.$$ If we replace $h$ by a commutator $[h_1,h_2]$ in the above equation, we get, after renaming the variables, that $$[h_1,h_2,h_3,h_4]^{2^{e-1}}=1,$$ therefore $\gamma _4(G)^{2^{e-1}}=1$. By the class restriction this implies $\gamma _4(H)^{2^{e-1}}=1$. Take now $x,y,z\in H$. Then $$1=[[x,y]^{2^e},z]=[x,y,z]^{2^e}[x,y,z,[x,y]]^{{2^e\choose 2}}=[x,y,z]^{2^e},$$ hence $\gamma _3(H)^{2^e}=1$. As $[H,H]$ is nilpotent of class $\le 2$, it suffices to prove that $[x,y]^{2^e}=1$ for all $x,y\in H$, and then Theorem \ref{t:class5} follows. Take $x,y\in H$.
Then \begin{equation} \label{eq:c5e1} 1=[x^{2^e},y]=[x,y]^{2^e}[x,y,x]^{{2^e \choose 2}}[x,y,x,x,x]^{{2^e\choose 4}}. \end{equation} If we interchange $x$ and $y$ in \eqref{eq:c5e1}, we get \begin{equation} \label{eq:c5e2} 1=[x,y]^{2^e}[x,y,y]^{{2^e\choose 2}}[x,y,y,y,y]^{{2^e\choose 4}}. \end{equation} Now we replace $x$ by $yx$ in \eqref{eq:c5e1} and apply \eqref{eq:c5e1} and \eqref{eq:c5e2}. After a short calculation we obtain \begin{equation} \label{eq:c5e3} [x,y]^{2^e}=\left ( [x,y,x,x,y][x,y,x,y,x][x,y,x,y,y][x,y,y,x,x][x,y,y,x,y][x,y,y,y,x]\right )^{{2^e\choose 4}}. \end{equation} As $H$ is nilpotent of class $\le 5$, we have that $[x,y,x,y,x]=[x,y,y,x,x]$ and $[x,y,x,y,y]=[x,y,y,x,y]$. Thus \eqref{eq:c5e3} can be rewritten as \begin{equation} \label{eq:c5e4} [x,y]^{2^e}=\left ( [x,y,x,x,y][x,y,y,y,x]\right )^{{2^e\choose 4}}. \end{equation} Denote $f={2^e\choose 4}$, and \begin{align*} u &= [y,x][y,x,y],\\ v &= [y,x]^{-1}[y,x,y]^{-1}[y,x,x,x][y,x,y,x],\\ w &= [y^fu, y^{-f}v]. \end{align*} We expand $w$: \begin{align*} w &= [y^fu,v][y^fu,y^{-f}]^v\\ &= [y^f,v]^u[u,v][u,y^{-f}]^v\\ &= [y^f,v][y^f,v,u][u,v][u,y^{-f}][u,y^{-f},v]\\ &= [y^f,v][u,y^{-f}][u,v]\left ([y,v,u][u,y,v]^{-1} \right )^f. \end{align*} Note that $$[u,v] =[[y,x],[y,x,y]^{-1}][[y,x,y],[y,x]^{-1}]=[[y,x],[y,x,y]]^{-1}[[y,x,y],[y,x]]^{-1}=1,$$ and the Hall-Witt identity gives $1=[y,v,u][v,u,y][u,y,v]=[y,v,u][u,y,v]$. Thus $w=[y^f,v][u,y^{-f}]$. Now, $$[u,y^{-f}]=[(y^{-1})^f,u]^{-1}=[y^{-1},u]^{-f}[y^{-1},u,y^{-1}]^{-{f\choose 2}}[y^{-1},u,y^{-1},y^{-1}]^{-{f\choose 3}}.$$ Quick calculation shows that $[y^{-1},u]=[y,x,y]$, $[y^{-1},u,y^{-1}]=[y,x,y,y,y][y,x,y,y]^{-1}$, and $[y^{-1},u,y^{-1},y^{-1}]=[y,x,y,y,y]$. 
Therefore $$[u,y^{-f}]=[y,x,y]^{-f}[y,x,y,y]^{{f\choose 2}}[y,x,y,y,y]^{-{f\choose 2}-{f\choose 3}}.$$ On the other hand, we easily get that $$[y,v]=[y,x,y][y,x,y,[x,y]][y,x,y,y][y,x,x,x,y][y,x,y,x,y],$$ and thus \begin{align*} [y^f,v] &= [y,v]^f[y,v,y]^{{f\choose 2}}[y,v,y,y]^{{f\choose 3}}\\ &= [y,x,y]^f[y,x,y,[x,y]]^f[y,x,y,y]^{f+{f\choose 2}}[y,x,x,x,y]^f [y,x,y,x,y]^f[y,x,y,y,y]^{{f\choose 2}+{f\choose 3}}. \end{align*} We thus get $$w=[y,x,y,y]^{f+2{f\choose 2}}[y,x,y,[x,y]]^f\left ([y,x,x,x,y][y,x,y,x,y]\right ) ^f.$$ Note that $[y,x,y,y]^{f+2{f\choose 2}}=[y,x,y,y]^{f^2}=1$, since $f^2$ is divisible by $2^{2e-4}\ge 2^{e-1}$. As $H$ has class $\le 5$, we also have $[y,x,y,[x,y]]=[y,x,y,x,y][x,y,y,y,x]$. This, together with \eqref{eq:c5e4}, implies $w=\left ([x,y,x,x,y][x,y,y,y,x]\right )^f=[x,y]^{2^e}$. We conclude that $[x,y]^{2^e}\in \K(H)\cap Z=1$, and this proves Theorem \ref{t:class5}.
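The arguments above repeatedly use elementary $2$-adic facts about the binomial coefficients $\binom{2^e}{2}$, $\binom{2^e}{4}$, and $\binom{2^{e-1}}{2}$, together with the exponent identity $f+2\binom{f}{2}=f^2$ for $f=\binom{2^e}{4}$. These are easy to check by hand; as a sanity check (not part of the proofs, and the helper `v2` is our own), they can also be verified mechanically:

```python
from math import comb

def v2(n):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

for e in range(3, 13):
    n = 2 ** e
    # binom(2^e,2) = 2^{e-1}(2^e - 1) has 2-adic valuation e-1
    assert v2(comb(n, 2)) == e - 1
    # binom(2^e,4) has 2-adic valuation e-2
    assert v2(comb(n, 4)) == e - 2
    # binom(2^{e-1},2) has 2-adic valuation e-2
    assert v2(comb(n // 2, 2)) == e - 2
    f = comb(n, 4)
    # the exponent identity f + 2*binom(f,2) = f^2 used to kill [y,x,y,y]
    assert f + 2 * comb(f, 2) == f * f
    # f^2 is divisible by 2^{2e-4} >= 2^{e-1} whenever e >= 3
    assert v2(f * f) == 2 * e - 4 and 2 * e - 4 >= e - 1
print("binomial valuation checks passed")
```

The same valuations justify the steps where the exponents ${2^e\choose 2}$ and ${2^e\choose 4}$ are replaced by $2^{e-1}$ and $2^{e-2}$.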
\section{Introduction} The problem of determining the orbit closures, or the irreducible components of varieties of algebras, has been studied for many structures: Lie algebras (\cite{B1}, \cite{B2}, \cite{BS}, \cite{Ch}, \cite{GO'H}, \cite{KN}, \cite{L}, \cite{NP}, \cite{S}, \cite{W}), Jordan algebras (\cite{ACGS}, \cite{AFM}, \cite{GKP}, \cite{KM}, \cite{KP}), Leibniz algebras (\cite{CKLO}, \cite{IKV}, \cite{KPPV}), pre-Lie algebras in \cite{BB1}, Novikov algebras in \cite{BB2}, Filippov algebras in \cite{dAIP}, binary Lie and nilpotent Malcev algebras in \cite{KPV}, superalgebras in \cite{AZ}, etc. However, we have found no literature about Lie superalgebras. The aim of this paper is to give necessary conditions for the existence of degenera\-tions between two complex Lie superalgebras of dimension $(m,n)$. For this goal, some invariants are studied (see Section 2.1). As an application, we study the variety of $(2,2)$-dimensional Lie superalgebras, where the group $\GL_2 (\mathbb{C}) \oplus \GL_2(\mathbb{C})$ acts by ``change of basis'', producing fourteen orbits, five orbits depending on one parameter, and one orbit depending on two parameters (see Theorem \ref{thm:alg_clas}). After that, we obtain the Zariski closure of every orbit, and the irreducible components of this variety (see Theorem \ref{2-2-irred-comp}). Moreover, we obtain a nilpotent rigid Lie superalgebra, i.e., a nilpotent Lie superalgebra whose orbit is Zariski open (see Theorem \ref{thm:rigid}). It is important to notice that in the classical case, there are no known examples of nilpotent rigid Lie algebras. \subsection{Preliminaries} A supervector space $V=V_0\oplus V_1$ over the field $\mathbb{F}$ is a $\mathbb{Z}_2$-graded $\mathbb{F}$-vector space, i.e., a vector space decomposed into a direct sum of subspaces. The elements of $V_0\setminus\{ 0 \}$ (respectively, of $ V_1 \setminus \{0\}$) are called even (respectively, odd).
Even and odd elements together are called homogeneous; the degree of an homogeneous element is $i$, denoted by $|v|=i$, if $v \in V_i \setminus \{ 0 \}$, for $i \in \mathbb{Z}_2$. If $\dim_\mathbb{F} (V_0)=m$ and $\dim_\mathbb{F} (V_1)=n$, we say that the dimension of $V$ is $(m,n)$. The vector space $\End(V)$ can be viewed as a supervector space, denoted by $\End(V_0|V_1)$, where $\End( V_0 | V_1)_i = \{ T \in \End(V) \; | \; T(V_j)\subset V_{i+j}, \; j \in \mathbb{Z}_2 \}$, for $i\in \mathbb{Z}_2$. Given an homogeneous basis $\{e_1, \dots, e_{m}, f_1, \dots, f_n\}$ for $V=V_0 \oplus V_1$, (that is, $V_0 = \Span_\mathbb{F}\{ e_1, \dots, e_m\} $ and $V_1 = \Span_\mathbb{F}\{ f_1, \dots, f_n\}$), it follows that $\End(V_0 | V_1)_i$ can be identified with $(\Mat_{(m|n)}(\mathbb{F}))_i$, for $ i \in \mathbb{Z}_2$, where $$ \begin{array}{l} (\Mat_{(m|n)}(\mathbb{F}))_0= \left\{ \begin{psmallmatrix} A & 0 \\ 0 & D \end{psmallmatrix} |\, A \in \Mat_m(\mathbb{F}), \; D \in \Mat_n(\mathbb{F}) \right\}, \quad \text{ and } \\ \\ (\Mat_{(m|n)}(\mathbb{F}))_1= \left\{ \begin{psmallmatrix} 0 & B \\ C & 0 \end{psmallmatrix} |\, C \in \Mat_{n \times m}(\mathbb{F}), \; B \in \Mat_{m\times n}(\mathbb{F}) \right\}. \end{array} $$ In particular, $\Aut(V_0|V_1)$ is a $\mathbb{Z}_2$-graded group such that $\Aut(V_0|V_1)_0$ can be identified with $\GL_m(\mathbb{F}) \oplus \GL_n(\mathbb{F})$. A Lie superalgebra is a supervector space $\mathfrak g=\mathfrak g_0 \oplus \mathfrak g_1$ endowed with a bilinear map $\[ \cdot, \cdot \] : V \times V \to V$ satisfying the following \begin{enumerate}[(i)] \item $\[\mathfrak g_i, \mathfrak g_j\] \subset \mathfrak g_{i+j}$, for $i, j \in \mathbb{Z}_2$. \item The super skew-symmetry: $\[x,y\]= -(-1)^{|x||y|}\[y, x\]$. \item The super Jacobi identity: $$(-1)^{|x||z|}\[ \[ x,y \], z \] +(-1)^{|x||y|}\[ \[ y,z \], x \] + (-1)^{|y||z|}\[ \[ z,x \], y \] =0$$ \end{enumerate} for $x,y,z \in (\mathfrak g_0 \cup \mathfrak g_1) \setminus \{ 0 \}$. 
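For a concrete multiplication table, the super skew-symmetry and super Jacobi axioms can be verified mechanically on homogeneous basis triples. As an illustration (a sketch of ours, not part of the text), the following checks the super Jacobi identity for the superalgebra denoted $\mathcal{LS}_{19}$ in Theorem \ref{thm:alg_clas} below; the encoding of basis elements as coefficient vectors is our own convention:

```python
from itertools import product

# Basis indices: 0,1 = even (e1,e2); 2,3 = odd (f1,f2).
parity = [0, 0, 1, 1]

# Structure constants of LS_19 from the classification:
# [e1,e2]=e1, [e1,f2]=f1, [e2,f1]=-f1, [f1,f2]=e1, [f2,f2]=2e2.
table = {
    (0, 1): {0: 1},
    (0, 3): {2: 1},
    (1, 2): {2: -1},
    (2, 3): {0: 1},
    (3, 3): {1: 2},
}

def bracket(i, j):
    """[b_i, b_j] as a coefficient dict; missing pairs are filled in
    via super skew-symmetry [x,y] = -(-1)^{|x||y|}[y,x]."""
    if (i, j) in table:
        return dict(table[(i, j)])
    if (j, i) in table:
        s = -(-1) ** (parity[i] * parity[j])
        return {k: s * c for k, c in table[(j, i)].items()}
    return {}

def brk(u, v):
    """Bilinear extension of the bracket to coefficient vectors."""
    out = [0, 0, 0, 0]
    for i, j in product(range(4), repeat=2):
        if u[i] and v[j]:
            for k, c in bracket(i, j).items():
                out[k] += u[i] * v[j] * c
    return out

def e(i):
    v = [0, 0, 0, 0]
    v[i] = 1
    return v

# Super Jacobi identity on all homogeneous basis triples.
for x, y, z in product(range(4), repeat=3):
    s1 = (-1) ** (parity[x] * parity[z])
    s2 = (-1) ** (parity[x] * parity[y])
    s3 = (-1) ** (parity[y] * parity[z])
    lhs = [s1 * a + s2 * b + s3 * c for a, b, c in zip(
        brk(brk(e(x), e(y)), e(z)),
        brk(brk(e(y), e(z)), e(x)),
        brk(brk(e(z), e(x)), e(y)))]
    assert lhs == [0, 0, 0, 0], (x, y, z, lhs)
print("LS_19 satisfies the super Jacobi identity")
```

Because the identity is multilinear in homogeneous arguments, checking it on basis triples suffices.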
A linear map between Lie superalgebras $\Phi: \mathfrak g \to \mathfrak g^\prime$ is called a Lie superalgebra morphism if $\Phi$ is even (i.e. $\Phi(\mathfrak g _i) \subset \mathfrak g_i $, for $ i \in \mathbb{Z}_2$), and $\Phi (\[ x,y\] )= \[ \Phi(x), \Phi(y) \] ^\prime$, for $x,y \in \mathfrak g$. (See \cite{Sc} for standard terminology on Lie superalgebras.) Let $V=V_0\oplus V_1$ be a complex $(m,n)$-dimensional supervector space with a fixed homogeneous basis $\left\{e_1,\dots,e_m, f_1,\dots,f_n\right\}$. Given a Lie superalgebra structure $\[ \cdot, \cdot \]$ on $V$, we can identify $ \mathfrak g=(V , \, \[ \cdot, \cdot \])$ with its set of structure cons\-tants $\left\{c_{ij}^k,\rho_{ij}^k,\Gamma_{ij}^k\right\}\in\mathbb{C}^{m^3+2mn^2}$, where $$\[e_i,e_j\]=\sum_{k=1}^mc_{ij}^ke_k,\quad \[e_i,f_j\]=\sum_{k=1}^n\rho_{ij}^kf_k,\quad\text{and}\quad \[f_i,f_j\]=\sum_{k=1}^m\Gamma_{ij}^ke_k.$$ Since every set of structure constants must satisfy the polynomial equations given by the super skew-symmetry and the super Jacobi identity, the set of all Lie superalgebras of dimension $(m,n)$ is an affine variety in $\mathbb{C}^{m^3+2mn^2}$, denoted by $\LS$. Notice that every point $\{ c_{ij}^k, \rho_{ij}^k , \Gamma_{ij}^k \} \in \LS$ represents a Lie superalgebra of dimension $(m,n)$. On the other hand, since Lie superalgebra isomorphisms are even maps, it follows that the group $G=\GL_m(\mathbb{C})\oplus\GL_n(\mathbb{C})$ acts on $\LS$ by ``change of basis'': $g\cdot\[x,y\]=g\[g^{-1}x,g^{-1}y\]$, for $g\in G$ and $x,y\in \mathfrak g$. Observe that the set of $G$-orbits of this action is in one-to-one correspondence with the set of isomorphism classes in $\LS$. \section{Degenerations on the variety $\LS$} We recall some known facts for irreducible varieties. A nonempty algebraic set is said to be irreducible if it cannot be written as the union of two proper, nonempty, algebraic subsets.
Also, every nonempty algebraic set $Y$ can be written as the union of a finite number of maximal irreducible algebraic subsets $Y = X_1\cup\dots\cup X_r$; these are called the {\bf irreducible components} of $Y$. Notice that such a decomposition is unique. As a first step for finding such irreducible components we use the following result: \begin{lemma} If $G$ is a connected algebraic group acting on a variety $X$, then the irreducible components of $X$ are stable under the action of $G$. Moreover, the irreducible components of the variety $X$ are closures of single orbits or closures of infinite families of orbits. \end{lemma} \begin{definition}\label{def:rigid} An element $x\in X$ is called \textit{rigid} if its orbit $\mathcal{O}(x)$ is open in $X$. \end{definition} Rigid elements of the variety are important due to the fact that if $x$ is rigid in $X$, then there exists an irreducible component $\mathcal{C}$ such that $\mathcal{C}\cap\mathcal{O}(x)$ is a non-empty open subset of $\mathcal{C}$, and thus $\mathcal{C}\subset\overline{\mathcal{O}(x)}$. \bigskip Consider now the variety $\LS$. Given two Lie superalgebras $\mathfrak g$ and $\mathfrak h$, we say that $\mathfrak g$ \emph{degenerates} to $\mathfrak h$, denoted by $\mathfrak g\rightarrow \mathfrak h$, if $\mathfrak h$ lies in the Zariski closure of the $G$-orbit $\mathcal{O}(\mathfrak g)$, where $G=\GL_m(\mathbb{C})\oplus\GL_n(\mathbb{C})$. Since each orbit $\mathcal{O}(\mathfrak g)$ is a constructible set, its closures relative to the Euclidean and the Zariski topologies are the same (see \cite{M}, 1.10 Corollary 1, p. 84). As a consequence the following is obtained. \begin{lemma} Let $\mathbb{C}(t)$ be the field of fractions of the polynomial ring $\mathbb{C}[t]$. If there exists an operator $g_t\in\GL_m(\mathbb{C}(t))\oplus\GL_n(\mathbb{C}(t))$ such that $\displaystyle\lim_{t\rightarrow 0}g_t\cdot\mathfrak g=\mathfrak h$, then $\mathfrak g\rightarrow\mathfrak h$.
\label{def-equiv-defor} \end{lemma} Given $\mathfrak g=\mathfrak g_0\oplus\mathfrak g_1 \in \LS$, recall the identification of $\mathfrak g$ with its set of structure cons\-tants, $\mathfrak g=\left\{c_{ij}^k,\rho_{ij}^k,\Gamma_{ij}^k\right\}$. From the previous lemma, it follows that every Lie superalgebra $\mathfrak g\in\LS$ degenerates to the Lie superalgebra $\mathfrak{a}=\{0,0,0\}$. In fact, take $g_t=t^{-1}\left(I_{m}\oplus I_{n}\right)$, where $I_{k}$ is the identity map in $\GL_{k}(\mathbb{C})$; then $\displaystyle\lim_{t\rightarrow 0}g_t\cdot\mathfrak g=\mathfrak{a}$. Thus, $ \mathfrak g \rightarrow \mathfrak{a}$. \subsection{Invariants} Next we list the invariants for the variety $\LS$ that will be used to obtain non-degenerations in the variety. Most of these invariants have already been used in the study of the variety of Lie algebras. Given $\mathfrak g \in \LS$, we use the notation $\Gamma_\mathfrak g$ for $\left(\Gamma_{ij}^k\right)$. On the other hand, $\mathfrak{D}(\alpha,\beta,\gamma)(\mathfrak g)$ denotes the space of $(\alpha,\beta,\gamma)$-derivations for the Lie superalgebra $\mathfrak g$ (see \cite{ZZ} for more details). We define the map $\mathcal{A}:\LS\rightarrow \LS$ by $\left\{c_{ij}^k,\rho_{ij}^k,\Gamma_{ij}^k\right\}\mapsto\left\{c_{ij}^k,0,0\right\}$. Notice that the map $\mathcal{A}$ is a morphism that `forgets' the Lie superalgebra structure on $\mathfrak g$ and retrieves the Lie (super)algebra $\mathfrak h=\mathfrak g_0\oplus\mathbb{C}^n \hookrightarrow \LS$. It is easy to see that the map $\mathcal{A}$ is $G$-equivariant, that is, $\mathcal{A}\left(g\cdot\left\{c_{ij}^k,\rho_{ij}^k,\Gamma_{ij}^k\right\}\right)=g\cdot \mathcal{A}\left(\left\{c_{ij}^k,\rho_{ij}^k,\Gamma_{ij}^k\right\}\right)$, for $g\in G$. Moreover, $$\mathcal{A}\left(\overline{G\cdot\left\{c_{ij}^k,\rho_{ij}^k,\Gamma_{ij}^k\right\}}\right)=\overline{G\cdot\left\{c_{ij}^k,0,0\right\}}.
$$ As a consequence we obtain the following useful result: \begin{lemma}\label{lem:deg_alg} Let $ \mathfrak g=\mathfrak g_0\oplus\mathfrak g_1,\ \mathfrak h=\mathfrak h_0\oplus\mathfrak h_1 \in \LS$. If $\mathfrak g \rightarrow\mathfrak h$, then $\mathfrak g_0\rightarrow\mathfrak h_0$. \end{lemma} Also, we have the following relations: \begin{lemma}\label{lem:invariants} Let $\mathfrak g,\mathfrak h\in\LS$. If $\mathfrak g\rightarrow\mathfrak h$, then the following relations must hold: \begin{enumerate}[(i)] \item $\dim \mathcal{O}(\mathfrak g)>\dim\mathcal{O}(\mathfrak h)$. \item If $\Gamma_\mathfrak g\equiv 0$ then $\Gamma_\mathfrak h\equiv 0$. \item $\dim(\mathfrak g^1)_i\geq\dim(\mathfrak h^1)_i$ for $i\in\mathbb{Z}_2$, where $\mathfrak g^1=[\mathfrak g,\mathfrak g]$. \item $\dim \mathfrak{D}(\alpha,\beta,\gamma)_i(\mathfrak g)\leq\dim \mathfrak{D}(\alpha,\beta,\gamma)_i(\mathfrak h)$, for $i\in\mathbb{Z}_2$. \item Let $\mathfrak{B}$ and $\mathfrak{B}'$ be fixed bases for $\mathfrak g$ and $\mathfrak h$, respectively. If $\tr\ad(x)=0$ for every $x\in\mathfrak{B}$, then $\tr\ad(y)=0$, for every $y\in\mathfrak{B}'$. \end{enumerate} \end{lemma} \begin{proof} Item $(i)$ follows from the Closed Orbit Lemma (see \cite{B} I. Lemma 1.8, p. 53). The remaining items follow by proving that the corresponding sets are closed, using in some cases the upper semicontinuity of appropriate functions (see \cite{C}, \S 3 Theorem 2, p. 14). \end{proof} \begin{definition} Let $\mathfrak g=\left\{c_{ij}^k, \rho_{ij}^k,\Gamma_{ij}^k\right\}$ be a Lie superalgebra in $\LS$. \begin{enumerate} \item The Lie superalgebra $\ab(\mathfrak g)$ is defined by $\ab(\mathfrak g)= \left\{0, 0 ,\Gamma_{ij}^k\right\} $. \item The Lie (super)algebra $\mathcal{F}(\mathfrak g)$ is defined by $\mathcal{F}(\mathfrak g) = \left\{c_{ij}^k,\rho_{ij}^k, 0 \right\}$. \end{enumerate} \end{definition} \begin{lemma}\label{lem:abel} Let $\mathfrak g,\mathfrak h\in\LS$.
If $\mathfrak g\rightarrow\mathfrak h$, then \begin{enumerate}[(i)] \item $\ab(\mathfrak g)\rightarrow\ab(\mathfrak h)$, \item $\mathcal{F}(\mathfrak g)\rightarrow \mathcal{F}(\mathfrak h)$. \end{enumerate} \end{lemma} \begin{proof} Let $\{e_1,\dots,e_m,f_1,\dots,f_n\}$ be an homogeneous basis for the underlying supervector space $V=V_0\oplus V_1$ of $\mathfrak g$ and $\mathfrak h$. Since $\mathfrak g\rightarrow\mathfrak h$ it follows that there exists a map $g_t\in\GL_m(\mathbb{C}(t))\oplus\GL_n(\mathbb{C}(t))$ such that $\displaystyle\lim_{t\rightarrow 0}g_t\cdot{\mathfrak g}={\mathfrak h}$. In particular \begin{enumerate}[(i)] \item $\displaystyle\lim_{t\rightarrow 0}g_t\[g_t^{-1}f_i,g_t^{-1}f_j\]_{\mathfrak g}=\[f_i,f_j\]_{\mathfrak h}$. Hence, $\ab(\mathfrak g)\rightarrow\ab(\mathfrak h)$. \item $\displaystyle\lim_{t\rightarrow 0}g_t\[g_t^{-1}e_i,g_t^{-1}e_j\]_{\mathfrak g}=\[e_i,e_j\]_{\mathfrak h}\,$ and $\, \displaystyle\lim_{t\rightarrow 0}g_t\[g_t^{-1}e_i,g_t^{-1}f_j\]_{\mathfrak g}=\[e_i,f_j\]_{\mathfrak h}$. Therefore, $\mathcal{F}(\mathfrak g)\rightarrow \mathcal{F}(\mathfrak h)$. \end{enumerate} \end{proof} {\bf The $(i,j)$-invariant.} Let $\mathfrak g \in \LS$ and $(i,j)$ be a pair of positive integers. Let $$c_{i,j}=\frac{\tr(\ad x)^i\tr(\ad y)^j}{\tr\left((\ad x)^i\circ(\ad y)^j\right)}$$ be a quotient of two polynomials in the structure constants of $\mathfrak g$, for all $x,y\in\mathfrak g$ such that both polynomials are not zero. If $c_{i,j}$ is independent of the choice of $x$ and $y$, then it is an invariant called the {\bf $(i,j)$-invariant of $\mathfrak g$}. \begin{lemma}\label{lemma:ij-inv} Let $\mathfrak g,\mathfrak h\in\LS$. If $\mathfrak g\rightarrow\mathfrak h$ and the $(i,j)$-invariant exists for both Lie superalgebras, then $\mathfrak g$ and $\mathfrak h$ have the same $(i,j)$-invariant. \end{lemma} \section{The variety ${\mathcal{LS}}^{(2,2)}$} Lie superalgebras of dimension 4 over $\mathbb{R}$ were classified in \cite{Back}. 
However, Backhouse's classification does not include the classification of $(2,2)$-dimensional Lie superalgebras over $\mathbb{C}$. We provide in this section a complete classification, up to isomorphism, of all complex Lie superalgebras of dimension $(2,2)$. The representatives of the isomorphism classes are denoted by ${\mathcal LS}_n$, where $n\in \{ 0, \dots, 19\}$. We write the pro\-ducts for each of them, in terms of an homogeneous basis $\{ e_1, e_2, f_1, f_2 \} $, where $({\mathcal LS_n})_0 = \Span \{ e_1, e_2 \}$ and ${(\mathcal LS_n})_1 = \Span \{ f_1, f_2 \}$. Some of these Lie superalgebras depend on one or two complex parameters, producing non-isomorphic families of Lie superalgebras as the following Theorem shows. \begin{theorem}\label{thm:alg_clas} Let $\mathfrak g$ be a complex Lie superalgebra of dimension $(2,2)$. Then, $\mathfrak g$ is isomorphic to one and only one of the following Lie superalgebras: $$\begin{array}{llll} \mathcal {LS}_{0}: &\[\cdot,\cdot \]=0. &&\\ \mathcal {LS}_{1}:& \[ f_1, f_1 \]= e_1, & \[f_2, f_2 \]=e_2.& \\ \mathcal {LS}_{2}:& \[ f_1, f_1 \]= e_1, & \[f_2, f_2 \]=e_1. &\\ \mathcal {LS}_{3}:& \[ f_1, f_1 \]= e_1.&&\\ \mathcal {LS}_{4}: & \[ f_1, f_2 \] = e_1, & \[f_2, f_2\]=e_2.\\ \mathcal {LS}_{5}:& \[ e_1, f_1 \]= f_1, & \[e_2, f_2 \]=f_2.& \\ \mathcal {LS}_{6}^{\alpha}:& \[ e_2, f_1 \]= f_1, & \[e_2, f_2 \]=\alpha f_2. \\ \mathcal {LS}_{7}:& \[ e_2, f_1 \]= f_1, & \[e_2, f_2 \]=-f_2, &\[f_1, f_2 \]=e_1. \\ \mathcal {LS}_{8}:& \[ e_2, f_1 \]= f_1, & \[f_2, f_2 \]=e_1. & \\ \mathcal {LS}_{9}:& \[ e_1, f_1 \]= f_1, & \[e_1, f_2 \]=f_2,& \[e_2, f_2\]=f_1. \\ \mathcal {LS}_{10}: & \[e_2, f_1 \]=f_1, & \[ e_2,f_2\]= f_1+f_2. \\ \mathcal {LS}_{11}: & \[e_2, f_2 \] = f_1. & & \\ \mathcal {LS}_{12}: & \[ e_2, f_2\] = f_1, & \[ f_2, f_2 \] = e_1. \\ \mathcal {LS}_{13}^{\alpha,\beta}: & \[ e_1, e_2 \] =e_1,& \[ e_2, f_1\] = \alpha f_1, & \[ e_2, f_2\] = \beta f_2. 
\\ \mathcal {LS}_{14}^{\alpha}: & \[ e_1, e_2 \] =e_1,& \[ e_2, f_1\] = \alpha f_1, & \[ e_2, f_2\] =-(\alpha+1) f_2, \\&\[ f_1, f_2 \]= e_1, & & \\ \mathcal {LS}_{15}^{\alpha}: & \[ e_1, e_2 \] =e_1,& \[ e_2, f_1\] = \alpha f_1, & \[ e_2, f_2\] = -\frac{1}{2} f_2, \\ & \[ f_2, f_2 \]= e_1. & & \\ \mathcal{LS}_{16}^{\alpha}: & \[ e_1, e_2 \] =e_1,& \[ e_2, f_1\] = \alpha f_1,& \[ e_2, f_2\] = f_1+ \alpha f_2. \\ \mathcal{LS}_{17}: & \[ e_1, e_2 \] =e_1,& \[ e_2, f_1\] = - \frac{1}{2} f_1, & \[ e_2, f_2\] = f_1 - \frac{1}{2} f_2. \\ & \[ f_2, f_2 \]= e_1. & & \\ \mathcal{LS}_{18}^{\alpha}: & \[ e_1, e_2 \] =e_1,& \[ e_1, f_2 \]=f_1, & \[ e_2, f_1\] = \alpha f_1, \\ & \[ e_2, f_2\] = (\alpha+1) f_2. & & \\ \mathcal{LS}_{19}: & \[ e_1, e_2 \] =e_1,& \[ e_1, f_2 \]=f_1, & \[ e_2, f_1\] = -f_1, \\ & \[ f_1, f_2 \]= e_1, & \[ f_2, f_2 \]= 2e_2, & \\ \end{array}$$ where $\alpha, \beta \in \mathbb{C}$. Furthermore, $\mathcal {LS}_{n}^{\alpha} \simeq \mathcal {LS}_{n}^{\alpha'}$ if and only if $\alpha= \alpha'$, for $n\in \{6,15,16, 18\}$; $\mathcal {LS}_{14}^{\alpha} \simeq \mathcal {LS}_{14}^{\alpha'}$ if and only if either $\alpha=\alpha'$ or $\alpha+\alpha'=-1$; and $\mathcal {LS}_{13}^{\alpha,\beta} \simeq \mathcal {LS}_{13}^{\alpha', \beta'}$ if and only if $\{\alpha, \beta\}= \{\alpha', \beta'\}$. \end{theorem} \subsection{Invariants in the variety ${\mathcal{LS}}^{(2,2)}$} In order to obtain all possible degenerations in ${\mathcal{LS}}^{(2,2)}$, we list some of the invariants for every $(2,2)$-dimensional Lie superalgebra. We list all Lie superalgebras from higher to lower orbit dimension. 
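The entries of Table \ref{Table1} below can be cross-checked numerically. As an illustration (our own sketch, not part of the text), the following computes the $(i,j)$-invariant $c_{2,2}$ of $\mathcal{LS}_7$ for several choices of $x$ and $y$, building the matrices of $\ad x$ from the structure constants; the matrix helpers are ours:

```python
from itertools import product

# Basis: e1, e2 (even), f1, f2 (odd); products of LS_7:
# [e2,f1]=f1, [e2,f2]=-f2, [f1,f2]=e1.
parity = [0, 0, 1, 1]
table = {(1, 2): {2: 1}, (1, 3): {3: -1}, (2, 3): {0: 1}}

def bracket(i, j):
    if (i, j) in table:
        return dict(table[(i, j)])
    if (j, i) in table:
        s = -(-1) ** (parity[i] * parity[j])
        return {k: s * c for k, c in table[(j, i)].items()}
    return {}

def ad(x):
    """Matrix of ad(x) = [x, .] in the chosen basis."""
    m = [[0] * 4 for _ in range(4)]
    for j in range(4):
        for i in range(4):
            if x[i]:
                for k, c in bracket(i, j).items():
                    m[k][j] += x[i] * c
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def matpow(m, n):
    r = [[int(i == j) for j in range(4)] for i in range(4)]
    for _ in range(n):
        r = matmul(r, m)
    return r

def tr(m):
    return sum(m[i][i] for i in range(4))

def c_ratio(x, y, i, j):
    """Numerator and denominator of c_{i,j} for the chosen x, y."""
    ax, ay = matpow(ad(x), i), matpow(ad(y), j)
    return tr(ax) * tr(ay), tr(matmul(ax, ay))

# c_{2,2} = 2 for LS_7, independently of the admissible choice of x, y.
samples = [([0, 1, 0, 0], [0, 1, 0, 0]),
           ([0, 1, 1, 0], [0, 1, 0, 1]),
           ([1, 2, 1, 1], [0, 3, 1, 2])]
for x, y in samples:
    num, den = c_ratio(x, y, 2, 2)
    assert den != 0 and num == 2 * den, (x, y, num, den)
print("c_{2,2} = 2 for LS_7 on all sampled pairs")
```

The same computation, with the structure constants swapped in, applies to any other entry of the table for which the invariant exists.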
{\scriptsize \begin{longtable}{|c|c|c|c|c|} \caption[]{Invariants} \label{Table1}\\ \hline $\mathfrak g$ & $\dim\mathcal{O}(\mathfrak g)$ & $\rk\Gamma_\mathfrak g$ & $(\dim(\mathfrak g^1)_0,\dim(\mathfrak g^1)_1)$ & $c_{i,j}(\mathfrak g)$ \\ \hline \endfirsthead \caption[]{(continued)}\\ \hline $\mathfrak g$ & $\dim\mathcal{O}(\mathfrak g)$ & $\rk\Gamma_\mathfrak g$ & $(\dim(\mathfrak g^1)_0,\dim(\mathfrak g^1)_1)$ & $c_{i,j}(\mathfrak g)$ \\ \hline \endhead {$\mathcal{LS}_1$} & {6} & {2} & {(2,0)} & {$\not\exists$} \\ [1.5mm] \hline {$\mathcal{LS}_5$} & {6} & {0} & {(0,2)} & {$\not\exists$} \\ [1.5mm] \hline $\mathcal{LS}_{19}$ & 6 & 2 & (2,1) & $c_{2i,2j}=2$ \\ [1.5mm] \hline {$\mathcal{LS}_4$} & {5} & {2} & {(2,0)} & {$\not\exists$}\\ [1.5mm] \hline {$\mathcal{LS}_7$} & {5} & {1} & {(1,2)} & {$c_{2i,2j}=2$} \\ [1.5mm] \hline {$\mathcal{LS}_8$} & {5} & {1} & {(1,1)} & {$c_{i,j}=1$} \\ [1.5mm] \hline {$\mathcal{LS}_9$} & {5} & {0} & {(0,2)} & {$c_{i,j}=2$} \\ [1.5mm] \hline $\mathcal{LS}_{14}^{\alpha}$ & 5 & 1 & $(1,2-\delta_{0,\alpha}-\delta_{1,\alpha})$ & $c_{i,j}=\frac{\left((-1)^i+\alpha^i+\left(-\alpha-1\right)^i\right)\left((-1)^j+\alpha^j+\left(-\alpha-1\right)^j\right)}{(-1)^{i+j}+\alpha^{i+j}+\left(-\alpha-1\right)^{i+j}}$ \\ [1.5mm] \hline $\mathcal{LS}_{15}^{\alpha\neq-\frac{1}{2}}$ & 5 & 1 & $(1,2-\delta_{0,\alpha})$ & $c_{i,j}=\frac{\left((-1)^i+\alpha^i+\left(-\frac{1}{2}\right)^i\right)\left((-1)^j+\alpha^j+\left(-\frac{1}{2}\right)^j\right)}{(-1)^{i+j}+\alpha^{i+j}+\left(-\frac{1}{2}\right)^{i+j}}$ \\ [1.5mm] \hline $\mathcal{LS}_{17}$ & 5 & 1 & (1,2) & $c_{i,j}=\frac{\left((-1)^i+2\left(-\frac{1}{2}\right)^i\right)\left((-1)^j+2\left(-\frac{1}{2}\right)^j\right)}{(-1)^{i+j}+2\left(-\frac{1}{2}\right)^{i+j}}$ \\ [1.5mm] \hline $\mathcal{LS}_{18}^{\alpha}$ & 5 & 0 & (1,$2-\delta_{\alpha,-1}$) & $c_{i,j}=\frac{\left((-1)^i+\alpha^i+(1+\alpha)^i\right)\left((-1)^j+\alpha^j+(1+\alpha)^j\right)}{(-1)^{i+j}+\alpha^{i+j}+(1+\alpha)^{i+j}}$ \\ [1.5mm] 
\hline {$\mathcal{LS}_2$} & {4} & {1} & {(1,0)} & {$\not\exists$} \\ [1.5mm] \hline {$\mathcal{LS}_6^{\alpha\neq 1}$} & {4} & {0} & {$(0,2-\delta_{0,\alpha})$} & {$c_{(1+\delta_{-1,\alpha})i,(1+\delta_{-1,\alpha})j}=\frac{\left(1+\alpha^i\right)\left(1+\alpha^j\right)}{1+\alpha^{i+j}}$} \\ [1.5mm] \hline {$\mathcal{LS}_{10}$} & {4} & {0} & {(0,2)} & {$c_{i,j}=2$} \\ [1.5mm] \hline {$\mathcal{LS}_{12}$} & {4} & {1} & {(1,1)} & {$\not\exists$} \\ [1.5mm] \hline $\mathcal{LS}_{13}^{\alpha\neq\beta}$ & 4 & 0 & $(1,2-\delta_{0,\alpha}-\delta_{0,\beta})$ & $c_{i,j}=\frac{((-1)^i+\alpha^i+\beta^i)((-1)^j+\alpha^j+\beta^j)}{(-1)^{i+j}+\alpha^{i+j}+\beta^{i+j}}$ \\ [1.5mm] \hline $\mathcal {LS}_{15}^{-\frac{1}{2}}$ & 4 & 1 & (1,2) & $c_{i,j}=\frac{\left((-1)^i+2\left(-\frac{1}{2}\right)^i\right)\left((-1)^j+2\left(-\frac{1}{2}\right)^j\right)}{(-1)^{i+j}+2\left(-\frac{1}{2}\right)^{i+j}}$ \\ [1.5mm] \hline $\mathcal{LS}_{16}^{\alpha}$ & 4 & 0 & $(1,2-\delta_{0,\alpha})$& $c_{i,j}=\frac{((-1)^i+2\alpha^i)((-1)^j+2\alpha^j)}{(-1)^{i+j}+2\alpha^{i+j}}$ \\ [1.5mm] \hline {$\mathcal{LS}_3$} & {3} & {1} & {(1,0)} & {$\not\exists$} \\ [1.5mm] \hline {$\mathcal{LS}_{11}$} & {3} & {0} & {(0,1)} & {$\not\exists$} \\ [1.5mm] \hline {$\mathcal{LS}_6^1$} & {2} & {0} & {(0,2)} & {$c_{i,j}=2$} \\ [1.5mm] \hline $\mathcal{LS}_{13}^{\alpha,\alpha}$ & 2 & 0 & $(1,2-2\delta_{0,\alpha})$ & $c_{i,j}=\frac{((-1)^i+2\alpha^i)((-1)^j+2\alpha^j)}{(-1)^{i+j}+2\alpha^{i+j}}$ \\ [1.5mm] \hline {$\mathcal{LS}_0$} & {0} & {0} & {(0,0)} & {$\not\exists$}\\ [1.5mm] \hline \end{longtable}} \normalsize \subsubsection{Non-degenerations} Most of the non-degeneration arguments can be seen from Table \ref{Table1} and by using Lemmas \ref{lem:deg_alg}, \ref{lem:invariants} and \ref{lemma:ij-inv}. We list here the non-degenerations obtained by comparing $(\alpha,\beta,\gamma)$-derivations or by using Lemma \ref{lem:abel}.
In every case, $\mathfrak g$ represents the Lie superalgebra on the left of the arrow, and $\mathfrak h$ represents the Lie superalgebra on the right of the arrow. {\scriptsize \begin{longtable}{|l|l|} \caption[]{Non-degenerations} \label{Table2}\\ \hline Non-degeneration $\mathfrak g\to\mathfrak h$ & Reason \\ \hline \endfirsthead $\mathcal{LS}_{13}^{1,\frac{1}{\gamma}}\not\rightarrow \mathcal{LS}_{13}^{\gamma,\gamma}$, $\gamma\neq -1$ & $\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak h)$ \\ $\mathcal{LS}_{13}^{\alpha,\beta}\not\rightarrow \mathcal{LS}_{6}^1$ & $\dim\mathfrak{D}\left(\frac{1}{\alpha},1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(\frac{1}{\alpha},1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{13}^{1,2}\not\rightarrow \mathcal{LS}_{13}^{\frac{1}{2},\frac{1}{2}}$ & $\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{13}^{1,-2}\not\rightarrow \mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{13}^{1,-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{13}^{-2,-2}$ & $\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{13}^{\alpha,-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{6}^1$ & $\dim\mathfrak{D}\left(-2,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(-2,1,-1\right)_1(\mathfrak h)$\\ \hline $\mathcal{LS}_9\not\rightarrow \mathcal{LS}_6^{-1}$ & $\dim\mathfrak{D}(1,1,0)_1(\mathfrak g)=1>0=\dim\mathfrak{D}(1,1,0)_1(\mathfrak h)$\\ \hline $\mathcal{LS}_{14}^{\alpha}\not\rightarrow \mathcal{LS}_{13}^{\beta,\beta+1},\ \mathcal{LS}_{13}^{\beta,-\frac{1}{2}},\ \mathcal{LS}_6^1$ & $\mathcal{F}(\mathfrak g)\simeq\mathcal{LS}_{13}^{\alpha,-(\alpha+1)}\not\rightarrow \mathfrak h=\mathcal{F}(\mathfrak h)$\\ $\mathcal{LS}_{14}^1\not\rightarrow 
\mathcal{LS}_{16}^{-\frac{1}{2}},\mathcal {LS}_{15}^{-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(-\frac{1}{2},1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(-\frac{1}{2},1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{14}^0\not\rightarrow \mathcal{LS}_6^{-1}$ & $\dim\mathfrak{D}\left(0,1,-1\right)_1(\mathfrak g)=8>2=\dim\mathfrak{D}\left(0,1,-1\right)_1(\mathfrak h)$\\ $\mathcal {LS}_{14}^{-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{13}^{\alpha,\alpha+1}$ & $\mathcal{F}(\mathfrak g)\simeq\mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}\not\rightarrow \mathfrak h=\mathcal{F}(\mathfrak h)$\\ $\mathcal {LS}_{14}^{-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{13}^{\alpha,-(\alpha+1)}$ & $\mathcal{F}\left(\mathfrak g\right)\simeq\mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}\not\rightarrow \mathfrak h=\mathcal{F}(\mathfrak h)$\\ $\mathcal {LS}_{14}^{-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{16}^{-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(-2,1,-1\right)_1(\mathfrak g)=2>\dim\mathfrak{D}\left(-2,1,-1\right)_1(\mathfrak h)$\\ $\mathcal {LS}_{14}^{-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{11},\mathcal{LS}_{12}$ & $\mathcal{F}(\mathfrak g)\simeq \mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{11}=\mathcal{F}(\mathfrak h)$\\ \hline $\mathcal {LS}_{15}^{-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{11}$ & $\mathcal{F}(\mathfrak g)\simeq \mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{11}=\mathcal{F}(\mathcal{LS}_{11})$\\ $\mathcal{LS}_{15}^{\alpha}\not\rightarrow \mathcal{LS}_{2}$ & $\ab(\mathfrak g)\simeq\mathcal{LS}_{3}\not\rightarrow \mathfrak h=\ab(\mathfrak h)$\\ $\mathcal{LS}_{15}^{\alpha}\not\rightarrow \mathcal{LS}_6^{\beta},\mathcal{LS}_{10},\ \mathcal{LS}_{13}^{\beta,\gamma},\ \mathcal{LS}_{16}^{\beta}$ & $\mathcal{F}(\mathfrak g)\simeq\mathcal{LS}_{13}^{\alpha,-\frac{1}{2}}\not\rightarrow\mathfrak h=\mathcal{F}(\mathfrak h)$\\ $\mathcal{LS}_{15}^{\alpha}\not\rightarrow \mathcal {LS}_{15}^{-\frac{1}{2}}$ & 
$\mathcal{F}(\mathfrak g)\simeq\mathcal{LS}_{13}^{\alpha,-\frac{1}{2}}\not\rightarrow \mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}\simeq\mathcal{F}(\mathfrak h)$\\ \hline $\mathcal{LS}_{17}\not\rightarrow \mathcal{LS}_{13}^{\alpha,\alpha+1},\ \mathcal{LS}_{13}^{\alpha,-(\alpha+1)}$ & $\mathcal{F}(\mathfrak g)\simeq\mathcal{LS}_{16}^{-\frac{1}{2}}\not\rightarrow \mathfrak h=\mathcal{F}(\mathfrak h)$\\ \hline $\mathcal{LS}_{18}^{-\frac{2}{3}}\not\rightarrow \mathcal{LS}_{13}^{-\frac{3}{2},-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(\frac{2}{3},1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(\frac{2}{3},1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{-2}\not\rightarrow \mathcal{LS}_{13}^{\frac{1}{2},-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(-1,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(-1,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{-\frac{1}{3}}\not\rightarrow \mathcal{LS}_{13}^{\frac{3}{2},-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(\frac{1}{3},1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(\frac{1}{3},1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{-3}\not\rightarrow \mathcal{LS}_{13}^{\frac{3}{2},-\frac{1}{2}}$ & $\dim\mathfrak{D}\left(3,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(3,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^1\not\rightarrow \mathcal{LS}_{16}^{\frac{1}{2}},\mathcal{LS}_{13}^{\frac{1}{2},\frac{1}{2}}$ & $\dim\mathfrak{D}\left(\frac{1}{2},1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(\frac{1}{2},1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{\frac{\gamma}{1-\gamma}}\not\rightarrow \mathcal{LS}_{13}^{\gamma,1-\gamma}$, $\gamma\neq\frac{1\pm\sqrt{-3}}{2},0,2$ & $\dim\mathfrak{D}\left(1-\gamma,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(1-\gamma,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{\frac{\gamma}{1-\gamma}}\not\rightarrow \mathcal{LS}_{13}^{\gamma,1-\gamma}$, $\gamma\in\left\{\frac{1\pm\sqrt{-3}}{2},2\right\}$ & $\dim\mathfrak{D}\left(\frac{\gamma}{\gamma-1},1,0\right)_1(\mathfrak 
g)=1>0=\dim\mathfrak{D}\left(\frac{\gamma}{\gamma-1},1,0\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{\frac{1-\gamma}{\gamma}}\not\rightarrow \mathcal{LS}_{13}^{1-\gamma,\gamma}$,\ $\gamma\neq\frac{1\pm\sqrt{-3}}{2},\pm 1$ & $\dim\mathfrak{D}\left(\gamma,1,-1\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(\gamma,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{\frac{1-\gamma}{\gamma}}\not\rightarrow \mathcal{LS}_{13}^{1-\gamma,\gamma}$,\ $\gamma\in\left\{\frac{1\pm\sqrt{-3}}{2},-1\right\}$ & $\dim\mathfrak{D}\left(\frac{\gamma-1}{\gamma},1,0\right)_1(\mathfrak g)=1>0=\dim\mathfrak{D}\left(\frac{\gamma-1}{\gamma},1,0\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^0\not\rightarrow \mathcal{LS}_6^1,\ \mathcal{LS}_{10}$ & $\dim\mathfrak{D}\left(0,1,-1\right)_1(\mathfrak g)=8>2=\dim\mathfrak{D}\left(0,1,-1\right)_1(\mathfrak h)$\\ $\mathcal{LS}_{18}^{\alpha}\not\rightarrow \mathcal{LS}_6^{\beta},\ \beta\neq 0,1$ & $\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak g)=2>0=\dim\mathfrak{D}\left(1,1,-1\right)_1(\mathfrak h)$\\ \hline \end{longtable}} \subsection{Degenerations} Next we summarize the essential degenerations $\mathfrak g\rightarrow\mathfrak h$ in the variety $\mathcal{LS}^{(2,2)}$. In every case, we provide a parameterized change of basis $\{x_1^t,x_2^t,y_1^t,y_2^t\}$ of $\mathfrak g$ such that for $t=0$ we obtain the Lie products of $\mathfrak h$. For example, consider the first degeneration in Table \ref{Table3}: $\mathcal{LS}_{19}\to\mathcal{LS}_4$. The parameterized basis products of $\mathcal{LS}_{19}$ are: $$[x_1^t,x_2^t]=2tx_1^t,\quad[x_1^t,y_2^t]=ty_1^t,\quad[x_2^t,y_1^t]=-2ty_1^t,\quad[y_1^t,y_2^t]=x_1^t,\quad[y_2^t,y_2^t]=x_2^t.$$ When $t=0$ we obtain the products of $\mathcal{LS}_4$.
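Indeed, the structure constants relative to the basis $\{x_1^t,x_2^t,y_1^t,y_2^t\}$ depend polynomially on $t$, and letting $t\to 0$ every product carrying a factor of $t$ vanishes, so in the limit the only nonzero products are
$$[y_1,y_2]=x_1,\qquad [y_2,y_2]=x_2,$$
which are the products of $\mathcal{LS}_4$.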
{\scriptsize \begin{longtable}{|l|llll|} \caption[]{Degenerations} \label{Table3}\\ \hline $\mathfrak g\to\mathfrak h$ & \multicolumn{4}{|c|}{Parameterized bases} \\ \hline \endfirsthead \caption[]{(continued)}\\ \hline $\mathfrak g\to\mathfrak h$ & \multicolumn{4}{|c|}{Parameterized bases} \\ \hline \endhead $\mathcal{LS}_{19}\rightarrow \mathcal{LS}_4$ & $x_1^t=te_1$, & $x_2^t=2te_2$, & $y_1^t=\sqrt{t}f_1$, & $y_2^t=\sqrt{t}f_2$\\ \hline $\mathcal{LS}_{19}\rightarrow \mathcal{LS}_{12}$ & $x_1^t=te_2$, & $x_2^t=te_1$, & $y_1^t=\sqrt{\frac{t}{2}}f_1$, & $y_2^t=\sqrt{\frac{t}{2}}f_2$\\ \hline $\mathcal{LS}_{19}\rightarrow \mathcal{LS}_{18}^{-1}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=tf_1$, & $y_2^t=tf_2$\\ \hline $\mathcal{LS}_{19}\rightarrow \mathcal{LS}_{14}^0$ & $x_1^t=te_1$, & $x_2^t=e_2$, & $y_1^t=f_2$, & $y_2^t=tf_1$\\ \hline $\mathcal{LS}_1\rightarrow \mathcal{LS}_4$ & $x_1^t=\sqrt{t}\left(-e_1+e_2\right)$, & $x_2^t=e_1+e_2$, & $y_1^t=\sqrt{t}\left(-f_1+f_2\right)$, & $y_2^t=f_1+f_2$\\ \hline $\mathcal{LS}_5\rightarrow \mathcal{LS}_{9}$ & $x_1^t=e_1+e_2$, & $x_2^t=te_1$, & $y_1^t=tf_1$, & $y_2^t=f_1+f_2$\\ \hline $\mathcal{LS}_5\rightarrow \mathcal{LS}_6^{\alpha}$ & $x_1^t=te_1$, & $x_2^t=e_1+\alpha e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_4\rightarrow \mathcal{LS}_2$ & $x_1^t=e_1$, & $x_2^t=t(t-1)e_1+te_2$, & $y_1^t=f_1$, & $y_2^t=\frac{1}{2t}f_1+tf_2$\\ \hline $\mathcal{LS}_7\rightarrow \mathcal{LS}_2$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=-\frac{i}{2}f_1+if_2$, & $y_2^t=\frac{1}{2}f_1+f_2$\\ \hline $\mathcal{LS}_7\rightarrow \mathcal{LS}_6^{-1}$ & $x_1^t=\frac{1}{t}e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{7}\rightarrow \mathcal{LS}_{12}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=\frac{1}{2t}f_1+tf_2$\\ \hline $\mathcal{LS}_{8}\rightarrow \mathcal{LS}_2$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1+f_2$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{8}\rightarrow \mathcal{LS}_6^0$ & $x_1^t=e_1$, &
$x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=tf_2$\\ \hline $\mathcal{LS}_{8}\rightarrow \mathcal{LS}_{12}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=\frac{1}{t}f_1+f_2$\\ \hline $\mathcal{LS}_{9}\rightarrow \mathcal{LS}_{10}$ & $x_1^t=te_1$, & $x_2^t=e_1+e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{14}^{\alpha}\rightarrow \mathcal{LS}_{13}^{\alpha,-(\alpha+1)}$ & $x_1^t=\frac{1}{t}e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{14}^{\alpha}\rightarrow \mathcal{LS}_2$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=-if_1+\frac{i}{2}f_2$, & $y_2^t=f_1+\frac{1}{2}f_2$\\ \hline $\mathcal{LS}_{15}^{\alpha\neq-\frac{1}{2}}\rightarrow \mathcal{LS}_{12}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=\frac{2}{t(2\alpha+1)}f_1+f_2$\\ \hline $\mathcal{LS}_{15}^{\alpha}\rightarrow \mathcal{LS}_{13}^{\alpha,-\frac{1}{2}}$ & $x_1^t=\frac{1}{t}e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal {LS}_{14}^{-\frac{1}{2}}\rightarrow \mathcal {LS}_{15}^{-\frac{1}{2}}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=tf_2$\\ \hline $\mathcal{LS}_{17}\rightarrow \mathcal{LS}_{10}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1+f_2$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{17}\rightarrow \mathcal{LS}_{12}$ & $x_1^t=t^2e_1$, & $x_2^t=te_2$, & $y_1^t=t^2f_1$, & $y_2^t=tf_2$\\ \hline $\mathcal{LS}_{17}\rightarrow \mathcal{LS}_{16}^{-\frac{1}{2}}$ &$x_1^t=\frac{1}{t}e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{17}\rightarrow \mathcal {LS}_{15}^{-\frac{1}{2}}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=f_2$, & $y_2^t=\frac{1}{t}f_1$\\ \hline $\mathcal{LS}_{18}^{\alpha}\rightarrow \mathcal{LS}_{13}^{\alpha,\alpha+1}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=\frac{1}{t}f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_2\rightarrow \mathcal{LS}_3$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=tf_2$\\ \hline $\mathcal{LS}_{6}^{\alpha\neq 1}\rightarrow \mathcal{LS}_{11}$ & $x_1^t=\frac{1}{t}e_1$, & 
$x_2^t=-te_2$, & $y_1^t=tf_2$, & $y_2^t=f_1-\frac{1}{\alpha-1}f_2$\\ \hline $\mathcal{LS}_{10}\rightarrow \mathcal{LS}_{11}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=\frac{1}{t}f_2$\\ \hline $\mathcal{LS}_{10}\rightarrow \mathcal{LS}_{6}^1$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=\frac{1}{t}f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{12}\rightarrow \mathcal{LS}_{11}$ & $x_1^t=te_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal{LS}_{12}\rightarrow \mathcal{LS}_{3}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_2$, & $y_2^t=f_1$\\ \hline $\mathcal{LS}_{13}^{\alpha\neq\beta}\rightarrow \mathcal{LS}_{11}$ & $x_1^t=\frac{1}{t}e_1$, & $x_2^t=-te_2$, & $y_1^t=tf_2$, & $y_2^t=f_1+\frac{1}{\alpha-\beta}f_2$\\ \hline $\mathcal{LS}_{16}^{\alpha}\rightarrow \mathcal{LS}_{11}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=\frac{1}{t}f_2$\\ \hline $\mathcal{LS}_{16}^{\alpha}\rightarrow \mathcal{LS}_{13}^{\alpha,\alpha}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=\frac{1}{t}f_1$, & $y_2^t=f_2$\\ \hline $\mathcal {LS}_{15}^{-\frac{1}{2}}\rightarrow \mathcal{LS}_3$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline $\mathcal {LS}_{15}^{-\frac{1}{2}}\rightarrow \mathcal{LS}_{13}^{-\frac{1}{2},-\frac{1}{2}}$ & $x_1^t=\frac{1}{t}e_1$, & $x_2^t=e_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ \hline \end{longtable}} \normalsize With all this, we can draw the Hasse diagram of essential degenerations for the variety $\mathcal{LS}^{(2,2)}$: \begin{landscape} {\begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=0.01cm,auto,node distance=1.5cm, thick,main node/.style={rectangle,draw,fill=white!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries }, blue node/.style={rectangle,draw, color=blue,fill=white!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries }, thick,rigid node/.style={rectangle,draw,fill=black!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries }, thick,bluerigid
node/.style={rectangle,draw,color=blue,fill=black!12,rounded corners=1.5ex,font=\sffamily \bf \bfseries }, style={draw,font=\sffamily \scriptsize \bfseries }] \node (71) {}; \node (61) [below of=71] {}; \node (51) [below of=61] {}; \node (41) [below of=51] {}; \node (31) [below of=41] {}; \node (21) [below of=31] {}; \node (11) [below of=21] {}; \node (01) [below of=11] {}; \node (62) [right of =61] { }; \node (63) [right of =62] { }; \node (64) [right of =63] { }; \node (65) [right of =64] {}; \node (66) [right of =65] { }; \node (67) [right of =66] { }; \node (68) [right of =67] { }; \node (69) [right of =68] { }; \node (610) [right of =69] { }; \node (611) [right of =610] { }; \node (612) [right of =611] { }; \node [rigid node] (19) [right of =62] {$\mathcal{LS}_{19}$}; \node [bluerigid node] (1) [right of =69] {$\mathcal{LS}_1$ }; \node [rigid node] (5) [right of =612] {$\mathcal{LS}_5$}; \node (52) [right of =51] { }; \node (53) [right of =52] { }; \node (54) [right of =53] { }; \node (55) [right of =54] {}; \node (56) [right of =55] { }; \node (57) [right of =56] { }; \node (58) [right of =57] { }; \node (59) [right of =58] { }; \node (510) [right of =59] { }; \node (511) [right of =510] { }; \node (512) [right of =511] { }; \node (513) [right of =512] { }; \node (514) [right of =513] { }; \node [main node] (18a) [left of =51] {$\mathcal{LS}_{18}^{\alpha}$}; \node [main node] (17) [right of =51] {$\mathcal{LS}_{17}$}; \node [main node] (14a) [right of =53] {$\mathcal{LS}_{14}^{\alpha}$}; \node [main node] (15a) [right of =55] {$\mathcal{LS}_{15}^{\alpha\neq-\frac{1}{2}}$}; \node [main node] (7) [right of =57] {$\mathcal{LS}_7$}; \node [main node] (8) [right of =59] {$\mathcal{LS}_8$}; \node [blue node] (4) [right of =511] {$\mathcal{LS}_4$}; \node [main node] (9) [right of =513] {$\mathcal{LS}_9$}; \node (42) [right of =41] { }; \node (43) [right of =42] { }; \node (44) [right of =43] { }; \node (45) [right of =44] {}; \node (46) [right of =45] { }; \node 
(47) [right of =46] { }; \node (48) [right of =47] { }; \node (49) [right of =48] { }; \node (410) [right of =49] { }; \node (411) [right of =410] { }; \node (412) [right of =411] { }; \node (413) [right of =412] { }; \node [main node] (1512) [left of =41] {$\mathcal{LS}_{15}^{-\frac{1}{2}}$}; \node [main node] (16a) [right of =41] {$\mathcal{LS}_{16}^{\alpha}$}; \node [main node] (10) [right of =43] {$\mathcal{LS}_{10}$}; \node [main node] (12) [right of =45] {$\mathcal{LS}_{12}$}; \node [main node] (13ab) [right of =47] {$\mathcal{LS}_{13}^{\alpha\neq\beta}$}; \node [blue node] (2) [right of =49] {$\mathcal{LS}_2$}; \node [main node] (6a) [right of =411] {$\mathcal{LS}_6^{\alpha\neq 1}$}; \node (32) [right of =31] { }; \node (33) [right of =32] { }; \node (34) [right of =33] { }; \node (35) [right of =34] { }; \node (36) [right of =35] { }; \node (37) [right of =36] { }; \node (38) [right of =37] { }; \node (39) [right of =38] { }; \node (310) [right of =39] { }; \node (311) [right of =310] { }; \node (312) [right of =311] { }; \node [blue node] (3) [right of =35] {$\mathcal{LS}_3$}; \node [blue node] (11) [right of =37] {$\mathcal{LS}_{11}$}; \node (22) [right of =21] { }; \node (23) [right of =22] { }; \node (24) [right of =23] { }; \node (25) [right of =24] {}; \node (26) [right of =25] { }; \node (27) [right of =26] { }; \node (28) [right of =27] { }; \node (29) [right of =28] { }; \node (210) [right of =29] { }; \node (211) [right of =210] { }; \node (212) [right of =211] { }; \node [main node] (61) [right of =24] {$\mathcal{LS}_6^1$}; \node [main node] (13aa) [right of =21] {$\mathcal{LS}_{13}^{\alpha,\alpha}$}; \node (02) [right of =01] { }; \node (03) [right of =02] { }; \node (04) [right of =03] { }; \node (05) [right of =04] {}; \node (06) [right of =05] { }; \node [blue node] (0) [right of =06] {$\mathcal{LS}_0$ }; \path[every node/.style={font=\sffamily\tiny}] (19) edge [bend right=0, color=black] node[above=-5, right=-30, fill=white]{$\alpha=-1$} 
(18a) (19) edge [bend right=30, color=black] node[above=0, right=-10, fill=white]{$\alpha=0$} (14a) (19) edge [bend right=-40, color=black] node{} (4) (19) edge [bend right=-20, color=black] node{} (12) (1) edge [bend right=0, color=blue] node{} (4) (5) edge [bend right=0, color=black] node{} (9) (5) edge [bend right=-10, color=black] node{} (6a) (18a) edge [bend right=-45, color=black] node[above=0, right=-10, fill=white]{$\beta=\alpha+1$} (13ab) (17) edge [bend right=0, color=black] node{} (10) (17) edge [bend right=0, color=black] node{} (12) (17) edge [bend right=0, color=black] node[above=2, right=-15, fill=white]{$\alpha=-\frac{1}{2}$} (16a) (17) edge [bend right=0, color=black] node{} (1512) (14a) edge [bend right=35, color=black] node{} (2) (14a) edge [bend right=-50, color=black] node[above=2, right=-70, fill=white]{$\beta=-(\alpha+1)$} (13ab) (14a) edge [bend right=10, color=black] node[above=-20, right=-30, fill=white]{$\alpha=-\frac{1}{2}$} (13aa) (4) edge [bend right=0, color=blue] node{} (2) (7) edge [bend right=0, color=black] node{} (2) (7) edge [bend right=-40, color=black] node[above=0, right=-70, fill=white]{$\alpha=-1$} (6a) (7) edge [bend right=-10, color=black] node{} (12) (8) edge [bend right=0, color=black] node{} (2) (8) edge [bend right=-10, color=black] node[above=7, right=-25, fill=white]{$\alpha=0$} (6a) (8) edge [bend right=0, color=black] node{} (12) (9) edge [bend right=40, color=black] node{} (10) (15a) edge [bend right=0, color=black] node{} (12) (15a) edge [bend right=25, color=black] node[above=12, right=-30, fill=white]{$\beta=-\frac{1}{2}$} (13ab) (10) edge [bend right=0, color=black] node{} (11) (10) edge [bend right=0, color=black] node{} (61) (12) edge [bend right=0, color=black] node{} (11) (12) edge [bend right=0, color=black] node{} (3) (13ab) edge [bend right=0, color=black] node{} (11) (2) edge [bend right=0, color=blue] node{} (3) (6a) edge [bend right=0, color=black] node{} (11) (61) edge [bend right=0, color=black] 
node{} (0) (1512) edge [bend right=0, color=black] node{} (3) (1512) edge [bend right=10, color=black] node[above=0, right=-15, fill=white]{$\alpha=-\frac{1}{2}$} (13aa) (16a) edge [bend right=0, color=black] node{} (11) (16a) edge [bend right=0, color=black] node{} (13aa) (3) edge [bend right=0, color=blue] node{} (0) (11) edge [bend right=0, color=blue] node{} (0) (61) edge [bend right=0, color=black] node{} (0) (13aa) edge [bend right=0, color=black] node{} (0); \end{tikzpicture} \end{center} } \end{landscape} \subsection{The irreducible components of $\mathcal{LS}^{(2,2)}$} In order to describe the irreducible components which are closures of single orbits, we must obtain the rigid Lie superalgebras in $\mathcal{LS}^{(2,2)}$. As in the case of Lie algebras, if the cohomology group satisfies $(H^2(\mathfrak g, \mathfrak g))_0=0$, then the Lie superalgebra $\mathfrak g$ is rigid. We then obtain: \begin{theorem}\label{thm:rigid} There are three rigid Lie superalgebras in the variety $\mathcal{LS}^{(2,2)}$: $\mathcal{LS}_{19}$, $\mathcal{LS}_1$ and $\mathcal{LS}_5$. Moreover, $\mathcal{LS}_1$ is a rigid nilpotent Lie superalgebra. \end{theorem} \begin{proof} The second cohomology groups are: \begin{itemize} \item $H^2(\mathcal{LS}_{19},\mathcal{LS}_{19})=\langle 0\rangle\oplus \langle e^1\wedge e^2\otimes f_1-e^1\wedge f^2\otimes e_1+e^2\wedge f^1\otimes e_1\rangle$. \item $H^2(\mathcal{LS}_1,\mathcal{LS}_1)=\langle 0\rangle\oplus \langle 2e^1\wedge f^2\otimes e_1+f^1\wedge f^2\otimes f_1,2e^2\wedge f^1\otimes e_2+f^1\wedge f^2\otimes f_2\rangle$. \item $H^2(\mathcal{LS}_5,\mathcal{LS}_5)=0$. \end{itemize} In all three cases the even component vanishes, that is, $(H^2(\mathfrak g,\mathfrak g))_0=0$, whence the rigidity. \end{proof} The degenerations in the next table will help us to obtain the irreducible components which are closures of unions of infinite families of Lie superalgebras.
Given an infinite family $\mathcal{L}=\{\mathfrak g(\alpha)\}_{\alpha}$ of Lie superalgebras, we consider $\alpha=\alpha(t)$ as parameterized by $t$ and construct a degeneration $\mathfrak g(\alpha(t))\to\mathfrak h$ as usual. {\scriptsize \begin{longtable}{|l|llll|} \caption[]{Degenerations} \label{Table4}\\ \hline $\mathfrak g(\alpha)\to\mathfrak h$ & \multicolumn{4}{|c|}{Parameterized bases} \\ \hline \endfirsthead \caption[]{(continued)}\\ \hline $\mathfrak g\to\mathfrak h$ & \multicolumn{4}{|c|}{Parameterized bases} \\ \hline \endhead $\mathcal{LS}_{14}^{\alpha(t)}\rightarrow \mathcal{LS}_{17}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=-i\sqrt{t}f_1$, & $y_2^t=\frac{1}{2}f_1+f_2$\\ $\alpha(t)=-\frac{1}{2}-i\sqrt{t}$ &&&& \\ \hline $\mathcal{LS}_{14}^{\alpha(t)}\rightarrow \mathcal{LS}_{7}$ & $x_1^t=e_1$, & $x_2^t=-te_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ $\alpha(t)=-\frac{1}{t}$ &&&& \\ \hline $\mathcal{LS}_{15}^{\alpha(t)}\rightarrow \mathcal{LS}_{8}$ & $x_1^t=e_1$, & $x_2^t=-2te_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ $\alpha(t)=-\frac{1}{2t}$ &&&& \\ \hline $\mathcal{LS}_{18}^{\alpha(t)}\rightarrow \mathcal{LS}_{9}$ & $x_1^t=-te_2$, & $x_2^t=e_1$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ $\alpha(t)=-\left(\frac{1+t}{t}\right)$ &&&& \\ \hline $\mathcal{LS}_{13}^{\alpha(t),\beta(t)}\rightarrow \mathcal{LS}_{6}^{\gamma}$ & $x_1^t=e_1$, & $x_2^t=te_2$, & $y_1^t=f_1$, & $y_2^t=f_2$\\ $\alpha(t)=\frac{1}{t},\ \beta(t)=\frac{\gamma}{t}$ &&&& \\ \hline $\mathcal{LS}_{13}^{\alpha(t),\beta(t)}\rightarrow \mathcal{LS}_{16}^{\gamma}$ & $x_1^t=e_1$, & $x_2^t=e_2$, & $y_1^t=2\sqrt{t}f_1$, & $y_2^t=f_1+f_2$\\ $\alpha(t)=\frac{\gamma+\sqrt{t}}{1+t},\ \beta(t)=\frac{\gamma-\sqrt{t}}{1+t}$ &&&& \\ \hline \end{longtable}} \normalsize With all this, we can state the main theorem of this section: \begin{theorem} The irreducible components of the variety $\mathcal{LS}^{(2,2)}$ are: \begin{itemize} \item $\mathcal{C}_1=\overline{\mathcal{O}(\mathcal{LS}_{19})}$, \item
$\mathcal{C}_2=\overline{\mathcal{O}(\mathcal{LS}_{1})}$, \item $\mathcal{C}_3=\overline{\mathcal{O}(\mathcal{LS}_{5})}$, \item $\mathcal{C}_4=\displaystyle\overline{\bigcup_{\alpha}\mathcal{O}(\mathcal{LS}_{14}^{\alpha})}$, \item $\mathcal{C}_5=\displaystyle\overline{\bigcup_{\alpha}\mathcal{O}(\mathcal{LS}_{15}^{\alpha})}$, \item $\mathcal{C}_6=\displaystyle\overline{\bigcup_{\alpha}\mathcal{O}(\mathcal{LS}_{18}^{\alpha})}$, \item $\mathcal{C}_7=\displaystyle\overline{\bigcup_{\alpha,\beta}\mathcal{O}(\mathcal{LS}_{13}^{\alpha,\beta})}$. \end{itemize} \label{2-2-irred-comp} \end{theorem} \section*{Acknowledgements} The first author was supported by grant FOMIX-CONACYT YUC-2013-C14-221183 and Becas Iberoam\'erica de J\'ovenes Profesores e Investigadores, Santander Universidades. The second author was supported by grants FOMIX-CONACYT YUC-2013-C14-221183 and 222870. The first author expresses her gratitude to Universidad Aut\'onoma de Yucat\'an and Centro de Investigaci\'on en Matem\'aticas - Unidad M\'erida for their hospitality during her research stays in both centers. Both authors thank Ivan Kaygorodov for useful comments about the presentation of this paper.
\section{Introduction} The symmetry rank of a Riemannian manifold is the rank of its isometry group, that is, the dimension of a maximal torus in $\mathrm{Iso}(M)$. It was proved by K.~Grove and C.~Searle in \cite{grove} that a closed Riemannian manifold with positive sectional curvature and maximal symmetry rank is diffeomorphic to a sphere, a real or complex projective space, or a lens space. A generalization of this result for Alexandrov spaces was obtained recently in \cite{harvey}. Here we show the following transverse analog of these results for Killing foliations, that is, complete Riemannian foliations with globally constant Molino sheaf (see Section \ref{subsection: Molino Sheaf and Killing Foliations}). This class of foliations includes Riemannian foliations on complete simply-connected manifolds and foliations induced by isometric actions on complete manifolds \cite[Lemme III]{molino3}. \begin{thmx}\label{theorem: harvey-searle para folheações} Let $\f$ be a $q$-codimensional, transversely orientable Killing foliation of a compact manifold $M$. If the transverse sectional curvature $\sec_{\f}$ of $\f$ is positive, then $$\dim(\overline{\f})-\dim(\f)\leq\left\lfloor\frac{\codim(\f)+1}{2}\right\rfloor,$$ where $\overline{\f}$ denotes the singular Riemannian foliation of $M$ by the closures of the leaves of $\f$. Moreover, if equality holds above, there is a closed Riemannian foliation $\g$ of $M$ arbitrarily close to $\f$ and such that $M/\g$ is homeomorphic to either \begin{enumerate}[(i)] \item a quotient of a sphere $\mathbb{S}^q/\Lambda$, where $\Lambda$ is a finite subgroup of the centralizer of the maximal torus in $\mathrm{O}(q+1)$, or \item a quotient of a weighted complex projective space $|\mathbb{CP}^{q/2}[\lambda]|/\Lambda$, where $\Lambda$ is a finite subgroup of the torus acting linearly on $\mathbb{CP}^{q/2}[\lambda]$ (this case occurs only when $q$ is even).
\end{enumerate} \end{thmx} Recall that a generalized Seifert fibration is a foliation $\g$ whose leaves are all closed submanifolds. In this case we also say that $\g$ is a \textit{closed} foliation. The number $d=\dim(\overline{\f})-\dim(\f)$, where $\dim(\overline{\f})$ is the maximal dimension of the leaf closures of $\f$, is called the \textit{defect} of $\f$ and equals the dimension of the structural algebra $\mathfrak{g}_\f$ of $\f$ (see Section \ref{subsection: Molino Theory}). When $\f$ is a Killing foliation, this number is also a lower bound for the symmetry rank of $\f$, that is, the maximal number of linearly independent, commuting transverse Killing vector fields that $\f$ admits. Theorem \ref{theorem: harvey-searle para folheações} has a direct application for Riemannian foliations on positively curved manifolds (see Corollary \ref{Corollary to H-S for foliations}). We prove Theorem \ref{theorem: harvey-searle para folheações} by reducing it to an application of the orbifold version of the aforementioned Grove--Searle classification \cite[Corollary E]{harvey}, after deforming $\f$ into a closed Riemannian foliation $\g$ while maintaining the relevant transverse geometric properties, using the result below. In what follows $\mathcal{T}^*(\f)$ denotes the algebra of $\f$-basic tensor fields (see Section \ref{section: basic cohomology}).
There is a homotopic deformation $\f_t$ of $\f=\f_0$ into a closed foliation $\g=\f_1$ such that \begin{enumerate}[(i)] \item the deformation occurs within the closures of the leaves of $\f$ and $\g$ can be chosen arbitrarily close to $\f$, \item for each $t$ there is an injection $\iota_t:\mathcal{T}^*(\f)\to\mathcal{T}^*(\f_t)$ such that $\iota_t(\xi)$ is a smooth time-dependent tensor field on $M$, for any $\xi\in\mathcal{T}^*(\f)$, and $\iota_t(\mathrm{g}^T)$ is a transverse metric for $\f_t$, \item the quotient orbifold $M/\!/\g$ admits a smooth effective action of a torus $\mathbb{T}^d$, where $d$ is the defect of $\f$, which is isometric with respect to the metric induced by $\iota_t(\mathrm{g}^T)$ and such that $M/\overline{\f}\cong(M/\g)/\mathbb{T}^d$. \end{enumerate} In particular, $\symrank(\g)\geq d$ and, if $\sec_{\f}> 0$ and $\g$ is sufficiently close to $\f$, then $\sec_{\g}> 0$. \end{thmx} The fact that $\iota_t(\mathrm{g}^T)$ is a transverse metric for $\f_t$ is actually a particular case of a stronger property of the deformation. In fact, $\iota_t(\xi)$ essentially consists of a deformation of the kernel of $\xi$, so for any transverse geometric structure defined by a tensor field $\sigma\in\mathcal{T}^*(\f)$ (such as a transverse Riemannian metric, a basic orientation form, a transverse symplectic structure etc.), $\iota_t(\sigma)$ will be a transverse geometric structure of the same kind for $\f_t$. In particular, $\iota_t$ restricts to an injection $\Omega_B^*(\f)\to\Omega_B^*(\f_t)$ between the algebras of basic differential forms. It is also worth mentioning that, if $\pi:M\to M/\!/\g$ denotes the canonical projection, $\pi_*\circ\iota$ is an isomorphism between $\mathcal{T}^*(\f)$ and the subalgebra $\mathcal{T}^*(M/\!/\g)^{\mathbb{T}^d}$ of $\mathbb{T}^d$-invariant tensor fields on $M/\!/\g$ that, similarly, relates $\f$-transverse geometric structures to $\mathbb{T}^d$-invariant geometric structures on $M/\!/\g$.
The proof of Theorem \ref{theorem: Haefliger deformation} is based on a result by A.~Haefliger and E.~Salem \cite{haefliger2} that expresses $\f$ as the pullback of a homogeneous foliation of an orbifold. As Theorem \ref{theorem: harvey-searle para folheações} exemplifies, with this technique we are able to reduce results about the transverse geometry of Killing foliations to theorems about Riemannian orbifolds. Another application is the following ``closed leaf'' result, which complements a theorem by G.~Oshikiri \cite[Proposition and Theorem 2]{oshikiri} by allowing noncompact ambient manifolds. We prove it by reducing it to an application of the orbifold version of the Synge--Weinstein Theorem. \begin{thmx}\label{theorem: Berger for foliations} Let $\f$ be an even-codimensional, complete Riemannian foliation of a manifold $M$ with $|\pi_1(M)|<\infty$. If $\sec_{\f}\geq c>0$, then $\f$ possesses a closed leaf. \end{thmx} For any smooth foliation $(M,\f)$ one can define the cohomology of the differential subcomplex $\Omega_B^*(\f)\subset \Omega^*(M)$, the basic cohomology of $\f$ (see Section \ref{section: basic cohomology}). We study the basic Euler characteristic $\chi_B(\f)$, the alternating sum of the dimensions of the basic cohomology groups, when $\f$ is a Killing foliation. We show that in this case this invariant localizes to the set $\Sigma^{\dim\f}$ consisting of the union of the closed leaves of $\f$. \begin{thmx}\label{theorem: basic euler char localizes to closed leaves} If $\f$ is a Killing foliation of a compact manifold $M$, then $\chi_B(\f)=\chi(\Sigma^{\dim\f}/\f)$.
\end{thmx} Equivalently, using the language of transverse actions (see Section \ref{subsection: Molino Sheaf and Killing Foliations}), $\f|_{\Sigma^{\dim\f}}$ coincides with $\f^{\mathfrak{g}_\f}$, the fixed-point set of the action of the structural algebra $\mathfrak{g}_\f$ on $\f$ (see Section \ref{subsection: Molino Theory}), so the formula in Theorem \ref{theorem: basic euler char localizes to closed leaves} becomes $\chi_B(\f)=\chi_B(\f^{\mathfrak{g}_\f})$, in analogy with the localization of the classical Euler characteristic to the fixed-point set of a torus action. Theorem \ref{theorem: basic euler char localizes to closed leaves} enables us to show that the basic Euler characteristic is preserved by the deformations devised in Theorem \ref{theorem: Haefliger deformation} (see Theorem \ref{theorem: basic euler char is preserved by deformations}). Using this fact we obtain the following transverse analog of the partial answer to Hopf's conjecture by T.~Püttmann and C.~Searle for manifolds with large symmetry rank \cite[Theorem 2]{puttmann}, by reducing it to an orbifold version of the same result, which we also prove (see Corollary \ref{corollary: Putmann for orbifolds}). \begin{thmx}\label{theorem: puttmann for foliations} Let $\f$ be an even-codimensional, transversely orientable Killing foliation of a closed manifold $M$. If $\sec_\f>0$ and $$\dim(\overline{\f})-\dim(\f)\geq\frac{\codim(\f)}{4}-1,$$ then $\chi_B(\f)>0$. \end{thmx} Finally, we combine the results of A.~Haefliger on the classifying spaces of holonomy groupoids that appear in \cite{haefliger3} with the invariance of $\chi_B$ under deformations to obtain the following topological obstruction for Riemannian foliations. \begin{thmx}\label{teo: topological obstruction} Any Riemannian foliation of a compact manifold $M$ with $|\pi_1(M)|<\infty$ and $\chi(M)\neq0$ is closed.
\end{thmx} It is shown in \cite[Théorème 3.5]{ghys} that a Riemannian foliation of a compact simply-connected manifold $M$ satisfying $\chi(M)\neq 0$ must have a closed leaf, a fact that also follows from the Poincaré--Hopf index theorem (see Proposition \ref{prop: X killing com Zero=Sigma dim f}). Notice that Theorem \ref{teo: topological obstruction} shows, in fact, that all leaves of such a foliation must be closed. \section{Preliminaries}\label{section: preliminaires} In this section we review some of the basics of Riemannian foliations and establish our notation. All objects are assumed to be of class $C^\infty$ (smooth) unless otherwise stated. Throughout this article, $\f$ will denote a $p$-dimensional foliation of a \emph{connected} manifold $M$ without boundary, of dimension $n=p+q$. The subbundle of $TM$ consisting of the spaces tangent to the leaves will be denoted by $T\f$ and the Lie algebra of the vector fields with values in $T\f$ by $\mathfrak{X}(\f)$. The number $q$ is the \textit{codimension} of $\f$. A foliation $\f$ is \textit{transversely orientable} if its normal bundle $\nu\f:=TM/T\f$ is orientable. The set of the closures of the leaves in $\f$ is denoted by $\overline{\f}:=\{\overline{L}\ |\ L\in\f\}$. In the simple case when $\overline{\f}=\f$, we say that $\f$ is a \textit{closed} foliation. Recall that $\f$ can be alternatively defined by an open cover $\{U_i\}_{i\in I}$ of $M$, submersions $\pi_i:U_i\to \overline{U}_i$, with $\overline{U}_i\subset\mathbb{R}^q$, and diffeomorphisms $\gamma_{ij}:\pi_j(U_i\cap U_j)\to\pi_i(U_i\cap U_j)$ satisfying $\gamma_{ij}\circ\pi_j|_{U_i\cap U_j}=\pi_i|_{U_i\cap U_j}$ for all $i,j\in I$. The collection $\{\gamma_{ij}\}$ is a \textit{Haefliger cocycle} representing $\f$ and each $U_i$ is a \textit{simple open set} for $\f$. We will assume without loss of generality that the fibers $\pi_i^{-1}(\overline{x})$ are connected.
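The following classical example illustrates these notions.

\begin{example}\label{exe: linear foliation} Consider the linear foliation $\f_\lambda$ of the torus $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$ whose leaves are the images of the parallel lines of slope $\lambda$ in $\mathbb{R}^2$. Covering $\mathbb{T}^2$ by small open squares $U_i$ and letting $\pi_i:U_i\to\overline{U}_i\subset\mathbb{R}$ be the projections transverse to these lines, the corresponding transition maps $\gamma_{ij}$ are translations, so $\{\gamma_{ij}\}$ is a Haefliger cocycle representing the $1$-codimensional foliation $\f_\lambda$. If $\lambda$ is rational, every leaf is a circle and $\overline{\f}_\lambda=\f_\lambda$ is a closed foliation, whereas if $\lambda$ is irrational, every leaf is dense and hence $\overline{\f}_\lambda$ consists of the single leaf $\mathbb{T}^2$. \end{example}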
A \textit{foliate morphism} between $(M,\f)$ and $(N,\g)$ is a smooth map $f:M\to N$ that sends leaves of $\f$ into leaves of $\g$. In particular, we may consider $\f$-foliate diffeomorphisms $f:M\to M$. The infinitesimal counterparts of this notion are the \textit{foliate vector fields} of $\f$, that is, the vector fields in the Lie subalgebra $$\mathfrak{L}(\f)=\{X\in\mathfrak{X}(M)\ |\ [X,\mathfrak{X}(\f)]\subset\mathfrak{X}(\f)\}.$$ If $X\in\mathfrak{L}(\f)$ and $\pi:U\to \overline{U}$ is a submersion locally defining $\f$, then $X|_U$ is $\pi$-related to some vector field $\overline{X}_{\overline{U}}\in\mathfrak{X}(\overline{U})$. In fact, this characterizes $\mathfrak{L}(\f)$ \cite[Section 2.2]{molino}. The Lie algebra $\mathfrak{L}(\f)$ also has the structure of a module over the ring of \textit{basic functions} of $\f$, that is, functions $f\in C^{\infty}(M)$ such that $Xf=0$ for every $X\in\mathfrak{X}(\f)$. We denote this ring by $\Omega^0_B(\f)$. The quotient of $\mathfrak{L}(\f)$ by the ideal $\mathfrak{X}(\f)$ yields the Lie algebra $\mathfrak{l}(\f)$ of \textit{transverse vector fields}. For $X\in\mathfrak{L}(\f)$, we denote its induced transverse field by $\overline{X}\in\mathfrak{l}(\f)$. Note that each $\overline{X}$ defines a unique section of $\nu\f$ and that $\mathfrak{l}(\f)$ is also an $\Omega^0_B(\f)$-module.
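For instance, on the torus $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$ foliated by the lines of irrational slope $\lambda$, every constant vector field is foliate, since its flow consists of translations, which permute the leaves. As every leaf is dense, the basic functions are the constants, so $\Omega^0_B(\f)\cong\mathbb{R}$, and $\mathfrak{l}(\f)$ reduces to the line spanned by the transverse field $\overline{\partial_y}$ induced by $\partial_y$.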
If $\delta$ is another Haefliger cocycle defining $\f$ then $\mathscr{H}_\delta$ is equivalent to $\mathscr{H}_\gamma$, meaning that there is a maximal collection $\Phi$ of diffeomorphisms $\varphi$ from open sets of $T_\delta$ to open sets of $T_\gamma$ such that $\{\mathrm{Dom}(\varphi)\ |\ \varphi\in\Phi\}$ covers $T_\delta$, $\{\mathrm{Im}(\varphi)\ |\ \varphi\in\Phi\}$ covers $T_\gamma$ and, for all $\varphi,\psi\in\Phi$, $h\in\mathscr{H}_\delta$ and $h'\in\mathscr{H}_\gamma$, we have $\psi^{-1}\circ h'\circ\varphi\in\mathscr{H}_\delta$, $\psi\circ h\circ\varphi^{-1}\in\mathscr{H}_\gamma$ and $h'\circ\varphi\circ h\in\Phi$. We will write $(T_\f,\mathscr{H}_\f)$ to denote a representative of the equivalence class of these pseudogroups. Note that the orbit space $T_\f/\mathscr{H}_\f$ is precisely $M/\f$. We denote the germinal holonomy group of a leaf $L$ at $x\in L$ by $\mathrm{Hol}_x(L)$. Recall that there is a surjective homomorphism $h:\pi_1(L,x)\to\mathrm{Hol}_x(L)$. Also, it can be shown that leaves without holonomy are generic, in the sense that $\{x\in M\ |\ \mathrm{Hol}_x(L)=0\}$ is residual in $M$ \cite[Chapter 2]{candel}. \subsection{Foliations of orbifolds}\label{section: orbifolds} Orbifolds are generalizations of manifolds that arise naturally in many areas of mathematics. We refer to \cite[Chapter 1]{adem}, \cite[Section 2.4]{mrcun} or \cite{kleiner} for quick introductions. Let $\mathcal{O}$ be a smooth orbifold with an atlas $\mathcal{A}=\{(\widetilde{U}_i,H_i,\phi_i)\}_{i\in I}$. We denote the underlying space of $\mathcal{O}$ by $|\mathcal{O}|$. Recall that each chart of $\mathcal{A}$ consists of an open subset $\widetilde{U}$ of $\mathbb{R}^n$, a finite subgroup $H$ of $\mathrm{Diff}(\widetilde{U})$ and an $H$-invariant map $\phi:\widetilde{U}\to |\mathcal{O}|$ that induces a homeomorphism between $\widetilde{U}/H$ and some open subset $U\subset |\mathcal{O}|$. 
Given a chart $(\widetilde{U},H,\phi)$ and $x=\phi(\tilde{x})\in U$, the \textit{local group} $\Gamma_x$ of $\mathcal{O}$ at $x$ is the isotropy subgroup $H_{\tilde{x}}<H$. Its isomorphism class is independent of both the chart and the lifting $\tilde{x}$, and for every $x\in|\mathcal{O}|$ we can always find a compatible chart $(\widetilde{U}_x,\Gamma_x,\phi_x)$ \textit{around} $x$, that is, such that $\phi_x^{-1}(x)$ consists of a single point. \begin{example} If $\f$ is a $q$-codimensional foliation of a manifold $M$ all of whose leaves are compact with finite holonomy, then $M/\f$ has a canonical $q$-dimensional orbifold structure relative to which the local group of a leaf in $M/\f$ is its holonomy group \cite[Theorem 2.15]{mrcun}. We will denote the orbifold obtained this way by $M/\!/\f$, in order to distinguish it from the topological space $M/\f$. Analogously, if a Lie group $G$ acts properly and almost freely on $M$, we will denote the quotient orbifold by $M/\!/G$ (see also Example \ref{exe: foliated actions}). \end{example} Similarly to the construction of the holonomy pseudogroup of a foliation, consider $U_\mathcal{A}:=\bigsqcup_{i\in I}\widetilde{U}_i$ and $\phi:=\{\phi_i\}_{i\in I}:U_\mathcal{A}\to |\mathcal{O}|$, that is, $x\in \widetilde{U}_i\subset U_\mathcal{A}$ implies $\phi(x)=\phi_i(x)$. A \textit{change of charts} of $\mathcal{A}$ is a diffeomorphism $h:V\to W$, with $V,W\subset U_\mathcal{A}$ open sets, such that $\phi\circ h=\phi|_V$. The collection of all changes of charts of $\mathcal{A}$ forms a pseudogroup $\mathscr{H}_{\mathcal{A}}$ of local diffeomorphisms of $U_\mathcal{A}$, and $\phi$ induces a homeomorphism $U_\mathcal{A}/\mathscr{H}_{\mathcal{A}}\to|\mathcal{O}|$. The germs of the elements in $\mathscr{H}_{\mathcal{A}}$ have a natural structure of a Lie groupoid that furnishes an alternative framework to develop the theory of orbifolds (see \cite[Chapter 5]{mrcun} or \cite[Section 1.4]{adem}).
A smooth differential form on $\mathcal{O}$ can be seen as an $\mathscr{H}_{\mathcal{A}}$-invariant differential form on $U_\mathcal{A}$. The De Rham cohomology of $\mathcal{O}$ is defined in complete analogy with the manifold case. The following result can be seen as an orbifold version of De Rham's Theorem (see \cite[Theorem 1]{satake} or \cite[Theorem 2.13]{adem}). \begin{theorem}[Satake]\label{theorem: Satake} Let $\mathcal{O}$ be an orbifold. Then $H_{\mathrm{dR}}^i(\mathcal{O})\cong H^i(|\mathcal{O}|,\mathbb{R})$. \end{theorem} There are several distinct definitions of smooth maps between orbifolds in the literature (see, for instance, \cite{borzellino2}, \cite{chenruan} and \cite{kleiner}), the most common being to define a continuous map $f:|\mathcal{O}|\to|\mathcal{P}|$ to be smooth when, for every $x\in|\mathcal{O}|$, there are charts $(\widetilde{U},H,\phi)$ around $x$ and $(\widetilde{V},K,\psi)$ around $f(x)$ such that $f(U)\subset V$ and there is a smooth map $\widetilde{f}:\widetilde{U}\to\widetilde{V}$ with $\psi\circ\widetilde{f}=f\circ\phi$. There are relevant refinements of this notion, such as the \textit{good maps} defined in \cite{chenruan}, that are needed in some constructions, for example in order to coherently pull orbibundles back by smooth maps. We remark here that this notion of good map matches the notion of smooth morphisms when the orbifolds are viewed as Lie groupoids (see \cite[Proposition 5.1.7]{lupercio}). In particular, a smooth map $M\to\mathcal{O}$ ``in the orbifold sense'', as defined in \cite[p. 715]{haefliger2}, corresponds to a good map. Following \cite[Section 3.2]{haefliger2}, we define a smooth foliation $\f$ of $\mathcal{O}$ to be a smooth foliation of $U_\mathcal{A}$ which is invariant by $\mathscr{H}_{\mathcal{A}}$. The atlas can be chosen so that on each $\widetilde{U}_i$ the foliation is given by a surjective submersion with connected fibers onto a manifold $T_i$. 
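As a simple illustration of Theorem \ref{theorem: Satake}, note that if $\mathcal{O}$ is any orbifold whose underlying space is homeomorphic to $\mathbb{S}^2$ (for instance, a sphere with finitely many cone singularities), then
$$H_{\mathrm{dR}}^0(\mathcal{O})\cong\mathbb{R},\qquad H_{\mathrm{dR}}^1(\mathcal{O})=0,\qquad H_{\mathrm{dR}}^2(\mathcal{O})\cong\mathbb{R}.$$
In particular, the De Rham cohomology of an orbifold does not detect its singular points: it depends only on the underlying topological space $|\mathcal{O}|$.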
\subsection{Basic cohomology}\label{section: basic cohomology} A covariant tensor field $\xi$ on $M$ is \textit{$\f$-basic} if $\xi(X_1,\dots,X_k)=0$ whenever some $X_i\in\mathfrak{X}(\f)$, and $\mathcal{L}_X\xi=0$ for all $X\in\mathfrak{X}(\f)$. These are the tensor fields that project to tensor fields on $T_\f$ and are invariant by $\mathscr{H}_\f$. In particular, we say that a differential form $\alpha\in\Omega^i(M)$ is \textit{basic} when it is basic as a tensor field. By Cartan's formula, $\alpha$ is basic if, and only if, $i_X\alpha=0$ and $i_X(d\alpha)=0$ for all $X\in\mathfrak{X}(\f)$. We denote the $\Omega^0_B(\f)$-module of basic $i$-forms of $\f$ by $\Omega^i_B(\f)$. By definition, $\Omega^*_B(\f)$ is closed under the exterior derivative, so we can consider the complex $$\cdots \stackrel{d}{\longrightarrow} \Omega^{i-1}_B(\f) \stackrel{d}{\longrightarrow} \Omega^i_B(\f) \stackrel{d}{\longrightarrow} \Omega^{i+1}_B(\f) \stackrel{d}{\longrightarrow}\cdots .$$ The cohomology groups of this complex are the \textit{basic cohomology groups} of $\f$, that we denote by $H^i_B(\f)$. When the dimensions $\dim(H^i_B(\f))$ are all finite we define the \textit{basic Euler characteristic} of $\f$ as the alternating sum $$\chi_B(\f)=\sum_i(-1)^i\dim(H^i_B(\f)).$$ In analogy with the manifold case, we call $b_B^i(\f):=\dim(H^i_B(\f))$ the \textit{basic Betti numbers} of $\f$. When $\f$ is the trivial foliation by points, we recover the classical Euler characteristic and Betti numbers of $M$. Since we have an identification between $\f$-basic forms and $\mathscr{H}_\f$-invariant forms on $T_\f$, and an identification between differential forms on an orbifold $\mathcal{O}$ and $\mathscr{H}_{\mathcal{A}}$-invariant forms on $U_\mathcal{A}$ (for an atlas $\mathcal{A}$ of $\mathcal{O}$), the following result is clear. \begin{proposition}\label{prop: basic cohomology of closed foliations} Let $(M,\f)$ be a smooth foliation all of whose leaves are compact with finite holonomy.
Then the projection $\pi:M\to M/\!/\f$ induces an isomorphism of differential complexes $\pi^*:\Omega^*(M/\!/\f)\to\Omega_B^*(\f)$. In particular, $H_B^*(\f)\cong H_{\mathrm{dR}}^*(M/\!/\f)$. \end{proposition} \subsection{Riemannian foliations} Let $\f$ be a smooth foliation of $M$. A \textit{transverse metric} for $\f$ is a symmetric, positive, basic $(2,0)$-tensor field $\mathrm{g}^T$ on $M$. In this case $(M,\f,\mathrm{g}^T)$ is called a \textit{Riemannian foliation}. A Riemannian metric $\mathrm{g}$ on $M$ is \textit{bundle-like} for $\f$ if for any open set $U$ and any vector fields $Y,Z\in\mathfrak{L}(\f|_U)$ that are perpendicular to the leaves, $\mathrm{g}(Y,Z)\in\Omega^0_B(\f|_U)$. Any bundle-like metric $\mathrm{g}$ determines a transverse metric by $\mathrm{g}^T(X,Y):=\mathrm{g}(X^\bot,Y^\bot)$ with respect to the decomposition $TM=T\f\oplus T\f^\perp$. Conversely, given $\mathrm{g}^T$ one can always choose a bundle-like metric on $M$ that induces it \cite[Proposition 3.3]{molino}. With a chosen bundle-like metric, we will identify the bundles $\nu\f\equiv T\f^\perp$. A metric $\mathrm{g}$ is bundle-like for $(M,\f)$ if and only if a geodesic that is perpendicular to a leaf at one point remains perpendicular to all the leaves it intersects. Moreover, geodesic segments perpendicular to the leaves project to geodesic segments on the local quotients $\overline{U}$. It follows that the leaves of a Riemannian foliation are locally equidistant. \begin{example}\label{exe: foliated actions} If a foliation $\f$ on $M$ is given by an almost free action of a Lie group $G$ and $\mathrm{g}$ is a Riemannian metric on $M$ such that $G$ acts by isometries, then $\mathrm{g}$ is bundle-like for $\f$ \cite[Remark 2.7(8)]{mrcun}. In other words, a foliation induced by an isometric action is Riemannian. A foliation given by the action of a Lie group is said to be \textit{homogeneous}. For a specific example, consider the flat torus $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$.
For each $\lambda\in(0,+\infty)$ consider the almost free $\mathbb{R}$-action $(t,[x,y]) \longmapsto [x+t,y+\lambda t]$. The resulting foliation, which we will denote by $\f(\lambda)$, is the \textit{$\lambda$-Kronecker foliation} of the torus. Observe that when $\lambda$ is irrational each leaf is dense in $\mathbb{T}^2$. A more relevant example is the following. Let us fix $\lambda=(\lambda_0,\dots,\lambda_n)\in\mathbb{N}^{n+1}$ satisfying $\gcd(\lambda_0,\dots,\lambda_n)=1$ and modify the standard action of $\mathbb{C}^\times$ on $\mathbb{C}^{n+1}\setminus\{0\}$ by adding weights given by $\lambda$. Precisely, let $z\in\mathbb{C}^\times$ act by $z\!\cdot\!(z_0,\dots,z_n)=(z^{\lambda_0}z_0,\dots,z^{\lambda_n}z_n)$. This is an almost free isometric action that restricts to an action of $\mathbb{S}^1<\mathbb{C}^\times$ on $\mathbb{S}^{2n+1}\subset\mathbb{C}^{n+1}$ with the same quotient. The quotient orbifold $\mathbb{S}^{2n+1}/\!/\mathbb{S}^1$ is called a \textit{weighted projective space} and denoted $\mathbb{CP}^n[\lambda]$ (for further details, see \cite[Section 4.5]{boyer}). Note that $\mathbb{CP}^1[p,q]$, for example, is simply the $\mathbb{Z}_p$-$\mathbb{Z}_q$-football orbifold, that is, a sphere with two cone singularities (of order $p$ and $q$) at the poles. \end{example} \begin{example}\label{example: Riemannian suspensions} Let $(T,\mathrm{g})$ be a Riemannian manifold. A foliation $\f$ defined by the suspension of a homomorphism $h:\pi_1(B,x_0)\to\mathrm{Iso}(T)$ is naturally a Riemannian foliation \cite[Section 3.7]{molino}. \end{example} It follows from the definition that $\mathrm{g}^T$ projects to Riemannian metrics on the local quotients $\overline{U}_i$ of a Haefliger cocycle $\{(U_i,\pi_i,\gamma_{ij})\}$ defining $\f$. The holonomy pseudogroup $\mathscr{H}_\f$ then becomes a pseudogroup of local isometries of $T_\f$ and, with respect to a bundle-like metric, the submersions defining $\f$ become Riemannian submersions.
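To illustrate the notions of Section \ref{section: basic cohomology}, let us sketch the standard computation of the basic cohomology of the $\lambda$-Kronecker foliation $\f(\lambda)$ for irrational $\lambda$. The leaves are tangent to the vector field $X=\partial_x+\lambda\partial_y$ and are dense, so every basic function is constant, that is, $\Omega^0_B(\f(\lambda))\cong\mathbb{R}$. A $1$-form $\alpha=a\,dx+b\,dy$ is basic precisely when $i_X\alpha=a+\lambda b=0$ and $a$, $b$ are basic, hence constant, so $\Omega^1_B(\f(\lambda))=\mathbb{R}(dy-\lambda\,dx)$, while $i_X(dx\wedge dy)=dy-\lambda\,dx\neq0$ shows that there are no nonzero basic $2$-forms. All basic forms are therefore closed, and the only exact ones are trivial, so
$$H^0_B(\f(\lambda))\cong\mathbb{R},\qquad H^1_B(\f(\lambda))\cong\mathbb{R},\qquad H^2_B(\f(\lambda))=0,$$
and in particular $\chi_B(\f(\lambda))=0$.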
We will say that $\f$ has positive \textit{transverse sectional curvature} when $(T_\f,\mathrm{g}^T)$ has positive sectional curvature. In this case we write $\sec_\f>0$. The notions of negative, nonpositive and nonnegative transverse sectional curvature are defined analogously, as well as the corresponding notions for \textit{transverse Ricci curvature}. The basic cohomology of a Riemannian foliation on a compact manifold has finite dimension, as shown in \cite[Théorème 0]{kacimi2}. The hypothesis that $M$ is compact can be relaxed to $M/\overline{\f}$ being compact, provided that $\f$ is a \textit{complete Riemannian foliation}, that is, that $M$ is a complete Riemannian manifold with respect to some bundle-like metric for $\f$ \cite[Proposition 3.11]{goertsches}. Hence, in this case, $\chi_B(\f)$ is always defined. The following transverse analogue of the Bonnet--Myers Theorem due to J.~Hebda will be useful \cite[Theorem 1]{hebda}. \begin{theorem}[Hebda]\label{theorem: hebda} Let $\f$ be a complete Riemannian foliation satisfying $\ric_\f\geq c>0$. Then $M/\f$ is compact and $H^1_B(\f)=0$. \end{theorem} \subsection{Molino theory}\label{subsection: Molino Theory} In this section we summarize the structure theory for Riemannian foliations due mainly to P.~Molino \cite{molino}. Recall that a transverse metric induces a Riemannian metric on the quotient of each foliation chart $U$. The pullbacks of the Levi-Civita connections on $\overline{U}$ glue together to a well-defined connection $\nabla^B$ on $\nu\f$, the \textit{canonical basic Riemannian connection}, which induces a covariant derivative on $\mathfrak{l}(\f|_U)$ (that we also denote by $\nabla^B$). Let $\pi^\Yup:M^\Yup_\f\to M$ be the principal $\mathrm{O}(q)$-bundle of $\f$-transverse orthonormal frames\footnote{When $\f$ is transversely orientable $M^\Yup_\f$ consists of two $\mathrm{SO}(q)$-invariant connected components that correspond to the possible orientations.
In this case we will assume that one component was chosen and, by abuse of notation, denote it also by $M^\Yup_\f$. Everything stated in this section will then carry over to this case by changing $\mathrm{O}(q)$ to $\mathrm{SO}(q)$.}. The normal bundle $\nu\f$ is associated to $M^\Yup_\f$, so the basic Riemannian connection $\nabla^B$ on $\nu\f$ induces a connection form $\omega_{\f}$ on $M^\Yup_\f$. This connection form in turn defines a horizontal distribution $\mathcal{H}:=\ker(\omega_{\f})$ on $M^\Yup_\f$ that allows us to horizontally lift $\f$, obtaining a foliation $\f^\Yup$ of $M^\Yup_\f$. The advantage of lifting $\f$ to $\f^\Yup$ is that the latter admits a global transverse parallelism, that is, $\nu\f^\Yup$ is parallelizable by fields in $\mathfrak{l}(\f^\Yup)$ \cite[p. 82, p.148]{molino}. If $\f$ is complete, those fields admit complete representatives in $\mathfrak{L}(\f^\Yup)$ \cite[Section 4.1]{goertsches}, so $\f^\Yup$ is \textit{transversally complete}, in the terminology of \cite[Section]{molino}. The theory of transversely parallelizable foliations then states that the partition $\overline{\f^\Yup}$ of $M^\Yup_\f$ is a \textit{simple foliation}, that is, $W:=M^\Yup_\f/\overline{\f^\Yup}$ is a manifold and $\overline{\f^\Yup}$ is given by the fibers of a locally trivial fibration $b:M^\Yup_\f\to W$ \cite[Proposition 4.1']{molino}. Molino shows that a leaf closure $\overline{L}\in\overline{\f}$ is the image by $\pi^\Yup$ of a leaf closure of $\f^\Yup$, which implies that each leaf closure is an embedded submanifold of $M$ \cite[Lemma 5.1]{molino}. Let us fix $L^\Yup\in\f^\Yup$, denote $J=\overline{L^\Yup}$, and consider the foliation $(J,\f^\Yup|_J)$ and the Lie algebra $\mathfrak{g}_{\f}:=\mathfrak{l}(\f^\Yup|_J)$.
Then $\f^\Yup|_J$ is a complete Lie $\mathfrak{g}_{\f}$-foliation, in the terminology of E.~Fedida, whose work establishes that such foliations are \textit{developable}, meaning that they lift to simple foliations of the universal coverings \cite[Theorem 4.1]{molino}. The restriction of $\f^\Yup$ to the closure of a different leaf is isomorphic to $(J,\f^\Yup|_J)$, so $\mathfrak{g}_{\f}$ is an algebraic invariant of $\f$, called its \textit{structural algebra}. We say that $d:=\dim(\mathfrak{g}_{\f})$ is the \textit{defect} of $\f$, motivated by the results in the next section. \subsection{Molino sheaf and Killing foliations}\label{subsection: Molino Sheaf and Killing Foliations} A field $X\in\mathfrak{X}(M)$ is a \textit{Killing vector field for $\mathrm{g}^T$} if $\mathcal{L}_X\mathrm{g}^T=0$. These fields form a Lie subalgebra of $\mathfrak{L}(\f)$ \cite[Lemma 3.5]{molino} and there is, thus, a corresponding Lie algebra of \textit{transverse Killing vector fields} that we denote by $\mathfrak{iso}(\f,\mathrm{g}^T)$ (we will omit the transverse metric when it is clear from the context). The elements of $\mathfrak{iso}(\f,\mathrm{g}^T)$ are precisely the transverse fields that project to Killing vector fields on the local quotients of $\f$. Now suppose that $\f$ is a complete Riemannian foliation and consider on $M^\Yup_\f$ the sheaf of Lie algebras $\mathscr{C}_{\f^\Yup}$ that associates to an open set $U^\Yup\subset M^\Yup_\f$ the Lie algebra $\mathscr{C}_{\f^\Yup}(U^\Yup)$ of the $\f^\Yup$-transverse fields in $U^\Yup$ that commute with all the global fields in $\mathfrak{l}(\f^\Yup)$. Each field in $\mathscr{C}_{\f^\Yup}(U^\Yup)$ is the natural lift of an $\f$-transverse Killing vector field on $\pi^\Yup(U^\Yup)$ \cite[Proposition 3.4]{molino}. The push-forward $\pi^\Yup_*(\mathscr{C}_{\f^\Yup})$ will be called the \textit{Molino sheaf} of $\f$, which we denote simply by $\mathscr{C}_{\f}$.
From what we just saw, it is the sheaf of Lie algebras consisting of the local transverse Killing vector fields that lift to local sections of $\mathscr{C}_{\f^\Yup}$. The sheaf $\mathscr{C}_{\f}$ is Hausdorff \cite[Lemma 4.6]{molino} and its stalk at each point is isomorphic to the Lie algebra $\mathfrak{g}_{\f}^{-}$ opposite to $\mathfrak{g}_{\f}$ \cite[Proposition 4.4]{molino}. The main motivation for the study of $\mathscr{C}_{\f}$ is that its orbits are the closures of the leaves of $\f$ \cite[Theorem 5.2]{molino}, in the sense that $$\{X_x\ |\ X\in(\mathscr{C}_{\f})_x\}\oplus T_xL_x=T_x\overline{L_x}.$$ Notice that the relationship between $\overline{\f}$, $\mathscr{C}_{\f}$ and $\mathfrak{g}_\f$ enables us to write the defect of $\f$ as $$d=\dim(\mathfrak{g}_\f)=\dim(\overline{\f})-\dim(\f)=\codim(\f)-\codim(\overline{\f}).$$ An interesting class of Riemannian foliations is the one consisting of complete Riemannian foliations that have a \textit{globally} constant Molino sheaf. Such foliations are called \textit{Killing foliations}, following \cite{mozgawa}. In other words, if $\f$ is a Killing foliation then there exist $\overline{X}_1,\dots,\overline{X}_d\in\mathfrak{iso}(\f)$ such that $T\overline{\f}=T\f\oplus\langle X_1,\dots, X_d \rangle$. A complete Riemannian foliation $\f$ is a Killing foliation if and only if $\mathscr{C}_{\f^\Yup}$ is globally constant, and in this case $\mathscr{C}_{\f^\Yup}(M^\Yup_\f)$ is the center of $\mathfrak{l}(\f^\Yup)$. Hence $\mathscr{C}_{\f}(M)$ is central in $\mathfrak{l}(\f)$ (but is not necessarily the full center). It follows that the structural algebra of a Killing foliation is Abelian, because we have $\mathfrak{g}_{\f}^{-}\cong\mathscr{C}_{\f}(M)\cong(\mathscr{C}_{\f})_x$ for each $x\in M$. In particular, $\mathfrak{g}_{\f}^{-}\cong \mathfrak{g}_{\f}$ if $\f$ is Killing.
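For the simplest nontrivial example, the $\lambda$-Kronecker foliation $\f(\lambda)$ of $\mathbb{T}^2$ with $\lambda$ irrational (see Example \ref{exe: foliated actions}), each leaf is dense, so $\overline{\f(\lambda)}$ has $\mathbb{T}^2$ itself as its single leaf and the formula above gives
$$d=\dim(\overline{\f(\lambda)})-\dim(\f(\lambda))=2-1=1,$$
so that $\mathfrak{g}_{\f(\lambda)}\cong\mathbb{R}$. Note that $\f(\lambda)$ is indeed a Killing foliation, being given by an isometric $\mathbb{R}$-action on the flat torus (see Example \ref{example: homogeneous foliations are killing}), and its structural algebra is Abelian, as it must be.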
A complete Riemannian foliation $\f$ of a simply-connected manifold is automatically a Killing foliation \cite[Proposition 5.5]{molino}, since in this case $\mathscr{C}_{\f}$ cannot have holonomy. Homogeneous Riemannian foliations provide another relevant class of Killing foliations. \begin{example}\label{example: homogeneous foliations are killing} If $\f$ is a Riemannian foliation of a complete manifold $M$ given by the connected components of the orbits of a locally free action of $H<\mathrm{Iso}(M)$, then $\f$ is a Killing foliation and $\mathscr{C}_{\f}(M)$ consists of the transverse Killing vector fields induced by the action of $\overline{H}\subset\mathrm{Iso}(M)$ \cite[Lemme III]{molino3}. \end{example} In the terminology of transverse actions introduced in \cite[Section 2]{goertsches}, if $\f$ is a Killing foliation we can simply say that there is an effective isometric transverse action of $\mathfrak{g}_{\f}$ on $(M,\f)$, given by the Lie isomorphism $\mathfrak{g}_{\f}\ni V\mapsto \overline{V}^*\in\mathscr{C}_{\f}(M)<\mathfrak{iso}(\f)$, such that the singular foliation everywhere tangent to the distribution of varying rank $\mathfrak{g}_{\f}\!\cdot\!\f$ defined by $(\mathfrak{g}_{\f}\!\cdot\!\f)_x:=\{\overline{V}^*_x\ |\ V\in\mathfrak{g}_{\f}\}\oplus T_x\f$ is the singular foliation of $M$ given by the closures of the leaves of $\f$ \cite[Theorem 2.2]{goertsches}. For short, we write this as $\mathfrak{g}_{\f}\!\cdot\!\f=\overline{\f}$. \section{Transverse Killing vector fields and the structural algebra} In this section we prove some basic facts about the zero sets of transverse Killing vector fields that will later be used. We also study the behavior of the structural algebra when a Riemannian foliation is lifted to a finitely-sheeted covering space. 
\subsection{Properties of transverse Killing vector fields}\label{section: Elementary Properties of Transverse Killing Fields} If $(M,\f,\mathrm{g}^T)$ is a Riemannian foliation and $\overline{X}\in\mathfrak{iso}(\f,\mathrm{g}^T)$ is a transverse Killing vector field, we denote by $\Zero(\overline{X})$ the set where $\overline{X}$ vanishes. We say that an $\f$-saturated submanifold $N\subset M$ is \textit{horizontally totally geodesic} if it projects to totally geodesic submanifolds in the local quotients $\overline{U}$ of $\f$. \begin{proposition}\label{proposition: propriedade zero killing} Let $(M,\f,\mathrm{g}^T)$ be a Riemannian foliation and let $\overline{X}\in\mathfrak{iso}(\f)$ be a transverse Killing vector field. Then each connected component $N$ of $\Zero(\overline{X})$ is a closed submanifold of $M$ of even codimension. Moreover, $N$ is horizontally totally geodesic and saturated by the leaves of $\f$ and $\overline{\f}$, and if $\f$ is transversely orientable then $(N,\f|_N)$ is transversely orientable. \end{proposition} \begin{proof} Since $\overline{X}$ is transverse and $\Zero(\overline{X})$ is closed, $N$ is saturated by the leaves of both $\f$ and $\overline{\f}$. Recall that, for a local trivialization $\pi:U\to \overline{U}$ of $\f$, $\overline{X}$ projects to a Killing vector field $\overline{X}_{\overline{U}}$ on $(\overline{U},\pi_*(\mathrm{g}^T))$, so we see that $N\cap U=\pi^{-1}(\overline{N})$, where $\overline{N}$ is a connected component of $\Zero(\overline{X}_{\overline{U}})$, which in turn is known to be a totally geodesic submanifold of $\overline{U}$ of even codimension \cite[Theorem 5.3]{kobayashi}. Using this it is now easy to prove that $N$ is an even-codimensional, horizontally totally geodesic, closed submanifold of $M$. For the last claim, note that $(\nabla^B\overline{X})_x: \nu_x\f\to\nu_x\f$ is skew-symmetric and vanishes on $T_xN/T_xL_x$, so it preserves the $\mathrm{g}^T$-orthogonal complement $W_x:=(T_xN/T_xL_x)^\perp$. 
Choosing a suitable orthonormal basis, $(\nabla^B\overline{X})_x|_{W_x}$ has a matrix representation of the type $$\begin{bmatrix} 0 & \alpha_1 & & & \\ -\alpha_1& 0 & & & \\ & & \ddots & & \\ & & & 0 & \alpha_l \\ & & & -\alpha_l & 0\end{bmatrix},$$ where $l=\dim(W_x)/2=\codim(N)/2$. The eigenvalues $\pm i\alpha_j$ remain constant on $N$ because on each simple open set $U\ni x$ these are the eigenvalues of $(\nabla^{\overline{U}}(\overline{X}_{\overline{U}}))_{\overline{x}}$, which are known to be constant on $\overline{N}$ \cite[p. 61]{kobayashi}. The bundle $W=(TN/T\f)^\perp$ therefore decomposes into $\nabla^B\overline{X}$-invariant subbundles $W=E_1\oplus\dots\oplus E_l$ and we see that the endomorphisms $J|_{E_j}:=\frac{1}{\alpha_j}\nabla^B\overline{X}|_{E_j}$ define a complex structure $J$ on $W$. In particular, we have that $W$ is orientable and, since $$\left.\frac{TM}{T\f}\right|_N=\frac{TN}{T\f}\oplus W,$$ the result follows. \end{proof} Recall that the symmetry rank of a Riemannian manifold is the rank of its isometry group. In analogy, if $(M,\f,\mathrm{g}^T)$ is a Riemannian foliation we define the \textit{transverse symmetry rank} of $\f$ by $$\symrank(\f):=\max_{\mathfrak{a}}\Big\{\dim(\mathfrak{a})\Big\},$$ where $\mathfrak{a}$ runs over all the Abelian subalgebras of $\mathfrak{iso}(\f)$. If $\mathfrak{a}<\mathfrak{iso}(\f)$ is a subalgebra satisfying $\dim(\mathfrak{a})=\symrank(\f)$, we will denote by $\mathcal{Z}(\mathfrak{a})$ the set consisting of all proper connected components of the zero sets of the transverse Killing vector fields in $\mathfrak{a}$.
\begin{proposition}\label{proposition: propriedades de Zero killing} For any $N,N'\in\mathcal{Z}(\mathfrak{a})$ the following holds: \begin{enumerate}[(i)] \item Every transverse Killing vector field in $\mathfrak{a}$ is tangent to $N$ (that is, every foliate field representing a field in $\mathfrak{a}$ is tangent to $N$) and, therefore, the restriction to $N$ of the fields in $\mathfrak{a}$ yields a commutative Lie algebra $\mathfrak{a}|_N$ of transverse Killing vector fields of $\f|_N$. \item If $N$ is maximal in $\mathcal{Z}(\mathfrak{a})$ with respect to set inclusion, then $$\dim(\mathfrak{a}|_N)=\dim(\mathfrak{a})-1.$$ \item Each connected component of $N\cap N'$ also belongs to $\mathcal{Z}(\mathfrak{a})$. \end{enumerate} \end{proposition} The proof of the analogous properties for Killing vector fields on a Riemannian manifold $M$ \cite[Proposition 30]{petersen} adapts directly to the present setting if one works on the normal spaces $\nu_x\f$ in place of $T_xM$ and with the basic connection $\nabla^B$, so we will omit the details. \subsection{The canonical stratification} Given a complete Riemannian foliation $\f$ of $M$, we define $\dim(\overline{\f})=\max_{L\in\f}\{\dim(\overline{L})\}$. For $s$ satisfying $\dim(\f)\leq s\leq\dim(\overline{\f})$, let us denote by $\Sigma^s$ the subset of points $x\in M$ such that $\dim(\overline{L_x})=s$. Then we get a decomposition $$M=\bigsqcup_x \Sigma_x,$$ called the \textit{canonical stratification} of $\f$, where $\Sigma_x$ is the connected component of $\Sigma^s$ that contains $x$. Each component $\Sigma_x$ is an embedded submanifold \cite[Lemma 5.3]{molino} called a \textit{stratum} of $\f$. The restriction $\overline{\f}|_{\Sigma_x}$ now has constant dimension and forms a (regular) Riemannian foliation \cite[Lemma 5.3]{molino}. 
The \textit{regular stratum} $\Sigma^{\dim\overline{\f}}$ is an open, connected and dense subset of $M$, and each other stratum $\Sigma_x\neq \Sigma^{\dim\overline{\f}}$ is called \textit{singular} and satisfies $\codim(\Sigma_x)\geq2$ \cite[p. 197]{molino}. The subset $\Sigma^{\dim\f}$ will be called the \textit{stratum of the closed leaves}, even though it is not, in general, a canonical stratum. \begin{proposition}\label{prop: X killing com Zero=Sigma dim f} Let $(M,\f)$ be a Killing foliation. There exists a transverse Killing vector field $\overline{X}\in\mathfrak{iso}(\f)$ such that $\Zero(\overline{X})=\Sigma^{\dim\f}$. \end{proposition} \begin{proof} Choose an (at most) countable cover $\{U_i\}$ of $M\setminus\Sigma^{\dim\f}$ by simple open sets. Since there are no closed leaves in $U_i$, the algebra $\mathscr{C}_\f(U_i)=\mathscr{C}_\f(M)|_{U_i}$ projects on the quotient $\overline{U}_i$ to an Abelian algebra $\mathfrak{c}_i$ of Killing vector fields whose orbits have dimension at least $1$. It is then known that the set of Killing vector fields in $\mathfrak{c}_i$ that do not vanish at any point of $\overline{U}_i$ is residual \cite[Lemme]{mozgawa}. Clearly, $\overline{X}\in\mathscr{C}_\f(M)$ vanishes at $x\in U_i$ if, and only if, the induced Killing vector field $\overline{X}_i\in\mathfrak{c}_i$ vanishes at $\overline{x}$. Hence, since we have only countably many open sets $U_i$, it follows that the set of fields in $\mathscr{C}_\f(M)$ not vanishing at any point of $M\setminus\Sigma^{\dim\f}$ is residual in $\mathscr{C}_\f(M)$. In other words, a generic $\overline{X}\in\mathscr{C}_\f(M)<\mathfrak{iso}(\f)$ satisfies $\Zero(\overline{X})=\Sigma^{\dim\f}$. \end{proof} In particular, each connected component $N$ of $\Sigma^{\dim\f}$ is a horizontally totally geodesic, closed submanifold of $M$ of even codimension, and $\f|_N$ is transversely orientable when $\f$ is (see Proposition \ref{proposition: propriedade zero killing}).
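For instance, for the irrational Kronecker foliation $\f(\lambda)$ of Example \ref{exe: foliated actions} there are no closed leaves, so $\Sigma^{\dim\f(\lambda)}=\varnothing$, and a transverse Killing vector field as in Proposition \ref{prop: X killing com Zero=Sigma dim f} is given, for example, by the transverse field induced by $\partial_y$: since $\partial_y$ is nowhere tangent to the leaves, this field vanishes nowhere, and indeed $\Zero(\overline{\partial_y})=\varnothing=\Sigma^{\dim\f(\lambda)}$.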
\subsection{The structural algebra and finite coverings} Let us now study the behavior of the structural algebra when a Riemannian foliation is lifted to a finitely-sheeted covering space. \begin{lemma}\label{lemma: levantamentos finitos} Let $\f$ be a smooth foliation of a smooth manifold $M$ and let $\rho:\widehat{M}\to M$ be a finitely-sheeted covering map. For any $\widehat{L}\in\widehat{\f}$, we have $\overline{\rho(\widehat{L})}=\rho(\overline{\widehat{L}})$, where $\widehat{\f}=\rho^*(\f)$ is the lifted foliation of $\widehat{M}$. In particular, $\rho:\overline{\widehat{L}}\to\overline{\rho(\widehat{L})}$ is a finitely-sheeted covering. \end{lemma} The proof is elementary, so we will omit it. \begin{proposition}\label{proposition: Molino sheaf under liftings} Let $\f$ be a complete Riemannian foliation of $M$ and let $\rho:\widehat{M}\to M$ be a finitely-sheeted covering map. Then $\mathscr{C}_{\widehat{\f}}=\rho^*(\mathscr{C}_\f)$, where $\widehat{\f}=\rho^*(\f)$. \end{proposition} \begin{proof} We can identify $\widehat{M}^\Yup_{\widehat{\f}}$ with the pullback bundle $(\rho^\Yup)^*(M^\Yup_\f)$, so we have a commutative diagram $$\xymatrix{ \widehat{M}^\Yup_{\widehat{\f}} \ar[r]^{\rho^\Yup} \ar[d]_{\hat{\pi}} & M^\Yup_\f \ar[d]^{\pi}\\ \widehat{M} \ar[r]^\rho & M}$$ where the horizontal arrows are finitely-sheeted covering maps. Moreover, $(\rho^\Yup)^*(\mathscr{C}_{\f^\Yup})$ can be identified with the lift $\hat{\pi}^*(\rho^*(\mathscr{C}_\f))$, hence, since the Molino sheaves of $\f$ and $\widehat{\f}$ are defined in terms of the sheaves of the corresponding lifted foliations, it remains to show that $(\rho^\Yup)^*(\mathscr{C}_{\f^\Yup})=\mathscr{C}_{\widehat{\f}^\Yup}$. 
Indeed, $\mathscr{C}_{\widehat{\f}^\Yup}$ commutes with the Lie algebra $\mathfrak{l}(\widehat{\f}^\Yup)$ of $\widehat{\f}^\Yup$-transverse fields, hence, in particular, it commutes with $(\rho^\Yup)^*(\mathfrak{l}(\f^\Yup))$, so $\mathscr{C}_{\widehat{\f}^\Yup}$ is a subsheaf of $(\rho^\Yup)^*(\mathscr{C}_{\f^\Yup})$. This implies that if we consider an open subset $U\subset \widehat{M}^\Yup_{\widehat{\f}}$ where both $\mathscr{C}_{\widehat{\f}^\Yup}$ and $(\rho^\Yup)^*(\mathscr{C}_{\f^\Yup})$ are constant and such that $\rho^\Yup|_U$ is a diffeomorphism, then $\rho_*^\Yup(\mathscr{C}_{\widehat{\f}^\Yup}(U))<\mathscr{C}_{\f^\Yup}(\rho^\Yup(U))$. Now, by Lemma \ref{lemma: levantamentos finitos}, the leaf closures in $\widehat{\f}^\Yup$ and $\f^\Yup$ have the same dimension, hence we must have $$\rho_*^\Yup(\mathscr{C}_{\widehat{\f}^\Yup}(U))=\mathscr{C}_{\f^\Yup}(\rho^\Yup(U)),$$ therefore $\mathscr{C}_{\widehat{\f}^\Yup}(U)=(\rho^\Yup)^*(\mathscr{C}_{\f^\Yup})(U)$ for any small enough $U$.\end{proof} \begin{corollary}\label{corollary: algebra estrutural e levantamentos finitos} Let $\f$ be a complete Riemannian foliation of $M$ and let $\rho:\widehat{M}\to M$ be a finitely-sheeted covering. Then $\mathfrak{g}_\f\cong\mathfrak{g}_{\widehat{\f}}$, where $\widehat{\f}$ is the lifted foliation of $\widehat{M}$. In particular, if $|\pi_1(M)|<\infty$, then $\mathfrak{g}_\f$ is Abelian. \end{corollary} \section{Deformations of Killing foliations} There are several notions of deformations of foliations available in the literature \cite[Section 3.6]{candel}. Here we will be interested in deformations of the following type: two smooth foliations $\f_0$ and $\f_1$ of a manifold $M$ are $C^\infty$-\textit{homotopic} if there is a smooth foliation $\f$ of $M\times [0,1]$ such that $M\times\{t\}$ is saturated by leaves of $\f$, for each $t\in[0,1]$, and $$\f_i=\f|_{M\times\{i\}},$$ for $i=0,1$. 
In this case we will also say that $\f$ is a \textit{homotopic deformation} of $\f_0$ into $\f_1$. \begin{example} Consider, for $\lambda_1,\lambda_2\in\mathbb{R}$, the $\lambda_i$-Kronecker foliations $\f(\lambda_i)$ of $\mathbb{T}^2$ (see Example \ref{exe: foliated actions}). Clearly, a homotopic deformation between $\f(\lambda_1)$ and $\f(\lambda_2)$ is given by $\f((1-t)\lambda_1+t\lambda_2)$, $t\in [0,1]$. Notice that if $\lambda_1$ is irrational and we choose $\lambda_2\in\mathbb{Q}$, then we obtain a deformation of a foliation with dense leaves into a closed foliation. We are primarily interested in deformations with this property. \end{example} \subsection{Proof of Theorem \ref{theorem: Haefliger deformation}}\label{Section: proof of the deformation theorem} To deform any Killing foliation $\f$ with compact leaf closures into a closed foliation, whilst maintaining some properties of its transverse geometry, we will use the following theorem by A.~Haefliger and E.~Salem \cite[Theorem 3.4]{haefliger2}. \begin{theorem}[Haefliger--Salem]\label{theorem: Haefliger-Salem} There is a bijection between the set of equivalence classes of Killing foliations of $M$ with compact leaf closures and the set of equivalence classes of quadruples $(\mathcal{O},\mathbb{T}^N,H,\mu)$, where $\mathcal{O}$ is an orbifold, $\mu$ is an action of $\mathbb{T}^N$ on $\mathcal{O}$ and $H$ is a dense contractible subgroup of $\mathbb{T}^N$ whose action on $\mathcal{O}$ is locally free\footnote{In this paragraph, two foliations $\f$ and $\f'$ are \textit{equivalent} if their holonomy pseudogroups are equivalent, and two quadruples $(\mathcal{O},\mathbb{T}^N,H,\mu)$ and $(\mathcal{O}',\mathbb{T}^{N'},H',\mu')$ are \textit{equivalent} if there is an isomorphism between $\mathbb{T}^N$ and $\mathbb{T}^{N'}$ and a diffeomorphism from $\mathcal{O}$ to $\mathcal{O}'$ that conjugates the actions $\mu$ and $\mu'$}.
This bijection associates $\f$ to a canonical realization $(\mathcal{O},\f_H)$ of the classifying space of its holonomy pseudogroup, where $\f_H$ is the foliation of $\mathcal{O}$ determined by the orbits of $H$. In particular, there is a smooth good map $\Upsilon:M\to\mathcal{O}$ such that $\Upsilon^*(\f_H)=\f$. \end{theorem} Suppose that $M$ is compact and $\f$ is a Killing foliation of $M$. We can deform $\f$ using this result as follows. Let $\mathfrak{h}$ be the Lie algebra of $H$ and consider a Lie subalgebra $\mathfrak{k}<\mathrm{Lie}(\mathbb{T}^N)\cong\mathbb{R}^N$, with $\dim(\mathfrak{k})=\dim(\mathfrak{h})$, such that its corresponding Lie subgroup $K<\mathbb{T}^N$ is closed. We can suppose $\mathfrak{k}$ close enough to $\mathfrak{h}$, regarded as points in the Grassmannian $\mathrm{Gr}^{\dim\mathfrak{h}}(\mathrm{Lie}(\mathbb{T}^N))$, so that the action $\mu|_K$ remains locally free. Because $K$ is a closed subgroup, the leaves of the foliation $\g_K$ defined by the orbits of $K$ are all closed. Taking $\mathfrak{k}$ even closer to $\mathfrak{h}$ if necessary, we can suppose that $\Upsilon$ remains transverse to $\g_K$, so $\g:=\Upsilon^*(\g_K)$ is the desired approximation of $\f$. In this sense, $\f$ can be arbitrarily approximated by such a closed foliation $\g$. Moreover, this allows us to suppose that a submanifold $T\subset M$ that is a total transversal for $\f$ is also a total transversal for $\g$, so that there are representatives of the holonomy pseudogroups $\mathscr{H}_\f$ and $\mathscr{H}_\g$ both acting on $T$. The same goes for $\mathscr{H}_{\f_H}$ and $\mathscr{H}_{\g_K}$: they both act on the transversal $S=\Upsilon(T)$. Theorem \ref{theorem: Haefliger-Salem} states that $(\mathcal{O},\f_H)$ is a realization of the classifying space of $\mathscr{H}_\f$. Roughly speaking, this classifying space is a space with a foliation such that the holonomy covering of each leaf is contractible and whose holonomy pseudogroup is equivalent to $\mathscr{H}_\f$.
In particular, $(T,\mathscr{H}_\f)$ is equivalent to $(S,\mathscr{H}_{\f_H})$, the equivalence being generated by restrictions of $\Upsilon$ to open sets in $T$ where it becomes a diffeomorphism. One can prove that also $(T,\mathscr{H}_\g)\cong(S,\mathscr{H}_{\g_K})$, but $(\mathcal{O},\g_K)$ is not a realization of the classifying space of $\mathscr{H}_\g$, because a generic leaf is no longer contractible. It follows that $\Upsilon^*$ defines isomorphisms $\mathcal{T}^*(\f_H)\to \mathcal{T}^*(\f)$ and $\mathcal{T}^*(\g_K)\to \mathcal{T}^*(\g)$. Now choose a $\mathbb{T}^N$-invariant normal bundle $\nu\f_H\subset T\mathcal{O}$. This can be done, for instance, by choosing a Riemannian metric on $\mathcal{O}$ with respect to which $\mathbb{T}^N$ acts by isometries. Let $\xi\in\mathcal{T}^*(\f)$ and let $\xi_H$ be the corresponding $\f_H$-basic tensor field on $\mathcal{O}$, that is, $\xi_H$ is $H$-invariant (hence $\mathbb{T}^N$-invariant), and $\xi_H(X_1,\dots,X_k)=0$ whenever some $X_i\in T\f_H$. Define a tensor field $\xi_K$ on $\mathcal{O}$ by declaring that $\xi_K=\xi_H$ on $\nu\f_H$ and that $\xi_K(X_1,\dots,X_k)=0$ whenever some $X_i\in T\g_K$. Then from what we saw above it follows that $\xi_K$ is $K$-invariant. Since $\xi_K$ vanishes in $T\g_K$ by construction, it is therefore $\g_K$-basic. The association $\xi\mapsto \Upsilon^*(\xi_K)$ defines the desired injection $\iota:\mathcal{T}^*(\f)\hookrightarrow \mathcal{T}^*(\g)$. By construction it is clear that $\iota(\mathrm{g}^T)$ is a transverse Riemannian metric for $\g$. In fact, for any $\f$-transverse structure given by an $\f$-basic tensor field, $\iota$ will induce a $\g$-transverse structure of the same kind. Since $M$ is complete, $\mathscr{H}_\f$ is a complete pseudogroup of local isometries (in the sense of \cite[Definition 2.1]{salem}), with respect to $\mathrm{g}^T$ (see \cite[Proposition 2.6]{salem}). 
By \cite[Proposition 2.3]{salem}, its closure $\overline{\mathscr{H}_\f}$ in the $C^1$-topology is also a complete pseudogroup of local isometries, whose orbits are the closures of the orbits of $\mathscr{H}_\f$, which in turn correspond to leaf closures in $\overline{\f}$. The same goes for the orbits of $\overline{\mathscr{H}_{\f_H}}$: they correspond to the closures of the leaves of $\f_H$, that is, the orbits of $\mathbb{T}^N$. Hence, by the equivalence $\mathscr{H}_\f\cong\mathscr{H}_{\f_H}$, we have $M/\overline{\f}\cong \mathcal{O}/\mathbb{T}^N$. It is clear that the $\mathbb{T}^N$-action projects to an action of the torus $\mathbb{T}^N/K\cong \mathbb{T}^{N-\dim\mathfrak{k}}$ on the orbifold $\mathcal{O}/\!/\g_K$. Therefore $$\frac{M}{\overline{\f}}\cong \frac{|\mathcal{O}|}{\mathbb{T}^N}\cong \frac{|\mathcal{O}|/\g_K}{\mathbb{T}^{N-\dim\mathfrak{k}}}.$$ Observe that, since $N=\dim(\overline{\f_H})$, we have $N-\dim(\mathfrak{k})=N-\dim(\f_H)=\codim(\f)-\codim(\overline{\f})=d$, where $d=\dim(\mathfrak{g}_\f)$ is the defect of $\f$. On the other hand, $\mathcal{O}/\!/\g_K\cong M/\!/\g$, so we get a smooth $\mathbb{T}^d$ action on $M/\!/\g$ satisfying $$\frac{M}{\overline{\f}}\cong\frac{M/\g}{\mathbb{T}^d}.$$ If we denote the canonical projection $M\to M/\!/\g$ by $\pi^\g$, then $\pi^\g_*\circ\iota$ defines an isomorphism between $\mathcal{T}^*(\f)$ and $\mathcal{T}^*(M/\!/\g)^{\mathbb{T}^d}$. In fact, for this it suffices to see that $\xi_H\mapsto \pi^{\g_K}_*(\xi_K)$ is an isomorphism $\mathcal{T}^*(\f_H)\to\mathcal{T}^*(\mathcal{O}/\!/\g_K)^{\mathbb{T}^d}$, but since $\xi_H\mapsto\xi_K$ defines an isomorphism between $\mathcal{T}^*(\f_H)$ and $\mathcal{T}^*(\g_K)^{\mathbb{T}^N}$, this is clear. In particular, the $\mathbb{T}^d$-action on $M/\!/\g$ is isometric with respect to $\pi^\g_*\circ\iota(\mathrm{g}^T)$. 
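The construction above can be illustrated informally in the simplest nontrivial case; the identifications below are the natural ones (a sketch, not needed for the proof). \begin{example} Consider the Kronecker foliation $\f=\f(\lambda)$ of $M=\mathbb{T}^2$ with $\lambda$ irrational (see Example \ref{exe: foliated actions}). Here one can take $\mathcal{O}=\mathbb{T}^2$, with $\mathbb{T}^N=\mathbb{T}^2$ acting on itself by translations, $\Upsilon=\mathrm{id}$ and $H$ the dense one-parameter subgroup with $\mathfrak{h}=\mathbb{R}(1,\lambda)$. Choosing $\mathfrak{k}=\mathbb{R}(q,p)$, with $p/q\in\mathbb{Q}$ close to $\lambda$, gives a closed subgroup $K\cong\mathbb{S}^1$, and $\g=\g_K$ is the closed foliation whose leaves are the $(p,q)$-torus curves. In this case $d=N-\dim(\mathfrak{k})=1$, the leaf space $M/\g\cong\mathbb{T}^2/K\cong\mathbb{S}^1$ carries a transitive action of the residual circle $\mathbb{T}^2/K$, and $$\frac{M}{\overline{\f}}\cong\frac{M/\g}{\mathbb{T}^1}$$ reduces to a single point, in accordance with the fact that the only leaf closure of $\f$ is $\mathbb{T}^2$ itself. \end{example}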
Of course, we could further consider a smooth path $\mathfrak{h}(t)$, for $t\in[0,1]$, on the Grassmannian $\mathrm{Gr}^{\dim\mathfrak{h}}(\mathrm{Lie}(\mathbb{T}^N))$ connecting $\mathfrak{h}$ to $\mathfrak{k}$ such that the action $\mu|_{H(t)}$ of each corresponding Lie subgroup $H(t)$ is locally free and $\Upsilon$ remains transverse to the induced foliation. Then $\f_t:=\Upsilon^*(\f_{H(t)})$ defines a $C^{\infty}$-homotopic deformation of $\f$ into $\g$. In this case we have an injection $\iota_t:\mathcal{T}^*(\f)\hookrightarrow \mathcal{T}^*(\f_t)$ for each $t$. It is clear that $\iota_t(\xi)$ is a smooth time-dependent tensor field on $M$ for each $\xi\in\mathcal{T}^*(\f)$, that is, $(t,x)\mapsto\iota_t(\xi)(x)$ is smooth as a map $[0,1]\times M\to \bigotimes^*TM$. Since both the deformation $\f_t$ and $\iota_t(\mathrm{g}^T)$ depend smoothly on $t$, the transverse sectional curvature $\sec_{\f_t}$ with respect to $\iota_t(\mathrm{g}^T)$ is a smooth function of $t$. In particular, if $\sec_\f> c$, then since $M$ is compact we actually have $\sec_\f\geq c'> c$, hence if $\g$ is sufficiently close to $\f$ then $\sec_\g>c$, by continuity. Of course, similarly we can choose $\g$ satisfying $\sec_\g< c$ when $\sec_\f< c$.\qed \vspace{8pt} We will take advantage of the notation established above to observe one more fact that will be useful later. Choose a sequence $\g_i=\Upsilon^*(\g_{K_i})$ of closed foliations approaching $\f$. As the deformation respects $\overline{\f}$, if $\f$ has a closed leaf $L$, then $L$ is a leaf of $\g_i$ for each $i$. Let us denote by $h(\g_i)$ the order of the holonomy group of $L$ as a leaf of $\g_i$. We claim that $$\lim_{i\to\infty}h(\g_i)=\infty.$$ In fact, if $L_{\mathcal{O}}=Hx$ is the closed orbit in $\mathcal{O}$ corresponding to $L$, then $h(\g_i)\geq|(K_i)_x|$, since the $\g_{K_i}$-holonomy group of $L_{\mathcal{O}}$ is an extension of $(K_i)_x$ by the local group $\Gamma_x$ of $\mathcal{O}$.
On the other hand, the stabilizer $\mathbb{T}^N_x$ is transverse to $H$ in $\mathbb{T}^N$ and, as $H$ is dense, $H_x=\mathbb{T}^N_x\cap H$ is infinite, hence it is clear that if $K_i$ approaches $H$, that is, if $\mathfrak{k}_i\to\mathfrak{h}$, then $|\mathbb{T}^N_x\cap K_i|\to \infty$. Let us state this below. \begin{lemma}\label{lemma: variando a holonomia} Let $\f$ be a nonclosed Killing foliation of a compact manifold $M$. If $\g_i$ is a sequence of closed foliations approaching $\f$, given by Theorem \ref{theorem: Haefliger deformation}, and $\f$ has a closed leaf $L$, then $h(\g_i)\to\infty$, where $h(\g_i)$ is the order of the holonomy group of $L$ as a leaf of $\g_i$. \end{lemma} \section{A closed leaf theorem} Let us proceed with the proof of Theorem \ref{theorem: Berger for foliations}. Suppose that $\f$ has no closed leaves and consider the lift $\widehat{\f}$ of $\f$ to the universal covering space $\rho:\widehat{M}\to M$ of $M$. Then, endowed with $\rho^*(\mathrm{g}^T)$, $\widehat{\f}$ is a Killing foliation also satisfying $\sec_{\widehat{\f}}\geq c>0$. By Proposition \ref{proposition: Molino sheaf under liftings}, $\widehat{\f}$ does not have any closed leaves either. By Theorem \ref{theorem: hebda} it follows that $\widehat{M}/\overline{\widehat{\f}}$ is compact, therefore the holonomy pseudogroup $(\widehat{T},\mathscr{H}_{\widehat{\f}})$ of $\widehat{\f}$ is a complete pseudogroup of local isometries \cite[Proposition 2.6]{salem} whose space of orbit closures is compact (as it coincides with $\widehat{M}/\overline{\widehat{\f}}$). Moreover, since there is a surjective homomorphism $$\pi_1\big(\widehat{M}\big)\to\pi_1\big(\mathscr{H}_{\widehat{\f}}\big)$$ (see \cite[Section 1.11]{salem}), we have that $\mathscr{H}_{\widehat{\f}}$ is $1$-connected. 
We can now apply \cite[Theorem 3.7]{haefliger2} to conclude that there exists a Riemannian foliation $\f'$ of a simply-connected compact manifold $M'$ with $(T',\mathscr{H}_{\f'})$ equivalent to $(\widehat{T},\mathscr{H}_{\widehat{\f}})$. In particular, $\f'$ is Killing and also satisfies $\sec_{\f'}\geq c>0$, since it is endowed with the transverse metric $(\mathrm{g}^T)'$ on $T'$ induced by $\rho^*(\mathrm{g}^T)$ via the equivalence. Furthermore, $\f'$ has no closed leaves, otherwise $\mathscr{H}_{\f'}$ would have a closed orbit, contradicting $\mathscr{H}_{\f'}\cong\mathscr{H}_{\widehat{\f}}$. We now apply Theorem \ref{theorem: Haefliger deformation} to $(M',\f')$, deforming it into an even-codimensional closed Riemannian foliation $\g'$ with $\sec_{\g'}>0$. Since $M'$ is simply-connected we have that $\g'$ is transversely orientable, hence $M'/\!/\g'$ is orientable. Fixing an orientation for it, we have an even-dimensional, compact, oriented Riemannian orbifold $M'/\!/\g'$ with positive sectional curvature and an isometric action of a torus $\mathbb{T}^d$, where $d=\dim(\mathfrak{g}_{\f'})>0$. This action has no fixed points, since $M'/\overline{\f'}=(M'/\g')/\mathbb{T}^d$ and $\f'$ has no closed leaves. It is possible to choose a $1$-parameter subgroup $\mathbb{S}^1<\mathbb{T}^d$ such that $(M'/\!/\g')^{\mathbb{S}^1}=(M'/\!/\g')^{\mathbb{T}^d}$ (see \cite[Lemma 4.2.1]{allday}), that is, such that $\mathbb{S}^1$ also acts without fixed points. In particular, this gives us an isometry of $M'/\!/\g'$ that has no fixed points, which contradicts the Synge--Weinstein Theorem for orbifolds (see \cite[Theorem 2.3.5]{yeroshkin}). This finishes the proof. \section{Positively curved foliations with maximal-dimensional closure} A generalization of the Grove--Searle classification of positively curved manifolds with maximal symmetry rank for Alexandrov spaces was obtained recently in \cite{harvey}.
Since the underlying space of a positively curved orbifold is naturally a positively curved Alexandrov space, this result also furnishes a classification for orbifolds with maximal symmetry rank \cite[Corollary E]{harvey}. Theorem \ref{theorem: harvey-searle para folheações} now follows easily by combining this classification with Theorem \ref{theorem: Haefliger deformation}. \subsection{Proof of Theorem \ref{theorem: harvey-searle para folheações}}\label{section: proof of havery-searle} Let $\g$ be a closed foliation, given by Theorem \ref{theorem: Haefliger deformation}, satisfying $\sec_{\g}>0$. Then the leaf space of $\g$ is a $q$-dimensional Riemannian orbifold $M/\!/\g$ whose sectional curvature is positive. By the Bonnet--Myers Theorem for orbifolds \cite[Corollary 21]{borzellino3} it follows that $M/\!/\g$ is compact. Moreover, if $\omega$ is a basic orientation form for $\f$ then $\iota(\omega)$ is an orientation form for $\g$, so $M/\!/\g$ is orientable. In particular, $M/\!/\g$ has no $1$-codimensional strata. The union of the closures of the $1$-codimensional strata coincides with the Alexandrov boundary of $M/\g$, so it follows that $M/\!/\g$ is closed in the sense of \cite{harvey}. Furthermore, since $M/\!/\g$ admits an isometric action of the torus $\mathbb{T}^d$, where $d=\dim(\overline{\f})-\dim(\f)$ is the defect of $\f$, it follows directly from \cite[Corollary E]{harvey} that $d\leq\lfloor(q+1)/2\rfloor$ and, in case of equality, that $M/\g$ is homeomorphic to one of the listed spaces.\qed \vspace{8pt} \begin{corollary}\label{Corollary to H-S for foliations} Let $(M,\f)$ be a Riemannian foliation. 
If $\sec_M\geq c>0$ with respect to a bundle-like metric for $\f$, then $$\dim(\overline{\f})-\dim(\f)\leq\left\lfloor\frac{\codim(\f)+1}{2}\right\rfloor$$ and, if equality holds, then the universal cover $\widehat{M}$ of $M$ fibers over $\mathbb{S}^n$ or $|\mathbb{CP}^{n/2}[\lambda]|$, meaning that it admits a closed foliation $\g$ such that $\widehat{M}/\g$ is homeomorphic to one of those spaces. \end{corollary} \begin{proof} Let $(\widehat{M},\widehat{\f})$ be the lifted foliation of the universal covering space of $M$. Since $\sec_M\geq c>0$ it follows from the Bonnet--Myers Theorem that $M$ is compact and $|\pi_1(M)|<\infty$. Hence, using Proposition \ref{proposition: Molino sheaf under liftings} and Theorem \ref{theorem: harvey-searle para folheações}, we obtain that $$\dim(\overline{\f})-\dim(\f)=\dim(\overline{\widehat{\f}})-\dim(\widehat{\f})\leq\left\lfloor\frac{\codim(\f)+1}{2}\right\rfloor=\left\lfloor\frac{\codim(\widehat{\f})+1}{2}\right\rfloor,$$ since $\widehat{\f}$ is a Killing foliation and $\sec_{\widehat{\f}}\geq\sec_{\widehat{M}}$, by O'Neill's equation. In case of equality, let $\g$ be the closed foliation of $\widehat{M}$ given by Theorem \ref{theorem: harvey-searle para folheações}. The fact that $\widehat{M}$ is simply-connected guarantees that $\pi_1^{\mathrm{orb}}(\widehat{M}/\!/\g):=\pi_1(B(\widehat{M}/\!/\g))=0$ (see Section \ref{section: topological obstruction}), thus excluding the possibility that $\widehat{M}/\!/\g$ is a quotient of $\mathbb{S}^n$ or $\mathbb{CP}^{n/2}[\lambda]$ by a (non trivial) finite group $\Lambda$. \end{proof} \subsection{Isolated closed leaves} In \cite[Proposition 8.1]{goertsches} it is shown that if a Killing foliation $\f$ of codimension $q$ admits a closed leaf, then $2d\leq q$, where $d=\dim(\mathfrak{g}_\f)$. 
When $q$ is even this is in line with what we obtained in Theorem \ref{theorem: harvey-searle para folheações} under the hypothesis $\sec_\f\geq c>0$ (see also Theorem \ref{theorem: Berger for foliations}). If $q$ is odd our defect bound becomes $\lfloor(q+1)/2\rfloor=(q+1)/2$, so if there exists such an odd-codimensional $\f$ satisfying $2d=q+1$ (and the hypotheses in Theorem \ref{theorem: harvey-searle para folheações}), it cannot have closed leaves. Moreover, \cite[Proposition 8.1]{goertsches} also states that if there is an \emph{isolated} closed leaf, then $q$ is even. When $2d=q$ we have the following partial converse. \begin{proposition}\label{proposition: dim g max then isolated closed leaves} Let $\f$ be a Killing foliation of a closed manifold $M$. If $\codim(\f)=2\codim(\overline{\f})$, then $\f$ has at most finitely many (hence isolated) closed leaves. \end{proposition} \begin{proof} Let $d$ be the defect of $\f$ and denote $q=\codim(\f)$, so $\codim(\f)=2\codim(\overline{\f})$ becomes $q=2d$. If $\f$ has no closed leaves there is nothing to prove, so suppose that $\Sigma^{\dim\f}$ is nonempty. To prove that the closed leaves must be isolated, consider a closed foliation $\g$ of $M$ given by Theorem \ref{theorem: Haefliger deformation}. Note that, in view of Proposition \ref{proposition: Molino sheaf under liftings}, we may suppose without loss of generality that $\f$ and $\g$ are transversely oriented. Then, $\mathcal{O}:=M/\!/\g$ is a $q$-dimensional, closed, oriented orbifold admitting an effective $\mathbb{T}^d$-action such that $M/\overline{\f}\cong |\mathcal{O}|/\mathbb{T}^d$. In particular, the fixed-point set $|\mathcal{O}|^{\mathbb{T}^d}$ is nonempty, so $(\mathcal{O},\mathbb{T}^d)$ is a torus orbifold, in the sense of \cite[Definition 3.1]{galazgarcia} and, therefore, $|\mathcal{O}|^{\mathbb{T}^d}$ consists of finitely many isolated points, by \cite[Lemma 3.3]{galazgarcia}.
Since the closed leaves of $\f$ correspond to the inverse images of the points in $|\mathcal{O}|^{\mathbb{T}^d}$ by the quotient projection, the result follows. \end{proof} \begin{corollary} Let $\f$ be a Riemannian foliation of a closed manifold $M$ satisfying $|\pi_1(M)|<\infty$. If $\codim(\f)=2\codim(\overline{\f})$, then $\f$ has only finitely many closed leaves. \end{corollary} \begin{proof} The lift $\widehat{\f}$ of $\f$ to the universal covering space of $M$ is a Killing foliation and satisfies $\codim(\widehat{\f})=\codim(\f)=2\codim(\overline{\f})=2\codim(\overline{\widehat{\f}})$, by Corollary \ref{corollary: algebra estrutural e levantamentos finitos}. Therefore $\widehat{\f}$ (and, consequently, $\f$) has finitely many closed leaves. \end{proof} \section{Localization of the basic Euler characteristic} We will prove in this section that the basic Euler characteristic (see Section \ref{section: basic cohomology}) of a Killing foliation $(M,\f)$ localizes to the zero set of any transverse Killing vector field in $\mathscr{C}_{\f}(M)$ and hence, in particular, to $\Sigma^{\dim \f}$. A useful tool will be a transverse version of Hopf's Index Theorem that appears in \cite[Theorem 3.18]{belfi}. It states that, for $\f$ a Riemannian foliation on a compact manifold $M$, if $X\in\mathfrak{L}(\f)$ is $\f$-nondegenerate, then $$\chi_B(\f)=\sum_{J}\mathrm{ind}_{J}(X)\chi_B(J,\f,\mathrm{Or}_{J}(X)),$$ where the sum ranges over all critical leaf closures $J$ of $\f$. The concepts involved, such as $\f$-nondegeneracy and the index $\mathrm{ind}_{J}(X)$, are generalizations of the classical analogs, and $\chi_B(J,\f,\mathrm{Or}_{J}(X))$ is the alternating sum of the dimensions of the cohomology groups of the complex of $\f|_J$-basic forms with values in the orientation line bundle of $X$ at $J$ (for details, see \cite[Section 3]{belfi}). \begin{theorem}\label{theo: localization basic euler char} Let $\f$ be a Riemannian foliation of a closed manifold $M$.
If $\overline{X}\in\mathfrak{iso}(\f)$, then $\chi_B(\f)=\chi_B(\f|_{\Zero(\overline{X})})$. \end{theorem} \begin{proof} We are going to construct a suitable vector field $W\in\mathfrak{L}(\f)$ and apply the Basic Hopf Index Theorem. Let $N_1,\dots,N_k$ be the connected components of $\Zero(\overline{X})$. For each $i\in\{1,\dots, k\}$, choose $f_i\in\Omega_B^0(\f|_{N_i})$ a basic Bott--Morse function and $\mathrm{Tub}_\varepsilon(N_i)$ a saturated tubular neighborhood of $N_i$ of radius $\varepsilon>0$ with orthogonal projection $\pi_i:\mathrm{Tub}_\varepsilon(N_i)\to N_i$. Assume $\varepsilon$ sufficiently small so that the tubular neighborhoods are pairwise disjoint. It is not difficult to see that each $\pi_i$ is a foliate map, so we have that $$\widetilde{f}_i:=f_i\circ\pi_i:\mathrm{Tub}_{\varepsilon/2}(N_i)\longrightarrow\mathbb{R}$$ is a basic function. Consider $\phi_i$ a basic bump function for $\mathrm{Tub}_{\varepsilon/4}(N_i)$ satisfying $\mathrm{supp}(\phi_i)\subset\mathrm{Tub}_{\varepsilon/2}(N_i)$ and define $Y_i:=\phi_i\grad(\widetilde{f}_i)$. Then $Y_i$ is foliate and if $v$ is a vector in the vertical bundle of $\pi_i$ then $$\mathrm{g}(Y_i,v)=\mathrm{g}(\phi_i\grad(\widetilde{f}_i),v)=\phi_i\dif(f_i\circ\pi_i)v=\phi_i\dif f_i(\dif \pi_i v)=0,$$ hence $Y_i$ is $\pi_i$-horizontal. Now let $\{\varphi,\psi\}$ be a basic partition of unity subordinate to the saturated open cover $$M=\left[\bigsqcup_i\mathrm{Tub}_\varepsilon(N_i)\right]\cup\left[M\Big\backslash\overline{\bigsqcup_i\mathrm{Tub}_{\varepsilon/2}(N_i)}\right]$$ and define $Z_i:=\varphi\grad(d^2_{N_i})$, where $d_{N_i}:M\to\mathbb{R}$ is the distance function to $N_i$. Then clearly $Z_i$ is $\pi_i$-vertical and $\mathrm{supp}(Z_i)\subset\mathrm{Tub}_{\varepsilon/2}(N_i)$. Moreover, one proves that the flow of each $Z_i$ preserves $\f$, hence $Z_i$ is foliate. 
The vector field that we are interested in is $$W:=\psi X+\sum_{i=1}^k(Y_i+Z_i),$$ where $X\in\mathfrak{L}(\f)$ is a fixed representative for $\overline{X}$ (and it is understood that $Y_i$ and $Z_i$ are extended by zero outside their original domains). On $M\setminus\mathrm{Tub}_\varepsilon(\Zero(\overline{X}))$ we have $\overline{W}=\overline{X}\neq 0$. By passing to local quotients we easily see that $X$ is $\pi_i$-horizontal on each $\mathrm{Tub}_\varepsilon(N_i)$, therefore, on $\mathrm{Tub}_\varepsilon(N_i)\setminus\mathrm{Tub}_{\varepsilon/2}(N_i)$, we have $\overline{W}=\psi \overline{X}+\overline{Z}_i\neq0$, because $Z_i$ is $\pi_i$-vertical. Thus, the critical leaf closures of $W$ must be within $\mathrm{Tub}_{\varepsilon/2}(N_i)$, where we have $W=Y_i+Z_i$. Since $Y_i$ is $\pi_i$-horizontal and $\Zero(Z_i|_{\mathrm{Tub}_{\varepsilon/2}(N_i)})=N_i$, we conclude that the critical leaf closures of $W|_{\mathrm{Tub}_{\varepsilon}(N_i)}$ coincide with those of $Y_i$ (and are, therefore, within $N_i$). Let $J=\overline{L}$ be one of those critical leaf closures of $W|_{\mathrm{Tub}_{\varepsilon}(N_i)}$ and let $(x_1,\dots,x_p)$ be normal coordinates on a neighborhood $V\subset L$. Now choose an orthonormal frame $$(E_1,\dots,E_r,E_{r+1},\dots,E_{\overline{q}},E_{\overline{q}+1},\dots,E_q)$$ for $TL^\perp$ on $V$ such that $(E_1,\dots,E_r)$ is an orthonormal frame for $TJ^\perp\cap TN_i$ and $(E_{r+1},\dots,E_{\overline{q}})$ is an orthonormal frame for $TN_i^\perp$ (in particular, $(E_1,\dots,E_{\overline{q}})$ forms an orthonormal frame for $TJ^\perp$). Applying $\exp^\perp_{L}$ we get coordinates $$(x_1,\dots,x_p,y_1,\dots,y_r,y_{r+1},\dots,y_{\overline{q}},y_{\overline{q}+1},\dots,y_q)$$ for a tubular neighborhood $\mathrm{Tub}_\delta(V)$ such that $(x_1,\dots,x_p,y_1,\dots,y_r,y_{\overline{q}+1},\dots,y_q)$ are local coordinates for $N_i$. Choose $\delta$ small enough so that $\phi_i|_{\mathrm{Tub}_\delta(V)}\equiv 1\equiv\varphi|_{\mathrm{Tub}_\delta(V)}$.
The expression of $W|_{\mathrm{Tub}_\delta(V)}=\grad(\widetilde{f}_i)+\grad(d^2_{N_i})$ in those coordinates is simply $$W=\sum_{j=1}^r\frac{\partial \widetilde{f}_i}{\partial y_j}\frac{\partial}{\partial y_j}+\sum_{j=r+1}^{\overline{q}}2y_j\frac{\partial}{\partial y_j},$$ so, noting that $\displaystyle\frac{\partial^2 \widetilde{f}_i}{\partial y_j\partial y_k}(x)=\displaystyle\frac{\partial^2 f_i}{\partial y_j\partial y_k}(x)$ for $1\leq j,k\leq r$, we see that, for $x\in J\cap V$, the linear part $W_{J,x}$ of $W$ (see \cite[p. 325]{belfi}) has the matrix representation $$ \sbox0{$\begin{matrix} \displaystyle\frac{\partial^2 f_i}{\partial y_1\partial y_1}(x) & \dots & \displaystyle\frac{\partial^2 f_i}{\partial y_1\partial y_r}(x)\\ \vdots & \ddots & \vdots \\ \displaystyle\frac{\partial^2 f_i}{\partial y_r\partial y_1}(x) & \dots & \displaystyle\frac{\partial^2 f_i}{\partial y_r\partial y_r}(x)\vspace{7pt}\end{matrix}$} \sbox1{$\stackrel{\ }{\begin{matrix} 2& & \\ & \ddots & \\ & & 2\end{matrix}}$} \sbox2{\LARGE $0$} W_{J,x}\equiv\left[ \begin{array}{c:c} \usebox{0} & \usebox{2}\\ \hdashline \usebox{2} & \usebox{1}\end{array}\right] $$ on $TJ^\perp$. Therefore $W$ is $\f$-nondegenerate. On the other hand, consider $(N_i,\f|_{N_i})$ and the restriction $W|_{N_i}$. Using the coordinates $(y_1,\dots,y_r,y_{\overline{q}+1},\dots,y_q)$ we see that the matrix representation of $(W|_{N_i})_{J,x}$ coincides with the first block of $W_{J,x}$. In particular, $\ind_{J}(W)=\ind_{J}(W|_{N_i})$. Moreover, this also shows that we can identify $\mathrm{Or}_{J}(W)$ with $\mathrm{Or}_{J}(W|_{N_i})$, since the negative directions of $W_{J,x}$ and $(W|_{N_i})_{J,x}$ coincide. Therefore $\chi_B(J,\f,\mathrm{Or}_{J}(W))=\chi_B(J,\f|_{N_i},\mathrm{Or}_{J}(W|_{N_i}))$.
We now apply \cite[Theorem 3.18]{belfi} to $(M,\f)$ and to each $(N_i,\f|_{N_i})$, obtaining \begin{eqnarray*} \chi_B(\f)&=&\sum_{J}\mathrm{ind}_{J}(W)\chi_B(J,\f,\mathrm{Or}_{J}(W))\\ & = & \sum_i\sum_{J}\mathrm{ind}_{J}(W|_{N_i})\chi_B(J,\f,\mathrm{Or}_{J}(W|_{N_i}))\\ &=& \sum_i \chi_B(\f|_{N_i}).\end{eqnarray*} It is clear that basic cohomology is additive under disjoint unions, so the result follows.\end{proof} \subsection{Proof of Theorem \ref{theorem: basic euler char localizes to closed leaves} and some corollaries} It is now easy to establish Theorem \ref{theorem: basic euler char localizes to closed leaves}. By Proposition \ref{prop: X killing com Zero=Sigma dim f} we can choose a transverse Killing vector field $\overline{X}\in\mathfrak{iso}(\f)$ such that $\Zero(\overline{X})=\Sigma^{\dim\f}$, thus Theorem \ref{theo: localization basic euler char} gives us $\chi_B(\f)=\chi_B(\f|_{\Sigma^{\dim\f}})$. Now Proposition \ref{prop: basic cohomology of closed foliations} yields $$\chi_B(\f|_{\Sigma^{\dim\f}})=\chi(\Sigma^{\dim\f}/\!/\f)=\chi(\Sigma^{\dim\f}/\f),$$ where the last equality follows from Theorem \ref{theorem: Satake}.\qed \vspace{8pt} Theorem \ref{theorem: basic euler char localizes to closed leaves} generalizes \cite[Corollary 1]{gorokhovsky}, where this result is obtained from the study of the index of transverse operators under additional assumptions on $\f$. \begin{corollary}\label{corollary: if no closed leaves then euler char vanishes} If a Killing foliation $\f$ of a compact manifold has no closed leaves, then $\chi_B(\f)=0$. \end{corollary} The theorem by P.~Conner \cite{conner} mentioned in the beginning of this chapter is originally stated for the set of fixed points of a torus action instead of $\Zero(X)$ (the statement for $\Zero(X)$ can be recovered by considering the action of the closure in $\mathrm{Iso}(M)$ of the subgroup generated by the flow of $X$). 
In analogy, if $\f$ is a Killing foliation, the stratum of closed leaves $\Sigma^{\dim\f}$ corresponds to the fixed-point set of the transverse action of the Abelian algebra $\mathfrak{g}_\f$. The following result can thus be seen as a version of Conner's Theorem for foliations. \begin{corollary}\label{corollary: Conner via goertsches} Let $\f$ be a transversely oriented Killing foliation of a compact manifold $M$. Then \begin{enumerate}[(i)] \item $\displaystyle \sum_i b^{2i}_B(\f)\geq \sum_i b^{2i}_B(\f|_{\Sigma^{\dim\f}})$, \item $\displaystyle \sum_i b^{2i+1}_B(\f)\geq \sum_i b^{2i+1}_B(\f|_{\Sigma^{\dim\f}})$. \end{enumerate} \end{corollary} \begin{proof} It is shown in \cite[Theorem 5.5]{goertsches} that \begin{equation}\label{eq: toben conner}\sum_i b^{i}_B(\f)\geq\sum_i b^i_B(\f|_{\Sigma^{\dim\f}}).\end{equation} On the other hand, $\chi_B(\f)=\chi(\Sigma^{\dim\f}/\f)$ gives \begin{equation}\label{eq: corolary expanded} \sum_i (-1)^ib^{i}_B(\f)=\sum_i (-1)^ib^{i}_B(\f|_{\Sigma^{\dim\f}}).\end{equation} By adding \eqref{eq: toben conner} and \eqref{eq: corolary expanded} we get the first item and by subtracting \eqref{eq: corolary expanded} from \eqref{eq: toben conner} we get the second one. \end{proof} \subsection{The basic Euler characteristic under deformations} We will now show that the basic Euler characteristic is preserved by the deformation given in Theorem \ref{theorem: Haefliger deformation}. This is somewhat surprising, because the basic Betti numbers, in general, are not preserved. \begin{theorem}\label{theorem: basic euler char is preserved by deformations} Let $\f$ be a Killing foliation of a compact manifold $M$ and let $\g$ be obtained from $\f$ by a deformation as in Theorem \ref{theorem: Haefliger deformation}. Then $$\chi_B(\f)=\chi_B(\g).$$ \end{theorem} \begin{proof} From Theorem \ref{theorem: Haefliger deformation} we have that $M/\overline{\f}=(M/\g)/\mathbb{T}^d$.
In particular, the points in $(M/\g)^{\mathbb{T}^d}$ correspond to the closed leaves of $\f$, that is, $(M/\g)^{\mathbb{T}^d}=\Sigma^{\dim\f}/\f$. Hence $$\chi_B(\g)=\chi(M/\g)=\chi\left((M/\g)^{\mathbb{T}^d}\right)=\chi(\Sigma^{\dim\f}/\f) = \chi_B(\f),$$ where the second equality follows from the theory of continuous torus actions \cite[Theorem 10.9]{bredon} and the last one from Theorem \ref{theo: localization basic euler char}. \end{proof} \section{Transverse Hopf conjecture} A well-known conjecture by H.~Hopf states that any even-dimensional, compact Riemannian manifold $M$ with positive sectional curvature must have positive Euler characteristic. In dimension $2$ the conjecture holds, for if $\ric_M$ is bounded below by a positive constant we have $|\pi_1(M)|<\infty$, hence $H_1(M,\mathbb{R})=0$ by the Hurewicz Theorem. In dimension $4$, passing to the orientable double cover and using Poincaré duality we see that the conjecture also holds. For dimensions larger than $4$ the full conjecture is still an open problem; it holds, however, when the symmetry rank of $M$ is sufficiently large. For example, if $\dim(M)=6$ and $M$ admits a non-trivial Killing vector field $X$, then, by the lower-dimensional cases, $0<\chi(\Zero(X))=\chi(M)$. For dimensions larger than $6$, there is the following result by T.~Püttmann and C.~Searle \cite[Theorem 2]{puttmann}. \begin{theorem}[Püttmann--Searle]\label{theorem: Puttmann-Searle classic} Let $M$ be an $n$-dimensional complete Riemannian manifold such that $\sec_M\geq c>0$. If $n$ is even and $\symrank(M)\geq n/4-1$, then $\chi(M)>0$.\end{theorem} Our goal in this section is to obtain a transverse version of this result for Killing foliations. Let us begin by trying to directly generalize the low-codimensional cases from the manifold counterparts that we saw above. Suppose that $\f$ is a $2$-codimensional complete Riemannian foliation on $M$ and that $\ric_\f\geq c>0$.
By Theorem \ref{theorem: hebda} it follows that $H_B^1(\f)=0$. Furthermore, since $\Omega_B^0(\f)$ consists of the basic functions, which are precisely the functions $f:M\to\mathbb{R}$ that are constant on the closures of the leaves, we have $H_B^0(\f)= H_{\mathrm{dR}}^0(M)$, thus $$\chi_B(\f)=b_B^0(\f)+b_B^2(\f)=\dim H_{\mathrm{dR}}^0(M)+b_B^2(\f)\geq 1,$$ which establishes the following. \begin{proposition}\label{proposition: chi positive for codim 2} Let $\f$ be a $2$-codimensional complete Riemannian foliation satisfying $\ric_\f\geq c>0$. Then $\chi_B(\f)>0$. \end{proposition} In order to prove the foliation version of Hopf's conjecture for $4$-codimensional foliations we need Poincaré duality to hold for the basic cohomology complex. Combining Theorem \ref{theorem: hebda} with the results in \cite{lopez} we see that $\ric_\f>0$ is a sufficient condition for this to happen, provided the foliation is also transversely oriented \cite[Corollary 6.2, Theorem 6.4 and Corollary 6.5]{lopez}. \begin{lemma}[López]\label{lemma: poincare duality for positive ricci} Let $\f$ be a transversely oriented complete Riemannian foliation of a compact manifold such that $\ric_\f>0$. Then $H^i_B(\f)\cong H^{q-i}_B(\f)$. \end{lemma} \begin{proposition}\label{proposition: chi positive for codim 4} Let $\f$ be a $4$-codimensional, transversely oriented Riemannian foliation of a compact manifold $M$ satisfying $\ric_\f>0$. Then $\chi_B(\f)> 0$. \end{proposition} \begin{proof} Since $H_B^1(\f)=0$, by Theorem \ref{theorem: hebda}, and $H_B^1(\f)\cong H_B^3(\f)$ by Lemma \ref{lemma: poincare duality for positive ricci}, we have $\chi_B(\f)=b_B^0(\f)+b_B^2(\f)+b_B^4(\f)\geq 2$.
\end{proof} \subsection{Foliations admitting a Killing vector field with a large zero set} The Grove--Searle classification of manifolds with maximal symmetry rank and positive sectional curvature is achieved in \cite{grove} by reducing it to the classification of positively curved manifolds admitting a Killing vector field $X$ such that $\Zero(X)$ has a connected component $N$ with $\codim(N)=2$ \cite[Theorem 1.2]{grove}. In this section we will study, analogously, the leaf spaces of closed Riemannian foliations that admit a transverse Killing vector field $\overline{X}$ such that $\codim(N)=2$ for some connected component $N$ of $\Zero(\overline{X})$. We do so because it will be useful later in our pursuit of a transverse version of Theorem \ref{theorem: Puttmann-Searle classic}. We begin with the following transverse version of Frankel's Theorem. \begin{lemma}\label{lemma: Fraenkel for foliations} Let $(M,\f)$ be a complete Riemannian foliation with $\sec_\f\geq c>0$ and let $N$ and $N'$ be $\f$-saturated, horizontally totally geodesic, compact submanifolds such that $\codim(\f|_{N})+\codim(\f|_{N'})\geq\codim(\f)$. Then $N\cap N'\neq \emptyset$. \end{lemma} The proof is similar to that of the classical case \cite[Theorem 1]{fraenkel} if one works with $\mathscr{H}_\f$-geodesics on a total transversal $T_\f$, so we will omit it. \begin{theorem}\label{theorem: quotient when there is a codimension 2 zero set} Let $\f$ be a $q$-codimensional, closed Riemannian foliation of a closed manifold $M$. Suppose that $q$ is even, $\sec_\f\geq c>0$ and $\overline{X}\in\mathfrak{iso}(\f)$ satisfies $\codim(N)=2$ for some connected component $N$ of $\Zero(\overline{X})$. Then $M/\f$ is homeomorphic to the quotient space of either $\mathbb{S}^q$ or $|\mathbb{CP}^{q/2}[\lambda]|$ by the action of a finite group. \end{theorem} \begin{proof} Denote $\mathcal{O}:=M/\!/\f$. Then $\overline{X}$ induces $\overline{X}_\mathcal{O}\in\mathfrak{iso}(\mathcal{O})$.
Let $T$ be the closure of the subgroup generated by the flow of $\overline{X}_\mathcal{O}$ in $\mathrm{Iso}(\mathcal{O})$. It is clear that $T$ is a torus and that $\overline{N}:=\pi(N)$ is a connected component of the fixed-point set $\mathcal{O}^T$. Choose a closed $1$-parameter subgroup $\mathbb{S}^1<T$ such that $\mathcal{O}^{\mathbb{S}^1}=\mathcal{O}^T$ \cite[Lemma 4.2.1]{allday}. Then $|\mathcal{O}|$, with the distance function induced by $\mathrm{g}^T$, is a positively curved Alexandrov space admitting fixed-point homogeneous action of $\mathbb{S}^1$, in the terminology of \cite[Section 6]{harvey}. By \cite[Theorem 6.5]{harvey}, there is a unique orbit $\mathbb{S}^1x$ at maximal distance from $\overline{N}$, the ``soul'' orbit, and there is an $\mathbb{S}^1$-equivariant homeomorphism $$|\mathcal{O}|\cong \frac{\nu_x*\mathbb{S}^1}{\mathbb{S}^1_x},$$ where $*$ denotes the join operation, $\nu_x$ is the space of normal directions to $\mathbb{S}^1x$ at $x$ and $\mathbb{S}^1_x$ acts on the left on $\nu_x*\mathbb{S}^1$, the action on $\nu_x$ being the isotropy action and the action on $\mathbb{S}^1$ being the inverse action on the right. Notice that $\nu_x$ can be identified with $\mathbb{S}^{\codim(\mathbb{S}^1x)-1}/\Gamma_x$, where $\mathbb{S}^{\codim(\mathbb{S}^1x)-1}$ is the unit sphere in $T_x(\mathbb{S}^1x)^\perp\subset T_x\mathcal{O}$ and $\Gamma_x$ the local group of $\mathcal{O}$ at $x$. By \cite[Proposition 2.12 and Corollary 2.13]{galazgarcia} we can choose an orbifold chart $(\widetilde{U},\Gamma_x,\phi)$ and an extension $$0\longrightarrow\Gamma_x\longrightarrow\widetilde{\mathbb{S}}^1_x\stackrel{\rho}{\longrightarrow} \mathbb{S}^1_x\to 0$$ acting on $\widetilde{U}$ with $\widetilde{U}/\widetilde{\mathbb{S}}^1_x=U/\mathbb{S}^1_x$ (let us denote this action by $\mu$). We now consider separately the cases when $\mathbb{S}^1_x$ is a finite cyclic group $\mathbb{Z}_r$ and when $\mathbb{S}^1_x=\mathbb{S}^1$. Suppose that $\mathbb{S}^1_x\cong \mathbb{Z}_r$. 
Then $\dim(\mathbb{S}^1x)=1$, hence $\nu_x\cong \mathbb{S}^{q-2}/\Gamma_x$, and $\widetilde{\mathbb{S}}^1_x$ is finite. Recall that there is an isometry $\mathbb{S}^{m}*\mathbb{S}^{n}\cong \mathbb{S}^{m+n+1}$ when we realize $\mathbb{S}^{m}*\mathbb{S}^{n}$ via the map $$ \begin{array}{rcl} \mathbb{S}^m\times\mathbb{S}^n\times [0,1]& \longrightarrow &\mathbb{S}^{m+n+1}\subset\mathbb{R}^{m+n+2}\\ (s_1,s_2,t) & \longmapsto & \left(\cos\left(\frac{\pi}{2}t\right)s_1,\sin\left(\frac{\pi}{2}t\right)s_2\right). \end{array}$$ If we define an isometric action of $\widetilde{\mathbb{S}}^1_x$ on $\mathbb{S}^q\cong\mathbb{S}^{q-2}*\mathbb{S}^1$ via this map by $$\left(g,\left(\cos\left(\frac{\pi}{2}t\right)s_1,\sin\left(\frac{\pi}{2}t\right)s_2\right)\right)\longmapsto \left(\cos\left(\frac{\pi}{2}t\right)\dif(\mu^g)_{\widetilde{x}}s_1,\sin\left(\frac{\pi}{2}t\right)s_2\rho(g)^{-1}\right),$$ we get $$|\mathcal{O}|\cong \frac{\mathbb{S}^{q-2}/\Gamma_x*\mathbb{S}^1}{\mathbb{Z}_r}\cong\frac{\mathbb{S}^q}{\widetilde{\mathbb{S}}^1_x},$$ which exhibits $|\mathcal{O}|$ as a finite quotient of a sphere. Now suppose $\mathbb{S}^1_x\cong \mathbb{S}^1$. Then $\dim(\mathbb{S}^1x)=0$ and $\nu_x\cong \mathbb{S}^{q-1}/\Gamma_x$, so, similarly, we have $$|\mathcal{O}|\cong \frac{\mathbb{S}^{q-1}/\Gamma_x*\mathbb{S}^1}{\mathbb{S}^1}\cong \frac{\mathbb{S}^{q-1}*\mathbb{S}^1}{\widetilde{\mathbb{S}}^1_x}\cong \frac{\mathbb{S}^{q+1}}{\widetilde{\mathbb{S}}^1_x}.$$ In this case $x$ is a fixed point of the $\mathbb{S}^1$ action and therefore corresponds to a leaf $L$ of $\f$ where $\overline{X}$ vanishes. We claim that $x$ is an isolated fixed point. 
Indeed, if this were not the case, since $q=\codim(\f)$ is even, the connected component $N_L$ of $\Zero(\overline{X})$ containing $L$ would satisfy $\codim(N_L)\leq q-2$ (see Proposition \ref{proposition: propriedade zero killing}), hence $$\codim(\f|_N)+\codim(\f|_{N_L})=2q-\codim(N)-\codim(N_L)\geq q,$$ so, by Lemma \ref{lemma: Fraenkel for foliations}, $N_L\cap N\neq\emptyset$, which translates to $x\in \overline{N}$, a contradiction. It follows, therefore, that $\widetilde{\mathbb{S}}^1_x$ acts almost freely on the first join factor $\mathbb{S}^{q-1}$ that corresponds to the space of normal directions to $\mathbb{S}^1x$ and, hence, that its induced action on $\mathbb{S}^{q+1}$ is also almost free. Let $\mathcal{E}$ be the Riemannian foliation of $\mathbb{S}^{q+1}$ given by the connected components of the orbits of $\widetilde{\mathbb{S}}^1_x$. Notice that $\widetilde{\mathbb{S}}^1_x$ may be disconnected, but the connected component $(\widetilde{\mathbb{S}}^1_x)_0$ of the identity is a circle whose action defines the same foliation $\mathcal{E}$. Thus, denoting by $\Lambda$ the finite group $\widetilde{\mathbb{S}}^1_x/(\widetilde{\mathbb{S}}^1_x)_0$, it follows that $$\frac{\mathbb{S}^{q+1}}{\widetilde{\mathbb{S}}^1_x}\cong\frac{\mathbb{S}^{q+1}/(\widetilde{\mathbb{S}}^1_x)_0}{\Lambda}=\frac{\mathbb{S}^{q+1}/\mathcal{E}}{\Lambda},$$ where the action of $\Lambda$ identifies the points in $\mathbb{S}^{q+1}/\mathcal{E}$ corresponding to the same $\widetilde{\mathbb{S}}^1_x$-orbit. In view of the classification of Riemannian $1$-foliations of the sphere \cite[Theorem 5.4]{gromoll2}, and since $\mathcal{E}$ is closed, we obtain $|\mathcal{O}|\cong |\mathbb{CP}^{q/2}[\lambda]|/\Lambda$. \end{proof} \subsection{Püttmann--Searle Theorem for orbifolds} Recall that we obtained a transverse version of Conner's Theorem in Corollary \ref{corollary: Conner via goertsches}.
Notice, however, that a complete analog of Conner's theorem should be stated for the zero set of any transverse Killing vector field, in place of $\Sigma^{\dim\f}$. Unfortunately, the result in \cite{goertsches} that we use in the proof of Corollary \ref{corollary: Conner via goertsches} cannot be adapted to show this for a general Killing foliation. The following version for closed foliations, however, will be useful. \begin{proposition}\label{proposition: Conner for closed foliations} Let $\f$ be a closed Riemannian foliation of a closed manifold $M$ and let $\overline{X}\in\mathfrak{iso}(\f)$. Then $$\sum_i b^{2i+k}_B(\f)\geq \sum_i b^{2i+k}_B(\f|_{\Zero(\overline{X})}),$$ for $k=0,1$. \end{proposition} \begin{proof} Let us denote $\mathcal{O}:=M/\!/\f$. By Proposition \ref{prop: basic cohomology of closed foliations} and Theorem \ref{theorem: Satake} we have $$b^i_B(\f)=\dim(H_{\mathrm{dR}}^i(\mathcal{O}))=\dim(H^i(|\mathcal{O}|,\mathbb{R})).$$ Furthermore, since $|\mathcal{O}|$ is paracompact and locally contractible, its singular cohomology coincides with its \v{C}ech cohomology, so we also have $$b^i_B(\f)=\rank (\check{H}^i(|\mathcal{O}|,\mathbb{R})).$$ Now consider the closure of the subgroup generated by the flow of the induced Killing vector field $\overline{X}_{\mathcal{O}}\in\mathfrak{iso}(\mathcal{O})$. This subgroup is a torus $T<\mathrm{Iso}(\mathcal{O})$ and $\Zero(\overline{X}_{\mathcal{O}})=|\mathcal{O}|^T$, the fixed-point set of its action. From the theory of continuous torus actions \cite[Theorem 10.9]{bredon} it follows that $$\sum_i \rank (\check{H}^{2i+k}(|\mathcal{O}|,\mathbb{R}))\geq \sum_i \rank (\check{H}^{2i+k}(|\mathcal{O}|^T,\mathbb{R})).\qedhere$$ \end{proof} The lemma below is a transverse version of Theorem \ref{theorem: Puttmann-Searle classic} for closed foliations, from which the orbifold analogue of the Püttmann--Searle Theorem will be a direct consequence. 
\begin{lemma}\label{lemma: putmann searle for closed foliations} Let $\f$ be a $q$-codimensional, transversely orientable, closed Riemannian foliation of a closed manifold $M$ and let $N\in\mathcal{Z}(\mathfrak{a})$, where $\mathfrak{a}<\mathfrak{iso}(\f)$ is any Abelian Lie subalgebra such that $\dim(\mathfrak{a})=\symrank(\f)$. If $q$ is even, $\sec_\f>0$ and $\symrank(\f)\geq q/4-1$, then $\chi_B(\f|_N)>0$. In particular, $\chi_B(\f)>0$. \end{lemma} \begin{proof} We proceed by induction on $q$. Notice that $\codim(\f|_N)=\codim(\f)-\codim(N)$ is always even and $\sec_{\f|_N}>0$, by Proposition \ref{proposition: propriedade zero killing}. For $q< 6$ the result follows directly from Propositions \ref{proposition: chi positive for codim 2} and \ref{proposition: chi positive for codim 4}. For the induction step, take a maximal $N'\in\mathcal{Z}(\mathfrak{a})$ such that $N\subset N'$. We now have two cases. If $\codim(N')\geq 4$, then $\codim(\f|_{N'})\leq\codim(\f)-4$. Therefore, by Proposition \ref{proposition: propriedades de Zero killing}, \begin{eqnarray*} \symrank(\f|_{N'}) & = & \dim(\mathfrak{a}|_{N'})=\dim(\mathfrak{a})-1=\symrank(\f)-1\\ & \geq & \frac{\codim(\f)-4}{4}-1\geq\frac{\codim(\f|_{N'})}{4}-1,\end{eqnarray*} so $(N',\f|_{N'})$ satisfy the induction hypothesis and $\chi_B(\f|_N)>0$, because $N\in\mathcal{Z}(\mathfrak{a}|_{N'})$. If $\codim(N')= 2$, then using Theorem \ref{theorem: quotient when there is a codimension 2 zero set} we have that $$M/\f\cong\begin{cases} |\mathbb{CP}^{\frac{q}{2}}[\lambda]|/\Lambda \text{ or}\\ \mathbb{S}^q/\Lambda, \end{cases}$$ where $\Lambda$ is a finite group in either case. Now, by \cite[Theorem 7.2]{bredon}, $$\check{H}^*(M/\f,\mathbb{R})\cong\begin{cases} \check{H}^*(\mathbb{CP}^{\frac{q}{2}}[\lambda],\mathbb{R})^\Lambda \text{ or}\\ \check{H}^*(\mathbb{S}^q,\mathbb{R})^\Lambda. 
\end{cases}$$ Since all odd Betti numbers of both $\mathbb{CP}^{q/2}[\lambda]$ and $\mathbb{S}^q$ vanish, we obtain $b_{2i+1}(M/\f)=b^{2i+1}_B(\f)=0$ for all $i\geq 0$. From Proposition \ref{proposition: Conner for closed foliations}, $$0\geq \sum_i b^{2i+1}_B(\f|_{\Zero(\overline{X})})\geq \sum_i b^{2i+1}_B(\f|_N),$$ so $b^{2i+1}_B(\f|_N)$ also vanishes for all $i$. In particular, $\chi_B(\f|_N)>0$. \end{proof} \begin{corollary}[Püttmann--Searle Theorem for orbifolds]\label{corollary: Putmann for orbifolds} Let $\mathcal{O}$ be a compact, orientable $n$-dimensional Riemannian orbifold such that $\sec_{\mathcal{O}}>0$. If $n$ is even and $\symrank(\mathcal{O})\geq n/4-1$, then $\chi(|\mathcal{O}|)>0$. \end{corollary} \begin{proof} By \cite[Proposition 5.21]{alex}, $\mathcal{O}$ is the leaf space of a closed, transversely orientable, $n$-codimensional Riemannian foliation $\f$ of a closed manifold, satisfying $\sec_\f =\sec_{\mathcal{O}}>0$. Moreover, the identification $\mathfrak{iso}(\mathcal{O})\cong\mathfrak{iso}(\f)$ gives us that $\symrank(\f)\geq n/4-1$, therefore $\f$ satisfies the hypotheses of Lemma \ref{lemma: putmann searle for closed foliations}. From Proposition \ref{prop: basic cohomology of closed foliations} and Theorem \ref{theorem: Satake} it follows that $$\chi(|\mathcal{O}|)=\sum_i(-1)^i\dim(H_{\mathrm{dR}}^i(\mathcal{O}))=\chi_B(\f)>0.\qedhere$$ \end{proof} \subsection{Proof of Theorem \ref{theorem: puttmann for foliations}} Since $M$ is compact we actually have $\sec_\f\geq c>0$ for some constant $c$. Therefore, we can use Theorem \ref{theorem: Haefliger deformation} to deform $\f$ into a closed Riemannian foliation $\g$ such that $\sec_\g>0$ and $\symrank(\g)\geq d=\dim(\overline{\f})-\dim(\f)$.
By Lemma \ref{lemma: putmann searle for closed foliations} and Theorem \ref{theorem: basic euler char is preserved by deformations} we have $$\chi_B(\f)=\chi_B(\g)>0.$$ \section{A topological obstruction}\label{section: topological obstruction} Let $(M,\f)$ be a Riemannian foliation and denote by $\mathrm{G}_\f$ the \textit{holonomy groupoid} of $\f$ (see \cite[Proposition 5.6]{mrcun} or \cite[Section 1.1]{haefliger3}). The study of classifying spaces of topological groupoids that appears in \cite{haefliger3} shows that there is a locally trivial fibration $E\mathrm{G}_\f\to B\mathrm{G}_\f$, whose fiber is a generic leaf of $\f$ \cite[Corollaire 3.1.5]{haefliger3}, and a commutative diagram $$\xymatrix{ E\mathrm{G}_\f \ar[dd] \ar[dr]^-{\zeta} & \\ & M \ar[dl]^-{\Upsilon}\\ B\mathrm{G}_\f &}$$ where $\zeta$ is a homotopy equivalence \cite[Corollaire 3.1.4]{haefliger3}. Analogously, this construction can be applied to the groupoid $\mathrm{G}^T_\f$ of a representative $(T_\f,\mathscr{H}_\f)$ of the holonomy pseudogroup of $\f$ and, since $\mathrm{G}_\f$ and $\mathrm{G}^T_\f$ are equivalent \cite[p. 81]{haefliger3}, it follows from \cite[Corollaire 3.1.3]{haefliger3} that $B\mathrm{G}_\f$ and $B\mathrm{G}^T_\f$ are homotopy equivalent. Moreover, notice that when $\f$ is a closed foliation, $B\mathrm{G}^T_\f$ coincides with the classifying space of the orbifold $M/\!/\f$, so the above results show that, at least from the homotopy theoretic point of view, closed foliations behave essentially like fibrations. We can now combine these facts with the invariance of the basic Euler characteristic under deformations (see Theorem \ref{theorem: basic euler char is preserved by deformations}) to obtain the following. \begin{theorem}\label{theorem: closed leaf + transverse symmetry implies charM vanishes} Let $\f$ be a Riemannian foliation of a closed, simply-connected manifold $M$. If $\chi(M)\neq 0$ then $\f$ is closed.
\end{theorem} \begin{proof} Assume that $\chi(M)\neq 0$ and that $\f$ is not closed. By \cite[Théorème 3.5]{ghys}, there is a closed leaf $L\in\f$. If we choose a closed foliation $\g$ near $\f$ via Theorem \ref{theorem: Haefliger deformation}, then $L$ is also a leaf of $\g$, because the deformation respects $\overline{\f}$. Fix a generic leaf $\widehat{L}\in\g$ near $L$. By the results in \cite{haefliger3} presented above, we have a locally trivial fibration $E\mathrm{G}_\g\to B\mathrm{G}_\g$ with fiber $\widehat{L}$ and a commutative diagram $$\xymatrix{ & E\mathrm{G}_\g \ar[dd] \ar[dr]^-{\zeta} & \\ & & M \ar[dl]^-{\Upsilon}\\ B\mathcal{O}\ar[r]^-{h} & B\mathrm{G}_\g &}$$ where $\mathcal{O}$ denotes $M/\!/\g$ and both $\zeta$ and $h$ are homotopy equivalences. By the homotopy exact sequence of $E\mathrm{G}_\g\to B\mathrm{G}_\g$ we see that $B\mathrm{G}_\g$ is also simply-connected, hence the Euler characteristic (with real coefficients) has the product property $$\chi(M)=\chi(E\mathrm{G}_\g)=\chi(\widehat{L})\chi(B\mathrm{G}_\g).$$ Moreover, it is known that $H^*(B\mathcal{O},\mathbb{R})\cong H^*(|\mathcal{O}|,\mathbb{R})$ \cite[Corollary 4.3.8]{boyer}, so \begin{equation}\chi(M)=\chi(\widehat{L})\chi(|\mathcal{O}|)=\chi(\widehat{L})\chi_B(\g)=\chi(\widehat{L})\chi_B(\f),\label{eq: fibration property basic euler char}\end{equation} where we use Theorem \ref{theorem: basic euler char is preserved by deformations} and Proposition \ref{prop: basic cohomology of closed foliations}. Local Reeb stability asserts that the restriction of the orthogonal projection $\mathrm{Tub}_\varepsilon(L)\to L$ to $\widehat{L}$ is an $h(\g)$-sheeted covering map $\widehat{L}\to L$, where $h(\g)=|\mathrm{Hol}(L)|<\infty$ (the holonomy being with respect to $\g$).
Hence $\chi(\widehat{L})=h(\g)\chi(L)$ and equation \eqref{eq: fibration property basic euler char} can be rewritten as \begin{equation}\chi(M)=h(\g)\chi(L)\chi_B(\f).\label{eq: fibration property basic euler char 2}\end{equation} On the other hand, by Lemma \ref{lemma: variando a holonomia}, for a sequence $\g_i$ of closed foliations approaching $\f$ we must have $h(\g_{i})\to\infty$. Since $\chi(M)\neq 0$ while $\chi(L)$ and $\chi_B(\f)$ do not depend on $\g_i$, equation \eqref{eq: fibration property basic euler char 2} forces $h(\g_i)$ to take a single fixed value, a contradiction. \end{proof} \subsection{Proof of Theorem \ref{teo: topological obstruction}} Suppose $M$ is a compact manifold satisfying $|\pi_1(M)|<\infty$ and $\chi(M)\neq0$, and let $\f$ be a Riemannian foliation of $M$. We denote by $\widehat{\f}$ the lift of $\f$ to the universal covering $\widehat{M}$. Then we have $\chi(\widehat{M})=|\pi_1(M)|\chi(M)\neq0$, so $\widehat{\f}$ is a closed foliation, by Theorem \ref{theorem: closed leaf + transverse symmetry implies charM vanishes}. It now follows from Corollary \ref{corollary: algebra estrutural e levantamentos finitos} that $\f$ is also closed.\qed
\section{Introduction} \label{sec:introduction} \textit{Multiobjective optimization} is the study of algorithms for optimization problems associated with two or more objective functions. Such problems arise naturally in prescriptive decision making, where decision makers often face conflicting criteria that balance trade-offs associated with a proposed solution. Practical applications of multiobjective optimization are pervasive and found in a diverse set of domains. This includes, for example, production planning for supply chains \citep{dickersbach2015supplychain}, radiotherapy optimization in healthcare \citep{radiotherapyYu2009}, and aerodynamic design \citep{aerodynamicWang2011}, which may involve from a few up to potentially hundreds of objective functions. Furthermore, multiobjective optimization can also be used as an alternative method to solve single-objective problems \citep{bodur2016decomposition}, further underscoring the importance of techniques to tackle these computationally challenging problems. \carlos{ In this paper, we focus on multiobjective \textit{discrete} optimization problems (MODOs) that admit recursive formulations; in such problems, variables may only assume values from a finite set.} There is a rich history of the study of techniques for addressing MODOs, often leveraging advances in integer programming (IP) technology; see, e.g., in-depth surveys by \citealt{ehrgott2006discussion} and \citealt{ZhoQuHiZhaSugZha11}. Specifically, current state-of-the-art methodologies rely on parametric single-objective reformulations of MODOs, employing commercial IP solvers as black boxes within their algorithms. The Pareto frontier, however, cannot be fully recovered by such linear parametric models in general \citep{sayin2005multiobjective}. These methods therefore rely on explicit enumeration techniques, limiting the size of problems that can be solved in terms of both the number of variables and the number of objective functions.
\textit{Contributions.} This paper presents a new approach for modeling and solving MODOs \carlos{that admit recursive formulations}. Our framework reformulates the problem of identifying points in the Pareto frontier as a multicriteria shortest path problem \citep{GarGioTav10} over a structured network. In particular, the network implicitly represents the objective space in a compact way by exploiting symmetry and dominance relationships between solutions, which can be defined either generically or in a problem-specific form. Our first contribution is the design of two approaches for obtaining a valid network for a MODO. One is based on a recursive model and the other is extracted from a decision diagram representation of the problem \citep{bergman2016decision}. Both techniques leverage construction procedures already available in the literature, some of which are inspired by early dynamic programming (DP) approaches for multiobjective optimization \citep{carraway1990generalized}. Nonetheless, we exploit the network structure as opposed to the recursive formulation directly, which provides two benefits. First, network representations do not require linear formulations, thereby broadening modeling expressiveness over other popular techniques. Second, network models inherently exploit symmetry by combining subpaths that correspond to common objective function evaluations, decreasing the enumeration requirements of previous techniques. As our second contribution, we propose \textit{validity-preserving operations} (VPOs) designed to reduce the size of a network while maintaining validity. A network for a MODO is, in general, exponentially large in the input size of the problem. Even if the network itself is of manageable size, computing a multicriteria shortest path may take a prohibitively long time. VPOs simplify the network without modifying the Pareto frontier, leading to significant reductions in the number of arcs and nodes, and hence in computational time.
We explore VPOs that are based solely on the network itself (e.g., removing arcs/nodes, merging nodes) and VPOs that exploit domain-specific features of the problem (e.g., using DP state-based information to identify dominance). Finally, we present an extensive numerical study on five problem classes to compare our network model approach with two state-of-the-art MODO algorithms. We consider the knapsack, set covering, set packing, and traveling salesperson problems, which are commonly used as benchmarks in the MODO literature, in addition to a MODO with nonlinear terms in the objective motivated by an application in regression-based models. \andre{ Our experiments indicate that the proposed approach outperforms the state-of-the-art by orders of magnitude \carlos{for problem classes where recursive models and state-space relaxations are known to be effective; in particular, the results show a significant expansion} in the size of problems that can be solved, specifically in terms of the number of objective functions (up to seven). This is a particularly limiting factor in existing approaches, severely restricting the applicability of multiobjective optimization in real scenarios beyond a few objectives, as highlighted by \cite{Duro2017}. Examples of applications where the Pareto frontier for four or more objectives is desired include protein structure prediction \citep{Regina2013}, computational sustainability \citep{Gomes2018}, storm drainage and work roll cooling \citep{Duro2017}, to name a few. In practice, the Pareto frontier fully characterizes the trade-offs among solutions and is post-processed in interactive decision support systems \citep{stewart2008real}. This allows practitioners to prioritize objectives and operational aspects based on their technical expertise, often on a case-by-case basis, and can be used instead of, or in conjunction with, typical scalarization techniques when multiple objectives are present.
} The remainder of this paper is organized as follows. \S\ref{sec:literatureReview} provides a literature review of MODO, specifically as it relates to the present paper. \S\ref{sec:multi-objectiveDiscreteOptimization} and \S\ref{sec:network-models} formally define MODOs and network models, respectively. \S\ref{sec:NMConstructionAlgos} describes network construction algorithms. \S\ref{sec:VPOs} presents VPOs, and \S\ref{sec:findingTheParetoFrontier} presents the multicriteria shortest path algorithms we employ for enumerating the Pareto frontier. \S\ref{sec:copmutationalInsights} describes the results of an experimental evaluation on five problem classes. We conclude and describe future work in \S\ref{sec:conclusionAndFutureWork}. Proofs which do not appear in the main text are presented in the appendix,~\S\ref{sec:proofs}. \section{Literature Review} \label{sec:literatureReview} There is an extensive literature on exact algorithms for generating the Pareto frontier of a MODO. In general, these approaches can be divided into two main classes: those based on \emph{criterion-space search}, and those based on \emph{decision-space search} \citep{ehrgott2016exact,ehrgott2006discussion}. Criterion-space search relies on \emph{scalarizations}, most commonly based on a combination of weighted sums and $\epsilon$-constraints. Weighted-sum methods iteratively solve a single-objective optimization version of the problem where the single objective is defined by various positive-weight combinations of the original objectives. For general MODOs, however, only a portion of the Pareto frontier (those points referred to as \emph{supported} efficient points) can be identified by this approach alone. The remaining points (\emph{unsupported} efficient points) can be found through the $\epsilon$-constraint method, introduced by \cite{haimes1971bicriterion}, which optimizes one of the original objective functions with the other objective functions transformed into parametrized constraints.
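For two objectives, the $\epsilon$-constraint scheme just described can be sketched as follows. This is a minimal illustration of the idea only, not an implementation of any of the cited algorithms: it assumes a finite list of integer-valued objective vectors and brute-forces the single-objective subproblem that an IP solver would handle in practice; the function name is ours.

```python
def epsilon_constraint(points):
    """Enumerate the Pareto frontier of a finite set of biobjective (max)
    integer-valued images via the epsilon-constraint scheme: repeatedly
    maximize f1 subject to f2 >= eps, breaking f1-ties lexicographically
    by f2 (which avoids weakly dominated points), then tighten eps just
    past the point found."""
    frontier = []
    eps = min(y[1] for y in points)
    while True:
        feasible = [y for y in points if y[1] >= eps]
        if not feasible:
            return frontier
        y = max(feasible, key=lambda p: (p[0], p[1]))  # lexicographic max
        frontier.append(y)
        eps = y[1] + 1  # valid step size because objectives are integers
```

Each iteration solves one single-objective subproblem, so the number of subproblems grows with the size of the frontier; much of the literature discussed next concerns making such iterations efficient and extending them beyond two objectives.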
\cite{kirlik2014new} and \cite{Ozlen2013} provide the state-of-the-art criterion-space search algorithms for MODOs with an arbitrary number of objectives. Both algorithms are based on variants of scalarization techniques. These variants build upon the early work by \cite{klein1982algorithm}, who suggested an iterative approach to generate a subset of the Pareto frontier while refining the search space by the addition of disjunctive constraints. \cite{sylva2004method} extended this work to an exact algorithm by reformulating the disjunctive conditions as big-$M$ constraints, which was further improved by \cite{lokman2013finding} and \cite{bektas2016disjunctive}. The two-phase method was extended to more than two objectives by \cite{tenfelde2003recursive} and further enhanced by \cite{dhaenens2010k}. Another generalization of the two-phase method is proposed by \cite{przybylski2010two}. \cite{ozlen2009multi}, in turn, developed an alternative approach called the \textit{augmented} $\epsilon$-constraint method, which became one of the state-of-the-art methods after later enhancements by \cite{Ozlen2013}. Another improvement to the augmented $\epsilon$-constraint method is the recursive methodology proposed by \cite{laumanns2005adaptive,laumanns2006efficient}, further refined by \cite{kirlik2014new} into a computationally practical approach. We note that \cite{boland2016new} developed an extension of the so-called L-shape search method (specific to triobjective MODOs) that optimizes a linear function over the set of nondominated points, which can also be used to enumerate the Pareto frontier. Other scalarization methods include the (lexicographic or augmented) weighted Tchebycheff scalarization \citep{steuer1983interactive}. The majority of the criterion-space search literature focuses on \emph{biobjective} problems, where the special structure resulting from only having two objectives can be exploited; see, e.g., \cite{ralphs2006improved,sayin2005multiobjective,boland2015criterion,parragh2015branch}.
Extensions of these ideas have also been proposed for triobjective MODOs, which iteratively decompose the search space into smaller regions, and apply efficient ways to explore and refine these regions \citep{dachert2015linear,BolandLSM2016,boland2016quadrant}. Decision-space search methods operate over the space defined by the original decision variables. These techniques are typically based on branch-and-bound search developed for mixed-integer linear programs. The first of such algorithms was proposed by \cite{mavrotas1998branch} for binary MODOs. The algorithm uses an artificial ideal point to define a bounding set, and discovers nondominated points by solving (via a criterion-space search algorithm) the multiobjective linear programs obtained when all binary variables are fixed. \cite{mavrotas2005multi} observed that this branch-and-bound in fact generates a superset of the Pareto frontier, and proposed filtering algorithms to eliminate spurious points. \cite{Vincent2013} later showed that the previous algorithm is still incomplete, and proposed a corrected and improved version for biobjective problems. Other branch-and-bound techniques have also been studied by \cite{masin2008diversity,sourd2008bb}, who suggested enhancements to the bounding aspects. The first decision-space algorithm for generic MODO based on branch-and-cut was developed by \cite{jozefowiez2012}, where discrete sets are used for lower bounds. Recently, \cite{adelgren2017branch} developed a new branch-and-bound algorithm which employs multiobjective extensions of many different aspects of branch-and-bound, such as (primal and dual) presolve, preprocessing, node processing, and dual bounding via cutting planes. Alternative methods that avoid scalarizations are based on DP and implicit enumerative methods for pure binary problems (e.g., \citealt{bitran1977linear,bitran1982combined}). 
DP approaches are typically focused on variants of the multiobjective knapsack problem, such as that of \cite{villarreal1981multicriteria}, who employed lower and upper bound sets to eliminate dominated solutions. The authors extended their work to general stage-wise separable MODOs \citep{villarreal1982multicriteria}. \cite{klamroth2000dynamic} presented distinct conceptual DP models for several variants of the multiobjective knapsack problem. \cite{bazgan2009solving} developed a new DP approach enhanced with complementary dominance relations for the 0-1 knapsack case. For the biobjective knapsack, \cite{delort2010using} and \cite{rong2014dynamic} proposed a two-phase algorithm and a multiobjective DP algorithm, respectively. Similar to the above DP approaches, the most relevant works to ours also considered the multiobjective knapsack problem. \cite{captivo2003solving} proposed a transformation of the biobjective 0--1 knapsack problem into a biobjective shortest path problem, which is solved via an enhanced version of the labeling algorithm for multicriteria shortest path problems (MSPs). \cite{figueira2010labeling} developed a generic labeling algorithm for the problem that applies existing reformulations from the literature. More recently, state reduction techniques for the biobjective case have been proposed by \cite{rong2011two} and \cite{rong2013reduction}, with algorithmic enhancements by \cite{figueira2013algorithmic}. \section{Multiobjective Discrete Optimization Problems} \label{sec:multi-objectiveDiscreteOptimization} \andre{ In this section we present the notation and formalism used throughout the text. For $a \in \mathbb{N}_+$, we let $[a] := \set{1, 2, \ldots, a}$. We denote by $\boldsymbol{0}$ and $\boldsymbol{1}$ the vectors of zeroes and ones of appropriate dimension, respectively. The notation $\mathbb{B} := \{0,1\}$ indicates the Boolean set, while the operator $(\cdot)^\top$ denotes the transpose.
For a given $v \in \mathbb{R}^p$, we denote the $i$-th component of $v$ by $v_i$ or $(v)_i$, for all $i \in [p]$. } A multiobjective discrete optimization problem (MODO) is of the form \begin{align*} \tag{$\mathcal{M}$} \max \left \{ f(x) := \left(f_1(x), f_2(x), \dots, f_K(x) \right) \, : \, x \in \mathcal{X} \right \}, \end{align*} \andre{where $\mathcal{X} \subset \mathbb{Z}^n$, $n \in \mathbb{N}_+$, is a bounded} feasible set and $f : \mathcal{X} \rightarrow \mathbb{R}^K$ maps each solution~$x \in \mathcal{X}$ into a $K$-dimensional \emph{objective vector} (or image) $y := \big(f_1(x),\hdots,f_K(x)\big)$, with $f_k : \mathcal{X} \rightarrow \mathbb{R}$, $k \in \singleindexset{K}$. The objective functions are not assumed to have any particular structure, except that they are well-defined on $\mathcal{X}$. For any two objective vectors $y, y' \in \mathbb{R}^K$, we say that $y$ \textit{dominates} $y'$ (\carlos{or, alternatively, that $y'$ \textit{is dominated by} $y$}), or simply $y \succ y'$, if (i) $y_k \geq y'_k$ for all $k \in \singleindexset{K}$, and (ii) there exists at least one index $\tilde{k}$ for which $y_{\tilde{k}} > y'_{\tilde{k}}$. The image defined by the set of feasible solutions is denoted by $\mathcal{Y} := \{ f(x) \,:\, x \in \mathcal{X} \}$. An objective vector $y^* \in \mathcal{Y}$ is a \emph{nondominated} point if there exists no other point $y' \in \mathcal{Y}$ for which $y' \succ y^*$. The set of all nondominated points of $\mathcal{M}$ is denoted by $\yset_\textnormal{N}$, also referred to as the \textit{Pareto frontier}. The typical goal of a MODO, and the focus of this paper, is to enumerate $\yset_\textnormal{N}$.
\begin{example}\label{ex:modo} We consider a set packing instance as a running example, with $K = 3$: \begin{align*} \max_{x \in \mathbb{B}^7} \quad& \big(f_1(x) = 4x_1 + 5x_2 + 3x_3 + 4x_4 + 2x_5 + 1x_6 + 2x_7,\\ \quad& \,f_2(x) = 8x_1 + 7x_2 + 1x_3 + 5x_4 + 3x_5 + 3x_6 + 8x_7,\\ \quad& \,f_3(x) = 2x_1 + 6x_2 + 8x_3 + 4x_4 + 6x_5 + 5x_6 + 2x_7 \big)\\ \textnormal{s.t.} \quad& x_1 + x_2 + x_3 \leq 1, \;\; x_2 + x_3 + x_4 \leq 1, \;\; x_4 + x_5 \leq 1, \\ \quad& x_4 + x_6 \leq 1, \;\; x_5 + x_7 \leq 1, \;\; x_6 + x_7 \leq 1. \end{align*} The nondominated set~$\yset_\textnormal{N}$ consists of the four points $y^1 = (6,7,19)$, $y^2 = (8,13,17)$, $y^3 = (7,14,13)$, and $y^4 = (10,21,8)$. They are the images, respectively, of the feasible solution vectors $x^1 = (0,0,1,0,1,1,0)$, $x^2 = (0,1,0,0,1,1,0)$, $x^3 = (1,0,0,0,1,1,0)$, and $x^4 = (1,0,0,1,0,0,1)$. \hfill $\square$ \end{example} For any $\bar{\mathcal{Y}} \subseteq \mathcal{Y}$, let $\nd{\bar{\mathcal{Y}}} := \set{y \in \bar{\mathcal{Y}}: \nexists\,y' \in \bar{\mathcal{Y}} \ \text{with} \ y' \succ y }$ be an operator that returns the set of vectors within $\bar{\mathcal{Y}}$ that are not dominated by any other vector in the same set. Note that $\yset_\textnormal{N} = \nd{\mathcal{Y}}$. This operator has been studied in the context of relational database systems, where it is known as the \emph{skyline} operator \citep{borzsony2001skyline}. We refer to the work by \cite{gudala2012} for a review of algorithms to compute $\nd{\cdot}$ and their associated complexity analysis. In particular, for $K = 2$, an efficient implementation of $\nd{\mathcal{S}}$ for a given $\mathcal{S} \subseteq \mathbb{R}^K$ has a worst-case time complexity of $\mathcal{O}(|\mathcal{S}| \log\parentheses{|\mathcal{S}|})$. For $K > 2$, it can be efficiently implemented in $\mathcal{O}\parentheses{|\mathcal{S}|\cdot \parentheses{\log\parentheses{|\mathcal{S}|}}^{K - 2}}$. 
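As an illustration (a brute-force sketch of our own, not one of the solution methods developed later), both the dominance relation and the $\nd{\cdot}$ operator are straightforward to implement by pairwise comparison, and enumerating $\mathbb{B}^7$ recovers the frontier of Example \ref{ex:modo}:

```python
from itertools import product

def dominates(y, yp):
    """y dominates y' (maximization): componentwise >= with one strict >."""
    return all(a >= b for a, b in zip(y, yp)) and any(a > b for a, b in zip(y, yp))

def nd(points):
    """The ND(.) (skyline) operator, by pairwise comparison in O(|S|^2 K)."""
    pts = set(points)
    return {y for y in pts if not any(dominates(z, y) for z in pts if z != y)}

# Objective coefficients f_1, f_2, f_3 and packing constraints of the example.
F = [(4, 5, 3, 4, 2, 1, 2),
     (8, 7, 1, 5, 3, 3, 8),
     (2, 6, 8, 4, 6, 5, 2)]
CONS = [(0, 1, 2), (1, 2, 3), (3, 4), (3, 5), (4, 6), (5, 6)]  # each sums to <= 1

def pareto_frontier():
    images = set()
    for x in product((0, 1), repeat=7):  # brute force over B^7
        if all(sum(x[i] for i in c) <= 1 for c in CONS):
            images.add(tuple(sum(row[i] * x[i] for i in range(7)) for row in F))
    return nd(images)
```

This pairwise implementation of $\nd{\cdot}$ is quadratic in $|\mathcal{S}|$; the complexities quoted above require the divide-and-conquer algorithms surveyed in the cited work.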
\section{Network Models} \label{sec:network-models} This paper proposes the use of \textit{network models} to enumerate $\yset_\textnormal{N}$ for a MODO $\mathcal{M}$. In our context, a network model is a layered acyclic multi-digraph $\mathcal{N} := \parentheses{\mathcal{L},\mathcal{A}}$ with node set $\mathcal{L}$ and arc set $\mathcal{A}$. Such a model is equipped with specific structure, properties, and attributes associated with $\mathcal{M}$. The node set $\mathcal{L}$ is partitioned into $n+1$ non-empty layers $\mathcal{L} := \dot{\bigcup\limits_{j \in \singleindexset{n+1}}} \layer{j}$. Layers $\layer{1}$ and $\layer{n+1}$ have cardinality one with $\layer{1} := \set{\mathbf{r}}$ and $\layer{n+1} := \set{\mathbf{t}}$. Nodes $\mathbf{r}$ and $\mathbf{t}$ are referred to as the \textit{root} node and the \textit{terminal} node, respectively. The layer index of a node $u \in \mathcal{L}$ is $\nodelayer{u}$, i.e., $u \in \layer{\nodelayer{u}}$. Each arc $a := \parentheses{\arcroot{a},\arcterminal{a}} \in \mathcal{A}$ is directed from its \emph{arc-root} $\arcroot{a} \in \layer{j}$ to its \emph{arc-terminal} $\arcterminal{a} \in \layer{j+1}$ for some $j \in \singleindexset{n}$. The layer of an arc is $\arclayer{a} := \nodelayer{\arcroot{a}}$. We denote by $\mathcal{A}^+(u) := \set{a \in \mathcal{A}: \arcroot{a} = u}$ the set of outgoing arcs from $u$ and by $\mathcal{A}^-(u) := \set{a \in \mathcal{A}: \arcterminal{a} = u}$ the set of incoming arcs to $u$. We define $\pathset{u}{v}$ as the set of arc-specified paths from node $u$ to node $v$. The arguments will be omitted when $u = \mathbf{r}$ and $v = \mathbf{t}$, i.e., $\mathcal{P} := \pathset{\mathbf{r}}{\mathbf{t}}$. Each arc $a$ has an \emph{arc-weight} vector $\arcweights{a} \in \mathbb{R}^K$. The arc-weight vectors provide the connection between arc-specified paths in $\mathcal{N}$ and the objective space of $\mathcal{M}$.
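As a data structure, such a network can be encoded, for instance, as a successor map that lists the outgoing arcs of each node together with their arc-weight vectors. The Python sketch below uses illustrative node names and a three-layer toy network; the recursive routine enumerates, for every root-to-terminal arc sequence, the componentwise sum of the arc weights along it.

```python
def path_weights(succ, u, t, K=3):
    """Enumerate the weight sums of all arc-specified u-t paths by depth-first
    search; succ[u] lists the (head, weight-vector) arcs directed out of u."""
    if u == t:
        return [(0,) * K]  # the empty path accumulates the zero vector
    weights = []
    for v, w in succ.get(u, []):
        for rest in path_weights(succ, v, t, K):
            weights.append(tuple(wi + ri for wi, ri in zip(w, rest)))
    return weights

# A three-layer toy network r -> {u, v} -> t (node names are illustrative).
toy = {'r': [('u', (1, 0, 0)), ('v', (0, 1, 0))],
       'u': [('t', (0, 0, 1))],
       'v': [('t', (2, 0, 0))]}
```

Running `path_weights(toy, 'r', 't')` returns the two weight sums $(1,0,1)$ and $(2,1,0)$; applying the nondominated-set operator to the result then yields the Pareto frontier of the network.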
Any path $p = \parentheses{a_{j_1}, \ldots, a_{j_H}}$ has \emph{path-weight} $\pathweight{p} = \sum\limits_{h \in [1,H]} \arcweights{a_{j_h}}$. The Pareto frontier of a network model $\mathcal{N}$ is defined as \[ \paretofrontier{\mathcal{N}} := \nd{ \bigcup\limits_{p \in \mathcal{P}} w(p) }. \] A network model $\mathcal{N}$ is \emph{valid} for a MODO $\mathcal{M}$ if $\paretofrontier{\mathcal{N}} = \yset_\textnormal{N}$. In our structural results, we may operate on distinct valid network models for the same $\mathcal{M}$, in which case we append a subscript to our notation so as to indicate the network. For example, $\mathbf{r}_{\mathcal{N}}$ will be used to represent the root node of $\mathcal{N}$. The subscript will be omitted when~$\mathcal{N}$ is clear from context. A network model can be interpreted as a data structure that supports the representation of a MODO $\mathcal{M}$ as a multicriteria shortest path problem (MSP). By negating the arc-weight vectors (since we consider maximization), one may apply any algorithm for solving MSPs to a network model~$\mathcal{N}$ in order to find $\paretofrontier{\mathcal{N}}$ if~$\mathcal{N}$ is valid for $\mathcal{M}$. Several MSP algorithms are available; see, e.g., the survey by \cite{GarGioTav10}. \begin{example}\label{ex:bdd} Figure~\ref{fig:bddexample} depicts a network model $\mathcal{N}$ for the MODO in Example~\ref{ex:modo}. The arc-weight vectors are shown (in black text) next to each arc (the additional details provided in the figure, in red and blue, will be introduced and described in Example \ref{ex:unidirectional}). 
There are 14 arc-specified paths from $\mathbf{r}$ to $\mathbf{t}$, and their path-weights are given by \begin{eqnarray*} \bigcup_{p \in \mathcal{P}} \pathweight{p} = \big\{ \parentheses{8,13,17}, \parentheses{6,10,11}, \parentheses{7,15,8}, \parentheses{6,7,19}, \parentheses{4,4,13}, \parentheses{5,9,10}, \parentheses{3,6,11}, \\ \parentheses{1,3,5}, \parentheses{2,8,2}, \parentheses{7,14,13}, \parentheses{6,13,6}, \parentheses{5,11,7}, \parentheses{6,16,4}, \parentheses{10,21,8} \big\} \end{eqnarray*} and results in \[ \paretofrontier{\mathcal{N}} = \nd{ \bigcup\limits_{p \in \mathcal{P}} w(p) } = \big\{ \parentheses{8,13,17}, \parentheses{6,7,19}, \parentheses{7,14,13}, \parentheses{10,21,8} \big\}. \] \hfill $\square$ \end{example} \begin{figure}[h!] \centering \begin{tikzpicture}[scale=0.3][font=\sffamily,\tiny] \node [scale=1.2] (x1) at (-20,-2) {$x_1$}; \node [scale=1.2] (x2) at (-20,-8) {$x_2$}; \node [scale=1.2] (x3) at (-20,-14) {$x_3$}; \node [scale=1.2] (x4) at (-20,-20) {$x_4$}; \node [scale=1.2] (x5) at (-20,-26) {$x_5$}; \node [scale=1.2] (x6) at (-20,-32) {$x_6$}; \node [scale=1.2] (x7) at (-20,-38) {$x_7$}; \node [draw,circle,scale=1] (r) at (0,0) {$\mathbf{r}$}; \node [color=red] () at (-3,0) { $\checkmark \parentheses{0,0,0}$ }; \node [color=blue] () at (5.1,2) { $\checkmark \parentheses{8,13,17}$ }; \node [color=blue] () at (4.85,1) { $\times \parentheses{7,15,8}$ }; \node [color=blue] () at (4.9,0) { $\checkmark \parentheses{6,7,19}$ }; \node [color=blue] () at (5.1,-1) { $\checkmark \parentheses{7,14,13}$ }; \node [color=blue] () at (5.1,-2) { $\checkmark \parentheses{10,21,8}$ }; \node [draw,circle,scale=1] (a) at (-6,-5) {$u^2_1$}; \node [color=blue] () at (-0.98,-3.1) { $\checkmark \parentheses{8,13,17}$ }; \node [color=blue] () at (-1.2,-4) { $\checkmark \parentheses{7,15,8}$ }; \node [color=blue] () at (-1.2,-5) { $\checkmark \parentheses{6,7,19}$ }; \node [color=blue] () at (-1.2,-6) { $\times \parentheses{5,9,10}$ }; \node [color=blue] ()
at (-1.2,-7) { $\times \parentheses{6,13,6}$ }; \node [color=blue] () at (-1.2,-8) { }; \node [color=red] () at (-9,-5) { $\checkmark \parentheses{0,0,0}$ }; \node [draw,circle,scale=1] (b) at (6,-5) {$u^2_2$}; \node [color=red] () at (3,-5) { $\checkmark \parentheses{4,8,2}$ }; \node [color=blue] () at (9,-4.5) { $\checkmark \parentheses{3,6,11}$ }; \node [color=blue] () at (9,-5.5) { $\checkmark \parentheses{6,13,6}$ }; \node [draw,circle,scale=1] (c) at (-12,-11) {$u^3_1$}; \node [color=red] () at (-15,-11) { $\checkmark \parentheses{5,6,7}$ }; \node [color=blue] () at (-8.8,-10.5) { $\checkmark \parentheses{3,6,11}$ }; \node [color=blue] () at (-9,-11.5) { $\checkmark \parentheses{2,8,2}$ }; \node [draw,circle,scale=1] (d) at (0,-11) {$u^3_2$}; \node [color=red] () at (-3,-11) { $\checkmark \parentheses{0,0,0}$ }; \node [color=blue] () at (3.1,-8.75) { $\checkmark \parentheses{6,7,19}$ }; \node [color=blue] () at (3.1,-9.75) { $\checkmark \parentheses{5,9,10}$ }; \node [color=blue] () at (3.1,-10.75) { $\times \parentheses{3,6,11}$ }; \node [color=blue] () at (3.1,-11.75) { $\checkmark \parentheses{6,13,6}$ }; \node [draw,circle,scale=1] (e) at (12,-11) {$u^3_3$}; \node [color=red] () at (9,-11) { $\checkmark \parentheses{4,8,2}$ }; \node [color=blue] () at (15,-10.5) { $\checkmark \parentheses{3,6,11}$ }; \node [color=blue] () at (15,-11.5) { $\checkmark \parentheses{6,13,6}$ }; \node [draw,circle,scale=1] (f) at (-6,-17) {$u^4_1$}; \node [color=red] () at (-9,-16.5) { $\checkmark \parentheses{5,7,6}$ }; \node [color=red] () at (-9,-17.5) { $\checkmark \parentheses{3,1,8}$ }; \node [color=blue] () at (-2.8,-16.5) { $\checkmark \parentheses{3,6,11}$ }; \node [color=blue] () at (-3,-17.5) { $\checkmark \parentheses{2,8,2}$ }; \node [draw,circle,scale=1] (g) at (6,-17) {$u^4_2$}; \node [color=red] () at (3,-16.5) { $\checkmark \parentheses{4,8,2}$ }; \node [color=red] () at (3,-17.5) { $\times \parentheses{0,0,0}$ }; \node [color=blue] () at (9.2,-16) { 
$\checkmark \parentheses{3,6,11}$ }; \node [color=blue] () at (9,-17) { $\times \parentheses{2,8,2}$ }; \node [color=blue] () at (9.2,-18) { $\checkmark \parentheses{6,13,6}$ }; \node [draw,circle,scale=1] (h) at (-6,-23) {$u^5_1$}; \node [color=red] () at (-9,-22) { $\checkmark \parentheses{5,7,6}$ }; \node [color=red] () at (-9,-23) { $\checkmark \parentheses{3,1,8}$ }; \node [color=red] () at (-9,-24) { $\checkmark \parentheses{4,8,2}$ }; \node [color=blue] () at (-1.5,-22) { $\checkmark \parentheses{3,6,11}$ }; \node [color=blue] () at (-1.7,-23) { $\checkmark \parentheses{2,8,2}$ }; \node [color=blue] () at (-1.7,-24) { $\times \parentheses{1,3,5}$ }; \node [draw,circle,scale=1] (i) at (6,-23) {$u^5_2$}; \node [color=red] () at (3,-23) { $\checkmark \parentheses{8,13,6}$ }; \node [color=blue] () at (9,-23) { $\checkmark \parentheses{2,8,2}$ }; \node [draw,circle,scale=1] (j) at (-12,-29) {$u^6_1$}; \node [color=red] () at (-14.8,-28) { $\checkmark \parentheses{7,10,12}$ }; \node [color=red] () at (-15,-29) { $\checkmark \parentheses{5,4,14}$ }; \node [color=red] () at (-15,-30) { $\checkmark \parentheses{6,11,8}$ }; \node [color=blue] () at (-9,-29) { $\checkmark \parentheses{1,3,5}$ }; \node [draw,circle,scale=1] (k) at (0,-29) {$u^6_2$}; \node [color=red] () at (-3,-28) { $\checkmark \parentheses{5,7,6}$ }; \node [color=red] () at (-3,-29) { $\checkmark \parentheses{3,1,8}$ }; \node [color=red] () at (-3,-30) { $\checkmark \parentheses{4,8,2}$ }; \node [color=blue] () at (3,-28.5) { $\checkmark \parentheses{2,8,2}$ }; \node [color=blue] () at (3,-29.5) { $\checkmark \parentheses{1,3,5}$ }; \node [draw,circle,scale=1] (l) at (12,-29) {$u^6_3$}; \node [color=red] () at (9,-29) { $\checkmark \parentheses{8,13,6}$ }; \node [color=blue] () at (14.8,-29) { $\checkmark \parentheses{2,8,2}$ }; \node [draw,circle,scale=1] (m) at (-6,-35) {$u^7_1$}; \node [color=red] () at (-11.8,-32.5) { $\checkmark \parentheses{8,13,17}$ }; \node [color=red] () at (-12,-33.5) { 
$\checkmark \parentheses{6,7,19}$ }; \node [color=red] () at (-11.8,-34.5) { $\checkmark \parentheses{7,14,13}$ }; \node [color=red] () at (-11.9,-35.5) { $\times \parentheses{6,10,11}$ }; \node [color=red] () at (-12.1,-36.5) { $\times \parentheses{4,4,13}$ }; \node [color=red] () at (-12.1,-37.5) { $\times \parentheses{5,11,7}$ }; \node [color=blue] () at (-3,-35) { $\checkmark \parentheses{0,0,0}$ }; \node [draw,circle,scale=1] (n) at (6,-35) {$u^7_2$}; \node [color=red] () at (2.1,-33.5) { $\times \parentheses{5,7,6}$ }; \node [color=red] () at (2,-34.5) { $\checkmark \parentheses{3,1,8}$ }; \node [color=red] () at (2,-35.5) { $\times \parentheses{4,8,2}$ }; \node [color=red] () at (2.14,-36.5) { $\checkmark \parentheses{8,13,6}$ }; \node [color=blue] () at (8.8,-35) { $\checkmark \parentheses{2,8,2}$ }; \node [draw,circle,scale=1] (t) at (0,-41) {$\mathbf{t}$}; \node [color=red] () at (-4.8,-39) { $\checkmark \parentheses{8,13,17}$ }; \node [color=red] () at (-5,-40) { $\checkmark \parentheses{6,7,19}$ }; \node [color=red] () at (-4.9,-41) { $\checkmark \parentheses{7,14,13}$ }; \node [color=red] () at (-5.15,-42) { $\times \parentheses{5,9,10}$ }; \node [color=red] () at (-4.9,-43) { $\checkmark \parentheses{10,21,8}$ }; \node [color=blue] () at (2.8,-41) { $\checkmark \parentheses{0,0,0}$ }; \path[->](r) edge node [xshift=-3.4mm,yshift=-2.6mm,left] {$\parentheses{0,0,0}$} (a); \path[->](r) edge node [xshift=3.5mm,yshift=-2.6mm,right] {$\parentheses{4,8,2}$} (b); \path[->](a) edge node [xshift=-2mm,yshift=-1mm,left] {$\parentheses{5,7,6}$} (c); \path[->](a) edge node [xshift=0.7mm,yshift=-1mm,left] {$\parentheses{0,0,0}$} (d); \path[->](b) edge node [xshift=2.3mm,yshift=-1mm,right] {$\parentheses{0,0,0}$} (e); \path[->](c) edge node [left] {$\parentheses{0,0,0}$} (f); \path[->](d) edge node [left] {$\parentheses{3,1,8}$} (f); \path[->](d) edge node [right] {$\parentheses{0,0,0}$} (g); \path[->](e) edge node [right] {$\parentheses{0,0,0}$} (g); \path[->](f) 
edge node [left] {$\parentheses{0,0,0}$} (h); \path[->](g) edge node [left] {$\parentheses{0,0,0}$} (h); \path[->](g) edge node [right] {$\parentheses{4,5,4}$} (i); \path[->](h) edge node [left] {$\parentheses{2,3,6}$} (j); \path[->](h) edge node [right] {$\parentheses{0,0,0}$} (k); \path[->](i) edge node [right] {$\parentheses{0,0,0}$} (l); \path[->](j) edge node [right] {$\parentheses{1,3,5}$} (m); \path[->](k) edge node [right] {$\parentheses{1,3,5}$} (m); \path[->](k) edge node [right] {$\parentheses{0,0,0}$} (n); \path[->](l) edge node [right] {$\parentheses{0,0,0}$} (n); \path[->](m) edge node [right] {$\parentheses{0,0,0}$} (t); \path[->](n) edge node [right] {$\parentheses{2,8,2}$} (t); \end{tikzpicture} \caption{A valid network model for $\mathcal{M}$ in Example~\ref{ex:modo}.} \label{fig:bddexample} \end{figure} \section{Network Model Construction} \label{sec:NMConstructionAlgos} \andre{ This section provides two frameworks for constructing valid network models for MODOs. The first approach relies on a recursive model of $\mathcal{M}$. The second is a direct transformation from decision diagrams, which is applicable when the objective functions are additively separable. } \subsection{Recursive Formulations} \label{sec:recursiveModeling} Several classes of single-objective optimization problems admit recursive formulations, often written as dynamic programming (DP) models \citep{bertsimas1997introduction}. These ideas were extended to the case of multiple objectives in the early work by \cite{villarreal1982multicriteria}, which we build upon in this paper. In particular, while DP models are intrinsically associated with a state-transition graph, we show that multiobjective recursive formulations are analogously associated with a valid network model.
\andre{ Formally, a multiobjective recursive formulation of a MODO $\mathcal{M}$ is written in terms of the following elements as depicted in Figure \ref{fig:recursiongraph}: (i) $n+1$ state variables $s_0, s_1, \dots, s_n \in \mathcal{S}$ for some state space $\mathcal{S} \subseteq \mathbb{R}^m$, where the initial state $s_0$ is fixed; (ii) $n$ functions $\mathcal{V}_1, \dots, \mathcal{V}_n : \mathcal{S} \rightarrow 2^{\mathbb{Z}}$ that represent the variable-value assignments that can be applied at a state (i.e., they provide state-dependent feasible decisions); (iii) $n$ state transition functions $\tau_1, \dots, \tau_n : \mathcal{S} \times \mathbb{Z} \rightarrow \mathcal{S}$, each mapping a pair $(s,x)$ of a state $s \in \mathcal{S}$ and a variable-value assignment $x \in \mathbb{Z}$ to another state $s' \in \mathcal{S}$; and (iv) $n$ reward functions $\delta_1, \dots, \delta_n : \mathcal{S} \times \mathbb{Z} \rightarrow \mathbb{R}^K$ that map an analogous pair $(s,x)$ to a reward vector in $\mathbb{R}^K$. 
} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.3][font=\sffamily, \small] \node [draw,rectangle] (s0) {$s_0$}; \node [draw,rectangle,right of=s0,xshift=2.6cm] (s1) {$s_1 = \tau_1(s_0,x_1)$}; \node [draw,rectangle,right of=s1,xshift=3.5cm] (s2) {$s_2 = \tau_2(s_1,x_2)$}; \node [right of=s2,xshift=1cm] (dummy) {$\cdots$}; \node [draw,rectangle,right of=dummy,xshift=3.5cm] (sn) {$s_n = \tau_n(s_{n-1},x_n)$}; \path[->,>=stealth,line width=1pt] (s0) edge node[above] {$x_1 \in \mathcal{V}_1(s_0)$} (s1); \path[->,>=stealth,line width=1pt] (s1) edge node[above] {$x_2 \in \mathcal{V}_2(s_1)$} (s2); \path[->,>=stealth,line width=1pt] ([xshift=0.9cm] dummy.east) edge node[above] {$x_n \in \mathcal{V}_n(s_{n-1})$} (sn); \node [yshift=-1.5cm] (r1) at ($(s0.east)!0.5!(s1.west)$) {$\delta_1(s_0,x_1)$}; \node [yshift=-1.5cm] (r2) at ($(s1.east)!0.5!(s2.west)$) {$\delta_2(s_1,x_2)$}; \node [yshift=-1.5cm] (rn) at ($([xshift=0.9cm] dummy.east)!0.5!(sn.west)$) {$\delta_n(s_{n-1},x_n)$}; \draw[->,dashed,thick] ([yshift=-0.4cm] s0 -| r1) -- (r1); \draw[->,dashed,thick] ([yshift=-0.4cm] s1 -| r2) -- (r2); \draw[->,dashed,thick] ([yshift=-0.4cm] s2 -| rn) -- (rn); \end{tikzpicture} \caption{Elements of a recursive formulation} \label{fig:recursiongraph} \end{figure} Based on these four components, a multiobjective recursive problem is of the form \begin{align*} \tag{$\modo_{\mathcal{R}}$} \max_{s,x} \quad& \sum_{j=1}^n \delta_j(s_{j-1}, x_j) \\ \textnormal{s.t.} \quad & s_j = \tau_j(s_{j-1}, x_j), &\textnormal{for all $j \in [n],$}\\ & x_j \in \mathcal{V}_j(s_{j-1}), &\textnormal{for all $j \in [n].$} \end{align*} In particular, each objective function evaluation is represented by the $k$-th index of the total reward tuple $\sum_{j=1}^{n} \delta_j(s_{j-1}, x_j)$ for $k \in [K]$, i.e., \[ f_k(s,x) := \left( \sum_{j=1}^n \delta_j(s_{j-1}, x_j) \right)_k, \] while the feasible set of $\modo_{\mathcal{R}}$ is \andre{ \[ \mathcal{X} := \left\{ (s,x) \in \mathcal{S}^{n+1} \times 
\mathbb{Z}^n \,:\, s_j = \tau_j(s_{j-1}, x_j), \ x_j \in \mathcal{V}_j(s_{j-1}) \;\; \ \text{for all} \ \,j \in [n] \right \}. \] } Assume, without loss of generality, that $\tau_n(s,x) = s_\mathbf{t}$ for a fixed $s_\mathbf{t}$ and for all $s \in \mathcal{S}$, $x \in \mathbb{Z}$. That is, the final transition always leads to a common \textit{terminal} state $s_\mathbf{t}$, which can always be accomplished by appropriately defining $\tau_n(\cdot)$. If a MODO can be written in the form of $\modo_{\mathcal{R}}$, it exposes a recursive structure that can be immediately leveraged for the construction of a network model. More specifically, the network $\mathcal{N} = (\mathcal{L}, \mathcal{A})$ is the state-transition graph defined by the feasible set $\mathcal{X}$, which is composed as follows (see, e.g., \citealt{Bertsekas2017} for algorithmic details): \begin{itemize} \item Node set $\mathcal{L}$: The nodes of $\mathcal{N}$ are associated with nodes of the state-transition graph. For any $j \in [n+1]$, there exists a node $u \in \layer{j}$ for every possible value of the state variable $s_{j-1}$ in $\mathcal{X}$. More specifically, at layer $j$ of $\mathcal{N}$, there exists one node for each state in the set $\text{Proj}_{s_{j-1}}(\mathcal{X})$, defined as the projection of the feasible set $\mathcal{X}$ into the space of the variable $s_{j-1}$. With a slight abuse of notation, we let $\layer{j} = \{ u^j_1, \hdots, u^j_{|\layer{j}|} \} = \text{Proj}_{s_{j-1}}(\mathcal{X})$. Note that since $s_0$ and $s_\mathbf{t}$ are fixed, layers $\layer{1}$ and $\layer{n+1}$ are singletons; namely $u^1_1 = s_0$ and $u^{n+1}_1 = s_\mathbf{t}$. \item Arc set $\mathcal{A}$: The arcs of $\mathcal{N}$ are associated with arcs of the state-transition graph. In other words, there exists an arc in $\mathcal{N}$ for every possible transition in the state-transition graph. 
Again with a slight abuse of notation, we represent an arc as a triplet, rather than a pair, $(\arcroot{a},\arcterminal{a},x)$, appending the variable-value information. Then, we have $$\mathcal{A} = \dot{\bigcup\limits_{j \in [n]}} \left\{ (u_i^j,u_{i'}^{j+1},x) : u_i^j \in \layer{j}, \ u_{i'}^{j+1} \in \layer{j+1}, \ x \in \mathcal{V}_j(u_i^j) \ \text{with} \ u_{i'}^{j+1} = \tau_j(u_i^j,x) \right\}.$$ \item Arc weights: The reward functions provide the arc weights for $\mathcal{N}$. That is, if $a = (u_i^j,u_{i'}^{j+1},x) \in \mathcal{A}$, then $\arcweights{a} = \delta_{j}(u_i^j, x)$. \end{itemize} It follows from the definition of arc weights that a path weight corresponds to the objective function in $\modo_{\mathcal{R}}$ and, by construction of the state-transition graph, that there is a one-to-one mapping between paths of $\mathcal{N}$ and solutions in $\mathcal{X}$. Thus, $\mathcal{N}$ is a valid network model for $\modo_{\mathcal{R}}$. \begin{example} \label{ex:modorec} The instance in Example \ref{ex:modo} is a set packing instance, so it can be written as \begin{align*} \max \left \{ \left( (c^1)^\top x, (c^2)^\top x, \dots, (c^K)^\top x \right) : Ax \le \boldsymbol{1}, x \in \mathbb{B}^n \right \} \end{align*} for an $m \times n$ matrix $A$ with elements $a_{ij} \in \mathbb{B}$, $i \in [m], j \in [n]$, and cost vectors $c^k \in \mathbb{R}^n$, $k \in [K]$. A recursive formulation can be obtained as follows. A state variable $s \in \mathbb{B}^m$ is defined so that for any $i \in [m]$, $s_i = 1$ if and only if the $i$-th constraint of $Ax \le \boldsymbol{1}$ is satisfied as an equality or all the decision variables appearing in the constraint have already been assigned to zero. Therefore, we have the initial state $s_0 = \boldsymbol{0}$, whereas any final transition will lead to the terminal state $s_{\mathbf{t}} = \boldsymbol{1}$. Variable-value assignments $x_j \in \mathbb{B}$ represent the packing of element~$j$.
We are not allowed to set a variable $x_j = 1$ if any constraint that includes $x_j$ holds as an equality; i.e., \[ \mathcal{V}_j(s) := \{ b \in \mathbb{B} \,:\, s_i + b \le 1 \ \text{for all} \ \,i \in [m] \ \text{with} \ a_{ij} = 1 \}. \] The transition function ensures the consistency of the current state $s$ and the next state $\tilde{s}$ when setting $x_j = b$, $b \in \mathcal{V}_j(s)$. Specifically, let $m_i := \max \{ j \in [n] : a_{ij} = 1\}$ be the maximum index of the variables with a nonzero coefficient in the $i$-th constraint. Then, we have $\tau_j(s, b) = \tilde{s}$ where $$\tilde{s}_i = \left\{ \begin{array}{lll} 1, & \mbox{if } j = m_i \\ s_i + b, & \mbox{if } j < m_i \ \text{and} \ a_{ij} = 1 \\ s_i, & \mbox{otherwise} \end{array} \right. \qquad \text{for all} \ i \in [m], $$ for any $j \in [n]$. Finally, the reward function for each $j \in [n]$ is the contribution of $x_j$ to the objective function vector, which in this particular instance is state independent: $\delta_j(s, b) := b \times \left( c^1_j, \, c^2_j, \, \dots, c^K_j \right)$. For our particular instance in Example \ref{ex:modo}, this recursive formulation yields the network model depicted in Figure \ref{fig:recursivemodelexample} in the appendix. We note that this network model matches the one given in Figure \ref{fig:bddexample} until layer six (inclusive). The network model in Figure \ref{fig:bddexample} can be obtained from the one in Figure \ref{fig:recursivemodelexample} after applying Theorem \ref{thm:reduce} and Corollary \ref{cor:arcremoval} on layer seven. \hfill $\square$ \end{example} \subsection{Transformation from Decision Diagrams} \label{sec:bddTransformation} \andre{ A decision diagram (DD) is a network-based representation of a Boolean or a discrete function with a wide range of applications in mathematical programming, operations research, and circuit design (see, e.g., \citealt{Bry92, Beh07,HadHoo07,BerHoeHoo11}).
In such applications, DDs are used to compactly represent or approximate the set of feasible solutions to a discrete optimization problem. We refer to the work by \cite{bergman2016decision} for a survey of concepts and existing DD methodologies for optimization. } \andre{ Each layer in a DD $\mathcal{D}$, except the last one, corresponds to a unique decision variable mapped via the bijective function $y: [n] \rightarrow [n]$; that is, layer $l \in [n]$ corresponds to variable $x_{y(l)}$. Let~$\mathcal{P}_\mathcal{D}$ be the set of paths from root to terminal in $\mathcal{D}$. The arc-domains associate each path $p = (a_1, \ldots, a_n)$ in~$\mathcal{P}_\mathcal{D}$ with a solution vector $x(p) \in \mathbb{Z}^n$ defined by $x(p)_{y(\ell(a_j))} = \arcdomain{a_j}$ for all $j \in [n]$. In particular, for a given feasible set $\mathcal{X} \subset \mathbb{Z}^n$, $\mathcal{D}$ is said to be \emph{exact} for $\mathcal{X}$ if and only if $x(p) \in \mathcal{X}$ for all $p \in \mathcal{P}_\mathcal{D}$ and $|\mathcal{P}_\mathcal{D}| = |\mathcal{X}|$. That is, the paths of $\mathcal{D}$ encode $\mathcal{X}$ exactly. \looseness=-1 } \carlos{ In summary, decision diagrams are closely related to network models, but observe that arc-domains are exclusive to the former whereas arc-weight vectors are defined only in the latter. } Assume that, for a given MODO $\mathcal{M}$, the objective function $f_k(\cdot)$ is separable for all $k \in [K]$; i.e., $$f_{k}(x) := \sum_{j \in [n]} f_{k,j}(x_j)$$ for an appropriate choice of functions $f_{k,j} : \mathbb{Z} \rightarrow \mathbb{R}$, $j \in [n]$. In that case, we can transform an exact DD \carlos{representing $\mathcal{M}$ } into a network model $\mathcal{N}$ simply by defining suitable arc weights and removing the arc-domain labels.
More specifically, for every arc $a \in \mathcal{A}$, we define its weight according to the variable corresponding to the layer of that arc, $x_{y(\ell(a))}$, and its assigned value, $\arcdomain{a}$, as $\arcweights{a} := \parentheses{f_{1,y(\ell(a))}(\arcdomain{a}), \ldots, f_{K,y(\ell(a))}(\arcdomain{a})}.$ The assignment of those weights to arcs implies that, for any path $p = (a_1, \ldots, a_n)$ in the network, \[ w(p) = \sum_{j \in [n]} \parentheses{f_{1,y(\ell(a_j))}(\arcdomain{a_j}), \ldots, f_{K,y(\ell(a_j))}(\arcdomain{a_j})} = f(x(p)), \] and thus $\paretofrontier{\mathcal{N}}$ is the Pareto frontier of $\mathcal{M}$. This relation was first used in \cite{BerCir16}. \andre{ Numerous DD construction algorithms have been proposed for discrete optimization problems with a (single) separable objective function \citep{bergman2016decision}, some of which will be employed for our numerical study in Section \ref{sec:copmutationalInsights}. Note, however, that our proposed network models, after modifications by VPOs, do not necessarily map to valid exact DDs. Indeed, our network operators are guaranteed only to preserve the Pareto frontier, and as such the diagram is typically modified to readjust weights, merge nodes, include infeasible paths, or exclude feasible paths to allow for smaller network sizes (hence not satisfying the basic properties of an exact DD). } As mentioned in Example \ref{ex:modorec}, our running Example \ref{ex:modo} is a set packing instance. \S4 of \cite{bergman2016decision} provides a DD representation for the set packing problem that can be readily transformed into a network model by applying the methodology described above.
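Under the separability assumption, the transformation can be sketched in a few lines: each arc-domain label is replaced by the vector of per-variable objective contributions. The encoding below (arcs as tuples, a dictionary for the layer-to-variable map $y$, and 0-based variable indices) is illustrative and not tied to any specific DD library.

```python
def to_network_model(dd_arcs, y, f):
    """Turn an exact DD into a network model when the objectives are separable.

    dd_arcs : iterable of (tail, head, layer, domain) tuples
    y       : dict mapping layer l to the (0-based) variable index decided there
    f       : f[k][j] is the single-variable contribution function f_{k,j}
    Each arc-domain d is replaced by the weight vector (f[0][y[l]](d), ...).
    """
    K = len(f)
    return [(u, v, tuple(f[k][y[l]](d) for k in range(K)))
            for (u, v, l, d) in dd_arcs]

# Linear objectives of the running example: f_{k,j}(v) = c^k_j * v.
C = [(4, 5, 3, 4, 2, 1, 2),
    (8, 7, 1, 5, 3, 3, 8),
    (2, 6, 8, 4, 6, 5, 2)]
f = [[(lambda v, c=c: c * v) for c in row] for row in C]
```

For instance, a layer-1 arc with domain 1 under the identity variable ordering $y = \{1: 0\}$ receives the weight vector $(4,8,2)$, i.e., the coefficients of $x_1$ in the three objectives.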
\section{Validity-preserving Operations} \label{sec:VPOs} This section is devoted to the study of generic algorithms, referred to as validity-preserving operations (VPOs), that transform a network $\mathcal{N}$ into a smaller network $\mathcal{N}'$ such that $\paretofrontier{\mathcal{N}} = \paretofrontier{\mathcal{N}'}$. For the structural results below, we assume that an initial valid network model $\mathcal{N} = (\mathcal{L}, \mathcal{A})$ is available. \subsection{Weight Shifting and Node Merging} \label{sec:reduction} We extend the classical concept of \textit{reduction} proposed by \cite{Bry86} as a VPO for network models. Reduction is an operation applied to DDs that merges nodes sharing isomorphic subgraphs. \cite{Hooker2013} later extended this notion to DPs with state-dependent rewards, showing that further merging can be achieved for single-objective problems by considering isomorphism with respect to arc weights. Based on this concept, we define a weight-shifting operation that, given a node $u$, transfers weights from the arcs directed out of $u$ (i.e., $\mathcal{A}^+(u)$) to those directed into $u$ (i.e., $\mathcal{A}^-(u)$). \begin{proposition}[\bf Weight shift] \label{prop:shift} Let $u \in \mathcal{L}$ be a node in $\mathcal{N}$ such that $u \notin \set{\mathbf{r}, \mathbf{t}}$. For any $\tilde{c} \in \mathbb{R}^K$, the sequence of operations (1) and (2) below is a VPO: \begin{enumerate} \item $\arcweights{a} := \arcweights{a} - \tilde{c}$ for all $a \in \mathcal{A}^+(u)$; and \item $\arcweights{a} := \arcweights{a} + \tilde{c}$ for all $a \in \mathcal{A}^-(u)$. \end{enumerate} \end{proposition} \begin{proof} The path-weight $w(p)$ remains unchanged for all $p \in \mathcal{P}$, and thus $\paretofrontier{\mathcal{N}}$ is unchanged. \end{proof} \begin{example} Consider the network model in Figure~\ref{fig:bddexample}. Let $\tilde{c} = (2,8,2)$.
The network model resulting from subtracting $\tilde{c}$ from arc-weight $\arcweights{u_2^7,\mathbf{t}}$ and adding $\tilde{c}$ to arcs $\arcweights{u_2^6,u_2^7}$ and $\arcweights{u_3^6,u_2^7}$ corresponds to the operations in Proposition \ref{prop:shift}. The last three layers of $\mathcal{N}$ are reproduced in Figure~\ref{fig:costShift1}. The result of applying the weight shift is depicted in Figure~\ref{fig:costShift2}. \hfill $\square$ \end{example} \begin{figure}[h] \centering \begin{minipage}[t]{0.3\textwidth} \centering \begin{tikzpicture}[scale=0.3][font=\sffamily,\tiny] \node [draw,circle,scale=1] (j) at (-6,-29) {$u^6_1$}; \node [draw,circle,scale=1] (k) at (0,-29) {$u^6_2$}; \node [draw,circle,scale=1] (l) at (6,-29) {$u^6_3$}; \node [draw,circle,scale=1] (m) at (-3,-35) {$u^7_1$}; \node [draw,circle,scale=1] (n) at (3,-35) {$u^7_2$}; \node [draw,circle,scale=1] (t) at (0,-41) {$\mathbf{t}$}; \path[->](j) edge node [xshift=-2mm,yshift=2mm,left] {$\parentheses{1,3,5}$} (m); \path[->](k) edge node [xshift=1mm,yshift=2mm,left] {$\parentheses{1,3,5}$} (m); \path[->](k) edge node [xshift=-1mm,yshift=2mm,right] {$\parentheses{0,0,0}$} (n); \path[->](l) edge node [xshift=2mm,yshift=2mm,right] {$\parentheses{0,0,0}$} (n); \path[->](m) edge node [left] {$\parentheses{0,0,0}$} (t); \path[->](n) edge node [right] {$\parentheses{2,8,2}$} (t); \end{tikzpicture} \caption{Original arc-weights} \label{fig:costShift1} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \centering \begin{tikzpicture}[scale=0.3][font=\sffamily,\tiny] \node [draw,circle,scale=1] (j) at (-6,-29) {$u^6_1$}; \node [draw,circle,scale=1] (k) at (0,-29) {$u^6_2$}; \node [draw,circle,scale=1] (l) at (6,-29) {$u^6_3$}; \node [draw,circle,scale=1] (m) at (-3,-35) {$u^7_1$}; \node [draw,circle,scale=1] (n) at (3,-35) {$u^7_2$}; \node [draw,circle,scale=1] (t) at (0,-41) {$\mathbf{t}$}; \path[->](j) edge node [xshift=-2mm,yshift=2mm,left] {$\parentheses{1,3,5}$} (m); \path[->](k) edge node 
[xshift=1mm,yshift=2mm,left] {$\parentheses{1,3,5}$} (m); \path[->](k) edge node [xshift=-1mm,yshift=2mm,right] {$\parentheses{2,8,2}$} (n); \path[->](l) edge node [xshift=2mm,yshift=2mm,right] {$\parentheses{2,8,2}$} (n); \path[->](m) edge node [left] {$\parentheses{0,0,0}$} (t); \path[->](n) edge node [right] {$\parentheses{0,0,0}$} (t); \end{tikzpicture} \caption{Weight shift by $\tilde{c} = (2,8,2)$ at node $u_2^7$} \label{fig:costShift2} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \centering \begin{tikzpicture}[scale=0.3][font=\sffamily,\tiny] \node [draw,circle,scale=1] (j) at (-6,-29) {$u^6_1$}; \node [draw,circle,scale=1] (k) at (0,-29) {$u^6_2$}; \node [draw,circle,scale=1] (l) at (6,-29) {$u^6_3$}; \node [draw,circle,scale=1] (o) at (0,-35) {$u^7_1$}; \node [draw,circle,scale=1] (t) at (0,-41) {$\mathbf{t}$}; \path[->](j) edge node [xshift=-2mm,yshift=2mm,left] {$\parentheses{1,3,5}$} (o); \path[->](k) edge [bend right] node [xshift=1mm,yshift=4mm,left] {$\parentheses{1,3,5}$} (o); \path[->](k) edge [bend left] node [xshift=-1mm,yshift=4mm,right] {$\parentheses{2,8,2}$} (o); \path[->](l) edge node [xshift=2mm,yshift=2mm,right] {$\parentheses{2,8,2}$} (o); \path[->](o) edge node [right] {$\parentheses{0,0,0}$} (t); \end{tikzpicture} \caption{Result of merge operation on nodes~$u_1^7$ and~$u_2^7$} \label{fig:costShift3} \end{minipage} \end{figure} We now present a sufficient condition for when merging nodes is a VPO. \begin{theorem}[Node merge] \label{thm:reduce} Let $u_1,u_2 \in \mathcal{L}$, $u_1 \neq u_2$, be two nodes in $\mathcal{N}$ for which there exists a one-to-one mapping between the arcs in $\mathcal{A}^+(u_1)$ and $\mathcal{A}^+(u_2)$ satisfying that, for every pair of arcs $(a_1,a_2)$ such that $a_1 \in \mathcal{A}^+(u_1)$ maps to $a_2 \in\mathcal{A}^+(u_2)$, we have $\arcterminal{a_1} = \arcterminal{a_2}$ and $w(a_1) = w(a_2)$. 
Then, the following sequence of operations characterizes a VPO: \begin{enumerate} \item Delete all arcs in $\mathcal{A}^+(u_2)$; \item Redefine $\arcterminal{a} = u_1$ for all $a \in \mathcal{A}^-(u_2)$; and \item Delete $u_2$. \end{enumerate} \end{theorem} \begin{example} Figure~\ref{fig:costShift3} depicts the result of merging nodes $u_1^7$ with $u_2^7$ in Figure~\ref{fig:costShift2}. Note that the resulting network model has fewer arcs and nodes, but the same number of paths. Such an operation thereby decreases the size of the network model without altering $\paretofrontier{\mathcal{N}}$. \hfill $\square$ \end{example} Proposition~\ref{prop:shift} and Theorem~\ref{thm:reduce} define a strategy for simplifying a network model. In particular, starting from the penultimate layer and moving upwards in the network, we process each node in the layer in the following way. We construct the vector~$\tilde{c}(u)$ for each node~$u$ in the inspected layer by taking the component-wise minimum arc-weight among arcs in $\mathcal{A}^{+}(u)$. Once~$\tilde{c}(u)$ has been obtained, the arc-weights are shifted up as prescribed in Proposition~\ref{prop:shift}. After repeating this operation for all nodes in a layer, the conditions of Theorem~\ref{thm:reduce} and its VPO are applied to the nodes of the same layer in the transformed network. Node-merge operations can be performed in any order, in the sense that any sequence will lead to the same reduced network. \begin{proposition} \label{prop:VPOcomplexity} The component-wise minimum arc-weight shift and node-merge VPOs have a worst-case time complexity of $\mathcal{O}(K|\mathcal{L}|^2\log(|\mathcal{L}|))$. \end{proposition} \subsection{Arc Removal} \label{sec:arcRemoval} In this section, we investigate algorithms and structural results for VPOs that eliminate arcs of the network model. For an arc $a \in \mathcal{A}$, let $\mathcal{N} - a$ be the network model resulting from the removal of $a$ from $\mathcal{N}$.
By definition, removing arc $a$ is a VPO if and only if $\paretofrontier{\mathcal{N}} = \paretofrontier{\mathcal{N} - a}$. The following theorem shows that identifying when such a condition holds in general is an NP-hard problem. \begin{theorem} \label{thm:NPHardRemoval} Given a valid network model $\mathcal{N} = (\mathcal{L}, \mathcal{A})$ for a MODO $\mathcal{M}$, let $\tilde{\mathcal{A}} \subseteq \mathcal{A}$ be a subset of arcs of $\mathcal{N}$. Deciding whether there exists an arc $a \in \tilde{\mathcal{A}}$ such that $\paretofrontier{\mathcal{N}} = \paretofrontier{\mathcal{N} - a}$ is NP-hard. \end{theorem} Despite the hardness of determining whether an arc can be safely removed without changing the Pareto frontier, we can still exploit strong sufficient conditions for designing arc-removal VPOs. Additional notation is in order. Given a network model $\mathcal{N} = (\mathcal{L}, \mathcal{A})$ and two nodes $u,v \in \mathcal{L}$ such that $\nodelayer{u} < \nodelayer{v}$, let $\mathcal{N}[u,v]$ be the network model containing only nodes and arcs in $\mathcal{N}$ that lie on some path which starts at node $u$ and ends at node $v$. We introduce the following concept: \begin{definition} \label{def:isolating} A pair of nodes~$u,v \in \mathcal{L}$ is \emph{isolating} in~$\mathcal{N}$ when, for every arc $a \in \mathcal{A}_{\mathcal{N}} \backslash \mathcal{A}_{\mathcal{N}[u,v]}$, \begin{itemize} \item[(i)] $\arcterminal{a} \in \mathcal{L}_{\mathcal{N}[u,v]}$ implies that $\arcterminal{a} = u$; and \item[(ii)] $\arcroot{a} \in \mathcal{L}_{\mathcal{N}[u,v]}$ implies that $\arcroot{a} = v$. \end{itemize} \end{definition} According to Definition \ref{def:isolating}, nodes $u$ and $v$ are isolating in~$\mathcal{N}$ if $\mathcal{N}[u,v]$ contains all arcs from~$\mathcal{N}$ that are directed to nodes in $\mathcal{L}_{\mathcal{N}[u,v]} \setminus \{u\}$ and all arcs that are directed out of nodes in $\mathcal{L}_{\mathcal{N}[u,v]} \setminus \{v\}$.
(For example, any pair of nodes is isolating in Figure \ref{fig:UnconstraintMODO}\subref{fig:unconstNetwork}.) Note that one can check whether two nodes $u$ and $v$ are isolating in~$\mathcal{N}$ in polynomial time, in the size of~$\mathcal{N}$, with a breadth-first search. Pairs of isolating nodes yield a sufficient condition for an arc-removal operation to be a VPO. \begin{theorem} \label{thm:subBDDarcRemoval} Let $u$ and $v$ be isolating nodes in a network model $\mathcal{N}$. For any $a \in \mathcal{A}_{\mathcal{N}[u,v]}$, if $\paretofrontier{\mathcal{N}[u,v]} = \paretofrontier{\mathcal{N}[u,v] - a}$, then $\paretofrontier{\mathcal{N}} = \paretofrontier{\mathcal{N} - a}$. That is, if the removal of arc $a$ is a VPO in $\mathcal{N}[u,v]$, then it is also a VPO in $\mathcal{N}$. \end{theorem} The proof of Theorem \ref{thm:subBDDarcRemoval} is provided in Section \ref{subsec:ProofThmThree}. \smallskip Theorem~\ref{thm:subBDDarcRemoval} shows that pairs of isolating nodes in an arbitrary network model~$\mathcal{N}$ define subnetworks whose VPOs involving the removal of arcs are also VPOs for~$\mathcal{N}$. The simplest case reduces to two arcs with the same endpoints, which yields the following immediate corollary. \begin{corollary} \label{cor:arcremoval} Let $a_1$ and $a_2$ be any two arcs of a network model~$\mathcal{N}$ for which $\arcroot{a_1} = \arcroot{a_2}$ and $\arcterminal{a_1} = \arcterminal{a_2}$. If $\arcweights{a_1} \prec \arcweights{a_2}$ or $\arcweights{a_1} = \arcweights{a_2}$, then the removal of $a_1$ is a VPO. \end{corollary} Theorem~\ref{thm:subBDDarcRemoval} can also be applied directly by choosing two nodes $u,v$ such that $\mathcal{N}[u,v]$ is sufficiently small so that all of its arcs can be tested for removal efficiently.
Specifically, given $\Delta := \nodelayer{v} - \nodelayer{u}$, the number of paths in $\mathcal{N}[u,v]$ is bounded by $\mathcal{O}(2^\Delta)$, and hence for small $\Delta$ the Pareto frontier (and the arcs to be removed) can be identified quickly using, e.g., the procedures from Section \ref{sec:findingTheParetoFrontier}, which we discuss next. For our numerical evaluation, we fixed $\Delta = 2$. That is, (i) we find a pair $(u,v)$ of isolating nodes that are at most two layers apart, (ii) obtain the network model $\mathcal{N}[u,v]$, (iii) compute its Pareto frontier, and finally (iv) apply Theorem~\ref{thm:subBDDarcRemoval} to remove arcs whose removal is a VPO in $\mathcal{N}$. \section{Generating the Pareto Frontiers from a Network Model} \label{sec:findingTheParetoFrontier} Given a valid network model, finding the Pareto frontier generally reduces to solving an MSP (by multiplying arc-weights by $-1$) in a layered-acyclic multi-digraph, for which an extensive literature exists; see, e.g., surveys by \citealt{Tarapata2007} and \citealt{GarGioTav10}. \andre{ In this section, we propose two methodologies for enumerating the Pareto frontier based on our network model structure. The first is a direct modification of the unidirectional recursion by \cite{Hen86}, also applied, e.g., in \cite{figueira2013algorithmic} and \cite{rong2014dynamic}. The second technique is an extension of \cite{GalandIPS13} and performs a bidirectional search that combines the partial Pareto frontiers of each layer using a \emph{coupling} operator. Both methodologies assign a set (or a collection of sets) of $K$-dimensional vectors to nodes of the network. Each $K$-dimensional vector is henceforth referred to as a \emph{label}, as is done in the MSP literature.
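As a point of reference for both methodologies, the operator $\nd{\cdot}$ that retains only nondominated labels can be sketched as follows. This is a minimal quadratic-time version, assuming a minimization convention and tuples as labels; the function name `nd` is illustrative:

```python
# Nondominated filter: keep each label that no other label dominates
# (minimization convention: a dominates b when a <= b componentwise and a != b).
def nd(labels):
    labels = set(labels)  # duplicates can never co-exist in the output
    return {a for a in labels
            if not any(all(bi <= ai for ai, bi in zip(a, b)) and b != a
                       for b in labels)}
```

This pairwise scan is quadratic in the number of labels; the skyline algorithm cited below in this section achieves the $\mathcal{O}\parentheses{|\mathcal{S}|\cdot\parentheses{\log |\mathcal{S}|}^{K-2}}$ bound instead.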
} \subsection{Unidirectional Pareto frontier generation} \label{sec:unidirectional} The unidirectional algorithms process one layer at a time, computing the partial nondominated solutions at a node based on either the incoming arcs or the outgoing arcs. The top-down variant is a direct application of the recursion by \cite{Hen86} to the underlying structure of the network model, similar to the version presented by \cite{rong2014dynamic}. The procedure works as follows. When processed from the root node~$\mathbf{r}$ to the terminal node~$\mathbf{t}$, the algorithm assigns a single set of \emph{top-down labels} $\labelSetTD{u}$ to each node $u$. The label set of each node is initialized as the empty set, except the root node, where $\labelSetTD{\mathbf{r}}$ is initialized as $\{\textbf{0}\}$. For each layer $j$ from one to $n$, having constructed $\labelSetTD{u}$ for all $u \in \layer{j}$, the labels are constructed for the nodes in $\layer{j+1}$ by considering the arcs directed from nodes in $\layer{j}$ to $\layer{j+1}$, one by one. For each such arc $a$ (i.e., $a \in \bigcup_{u \in \layer{j}} \mathcal{A}^+(u)$) and every label $z \in \labelSetTD{\arcroot{a}}$, the label $z + w(a)$ is added to $\labelSetTD{\arcterminal{a}}$. After all arcs directed out of nodes in $\layer{j}$ are processed, $\labelSetTD{u}$ is re-assigned to $\nd{\labelSetTD{u}}$ for each $u \in \layer{j+1}$, to remove any labels that are dominated by other labels in the set. Note that one can also do a simple check each time a label is added to see if it is dominated by another label already existing for the node. At the end of the algorithm, $\labelSetTD{\mathbf{t}}$ will be $\paretofrontier{\mathcal{N}}$. One can also run the algorithm in the opposite direction, starting from $\mathbf{t}$ and flipping the direction of the arcs. The label set of the terminal node is initialized as $\{\textbf{0}\}$, and the nodes are processed in the opposite direction.
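A compact sketch of the top-down variant of this recursion, assuming the network is stored as per-node lists of outgoing `(head, weight)` pairs (all names are illustrative, and a minimization convention is used):

```python
# Nondominated filter (minimization: a dominates b when a <= b componentwise, a != b).
def nd(labels):
    return {a for a in labels
            if not any(all(bi <= ai for ai, bi in zip(a, b)) and b != a
                       for b in labels)}

def top_down(layers, arcs, root, terminal, K):
    # layers: list of node lists, one list per layer; arcs[u]: list of (head, weight).
    L = {u: set() for layer in layers for u in layer}
    L[root] = {(0,) * K}                     # only the root starts with a label
    for layer in layers[:-1]:
        for u in layer:                      # extend labels along each outgoing arc
            for head, w in arcs.get(u, []):
                L[head] |= {tuple(z + wk for z, wk in zip(label, w))
                            for label in L[u]}
        # prune dominated labels once the whole layer has been processed
        next_nodes = {head for u in layer for head, _ in arcs.get(u, [])}
        for v in next_nodes:
            L[v] = nd(L[v])
    return L[terminal]
```

The bottom-up variant runs the same routine on the network with all arcs reversed.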
We refer to the labels constructed in this direction as \emph{bottom-up labels} $\labelSetBU{u}$. The set of labels $\labelSetBU{\mathbf{r}}$ coincides with $\labelSetTD{\mathbf{t}}$ and is therefore equal to $\paretofrontier{\mathcal{N}}$. \begin{example} \label{ex:unidirectional} Consider Figure~\ref{fig:bddexample}. The labels on the left of the nodes (shown in red) correspond to the top-down labels (i.e., for every node $u$, they list $\labelSetTD{u}$). A ``$\checkmark$'' is drawn next to labels that remain in $\labelSetTD{u}$ after the application of the $\nd{}$ operator, and a ``$\times$'' is drawn otherwise. To the right of each node $u$, $\labelSetBU{u}$ is listed (in blue), with symbols ``$\checkmark$'' and ``$\times$'' indicating whether the labels remain or are discarded after the application of the $\nd{}$ operator, respectively. \hfill $\square$ \end{example} \subsection{Bidirectional Pareto frontier generation} \label{sec:bidirectional} We now provide a compilation method that extends the work of \cite{GalandIPS13} to network models. Namely, one may obtain the elements composing the Pareto frontier by constructing labels in both directions simultaneously and \emph{coupling} the top-down and bottom-up labels. Given two sets of vectors $\mathcal{Z}_1, \mathcal{Z}_2 \subseteq \mathbb{R}^K$, define the coupling of $\mathcal{Z}_1$ and $\mathcal{Z}_2$ as \[ \couple{\mathcal{Z}_1}{\mathcal{Z}_2} := \nd{\set{z: z = z^1 + z^2, z^1 \in \mathcal{Z}_1, z^2 \in \mathcal{Z}_2}}. \] The coupling function~$ \couple{\mathcal{Z}_1}{\mathcal{Z}_2}$ returns the nondominated set of vectors that result from every pairwise sum of vectors from the two sets~$\mathcal{Z}_1$ and~$\mathcal{Z}_2$. Let us fix a layer $j'$ and suppose we created the labels $\labelSetTD{u}$ for every node $u \in \layer{j}$, $j \leq j'$, and the labels $\labelSetBU{u}$ for every node $u \in \layer{j}$, $j \geq j'$.
We define the operation of \emph{coupling on layer} $j'$ as \[ \coupleLayer{\layer{j'}} := \nd{ \bigcup_{u \in \layer{j'}} \couple{\labelSetTD{u}}{\labelSetBU{u}} }. \] This yields the nondominated set that results from the coupling of the top-down and the bottom-up labels on each node. Note that $\coupleLayer{\layer{j}} = \yset_\textnormal{N}$ for any $j \in [n+1]$. Example~\ref{ex:coupling} shows that this approach can significantly reduce the number of operations required to find the Pareto frontier of a network model. Since the nondominated frontier of any set~$\mathcal{S}$ of $K$-dimensional vectors can be found in time $\mathcal{O}\parentheses{|\mathcal{S}|\cdot \parentheses{\log\parentheses{|\mathcal{S}|}}^{K - 2}}$ \citep{borzsony2001skyline}, the coupling operation of sets~$\mathcal{Z}_1$ and~$\mathcal{Z}_2$ can be completed in time $\mathcal{O}\parentheses{|\mathcal{Z}_1|\cdot|\mathcal{Z}_2|\cdot \parentheses{\log\parentheses{|\mathcal{Z}_1|\cdot|\mathcal{Z}_2|}}^{K - 2}}$. \begin{example} \label{ex:coupling} Consider the network model in Figure~\ref{fig:bddexample}. Suppose we fix $\layer{5}$, composed of nodes $u_1^5$ and $u_2^5$, as the coupling layer. Only 14 top-down labels need to be created to find $\labelSetTD{u_1^5}$ and $\labelSetTD{u_2^5}$, and only 11 bottom-up labels need to be created in order to find $\labelSetBU{u_1^5}$ and $\labelSetBU{u_2^5}$. The coupling of these sets results in \begin{eqnarray*} \couple{\labelSetTD{u_1^5}}{\labelSetBU{u_1^5}} & = & \set{\parentheses{8,13,17},\parentheses{6,7,19},\parentheses{7,14,13},\parentheses{7,15,8},\parentheses{6,16,4}} \\ \couple{\labelSetTD{u_2^5}}{\labelSetBU{u_2^5}} & = & \set{\parentheses{10,21,8}}, \end{eqnarray*} and, finally, $\coupleLayer{\layer{5}} = \set{ \parentheses{8,13,17}, \parentheses{6,7,19}, \parentheses{7,14,13}, \parentheses{10,21,8} },$ as desired. Note that using either unidirectional approach requires the creation of a total of 36 labels, as opposed to the 25 required using coupling.
\hfill $\square$ \end{example} Determining the best layer to couple on (i.e., the one in which the number of labels that need to be created is minimized) is non-trivial. In particular, a unidirectional Pareto frontier compilation may require the creation of fewer labels than the bidirectional variant. We therefore propose the following heuristic procedure. Starting from $\mathbf{r}$ and $\mathbf{t}$, we first create $\labelSetTD{u'}$ and $\labelSetBU{u''}$, respectively, for all $u' \in \layer{2}$ and for all $u'' \in \layer{n}$. Then, having constructed top-down labels for each node up to $\layer{j_1}$, $2 \leq j_1$, and bottom-up labels for each node on or below $\layer{j_2}$, $j_2 \in [j_1+1,n]$, we pick among~$j_1$ and $j_2$ the layer with the smaller total number of labels in order to proceed with the extension operations. Namely, if $\sum_{u \in \layer{j_1}}|\labelSetTD{u}| \leq \sum_{u \in \layer{j_2}}|\labelSetBU{u}|$, extension of the top-down labels to $\layer{j_1 + 1}$ is completed and $j_1$ is set to $j_1 + 1$. Otherwise, extension of the bottom-up labels to $\layer{j_2 - 1}$ is completed, and $j_2$ is set to $j_2 - 1$. This procedure is repeated until $j_1 = j_2$, upon which coupling on layer~$\layer{j_1}$ is used to calculate the Pareto frontier of the network model. \subsection{Label Removal Algorithms} \label{sec:labelRemovalAlgos} Given a network model, the complexity of finding the Pareto frontier is largely determined by the cardinality of $\labelSetTD{u}$ and $\labelSetBU{u}$. Thus, having a VPO for the reduction of $|\labelSetTD{u}|$ and $|\labelSetBU{u}|$ is relevant for computational purposes. The following proposition introduces such an operation. Given two nodes $u,v$ in the same layer $\layer{j}$, the intuition behind the proposition is to identify when the subnetwork associated with $\mathcal{N}[u,\mathbf{t}]$ is dominated by $\mathcal{N}[v,\mathbf{t}]$, in which case we can remove labels in $u$ that are dominated by $v$.
\begin{proposition}[Label filtering] \label{prop:labelFiltering} Let $u$ and $v$ be two nodes in $\layer{j}$ for some $j \in [n]$. Suppose \[ \nd{ \paretofrontier{\mathcal{N}[u,\mathbf{t}]} , \paretofrontier{\mathcal{N}[v,\mathbf{t}]} } = \paretofrontier{\mathcal{N}[v,\mathbf{t}]}. \] If there exists a pair of labels $\ell^u \in \labelSetTD{u}$ and $\ell^v \in \labelSetTD{v}$ for which $\ell^u \prec \ell^v$ (or $\ell^u = \ell^v$), then removing $\ell^u$ from $\labelSetTD{u}$ is a VPO. Similarly, if \[ \nd{ \paretofrontier{\mathcal{N}[\mathbf{r},u]} , \paretofrontier{\mathcal{N}[\mathbf{r},v]} } = \paretofrontier{\mathcal{N}[\mathbf{r},v]}, \] and there exists $\ell^u \in \labelSetBU{u}$ and $\ell^v \in \labelSetBU{v}$ for which $\ell^u \prec \ell^v$ (or $\ell^u = \ell^v$), then removing $\ell^u$ from $\labelSetBU{u}$ is a VPO. \end{proposition} \begin{proof} We provide a proof of the first case, as the proof of the other case follows by inverting the network model. By the condition in the statement of the proposition, for each path $p$ in $\mathcal{N}[u,\mathbf{t}]$, it must be that $w(p) + \ell^u$ is dominated by $w(p') + \ell^v$, for some path $p'$ in $\mathcal{N}[v,\mathbf{t}]$. Thus $w(p) + \ell^u$ does not belong to~$\paretofrontier{\mathcal{N}}$ for any such~$p$. It follows that the removal of $\ell^u$ is a VPO. \end{proof} Proposition \ref{prop:labelFiltering} generalizes the concept of \textit{state-based dominance} in DP \citep{Bertsekas2017} to network models. In particular, we can incorporate domain-specific information into a network model to identify cases where the conditions of Proposition \ref{prop:labelFiltering} are satisfied. We provide an example instantiation in Section \ref{sec:copmutationalInsights} which results in significant speedups in enumerating the Pareto frontier.
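To summarize the bidirectional machinery of this section, the coupling operator and the layer-coupling step can be sketched as follows, assuming label sets are kept in dictionaries keyed by node (illustrative names; minimization convention):

```python
# Nondominated filter (minimization: a dominates b when a <= b componentwise, a != b).
def nd(labels):
    return {a for a in labels
            if not any(all(bi <= ai for ai, bi in zip(a, b)) and b != a
                       for b in labels)}

def couple(Z1, Z2):
    # Nondominated set of all pairwise sums of vectors from Z1 and Z2.
    return nd({tuple(x + y for x, y in zip(z1, z2))
               for z1 in Z1 for z2 in Z2})

def couple_layer(nodes, top_down, bottom_up):
    # Couple the top-down and bottom-up label sets node by node,
    # then filter the union across the layer.
    out = set()
    for u in nodes:
        out |= couple(top_down[u], bottom_up[u])
    return nd(out)
```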
\section{Numerical Study} \label{sec:copmutationalInsights} In this section, we provide a detailed numerical evaluation of the effectiveness of the proposed algorithm on five different classes of problems. For each class, we discuss how the initial network model is constructed, explain the best algorithmic configuration, and compare with existing state-of-the-art approaches for general MODOs. In particular, we consider the methodologies proposed by \cite{kirlik2014new} and \cite{Ozlen2013}, hereafter denoted by algorithms \texttt{K}{} and \texttt{O}{}, respectively. The source codes of these algorithms were kindly provided by the respective authors. \paragraph{Experimental Design and Evaluation.} All experiments ran on an Intel(R) Xeon(R) CPU E5-2680 v2 at 2.80GHz. Each experiment was limited to one thread and subject to a time limit of 3,600 seconds and a memory limit of 16GB. The algorithms \texttt{K}{} and \texttt{O}{} depend on the resolution of integer linear programs; we employed IBM ILOG CPLEX 12.7.1 with default settings for this task \citep{CPLEXRef}. The paired Wilcoxon signed-rank test was used to estimate $p$-values comparing pairs of algorithms \citep{wilcoxon1945individual} (a nonparametric test used to assess whether the population mean ranks of solution times differ between algorithms). Instances were generated following guidelines from previous literature; details of the random generation procedure for each problem class are presented in the appendix. Direct comparisons between the algorithms are presented in cumulative distribution plots, which show the number of instances solved by each algorithm ($y$-axis) within a given time limit ($x$-axis). We use the integral of the curve associated with an algorithm to estimate its relative performance so that the quality of an algorithm depends both on the running time and on the number of instances solved.
The order in which algorithms are listed in the legends reflects this metric, with the best-performing algorithms appearing at the top. Additionally, we present scatter plots to compare the best-performing network model algorithm against the best previous state-of-the-art algorithm. These plots are presented in logarithmic scale and represent the amount of time the algorithms require to solve each instance of the given benchmark. We also employ a color code to indicate the size of the Pareto frontier of each instance. The network model-based algorithms employing the bottom-up, top-down, and bidirectional Pareto frontier compilation are represented by~\texttt{BU}{}, \texttt{TD}{}, and~\texttt{Coup}{}, respectively. For applications where the domain-specific label filtering given in Proposition \ref{prop:labelFiltering} has been applied, we denote the extensions of \texttt{BU}{}, \texttt{TD}{}, and \texttt{Coup}{} by \texttt{BU+}{}, \texttt{TD+}{}, and \texttt{Coup+}{}, respectively. \subsection{Multiobjective 0-1 Knapsack Problem} \label{sec:knapsack} Given $n$ items, a capacity $W > 0$, and for each item $i \in [n]$, a weight $w_i > 0$ and $K$ profits $p^1_i, p^2_i, \dots, p^K_i > 0$, the multiobjective 0-1 knapsack problem (MKP) is \[ \max \left \{ ((p^1)^\top x, (p^2)^\top x, \dots, (p^K)^\top x) \,:\, w^\top x \le W, \; x \in \mathbb{B}^n \right \}.\] \paragraph{Network model construction.} The initial network is constructed via a recursive formulation using a single-dimensional state variable $s \in \mathbb{R}_+$, which corresponds to the total weight of the knapsack at a certain stage. The root state is $s_0 = 0$. We cannot set a variable $x_j = 1$ if its weight exceeds the remaining capacity, i.e., $\mathcal{V}_j(s) := \{ v \in \mathbb{B} \,:\, s + v \cdot w_j \leq W \}$. The transition functions update the total weight of the knapsack: $\tau_j(s, v) = s + v \cdot w_j$ for all $j \in [n-1]$ and $\tau_n(s, v) = W$.
Lastly, for any $j \in [n]$, the reward function is the profit vector of item $j$, i.e., $\delta_j(s, v) := v \times \left( p^1_j, \, p^2_j, \, \dots, p^K_j \right)$. We incorporate the label filtering of Proposition~\ref{prop:labelFiltering} by exploiting the classical DP state dominance for knapsack problems. For any $j \in [n]$, let $u^j_i, u^j_{i'} \in \layer{j}$ be two possible states obtained at layer $j$ using the aforementioned recursive model. If $u^j_i \geq u^j_{i'}$, we have \[ \nd{ \paretofrontier{\mathcal{N}[u^j_{i},\mathbf{t}]} , \paretofrontier{\mathcal{N}[u^j_{i'},\mathbf{t}]} } = \paretofrontier{\mathcal{N}[u^j_{i'},\mathbf{t}]}. \] The equality above follows since each path (i.e., partial feasible solution) in $\mathcal{N}[u^j_i,\mathbf{t}]$ has a path in $\mathcal{N}[u^j_{i'},\mathbf{t}]$ of the same path-weight, given that we have more capacity available at $u^j_{i'}$ than at $u^j_i$. That is, we can assign the same variable values in a path starting at $u^j_{i'}$ and incur the same objective function contribution. Thus, we can remove labels at $u^j_i$ if they are dominated by a label at $u^j_{i'}$. \paragraph{Computational evaluation.} We experimented on 450 instances with $K \in \{3,4,\dots,7\}$ objectives and $n \in \{20,30,\dots,100\}$ variables. The detailed results are presented in Table~\ref{tab:KnapsackResults} of the appendix. We provide an analysis of the results and summarize our findings. \begin{figure}[h] \centering \begin{minipage}[b]{0.5\textwidth} \includegraphics[width=0.94\textwidth]{./KnapCD} \caption{Knapsack cumulative distribution plot} \label{fig:knapCD} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \centering \includegraphics[width=0.98\textwidth]{./KnapSPPareto} \caption{Knapsack scatter plot} \label{fig:knapSP} \end{minipage} \end{figure} The cumulative distribution plot for the knapsack instances is presented in Figure~\ref{fig:knapCD}.
The results show a clear dominance of all network model algorithms over \texttt{O}{} and \texttt{K}{}. Overall, \texttt{Coup+}{} delivered the best results, solving 370 instances, whereas the configurations~\texttt{BU}{} and~\texttt{BU+}{} were relatively weaker and solved 280 and 279 instances, respectively. \texttt{O}{} and \texttt{K}{} solved 154 and 149, respectively. The figure also shows that the network model algorithms are considerably faster, as they solve more instances within seconds than~\texttt{O}{} and~\texttt{K}{} in one hour. Figure~\ref{fig:knapSP} shows a scatter plot comparing \texttt{Coup+}{} with~\texttt{O}{} for instances with up to five objective functions. We removed from the plot instances with $K = 6$ and $K = 7$, since~\texttt{O}{} and \texttt{K}{} have considerably worse performance and the results do not provide much insight. \texttt{Coup+}{} was at least as efficient as~\texttt{O}{} and \texttt{K}{} in every knapsack instance tested, and only three instances that could not be solved by this network model configuration were solved by the others (one by~\texttt{TD}{} and two by \texttt{Coup}{}). The size of the Pareto frontier does not seem to affect the relative performance between~\texttt{Coup+}{} and~\texttt{O}{}; namely, \texttt{Coup+}{} and the other network model algorithms perform better than the previous state of the art in all cases, perhaps even more so in instances with smaller solution sets. With respect to Pareto frontier compilation, the bidirectional strategy had the best results while bottom-up was relatively poor, independently of the inclusion of the label filtering VPO. Filtering affected network model algorithms in different ways, depending on the Pareto frontier compilation strategy used.
The solution time differences between~\texttt{BU}{} and~\texttt{BU+}{} are statistically significant ($p$-value of~$10^{-9}$), although in practical terms they perform similarly; the number of solved instances is almost the same (280 vs 279) and the average running time goes from 252 to 244 seconds with filtering, a gain of 3\% on average. The inclusion of filtering decreased the quality of the top-down algorithm, with 310 instances solved (as opposed to 323) and almost 40 seconds of additional computational time, on average, to solve instances solved by both algorithms (from 184 to 221, with $p$-value of $10^{-13}$). Finally, \texttt{Coup+}{} is significantly better than~\texttt{Coup}{} ($p$-value of $10^{-40}$); more instances were solved (370 in comparison with 362) in less time (average running time reduced from 318 to 228) and with less variability (standard deviation reduced from 710 to 525). \subsection{Multiobjective Set Covering and Set Packing Problems} \label{sec:setCoveringandPartitioning} We consider the multiobjective variants of the classical set covering and set packing problems. Namely, let $A \in \mathbb{B}^{m \times n}$ be a binary constraint coefficient matrix, and let $c^1,\dots,c^K$ be $K$ cost vectors in $\mathbb{R}^n$. The multiobjective set covering problem (MSCP) is defined as $$\min \left \{ ((c^1)^\top x, (c^2)^\top x, \dots, (c^K)^\top x) \,:\, Ax \ge \boldsymbol{1}, x \in \mathbb{B}^n \right \}.$$ The multiobjective set packing problem (MSPP) replaces ``min'' by ``max'' and the inequality sign from ``$\ge$'' to ``$\le$'' in the definition above. \paragraph{Network model construction.} The original networks are produced by employing the DD transformation discussed in Section~\ref{sec:bddTransformation}. In particular, we used the set covering DD by \cite{BerHoeHoo11} and the set packing DD by \cite{BerCirHoeYun14}. In our experiments, label filtering for both applications did not impact performance, so we omit the corresponding results.
\paragraph{Computational evaluation.} We experimented on 150 random instances with $n \in \{100, 150, 200\}$. Detailed results for the MSCP and the MSPP are presented in Tables~\ref{tab:SetCoveringResults} and~\ref{tab:SetPackingResults}, respectively. For the MSCP, \texttt{Coup}{} delivered the best results among the network model algorithms, solving 90 instances with an average running time of 81 seconds (and a standard deviation of 188). \texttt{TD}{} solved more instances (91), but its average runtime was higher (122 with a standard deviation of 263). Moreover, for the 88 instances solved by both, \texttt{Coup}{} had an average running time of 64 (with a standard deviation of 127), against 104 (standard deviation of 229) for~\texttt{TD}{}. \begin{figure}[h] \centering \begin{minipage}[b]{0.54\textwidth} \includegraphics[width=0.89\textwidth]{./SCCD} \caption{Set covering cumulative distribution plot} \label{fig:SCCD} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{./SCSPPareto} \caption{Set covering scatter plot} \label{fig:SCSP} \end{minipage} \end{figure} Algorithms \texttt{K}{} and \texttt{O}{} solved 48 and 50 instances, respectively. Among these, there were instances for which the network model configurations could not enumerate the Pareto frontier (between 9 and 11, depending on the configuration employed). These instances were relatively large, typically of size $n = 200$, resulting in large network models that could not be solved within the given limits. Figure~\ref{fig:SCSP} depicts the relative performance of~\texttt{O}{} and~\texttt{Coup}{} on the MSCP, in particular elucidating the one configuration for which~\texttt{O}{} significantly outperformed~\texttt{Coup}{} (200 variables, $K=3$). However, the same plot also suggests that~\texttt{Coup}{} is far more efficient for instances with relatively large Pareto frontiers.
This suggests that the performance of the network models is less sensitive to the size of the Pareto frontier than that of~\texttt{K}{} and~\texttt{O}{}. The results for the MSPP are presented in the cumulative distribution plot in Figure~\ref{fig:SPCD} and in the scatter plot in Figure~\ref{fig:SPSP}. \begin{figure}[h] \centering \begin{minipage}[b]{0.54\textwidth} \includegraphics[width=0.89\textwidth]{./SPCD} \caption{Set packing cumulative distribution plot} \label{fig:SPCD} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{./SPSPPareto} \caption{Set packing scatter plot} \label{fig:SPSP} \end{minipage} \end{figure} \texttt{Coup}{} also delivered the best results for this problem class, slightly outperforming \texttt{TD}{}; \texttt{Coup}{} solved one more instance than \texttt{TD}{} (102 vs 101) and had smaller running times (averages of 71 versus 123 and standard deviations of 170 versus 322). \texttt{Coup}{} and \texttt{TD}{} solved more instances of the MSPP than the MSCP, whereas \texttt{BU}{}, \texttt{K}{}, and \texttt{O}{} exhibited the opposite behavior. Algorithms \texttt{K}{} and~\texttt{O}{} again solved fewer instances (41 and 42) and, among these, between 9 and 10 instances (depending on the configuration) were not solved by the network models. The size of the instances played a role in the efficiency of the algorithms. Figure~\ref{fig:SPSP} shows that~\texttt{K}{} outperforms~\texttt{Coup}{} in some instances, but in relatively fewer cases than in the MSCP. Moreover, the Pareto frontier sizes have the same impact on the relative performance of the algorithms as in the case of the MSCP, showing the robustness of network models. \subsection{Multiobjective Traveling Salesperson Problem} The multiobjective traveling salesperson problem (MTSP) is a generalization of the classical TSP where arcs are associated with multiple (often conflicting) distance measures.
That is, given a graph $G=(V,E)$ with vertex set $V = \{1,\dots,n\}$ and where each edge $e \in E$ is associated with costs $c^1_{e}, \dots, c^{K}_{e}$, the MTSP asks for the nondominated Hamiltonian tours in $G$ with respect to the edge costs. \paragraph{Network model construction.} The initial network is constructed using the classical dynamic programming model for the TSP \citep{Bertsekas2017}. Each state $s := (\bar{V}, v)$ is composed of a set $\bar{V} \subseteq V$, representing the vertices that are still left to be visited, and a vertex $v \in V \setminus \bar{V}$, representing the last visited vertex. The initial state is $s_0 := (V \setminus \{1\}, 1)$ (assuming we start and end at vertex 1). The variable $x_j$ denotes the vertex that is visited at the $j$-th position of the Hamiltonian tour; thus, $\mathcal{V}_j((\bar{V}, v)) = \bar{V}$. The transition function updates the set of visited vertices, $\tau_j((\bar{V},v),x_j) = (\bar{V} \setminus \{x_j\}, x_j)$, and the reward function is the negative of the distance travelled (since we are maximizing), $\delta_j((\bar{V},v), x_j) = (-c^1_{v,x_j}, \cdots, -c^{K}_{v,x_j})$, for $j = 1,\dots, n$. Finally, we establish a special terminal state with $\delta_{n+1}((\emptyset,v), x_{n+1}) = (-c^1_{v,1}, \cdots, -c^{K}_{v,1})$ that represents the return to vertex 1. \paragraph{Computational evaluation.} We experimented on 150 instances, with 10 instances for each configuration of $n \in \{5, 10, 15\}$ and $K \in \{3, 4, 5, 6, 7\}$. We only depict results by \texttt{Coup}{}, which dominated all other network-based configurations, and \texttt{K}{}, which also was superior to \texttt{O}{} in all scenarios tested. In particular, \texttt{K}{} uses the Miller-Tucker-Zemlin formulation of the TSP \citep{miller1960integer} as in \cite{Ozlen2013}.
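For intuition, the state recursion above can be sketched by brute-force enumeration over states $(\bar{V}, v)$. The sketch below minimizes positive edge costs directly rather than negating them (an assumption made for readability, not the sign convention used in the paper; `mtsp_frontier` and `cost` are illustrative names):

```python
# Brute-force sketch of the MTSP state recursion: a state is the pair
# (unvisited vertex set, last vertex); tours start and end at vertex 1.
# Minimization convention; cost[(i, j)] is the K-dimensional cost of arc (i, j).
def mtsp_frontier(n, cost):
    K = len(next(iter(cost.values())))
    frontier = set()

    def extend(unvisited, last, acc):
        if not unvisited:                  # close the tour by returning to vertex 1
            frontier.add(tuple(a + c for a, c in zip(acc, cost[(last, 1)])))
            return
        for v in unvisited:                # branch on the next visited vertex
            extend(unvisited - {v}, v,
                   tuple(a + c for a, c in zip(acc, cost[(last, v)])))

    extend(frozenset(range(2, n + 1)), 1, (0,) * K)
    # keep only nondominated tour-cost vectors
    return {a for a in frontier
            if not any(all(bi <= ai for ai, bi in zip(a, b)) and b != a
                       for b in frontier)}
```

The label-based algorithms of the previous section avoid this exponential enumeration by pruning dominated labels at intermediate states.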
Table \ref{tab:MTSPResults} depicts the results, where column $\mathsf{P}$ gives the average size of the Pareto frontiers (taking into account only closed instances), $\mathsf{S}$ gives the number of problems solved for the associated technique, and $\overline{t}$ provides the average time (out of 10 instances with the configuration); small running times were rounded up to 1 second. { \renewcommand{\arraystretch}{1.1} \begin{table}[h] \centering \small \caption{Multiobjective Traveling Salesperson Problem Results} \label{tab:MTSPResults} \begin{tabular}{|crr|rr|rr|} \multicolumn{3}{c}{} & \multicolumn{2}{c}{\texttt{Coup}{}} & \multicolumn{2}{c}{\texttt{K}{}} \\ \hline \Tstrut \Bstrut $n$ & $K$ & $\mathsf{P}$ & $\mathsf{S}$ & $\overline{t}$ & $\mathsf{S}$ & $\overline{t}$ \\ \hline \Tstrut \Bstrut 5 & 3 & 6.9 & 10 & 1.0 & 10 & 1.0 \\ & 4 & 8.7 & 10 & 1.0 & 10 & 1.0 \\ & 5 & 8.1 & 10 & 1.0 & 10 & 1.2 \\ & 6 & 11.0 & 10 & 1.0 & 10 & 2.8 \\ & 7 & 10.9 & 10 & 1.0 & 5 & 11.4 \\ \hline 10 & 3 & 163.1 & 10 & 1.0 & 10 & 57.1 \\ & 4 & 675.7 & 10 & 1.0 & 7 & 2021.4 \\ & 5 & 2040.2 & 10 & 1.0 & 0 & - \\ & 6 & 20080.5 & 10 & 1.0 & 0 & - \\ & 7 & 9716.7 & 10 & 1.9 & 0 & - \\ \hline 15 & 3 & 670.7 & 10 & 3.2 & 7 & 2338.9 \\ & 4 & 8328.5 & 10 & 41.2 & 0 & - \\ & 5 & 55875.0 & 10 & 543.9 & 0 & - \\ & 6 & 190447.3 & 8 & 2462.1 & 0 & - \\ & 7 & - & 0 & - & 0 & - \\ \hline \end{tabular} \end{table} } Our results show a complete dominance of~\texttt{Coup}{} over~\texttt{K}{}. Namely, \texttt{Coup}{} was superior to~\texttt{K}{} by at least one order of magnitude in all instances; for some configurations, \texttt{K}{} could not solve a single instance, whereas~\texttt{Coup}{} closed all scenarios within seconds (see e.g., $n = 10$ and $K = 7$). 
Instances with 15 cities are very challenging for current state-of-the-art techniques; note that neither~\texttt{K}{} nor~\texttt{Coup}{} managed to solve any instance with $n = 15$ and~$K = 7$; observe also that the size of the Pareto frontiers increases significantly with~$n$ and~$K$. Nevertheless, \texttt{Coup}{} shows significant superiority in these scenarios as well; whereas \texttt{K}{} could not close any instances where $K \geq 4$, \texttt{Coup}{} solved all instances with up to 5 objective functions in less than 10 minutes on average, and 8 out of 10 instances with $K = 6$. \subsection{Multiobjective Cardinality-Constrained Absolute Value Problem} \label{sec:nonlinear} The multiobjective cardinality-constrained absolute value problem (MCCAVP) is defined as $$\min \left\{ \left( \left|(a^1)^\top x - b_1 \right|, \left|(a^2)^\top x - b_2 \right|, \hdots, \left|(a^K)^\top x - b_K \right| \right) \,:\, \boldsymbol{1}^\top x \leq C, x \in \mathbb{B}^n \right\},$$ where $a^1, \ldots, a^K \in \mathbb{Z}^n, b \in \mathbb{Z}^K$, and $C \in \mathbb{Z}_+$. The MCCAVP is a multiobjective variant of discrete $L_1$-norm minimization problems, classically applied in statistical data fitting and circuit optimization \citep{jong2012smoothing,Narula1982}. For instance, in data fitting problems each linear function represents a residual, and in the multiobjective case we wish to evaluate the Pareto frontier of nondominated fitting configurations according to each residual. The MCCAVP illustrates how the procedure generalizes to multiobjective nonlinear applications. If any of the $K$ objective functions is instead written as a linear function raised to the power of $\alpha \in \mathbb{Z}_{\ge 1}$, the outer function can be replaced by the absolute value (if $\alpha$ is even) or simply ignored (if $\alpha$ is odd) without affecting the Pareto frontier. The MCCAVP therefore provides a modeling framework for a wide range of objective functions.
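For small instances, the MCCAVP Pareto frontier can be enumerated by brute force directly from this definition, which is handy for validating any of the solvers discussed below. A minimal Python sketch (exponential in $n$, for illustration only):

```python
from itertools import combinations

def mccavp_pareto(A, b, C):
    """Brute-force MCCAVP Pareto frontier for small n. A is a list of
    the K coefficient vectors a^k, b the K right-hand sides, C the
    cardinality bound. Returns the set of nondominated objective
    vectors (|a^k . x - b_k|)_k over x in {0,1}^n with 1^T x <= C."""
    K, n = len(A), len(A[0])
    points = set()
    for m in range(min(C, n) + 1):               # number of ones in x
        for ones in combinations(range(n), m):
            x = [1 if j in ones else 0 for j in range(n)]
            obj = tuple(abs(sum(A[k][j] * x[j] for j in range(n)) - b[k])
                        for k in range(K))
            points.add(obj)
    # Pareto filter (minimization)
    return {p for p in points
            if not any(q != p and all(a <= c for a, c in zip(q, p))
                       for q in points)}
```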
\paragraph{Network model construction.} The initial network is constructed via a multiobjective recursive formulation as presented in Section~\ref{sec:recursiveModeling}. The recursive formulation is obtained by using a $(K+1)$-dimensional state variable $s:=(\theta, \gamma) \in \mathbb{R}^{K} \times \mathbb{R}$, where $\theta_1, \dots, \theta_K$ represent the partial evaluation of each $(a^k)^\top x$ for all $k \in [K]$, and $\gamma$ is the number of variables that are set to one at that stage. The root state is $s_0 = (b,0)$. We cannot set a variable $x_j = 1$ if doing so exceeds the available capacity, i.e., $\mathcal{V}_j(s) := \{ v \in \mathbb{B} \,:\, \gamma + v \le C \}$. The transition functions update the partial evaluation of $(a^k)^\top x$ and the number of variables that are set to one; i.e., \[ \tau_j(s,v) = (\theta_1 + a^1_j \cdot v, \theta_2 + a^2_j \cdot v, \dots, \theta_K + a^K_j \cdot v, \gamma + v) \] for all $j \in [n]$. The reward function is the change in each objective function when transitioning from state $s=(\theta, \gamma)$ to another, that is, the $k$-th component of $\delta_j(s,v)$ is given by \[ \bigg( \delta_j(s,v) \bigg )_k := |\theta_k + a^k_j \cdot v - b_k| - |\theta_k - b_k|, \] for $k \in [K]$. To verify its validity, fix $k \in [K]$ and consider any feasible $x =(v_1,\dots,v_n)$ and the associated state transitions $s_0 = (\theta^0, \gamma^0), s_1 = (\theta^1, \gamma^1), \dots, s_n =(\theta^n, \gamma^n)$.
Observe that, by definition, \begin{align*} \sum_{j=1}^n \bigg( \delta_j(s_{j-1},v_j) \bigg)_k &= |\underbrace{\theta^0_k + a^k_1 \cdot v_1}_{\theta^1_k} - b_k| - |\theta^0_k - b_k| + |\underbrace{\theta^1_k + a^k_2 \cdot v_2}_{\theta^2_k} - b_k| - |\theta^1_k - b_k| \\ & \quad + |\underbrace{\theta^2_k + a^k_3 \cdot v_3}_{\theta^3_k} - b_k| - |\theta^2_k - b_k| + \cdots + |\underbrace{\theta^{n-1}_k + a^k_n \cdot v_n}_{\theta^n_k} - b_k| - |\theta^{n-1}_k - b_k| \\ &= |\theta^{n}_k - b_k| - \underbrace{|\theta^0_k - b_k|}_{0} = \left| \sum_{j=1}^n a^k_j v_j - b_k \right |, \end{align*} which is the original objective for the $k$-th function. \paragraph{Mixed-Integer Linear Programming Reformulation.} Since the original formulation for the MCCAVP is nonlinear, it cannot be directly input to~\texttt{K}{} and~\texttt{O}{}. We consider the following linear reformulation of the MCCAVP for~\texttt{K}{} and~\texttt{O}{}: \begin{alignat*}{2} \min_{x \in \mathbb{B}^n, \boldsymbol{1}^\top x \leq C} \ & \{ y_1,\hdots,y_K\} \\ \text{s.t.} \ & y_k \geq (a^k)^\top x - b_k, && k \in [K], \\ & y_k \geq -(a^k)^\top x + b_k, \qquad && k \in [K]. \end{alignat*} \paragraph{Computational evaluation.} We experimented on 6,250 random instances. The detailed results for the MCCAVP are presented in Table \ref{tab:AbsValueResults}. In particular, all cases were solved by each network-based configuration. For this section, we restrict the analysis of the results to 450 of these instances, which have $M = 250$, $n \geq 15$, and $C \geq 30$, as the others were solved within a few seconds. The cumulative distribution plot is presented in Figure~\ref{fig:AVCD}. Whereas \texttt{K}{} and \texttt{O}{} solved less than 200 instances each, the network algorithms enumerated the Pareto frontier in less than 10 minutes in each case. Over the complete benchmark set, \texttt{BU}{} delivered the best results. 
For the restricted set of 450 instances, \texttt{BU}{} was also the best, although its results (average running time of 18 seconds with a standard deviation of 41) were not statistically significantly different from those obtained by~\texttt{TD}{} (average of 24 and standard deviation of 69), with a $p$-value of $0.026$. Algorithm~\texttt{O}{} solved more instances than~\texttt{K}{} in the extended dataset (4,118 vs. 4,108), albeit with a higher runtime (on average 30\% larger). In contrast, for the restricted family of instances, \texttt{K}{} solved more instances (178 vs. 172) with a much shorter runtime (average of 258 and standard deviation of 492 vs. average of 432 and standard deviation of 801), thus suggesting that \texttt{K}{} outperforms \texttt{O}{} for harder instances. We therefore select~\texttt{K}{} for further comparison. \begin{figure}[h] \centering \begin{minipage}[b]{0.58\textwidth} \includegraphics[width=0.78\textwidth]{./NLCD} \caption{Absolute value cumulative distribution plot} \label{fig:AVCD} \end{minipage} \hfill \begin{minipage}[b]{0.41\textwidth} \includegraphics[width=1.02\textwidth]{./NLSPPareto} \caption{Absolute value scatter plot} \label{fig:AVSP} \end{minipage} \end{figure} Figure~\ref{fig:AVSP} shows a scatter plot comparing the performance of~\texttt{BU}{} with that of~\texttt{K}{}. Algorithm~\texttt{K}{} outperforms~\texttt{BU}{} in 5 instances by one order of magnitude (2 vs. 30 seconds was the largest relative difference). All instances solved by \texttt{K}{} were also successfully addressed by \texttt{BU}{}, with an average running time of 8 seconds and a standard deviation of 17. Furthermore, similar to what was observed for the MSCP and MSPP, there is a positive correlation between the size of the Pareto frontiers and the relative superiority of the network model procedures over~\texttt{K}{} and~\texttt{O}{}.
\section{Conclusion} \label{sec:conclusionAndFutureWork} This paper presents a novel framework for solving multiobjective discrete optimization problems (MODOs) through a reformulation into \textit{network models}, enhanced by validity-preserving operations that reduce the size of the network while preserving the Pareto frontier. The generality of the framework is established through application to five distinct problem classes, including a nonlinear multiobjective optimization problem. The experimental evaluation suggests that the proposed algorithm outperforms existing state-of-the-art general MODO solvers in several multiobjective variants of classical operations research problems. Our methodology assumes that available memory exceeds the requirements for constructing and storing the network models. Since multiobjective optimization problems become rapidly more challenging as problem size grows, this was only a limiting factor for the largest instances generated, most of which were beyond the scope of the other algorithms we tested against. As memory availability in modern-day CPUs continues to grow, investigating algorithms specifically designed to exploit this resource is of great interest, and this paper provides a step in this direction. \bibliographystyle{plainnat}
\section{Introduction} \IEEEPARstart{S}tate-of-the-art computing platforms are widely based on the von-Neumann architecture \cite{von2012computer}. The von-Neumann architecture is characterized by distinct spatial units for \textit{computing} and \textit{storage}. Such physically separated memory and compute units result in significant energy consumption due to frequent data transfer between the two entities. Moreover, the transfer of data through a dedicated limited-bandwidth bus limits the overall compute throughput. The resulting memory bottleneck is \textit{the major throughput concern} for hardware implementations of data-intensive applications such as machine learning and artificial intelligence. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{8T.pdf} \caption{(a) Schematic of a standard 8T-SRAM bit-cell. It consists of two decoupled ports for reading and writing respectively. (b) First proposed configuration (Config-A) for implementing the dot product engine using the 8T-SRAM bit-cell. The SL is connected to the input analog voltage $v_i$, and the RWL is turned ON. The current $I_{RBL}$ through the RBL is sensed and is proportional to the dot product $v_i \cdot g_i$, where $g_i$ is the ON/OFF conductance of the transistors $M1$ and $M2$. (c) Second proposed configuration (Config-B). The input analog voltages are applied to the RWL, while the SL is supplied with a constant voltage $V_{bias}$. The current through the RBL is sensed in the same way as in Config-A. } \label{fig:8t} \end{figure*} A possible approach geared towards high throughput beyond von-Neumann machines is to enable distributed computing characterized by tightly intertwined storage and compute capabilities. If computing can be performed inside the memory array, rather than in a spatially separated computing core, the compute throughput can be considerably increased.
As such, one could think of \textit{ubiquitous} computing on the silicon chip, wherein both the logic cores and the memory unit partake in compute operations. Various proposals for \textit{`in-memory'} computing with respect to emerging non-volatile technologies have been presented for both dot product computations \cite{yang2013memristive,Li_2017} as well as vector Boolean operations \cite{borghetti2010memristive}. Prototypes based on emerging technologies can be found in \cite{jo2010nanoscale,Li_2017}. With respect to the CMOS technology, Boolean in-memory operations have been presented in \cite{dong20170} and \cite{agrawal2017x}. In \cite{dong20170}, the authors presented vector Boolean operations using 6T SRAM cells. Additionally, the authors of \cite{agrawal2017x} demonstrated that 8 transistor (8T) SRAM cells lend themselves easily as vector compute primitives due to their decoupled read and write ports. Both works \cite{dong20170} and \cite{agrawal2017x} are based on vector Boolean operations. However, perhaps the most frequent and compute-intensive function required for numerous applications like machine learning is the \textit{dot product} operation. Memristors based on resistive-RAMs (Re-RAMs) have been reported in many works as analog dot product compute engines \cite{borghetti2010memristive, li2018analogue}. A few works based on analog computations in SRAM cells can be found in \cite{asram1, asram2}. Both these works use 6T SRAM cells and rely on the resultant accumulated voltage on the bit-lines (BLs). Not only are 6T SRAMs prone to read-disturb failures, but the failures are also a function of the voltage on the BLs. This leads to a tightly constrained design space for the proposed 6T SRAM based analog computing. In this paper, we employ 8T cells, which are much more robust than 6T cells due to their isolated read ports.
We show that, without modifying the basic bit-cell of the 8T SRAM, it is possible to configure the 8T cell for in-memory dot product computations. Note that, in sharp contrast to previous works on in-memory computing with the CMOS technology, we enable \textit{current based, analog-like} dot product computations using robust digital 8T bit-cells. The key highlights of the present work are as follows: \begin{enumerate} \item We show that the conventional 8T SRAM cell can be used as a primitive for analog-like dot product computations, without modifying the bit-cell circuitry. In addition, we present two different configurations for enabling dot product computation using the 8T cell. \item Apart from the sizing of the individual transistors constituting the read port of the 8T cell, the basic bit-cell structure remains unaltered. Thereby, the 8T SRAM array can also be used for usual digital memory read and write operations. As such, the presented 8T cell array can act as a dedicated dot product engine or as an \textit{on-demand} dot product accelerator. \item A detailed simulation analysis using 45nm predictive technology models, including layout analysis and the effect of non-idealities such as line-resistances and variations in transistor threshold voltages, is reported, highlighting the various trade-offs presented by each of the two proposed configurations. \end{enumerate} \section{8T-SRAM as a Dot Product Engine} A conventional 8T bit-cell is schematically shown in Fig. \ref{fig:8t}(a). It consists of the well-known 6T-SRAM bit-cell with two additional transistors that constitute a decoupled read port. To write into the cell, the write word-line (WWL) is enabled, and the write bit-lines (WBL/WBLB) are driven to $V_{DD}$ or ground depending on the bit to be stored. To read a value from the cell, the read bit-line (RBL) is pre-charged to $V_{DD}$ and the read word-line (RWL) is enabled. Note that the source-line (SL) is connected to ground.
Depending on whether the bit-cell stores a logic `1' or `0', the RBL discharges to 0V or stays at $V_{DD}$, respectively. The resulting voltage at the RBL is read out by the sense amplifiers. Although 8T-cells incur a $\sim$30\% increase in bit-cell area compared to the 6T design, they are read-disturb free and more robust due to separate read and write path optimizations \cite{chang2005stable}. We now show how such 8T-SRAMs, with no modification to the basic bit-cell circuit (except for the sizing of the read transistors), can behave as a dot product engine, without affecting the stability of the bits stored in the SRAM cells. We propose two configurations, \textit{Config-A} and \textit{Config-B}, for enabling dot-product operations in the 8T-SRAMs. Config-A is shown in Fig. \ref{fig:8t}(b). The inputs $v_i$ (encoded as analog voltages) are applied to the SLs of the SRAM array, and the RWL is also enabled. The RBL is connected to a sensing circuitry, which we will describe later. Thus, there is a static current flow from the SL to the RBL, which is proportional to the input $v_i$ and the conductance of the two transistors $M1$ and $M2$. For simplicity, assume that the weights (stored in the SRAM) have a single-bit precision. If the bit-cell stores `0', the transistor $M1$ is OFF, and the output current through the RBL is close to 0. Whereas if the bit-cell stores a `1', the current is proportional to $v_i \cdot g_{ON}$, where $g_{ON}$ is the series `ON' conductance of the transistors. Assume that inputs $v_i$ are applied on the SLs of each row of the memory array. Since the RBL is common throughout the column, the currents from all the inputs $v_i$ are summed into the RBL. Moreover, since the SL is common throughout each row, the same inputs $v_i$ are supplied to multiple columns.
Thus, the final output current through RBL of each column is proportional to $I_{RBL}^j=\Sigma (v_i\cdot g_i^j)$, where $g_i^j$ is the `ON' or `OFF' conductance of the transistors, depending on whether the bit-cell in the i-th row and j-th column stores a `1' or `0', respectively. The output current vector thus resembles the vector-matrix dot product, where the vector is $v_i$ in the form of input analog voltages, and the matrix is $g_i^j$ stored as digital data in the SRAM. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{8Tarr.pdf} \caption{8T-SRAM memory array for computing dot-products with 4-bit weight precision. Only the read port is shown, the 6T storage cell and the write port are not shown. The array columns are grouped in four, and the transistors $M1$ and $M2$ are sized in the ratio $8:4:2:1$ for the four columns. The output current $I_{OUT}^j$ represents the weighted sum of the $I_{RBL}$ of the four columns, which is approximately equal to the desired dot-product. } \label{fig:8tarr} \end{figure} Let us now consider a 4-bit precision for the weights. If the weight $W_i^j={w_3w_2w_1w_0}$, where $w_i$ are the bits corresponding to the 4-bit weight, the vector matrix dot product becomes:\\ $\Sigma (v_i\cdot W_i^j)=\Sigma [v_i\cdot (2^3w_3+2^2w_2+2^1w_1+w_0)]$\\ $=\Sigma (v_i\cdot 2^3w_3)+\Sigma (v_i\cdot2^2w_2)+\Sigma (v_i\cdot2^1w_1)+\Sigma (v_i\cdot w_0)$\\ Now, if we size the read transistors $M1$ and $M2$ of the SRAM bit-cells in column 1 through 4 in the ratio $2^3:2^2:2^1:1$, as shown in Fig. \ref{fig:8tarr}, the transistor conductances in the `ON' state would also be in the ratio $2^3:2^2:2^1:1$. Thus, summing the currents through the RBLs of the four columns yields the required dot product in accordance to the equation shown above. This sizing pattern can be repeated throughout the array. In addition, one could also use transistors having different threshold voltages to mimic the required ratio of conductances as $2^3:2^2:2^1:1$. 
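The intended effect of the $8{:}4{:}2{:}1$ column sizing can be checked with a small numerical model. The sketch below is idealized: $g\_on$ is a placeholder unit conductance (not a value from the 45nm models), the OFF conductance is taken as exactly zero, and all parasitics are ignored:

```python
def rbl_dot_product(voltages, weights, g_on=1.0):
    """Ideal model of the binary-weighted 4-bit columns. `voltages`
    holds the row inputs v_i; `weights` the stored 4-bit integers W_i
    (0..15). Bit w_m of each weight drives a column whose read
    transistors present conductance 2^m * g_on when the bit is 1 (and
    ~0 when it is 0); summing the four column currents reproduces
    sum_i v_i * W_i up to the scale factor g_on."""
    column_currents = [0.0] * 4            # columns for bit 3 (MSB) .. bit 0
    for v, w in zip(voltages, weights):
        for m in range(4):                 # bit position m
            bit = (w >> m) & 1
            g = (2 ** m) * g_on if bit else 0.0
            column_currents[3 - m] += v * g   # contribution to that RBL
    return sum(column_currents)            # analog summation of the 4 RBLs
```

For example, inputs $[0.1, 0.2]$ with weights $[15, 4]$ yield $0.1 \cdot 15 + 0.2 \cdot 4 = 2.3$ (in units of $g\_on$).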
Note that the currents through the RBLs of the four consecutive columns are summed together; thus, we obtain one analog output current value for every group of four columns. In other words, the digital 4-bit word stored in the SRAM array is multiplied by the input voltage $v_i$ and summed up by analog addition of the currents on the RBLs. This \textit{one-go} computation of vector multiplication and summation in a digital memory array would result in high throughput computations of the dot products. It is worth mentioning that the way the inputs $v_i$ are multiplied by the stored weights and summed up is reminiscent of memristive dot product computations \cite{li2018analogue}. However, a concern with the presented SRAM based computation is the fact that the ON resistance of the transistors (a few kilo-ohms) is much lower than a typical memristor ON resistance, which is in the range of a few tens of kilo-ohms \cite{chakraborty2017technology}. As such, the static current flowing through the ON transistors $M1$ and $M2$ would typically be much higher in the presented proposal. In order to reduce the static current flow, we propose scaling down the supply voltage of the SRAM cell. Note, interestingly, that 8T cells are known to retain their robust operation even at highly scaled supply voltages \cite{chang20075}. In the next section we have used a $V_{DD}$ lower than the nominal $V_{DD}$ of 1V. We now describe another way of reducing the current, although with trade-offs, as detailed below. Config-B is shown in Fig. \ref{fig:8t}(c). Here, the SLs are connected to a constant voltage $V_{bias}$. The input vector $v_i$ is connected to the RWLs, i.e., the gate of $M2$. Similar to Config-A, the output current $I_{RBL}$ is proportional to $v_i$. We will later show from our simulations that for a certain range of input voltage values, we get a linear relationship between $I_{RBL}$ and $v_i$, which can be exploited to calculate the approximate dot product.
To implement multi-bit precision, the transistor sizing is done in the same way as in Config-A, as represented in Fig. \ref{fig:8tarr}, so that $I_{RBL}$ is directly proportional to the transistor conductances. Key features of the proposed Config-B are as follows. The input voltages $v_i$ drive a capacitive load, as opposed to a resistive load in Config-A. This relaxes the constraints on the input voltage generator circuitry, and is useful while cascading two or more stages of the dot product engine. However, as presented in the next section, Config-B has a small non-zero current corresponding to zero input, as opposed to Config-A, which has zero current for zero input. In order to sense the output current at the RBLs, we use a current to voltage converter. This can most simply be a resistor, as shown in Fig. \ref{fig:8t}. However, there are a few constraints. As the output current increases, the voltage drop across the output resistor increases, which in turn changes the desired current output. A change in the voltage on the RBL would also change the voltage across the transistors $M1$ and $M2$, thereby making their conductance a function of the voltage on the RBL. Thus, at higher currents corresponding to multiple rows of the memory array, $I_{RBL}$ does not approximate the vector-matrix dot product, but deviates from the ideal output. This dependence of the RBL voltage on the current $I_{RBL}$ will be discussed in detail in the next section, along with possible solutions. \begin{figure*}[t] \centering \includegraphics[width=6.4in,keepaspectratio]{Comparison_without_opamp.pdf} \caption{$I_{RBL}$ versus $V_{in}$ characteristics for (a) Config-A and (b) Config-B show the linear region of operation for different weights. $I_{RBL}$ versus weight levels for (c) Config-A and (d) Config-B shows the desirable linear relationship at various voltages $V_{in}$.
$I_{RBL}$ shows significant deviation from the ideal output ($I_N = N\times I_1$) with increasing number of rows for both (e) Config-A and (f) Config-B, where $I_1$ is the current corresponding to one row and N is the number of rows. The analyses were done for $V_{DD} = 0.65V$.} \vspace{-4mm} \label{fig:compwoopamp} \end{figure*} \section{Results} The operation of the proposed configurations (Config-A and Config-B) for implementing a multi-bit dot product engine was simulated using HSPICE on the 45nm PTM technology \cite{zhao2006new}. For the entire analysis, we have used a scaled down $V_{DD}$ of 0.65V for the SRAM cells. The main components of the dot-product engine implementation are the input voltages and the conductances of the transistors for different states of the cells. A summary of the analysis for the two configurations is presented in Fig. \ref{fig:compwoopamp}. In Fig. \ref{fig:compwoopamp}, we have assumed a sensing resistance of 50 ohms connected to the RBL. Note that a small sense resistance is required to ensure that the voltage across the sensing resistance is not high enough to drastically alter the conductances of the connected transistors $M1$ and $M2$. In Fig. \ref{fig:compwoopamp}(a)-(b), we plot the output current in the RBL ($I_{RBL}$) as a function of the input voltage for three 4-bit weight combinations, `1111', `1010' and `0100', for the two different configurations described in the previous section. The results presented are for a single 4-bit cell. To preserve the accuracy of a dot-product operation, it is necessary to operate the cell in voltage ranges such that the current is a linear function of the applied voltage $v_i$. These voltage ranges are marked as the linear region in Fig. \ref{fig:compwoopamp}(a)-(b). The slope of the linear section of the $I_{RBL}$ versus $V_{in}$ plot varies with the weight, thus signifying a dot product operation.
Further, at the left voltage extremity of the linear region, $I_{RBL}$ tends to zero irrespective of the weight, thus satisfying the constraint that the output current is zero for zero $V_{in}$. It is to be noted that the two configurations show significantly different characteristics due to the different points of application of the input voltages. Fig. \ref{fig:compwoopamp}(c)-(d) presents the dependence of the current $I_{RBL}$ on the 4-bit weight levels for Config-A at constant voltages $V_{in}$ = 0.05V, 0.1V, 0.15V and Config-B at $V_{in}$ = 0.5V, 0.55V, 0.6V, respectively. Different voltages were chosen so as to ensure the circuit operates in the linear region, as depicted by Fig. \ref{fig:compwoopamp}(a)-(b). Desirably, $I_{RBL}$ shows a linear dependence on weight levels and tends to zero for weight = `0000'. The choice of any voltage in the linear regions of Fig. \ref{fig:compwoopamp}(a)-(b) does not alter the linear dependence of $I_{RBL}$ on the weight levels. To expand the dot-product functionality to multiple rows, we performed an analysis for up to 64 rows in the SRAM array, driven by 64 input voltages. In the worst case condition, when the 4-bit weight stores `1111', maximum current flows through the RBLs, thereby increasing the voltage drop across the output resistance. Fig. \ref{fig:compwoopamp}(e)-(f) indicates that the total current $I_{RBL}$ deviates from its ideal value with an increasing number of rows, in the worst case condition. The deviation in Fig. \ref{fig:compwoopamp}(e)-(f) arises because we sense the output current with an equivalent sensing resistance ($R_{sense}$), and hence the final voltage on the bit-line ($V_{BL}$) is dependent on the current $I_{RBL}$. At the same time, $I_{RBL}$ is also dependent on $V_{BL}$, and as a result the effective conductance of the cell varies as $V_{BL}$ changes as a function of the number of rows. It was also observed that the deviation reduces with decreasing sensing resistance, as expected.
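The feedback loop between $I_{RBL}$ and $V_{BL}$ admits a simple closed form if each ON cell is crudely modeled as a fixed linear conductance $g$ between its input and the bit-line. The sketch below uses placeholder values for $g$ and $R_{sense}$ (not extracted from the 45nm models, where the conductances are additionally nonlinear in $V_{BL}$):

```python
def rbl_current(voltages, g=1e-3, r_sense=50.0):
    """Toy linear model of the sense-resistor feedback on the RBL.
    Each ON cell is a conductance g between its input v_i and the
    shared bit-line, which sits at V_BL = I * r_sense. Solving
    I = sum_i g * (v_i - V_BL) together with V_BL = I * r_sense gives
    the closed form below. Returns (ideal, actual) currents, where
    `ideal` assumes V_BL = 0."""
    g_total = g * len(voltages)            # all cells assumed ON (worst case)
    ideal = g * sum(voltages)              # current if the RBL were at 0 V
    actual = ideal / (1.0 + g_total * r_sense)
    return ideal, actual
```

The relative deviation $1 - 1/(1 + N g R_{sense})$ grows with the number of active rows $N$ and vanishes as $R_{sense} \to 0$, which is precisely what the op-amp clamping discussed next exploits.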
Another concern with respect to Fig. \ref{fig:compwoopamp} is the fact that the total summed up current reaches almost 6mA for 64 rows in the worst case condition (all the weights are `1111'). \begin{figure*}[t] \centering \includegraphics[width=6.4in,keepaspectratio]{Comparison_with_opamp.pdf} \caption{$I_{RBL}$ versus $V_{in}$ characteristics for (a) Config-A and (b) Config-B show the linear region of operation for different weights. $I_{RBL}$ versus weight levels for (c) Config-A and (d) Config-B show the desirable linear relationship at various voltages $V_{in}$. $I_{RBL}$ shows almost zero deviation from the ideal output ($I_N = N\times I_1$) with increasing number of rows for both (e) Config-A and (f) Config-B, where $I_1$ is the current corresponding to one row and N is the number of rows. These analyses were done for $V_{DD} = 0.65V$.} \vspace{-4mm} \label{fig:compopamp} \end{figure*} There are several ways to circumvent the deviation from ideal behavior with an increasing number of simultaneous row accesses and also reduce the maximum current flowing through the RBLs. One possibility is to use an operational amplifier (Opamp) at the end of each 4-bit column, where the negative differential input of the Opamp is fed by the bit-line corresponding to a particular column, whereas the positive input is supplied with a combination of the Opamp offset voltage and any desired voltage required for suitable operation of the dot-product, as shown on the left-hand side of Fig. \ref{fig:compopamp}. The Opamp provides a means of sensing the summed up current at the RBL while maintaining a constant voltage at the RBL. Opamps in the configuration shown in Fig. \ref{fig:compopamp} have traditionally been used for sensing in memristive crossbars, as in \cite{Li_2017}. We performed the same analysis as previously described in Fig. \ref{fig:compwoopamp} for the two proposed configurations with the bit-line terminated by an Opamp.
For our analysis, we have set $V_{pos} = 0.1V$ for the positive input of the Opamp, and thus the analysis is limited to input voltages above $V_{pos}$ to maintain a unidirectional current. Note, we have used an ideal Opamp for our simulations, where the voltage $V_{pos}$ accounts for both the non-ideal offset voltage of the Opamp and any externally supplied voltage. Fig. \ref{fig:compopamp}(a)-(b) shows the plot of $I_{RBL}$ versus input voltage $V_{in}$ for the two configurations. Similar behavior as in the case of Fig. \ref{fig:compwoopamp}(a)-(b) is observed even in the presence of the Opamp. However, note that the current ranges have decreased since the RBL is now clamped at $V_{pos}$. Further, the dot-product operation is only valid for $V_{in}>V_{pos}$, and thus the acceptable input range is shifted in the presence of an Opamp. Fig. \ref{fig:compopamp}(c)-(d) shows the behavior of $I_{RBL}$ versus weight levels for the two configurations and, desirably, linearity is preserved. Fig. \ref{fig:compopamp}(e)-(f) presents the current through the RBL as a function of the number of rows. As expected, due to the high input impedance of the Opamp and the clamping of $V_{BL}$ at the voltage $V_{pos}$, the deviation of the summed up current from the ideal value has been largely mitigated. Although the current levels have reduced significantly compared to Fig. \ref{fig:compwoopamp}, the resultant current for 64 rows would still be higher than the electro-migration limit for the metal lines constituting the RBL \cite{posser2014analyzing}. One possible solution is to sequentially access a smaller section of the crossbar (say 16 or 8 rows at a time), convert the analog current into its digital counterpart each time, and finally add all accumulated digital results. In addition, the use of high threshold voltage transistors for the read port of the SRAM would also help to reduce the maximum current values.
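The sequential-access scheme suggested above can be sketched as follows; the section size, ADC resolution, and full-scale current are illustrative assumptions, not values from the paper:

```python
def sectioned_dot_product(voltages, conductances, rows_per_section=8,
                          adc_bits=6, i_full_scale=1e-3):
    """Read the array a few rows at a time, digitize each partial RBL
    current with a (behavioral) ADC, and accumulate the digital codes.
    Keeps the instantaneous analog current bounded by one section's
    worth, at the cost of one ADC conversion per section."""
    lsb = i_full_scale / (2 ** adc_bits - 1)     # ADC step size
    total_code = 0
    for start in range(0, len(voltages), rows_per_section):
        v_sec = voltages[start:start + rows_per_section]
        g_sec = conductances[start:start + rows_per_section]
        i_partial = sum(v * g for v, g in zip(v_sec, g_sec))
        code = min(round(i_partial / lsb), 2 ** adc_bits - 1)  # quantize
        total_code += code                        # digital accumulation
    return total_code * lsb                       # back to current units
```

The result matches the full analog sum up to one quantization step per section, illustrating the accuracy/peak-current trade-off of sectioned access.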
Further, the maximum current is obtained only when all the weights are `1111', which is usually not true due to the sparsity of the matrices involved in various applications, as in \cite{changpinyo2017power,han2015learning}. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{fcn.pdf} \caption{Fully connected network topology consisting of 3 layers, the input layer, the hidden layer and the output layer \cite{chakraborty2017technology}. We have used M=784, N = 500 and P = 10. } \label{fig:fcn} \end{figure} \par We also performed functional simulations using the proposed dot-product engine based on Config-A in a fully connected artificial neural network consisting of 3 layers, as shown in Fig. \ref{fig:fcn}. The main motivation behind this analysis is to evaluate the impact of the non-linearity in the I-V characteristics on the inference accuracy of the neural network. We chose an input voltage range of 0.1-0.22V. As can be observed in Fig. \ref{fig:compopamp}(a), the I-V characteristics are not exactly linear within this range; as such, a network-level functional simulation is required to ascertain the impact of the non-linearity on classification accuracy. The network details are as follows. The hidden layer consisted of 500 neurons. The network was trained using the Backpropagation algorithm \cite{Rumelhart_1985} on the MNIST digit recognition dataset under ideal conditions using the MATLAB \textsuperscript\textregistered{} Deep Learning Toolbox \cite{palm2012prediction}. During inference, we incorporated the proposed 8T-SRAM based dot-product engine in the evaluation framework by discretizing and mapping the trained weights proportionally to the conductances of the 4-bit synaptic cell. The linear range of the voltage was chosen to be [0.1V-0.22V] and normalized to a range of [0, 1].
The dot-product operation was ensured by normalizing the I-V characteristics for all the weight levels such that the current corresponding to the highest input voltage and the highest weight level is $I_{max} = V_{max}\times G_{max}$. The activation function of the neuron was considered to be a behavioral $satlin$ function, scaled according to the scaling factor of the weights to preserve the mathematical integrity of the network. Note that the normalization of the current and the input voltage simplifies the scaling of the neuron activation function. The accuracy on the digit recognition task was calculated to be merely 0.11\% lower than the ideal case (98.27\%), thus indicating that the proposed dot-product engine can be seamlessly integrated into the neural network framework without significant loss in performance. \par \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{layout_std.pdf} \caption{Thin-cell layout for a standard 8T-SRAM bit-cell \cite{chang2005stable}. } \label{fig:layout_std} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{layout.pdf} \caption{Thin-cell layout for the proposed 8T-SRAM array with 4-bit precision weights. The width of read transistors of different bit positions are sized in the ratio 8:4:2:1. An additional metal line for SL is also required, which runs parallel to the power-lines. This incurs an area overhead of $\sim$29.4\% compared to the standard 8T-SRAM bit-cell. } \label{fig:layout} \end{figure*} Further, it is to be noted that in many cases the inherent resilience of the applications that require dot product computations can be leveraged to circumvent some of the circuit-level non-idealities.
For example, for cases like training and inference of an artificial or a spiking neural network, various algorithmic resilience techniques can be applied, where modeling circuit non-idealities and modifying the standard training algorithms \cite{chakraborty2017technology, Liu_2017} can help preserve the ideal accuracy of the classification task concerned. Additionally, the proposed technique can either be used as a dedicated CMOS based dot-product compute engine or as an on-demand dot-product accelerator, wherein the 8T array acts as usual digital storage and can also be configured as a compute engine as and when required. It is also worth mentioning that the 8T cell has been demonstrated in \cite{agrawal2017x} as a primitive for vector Boolean operations. This work significantly augments the possible use cases for the 8T cells by adding analog-like dot-product acceleration. Due to the different sizing of the read transistors and an additional metal line routing for SL, there is an area penalty for the proposed configurations, compared to the standard 8T-SRAM bit-cell used for storage. Fig. \ref{fig:layout_std} shows the thin-cell layout for a standard 8T-SRAM bit-cell \cite{chang2005stable}. Note that the rightmost diffusion with width (W) constitutes the read transistors ($M1$ and $M2$). To implement the 4-bit precision dot-product, we size the widths of the read transistors in the ratio $8:4:2:1$, as described earlier. Thus, the width of the rightmost diffusion is increased to 8W, 4W, and 2W, increasing the bit-cell length (horizontal dimension) by $\sim$39.6\%, 17.1\%, and 5.7\% for bits 3, 2 and 1, respectively, compared to the standard minimum sized 8T bit-cell with diffusion width W. Moreover, to incorporate an extra metal line (SL) that runs parallel to the $V_{DD}$ and ground lines, the cell width (vertical dimension) increases by $\sim$12.5\%. The resulting layout of the first four columns for two consecutive rows in the proposed array is shown in Fig.
\ref{fig:layout}. The overall area overhead for the whole SRAM array with 4-bit weight precision amounts to $\sim$29.4\% compared to the standard 8T SRAM array. Note that this low area overhead results from the fact that both the read transistors $M1$ and $M2$ share a common diffusion layer, and hence an increase in transistor width can be easily accomplished by having a longer diffusion, without worrying about spacing between metal or poly layers. Additionally, instead of progressively sizing the read transistors, one could also use a multi-V$_T$ design wherein the LSBs consist of high-V$_T$ read transistors and the MSBs consist of nominal (or low) V$_T$ read transistors. The use of a multi-V$_T$ design can significantly reduce the reported area overhead. As such, the reported area overhead is close to the worst case impact on the bit-cell area without resorting to additional circuit tricks like multi-V$_T$ design. \par \section{Variation Analysis} To ascertain the robustness of the presented dot-product computations, in this section we analyze the effects of non-idealities on the output current. The non-idealities considered are the SL and BL line-resistances and the transistor threshold voltage variations. \subsection{Effect of Line-Resistances} Both the SL and BL line-resistances add parasitic voltage drops along the rows and the columns. Moreover, to complicate the analysis, the error in the output current is a function of both the spatial dependence due to distributed line-resistances and the data-dependence arising from the stored weights in the memory array. We, therefore, resort to a worst case analysis. The worst case arises when all the weights and all the inputs are at the highest value. This scenario results in maximum current flow through the BLs and SLs and hence suffers the maximum impact of parasitic line-resistances. To analyze the impact, we consider a line resistance of 1.3 ohms/$\mu m$ \cite{thoziyoor2008cacti}.
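As a rough illustration of this worst-case analysis, the sketch below solves a resistor-ladder model of a single line driving $N$ cells, each modeled as an equal conductance to ground (all weights `1111', highest input). The per-segment resistance and per-cell current are assumed values, and the model is far simpler than a full SPICE simulation:

```python
import numpy as np

# Toy worst-case IR-drop model: one line drives N cells through a
# distributed parasitic resistance, and every cell is modeled as an
# equal conductance to ground (all weights '1111', highest input).
# The per-cell current and per-segment resistance are assumed values.

def line_drop_error(n_cells=16, r_seg=2.5, i_cell=50e-6, v_in=0.22):
    """Percentage error in the summed current when each cell's current
    scales linearly with its degraded local line voltage."""
    g_cell = i_cell / v_in          # effective cell conductance (S)
    g_seg = 1.0 / r_seg             # per-segment line conductance (S)
    # Nodal equations G v = b for the resistor ladder (N internal nodes).
    G = np.zeros((n_cells, n_cells))
    b = np.zeros(n_cells)
    for k in range(n_cells):
        G[k, k] += g_cell + g_seg
        if k == 0:
            b[0] = g_seg * v_in     # segment back to the driven end
        else:
            G[k, k - 1] -= g_seg
            G[k - 1, k] -= g_seg
            G[k - 1, k - 1] += g_seg
    v = np.linalg.solve(G, b)
    i_actual = (g_cell * v).sum()
    i_ideal = n_cells * i_cell
    return 100.0 * (i_ideal - i_actual) / i_ideal

print(round(line_drop_error(), 3))  # worst-case % error, 16 cells
```

Driving the line from both ends or tapping it periodically, as proposed later for Config. B, reduces the effective resistance seen by the farthest cells in this model.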
Based on the layout, the average line resistance between each bit-cell was found to be 1.25 ohms in the bit-line direction and 2.5 ohms in the SL direction. We explore both the configurations (Config. A and Config. B) to analyze the impact of the line-resistances and ways to compensate for the voltage degradation along the metal lines. In addition, for Config. B, we explore two variants to minimize the effect of line resistances. Note, in Config. B the inputs are connected to the word-lines i.e. to the gate of the transistors. As such, the inputs drive capacitive load and there is no voltage degradation due to line-resistances. On the other hand, the bias voltage is connected to the SL, which would degrade due to line-resistances and induce error in the final output current flowing through each column. To minimize this error, the two variants of Config. B presented in the manuscript are: \begin{itemize} \item Config. B with the bias voltage driving the SL from both the ends (i.e. from the extreme right and extreme left ends, as shown in Fig. \ref{fig:lineresfig}). \item Config. B with the SL tapped every 16 bits with regenerated values of the bias voltage in the horizontal direction, as depicted in Fig. \ref{fig:lineresfig}. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=\textwidth,keepaspectratio]{lineresfig.pdf} \caption{Bitcell organization of Config. B variants showing SL driven from both ends and tapping of SL every 16 bitcells. The line resistances in the source line (SL) and the bit-line (BL) are shown. } \label{fig:lineresfig} \end{figure*} Fig. 
\ref{fig:lineerrordiffcomb1} shows the worst case impact (when all inputs are at the highest value and all the weights are `1111') of the line-resistances in terms of percentage error in the output current (note that this is the error with respect to the current values; it should not be confused with the error in the classification accuracy) for the various configurations, for simultaneous activation of 16 and 8 rows, respectively. As observed, Config. A has a higher error than all the variants of Config. B. Note that tapping is infeasible in Config. A because its input voltages are connected to the SL; tapping would therefore require regeneration of the input voltages along the horizontal direction. In contrast, in Config. B the SL is supplied by a global bias voltage and hence is easy to regenerate. We have assumed an array size of 64x128 (64 rows and 128 columns). Further, for our analysis we assume that the `farthest' 16 rows are simultaneously activated. SL and BL distributed resistances were included for all the activated rows, while the unactivated rows were modeled by an equivalent lumped resistance. \begin{figure}[t] \centering \includegraphics[width=2.5in,keepaspectratio]{lineerrordiffcomb.pdf} \caption{ Percentage error in output current for the worst case combination (highest input values and all weights = `1111'). The left set of bar graphs represents the error for various combinations assuming 16 rows are activated simultaneously for the dot-product computation, while the right set of bar graphs corresponds to simultaneous activation of 8 rows.} \label{fig:lineerrordiffcomb1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.5in,keepaspectratio]{lineres.pdf} \caption{(a) and (b) show the percentage error maps arising due to line resistance for different weight levels ranging from `0000' to `1111' and input voltages ranging from 0.35V to 0.675V for 16 and 8 activated rows.
For example, the data point corresponding to V = 0.35V and weight level = `0000' represents the test case where all the 4-bit weight elements in the memory array are at weight `0000' and the input voltages to all rows are 0.35V. The percentage error decreases with decreasing weight and input value. (c) Probability of occurrence of weight levels in a neural network trained on the MNIST dataset shows that the lowest weight levels have the highest frequency, thus indicating a low impact due to line resistance.} \label{fig:lineres1} \end{figure} For the rest of the analysis, we choose Config. B with tapping every 16 bits. We now analyze the error percentage across all voltage and weight combinations to understand the impact of the degradation in light of the applications discussed in the manuscript. Fig. \ref{fig:lineres1} (a) and (b) show the 2D error maps due to line resistances for various combinations of input voltages and weights for 16 and 8 activated rows, respectively. Note that for each input voltage $-$ weight combination, all rows are supplied with the same input voltage and all weights in the array are the same. In addition, Fig. \ref{fig:lineres1} (c) shows the weight level distribution of a neural network layer trained on the MNIST dataset. As observed from Fig. \ref{fig:lineres1} (a) and (b), the error above 6\% for 16 rows and above 4\% for 8 rows is concentrated in the top 25\% of the map, corresponding to the highest weights and inputs. However, from Fig. \ref{fig:lineres1} (c), we observe that for relevant applications such as neural networks, the trained weights are mostly concentrated in weight levels 1-6, where the error is close to 0-5\% for 16 rows and 0-3.4\% for 8 rows.
From this analysis, we can conclude that using the circuit techniques presented, \textit{i.e.}, driving the SL from both sides and tapping every 16 columns, and also leveraging the weight distribution of a trained neural network, the effect of line resistances for simultaneous activation of 8 and 16 rows can be substantially mitigated. For example, for Config. B with taps every 16 columns and the SL driven from both ends, the worst-case error is contained within 9\%. Further, it was observed that the error improves rapidly when the input voltages or the programmed weights are less than their maximum possible values. \subsection{$V_T$ Variations} The variations in the transistors can result in errors in the dot-product operations. To analyze this, we perform 1000 Monte-Carlo simulations to assess the variation of the output current for various combinations of input voltages and weights. We considered a 30 mV $\sigma$ variation of the threshold voltage ($V_T$) for the minimum sized transistor and scaled the variation with width as $\sigma_L = \sigma_{min}\sqrt{W_{min}L_{min}/WL}$. Note that for random variations it is customary to lump the various sources of variation into an effective variation in the transistor threshold voltage \cite{kuhn2011process}. We ran 1000 Monte-Carlo simulations for each voltage value ranging from 0.35V to 0.675V in steps of 0.025V and each weight level ranging from 0 to 15, and obtained the standard deviation in the output current for each case. This captures the impact of $V_T$ variations over a considerable precision of gate voltages. We calculated the standard deviation about the mean current for the entire range of output current from the cases described above for 16 activated rows of the memory array. The minimum current on the x-axis in Fig. \ref{fig:sd1} arises when the input voltages and (or) the stored weights in the memory array are zero. The next higher level of current is obtained when either the weight or the input voltage is incremented.
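The $\sigma$ scaling and Monte-Carlo procedure can be sketched as follows, using a simplified linear current model $I \propto W(V_{ov} - \Delta V_T)$ in place of the actual transistor characteristics (the overdrive value and current units are illustrative assumptions):

```python
import numpy as np

# Monte-Carlo sketch of V_T-induced spread in the summed column current.
# Sigma scales with device area per the rule in the text; the linear
# current model I ~ W*(V_ov - dV_T) and the overdrive V_ov = 0.3 V are
# illustrative simplifications, not the simulated device behavior.

SIGMA_MIN = 0.030   # 30 mV sigma for the minimum-sized transistor

def sigma_vt(width_ratio):
    """sigma_L = sigma_min * sqrt(W_min*L_min/(W*L)) with L fixed."""
    return SIGMA_MIN / np.sqrt(width_ratio)

def column_current_samples(n_rows=16, width_ratio=8, v_ov=0.3, n_mc=1000):
    """Sample the summed column current over n_mc Monte-Carlo trials."""
    rng = np.random.default_rng(0)
    dvt = rng.normal(0.0, sigma_vt(width_ratio), size=(n_mc, n_rows))
    i_cells = width_ratio * (v_ov - dvt)   # per-cell currents (arb. units)
    return i_cells.sum(axis=1)

i_msb = column_current_samples(width_ratio=8)   # widest (MSB) device
i_lsb = column_current_samples(width_ratio=1)   # minimum-sized (LSB)
print(i_msb.std(), i_lsb.std())
```

Consistent with the trend in Fig. \ref{fig:sd1}, the higher-current (MSB) case shows a larger absolute standard deviation in this toy model, even though the wider device has a smaller $V_T$ sigma and a smaller relative spread.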
It is worth noting that Fig. \ref{fig:sd1} corresponds to 16 activated rows in an array of 64 rows and 128 columns. Further, for the analysis in Fig. \ref{fig:sd1}, we have neglected the effect of line-resistances for the following reasons: 1) adding line-resistances would make the standard deviation in Fig. \ref{fig:sd1} a function not only of the random $V_T$ variations and the weights, but also of the spatial location of the cells, leading to a non-trivial analysis problem that can quickly become intractable; 2) as shown in the previous sub-section, even the worst case error due to line-resistances was well within acceptable limits. Fig. \ref{fig:sd1} shows that the standard deviation is higher for a higher value of output current. We fit a representative standard deviation for each current value using a polynomial fit, as shown by the fitted line in Fig. \ref{fig:sd1}. \begin{figure}[t] \centering \includegraphics[width=3in, keepaspectratio]{sd.pdf} \caption{Standard deviation of the current due to variations in the $V_T$ of the transistors of the bit-cells with increasing current, for 1000 Monte Carlo simulations. A single data point shown here refers to the standard deviation in the output current when 16 rows are activated, the input voltages to all rows are $V_{in}$, and the weights of all elements are $w$. For different data points, we consider $V_{in}$ values ranging from 0.35V to 0.675V in steps of 0.025V and weight levels ranging from 0 to 15 to capture the impact of $V_T$ variations across the input parameter space.} \label{fig:sd1} \end{figure} In the functional simulations to evaluate the classification accuracy under such $V_T$ variations, we calculated the output current from every 16 rows and replaced it with a random value from a Gaussian distribution with the standard deviation corresponding to that particular output current. The final classification accuracy was 0.01\% lower than the case without random variations.
This error resilience is mainly due to the robustness of neural networks and the fact that, as in Fig. 10, most of the weights are concentrated in the lower levels, which have a smaller standard deviation. \section{Discussions} We would like to emphasize that the present proposal aims at providing a means to enable \textit{in-situ} dot-product computations in standard 8T SRAM cells by exploiting the isolated read-port. We believe a wide range of applications can be accelerated using the present proposal. As such, the presented dot-product engine should not be seen only in the context of machine learning and neural network applications. In general, any application that can benefit from approximate vector addition and multiplication is a possible use case for the presented proposal. This wide spectrum of possible use cases implies that the exact details of the required peripheral circuits and their complexity would depend heavily on the target application. For example, error resilient applications like neural networks can rely on low cost peripherals, whereas more traditional dot-product computations, as in image processing, might require sophisticated circuitry. Moreover, one could think of a hybrid, significance-driven peripheral design such that the less significant computations are associated with low overhead peripherals, while more significant operations are enabled by high accuracy circuits or a full digital computation without resorting to dot-product acceleration. The target application would also dictate the constraints on the OPAMP specifications and the required precision of the resistance Rf shown in Fig. 8. In addition, the choice of Config. A versus Config. B would also depend on the target application. For example, Config. A shows better linearity as opposed to Config. B. However, the input voltages in Config. A drive a resistive load, requiring complex driving circuits, as opposed to Config. B, which presents a capacitive load.
The authors would also like to point out that a detailed analysis of the appropriate peripherals and the associated architecture for each individual use case requires a case by case analysis and is not the focus of the present manuscript. The current manuscript is a generic proposal and a study of the effects of intrinsic non-idealities, for example, the non-linearity, the line-resistances and the transistor threshold voltage variations, on the present dot-product engine. \begin{figure}[t] \centering \includegraphics[width=1.9in, keepaspectratio]{energy} \caption{ Average energy comparison between the conventional digital sequential implementation and the proposed Dot-Product Engine (DPE). The energy is reported for 16$\times$16 dot-product computations wherein 16 rows are simultaneously activated and each row consists of 16 4-bit words.} \label{fig:energy} \end{figure} We now present estimates of the energy consumed in performing a 16$\times$16 dot-product operation with and without the proposed dot-product engine. It is worth mentioning that the overall dot-product engine consists of DACs to generate the analog inputs fed to the 8T-SRAM crossbar array, along with ADCs to detect the analog outputs and convert them back to digital bits. A cache memory of size 256KB with a basic sub-array size of 64$\times$128 bits was modeled using the CACTI \cite{muralimanohar2009cacti} simulator. The energy consumption and latency of the peripheral circuitry (ADCs and DACs) were appropriately incorporated in the CACTI model, referring to \cite{liu201010b}. We assume a 16$\times$16 crossbar operation (\textit{i.e.} activating 16 rows at a time with each row containing 16 four bit words) at any given time, thus requiring 16 ADCs in the peripheral circuitry, per sub-array. The conversion time for the ADC operation was assumed to be 10ns and the energy estimates for the ADCs were adopted from \cite{liu201010b}.
This framework was used to evaluate the total energy consumption and latency of the proposal for a test vector of a 16$\times$16 dot-product, compared to the pure digital approach, wherein the dot-product was computed by sequential memory accesses, with multiply and add operations performed in dedicated adders and multipliers synthesized separately. Fig. 12 shows the energy for performing a 16$\times$16 dot-product with the proposed DPE and with the conventional digital approach. The energy overhead of the digital approach stems from the fact that row-by-row accesses to the data in memory, followed by MAC operations, are performed sequentially to compute the same 16$\times$16 matrix-vector dot-product, which the proposed DPE can do in a single instruction. Also, it was noted that the total energy consumption of the dot-product engine had a dominant contribution from the peripheral circuitry. Nevertheless, the energy and latency overheads associated with the DACs and ADCs in similar dot-product engines based on memristors have been extensively studied and can be found in works like \cite{shafiee2016isaac,chi2016prime}. \section{Conclusion} In the quest for novel in-memory techniques for beyond von-Neumann computing, we have presented the 8T-SRAM cell as a vector-matrix dot-product compute engine. Specifically, we have shown two different configurations of the 8T SRAM cell for enabling analog-like multi-bit dot-product computations. We also highlight the trade-offs presented by each of the proposed configurations. The usual 8T SRAM bit-cell circuit remains unaltered, and as such the 8T cell can still be used for normal digital memory read and write operations. The proposed scheme can either be used as a dedicated dot-product compute engine or as an on-demand compute accelerator.
The presented work augments the applicability of 8T cells as compute accelerators, given that dot products find wide use in multiple data-intensive applications and algorithms, including efficient hardware implementations for machine learning and artificial intelligence. \section*{Acknowledgement} The research was funded in part by C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, the National Science Foundation, Intel Corporation and the Vannevar Bush Faculty Fellowship. \bibliographystyle{IEEEtran}
\section{Introduction} \label{introduction} The question of whether modern video prediction models can correctly represent important sources of variability within a video, such as the motion parameters of cameras or objects, has not been addressed in sufficient detail. Moreover, the manner in which existing video prediction datasets have been constructed---both real-world and synthetic---makes probing this question challenging. Real-world datasets have thus far used videos from action recognition datasets~\citep{soomro_ucf101:_2012,karpathy_large-scale_2014} or other ``in-the-wild'' videos~\citep{santana2016learning}, assuring to some degree that the sets of sampled objects and motions will vary realistically between training and testing time. However, as the parameters governing the video appearance and dynamics are unknown, failures in prediction cannot be easily diagnosed. Synthetic datasets, on the other hand, define a set of dynamics (such as object translation or rotation) and apply them to simple objects with pre-defined appearances such as MNIST digits~\citep{lecun_gradient-based_1998} or human head models~\citep{facegen}. Since they are generated from known sets of objects and motion parameters, representations from a video prediction model can be evaluated based on how accurately they predict the generating parameters. However, up until now, both the training and testing splits of these datasets have been generated by sampling from the same object dataset and set of motion parameters, making it difficult to gauge the robustness of video representations to unseen combinations of object and dynamics. Our proposition is simple: a dataset that explicitly controls for appearance and motion parameters---and can alter them between training and testing time---is essential to answering the question of whether video prediction networks capture in their representations useful physical models of the visual world. 
To this end, we propose the Moving Symbols dataset, which extends Moving MNIST~\citep{srivastava_unsupervised_2015} with features designed to answer this question. Unlike existing implementations of Moving MNIST, which hard-code the image source files and speed of dynamics, we allow these parameters to change between training and testing time, enabling us to evaluate a model's robustness to multiple types of variability in the input. Additionally, we log the motion parameters at each time step to enable the evaluation of motion parameter recovery. To demonstrate the utility of our dataset, we evaluate a state-of-the-art video prediction model from \citet{villegas_decomposing_2017} on a series of dataset splits designed to stress-test its capabilities. Specifically, we present it with videos whose objects exhibit appearances and movements that differ from the training set in a controlled manner. Our results suggest that modern video prediction networks may fail to maintain the appearance and motion of the object for unseen appearances and rates, a problem that, to the best of our knowledge, has not yet been clearly expounded. \section{Moving Symbols} \label{dataset} The Moving MNIST dataset~\citep{srivastava_unsupervised_2015} is a popular toy dataset in the video prediction literature~\citep{patraucean_2015_spatio,kalchbrenner_video_2017,denton_unsupervised_2017}. Each video in the dataset consists of one or two single digit images that translate within a $64\times64$ pixel frame with random velocities. Although recent approaches can generate convincing future frames, they have been both trained and evaluated on videos generated by the same object dataset and motion parameters, limiting our understanding of what the networks learn to represent. Moving Symbols overcomes this limitation by allowing sampling from \textit{different} object datasets or sets of motion parameters at training and testing time. 
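A minimal sketch of such a generator, with a configurable speed range and per-frame pose logging (parameter names are illustrative; the released generator supports more options such as scale, rotation, and multiple symbols):

```python
import numpy as np

# Illustrative Moving Symbols-style generator: a single symbol bounces
# inside a 64x64 frame at a speed sampled from a configurable range,
# and its pose is logged at every time step. This is a sketch, not the
# released implementation.

def make_video(symbol, n_frames=20, frame_size=64,
               speed_range=(1.0, 3.0), rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = symbol.shape
    pos = rng.uniform(0, [frame_size - h, frame_size - w])
    speed = rng.uniform(*speed_range)
    angle = rng.uniform(0, 2 * np.pi)
    vel = speed * np.array([np.sin(angle), np.cos(angle)])
    frames, log = [], []
    for t in range(n_frames):
        frame = np.zeros((frame_size, frame_size))
        r, c = int(round(pos[0])), int(round(pos[1]))
        frame[r:r + h, c:c + w] = symbol
        frames.append(frame)
        log.append({'t': t, 'position': pos.copy(), 'speed': speed})
        pos = pos + vel
        for i, limit in enumerate([frame_size - h, frame_size - w]):
            if pos[i] < 0 or pos[i] > limit:   # bounce off the walls
                vel[i] = -vel[i]
                pos[i] = np.clip(pos[i], 0, limit)
    return np.stack(frames), log
```

Because the symbol source and speed range are arguments, generating an out-of-domain test split amounts to calling the generator with different values at test time, while the returned log supports evaluation of motion parameter recovery.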
For example, the training set may contain slow- and fast-moving MNIST digits while the test set may contain Omniglot characters~\citep{lake2015human} at medium speed. By adjusting a configuration file, it is trivial to generate paired train/test splits under varying conditions. Furthermore, the video generator logs the image appearance, pose, and rate of movement of each object at each time step, which enables semantically meaningful evaluation as we demonstrate in Section \ref{sec:experiments}. \section{Experiments} \label{tests} \label{sec:experiments} To demonstrate the insights we can gain from the Moving Symbols dataset, we construct two groups of experiments, where each trial consists of different train/test splits (see Table~\ref{tab:exp_setup}). The experiments are designed to elucidate the model's ability to generalize across variation in either motion (1a-c) or appearance (2a-b). For appearance, to demonstrate the flexibility of our dataset generator, we introduce a novel dataset of 5,000 vector-based icons from \url{icons8.com}. We use our evaluation framework to analyze MCNet~\citep{villegas_decomposing_2017}, a state-of-the-art video prediction model. For each experiment, we generate 10,000 training videos and 1,000 testing videos. We train the model on each unique training set, using ten frames of input to predict ten future frames. After training the model for 80 epochs using the same procedure as \citet{villegas_decomposing_2017}, we evaluate it on the out-of-domain test set with ten input frames and twenty predicted future frames. \begin{table}[t] \centering \includegraphics[width=\textwidth]{figs/exp_setup.pdf} \vspace{-17pt} \caption{Experimental setup. 
The row for each trial describes the objects and motion speeds seen during training and the out-of-domain objects and motion speeds seen during testing.} \label{tab:exp_setup} \end{table} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figs/qual_results.pdf} \vspace{-16pt} \caption{Sample predictions for each experiment from Table \ref{tab:exp_setup}. From top to bottom: (i) sample input frames observed by each prediction model at $t=\{3,6,9\}$; (ii) sample ground truth frames (unobserved) at $t=\{13,16,19\}$; (iii) corresponding predictions from a model tested under the same conditions as the training set; and (iv) predictions from a model tested under different conditions, as per Table \ref{tab:exp_setup}.} \vspace{-12pt} \label{fig:qual_results} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figs/quant_results_test.pdf} \vspace{-12pt} \caption{Quantitative results to compare in-domain (dotted line) and out-of-domain evaluation performance (solid line), with standard error shown in gray. Across future time steps, we report median values for two metrics: positional Mean Squared Error (MSE) between the oracle's predicted position and ground truth (first and third plots), and cross-entropy between the oracle's digit prediction and true label (second and fourth plots). Lower is better in all cases.} \vspace{-8pt} \label{fig:quant_results} \end{figure} Figure \ref{fig:qual_results} shows a comparison of the predictions made for a random test sequence in each experiment. For the speed variation experiments, we observe that when MCNet is evaluated on out-of-domain videos, it propagates motion by replicating trajectories seen during training rather than adapting to the speed seen at test time (e.g. in experiment (1a) from Table \ref{tab:exp_setup}, MCNet slows down test digits). More surprisingly, the model has difficulty preserving the appearance of the digit when motion parameters change. 
Since MCNet explicitly takes in differences between subsequent frames as input, it is possible that the portion of the model encoding frame differences inadvertently captures and misrepresents appearance information. In the appearance variation experiments, MCNet clearly overfits to the appearances of objects seen during training. For example, the MNIST-trained model transforms objects into digit-like images, and the Icons8-trained model, which sees a broader variety of object appearances, transforms digits into iconographic images. This may be due to the use of an adversarial loss to train MCNet, which would penalize appearances that are not seen during training. To obtain quantitative results, we train an ``oracle'' convolutional neural network to estimate the digit class and location of MNIST digits in a $64\times64$ pixel frame and compare them against the logged ground truth. This allows us to draw more interpretable comparisons between generated frames than is possible with common low-level image similarity metrics such as PSNR or SSIM~\citep{wang_image_2004}. For the oracle's architecture, we use LeNet~\citep{lecun_gradient-based_1998} and attach two fully-connected layers to the output of the final convolutional layer to regress the digit's location. On a held-out set of test digits, the trained oracle obtains 98.47\% class prediction accuracy and a Root Mean Squared Error (RMSE) of 3.5 pixels for $64\times64$ frames.
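The oracle-based metrics can be sketched as follows, taking the oracle's outputs as given (the oracle network itself, LeNet with a regression head, is omitted; helper names are ours):

```python
import numpy as np

# Sketch of the oracle-based evaluation: given the oracle's predicted
# digit probabilities and (row, col) positions for generated frames,
# compute positional MSE against the logged ground truth and
# cross-entropy against the true label.

def positional_mse(pred_pos, true_pos):
    """Mean squared error between predicted and logged positions."""
    d = np.asarray(pred_pos, dtype=float) - np.asarray(true_pos, dtype=float)
    return float(np.mean(np.sum(d ** 2, axis=-1)))

def digit_cross_entropy(probs, label):
    """Cross-entropy of the oracle's class distribution vs. the true label."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.log(p[label]))
```

Per-frame values like these are then aggregated as medians across test videos at each future time step, as reported in Fig. \ref{fig:quant_results}.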
The digit prediction performance of the oracle also drops substantially when evaluating on out-of-domain videos, supporting our observation that MCNet has trouble preserving the appearances of objects that move faster or slower than those seen in training. In the appearance variation experiment (2b), we observe a large digit classification cross-entropy error for the out-of-domain case compared to the in-domain case because the Icons8-trained model transforms digits into iconographic objects. Surprisingly, the model preserves the trajectory of unseen digits better than the MNIST-trained model. This might be because predicting the wrong location for pixel-dense Icons8 images would incur a larger pixel-wise penalty during training than predicting the wrong location for pixel-sparse MNIST images. \section{Discussion and Future Work} \label{discussion} We have shown that Moving Symbols can help expose the poor generalization behavior of a state-of-the-art video prediction model, which has implications in the real-world case where unseen objects and motions abound. Only a fraction of our dataset's functionality has been demonstrated here---its other features can be used to construct more elaborate experiments, such as introducing scale, rotation, and multiple symbols with increasing complexity, or making use of the pose information as a further supervisory signal. The community's adoption of this dataset as an objective, open standard would lead to a new generation of self-supervised video prediction models. The Moving Symbols dataset is currently available at \href{https://github.com/rszeto/moving-symbols}{\texttt{https://github.com/rszeto/moving-symbols}}. \bibliographystyle{iclr2018_workshop}
\section{Introduction} \label{section:introduction} Generative Adversarial Networks (or GANs) are a promising technique for building flexible generative models \cite{GANS}. There have been many successful efforts to scale them up to large datasets and new applications \cite{LAPGAN,DCGAN,ACGAN,STACKGAN,PROGRESSIVEGAN,PROJECTION}. There have also been many efforts to better understand their training procedure, and in particular to understand various pathologies that seem to plague that training procedure \cite{UNROLLEDGAN,GENERALIZATIONANDEQUILIBRIUM,FID,LOCALLYSTABLE,PRINCIPLED}. The most notable of these pathologies --- ``mode collapse'' --- is characterized by a tendency of the generator to output samples from a small subset of the modes of the data distribution. In extreme cases, the generator will output only a few unique samples or even just the same sample repeatedly. Instead of studying this pathology and others from a probabilistic perspective, we study the distribution of the squared singular values of the input-output Jacobian of the generator. Studying this quantity allows us to characterize GANs in a new way --- we find that it is predictive of other GAN performance measures. Moreover, we find that by controlling this quantity, we can improve average-case performance measures while greatly reducing inter-run variance of those measures. More specifically, this work makes the following contributions: \begin{itemize} \item We study the squared singular values of the generator Jacobian at individual points in the latent space. We find that the Jacobian generally becomes ill-conditioned quickly at the beginning of training, after which it tends to fall into one of two clusters: a ``good cluster'' in which the condition number stays the same or even gradually decreases, and a ``bad cluster'', in which the condition number continues to grow.
\item We discover a strong correspondence between the conditioning of the Jacobian and two other quantitative metrics for evaluating GAN quality: the Inception Score and the Frechet Inception Distance. GANs with better conditioning tend to perform better according to these metrics. \item We provide evidence that the above correspondence is causal by proposing and testing a new regularization technique, which we call Jacobian Clamping. We show that you can constrain the conditioning of the Jacobian relatively cheaply\footnote{The Jacobian Clamping algorithm doubles the batch size.} and that doing so improves the mean values and reduces inter-run variance of the values for the Inception Score and FID. \end{itemize} \section{Background and Notation} \label{section:background} \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{figures/JacobianClamping_MNIST_v3_crop} \caption{ MNIST Experiments. Left and right columns correspond to 10 runs without and with Jacobian Clamping, respectively. Within each column, each run has a unique color. Top to bottom, rows correspond to mean log-condition number, Classifier Score, and Frechet Distance. Note the dark purple run in the left column: the generator moves from the ill-conditioned cluster to the well-conditioned cluster while also moving from the low-scoring cluster to the high-scoring cluster. } \label{fig:mnist} \end{figure*} \paragraph{Generative Adversarial Networks:} A generative adversarial network (GAN) consists of two neural networks trained in opposition to one another. The generator $G$ takes as input a random noise vector $z \sim p(z)$ and outputs a sample $G(z)$. The discriminator $D$ receives as input either a training sample or a synthesized sample from the generator and outputs a probability distribution $D(x)$ over possible sample sources. 
The discriminator is then trained to minimize the following cost: \begin{equation} L_D = -\E_{x \sim p_\text{data}} [\log D(x)] - \E_{z \sim p(z)} [\log (1 - D(G(z)))] \end{equation} while the generator is trained to minimize\footnote{ This formulation is known as a ``Non-Saturating GAN'' and is the formulation in wide use, but there are others. See \citet{GANS} for more details.}: \begin{equation} L_G = -\E_{z \sim p(z)} [\log D(G(z))] \end{equation} \paragraph{Inception Score and Frechet Inception Distance:} In this work we will refer extensively to two\footnote{ We elect not to use the technique described in \citet{AIS} for reasons explained in \citet{FLOWGAN}.} ``scores'' that have been proposed to evaluate the quality of trained GANs. Both make use of a pre-trained image classifier. The first is the Inception Score \citep{IMPROVEDTECHNIQUES}, which is given by: \begin{equation} \exp\left(\mathbb{E}_{\mathbf{x} \sim P_\theta} [KL(p(y \vert \mathbf{x}) \Vert p(y))] \right) \end{equation} where $\mathbf{x}$ is a GAN sample, $p(y\vert \mathbf{x})$ is the probability for labels $y$ given by a pretrained classifier on $\mathbf{x}$, and $p(y)$ is the overall distribution of labels in the generated samples (according to that classifier). The second is the Frechet Inception Distance \citep{FID}. To compute this distance, one assumes that the activations in the coding layer of the pre-trained classifier come from a multivariate Gaussian. If the activations on the real data are $N(m, C)$ and the activations on the fake data are $N(m_w, C_w)$, the FID is given by: \begin{equation} \|m-m_w\|_2^2 + \Tr\bigl(C+C_w-2\bigl(CC_w\bigr)^{1/2}\bigr) \end{equation} \paragraph{Mathematical Background and Notation:} Consider a GAN generator $G$ mapping from latent space with dimension $n_z$ to an observation space with dimension $n_x$. We can define $Z := R^{n_z}$ and $X := R^{n_x}$ so that we may write $G : z \in Z \to x \in X$.
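To make the two scores concrete, here is a minimal NumPy sketch of both formulas. The function names are ours, and the inputs (per-sample class probabilities and Gaussian activation statistics) are assumed to already come from a pre-trained classifier rather than being computed here; the matrix square root is taken via the symmetric form $\Tr((CC_w)^{1/2}) = \Tr((C^{1/2}C_wC^{1/2})^{1/2})$ so only NumPy is needed.

```python
import numpy as np

def inception_score(p_yx):
    # exp(E_x[KL(p(y|x) || p(y))]) from per-sample class probabilities
    # p_yx of shape (num_samples, num_classes).
    p_y = p_yx.mean(axis=0)
    kl = np.sum(p_yx * (np.log(p_yx) - np.log(p_y)), axis=1)
    return float(np.exp(kl.mean()))

def _sqrtm_psd(A):
    # Matrix square root of a symmetric positive semi-definite matrix.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))).dot(V.T)

def frechet_distance(m, C, m_w, C_w):
    # ||m - m_w||^2 + Tr(C + C_w - 2 (C C_w)^{1/2}) for Gaussian
    # activation statistics of real and generated data.
    Cs = _sqrtm_psd(C)
    tr_cov = np.trace(_sqrtm_psd(Cs.dot(C_w).dot(Cs)))
    return float(np.sum((m - m_w) ** 2) + np.trace(C + C_w) - 2.0 * tr_cov)

# Uniform class probabilities give the minimal Inception Score of 1;
# identical Gaussians give a Frechet distance of 0.
print(inception_score(np.full((5, 10), 0.1)))                     # ~1.0
print(frechet_distance(np.zeros(3), np.eye(3), np.zeros(3), np.eye(3)))  # ~0.0
```

Both checks follow directly from the formulas: KL divergence vanishes when $p(y\vert\mathbf{x}) = p(y)$, and the Frechet distance vanishes between identical Gaussians.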
At any $z \in Z$, $G$ will have a Jacobian $J_z \in R^{n_x \times n_z}$ where $(J_z)_{i,j} := \frac{\partial G(z)_i}{\partial z_j}$. The object we care about will be the distribution of squared singular values of $J_z$. To see why, note that the mapping $M: z \in Z \to J_z^TJ_z$ takes any point $z$ to a symmetric and positive definite matrix of dimension $n_z \times n_z$ and so constitutes a Riemannian metric. We will write $J_z^T J_z$ as $M_z$ (and refer to $M_z$ somewhat sloppily as the ``Metric Tensor''). If we know $M_z$ for all $z \in Z$, we know most of the interesting things to know about the geometry of the manifold induced by $G$. In particular, fix some point $z \in Z$ and consider the eigenvalues $\lambda_1, \dots, \lambda_{n_z}$ and eigenvectors $v_1, \dots, v_{n_z}$ of $M_z$. Then for $\epsilon \in R$ and $k \in [1, n_z]$, \begin{equation} \label{eqn:eqlimit} \lim_{\epsilon \to 0} \frac{||G(z) - G(z + \epsilon v_k)||}{ ||\epsilon v_k||} = \sqrt{\lambda_k} \end{equation} Less formally, the eigenvectors corresponding to the large eigenvalues of $M_z$ at some point $z \in Z$ give directions in which taking a very small ``step'' in $Z$ will result in a large change in $G(z)$ (and analogously with the eigenvectors corresponding to the small eigenvalues). Because of this, many interesting things can be read out of the eigenspectrum of $M_z$. Unfortunately, working with the whole spectrum is unwieldy, so it would be nicer to work with some summary quantity. In this work, we choose to study the condition number of $M_z$ (the best justification we can give for this is that we noticed during exploratory analysis that the condition number was predictive of the Inception Score, but see the supplementary material for further justification of why we chose this quantity and not some other quantity). The condition number is defined for $M_z$ as $\frac{\lambda_{max}}{\lambda_{min}}$.
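These quantities are straightforward to estimate numerically. The sketch below (our own illustration, not code from the paper) builds $J_z$ by central finite differences for a toy linear stand-in generator, forms $M_z = J_z^T J_z$, and reports the log-condition number; for a deep generator one would use automatic differentiation instead.

```python
import numpy as np

def generator(z):
    # Hypothetical stand-in for a GAN generator: a fixed linear map
    # R^2 -> R^3 with singular values 3 and 1.
    W = np.array([[1.0, 0.0], [0.0, 3.0], [0.0, 0.0]])
    return W.dot(z)

def log_condition_number(G, z, eps=1e-5):
    # log(lambda_max / lambda_min) of M_z = J_z^T J_z, with J_z
    # estimated column-by-column via central finite differences.
    n_z = z.shape[0]
    cols = []
    for j in range(n_z):
        e = np.zeros(n_z)
        e[j] = eps
        cols.append((G(z + e) - G(z - e)) / (2 * eps))
    J = np.stack(cols, axis=1)             # J_z, shape (n_x, n_z)
    eigs = np.linalg.eigvalsh(J.T.dot(J))  # eigenvalues of M_z, ascending
    return float(np.log(eigs[-1] / eigs[0]))

z = np.array([0.3, -0.7])
# Singular values of the linear map are 3 and 1, so the eigenvalues of
# M_z are 9 and 1 and the log-condition number is log(9) ~ 2.197.
print(log_condition_number(generator, z))
```

For a linear map the finite-difference Jacobian is exact up to rounding, which makes the toy example a convenient sanity check.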
If the condition number is high, we say that the metric tensor is ``poorly conditioned''. If it's low, we say that the metric tensor is ``well conditioned''. Now note that the eigenvalues of $M_z$ are identical to the squared singular values of $J_z$. This is why we care about the singular value spectrum of $J_z$. \section{Analyzing the Local Geometry of GAN Generators} \label{section:analyzing} \paragraph{The Metric Tensor Becomes Ill-Conditioned During Training:} We fix a batch of $z \sim p(z)$ and examine the condition number of $M_z$ at each of those points as a GAN is training on the MNIST dataset. A plot of the results is in Figure \ref{fig:condition_number_average}, where it can be seen that $M_z$ starts off well-conditioned everywhere and quickly becomes poorly conditioned everywhere. There is considerable variance in how poor the conditioning is, with the log-condition-number ranging from around 12 to around 20. It is natural to ask how consistent this behavior is across different training runs. To that end, we train 10 GANs that are identical up to random initialization and compute the average log-condition number across a fixed batch of $z$ as training progresses (Figure \ref{fig:mnist} Top-Left). Roughly half of the time, the condition number increases rapidly and then stays high or climbs higher. The other half of the time, it increases rapidly and then decreases. This distribution of results is in keeping with the general understanding that GANs are ``unstable''. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{figures/log_condition_number_metric_tensor}} \caption{ The condition number of $M_z$ for a GAN trained on MNIST at various fixed $z$ throughout training.
} \label{fig:condition_number_average} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{figures/vae_comparison_plot}} \caption{ Log spectra of the average Jacobian from 10 training runs of a variational autoencoder and 10 training runs of a GAN. There are a few interesting things about this experiment: First, it gives a way to quantify how much less `stable' the GAN training procedure is than the VAE training procedure. The spectra of the different VAE runs are almost indistinguishable. Second, though the GAN and VAE decoders both take noise from $N(0, I)$ as input, the overall sensitivity of the VAE decoder to its input seems to be quite a bit lower than that of the GAN decoder -- this does not stop the VAE from successfully modeling the MNIST dataset. } \label{fig:vae} \end{center} \vskip -0.2in \end{figure} \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{figures/JacobianClamping_spectra_crop} \caption{ Singular value spectra of the average Jacobian at the end of training, presented in log-scale. } \label{fig:spectra} \end{figure*} The condition number is informative and computing its average over many $z$ gives us a single scalar quantity that we can evaluate over time. However, this is only one of many such quantities that we can compute, and it obscures certain facts about the singular value spectrum of $J_z$ at various $z$. For completeness, we also compute -- following \citet{MCMCVAE}, who does the same for variational autoencoders -- the spectrum of the average (across a batch of $z$) Jacobian. It's not clear a priori what one should expect of these spectra, so to provide context we perform the same computation on 10 training runs of a variational autoencoder \citep{VAES, VAES2}. See Figure \ref{fig:vae} for more details. For convenience, we will largely deal with the condition number going forward. 
\paragraph{Conditioning is Predictive of Other Quality Metrics:} One reason to be interested in this condition number quantity is that it corresponds strongly to other metrics used to evaluate GAN quality. We take two existing metrics for GAN performance and measure how they correspond to the average log condition number of the metric tensor. The first measure is the Inception Score \cite{IMPROVEDTECHNIQUES} and the second measure is the Frechet Inception Distance \cite{FID}. We test GANs trained on three datasets: MNIST, CIFAR-10, and STL-10 \citep{MNIST,CIFAR,STL}. On the MNIST dataset, we modify both of these scores to use a pre-trained MNIST classifier rather than the Inception classifier. On the CIFAR-10 and STL-10 datasets, we use the scores as defined. We resized the STL-10 dataset to $48 \times 48$ as has become standard in the literature about GANs. The hyperparameters we use are those from \citet{DCGAN}, except that we modified the generator where appropriate so that the output would be of the right size. We first discuss results on the MNIST dataset. The left column of Figure \ref{fig:mnist} corresponds to (the same) 10 runs of the GAN training procedure with different random initializations. From top to bottom, the plots show the mean (across the latent space) log-condition number, the classifier score, and the MNIST Frechet Distance. The correspondence between condition number and score is quite strong in both cases. For both the Classifier Score and the Frechet Distance, the 4 runs with the lowest condition number also have the 4 best scores. They also have considerably lower intra-run score variance. \textbf{Note also that the dark purple run, which transitions over time from being in the ill-conditioned cluster to the well-conditioned cluster, also transitions between clusters in the score plots.} Examples such as this provide evidence for the significance of the correspondence. We conducted the same experiment on the CIFAR-10 and STL-10 datasets. 
The results from these experiments can be seen in the left columns of Figure \ref{fig:cifar} and Figure \ref{fig:stl} respectively. The correspondence between condition number and the other two scores is also strong for these datasets. The main difference is that the failure modes on the larger datasets are more dramatic --- in some runs, the Inception Score never goes above 1. For both datasets, however, we can see examples of runs with middling performance according to the score that also have moderate ill-conditioning: \textbf{In the CIFAR-10 experiments, the light purple run has a score that is in between the ``good cluster'' and the ``bad cluster'', and it also has a condition number that is between these clusters. In the STL-10 experiments, both the red and light purple runs exhibit this pattern.} Should we be surprised by this correspondence? We claim that the answer is yes. Both the Frechet Inception Distance and the Inception Score are computed using a pre-trained neural network classifier. The average condition number is a first-order approximation of sensitivity (under the Euclidean metric) that makes no reference at all to this classifier. \paragraph{Conditioning is Related to Missing Modes:} \label{section:conditioningisrelated} Both of the scores aim to measure the extent to which the GAN is ``missing modes''. The Frechet Inception Distance arguably measures this in a more principled way than does the Inception Score, but both are designed with this pathology in mind. We might wonder whether the observed correspondence is partly due to a relationship between generator conditioning and the missing-mode problem. As a coarse-grained way to test this, we performed the following computation: Using the same pre-trained MNIST classifier that was used to compute the scores in Figure \ref{fig:mnist}, we drew 360 samples from each of the 10 models trained in that figure and examined the distribution over predicted classes.
We then found the class for which each model produced the fewest samples. The ill-conditioned models often had 0 samples from the least sampled class, and the well-conditioned models were close to uniformly distributed. In fact, the correlation coefficient between the mean log condition number for the model and the number of samples in the model's least sampled class was $-0.86$. \section{Jacobian Clamping} \label{section:jacobian} Given that the conditioning of $J_z$ corresponds to the Inception Score and FID, it is natural to wonder if there is a causal relationship between these quantities. The notion of causality is slippery and causal inference is an active field of research (see \citet{CAUSALITY} and \citet{MAKINGTHINGSHAPPEN} for overviews from the perspective of computer science and philosophy-of-science respectively) so we do not expect to be able to give a fully satisfactory answer to this question. However, we can perform one relatively popular method for inferring causality \cite{CAUSALREASONINGTHROUGHINTERVENTION, INTERVENTIONSANDCAUSALINFERENCE}, which is to do an intervention study. Specifically, we can attempt to control the conditioning directly and observe what happens to the relevant scores. In this section we propose a method for accomplishing this control and demonstrate that it both improves the mean scores and reduces variance of the scores across runs. We believe that this result represents an important step toward understanding the GAN training procedure. \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{figures/JacobianClamping_CIFAR10_crop} \caption{ CIFAR10 Experiments. Left and right columns correspond to 10 runs without and with Jacobian Clamping, respectively. Within each column, each run has a unique color. Top Row: Mean log-condition number over time. Bottom Row: Frechet Inception Distance over time. 
Note the light purple run (Left) which has a condition number between the ill-conditioned cluster and the well-conditioned one; it also has scores between the low-scoring cluster and the high-scoring one. Note the gold run (Right): it's the only run for which Jacobian Clamping ``failed'', and it's also the only run for which the condition number did not decrease after its initial period of growth. We felt that there was little information conveyed by the Inception Score that was not conveyed by the Frechet Inception Distance, so for reasons of space we have put the Inception Score plots in the supplementary material. } \label{fig:cifar} \end{figure*} \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{figures/JacobianClamping_STL_crop} \caption{ STL10 Experiments. Left and right columns correspond to 10 runs without and with Jacobian Clamping, respectively. Within each column, each run has a unique color. Top Row: Mean log-condition number over time. Bottom Row: Frechet Inception Distance over time. Note the red run (Left): the generator has a condition number between the ill-conditioned cluster and the well-conditioned one; it also has scores between the low-scoring cluster and the high-scoring one. Note also the light purple run (Left) which is similar. As in Figure \ref{fig:cifar}, we have moved the Inception Score plots to the supplementary material. } \label{fig:stl} \end{figure*} \paragraph{Description of the Jacobian Clamping Technique:} The technique we propose here is the simplest technique that we could get working. We tried other more complicated techniques, but they did not perform substantially better. An informal description is as follows: We feed 2 mini-batches at a time to the generator. One batch is noise sampled from $p_z$, the other is identical to the first but with small perturbations added. The size of the perturbations is governed by a hyperparameter $\epsilon$.
We then take the norm of the change in outputs from batch to batch and divide it by the norm of the change in inputs from batch to batch and apply a penalty if that quotient becomes larger than some chosen hyperparameter $\lambda_{max}$ or smaller than another hyperparameter $\lambda_{min}$. The rough effect of this technique should be to encourage all of the singular values of $J_z$ to lie within $[\lambda_{min}, \lambda_{max}]$ for all $z$. See Algorithm \ref{alg:clamping} for a more formal description. With respect to the goal of performing an intervention study, Jacobian Clamping is slightly flawed because it does not directly penalize the condition number. Unfortunately, directly penalizing the condition number during training is not straightforward due to the difficulty of efficiently estimating the smallest eigenvalue \citep{NUMERICALLINEARALGEBRA}. We choose not to worry about this too much; we are more interested in understanding how the spectrum of $J_z$ influences GAN training than in whether the condition number is precisely the right summary quantity to be thinking about. \begin{algorithm}[tb] \caption{Jacobian Clamping} \label{alg:clamping} \begin{algorithmic} \STATE {\bfseries Input:} norm $\epsilon$, target quotients $\lambda_{max}, \lambda_{min}$, batch size $B$ \REPEAT \STATE $z \in R^{B \times n_z} \sim p_z$. \STATE $\delta \in R^{B \times n_z} \sim N(0,1)$. \STATE $\delta := (\delta / ||\delta||) \epsilon$ \STATE $z' := z + \delta$ \STATE $Q := ||G(z) - G(z')|| / ||z - z'||$ \STATE $L_{max} = (\max(Q, \lambda_{max}) - \lambda_{max})^2$ \STATE $L_{min} = (\min(Q, \lambda_{min}) - \lambda_{min})^2$ \STATE $L = L_{max} + L_{min} $ \STATE Perform normal GAN update on $z$ with $L$ added to generator loss. \UNTIL{Training Finished} \end{algorithmic} \end{algorithm} \paragraph{Jacobian Clamping Improves Mean Score and Reduces Variance of Scores:} In this section we evaluate the effects of using Jacobian Clamping.
Our aim here is not to make claims of State-of-the-Art scores\footnote{ We regard these claims as problematic anyway. One issue (among many) is that scores are often reported from a single run, while the improvement in score associated with a given method tends to be of the same scale as the inter-run variance in scores.} but to provide evidence of a causal relationship between the spectrum of $M_z$ and the scores. Jacobian Clamping directly controls the condition number of $M_z$. We show (across 3 standard datasets) that when we implement Jacobian Clamping, the condition number of the generator is decreased, and there is a corresponding improvement in the quality of the scores. This is evidence in favor of the hypothesis that ill-conditioning of $M_z$ ``causes'' bad scores. Specifically, we train the same models as from the previous section using Jacobian Clamping with a $\lambda_{max}$ of 20, a $\lambda_{min}$ of 1, and an $\epsilon$ of 1 and hold everything else the same. As in the previous section, we conducted 10 training runs for each dataset. Broadly speaking, the effect of Jacobian Clamping was to prevent the GANs from falling into the ill-conditioned cluster. This improved the average case performance, but didn't improve the best case performance. For all 3 datasets, we show terminal log spectra of $E_z[J_z]$ in Figure \ref{fig:spectra}. We first discuss the MNIST results. The right column of Figure \ref{fig:mnist} shows measurements from 10 runs using Jacobian Clamping. As compared to their ``unregularized'' counterparts in the left column, the runs using Jacobian Clamping all show condition numbers that stop growing early in training. The runs using Jacobian Clamping have scores similar to the best scores achieved by runs without. The scores also show lower intra-run variance for the ``regularized runs''.
The story is similar for CIFAR-10 and STL-10, the results for which can be seen in the right columns of Figures \ref{fig:cifar} and \ref{fig:stl} respectively. For CIFAR-10, 9 out of 10 runs using Jacobian Clamping fell into the ``good cluster''. The run that scored poorly also had a generator with a high condition number. \textbf{It is noteworthy that the failure mode we observed was one in which the technique failed to constrain the quotient $Q$ rather than one in which the quotient $Q$ was constrained and failure occurred anyway.} It is also (weak) evidence in favor of the causality hypothesis (in particular, it is evidence against the alternative hypothesis that Jacobian Clamping acts to increase scores in some other way than by constraining the conditioning). For STL-10, all runs fell into the good cluster. It's worth mentioning how we chose the values of the hyperparameters: For $\epsilon$ we chose a value of 1 and never changed it because it seemed to work well enough. We then looked at the empirical value of the quotient $Q$ from Algorithm \ref{alg:clamping} during training without Jacobian Clamping. We set $\lambda_{min}, \lambda_{max}$ such that the runs that achieved good scores had $Q$ mostly lying between those two values. We consider the ability to perform this procedure an advantage of Jacobian Clamping. Most techniques that introduce hyperparameters don't come bundled with an algorithm to automatically set those hyperparameters. We have observed that intervening to improve generator conditioning improves generator performance during GAN training. In the supplementary material, we discuss whether this relationship between conditioning and performance holds for all possible generators.
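Algorithm \ref{alg:clamping} amounts to only a few lines in practice. The following NumPy sketch is our own illustration: $G$ here is a plain function standing in for the generator network, the quotient is computed per example (one reasonable reading of the norms in the algorithm), and in a real training loop the returned penalty would simply be added to the generator loss.

```python
import numpy as np

def jacobian_clamping_penalty(G, z, eps=1.0, lambda_min=1.0, lambda_max=20.0,
                              rng=np.random):
    # Jacobian Clamping penalty (sketch). G maps a batch (B, n_z) to
    # (B, n_x); z is the latent batch sampled from p_z.
    delta = rng.standard_normal(z.shape)
    delta = delta / np.linalg.norm(delta, axis=1, keepdims=True) * eps
    z_prime = z + delta
    # Per-example finite-difference quotient Q = ||G(z)-G(z')|| / ||z-z'||.
    Q = (np.linalg.norm(G(z) - G(z_prime), axis=1)
         / np.linalg.norm(z - z_prime, axis=1))
    L_max = (np.maximum(Q, lambda_max) - lambda_max) ** 2
    L_min = (np.minimum(Q, lambda_min) - lambda_min) ** 2
    return float(np.mean(L_max + L_min))

# A map that scales its input by 5 has every quotient equal to 5, which
# lies inside [1, 20], so the penalty vanishes.
G = lambda z: 5.0 * z
z = np.random.standard_normal((4, 8))
print(jacobian_clamping_penalty(G, z))  # 0.0
```

A map scaling by 50 instead yields $Q = 50$ and a penalty of $(50 - 20)^2 = 900$, illustrating how the one-sided squared hinges only activate outside $[\lambda_{min}, \lambda_{max}]$.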
\begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{figures/wgan_plot_crop.png}} \caption{ We train a class-conditional WGAN-GP under 4 different settings: With and without Jacobian Clamping and with 5 and 1 discriminator updates. We perform 5 trials for each setting. We made no effort whatsoever to tune our hyperparameters. We find that when using only 1 discriminator update the baseline model ``collapses'' but the model with Jacobian Clamping does not. When using 5 discriminator updates, the model with Jacobian Clamping performs slightly worse in terms of Inception Score, but this difference is small and could easily be due to the fact that we did not perform any tuning. When using Jacobian Clamping, reducing the number of discriminator steps does not reduce the score, but it more than halves the wall-clock time. } \label{fig:wgangp} \end{center} \vskip -0.2in \end{figure} \paragraph{Jacobian Clamping Speeds Up State-of-the-Art Models:} One limitation of the experimental results we've discussed so far is that they were obtained on a baseline model that does not include modifications that have very recently become popular in the GAN literature. We would like to know how Jacobian Clamping interacts with such modifications as the Wasserstein loss \citep{WGAN}, the gradient penalty \citep{WGANGP}, and various methods of conditioning the generator on label information \citep{CONDITIONAL,ACGAN}. Exhaustively evaluating all of these combinations is outside the scope of this work, so we chose one existing implementation to assess the generality of our findings. We use the software implementation of a conditional GAN with gradient penalty from \url{https://github.com/igul222/improved_wgan_training} as our baseline because this is the model from \citet{WGANGP} that scored the highest. 
With its default hyperparameters this model has little variance in scores between runs but is quite slow, as it performs 5 discriminator updates per generator update. It would thus be desirable to find a way to achieve the same results with fewer discriminator updates. Loosely following \citet{CRAMERGAN}, we jointly vary the number of discriminator steps and whether Jacobian Clamping is applied. \textbf{Using the same hyperparameters as in previous experiments (that is, we made no attempt to tune for score) we find that reducing the number of discriminator updates and adding Jacobian Clamping more than halves the wall-clock time with little degradation in score.} See Figure \ref{fig:wgangp} for more details. \section{Related Work} \label{section:relatedwork} \textbf{GANs and other Deep Generative Models:} There has been too much work on GANs to adequately survey it here, so we give an incomplete sketch: One strand attempts to scale GANs up to work on larger datasets of high-resolution images with more variability \cite{LAPGAN,DCGAN,ACGAN,STACKGAN,PROGRESSIVEGAN,PROJECTION,SAGAN}. Another strand focuses on applications such as image-to-image translation \cite{CYCLEGAN}, domain adaptation \cite{DOMAINADAPTATION}, and super-resolution \cite{SUPERRESOLUTION}. Other work focuses on addressing pathologies of the training procedure \cite{UNROLLEDGAN}, on making theoretical claims \cite{GENERALIZATIONANDEQUILIBRIUM} or on evaluating trained GANs \cite{LEARNTHEDISTRIBUTION}. In spectral normalization \cite{SPECTRALNORM}, the largest singular value of the individual layer Jacobians in the discriminator is approximately penalized using the power method (see \citet{NUMERICALLINEARALGEBRA} for an explanation of this). If Jacobian Clamping is performed with $\lambda_{min} = 0$, then it is vaguely similar to performing spectral normalization \textit{on the generator}. See \citet{GANSURVEY} for a fuller accounting.
\textbf{Geometry and Neural Networks:} Early work on geometry and neural networks includes the Contractive Autoencoder \cite{CONTRACTIVEAUTOENCODERS}, in which an autoencoder is modified by penalizing the norm of the derivatives of its hidden units with respect to its input. \citet{REPRESENTATIONLEARNING} discuss an interpretation of representation learning as manifold learning. More recently, \citet{MANIFOLDGAN} improved semi-supervised learning results by enforcing geometric invariances on the classifier and \citet{DYNAMICALISOMETRY} study the spectrum of squared singular values of the input-output Jacobian for feed-forward classifiers with random weights. \citet{SENSITIVITYANDGENERALIZATION} explore the relationship between the norm of that Jacobian and the generalization error of the classifier. In a related vein, three similar papers \cite{ODDITY,METRICS,GEOMETRY} have explicitly studied variational autoencoders through the lens of geometry. \textbf{Invertible Density Estimators and Adversarial Training:} In \citet{FLOWGAN} and \citet{NVPGAN}, adversarial training is compared to maximum likelihood training of generative image models using an invertible decoder as in \citet{NICE,nvp}. They find that the decoder spectrum drops off more quickly when using adversarial training than when using maximum likelihood training. This finding is evidence that ill-conditioning of the generator is somehow fundamentally coupled with adversarial training techniques. Our work instead studies the variation of the conditioning among many runs of the same GAN training procedure, going on to show that this variation corresponds to the variation in scores and that intervening with Jacobian Clamping dramatically changes this variation. We also find that the ill-conditioning does not always happen for adversarial training --- see Figure \ref{fig:spectra}.
\section{Conclusions and Future Work} \label{section:conclusion} We studied the dynamics of the generator Jacobian and found that (during training) it generally becomes ill-conditioned everywhere. We then noted a strong correspondence between the conditioning of the Jacobian and two quantitative metrics for evaluating GANs. By explicitly controlling the conditioning during training through a technique that we call Jacobian Clamping, we were able to improve the two other quantitative measures of GAN performance. We thus provided evidence that there is a causal relationship between the conditioning of GAN generators and the ``quality'' of the models represented by those GAN generators. We believe this work represents a significant step toward understanding GAN training dynamics. \section*{Acknowledgements} We thank Ben Poole, Luke Metz, Jonathon Shlens, Vincent Dumoulin and Balaji Lakshminarayanan for commenting on earlier drafts. We thank Ishaan Gulrajani for sharing code for a baseline CIFAR-10 GAN implementation. We thank Daniel Duckworth for help implementing an efficient Jacobian computation in TensorFlow. We thank Sam Schoenholz, Matt Hoffman, Nic Ford, Jascha Sohl-Dickstein, Justin Gilmer, George Dahl, and Matthew Johnson for helpful discussions regarding the content of this work.
{ "timestamp": "2018-06-20T02:04:59", "yymm": "1802", "arxiv_id": "1802.08768", "language": "en", "url": "https://arxiv.org/abs/1802.08768" }
\section{Introduction} Modeling the diffusion of innovation, which can be technological or cultural, has an extensive history, over decades and across disciplines (see, e.g., Coleman \textit{et al.} (1966); Rogers (2003); Mahajan and Peterson (1985)). One process absent from all innovation diffusion models is suppression by one group of adoption or use of a cultural item in another group. Yet, as we describe below, this process clearly occurs for some items and has been recognized in some theoretical and empirical work. We therefore present, analyze, and preliminarily test such a model here, adapted from the Bolker-Pacala model in population biology. People may adopt or abandon cultural items for a variety of reasons, including intrinsic value to them, identity signals to themselves or others, and social pressure. There also may be external coercion applied. It has long been noted in social science that adoption of some cultural item may depend on its use by others. This effect can be positive for many reasons (Leibenstein, 1950; Lieberson, 2000) but also negative, in, for example, what Leibenstein called the snob effect (Leibenstein, 1950). More recently, and more specifically relevant to the process we are newly including here, Berger and colleagues (Berger \& Heath, 2008; Berger \& Le Mens, 2009) have pointed out that adoption of a cultural item by one group may induce members of another group to abandon it. Berger and Heath (2007, 2008) make a strong case for this for many cultural items, such as clothing brands and kinds of automobiles, explaining this effect with an identity motivation (Berger \& Heath, 2007). In \emph{Freakonomics}, Levitt and Dubner (2005) allege this phenomenon for first names, claiming that their California names data show that lower classes adopt names that the higher classes are using, but then that the higher classes abandon those names because the lower classes are now using them.
Note that in the above examples the negative effect is an internal phenomenon, in that the suppressing group is not trying to lower use and adoption by the other group. Historically, however, external suppression also has occurred. One clear example is sumptuary laws, for example, in the Middle Ages in Europe, wherein clothes of certain colors and materials were not permitted to people below a certain social status. Some barriers to entry, such as requiring men who join the cavalry to come with a horse or charging high fees at golf courses, can also be seen as a higher socioeconomic class keeping certain cultural items at a low level in a lower socioeconomic class. In the case of external suppression, the reason for lower adoption and use will be different from when the phenomenon is internal; it will not be due to identity motivation, for example. For our purposes, however, in constructing a model incorporating this negative effect, the exact mechanism is not important. As yet, models of the diffusion of innovation have not included such negative effects of use by one group on adoption and use by another group. Early models naturally were simplest, assuming a single homogeneous population with a single mode of diffusion. These were made more complex, again, in various natural ways. Populations were made heterogeneous, sometimes by positing segmented populations, such that the process works differently in different segments, sometimes modeling a continuous distribution of the population in characteristics or adoption propensity. Different sources of diffusion were considered, for example, other people or media, and in some models, such as the Bass model (Bass, 1969), different sources were combined. Bartholomew added a loss-of-interest mechanism to those in the Bass model and also presented a stochastic model, in contrast to the common deterministic models (Bartholomew, 1976).
Alternative assumptions were made concerning the underlying mechanism of diffusion, such as social contagion simply through exposure, social pressure or conformity, and social learning (Young 2009). Different channels of diffusion also have been examined, for example, with new attention to online diffusion (Go\"{e}l \textit{et al.}, 2012). One impetus behind the proliferation of models was that not all data showed the same pattern. For example, the archetypical innovation diffusion pattern is the logistic curve, with an adoption rate peak somewhere in the middle of the diffusion process. But for some products, adoption was bimodal, with an early rate of adoption peak, then a lull with relatively few new adoptions, then another rate of adoption peak. This required new models (Karmeshu \& Goswami, 2001, 2008; Karmeshu \& Sharma, 2004). To state a general principle, different cultural products may differ in their underlying diffusion mechanisms, and so may have fundamentally different diffusion patterns and require different models. Our new model is presented in the spirit of this principle. Similar to previous models, it relies on something like contagion as part of the underlying mechanism. It also is a mean field model, i.e., it works with variables aggregated above the individual level, and it divides the population into different segments. The crucial novelty of this model is that adoption of the product by one segment may have a negative effect on adoption by another segment. Adoption by one segment is also allowed to have a positive effect on adoption by another segment, and excessive adoption within a segment may have a negative effect on further adoption within the segment. These three effects we call ``external suppression,'' ``external stimulation,'' and ``internal suppression,'' respectively. In the model, they are all mathematically second-order effects. As an example and a test, we apply this model to tablet use from 2012 to 2016.
In numerous countries, tablet use increases in the oldest age group (55+) and at first increases even faster in the youngest age group (under 25). It appears, however, that when the oldest age group use reaches a certain level, somewhere close to 30 percent, tablet use in the youngest age group starts to decrease. A plausible mechanism is that when use among older adults gets sufficiently high, the young begin to perceive tablets as an older person's device, or at least not something that can differentiate them from older people. Consequently, young people become less likely to adopt and some even stop using the device. This may well be motivated by identity considerations, consistent with Berger and Heath's (2007) argument, but, again, it is not necessary to know the exact mechanism to model its population effect. For this process, we borrow an appropriate, existing model that has been adapted from the Bolker-Pacala model of population dynamics (Bolker \& Pacala, 1999; Bolker \textit{et al.}, 2003). We describe the model in the second section of the paper, summarizing the relevant theoretical finding. In the third section we present the empirical data on tablet use in the form of plots, along with plots of simulations that produce qualitatively similar patterns. The fourth section summarizes and draws conclusions. \section{The Model} We introduce the following model of innovation diffusion, which we call the BP model of innovation diffusion because it is taken from a multi-group mean field approximation of the Bolker-Pacala model of population dynamics in biology (Bessonov \textit{et al.}, 2014, 2016). The population dynamics model posits an initial population of individuals living on a lattice, i.e., a multi-dimensional grid. The lattice can represent geographical space, its typical biological use, but it also can represent other spaces on which a population may be distributed.
For example, it could be a one-dimensional space of age or a multidimensional space with dimensions of age, ethnicity, various socioeconomic status measures, and so forth (see, e.g., McPherson, 1983). Each individual can give birth to another individual or die or migrate, all at certain rates. In addition to their intrinsic rates, the existence of individuals may be affected negatively (suppressed) by the presence of other individuals. A mean field treatment of this is mathematically tractable, and, in fact, is equivalent to a kind of random walk. In the multi-group version, the population is partitioned into $N$ different groups. Suppression can occur both within a group and across groups. Before exposition of the BP model, let us discuss why the mathematical models we present are useful, that is, why we might want to develop and analyze them instead of, for example, simply looking at a simulation of individuals making certain probabilistic choices. We present two versions of the BP model, one a stochastic version, equivalent to a random walk, and the other a system of differential equations giving a deterministic trajectory, together with other equations describing the fluctuations around that trajectory. We can begin by noting that the random walk is exactly equivalent to the simulation of individuals making probabilistic choices. Nevertheless, by casting it as a random walk we gain the ability to use the theoretical apparatus that has been developed for random walks, such as the conditions under which it approaches a steady state distribution and other outcomes that we do not develop here.
Analyzing it as stochastic fluctuations around a deterministic solution to a system of differential equations allows us to identify and classify equilibria, to precisely partition the parameter space with regard to equilibria, that is, the likely fate of the process, and even to note the possibility of interesting rare events such as a large fluctuation pushing the system from one equilibrium to another. To model innovation diffusion, the initial population consists of the initial adopters of the cultural product. Adoption of the product by a new person corresponds to birth and abandonment of the product corresponds to death. Suppression within and across groups can inhibit further adoptions or even reduce use of the product within a group. Migration corresponds to movement by an adopter from one group to another. The continuous time model may be presented as follows (Bessonov \textit{et al.}, 2016). Represent the number in each group $Q_{i,L}$, $i=1,\ldots,N$, at time $t$ who have adopted the cultural product by \begin{align}\label{RW} \boldsymbol{n}(t) = \{n_1(t),n_2(t),\ldots,n_N(t)\}, \end{align} a continuous time random walk on $(\mathbb{Z}_{+})^{N}$ with rates obtained from, for $i,j = 1,2,\ldots,N$ \begin{align}\label{DE1} \boldsymbol{n}(t &+ dt) \,\big|\, \boldsymbol{n}(t) \\ \notag&= \boldsymbol{n}(t) + \begin{dcases} \, e_i & \text{w. pr. } \beta_i n_i(t)dt + o(dt^2)\\ -e_i & \text{w. pr. } \mu_i n_i(t)dt +\frac{n_i(t)}{L}\sum_{j=1}^{N}a_{ji}n_j(t)dt + o(dt^2)\\ e_{j} - e_{i} & \text{w. pr. } n_i(t)q_{ij}dt + o(dt^2), \quad j\neq i \\ 0 & \text{w. pr. } 1- \sum_{i=1}^{N}(\beta_i+\mu_i)n_i(t)dt \\ & \qquad \qquad - \frac{1}{L}\sum_{i,j}n_i(t)n_j(t)a_{ij}dt -\sum_{i\neq j}n_i(t)q_{ij}dt + o(dt^2)\\ \text{other } & \text{w. pr. } o(dt^2) \end{dcases} \end{align} where $e_i$ is the vector with $1$ in the $i^{th}$ position and $0$ everywhere else. Let us define the variables and parameters. $\beta_i$ is the adoption rate and $\mu_i$ is the abandonment rate.
The subscript means that they may vary by group. The multiplication of $\beta_i$ by $n_i$ fits the mechanism being contagion or exposure: it depends on the number who have adopted already. The multiplication of $\mu_i$ by $n_i$ is because it is precisely those who have adopted a product who can abandon it subsequently. $q_{ij}$ is the rate of migration from group $i$ to group $j$. Whether this is possible depends on the nature of the groups. For example, if they are adjacent age groups, then a positive migration rate from the younger to the older group is inevitable, but the reverse rate must be 0. In contrast, if the groups are social classes, then movement between all social classes, which is likely, would be conveyed by all migration rates being positive. The parameter $a_{ij}$ is the rate of suppression of group $j$ by group $i$, where $i$ and $j$ can be the same. Finally, $L$ is adoption capacity, a control for the total number that can adopt, in other words a scale parameter. Table 1 lists the model parameters together with their meanings. \begin{table}[ht] \centering \caption{Model Variables and Parameters.} \begin{tabular}{cc} \hline \hline Parameter & Meaning \\ \hline $n_i$ & Number of adoptees in group $i$ \\ $\beta_i$ & Adoption rate in group $i$ \\ $\mu_i$ & Abandonment rate in group $i$ \\ $a_{ii}$ & Competition or self-limiting rate for group $i$ \\ $a_{ij}$ & Rate of suppression of group $j$ by group $i$ \\ $q_{ij}$ & Migration rate from group $i$ to group $j$ \\ $L$ & Adoption capacity \\ \hline \end{tabular} \end{table} Equation \ref{DE1} allows us to construct a system of differential equations, but it is convenient to normalize the number of adoptees by dividing by $L$. We set $$z_i(t) := \frac{n_i(t)}{L}, \qquad i=1,\ldots,N.$$ and define, for $i=1,\ldots, N$ \begin{align}\label{defF} F_i(\mathbf{z}(t)) = \left(\beta_i - \mu_i - \sum_{j\neq i}q_{ij}\right)z_i - a_{ii} z_i^2 - \displaystyle\sum_{j\ne i}a_{ji}z_iz_j + \sum_{j\neq i}q_{ji} z_j.
\end{align} Then, the normalized system of differential equations is \begin{equation}\label{dfs2} \frac{d \mathbf{z}(t)}{dt} = \mathbf{F}(\mathbf{z}(t)). \end{equation} An equilibrium for the system occurs precisely at the points where \begin{equation}\label{dfs3} \mathbf{0} = \mathbf{F}(\mathbf{z}), \end{equation} with one solution being $\mathbf{z} \equiv \mathbf{0}$. This process has a functional Law of Large Numbers and functional Central Limit Theorem, that is, as $L \to \infty$ the process converges to a Gaussian diffusion (Bessonov \textit{et al.}, 2016; Kurtz, 1971). What this means is that for reasonably large $L$ the process will be very close to the following. There is a central tendency that is a deterministic trajectory, given by the system of ordinary differential equations \ref{dfs2} together with an initial value $\mathbf{z_0}$ (see eq. \ref{DE4} for the system for our two-group model). Because of the stochasticity of the process, however, the values at any given time $s$ will be normally distributed about that deterministic trajectory, with $N \times N$ covariance matrix $\mathbf{G}(\mathbf{z}(t))$ (Bessonov \textit{et al.}, 2016; Kurtz, 1971) \begin{equation}\label{cov1} \mathbf{G}(\mathbf{z}(t)) = \begin{dcases} \, G_{ii}(\mathbf{z}(t)) = (\beta_i + \mu_i + \sum_{j \ne i} q_{ij}) z_i + \sum_{j} a_{ji} z_iz_j + \sum_{j \ne i} q_{ji} z_j \\ \, G_{ij}(\mathbf{z}(t)) = -q_{ij}z_i - q_{ji}z_j & i \ne j \end{dcases} \end{equation} This means that from time $s$ to $s+\delta$, for small time increment $\delta$, the covariance matrix will be approximately $\delta\,\mathbf{G}(\mathbf{z}(s))$. For our two-group model, the diffusion is in two dimensions with a $2 \times 2$ covariance matrix. The normalized system (Eqs. \ref{defF}, \ref{dfs2}, \ref{dfs3}) shows that the process possesses equilibria or steady states, that is, points where the deterministic trajectory remains constant (Bessonov \textit{et al.}, 2016).
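As a concrete illustration, the deterministic trajectory of eq. \ref{dfs2} can be computed numerically. The following sketch is our own illustration, not code from Bessonov \textit{et al.}; it integrates the two-group system with one-way migration ($q_{12}=0$) by forward Euler steps, with parameter names mirroring the notation above.

```python
def F(z, beta, mu, a, q21):
    """Right-hand side F_i(z) for N = 2 groups with q_12 = 0.

    a[i][j] is the suppression rate of group j+1 by group i+1.
    """
    z1, z2 = z
    f1 = (beta[0] - mu[0]) * z1 - a[0][0] * z1 ** 2 - a[1][0] * z1 * z2 + q21 * z2
    f2 = (beta[1] - mu[1] - q21) * z2 - a[1][1] * z2 ** 2 - a[0][1] * z1 * z2
    return f1, f2

def integrate(z0, beta, mu, a, q21, dt=1.0, steps=200000):
    """Forward-Euler integration of dz/dt = F(z) (adequate here since
    the rates are tiny relative to dt)."""
    z = z0
    for _ in range(steps):
        f1, f2 = F(z, beta, mu, a, q21)
        z = (z[0] + dt * f1, z[1] + dt * f2)
    return z
```

With the parameter values used in the two-group example later in this section, the trajectory settles near $\mathbf{z} \approx (1.193, 0.921)$, i.e., $\mathbf{n} \approx (1193, 921)$ after scaling by $L$.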
From the normalized system, we can find the equilibria and whether the equilibria are stable or unstable, that is, whether when close to an equilibrium the process will approach the equilibrium or not. In addition, for purposes of simulation, we need transition probabilities for discrete time, which are easily available from equation \ref{DE1}. Specifically, for $N$ groups, we can simulate the embedded discrete time random walk on $(\mathbb{Z}_{+})^N$, denoted $\{X_n\}_{n=0}^{\infty}$, associated with the continuous time random walk \eqref{RW}. For $\mathbf{x} = (x_1,\ldots,x_N)\in(\mathbb{Z}_{+})^N,$ set $$c(\mathbf{x}) = \sum_{i=1}^N \left(\beta_i + \mu_i + \frac{1}{L}\sum_{j=1}^N a_{ji}x_j\right)x_i + \sum_{i,j=1, i\ne j}^N q_{ij} x_i.$$ $\{X_n\}$ has transition probabilities, for $\mathbf{x}, \mathbf{y} \in (\mathbb{Z}_{+})^N$, $\mathbf{x}\neq\mathbf{0}$ \begin{equation}\label{Transition} P(\mathbf{x}, \mathbf{y}) = \frac{1}{c(\mathbf{x})}\cdot \begin{dcases} \, \beta_i x_i & \text{if } \mathbf{y} = \mathbf{x} + e_i, i=1,\ldots,N \\ \, \mu_i x_i + \frac{x_i}{L} \sum_{j=1}^N a_{ji} x_j & \text{if } \mathbf{y} = \mathbf{x} - e_i, i=1,\ldots,N\\ \, q_{ij} x_i & \text{if } \mathbf{y} = \mathbf{x} - e_i + e_j, i \ne j \\ \, 0 & \text{ otherwise } \end{dcases} \end{equation} Recall that we use $e_i\in\mathbb{Z}^N$ to denote the vector with $1$ in the $i^{th}$ position and $0$ everywhere else. Bessonov \textit{et al.} (2016) shows that a random walk with these transition probabilities is geometrically ergodic. That is, it is positive recurrent with exponential convergence to a stationary distribution. \subsection{Model for 2 groups} We now focus on a model restricted to two groups. Fig. 1 shows a diagram of the random walk for this process starting at a point where $n_1$ members of group 1 have adopted and $n_2$ members of group 2 have adopted.
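For simulation, the embedded chain can be coded directly from these transition rates. The following sketch is our own illustration; following the continuous-time rates, we include the cross-group suppression term in the abandonment rate.

```python
import random

def step(x, beta, mu, a, q, L, rng):
    """Sample one step of the embedded random walk from state x (list of
    group counts).  a[i][j] is the suppression rate of group j+1 by group
    i+1; q[i][j] is the migration rate from group i+1 to group j+1."""
    N = len(x)
    states, rates = [], []
    for i in range(N):
        up = list(x); up[i] += 1                     # adoption in group i
        states.append(up); rates.append(beta[i] * x[i])
        down = list(x); down[i] -= 1                 # abandonment/suppression
        states.append(down)
        rates.append(mu[i] * x[i]
                     + x[i] / L * sum(a[j][i] * x[j] for j in range(N)))
        for j in range(N):                           # migration i -> j
            if j != i and q[i][j] > 0:
                mv = list(x); mv[i] -= 1; mv[j] += 1
                states.append(mv); rates.append(q[i][j] * x[i])
    total = sum(rates)
    if total == 0:                                   # absorbed at the origin
        return x
    u = rng.random() * total
    acc = 0.0
    for s, r in zip(states, rates):
        acc += r
        if u <= acc:
            return s
    return states[-1]
```

Iterating `step` from an initial state such as `[10, 10]` produces trajectories like those shown in Figs. 2 and 3 below.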
\begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{RWbp2b.jpg}} \end{center} \caption{Innovation Diffusion for Two Groups as Random Walk} \end{figure} To match our empirical case and keep the calculations straightforward, we assume only one-way migration. That is, we assume $q_{12}=0$ but allow $q_{21} \geq 0$. We obtain the equilibria using eqs. \ref{defF}, \ref{dfs2}, \ref{dfs3} and multiplying by $L$ to scale up to population values, or alternatively, by using the following \begin{equation}\label{DE4} \begin{dcases} & \frac{dn_1}{dt} = v_1 n_1(t) - \frac{a_{11}}{L}n_1^2(t) - \frac{a_{21}}{L}n_1(t)n_2(t) + q_{21}n_2(t)\\ & \frac{dn_2}{dt} = v_2 n_2(t) - \frac{a_{22}}{L}n_2^2(t) - \frac{a_{12}}{L}n_1(t)n_2(t) - q_{21}n_2(t), \end{dcases} \end{equation} where, to simplify notation, we use a ``net rate of adoption'' for each group by setting $v_i := \beta_i - \mu_i$, for all $i$. Mathematically, up to four equilibrium points exist, which we label $E_1$, $E_2$, $E_3$, and $E_4$, but in fact all four are never simultaneously valid. For some configurations of the parameters, only two real singularities exist; the other two are complex and therefore can be disregarded. For the remaining configurations of parameters, one equilibrium point has a negative value in one of its coordinates. As that is impossible when the coordinate represents the number of people who have adopted some cultural product, this equilibrium is not valid. Once the two or three singularities are identified, we carry out a stability analysis by evaluating the Jacobian of the system of differential equations at the different points and using the eigenvalues to classify the kind of singularity in the usual fashion (see, e.g., Logan, 2006). We will not present an exhaustive description of the kinds of singularities that can exist; that is available in numerous textbooks. The most important for our purposes are the following.
A stable proper node is a point that a trajectory approaches directly; a focus is one that it approaches by spiraling around it. Unstable versions of these exist: instead of approaching the equilibrium the trajectory leaves it, in the same fashion. A saddle point is a singularity that a trajectory approaches in one direction but leaves in a different direction (e.g., approaches from the south but recedes heading west); the result is that the trajectory makes a more-or-less near pass by the singularity and leaves. As simple inspection of eqs. \ref{DE4} shows, $E_1 := (0,0)$ is an equilibrium point. The eigenvalues at $(0,0)$ are $\lambda_1 = v_1$ and $\lambda_2 = v_2 - q_{21}$. Thus, if the net rate of adoption in group 2 is greater than the rate of migration from group 2 to group 1, that is, $v_2 > q_{21}$, which is by far the most likely scenario, then $(0,0)$ is an unstable proper node; over time, trajectories go away from it. Should the migration rate be greater, $q_{21} > v_2$, then $(0,0)$ will be a saddle point---still not a point of attractive stability. A second singularity always exists at $E_2 := (\frac{v_1L}{a_{11}},0)$. One eigenvalue is $\lambda_1 = -v_1$ and the second is $\lambda_2 = v_2 - \frac{a_{12}v_1}{a_{11}}- q_{21}$. Here, if the net adoption rate in group 2, $v_2$, is small enough, then $\lambda_2 < 0$ and this singularity will be an asymptotically stable proper node. That is, trajectories will converge to this point; use of the cultural product in group 2 will die out. Two other possible equilibria exist. They are complicated, involving the two roots of a quadratic equation.
Setting \[ R := \sqrt{(a_{12}q_{21}-a_{21}q_{21}-a_{22}v_1+a_{21}v_2)^2-4(a_{11}a_{22}-a_{12}a_{21})(q_{21}^2-q_{21}v_2)} \] $E_3$ and $E_4$ are given, respectively, by the upper and lower signs in $\mp$ and $\pm$ below: \begin{equation*} \begin{array}{l} n_1 = L\left(\frac{\big(a_{21}q_{21}-a_{12}q_{21}+a_{22}v_1-a_{21}v_2 \mp R\big)}{2(a_{11}a_{22}-a_{12}a_{21})}\right), \\ n_2 = \frac{L}{a_{22}} \left(v_2-q_{21} + \frac{a_{12}\big(a_{12}q_{21}-a_{21}q_{21}-a_{22}v_1+a_{21}v_2 \pm R \big)}{2(a_{11}a_{22}-a_{12}a_{21})}\right). \end{array} \end{equation*} \noindent \emph{Simplification.} Let us simplify the situation by assuming only one-way suppression, namely that $a_{21} = 0$. This corresponds to the empirical application in the next section. In this case, singularities $E_3$ and $E_4$ never can exist together. Either both are complex or one takes a negative value for one of its coordinates. In fact, $E_3$ can never be positive in both of its coordinates, so $E_3$ is not a viable equilibrium point. $E_4$ can be a viable equilibrium point, however. If so, it can be either a stable spiral, or an asymptotically stable proper node. Either way, $E_2$ has to be a saddle point and $E_1$ has to be an unstable proper node. If neither $E_3$ nor $E_4$ is a viable singularity, then $E_1$ can be either an asymptotically stable proper node or a saddle point and $E_2$ is an asymptotically stable proper node. The table below summarizes the three possible configurations of singularities, along with a simple necessary condition. The full conditions distinguishing the first and second configurations are complicated and so omitted from the table.
\[ \begin{array}{ccccc} \mathrm{Necessary} & E_1 & E_2 & E_3 & E_4 \\ \mathrm{Condition} \\ \hline v_2 > q_{21} & \text{unstable} & \text{saddle point} & \text{not viable} & \text{asymptotically} \\ & \text{proper node} & & & \text{stable focus} \\ \\ v_2 > q_{21} & \text{unstable} & \text{saddle point} & \text{not viable} & \text{asymptotically} \\ & \text{proper node} & & & \text{stable proper node} \\ \\ v_2 \leq q_{21} & \text{saddle point} & \text{asymptotically} & \text{not viable} & \text{not viable} \\ & & \text{stable proper node} \\ \hline \end{array} \] \noindent\hspace{1 cm} Table 1. Possible Equilibrium States for Innovation Diffusion Model. \subsection{Two group example with stable positive equilibrium} As an example of the innovation diffusion process, and to provide a comparison with the empirical data from the next section, we present the model outcomes for parameter settings chosen in the range in which there is a stable positive equilibrium. Specifically, we use as parameters the following values: initial values, $n_1(0) = n_2(0) = 10$; scale, $L=1000$; adoption and abandonment rates for group 1, $\beta_1 = .0003$, $\mu_1 = .0001$, for group 2, $\beta_2 = .0006$, $\mu_2 = .0001$; internal suppression, $a_{11}=.0002$, $a_{22} = .0001$; suppression (inhibition) of group 2 by group 1, $a_{12} = .0003$; migration from group 2 to group 1, $q_{21} = .00005$. Near the stable equilibrium, this process approaches an Ornstein-Uhlenbeck process. In other words, there will be stochastic fluctuations about the mean trajectory given by the system of differential equations in eqn. \ref{DE4}, and once the trajectory nears the equilibrium point these fluctuations will be distributed normally. There, the trajectory will have local drift $\mathbf{F}'(E_4)$ (see eq. \ref{defF}) and local covariance matrix $\mathbf{G}(E_4)$ (see eq. \ref{cov1}). We also carried out a discrete time simulation of the innovation diffusion process with the same parameters.
This used the embedded random walk with transition probabilities given in eq. \ref{Transition}. Fig. 2 shows the trajectories of the numbers of adoptees in the first and second groups. \begin{figure}[h] \begin{center} \scalebox{0.9}{\includegraphics{n1n2.jpg}} \end{center} \caption{Trajectories of Simulation of Innovation Diffusion for Two Groups} \end{figure} Another way of looking at the outcome is the projection of the trajectory in the $n_1 - n_2$ plane, showing how the numbers of adoptees in each group relate to each other. Labeling the groups ``older'' and ``younger'' to match the empirical examples to follow, one such simulated trajectory (the wobbly curve) is shown in Fig. 3, together with the numerical solution of the differential equation system, eq. \ref{DE4}, with the same parameters and initial conditions (the smooth curve). In the simulated trajectory, the number of adoptees in the older group ($n_1$) increases monotonically in time, so that time may be taken as increasing from left to right. The presentation of the differential equation system solution is parametric, so that time increases as the smooth curve proceeds away from the origin. Clearly, the simulation produces a stochastic path close to the smooth deterministic path given by the differential equation system. \begin{figure}[h] \begin{center} \scalebox{0.4}{\includegraphics{Comb1.jpg}} \end{center} \caption{Simulation of Trajectory for Innovation Diffusion for Two Groups} \end{figure} The stability analysis for the model with these parameters gives three singularities, an unstable proper node at $E_1 = (0,0)$, a saddle point at $E_2 = (1000,0)$, and a stable focus (in-spiral) at $E_4 = (1193, 921)$; this last corresponds to the limit point of the deterministic curve in Fig. 3. The classification of $E_4$ follows because the Jacobian at $E_4$ has complex conjugate eigenvalues with negative real parts. The simulated trajectory clearly conforms to the theoretical analysis.
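This classification can be checked mechanically. The following sketch is our own illustration: it evaluates the Jacobian of eq. \ref{DE4} at a candidate point and classifies the singularity from the trace, determinant, and discriminant in the usual fashion.

```python
def classify(n, beta, mu, a, q21, L):
    """Classify the singularity of the two-group system at n = (n1, n2).

    a[i][j] is the suppression rate of group j+1 by group i+1;
    q21 is the migration rate from group 2 to group 1 (q12 = 0).
    """
    v1, v2 = beta[0] - mu[0], beta[1] - mu[1]
    n1, n2 = n
    # Entries of the Jacobian of (dn1/dt, dn2/dt) at (n1, n2).
    j11 = v1 - 2 * a[0][0] * n1 / L - a[1][0] * n2 / L
    j12 = -a[1][0] * n1 / L + q21
    j21 = -a[0][1] * n2 / L
    j22 = v2 - q21 - 2 * a[1][1] * n2 / L - a[0][1] * n1 / L
    tr, det = j11 + j22, j11 * j22 - j12 * j21
    if det < 0:
        return "saddle point"
    disc = tr * tr - 4 * det          # negative => complex eigenvalues
    if disc < 0:
        return "stable focus" if tr < 0 else "unstable focus"
    return "stable proper node" if tr < 0 else "unstable proper node"
```

With the parameters of the example above, this classifies $E_1=(0,0)$ as an unstable proper node, $E_2=(1000,0)$ as a saddle point, and $E_4=(1193,921)$ as a stable focus, matching the analysis.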
At the equilibrium point, the Gaussian diffusion has local drift $\mathbf{F}'(E_4) = (-.48, -.54)$. That the drift is negative means that the farther the process deviates from the equilibrium point, the more strongly the trajectory will be pulled back toward it. The local covariance matrix is \[\mathbf{G}(E_4)=\left[\begin{array}{cc} 761.9 & -.046 \\ -.046 & 1059.2 \end{array}\right]\] \section{Empirical Data on Tablet Use} We present data for a situation that we suggest corresponds to the scenario being modeled. The cultural product in question is the tablet (computer), and the groups in question are age groups. We focus on the youngest age group, ``under 25,'' and the oldest age group, ``55+.'' We suggest that the youngest group will have a greater net adoption rate than the oldest age group, due to characteristics such as greater long-term expected payoffs to adopting new technology and greater intensity of social contacts, which facilitates the spread of information and influence. We also assume, however, that if use of tablets in the older group rises too high, the younger group will begin to perceive the device as something for older people, at least not special for younger people; the tablet will lose much of its status value for younger people and this will inhibit or suppress its use. We assume there is no corresponding suppression of use by older people due to use by younger people. Finally, while there clearly is no direct migration from the ``under 25'' group to the ``55+'' group, there will be migration from ``under 25'' to ``25 - 34,'' from ``25 - 34'' to ``35 - 44,'' from ``35 - 44'' to ``45 - 54,'' and from ``45 - 54'' to ``55+.'' This we take to be indirect migration from the youngest to the oldest age group. Figures 4 through 9 show tablet use from 2012 through the first half of 2016 in six countries, with data from the Google Consumer Barometer (2016).
These graphs plot the youngest group's use against the oldest group's use; the third dimension, time, is omitted. It may be noted, however, that tablet use in the oldest age group increases monotonically with time. Thus, in each graph, time increases from left to right. \begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{tabUK.jpg}} \end{center} \caption{Tablet Use in UK for Under 25 and 55+ Age Groups, 2012 - 2016.} \end{figure} \bigskip \newpage \begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{tabAus.jpg}} \end{center} \caption{Tablet Use in Australia for Under 25 and 55+ Age Groups, 2012 - 2016.} \end{figure} \bigskip \begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{tabNor.jpg}} \end{center} \caption{Tablet Use in Norway for Under 25 and 55+ Age Groups, 2012 - 2016.} \end{figure} \bigskip \newpage \begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{tabUS.jpg}} \end{center} \caption{Tablet Use in US for Under 25 and 55+ Age Groups, 2012 - 2016.} \end{figure} \bigskip \begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{tabJap.jpg}} \end{center} \caption{Tablet Use in Japan for Under 25 and 55+ Age Groups, 2012 - 2016.} \end{figure} \bigskip \newpage \begin{figure}[h] \begin{center} \scalebox{0.7}{\includegraphics{tabFr.jpg}} \end{center} \caption{Tablet Use in France for Under 25 and 55+ Age Groups, 2012 - 2016.} \end{figure} \bigskip All six countries show that the increase in tablet use initially progresses more quickly in the younger age group. In four countries, clearly, tablet use ultimately declines in the younger group along with the last rise in the oldest group. This is not the case in two countries, Japan and France. The plots of the deterministic trajectory and of the simulation shown in Fig. 3 resemble the empirical graphs of Figs. 4-9.
Recall that for those plots, the parameters specify a higher net adoption rate in the younger group, a very small migration rate from the younger to the older group, and external suppression from the older group to the younger group but not in the other direction. Concerning Japan and France, note that in these two countries tablet use in the oldest age group has not reached the levels that it has in the other four countries. Thus, arguably the model may apply to these two countries as well; they are simply at an earlier stage of the innovation diffusion process. The results of a goodness-of-fit test support this interpretation as well as the applicability of the model more generally. We tested the goodness-of-fit of the empirical data for tablet use to predictions of the simulation model, using the same parameter settings as given above. In fact, the model predictions used were those from the simulation run depicted in Figs. 2 and 3. Table 2 below shows the results using a chi-squared goodness-of-fit statistic. For each country, the maximum values are equated, which calibrates the data, then the simulation gap corresponding to one year (usually 4,000 iterations) is estimated, and finally a starting time in the simulation is estimated. This leaves seven degrees of freedom. For all six countries, the model fits adequately; we cannot reject the null hypothesis that the model fits the data ($p > .05$). It may be that a simpler model, namely a linear one, fits two of the countries, France and Japan, just as well. Nevertheless, the BP model fits the data from all six countries sufficiently well. \begin{table}[ht] \centering \caption{Fit of Empirical Data on Tablet Use, 2012 - 2016, to Simulation Model Outcomes.
Chi-squared Statistic with Seven Degrees of Freedom.} \begin{tabular}{cc} \hline \hline Country & Chi-squared \\ \hline United Kingdom & 12.51$^*$ \\ United States & 9.31$^{**}$ \\ Australia & 13.51$^*$ \\ Norway & 9.78$^{**}$ \\ France & 12.17$^*$ \\ Japan & 13.81$^*$ \\ \hline $^*p>.05,$ $^{**} p>.1$ \end{tabular} \end{table} \section{Conclusion} We have presented here a new model of the diffusion of innovation. This is a model for a population divided into different groups, where adoption and use of the cultural product by one group may be negatively affected by use by a different group. The mathematics of the model is taken from the multi-layer Bolker-Pacala model of population dynamics. We present empirical evidence from several countries for tablet use that conforms to a pattern generated by the model, as shown by a simulation and supported by goodness-of-fit tests. The empirical pattern of tablet use, with quick adoption in one group followed by decline, while adoption in another group is slower and without decline, is unusual. Many models of the diffusion of innovation have been developed but none have been applied to such a situation; hence the need for at least one more model. We do not claim that this BP model is generally appropriate, but we suggest that in situations of external suppression and inhibition, that is, from one group vis-\`a-vis another, this model can work well. Our empirical analysis here was for tablet use but we noted above other examples of this phenomenon in the literature such as first names and automobile makes, as well as historical examples such as sumptuary laws. We might note that the BP model can quickly present analytical difficulties. With age groups, fortunately, migration can occur in only one direction, but with other sorts of segmentation of populations, say along social class or region, migration would be possible in both directions. Even this small complication makes analyzing the steady states much more difficult.
Considering more than two groups also would be desirable but, again, this greatly raises the level of analytical difficulty. It is always possible to simulate more complicated models, but a mathematical analysis is valuable for providing understanding. For example, in section 3, through finding the singularities and evaluating the Jacobians at the singularities, we gain a fairly thorough understanding of the dynamic system, what its tendencies are, and how these are affected by the parameters.
\section{Introduction} \label{sec:intro} We consider the problem of detecting a sparse mixture. A simple variant of the problem can be formulated as follows. Let $F$ be a continuous distribution function on the real line, and $\varepsilon \in (0,1/2]$ and $\mu > 0$. The problem is to test \begin{equation}\label{h0} \mathscr{H}_0^n: X_1, \dots, X_n \stackrel{\rm iid}{\sim} F, \end{equation} versus \begin{equation}\label{h1} \mathscr{H}_1^n: X_1, \dots, X_n \stackrel{\rm iid}{\sim} (1-\varepsilon) F(\cdot) + \varepsilon F(\cdot - \mu). \end{equation} Mixture models such as in \eqref{h1} have been considered for quite some time, particularly in the context of robust statistics, where they are known as contamination models \citep[Eq 1.22]{huber2009robust}. Our contribution, rather, is in line with the testing problems studied by \citet{ingster1997some} in the context of the normal sequence model, where $F$ above corresponds to the standard normal distribution. In that setting, Ingster considered the following parameterization \begin{equation}\label{param} \varepsilon = \varepsilon_n = n^{-\beta}, \qquad \mu = \mu_n = \sqrt{2 r \log n}, \end{equation} for some $\beta > 0$ and $r > 0$. The advantage of this parameterization is that, holding $\beta$ and $r$ fixed, the situation admits a relatively simple description. Indeed, since both the null and the alternative hypotheses are simple, by the Neyman-Pearson Lemma, the likelihood ratio test (set at level $\alpha$) is most powerful.
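For the standard normal case, the likelihood ratio has a simple closed form, since $\phi(x-\mu)/\phi(x) = e^{\mu x - \mu^2/2}$. The following sketch is our own illustration of the log-likelihood ratio statistic for testing $\mathscr{H}_0^n$ against $\mathscr{H}_1^n$.

```python
import math

def log_lr(xs, eps, mu):
    """Log-likelihood ratio of H1 (the mixture) to H0, for standard
    normal F.  Uses phi(x - mu) / phi(x) = exp(mu * x - mu**2 / 2)."""
    return sum(math.log((1.0 - eps) + eps * math.exp(mu * x - mu * mu / 2.0))
               for x in xs)
```

The Neyman-Pearson test rejects for large values of `log_lr`, with the critical value set by its null distribution.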
Ingster studied the large-sample behavior of this test procedure and discovered that, in the case where $\beta > 1/2$, when $r < \rho(\beta)$ the test is asymptotically powerless, in the sense of achieving power $\alpha$, while when $r > \rho(\beta)$ the test is asymptotically fully powerful, in the sense of achieving power 1, where the function $\rho$ is given by \begin{equation}\label{lower} \rho(\beta) := \begin{cases} \beta - 1/2, & 1/2 < \beta \leq 3/4, \\ (1 - \sqrt{1 - \beta})^2, & 3/4 < \beta < 1. \end{cases} \end{equation} Thus there is a detection boundary in the $(\beta, r)$ plane given by $r = \rho(\beta)$. In such a situation, we will say that a test procedure `achieves the detection boundary', or is `first-order optimal' (or simply `optimal'), if it is fully powerful when $r > \rho(\beta)$. Such detection boundaries were derived for other models, for example, in \citep{cai2014optimal,cai2011optimal,donoho2004higher}. We also mention that the situation where $\beta \le 1/2$ is well understood, but quite different, and will not be considered here. Most of the literature has focused on the more interesting setting where $\beta > 1/2$, and we do the same here. \subsection{Threshold tests} After determining what one can hope for, it becomes of interest to understand what one can achieve with less information. Indeed, the likelihood ratio test requires knowledge of all the quantities and objects defining the testing problem, in this case $(F,\varepsilon,\mu)$, and even in the present stylized setting we might want to know what can be done when some of this information is missing, in particular what defines the alternative, namely $(\varepsilon,\mu)$. (The case where $F$ is also unknown has attracted much less attention. We discuss it in \secref{discussion}.) When $F$ is known, the problem is that of goodness-of-fit testing, albeit with alternatives of the form \eqref{h1} in mind.
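For reference, the boundary \eqref{lower} is elementary to evaluate; the function below is a direct transcription (an illustrative sketch only).

```python
import math

def rho(beta):
    """Ingster's detection boundary (eq. "lower") for 1/2 < beta < 1."""
    if not 0.5 < beta < 1:
        raise ValueError("beta must lie in (1/2, 1)")
    if beta <= 0.75:
        return beta - 0.5
    return (1 - math.sqrt(1 - beta)) ** 2
```

The two branches agree at $\beta = 3/4$, where both equal $1/4$, so $\rho$ is continuous there.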
\citet{donoho2004higher} opened this investigation with the analysis of various tests, including the max test based on $\max_i X_i$ and a variant of the Anderson-Darling test \citep{anderson1952asymptotic}. Seen as a problem of multiple testing based on p-values defined as $U_i = 1 - F(X_i)$, the max test coincides with the Tippett-\v{S}id\'ak combination test, while the Anderson-Darling test coincides with a proposal by Tukey called the higher criticism (HC). More recently, \citet{moscovich2016exact} analyzed a goodness-of-fit (BJ) test proposed by \citet{berk1979goodness} in the same setting. For $t \in \mathbb{R}$, define \begin{equation}\label{counts} N_n(t) = \#\{i : X_i \ge t\}. \end{equation} We note that, under the null hypothesis, $N_n(t)$ is binomial with parameters $(n, 1-F(t))$, which motivates the test that rejects for large values of \begin{equation} \sup_{t : F(t) \le 1/2} \frac{N_n(t) - n (1-F(t))}{\sqrt{n F(t) (1-F(t)) + 1}}. \end{equation} This is one of many possible variants of HC.\footnote{\quad The constraint `$F(t) \le 1/2$' can be replaced by $F(t) \le \gamma$, where $\gamma$ can be taken to be smaller, say $\gamma = 0.05$. The `$+1$' in the denominator is roughly equivalent to adding the constraint $F(t) \ge 1/n$, which \citet{donoho2004higher} recommend for reasons of stability. In any case, this variant performs as well (to first order) as any other variant of HC considered in the literature, at least in all the regimes commonly considered.} Let $U_{(1)} \le \cdots \le U_{(n)}$ denote the ordered $U_i$'s.
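The HC variant above can be computed by scanning thresholds at the data points themselves, where the supremum is attained; the sketch below is ours (not from the paper) and assumes a vectorized null distribution function `F`.

```python
import numpy as np

def hc_statistic(x, F):
    """Higher-criticism variant: sup over thresholds t of
    (N_n(t) - n*(1 - F(t))) / sqrt(n*F(t)*(1 - F(t)) + 1),
    restricted to F(t) <= 1/2.  The supremum over t is attained
    at the data points, so scanning them suffices."""
    x = np.sort(x)
    n = len(x)
    p = 1 - F(x)                  # null tail probability at each data point
    N = n - np.arange(n)          # N_n(t) for t = x_(i): count of x_j >= x_(i)
    z = (N - n * p) / np.sqrt(n * p * (1 - p) + 1)
    keep = F(x) <= 0.5            # the constraint F(t) <= 1/2
    return z[keep].max() if keep.any() else -np.inf
```

With standard normal data one would pass `F = scipy.stats.norm.cdf`; the test below uses the uniform distribution, for which p-values and observations coincide.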
We note that, under the null hypothesis, $U_{(i)}$ has the beta distribution with parameters $(i, n-i+1)$, which motivates the definition of BJ, rejecting for small values of \begin{equation}\label{BJ} \min_{i \in [n]} P_i, \end{equation} where $P_i := \mathsf{B}(U_{(i)}; i, n-i+1)$ and $\mathsf{B}(\cdot; a,b)$ denotes the distribution function of the beta distribution with parameters $(a,b)$. The verdict is the following. In the normal setting, HC and BJ achieve the detection boundary in the full range $\beta > 1/2$, while the max test achieves the detection boundary only in the upper part of the range, namely $\beta > 3/4$. The same extends to other models, in particular to generalized Gaussian models where $F$ has density proportional to $\exp(-|x|^a/a)$ for some $a > 1$. (The case $a \le 1$ is qualitatively different. HC and BJ are still first-order optimal while the max test is suboptimal everywhere.) These tests are all threshold tests, where we define a threshold test as any test with a rejection region of the form $\bigcup_{t \in \mathcal{T}} \{N_n(t) \ge c_t\},$ for some subset $\mathcal{T} \subset \mathbb{R}$ and some critical values $c_t > 0$. More broadly, any combination test that we know of that is discussed in the multiple-testing literature is a threshold test. (This includes the tests proposed by Fisher, Lipt\'ak-Stouffer, Tippett-\v{S}id\'ak, Simes, and more.) Thus it might be of interest to understand what can be achieved with a threshold test. In this regard, it is useful to examine how one would optimize such an approach if one had perfect knowledge of the model. Let $\phi_t$ denote the test with rejection region $\{N_n(t) \ge c_t\}$, where \begin{equation}\label{critical} c_t := \min\big\{c \ge 0: \P_0(N_n(t) \ge c) \le \alpha\big\}.
\end{equation} We define the oracle threshold test as the test $\phi_{t_*}$, where \begin{equation}\label{threshold} t_* := \argmax_{t \in \mathbb{R}} \P_1(N_n(t) \ge c_t), \end{equation} with $\mathbb{P}_0$ denoting the distribution under the null \eqref{h0} and $\mathbb{P}_1$ that under the alternative \eqref{h1}. (Here and elsewhere, $\alpha$ denotes the desired significance level.) Note that computing $c_t$ only requires knowledge of $F$, while computing $t_*$ requires knowledge of the entire model, namely $(F,\varepsilon,\mu)$. Thus the construction of the test $\phi_{t_*}$ relies on the oracle knowledge of $(\varepsilon,\mu)$. \subsection{Scan tests} Detection problems arise in a variety of contexts and in very many applications. An important example is in spatial statistics (itself a rather wide area), where the detection of `hot spots', meaning areas of unusually high concentration, has been considered for quite some time \citep{kulldorff1997spatial}. An early contribution to this literature is that of \citet{naus1965distribution}, who considered the distribution of the maximum number of points in an interval of given length (say $\ell$) when the points are drawn iid from the uniform distribution on $[0,1]$. This would nowadays be referred to as the scan statistic and arises when testing the null that the points are uniformly distributed in $[0,1]$ against the (composite) alternative that there is a sub-interval of length $\ell$ with higher intensity. Settings where the sub-interval length is unknown have been considered \citep{arias2005near}. For $s \le t$, define $N_n[s,t] = \# \{i: X_i \in [s,t]\}$ and $F[s,t] = F(t) - F(s)$. We note that, under the null hypothesis, $N_n[s,t]$ is binomial with parameters $(n, F[s,t])$, which motivates the test that rejects for large values of \begin{equation}\label{HC_scan} \sup_{(s,t) : F[s,t] \le 1/2} \frac{N_n[s,t] - n F[s,t]}{\sqrt{n F[s,t] (1 -F[s,t]) + 1}}.
\end{equation} Although there are many possible variants, this is the one we will be working with. We note that, under the null hypothesis, for any pair of indices $i < j$, $U_{(j)} - U_{(i)}$ has the beta distribution with parameters $(j-i, n-j+i+1)$ --- see \citep[Th 11.1]{gibbons2011nonparametric}. This motivates the definition of the scan test which rejects for small values of \begin{equation}\label{BJ_scan} \min_{1 \le i < j \le n} P_{i,j}, \end{equation} where $P_{i,j} := \mathsf{B}(U_{(j)} - U_{(i)}; j-i, n-j+i+1)$. In general, we define a scan test as any test with rejection region of the form $\bigcup_{(s,t) \in \mathcal{K}} \{N_n[s,t] \ge c_{s,t}\}$, where $\mathcal{K}$ is a subset of $\{(s,t) : s < t\}$ and $c_{s,t} \ge 0$ are critical values. Let $\phi_{s,t}$ denote the test with rejection region $\{N_n[s,t] \ge c_{s,t}\}$, where \begin{equation}\label{critical-pair} c_{s,t} := \min\big\{c \ge 0 : \P_0(N_n[s,t] \ge c) \le \alpha\big\}. \end{equation} We define the oracle scan test as the test $\phi_{s_\bullet,t_\bullet}$, where \begin{equation}\label{scan} (s_\bullet,t_\bullet) := \argmax_{s < t} \P_1(N_n[s,t] \ge c_{s,t}). \end{equation} As before, the test $\phi_{s_\bullet,t_\bullet}$ relies on oracle knowledge of $(\varepsilon, \mu)$. To the best of our knowledge, this is the first time that such tests are considered in the line of work that concerns us here, with roots in the work of \citet{ingster1997some} and \citet{donoho2004higher}. The main reason for considering these tests in the present context is that they happen to be first-order optimal, not only in the models considered in the literature (such as generalized Gaussian), but also in power-law models where $F$ has fat tails (e.g., Cauchy or Pareto), whereas threshold tests are suboptimal for such models.
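The scan statistic \eqref{BJ_scan} can be computed by brute force in $O(n^2)$ time from the p-values; below is a hedged sketch (ours, not the paper's implementation), using SciPy's beta distribution function for $\mathsf{B}$.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def tippett_scan(u):
    """Scan statistic (BJ_scan): min over i < j of
    B(U_(j) - U_(i); j - i, n - j + i + 1), where B is the beta cdf.
    Brute force over all O(n^2) pairs; u holds the p-values U_i."""
    u = np.sort(u)
    n = len(u)
    best = 1.0
    for i in range(n - 1):           # 0-based i; the paper's index is i + 1
        j = np.arange(i + 1, n)      # 0-based j; the paper's index is j + 1
        d = j - i                    # equals j - i in the paper's notation
        p = beta_dist.cdf(u[j] - u[i], d, n - d + 1)
        best = min(best, float(p.min()))
    return best
```

Tightly clustered p-values yield a small statistic (small spacings $U_{(j)} - U_{(i)}$ relative to the beta null), which is what the test rejects on.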
\subsection{Content} For simplicity and the sake of clarity, we will focus on oracle-type, rather than likelihood ratio, performance bounds. The former are indeed more transparent and can be obtained in more generality and with simpler arguments. Also, our main intention here is to compare what can be achieved with threshold tests to what can be achieved with the more general scan tests defined above, and comparing the corresponding oracle tests seems more appropriate. In \secref{oracle}, we study the oracle threshold test and the oracle scan test. We then consider a number of models. In \secref{scan}, we consider the two scan tests described above and compare them to the oracle scan test. In \secref{numerics}, we present the result of some numerical experiments that illustrate our theory. We briefly discuss the performance of the likelihood ratio test and that of nonparametric approaches in \secref{discussion}. \section{Oracle threshold test and oracle scan test} \label{sec:oracle} In this section we state and prove some basic results for the oracle threshold and oracle scan tests. \subsection{Power monotonicity} It is natural to guess that testing \eqref{h0} versus \eqref{h1} becomes easier as the shift $\mu$ increases. This is indeed the case, at least from the point of view of both oracle tests. \begin{prp} \label{prp:threshold_monotonicity} The oracle threshold test has monotonic power in the shift. \end{prp} \begin{proof} We assume that $\varepsilon > 0$ is fixed and let $\P_\mu$ denote the data distribution under the alternative \eqref{h1}. Take $\mu_1 \le \mu_2$ and let $t_k$ denote the oracle threshold \eqref{threshold} for $\mu_k$, so that the oracle test for $\mu_k$, meaning $\phi_{t_k}$, has rejection region $\{N_n(t_k) \ge c_{t_k}\}$ and power $\pi_k := \P_{\mu_k}(N_n(t_k) \ge c_{t_k})$. Thus we need to show that $\pi_1 \le \pi_2$.
This is because, for any $t$, $N_n(t)$ is stochastically non-decreasing in $\mu$, leading to \begin{equation} \pi_1 = \P_{\mu_1}(N_n(t_1) \ge c_{t_1}) \le \P_{\mu_2}(N_n(t_1) \ge c_{t_1}) \le \P_{\mu_2}(N_n(t_2) \ge c_{t_2}) = \pi_2, \end{equation} where the last inequality is by construction of $t_2$ and $c_{t_2}$. \end{proof} Clearly, the oracle scan test has at least as much power as the oracle threshold test. Interestingly, it does not have monotonic power in general, although it does under some natural assumptions on the base distribution. \begin{prp} \label{prp:scan_monotonicity} Assume that $F$, as a distribution, is unimodal. Then the oracle scan test has monotonic power in the shift. \end{prp} \begin{proof} We stay with the setting and notation introduced in the proof of \prpref{threshold_monotonicity}. Let $d \ge 0$ be smallest such that \begin{equation} F[s_1 + d, t_1 + \mu_2 - \mu_1] = F[s_1, t_1]. \end{equation} The fact that $F$, as a distribution, is unimodal implies that $d \le \mu_2 - \mu_1$. Now, under the null, by construction, \begin{equation} \P_0(N_n[s_1 + d, t_1 + \mu_2 - \mu_1] \ge c_{s_1, t_1}) = \P_0(N_n[s_1, t_1] \ge c_{s_1, t_1}) \le \alpha. \end{equation} On the other hand, under $\P_{\mu_1}$, $N_n[s_1, t_1]$ is binomial with parameters $n$ and $q_1 := (1-\varepsilon)F[s_1, t_1] + \varepsilon F[s_1-\mu_1, t_1-\mu_1]$, while under $\P_{\mu_2}$, $N_n[s_1 + d, t_1 + \mu_2 - \mu_1]$ is binomial with parameters $n$ and \begin{align*} q_2 &:= (1-\varepsilon)F[s_1 + d, t_1 + \mu_2 - \mu_1] + \varepsilon F[s_1 + d -\mu_2, t_1 + \mu_2 - \mu_1 -\mu_2] \\ &= (1-\varepsilon)F[s_1, t_1] + \varepsilon F[s_1 + d -\mu_2, t_1 - \mu_1] \\ &\ge q_1, \end{align*} using the fact that $d \le \mu_2 - \mu_1$.
This explains the first inequality in the following derivation \begin{align*} \pi_1 &= \P_{\mu_1}(N_n[s_1, t_1] \ge c_{s_1, t_1}) \\ &\le \P_{\mu_2}(N_n[s_1 + d, t_1 + \mu_2 - \mu_1] \ge c_{s_1, t_1}) \\ &\le \P_{\mu_2}(N_n[s_2, t_2] \ge c_{s_2, t_2}) = \pi_2, \end{align*} where the second inequality is by definition of $(s_2, t_2)$. \end{proof} \subsection{Performance bounds} We now provide necessary and sufficient conditions for the oracle threshold test and the oracle scan test to be fully powerful in the large-sample limit ($n\to\infty$). We focus on the case where \begin{equation} n \varepsilon_n \to \infty, \qquad \sqrt{n} \varepsilon_n \to 0, \end{equation} where the first condition implies that, under the alternative, the sample is indeed contaminated with probability tending to 1, while the second condition puts us in the regime corresponding to $\beta > 1/2$ under Ingster's parameterization \eqref{param}. Our analysis below is based on the following simple result, which is an immediate consequence of Chebyshev's inequality and the central limit theorem. \begin{lem} \label{lem:binom} Suppose that we are testing $N \sim \text{Bin}(n, p_n)$ versus $N \sim \text{Bin}(n, q_n)$ where $p_n \le 1/2$ and $p_n \le q_n$, and consider the test at level $\alpha$ that rejects for large values of $N$ --- which is the most powerful test. It is asymptotically powerful if $n (q_n - p_n)^2/q_n \to \infty$, while it is asymptotically powerless if $n (q_n - p_n)^2/p_n \to 0$. \end{lem} Using \lemref{binom}, we easily obtain performance guarantees for the oracle threshold test and the oracle scan test. \begin{prp} \label{prp:threshold} The oracle threshold test is powerful if there is a sequence of thresholds $(t_n)$ such that \begin{equation}\label{prp-threshold1} n \varepsilon_n \bar F(t_n - \mu_n) \to \infty, \quad \text{and} \quad n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \to \infty.
\end{equation} It is powerless if for any sequence of thresholds $(t_n)$ \begin{equation}\label{prp-threshold2} n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \to 0. \end{equation} \end{prp} \begin{proof} Let $(t_n)$ denote a sequence of thresholds satisfying \eqref{prp-threshold1}, and define $p_n = \bar F(t_n)$ and $q_n = (1-\varepsilon_n) \bar F(t_n) + \varepsilon_n \bar F(t_n - \mu_n)$. We know that $N_n(t_n) \sim \text{Bin}(n, p_n)$ under the null and $N_n(t_n) \sim \text{Bin}(n, q_n)$ under the alternative, with \begin{align*} n (q_n - p_n)^2/q_n &= \frac{n \varepsilon_n^2 (\bar F(t_n - \mu_n) - \bar F(t_n))^2}{(1-\varepsilon_n) \bar F(t_n) + \varepsilon_n \bar F(t_n - \mu_n)}. \end{align*} If the second part of \eqref{prp-threshold1} holds, then necessarily $\bar F(t_n - \mu_n) \gg \bar F(t_n)$, since \begin{equation} n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) = \big[n \varepsilon_n^2 \bar F(t_n)\big] \big[\bar F(t_n - \mu_n)/\bar F(t_n)\big]^2 \le (n \varepsilon_n^2) \big[\bar F(t_n - \mu_n)/\bar F(t_n)\big]^2, \end{equation} with $n \varepsilon_n^2 = o(1)$ by assumption. Hence, \begin{align*} n (q_n - p_n)^2/q_n &\sim \frac{n \varepsilon_n^2 \bar F(t_n - \mu_n)^2}{(1-\varepsilon_n) \bar F(t_n) + \varepsilon_n \bar F(t_n - \mu_n)} \\ &\asymp n \varepsilon_n \bar F(t_n - \mu_n) \bigwedge n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n). \end{align*} Therefore, by \lemref{binom}, the sequence of tests $(\phi_{t_n})$ has full power in the limit when \eqref{prp-threshold1} holds. Now let $(t_n)$ be any sequence of thresholds and consider the sequence of tests $(\phi_{t_n})$. By \lemref{binom}, it has power $\alpha$ in the limit since \begin{equation}\label{prp-threshold-proof2} n (q_n - p_n)^2/p_n \le n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/(1-\varepsilon_n)\bar F(t_n)\to 0, \end{equation} where the convergence to $0$ comes from \eqref{prp-threshold2}.
\end{proof} \begin{rem}\label{rem:prp-threshold1-simple} Note that the first part of \eqref{prp-threshold1} may be replaced by \begin{equation}\label{prp-threshold1-simple} n \bar F(t_n) \to \infty. \end{equation} This is because this condition and $n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \to \infty$ together imply $n \varepsilon_n \bar F(t_n - \mu_n) \to \infty$. \end{rem} \begin{prp} \label{prp:scan} The oracle scan test is powerful if there is a sequence of intervals $([s_n,t_n])$ such that \begin{equation}\label{prp-scan1} n \varepsilon_n F[s_n -\mu_n, t_n -\mu_n] \to \infty, \quad \text{and} \quad n \varepsilon_n^2 F[s_n -\mu_n, t_n -\mu_n]^2/F[s_n,t_n] \to \infty. \end{equation} It is powerless if for any sequence of intervals $([s_n,t_n])$ \begin{equation} n \varepsilon_n^2 F[s_n -\mu_n, t_n -\mu_n]^2/F[s_n,t_n] \to 0. \end{equation} \end{prp} The proof is completely parallel to that of \prpref{threshold} and is omitted. \subsection{Examples: generalized Gaussian models and more} \label{sec:generalized_Gaussian} We look at a number of models and in each case derive the performance of the oracle threshold and oracle scan tests, and compare that with the performance of the likelihood ratio test. To place the results in line with the literature on the topic, we adopt Ingster's parameterization \eqref{param} for $\varepsilon_n$, in fact a softer version of it \begin{equation}\label{eps} \varepsilon = \varepsilon_n \sim n^{-\beta}, \end{equation} for some fixed $\beta$. The parameterization of $\mu = \mu_n$ will depend on the model. To further simplify matters, we assume throughout that \begin{equation}\label{varphi} \log \bar F(x) \sim -\varphi(x), \end{equation} where $\varphi(x)$ is continuous and strictly increasing for $x$ large enough.
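The binomial comparison in \lemref{binom} and the calibration \eqref{critical} are easy to explore numerically. The following sketch (ours, with arbitrary parameter choices) computes the exact power of a single threshold test $\phi_t$ in the normal model.

```python
import numpy as np
from scipy.stats import binom, norm

def threshold_test_power(n, beta, r, t, alpha=0.05):
    """Exact power of the level-alpha threshold test phi_t in the
    normal model, with eps = n**(-beta) and mu = sqrt(2*r*log(n)).
    The critical value c_t is the smallest c with P_0(N_n(t) >= c) <= alpha."""
    eps = n ** (-beta)
    mu = np.sqrt(2 * r * np.log(n))
    p = norm.sf(t)                              # null tail probability
    q = (1 - eps) * p + eps * norm.sf(t - mu)   # alternative tail probability
    c = binom.isf(alpha, n, p) + 1              # critical value c_t
    return binom.sf(c - 1, n, q)                # P_1(N_n(t) >= c)
```

Since $P_0(N_n(t) \ge c) = \mathrm{sf}(c-1)$ for the $\text{Bin}(n,p)$ distribution, `binom.isf(alpha, n, p) + 1` is exactly the smallest $c$ with $P_0(N_n(t) \ge c) \le \alpha$.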
In that case, in view of \remref{prp-threshold1-simple}, we note that \eqref{prp-threshold1} is satisfied when \begin{equation}\label{prp-threshold1-1} \log n - \varphi(t_n) \to \infty, \qquad (1 - 2\beta)\log n + \varphi(t_n) - 2\varphi(t_n - \mu_n) \to \infty. \end{equation} \subsubsection{Extended generalized Gaussian} This class of models is defined by the property that $\varphi$ satisfies\footnote{ It is tempting to consider a more general condition where there is a function $\omega$ on $\mathbb{R}_+$ such that $\varphi(u t)/\varphi(t) \to \omega(u)$ as $t \to \infty$ for all $u \ge 0$. However, as long as $\omega$ is not constant (equal to zero in that case), it can easily be shown that $\omega(u) = u^a$ for some $a > 0$.} \begin{equation}\label{aggcondition} \varphi(u t)/\varphi(t) \to u^a, \quad t \to \infty, \quad \forall u \ge 0. \end{equation} Here $a > 0$ parameterizes this class of models. This covers the generalized Gaussian models, which are often used as benchmarks in this line of work. It also covers the case where $\varphi(t) \sim t^a (\log t)^b$ where $b \in \mathbb{R}$ is arbitrary. For $a > 1$, define \begin{equation}\label{gamma>1} \rho_a(\beta) = \begin{cases} (2^{1/(a -1)} - 1)^{a -1} (\beta - 1/2), & 1/2 < \beta < 1 - 2^{-a/(a -1)}, \\ (1 - (1-\beta)^{1/a})^{a}, & 1 - 2^{-a/(a -1)} \le \beta < 1. \end{cases} \end{equation} For $a \le 1$, define \begin{equation}\label{gamma<1} \rho_a(\beta) = 2 (\beta - 1/2). \end{equation} In addition to \eqref{eps}, assume that \begin{equation}\label{mu1} \mu = \mu_n \text{ satisfies } \varphi(\mu_n) \sim r \log n, \text{ with $r \ge 0$ fixed}. \end{equation} \begin{prp}\label{prp:generalized_gaussian} The curve $r = \rho_a(\beta)$ in the $(\beta,r)$ plane is the detection boundary that the oracle threshold test achieves.
\end{prp} \begin{proof} We focus on proving that the oracle threshold test achieves that boundary. A simple inspection of the arguments reveals that they are tight, so that this is the precise detection boundary that the test achieves. (See the proof of \prpref{oracle_threshold_power_law} for an example.) We divide the proof into several cases. \medskip\noindent {\em Case 1: $a > 1$.} Define $b = 2^{-1/(a - 1)}$ and note that $0 < b < 1$. \medskip\noindent {\em Case 1.1: $1/2 < \beta < 1 - b^{a}$ and $r > \rho_a(\beta)$.} Under these conditions, $\beta < 1/2 + r(1/b - 1)^{-(a-1)}$, and in particular there is $\eta > 0$ such that \begin{equation}\label{beta-ub1} 1 - 2\beta \ge -2r(1/b - 1)^{-(a-1)} + \eta. \end{equation} Setting $t_n = (1 - b)^{-1}\mu_n$, by \eqref{aggcondition} and \eqref{mu1}, we have the following \begin{align} \varphi(t_n - \mu_n) &= \big(r b^a/(1-b)^a + o(1)\big) \log n, \\ \varphi(t_n) &= \big(r/(1 - b)^a + o(1)\big) \log n. \end{align} By \prpref{threshold_monotonicity} we may focus on $r$ small enough that $r/(1-b)^a < 1$. This is possible because $\rho_a(\beta) < (1-b)^a$ when $\beta < 1 - b^a$, which we assume here. (This can be easily verified using the definition of $b$.) Assuming that $r$ is as such, the first part of \eqref{prp-threshold1-1} is satisfied. For the second part, with \eqref{beta-ub1}, we have \begin{align*} &(1 - 2\beta)\log n - 2\varphi(t_n - \mu_n) + \varphi(t_n) \\ &\ge \big[-2r (1/b - 1)^{-(a-1)} +\eta -2 r b^a/(1-b)^a + r/(1-b)^a +o(1) \big] \log n \\ &=[\eta + o(1)] \log n \to \infty, \end{align*} using the definition of $b$ and simplifying. Thus the second part of \eqref{prp-threshold1-1} is also satisfied and the oracle threshold test is powerful.
\medskip\noindent {\em Case 1.2: $1 - b^{a} \le \beta < 1$ and $r > \rho_a(\beta)$.} Under these conditions, we have $1 - \beta > (1 - r^{1/a})^a$, and in particular there is $\eta > 0$ such that \begin{equation}\label{beta-ub2} 1 - \beta - \eta \ge (1 - r^{1/a})^a \ge ((1 - \eta)^{1/a} - r^{1/a})^a. \end{equation} Setting $t_n = (\frac{1}{r}(1 - \eta))^{1/a}\mu_n$, we have the following \begin{align} \varphi(t_n - \mu_n) &= \big(((1 - \eta)^{1/a} - r^{1/a})^a + o(1)\big) \log n, \\ \varphi(t_n) &= (1 - \eta + o(1)) \log n. \end{align} By looking at the speed of $\varphi(t_n)$, the first part of \eqref{prp-threshold1-1} is satisfied immediately. For the second part, with \eqref{beta-ub2}, we have \begin{align*} &(1 - 2\beta)\log n - 2\varphi(t_n - \mu_n) + \varphi(t_n) \\ &= (1 - 2\beta)\log n - 2\big(((1 - \eta)^{1/a} - r^{1/a})^a + o(1)\big) \log n + (1 - \eta + o(1))\log n \\ &= 2 \big[1 - \beta -\eta/2 - ((1 - \eta)^{1/a} - r^{1/a})^a + o(1)\big] \log n \to \infty. \end{align*} Thus the second part of \eqref{prp-threshold1-1} is also satisfied and the oracle threshold test is powerful. \medskip\noindent {\em Case 2: $a \le 1$.} By \prpref{threshold_monotonicity} we may restrict attention to the case where $2 \beta - 1 < r < 1$. Here we set $t_n = \mu_n$. Then the first part in \eqref{prp-threshold1-1} is clearly satisfied. For the second part, notice that \begin{align*} &(1 - 2\beta)\log n - 2\varphi(t_n - \mu_n) + \varphi(t_n) \\ &=(1 - 2\beta)\log n + (r + o(1)) \log n \\ &=[1 - 2\beta + r + o(1)] \log n \to \infty. \qedhere \end{align*} \end{proof} Thus, although the conditions are much more general here, the detection boundary is the same as in the corresponding generalized Gaussian model and, moreover, the oracle threshold test achieves that boundary. \begin{rem}[max test] In this class of models, it can be shown that the max test achieves the detection boundary over the upper range, meaning when $\beta \ge 1 - 2^{-a/(a -1)}$.
In fact, $\rho^{\rm max}(\beta) := (1 - (1-\beta)^{1/a})^{a}$ defines the detection boundary for the max test. \end{rem} \subsubsection{Other models} In the next few classes of models, $\varphi$ satisfies \begin{equation}\label{omega} \frac{\varphi^{-1}(t) - \varphi^{-1}(v t)}{\lambda(t)} \to \omega(v), \quad t \to \infty, \quad \forall v \in (0,1], \end{equation} for some functions $\lambda$ and $\omega$, with the latter being non-increasing, continuous, and such that $\omega(1) = 0$. This is actually also the case when $\varphi(t) \sim t^a (\log t)^b$ with $a > 0$ and $b \in \mathbb{R}$, with $\lambda(t) = t^{1/a} (\log t)^{-b/a}$ and $\omega(v) = (1 - v^{1/a})/a^{b/a}$. Define \begin{equation}\label{rho1} \rho(\beta) = \inf_{0 < h < 1 - \beta} \, \big[\omega(h) - \omega(2\beta -1 + 2h)\big]. \end{equation} In addition to \eqref{eps}, assume that \begin{equation} \mu = \mu_n \sim r \lambda(\log n), \quad r \ge 0 \text{ fixed}. \end{equation} \begin{prp}\label{prp:generalized_gaussian_other} The curve $r = \rho(\beta)$ in the $(\beta,r)$ plane is the detection boundary that the oracle threshold test achieves. \end{prp} \begin{proof} We focus on proving that the oracle threshold test achieves that boundary. Since $\omega(v)$ is continuous, we may define \begin{equation} h^* = \argmin_{0 \le h \le 1 - \beta} \, \big[\omega(h) - \omega(2\beta -1 + 2h)\big]. \end{equation} We focus on the case where $h^* < 1 -\beta$. In the case where $h^* = 1 -\beta$, the max test is powerful (\remref{max_test2}), and therefore so is the oracle threshold test. By \prpref{threshold_monotonicity} we may focus on the case where $r < \omega(h^*)$.
With these assumptions and the fact that $\omega(h^*) - \omega(2\beta -1 + 2h^*) = \rho(\beta) < r$, there is $\eta > 0$ such that \begin{equation}\label{proof_generalized_gaussian_other1} 2\beta - 1 + 2h^* + 2\eta < 1, \end{equation} and \begin{equation} \omega(h^*) - \omega(2\beta -1 + 2h^* +\eta) < r < \omega(h^*) - \omega(2\beta -1 + 2h^* +2\eta). \end{equation} Define $t_n := \mu_n +\varphi^{-1}(h^*\log n)$. Using \eqref{omega} multiple times, for $n$ sufficiently large, we have the following \begin{align*} \mu_n &= (r +o(1)) \lambda(\log n)\\ &\le [\omega(h^*) - \omega(2\beta - 1+ 2h^*+ 2\eta)]\lambda(\log n)\\ &= \varphi^{-1}(\log n) - \varphi^{-1}(h^*\log n) - \varphi^{-1}(\log n) + \varphi^{-1}((2\beta - 1+ 2h^* +2\eta)\log n)\\ &= \varphi^{-1}((2\beta - 1+ 2h^* +2\eta)\log n) -\varphi^{-1}(h^*\log n). \end{align*} Hence, eventually, $t_n \le \varphi^{-1}((2\beta - 1+ 2h^* +2\eta)\log n)$, implying that \begin{align*} \log n - \varphi(t_n) \ge \log n -(2\beta - 1+ 2h^* +2\eta)\log n = [1 -(2\beta -1 +2h^* +2\eta)] \log n \to \infty, \end{align*} using \eqref{proof_generalized_gaussian_other1}. Thus the first part of \eqref{prp-threshold1-1} is satisfied. Similarly, for $n$ sufficiently large, \begin{align*} \mu_n &= (r +o(1)) \lambda(\log n)\\ &\ge [\omega(h^*) - \omega(2\beta - 1+ 2h^*+ \eta)]\lambda(\log n)\\ &= \varphi^{-1}((2\beta - 1+ 2h^* +\eta)\log n) -\varphi^{-1}(h^*\log n), \end{align*} so that, eventually, $t_n \ge \varphi^{-1}((2\beta - 1+ 2h^*+\eta)\log n)$, implying that \begin{align*} &(1 - 2\beta)\log n -2\varphi(t_n - \mu_n) +\varphi(t_n) \\ &\ge (1 - 2\beta)\log n -2 h^* \log n + (2\beta - 1 + 2h^* + \eta)\log n\\ &= \eta \log n \to \infty. \end{align*} Thus the second part of \eqref{prp-threshold1-1} is satisfied.
\end{proof} \begin{rem}[max test] \label{rem:max_test2} In the present situation, it can be shown that $\rho^{\rm max}(\beta) := \omega(1-\beta)$ defines the detection boundary for the max test. \end{rem} \subsubsection{Extended generalized Gumbel} This class of models is defined by $\varphi(t) = \exp(t^a)$ for some $a > 0$, which satisfies \eqref{omega} with $\lambda(t) = \tfrac1a (\log t)^{1/a-1}$ and $\omega(v) = \log(1/v)$. In this case, \begin{equation} \mu = \mu_n \sim \frac{r}a (\log \log n)^{1/a-1}, \end{equation} and the detection boundary is given by $r = - \log(1-\beta)$. Note that, at the detection boundary, $\mu_n \to 0$ when $a > 1$; that $\mu_n \asymp 1$ when $a = 1$; and $\mu_n \to \infty$ when $a < 1$. \subsubsection{Another Gumbel-type class} This class of models is defined by $\varphi(t) = \exp((\log t)^a)$ for some $a > 1$, which satisfies \eqref{omega} with $\lambda(t) = \tfrac1a (\log t)^{1/a-1} \exp((\log t)^{1/a})$ and $\omega(v) = \log(1/v)$. In this case, \begin{equation} \mu = \mu_n \sim \frac{r}a (\log \log n)^{1/a-1} \exp((\log \log n)^{1/a}), \end{equation} and the detection boundary is given by $r = - \log(1-\beta)$ as in the previous class of models (since $\omega$ is the same). \begin{rem}[max test] \label{rem:max_test3} Based on \remref{max_test2}, in the last two classes of models, the max test achieves the detection boundary over the whole $\beta$ range. The same is true, more generally, when the infimum in \eqref{rho1} is at $h = 1-\beta$. \end{rem} \subsection{Examples: power-law models and more} \label{sec:power_law} In the next few classes of models, $F$ satisfies \begin{equation}\label{F1} \log(F(t+v) - F(t)) \sim -\lambda(t), \quad t \to \infty, \quad \forall v \ge 0, \end{equation} for some function $\lambda$ which is eventually increasing and such that $\lambda(t) \to \infty$ as $t \to \infty$.
This includes models where \begin{equation}\label{F2} \bar F(t) \propto t^{-a} (\log t)^b (1 + o(1/t)), \quad t \to \infty, \end{equation} with $a > 0$ and $b \in \mathbb{R}$, in which case \eqref{F1} holds with $\lambda(t) = (a+1) \log t$. It also includes models where $\bar F(t) \propto (\log t)^{-a}(1 + o(1/t\log t))$, with $a > 0$, in which case \eqref{F1} holds with $\lambda(t) = \log t$, as well as other distributions with even slower decay. In addition to \eqref{eps}, assume that \begin{equation}\label{mu2} \mu = \mu_n \quad \text{satisfies} \quad \lambda(\mu_n) \sim r \log n, \quad r \ge 0 \text{ fixed}. \end{equation} \begin{prp}\label{prp:power_law} The curve $r = \rho(\beta) := 2\beta -1$ in the $(\beta,r)$ plane is the detection boundary that the oracle scan test achieves. \end{prp} \begin{proof} We focus on proving that the oracle scan test achieves that boundary. Fix $r$ such that $r > 2\beta - 1$. Consider the interval $[s_n, t_n]$ with $s_n := \mu_n$ and $t_n := \mu_n + v$, where $v > 0$ is such that $F[0,v] > 0$. We need to verify that \eqref{prp-scan1} holds. On the one hand, we have \begin{equation} n \varepsilon_n F[s_n -\mu_n, t_n -\mu_n] = n^{1 -\beta} F[0, v] \to \infty, \end{equation} because $\beta < 1$ by assumption. So the first part of \eqref{prp-scan1} holds. On the other hand, since $F[s_n,t_n] = n^{-r+o(1)}$ by \eqref{F1} and \eqref{mu2}, \begin{align*} &n \varepsilon_n^2 F[s_n -\mu_n, t_n -\mu_n]^2/F[s_n,t_n] = n^{1-2\beta} F[0,v]^2/n^{-r+o(1)} = n^{r + 1 -2\beta + o(1)} \to \infty, \end{align*} since $r > 2\beta - 1$. So the second part of \eqref{prp-scan1} holds. \end{proof} We now show that threshold tests are suboptimal in the main class of models satisfying \eqref{F1}, namely \eqref{F2}. (The same happens to be true in other models with fat tails satisfying \eqref{F1}.) This is the main motivation for considering scan tests.
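The gap between the two boundaries in power-law models is explicit; a trivial transcription (illustrative sketch only):

```python
def scan_boundary(beta):
    """Oracle scan test boundary under (F1)-(mu2): r = 2*beta - 1."""
    return 2 * beta - 1

def threshold_boundary(beta, a):
    """Oracle threshold test boundary under (F2): r = (1 + 1/a)*(2*beta - 1)."""
    return (1 + 1 / a) * (2 * beta - 1)
```

The ratio of the two boundaries is $1 + 1/a$ uniformly in $\beta$, so the suboptimality of threshold tests worsens as the tail gets fatter (small $a$).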
\begin{prp} \label{prp:oracle_threshold_power_law} In a model satisfying \eqref{F2}, and with the same parameterization \eqref{mu2}, the curve $r = (1+1/a)(2\beta -1)$ in the $(\beta,r)$ plane is the detection boundary that the oracle threshold test achieves. \end{prp} \begin{proof} We first prove that the oracle threshold test achieves this detection boundary. By \prpref{threshold_monotonicity} we may assume that $r < 1+1/a$. Therefore, fix $r$ such that $(1+1/a) (2\beta -1) < r < 1+1/a$. Set the threshold $t_n = \mu_n + v$, where $v$ is such that $\bar F(v) > 0$. We need to verify that \eqref{prp-threshold1} holds, and we do so via \remref{prp-threshold1-simple}. Note that $t_n \sim \mu_n = n^{r/(a+1) + o(1)}$. In particular, \begin{equation} n \bar F(t_n) \sim n \mu_n^{-a} (\log \mu_n)^b = n^{1 - a r/(a+1) + o(1)} \to \infty, \end{equation} and, by the same token, \begin{equation} n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \sim n^{1-2\beta} n^{a r/(a+1) + o(1)} = n^{1-2\beta + a r/(a+1) + o(1)} \to \infty. \end{equation} We now turn to proving that the stated boundary is the best that the oracle threshold test can achieve. For this, fix $r < (1+1/a) (2\beta -1)$. We need to verify \eqref{prp-threshold2}. Suppose for contradiction that there is a sequence of thresholds, $(t_n)$, such that \eqref{prp-threshold2} does not hold. By extracting a subsequence if needed, we may assume that \begin{equation}\label{proof_oracle_threshold_power_law1} n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \to \lambda \in (0, \infty]. \end{equation} First, suppose that $\liminf t_n/\mu_n < \infty$. Extracting a subsequence if needed, we may assume that $t_n = O(\mu_n)$.
In that case, we have \begin{equation} n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \le n \varepsilon_n^2/\bar F(t_n) \le n^{1 -2\beta + o(1)} \mu_n^{a + o(1)} = n^{1 -2\beta + a r/(a+1) + o(1)} \to 0, \end{equation} using that $r < (1+1/a) (2\beta -1)$. Since this contradicts \eqref{proof_oracle_threshold_power_law1}, we must have $\liminf t_n/\mu_n = \infty$, meaning that $t_n \gg \mu_n$. In that case, we have $\bar F(t_n - \mu_n) \sim \bar F(t_n)$, implying that \begin{equation} n \varepsilon_n^2 \bar F(t_n - \mu_n)^2/\bar F(t_n) \sim n \varepsilon_n^2 \bar F(t_n) \le n \varepsilon_n^2 \to 0. \end{equation} This also contradicts \eqref{proof_oracle_threshold_power_law1}. Since there is no other option, it must be that \eqref{proof_oracle_threshold_power_law1} cannot hold. We conclude that, indeed, \eqref{prp-threshold2} holds for any sequence of thresholds. \end{proof} \section{Scan tests} \label{sec:scan} In this section, we study the scan tests \eqref{HC_scan} and \eqref{BJ_scan}, and show that both of them do as well as the oracle scan test, at least to first order in the asymptotic regime where $n \to \infty$ and under the various parameterizations used in the previous section. We refer to \eqref{HC_scan} as the Stouffer scan test, as it is constructed in the manner of Stouffer's combination test \citep{stouffer1949american}; while we refer to \eqref{BJ_scan} as the Tippett scan test, for similar reasons \citep{tippett1931methods}. \subsection{Stouffer scan test} We study the Stouffer scan test \eqref{HC_scan}. The main work goes into controlling this statistic under the null hypothesis. The limiting distribution of higher criticism can be derived from \citet{jaeschke1979asymptotic} and the limiting distributions of some variants of the scan statistic are known under other models \citep{kabluchko2011extremes, sharpnack2016exact}. We will not pursue such a fine result here, but content ourselves with a relatively rough upper bound.
\begin{lem} \label{lem:HC_scan} Given observations $x_1, \dots, x_n$, the maximum in \eqref{HC_scan} is attained at some $(s,t) = (x_i,x_j)$. \end{lem} \begin{proof} Define \begin{equation}\label{R_n} R_n(s,t) := \frac{N_n[s,t] - n F[s,t]}{\sqrt{n F[s,t] (1 -F[s,t]) + 1}}. \end{equation} Let $x_{(1)} \le \cdots \le x_{(n)}$ denote the ordered observations, and set $x_{(0)} = -\infty$ and $x_{(n+1)} = \infty$. It suffices to show that, for any $1 \le i \le j \le n$ and any $(s,t)$ such that $x_{(i-1)} < s \le x_{(i)}$ and $x_{(j)} \le t < x_{(j+1)}$, in addition to $F[s,t] \le 1/2$, we have $R_n(x_{(i)}, x_{(j)}) \ge R_n(s,t)$. The crucial observation is that $N_n[s,t] = N_n[x_{(i)}, x_{(j)}]$ while $F[x_{(i)}, x_{(j)}] \le F[s,t]$. It is thus enough to show that the function $p \mapsto (a - p)/(p(1-p) + b)^{1/2}$ is decreasing over $[0,1/2]$ for any $a, b \ge 0$. This is so since this function has derivative $- (a (1 -2 p) + 2 b + p)/(2(p(1-p) + b)^{3/2})$. \end{proof} \begin{thm} \label{thm:HC_scan0} With $S_n$ defined as the statistic \eqref{HC_scan}, we have \begin{equation} \P_0(S_n \ge 3\log n) \to 0. \end{equation} \end{thm} \begin{proof} We place ourselves under the null hypothesis. Recall the definition of $R_n$ in \eqref{R_n}. By \lemref{HC_scan} and the fact that $R_n(X_i, X_i) = 1$ for all $i$, if $S_n \ge 3 \log n$ then necessarily $S_n = S_n^* := \max_{i \ne j} R_n(X_i, X_j)$. For any $i \ne j$, we have \begin{equation} R_n(X_i, X_j) \le 2 + S_{i,j}, \end{equation} with \begin{equation}\label{Nij} S_{i,j} := \frac{N_{i,j} -2 - (n-2) p_{i,j}}{\sqrt{(n-2) p_{i,j}(1-p_{i,j}) + 1}}, \quad N_{i,j} := N_n[X_i, X_j], \quad p_{i,j} := F[X_i, X_j].
\end{equation} The point of this reorganization is that, given $(X_i, X_j)$, $N_{i,j}-2 \sim \text{Bin}(n-2, p_{i,j})$, and an application of Bernstein's inequality gives \begin{equation} \P_0(S_{i,j} \ge s \mid X_i, X_j) \le \exp\bigg(-\frac{s^2 b_{i,j}^2/2}{b_{i,j}^2 +s b_{i,j}/3}\bigg) \le \exp\bigg(-\frac{s^2/2}{1 +s/3}\bigg) \le \exp(-s), \quad \forall s \ge 6, \end{equation} because $b_{i,j} := \sqrt{(n - 2) p_{i,j} (1-p_{i,j}) + 1} \ge 1$. Thus, with the union bound, as $n \to \infty$, we have \begin{align*} \P_0(S_n \ge 3 \log n) &\le \P_0(\exists i \ne j : S_{i,j} + 2 \ge 3 \log n) \\ &\le \sum_{i < j} \P_0(S_{i,j} \ge 3 \log n - 2) \le n^2 \exp(-3 \log n +2) \to 0, \end{align*} which proves the statement. \end{proof} With \thmref{HC_scan0}, one obtains the following performance bound for the Stouffer scan test. \begin{cor} The Stouffer scan test is powerful if there is a sequence of intervals $([s_n,t_n])$ such that \begin{equation}\label{cor-stouffer1} n \varepsilon_n F[s_n -\mu_n, t_n -\mu_n] \gg \log n, \quad \text{and} \quad n \varepsilon_n^2 F[s_n -\mu_n, t_n -\mu_n]^2/F[s_n,t_n] \gg (\log n)^2. \end{equation} \end{cor} \begin{proof} By \thmref{HC_scan0}, the Stouffer scan test at level $\alpha$ is at least as powerful as the test $\{S_n \ge 3 \log n\}$, eventually. Now, under the alternative, this test is powerful if we can prove that $p_n := F[s_n, t_n] \le 1/2$ and $R_n(s_n, t_n) \ge 3 \log n$. Define $p'_n := F[s_n - \mu_n, t_n - \mu_n]$ and $q_n := (1-\varepsilon_n) p_n + \varepsilon_n p'_n$, so that \eqref{cor-stouffer1} can be expressed as \begin{equation} n \varepsilon_n p'_n \gg \log n, \quad \text{and} \quad n \varepsilon_n^2 {p'_n}^2/p_n \gg (\log n)^2.
\end{equation} That $p_n \le 1/2$ is true, eventually, comes from the fact that \begin{equation} \infty \gets n \varepsilon_n^2 {p'_n}^2/p_n \le n \varepsilon_n^2/p_n, \end{equation} with $n \varepsilon_n^2 \to 0$ by assumption, so that necessarily $p_n \to 0$. Note that this implies that $q_n \to 0$ also. Given that $N_n[s_n, t_n]$ is binomial with parameters $n$ and $q_n$, with $n q_n \ge n \varepsilon_n p'_n \to \infty$ by the first part of \eqref{cor-stouffer1}, we have $N_n[s_n, t_n] = n q_n + O_P(\sqrt{n q_n (1-q_n)})$, and so \begin{equation} R_n(s_n, t_n) = \frac{n \varepsilon_n (p'_n - p_n) + O_P(\sqrt{n q_n (1-q_n)})}{\sqrt{n p_n (1-p_n)+ 1}} \sim \frac{n \varepsilon_n p'_n + O_P(\sqrt{n q_n})}{\sqrt{n p_n + 1}}, \end{equation} since $p'_n \gg p_n$, by the fact that \begin{equation} \infty \gets n \varepsilon_n^2 {p'_n}^2/p_n = n \varepsilon_n^2 p_n (p'_n/p_n)^2 = o((p'_n/p_n)^2). \end{equation} In addition, the same conditions imply \begin{equation} \frac{n \varepsilon_n p'_n}{\sqrt{n q_n}} \asymp \sqrt{n \varepsilon_n^2 {p'_n}^2/p_n} \bigwedge \sqrt{n \varepsilon_n p'_n} \to \infty, \end{equation} so that \begin{equation} R_n(s_n, t_n) \sim_P n \varepsilon_n p'_n/\sqrt{n p_n + 1} \asymp_P \sqrt{n \varepsilon_n^2 {p'_n}^2/p_n} \bigwedge n \varepsilon_n p'_n \gg \log n. \end{equation} We conclude that $R_n(s_n, t_n) \ge 3 \log n$ holds with probability tending to 1. \end{proof} With this performance bound, it is straightforward to verify that the Stouffer scan test performs as well as the oracle scan test to first order, at least in the context of the parameterization used in the models studied in \secref{generalized_Gaussian} and \secref{power_law}. This comes from the fact that, in the context of these sections, the quantities appearing in \eqref{cor-stouffer1} increase as (fixed) positive powers of $n$ under the alternative.
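For concreteness, the statistic \eqref{HC_scan} can be computed directly by scanning all pairs of order statistics, which is justified by \lemref{HC_scan}; the restriction $F[s,t] \le 1/2$ and the $+1$ in the denominator follow \eqref{R_n}. A minimal sketch (our own illustration; the null CDF passed in, the seed, and the sample size are assumptions of the example, not taken from the text):

```python
import numpy as np

def stouffer_scan(x, cdf):
    """Stouffer scan statistic: maximum of
    R_n(s, t) = (N_n[s, t] - n F[s, t]) / sqrt(n F[s, t] (1 - F[s, t]) + 1)
    over pairs of order statistics (x_(i), x_(j)), i <= j, restricted to
    F[s, t] <= 1/2.  Quadratic cost in n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    u = cdf(x)                             # F(x_(1)) <= ... <= F(x_(n))
    best = -np.inf
    for i in range(n):
        p = u[i:] - u[i]                   # F[x_(i), x_(j)] for j = i, ..., n
        counts = np.arange(1, n - i + 1)   # N_n[x_(i), x_(j)] = j - i + 1
        stat = (counts - n * p) / np.sqrt(n * p * (1 - p) + 1)
        best = max(best, stat[p <= 0.5].max())
    return best

# Under a uniform null, the statistic should stay well below 3 log n,
# in line with the theorem above.
x = np.random.default_rng(1).uniform(size=200)
s = stouffer_scan(x, cdf=lambda t: np.clip(t, 0.0, 1.0))
```

Note that the pair $(i,i)$ always contributes the value $1$, so the statistic is at least $1$, matching the observation $R_n(X_i, X_i) = 1$ in the proof above.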
We formalize this into the following statement, left without formal proof. \begin{cor}\label{cor:stouffer} The Stouffer scan test achieves the oracle scan detection boundary in all the settings considered in \secref{generalized_Gaussian} and \secref{power_law}. \end{cor} \subsection{Tippett scan test} We study the Tippett scan test \eqref{BJ_scan}, which we denote by $T_n$. We control this statistic under the null hypothesis by a simple application of the union bound. A more refined control seems possible in view of \cite{moscovich2016exact}, where the limiting distribution of \eqref{BJ} is obtained. \begin{prp} \label{prp:BJ_scan0} With $T_n$ defined as the statistic \eqref{BJ_scan}, we have \begin{equation} \P_0(T_n \le 1/n^3) \to 0. \end{equation} \end{prp} \begin{proof} Under the null, each $P_{i,j}$ is uniformly distributed in $[0,1]$. Thus the union bound gives \begin{equation} \P_0(T_n \le 1/n^3) \le n^2 \P_0(P_{i,j} \le 1/n^3) = n^2/n^3 = 1/n \to 0, \end{equation} which concludes the proof. \end{proof} Thus most of the work goes into controlling the statistic under the alternative. We do so by bounding the Tippett scan statistic by an expression that resembles that of the Stouffer scan statistic. We make use of the following simple concentration bound.\footnote{\quad Many things are known about the beta distribution and order statistics in general, but we could not immediately find such a simple bound.} \begin{lem}\label{lem:tippett} For $k \in [n]$, \begin{equation} \mathsf{B}(u; k, n-k+1) \le \exp\bigg(- \frac{(k -n u)^2/2}{nu(1-u) + (k-nu)/3}\bigg), \quad 0 \le u \le k/n. \end{equation} \end{lem} \begin{proof} Let $U_{k:n}$ denote the $k$-th order statistic of an iid sample of size $n$ from the uniform distribution on $[0,1]$.
For $u \in [0,1]$ such that $nu \le k$, we have \begin{align*} \mathsf{B}(u; k, n-k+1) = \P(U_{k:n} \le u) = \P(\text{Bin}(n, u) \ge k), \end{align*} and we conclude with an application of Bernstein's inequality. \end{proof} \begin{prp} The Tippett scan test is powerful if there is a sequence of intervals $([s_n,t_n])$ such that \begin{equation}\label{prp-tippett1} n \varepsilon_n F[s_n -\mu_n, t_n -\mu_n] \gg \sqrt{\log n}, \qquad n \varepsilon_n^2 F[s_n -\mu_n, t_n -\mu_n]^2/F[s_n,t_n] \gg \log n. \end{equation} \end{prp} \begin{proof} Recall that $T_n = \min_{i < j} P_{i,j}$ and the expression of $P_{i,j}$. Thus, after taking logarithms, an application of \lemref{tippett} gives \begin{equation} T_n \le 1/n^3 \ \Leftarrow \ \max_{i < j} \frac{(j-i - n V_{i,j})_+^2/2}{n V_{i,j} (1-V_{i,j}) + (j-i - n V_{i,j})_+/3} \ge 3 \log n, \end{equation} where $V_{i,j} := U_{(j)} - U_{(i)}$. Moreover, $V_{i,j} = F[X_{(n-j+1)}, X_{(n-i+1)}]$ and $j-i = N_n[X_{(n-j+1)}, X_{(n-i+1)}] - 1$, yielding \begin{equation} T_n \le 1/n^3 \ \Leftarrow \ \max_{i\ne j} \frac{(N_{i,j} -1 -n p_{i,j})_+^2}{n p_{i,j} (1-p_{i,j}) +(N_{i,j} -1 -n p_{i,j})_+} \ge 6 \log n, \end{equation} with the notation of \eqref{Nij}. The latter inequality holds when there is $i \ne j$ such that \begin{equation} N_{i,j} -1 -n p_{i,j} \ge 12 \log n \quad \text{and} \quad \frac{N_{i,j} -1 -n p_{i,j}}{\sqrt{n p_{i,j} (1-p_{i,j})}} \ge \sqrt{12 \log n}, \end{equation} which is the case when \begin{equation}\label{tippett-proof1} n p_{i,j} \ge \sqrt{12 \log n} \quad \text{and} \quad \frac{N_{i,j} -1 -n p_{i,j}}{\sqrt{n p_{i,j} (1-p_{i,j})}} \ge \sqrt{12 \log n}. \end{equation} Let $(s_n, t_n)$ be as in the statement and let $(i,j)$ be such that $U_{(i)} \le s < U_{(i+1)}$ and $U_{(j-1)} < t \le U_{(j)}$.
By construction, $p_{i,j} \ge F[s_n, t_n]$, so that the first part of \eqref{prp-tippett1} implies that the first part of \eqref{tippett-proof1} holds eventually. We also have $N_{i,j} \ge N_n[s_n,t_n] - 2$, so that \begin{equation} \frac{N_{i,j} -1 -n p_{i,j}}{\sqrt{n p_{i,j} (1-p_{i,j})}} \ge \frac{N_n[s_n,t_n] -3 -n F[s_n,t_n]}{\sqrt{n F[s_n,t_n] (1-F[s_n,t_n])}}, \end{equation} and the quantity on the RHS is controlled using the second part of \eqref{prp-tippett1} exactly as in the proof of \prpref{scan}. \end{proof} Here too, these results make it straightforward to verify that the Tippett scan test performs as well as the oracle scan test (to first order) in the models and regimes seen earlier, leading us to state the following (left without a formal proof). \begin{cor}\label{cor:tippett} The Tippett scan test achieves the oracle scan detection boundary in all the settings considered in \secref{generalized_Gaussian} and \secref{power_law}. \end{cor} \section{Numerical experiments} \label{sec:numerics} We performed small-scale numerical experiments to probe our theory. We worked with Student t-distributions with varying numbers of degrees of freedom, ${\rm df} \in \{0.5, 1, 2, 5\}$. Recall that the Student t-distribution with $k$ degrees of freedom has density $\propto (1 + x^2/k)^{-(k+1)/2}$. We considered three different scenarios with varying sparsity exponents, $\beta = 0.6, 0.7, 0.8$. The sample size was set to $n = 30,000$. We compared the higher criticism test, the Berk-Jones test, the Stouffer scan test, and the Tippett scan test in each of these settings. We repeated each setting 200 times. See \figref{6}, \figref{7}, and \figref{8}. As the theory predicts, when the number of degrees of freedom is smaller, implying that the base distribution has fatter tails, the scan procedures dominate the threshold procedures. The threshold procedures become dominant as the tails become lighter.
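For reference, the Tippett scan statistic used in these comparisons can be computed directly from its definition together with the order-statistic identity used in the proof of \lemref{tippett}. The following minimal sketch is our own illustration and our reading of \eqref{BJ_scan}; it is quadratic in $n$ with an additional linear factor from the exact binomial sum, so it is only meant for small examples.

```python
import numpy as np
from math import comb

def beta_cdf(u, k, n):
    """B(u; k, n - k + 1) = P(Bin(n, u) >= k): the CDF of the k-th order
    statistic of n iid uniforms, via the identity used in the lemma's proof."""
    return sum(comb(n, m) * u**m * (1.0 - u) ** (n - m) for m in range(k, n + 1))

def tippett_scan(x, cdf):
    """Tippett scan statistic T_n = min over i < j of
    P_{i,j} = B(U_(j) - U_(i); j - i, n - (j - i) + 1), with U_(i) = F(x_(i))."""
    u = np.sort(cdf(np.asarray(x, dtype=float)))
    n = len(u)
    t = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            t = min(t, beta_cdf(u[j] - u[i], j - i, n))
    return t

# Under a uniform null, T_n rarely falls below 1/n^3, in line with the
# proposition controlling the Tippett scan test under the null.
x = np.random.default_rng(2).uniform(size=40)
t = tippett_scan(x, cdf=lambda v: np.clip(v, 0.0, 1.0))
```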
The dominance of the threshold procedures for lighter tails is specific to this particular sample size: in principle, our theory indicates that with a larger sample size the scan procedures would again dominate. The transition from powerless to powerful takes place at a larger effect size than predicted by the theory, which is also explained by the limited sample size.\footnote{The scan tests have computational complexity of order $O(n^2)$, which has limited the scale of our experiments.} \section{Discussion} \label{sec:discussion} While scan tests are commonly used in a number of detection problems, threshold tests are almost exclusively used in multiple testing situations. The main purpose of our work here was to reveal that scan tests can improve on threshold tests in somewhat standard multiple testing settings, particularly when the null distribution ($F$ in the paper) has heavy tails. \paragraph{Likelihood ratio performance bounds} Given our main objective, it was more natural to consider oracle-type performance bounds rather than using the likelihood ratio performance as benchmark. We can say nonetheless that, for representative models, the oracle threshold boundaries stated in \prpref{generalized_gaussian} and \prpref{generalized_gaussian_other} match those of the likelihood ratio test --- for example, this is true of generalized Gaussian models where $F$ has density of the form $f(t) \propto \exp(-|t|^a)$ for some $a > 0$. The same is true of the oracle scan boundary stated in \prpref{power_law} --- for example, this is true of power-law models where $F$ has density of the form $f(t) \propto (1+|t|^a)^{-1}$ for some $a > 0$. \paragraph{Nonparametric approaches} \citet{arias2017distribution} consider the situation where the null distribution, $F$, is symmetric about 0 but otherwise unknown. They suggest two tests for symmetry: the CUSUM sign test and the tail-run test, which are meant to be the nonparametric equivalents of the higher criticism test and the max test, respectively.
Back-of-the-envelope calculations seem to indicate that these nonparametric tests achieve the same detection boundaries as their parametric counterparts in all the settings considered here. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{./figures/fig300006} \caption{Here $\beta = 0.6$; the x-axis represents $r$ in the parameterization \eqref{mu2}, and the y-axis the power of the tests identified in the legend. Each subfigure corresponds to a Student t-distribution with the specified number of degrees of freedom. The black dashed vertical line corresponds to the oracle scan detection boundary established in \prpref{power_law}, while the dotted line corresponds to the oracle threshold detection boundary established in \prpref{oracle_threshold_power_law}.} \label{fig:6} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.6]{./figures/fig300007} \caption{Here $\beta = 0.7$; otherwise, see \figref{6} for more details.} \label{fig:7} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.6]{./figures/fig300008} \caption{Here $\beta = 0.8$; otherwise, see \figref{6} for more details.} \label{fig:8} \end{figure} \bibliographystyle{abbrvnat}
\section{Introduction} Many important recent advances in our understanding of the physical world have been driven by large-scale computational modeling and data analysis, for example, the 2012 discovery of the Higgs boson, the 2013 Nobel Prize in chemistry for computational modeling of molecules, and the 2016 discovery of gravitational waves. Given the ubiquitous use in science and its critical importance to the future of science and engineering, scientific computing plays a central role in scientific investigations and is critical to innovation in most domains of our lives. It underpins the majority of today's technological, economic and societal feats. We have entered an era in which huge amounts of data offer enormous opportunities. \href{{http://pathways.acm.org/executive-summary.html}}{By 2020, it is also expected that one out of every two jobs in the STEM (Science, Technology, Engineering and Mathematics) fields will be in computing} (Association for Computing Machinery, 2013, \cite{ACM2013}). These developments, needs and future challenges, as well as the developments that are now taking place within quantum computing, quantum information theory and data driven discoveries (data analysis and machine learning) will play an essential role in shaping future technological developments. Most of these developments require true cross-disciplinary approaches and bridge a vast range of temporal and spatial scales and include a wide variety of physical processes. To develop computational tools for such complex systems that give physically meaningful insights requires a deep understanding of approximation theory, high performance computing, and domain specific knowledge of the area one is modeling. Computing competence represents a central element in scientific problem solving, from basic education and research to essentially almost all advanced problems in modern societies. These competencies are not limited to STEM fields only. 
The statistical analysis of big data sets and how to use machine learning algorithms belong to the set of tools needed by almost all disciplines, spanning from the Social Sciences, Law, Education to the traditional STEM fields and Life Science. Unfortunately, many of our students at both the undergraduate and the graduate levels are unprepared to use computational modeling, data science, and high performance computing, skills that are much valued by a broad range of employers. This lack of preparation is most certainly no fault of our students, but rather a broader issue associated with how departments, colleges, and universities are keeping up with the demands of these high-tech employers. It is through this integrated computational perspective that we aim to address this issue. Furthermore, although many universities do offer compulsory programming courses in scientific computing, and physics departments offer one or more elective courses in computational physics, there is often not a uniform and coherent approach to the development of computing competencies and computational thinking. This has consequences for a systematic introduction and realization of computing skills and competencies and the pertaining learning outcomes. The aim of this contribution is to present examples of how to introduce a computational perspective in basic undergraduate physics courses, basing ourselves on experience gained at the University of Oslo in Norway and now also at Michigan State University in the USA. In particular, we will present the \textbf{Computing in Science Education} project from the University of Oslo \cite{CSEUiO}, a project which has evolved into a Center of Excellence in Education, the \href{{http://www.mn.uio.no/ccse/english/}}{Center for Computing in Science Education} \cite{CCSEUiO}. Similar initiatives and ideas are also being pursued at Michigan State University.
The overarching aim is to strengthen the computing competencies of students, with key activities such as the establishment of learning outcomes, the development of assessment programs, and course transformations that include computational projects and exercises in a coherent way. The hope is that these initiatives can also lead to a better understanding of the scientific method and scientific reasoning as well as providing new and deeper insights about the underlying physics that governs a system. This contribution is organized as follows. After these introductory remarks, we present briefly in the next section what we mean by computing and present possible learning outcomes that could be applied to a bachelor's degree program in physics (Sec. \ref{sec:competencies}), which are distinguished as more general competencies and course-specific ones. In Sec. \ref{sec:learingoutcomes}, we discuss possible paths on how to include and implement computational elements in central undergraduate physics courses. We discuss briefly how to assess various learning outcomes and how to develop a research program around this. Several examples that illustrate the links between the learning outcomes and specific mathematics and physics courses are discussed in Sec. \ref{sec:examples}. Finally, in the last section we present our conclusions and perspectives. \section{Computing competencies} \label{sec:competencies} The focus of this article is on computing competencies and how these help in enlarging the body of tools available to students and scientists alike, going well beyond classical tools taught in standard undergraduate courses in physics and mathematics. We will claim through various examples that computing allows for a more generic handling of problems, where focusing on algorithmic aspects results in deeper insights about scientific problems.
With \textbf{computing} we will mean solving scientific problems using all possible tools, including symbolic computing, computers and numerical algorithms, experiments (often of a numerical character) and analytical paper and pencil solutions. We will thus, deliberately, avoid a discussion of computing and computational physics in particular as something separate from theoretical physics and experimental physics. It is common in the scientific literature to encounter statements like \emph{Computational physics now represents the third leg of research alongside analytical theory and experiments}. In selected contexts where, say, high-performance topics or specific computational methodologies play a central role, it may be meaningful to separate analytical work from computational studies. We will however argue strongly, in particular within an educational context, for a view where computing means solving scientific problems with all possible tools. Through various examples in this article we will show that tightly coupling standard analytical work with various algorithms and a computational approach can help in enhancing the students' understanding of the scientific method, hopefully providing deeper insights about the physics (or other disciplines). Whether and how we achieve these outcomes is the purpose of research in computational physics education. The power of the scientific method lies in identifying a given problem as a special case of an abstract class of problems, identifying general solution methods for this class of problems, and applying a general method to the specific problem (applying means, in the case of computing, calculations by pen and paper, symbolic computing, or numerical computing by ready-made and/or self-written software). This generic view on problems and methods is particularly important for understanding how to apply available generic software to solve a particular problem.
Algorithms involving pen and paper are traditionally aimed at what we often refer to as continuous models, of which only a few can be solved analytically. The number of important differential equations in physics that can be solved analytically is rather small, thereby limiting the set of problems that can be addressed in order to deepen a student's insights about a particular physics case. On the other hand, the application of computers calls for approximate discrete models. Much of the development of methods for continuous models is now being replaced by methods for discrete models in science and industry, simply because we can address much larger classes of problems with discrete models, often also by simpler and more generic methodologies. In Sec. \ref{sec:examples} we will present several examples thereof. A typical case is an eigenvalue problem, which allows students to study the analytical solution and then move to an interacting quantum mechanical case where no analytical solution exists. By merely changing the diagonal matrix elements, one can solve problems that span from classical mechanics and fluid dynamics to quantum mechanics and statistical physics. Using essentially the same algorithm one can study physics cases that are covered by several courses, allowing teachers to focus more on the physical systems of interest. There are several advantages in introducing computing in basic physics courses. It allows physics teachers to bring in important elements of the scientific method at a much earlier stage in our students' education. Many advanced simulations used in physics research can easily be introduced, via various simplifications, in introductory physics courses, thereby enhancing the set of problems studied by the students (see Sec. \ref{sec:examples}). Computing gives university teachers a unique opportunity to enhance students' insights about physics and how to solve scientific problems.
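The eigenvalue example mentioned above can be made concrete in a few lines of Python (a language commonly used in such introductory courses). The sketch below is our own illustration: a standard three-point discretization of $-\tfrac{1}{2}u'' + V(x)u = E\,u$ in dimensionless units, where changing only the diagonal, i.e., the potential $V$, changes the physics problem being solved.

```python
import numpy as np

def eigenvalues(V, x):
    """Lowest eigenvalues of -(1/2) u'' + V(x) u = E u on a uniform grid,
    using the standard three-point finite-difference approximation of u''."""
    h = x[1] - x[0]
    n = len(x)
    H = np.zeros((n, n))
    np.fill_diagonal(H, 1.0 / h**2 + V(x))     # kinetic + potential on the diagonal
    i = np.arange(n - 1)
    H[i, i + 1] = H[i + 1, i] = -0.5 / h**2    # off-diagonal kinetic terms
    return np.linalg.eigvalsh(H)               # sorted in ascending order

x = np.linspace(-10.0, 10.0, 400)
# Same algorithm, different physics: only the diagonal changes.
E = eigenvalues(lambda z: 0.5 * z**2, x)       # harmonic oscillator
print(E[:3])                                   # close to 0.5, 1.5, 2.5
```

Swapping the lambda for another potential (a double well, a finite box, an effective interaction) reuses the identical algorithm, which is precisely the pedagogical point made above.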
It gives the students the skills and abilities that are asked for by society. Computing allows for solving more realistic problems earlier and can provide an excellent training in creativity as well as enhancing the understanding of abstractions and generalizations. Furthermore, computing can decrease the need for special tricks and tedious algebra, and shifts the focus to problem definition, visualization, and ``what if'' discussions. Finally, if the setup of undergraduate courses is properly designed and synchronized with mathematics and computational science courses, computing can trigger further insights in mathematics and other disciplines. \section{Learning Outcomes and Assessment Programs}\label{sec:learingoutcomes} An essential element in designing a synchronization of computing in various physics (and other disciplines as well) courses is a proper definition of learning outcomes, as well as the development of assessment programs and possibly a pertinent research program on physics education. Having a strong physics education group that can define a proper research program is an essential part of such an endeavor. Michigan State University has a strong physics education group involved in such research programs. Similarly, the University of Oslo, with its recently established center of excellence in Education \cite{CCSEUiO}, has started to define a research program that aims at assessing the relevance and importance of computing in science education. Physics, together with basic mathematics and computational science courses, is at the undergraduate level presented in a very homogeneous way worldwide. Most universities offer more or less the same topics and courses, starting with Mechanics and Classical Mechanics, Waves, Electromagnetism, Quantum physics and Quantum Mechanics and ending with Statistical physics.
Similarly, during the last year of the Bachelor's degree one finds elective courses on computational physics and mathematical methods in physics, in addition to a selection of compulsory introductory laboratory courses. Additionally, most physics undergraduate programs now have a compulsory introductory course in scientific programming offered by the computer science department. Here, one frequently encounters Python as the default programming language. Moreover, one finds almost the same topics covered by the basic mathematics courses required for a physics degree, from basic calculus to linear algebra, differential equations and real analysis. Many mathematics departments and/or computational science departments offer courses on numerical mathematics that are based on the first course in programming. These developments have taken place during the last decade and several universities are attempting to include a more coherent computational perspective in our basic education. In order to achieve this, it is important to develop a strategy where the introduction of computational elements is properly synchronized between physics, mathematics, and computational science courses. This would allow physics teachers to focus more on the relevant physics. The development of learning outcomes plays a central role in this work. An additional benefit of properly-developed learning outcomes is the stimulation of cross-department collaborations as well as an increased awareness about what is being taught in different courses. Here we list several possibilities, starting with some basic algorithms and topics that can be taught in mathematics and computational science courses.
We end with a discussion of possible learning outcomes for central undergraduate physics courses. \subsection{General Learning Outcomes for Computing Competence.} Here we present some high-level learning outcomes that we expect students to achieve through comprehensive and coordinated instruction in numerical methods over the course of their undergraduate program. These learning outcomes are different from specific learning goals in that the former reference the end state that we aim for students to achieve. The latter reference the specific knowledge, tools, and practices with which students should engage and discuss how we expect them to participate in that work. Numerical algorithms form the basis for solving science and engineering problems with computers. An understanding of algorithms does not itself serve as an understanding of computing, but it is a necessary step along the path. Through comprehensive and coordinated instruction, we aim for students to have developed a deep understanding of: \begin{itemize} \item the most fundamental algorithms for linear algebra, ordinary and partial differential equations, and optimization methods; \item numerical integration, including the trapezoidal and Simpson's rules, as well as multidimensional integrals; \item random numbers, random walks, probability distributions, Monte Carlo integration and Monte Carlo methods; \item root finding and interpolation methods; \item machine learning algorithms; and \item statistical data analysis and handling of data sets. \end{itemize} Furthermore, we aim for students to develop: \begin{itemize} \item a working knowledge of advanced algorithms and how they can be accessed in available software; \item an understanding of approximation errors and how they can present themselves in different problems; and \item the ability to apply fundamental and advanced algorithms to classical model problems as well as real-world problems, and to assess the uncertainty of their results.
\end{itemize} Later courses should build on this foundation as much as possible. In designing learning outcomes and course contents, one should make sure that there is a progression in the use of mathematics, numerical methods and programming, as well as in the contents of various physics courses. This also means that teachers do not need to spend much time on numerical tools in their own courses, since these are naturally included elsewhere in the curriculum. \subsubsection{Learning Outcomes for Symbolic Computing} Symbolic computing is a helpful tool for addressing certain classes of problems where a functional representation of the solution (or part of the solution) is needed. Through engaging with symbolic computing platforms, we aim for students to have developed: \begin{itemize} \item a working knowledge of at least one computer algebra system (CAS); \item the ability to apply a CAS to perform classical mathematics including calculus, linear algebra and differential equations; and \item the ability to verify the results produced by the CAS using some other means. \end{itemize} \subsubsection{Learning Outcomes for Programming} Programming is a necessary aspect of learning computing for science and engineering. The specific languages and/or environments that students learn are less important than the nature of that learning (i.e., learning programming for the purposes of solving science problems). 
By numerically solving science problems, we expect students to have developed (these are possible examples): \begin{itemize} \item an understanding of programming in a high-level language (e.g., MATLAB, Python, R); \item an understanding of programming in a compiled language (e.g., Fortran, C, C++); \item the ability to implement and apply numerical algorithms in reusable software that acknowledges the generic nature of the mathematical algorithms; \item a working knowledge of basic software engineering elements including functions, classes, modules/libraries, testing procedures and frameworks, scripting for automated and reproducible experiments, documentation tools, and version control systems (e.g., Git); and \item an understanding of debugging software, e.g., as part of implementing comprehensive tests. \end{itemize} \subsubsection{Learning Outcomes for Mathematical Modeling} Preparing a problem to be solved numerically is a critical step in making progress towards an eventual solution. By providing opportunities for students to engage in modeling, we aim for them to develop the ability to solve real problems from applied sciences by: \begin{itemize} \item deriving computational models from basic principles in physics and articulating the underlying assumptions in those models; \item constructing models with dimensionless and/or scaled forms to reduce and simplify input data; and \item interpreting the model's dimensionless and/or scaled parameters to increase their understanding of the model and its predictions. \end{itemize} \subsubsection{Learning Outcomes for Verification} Verifying a model and the outcomes it produces is an essential element in generating confidence in the model itself. Moreover, such verifications provide evidence that the work is reproducible. 
By engaging in verification practices, we aim for students to develop: \begin{itemize} \item an understanding of how to program testing procedures; and \item the knowledge of testing/verification methods including the use of: \begin{itemize} \item exact solutions of numerical models, \item classical analytical solutions including asymptotic solutions, \item computed asymptotic approximation errors (i.e., convergence rates), and \item unit tests and step-wise construction of tests to aid debugging. \end{itemize} \end{itemize} \subsubsection{Learning Outcomes for Presentation of Results} The results of a computation need to be communicated in some format (i.e., through figures, posters, talks, and other forms of written and oral communication). Computation affords the experience of presenting original results quite readily. Through their engagement with presentations of their findings, we aim for students to develop: \begin{itemize} \item the ability to make use of different visualization techniques for different types of computed data; \item the ability to present computed results effectively in scientific reports and oral presentations; and \item a working knowledge of the norms and practices for scientific presentations in various formats (i.e., figures, posters, talks, and written reports). \end{itemize} The above learning goals and outcomes are of a more generic character. What follows here are specific algorithms that occur frequently in scientific problems. The implementation of these algorithms in various physics courses, together with problem and project solving, is a way to implement large fractions of the above learning goals. \subsection{Central Tools and Programming Languages} We strongly recommend that Python be used as the high-level programming language. Other high-level environments like Mathematica and Matlab can also be presented and offered as special courses. 
This means that students can apply the knowledge from the basic programming course offered by most universities, many of which use Python, and extend their computational skills in various physics classes. We recommend that the following tools are used: \begin{enumerate} \item \href{{http://jupyter.org/}}{jupyter and ipython notebooks}; \item version control software like \href{{https://git-scm.com/}}{git} and repositories like \href{{https://github.com/}}{GitHub} and \href{{https://gitlab.com/}}{GitLab}; \item typesetting tools like {\LaTeX}; and \item unit tests and existing tools for unit testing. \href{{https://docs.python.org/2/library/unittest.html}}{Python has extensive tools for this.} \end{enumerate} The notebooks can be used to hand in exercises and projects. They can provide the students with experience in presenting their work in the form of scientific/technical reports. Version control software allows teachers to bring in reproducibility of science as well as enhancing collaborative efforts among students. Version control can also help students present benchmark results, allowing others to verify their results. Unit testing is a central element in the development of numerical projects, from microtests of code fragments, to intermediate merging of functions, to final tests of the correctness of a code. \subsection{Specific Algorithms for Basic Physics Courses} For a bachelor's degree in physics, it is now more and more common to require a compulsory programming course, typically taught during the first two years of undergraduate studies. The programming course, together with mathematics courses, lays the foundation for the use of computational exercises and projects in various physics courses. 
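As a minimal sketch of the kind of unit testing we have in mind (the function names and tolerances are illustrative choices of our own, not part of any required curriculum), one can test a numerical routine against cases with known answers:

```python
def trapezoidal(a, b, f, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / float(n)
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def test_quadratic():
    # the exact value of the integral of x^2 over [0,1] is 1/3;
    # the trapezoidal error is O(h^2), so n = 1000 gives roughly 1e-7 accuracy
    assert abs(trapezoidal(0.0, 1.0, lambda x: x * x, 1000) - 1.0 / 3.0) < 1e-6

def test_linear_is_exact():
    # the trapezoidal rule is exact for linear integrands, for any n
    assert abs(trapezoidal(0.0, 2.0, lambda x: 3.0 * x, 4) - 6.0) < 1e-12

# a test runner such as pytest would discover the test_ functions
# automatically; here we simply call them directly
test_quadratic()
test_linear_is_exact()
```

The second test exploits a mathematical property of the algorithm itself, which is exactly the kind of verification thinking we wish to instill.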
Based on this course, and the various mathematics courses included in a physics degree, there is a unique possibility to incorporate computational exercises and projects in various physics courses, without taking away the attention from the basic physics topics to be covered. What follows below is a suggested list of possible algorithms which could be included in central physics courses. The list is by no means exhaustive and is mainly meant as a guideline of what can be included. The examples we discuss in Sec. \ref{sec:examples} illustrate how these algorithms can be included in courses like mechanics, quantum physics/mechanics, statistical and thermal physics and electromagnetism. These are all core courses in a typical bachelor's degree in physics. \subsection{Central Algorithms} \begin{itemize} \item Ordinary differential equations \begin{enumerate} \item Euler, modified Euler, Verlet and Runge-Kutta methods, with applications to problems in courses on electromagnetism, methods for theoretical physics, quantum mechanics and mechanics. \end{enumerate} \item Partial differential equations \begin{enumerate} \item Diffusion in one and two dimensions (statistical physics) and the wave equation in one and two dimensions are examples of physics cases which could appear in courses on mechanics, electromagnetism, quantum mechanics and methods for theoretical physics, while Laplace's and Poisson's equations fit naturally in a course on electromagnetism. \end{enumerate} \item Numerical integration \begin{enumerate} \item Trapezoidal and Simpson's rules and Monte Carlo integration. Here one can envision applications in statistical physics, methods of theoretical physics, electromagnetism and quantum mechanics. \end{enumerate} \item Statistical analysis, random numbers, random walks, probability distributions, Monte Carlo integration and the Metropolis algorithm. These are algorithms with important applications to statistical physics and laboratory courses. 
\item Linear algebra and eigenvalue problems \begin{enumerate} \item Gaussian elimination, LU-decomposition, eigenvalue solvers, and iterative methods like Jacobi or Gauss-Seidel for systems of linear equations. These algorithms are important for several courses: classical mechanics, methods of theoretical physics, electromagnetism and quantum mechanics. \end{enumerate} \item Signal processing \begin{enumerate} \item Discrete (fast) Fourier transforms, Lagrange/spline/Fourier interpolation, numerical convolutions {\&} circulant matrices, and filtering. Here we can think of applications in electromagnetism, quantum mechanics, and experimental physics (data acquisition). \end{enumerate} \item Root finding techniques, used in methods for theoretical physics, quantum mechanics, electromagnetism and mechanics. \item Machine learning algorithms and statistical data analysis, relevant for laboratory courses. \end{itemize} In order to achieve a proper pedagogical introduction of these algorithms, it is important that students and teachers see how these algorithms are used to solve a variety of physics problems. The same algorithm, for example the solution of a second-order differential equation, can be used to solve the equations for the classical pendulum in a mechanics course or, with a suitable change of variables, the equations for a coupled RLC circuit in the electromagnetism course. Similarly, if students develop a program for studies of celestial bodies in the mechanics course, many of the elements of such a program can be reused in a molecular dynamics calculation in a course on statistical and thermal physics. The two-point boundary value problem for a buckling beam (discretized as an eigenvalue problem) can be reused in quantum mechanical studies of interacting electrons in oscillator traps, or simply to study a particle in a box potential with varying depth and extension. We discuss some selected examples in section \ref{sec:examples}. 
Our forthcoming textbook \cite{DannyMortenBook} will contain a more exhaustive discussion of these topics, combined with a more detailed list of examples and a proper discussion of learning outcomes and possible assessment programs. In order to aid the introduction of computational exercises and projects, there is a strong need to develop educational resources. Physics is an old discipline with a large wealth of established analytical exercises and projects. In fields like mechanics, we have centuries of pedagogical developments with a strong emphasis on developing analytical skills. The majority of physics teachers are well familiar with this approach. In order to see how computing can enlarge this body of exercises and projects, and hopefully add additional insights into the physics behind various phenomena, we find it important to develop a large body of computational examples. The \href{{http://www.compadre.org/picup/}}{PICUP project}, Partnership for Integration of Computation into Undergraduate Physics, develops such \href{{http://www.compadre.org/PICUP/resources/}}{resources for teachers and students on the integration of computational material} \cite{PICUP}. We strongly recommend these resources. \subsubsection{Advanced Computational Physics Courses} Towards the end of undergraduate studies it is useful to offer a course which focuses on more advanced algorithms and presents compiled languages like C++ and Fortran, languages our students will meet in actual research. Furthermore, such a course should offer more advanced projects which train the students in actual research, developing more complicated programs and working on larger projects. \subsection{Physics Education Research and Computing in Science Education} The introduction of computational elements in the various courses should, if possible, be strongly integrated with ongoing research on physics education. 
The Physics and Astronomy department at MSU is in a unique position due to its strong research group in physics education, the \href{{http://www.pa.msu.edu/research/physics-education-lab}}{PERL group} \cite{PERLMSU}. Together with the Center for Computing in Science Education at the University of Oslo \cite{CCSEUiO}, we are now in the process of establishing new assessments and assessment methods that address several issues associated with integrating computing into science courses. The issues include, but are not limited to, how well students learn computing, what new insights students gain about the specific science through computing, and how students' affective states (e.g., motivation to learn, computational self-efficacy) are affected by computing. Broadly speaking, these assessments should provide deeper insights into the integration of computing in science education in general, as well as provide a structured framework for assessment of our efforts and a basis for systematic studies of student learning. The central questions that our research must address are \begin{enumerate} \item how can we assess the effect of integrating computing into science curricula on a variety of learner-centered constructs, including computational thinking, motivation, self-efficacy and science identity formation, \item how should we structure assessments to ensure valid, reliable and impactful assessment, which provides useful information to our program and central partners, and finally \item how can the use of these structured assessments improve student outcomes in teacher-, peer-, and self-assessment. \end{enumerate} Addressing these questions requires a combination of qualitative techniques, to construct the focus of these assessments, to build assessment items and to develop appropriate assessment methods, and quantitative techniques, including advanced statistical analysis, to ensure validity and reliability of the proposed methods as well as to analyze the resulting data. 
The learning objectives and learning outcomes for computational methods developed as part of the first objective form part of the basis for the assessment program, and we will also investigate the assessment of non-content learning goals such as self-efficacy and identity formation. Identifying and investigating the role of such non-content factors will be critical to support all students in achieving our computational learning goals. The effect of integrating computational methods into basic science courses has been sparsely studied, primarily because the practice is sparse. Further progress now depends on the development of assessments that can be used for investigative, comparative and/or longitudinal studies and to establish best practices in this emerging field. Some assessments will be developed for specific courses, but we will aim for broad applicability across institutions. \section{Examples on how to Include Computing in Physics Undergraduate Programs}\label{sec:examples} Having defined possible learning outcomes, we would now like to present some examples which reflect the discussions above. These examples are mainly taken from various courses at the University of Oslo, although some of them have been used at Michigan State University. Since 2003, first via the \href{{http://www.mn.uio.no/ccse/english/people/index.html}}{Computing in Science Education project} \cite{CSEUiO} and now through the recently established center of excellence in education, the \href{{http://www.mn.uio.no/ccse/english/}}{Center for Computing in Science Education} \cite{CCSEUiO}, computing has been introduced across disciplines in a synchronized way. A central element here is a compulsory programming course with a strong mathematical flavour, which gives a solid foundation in programming as a problem solving technique in mathematics. 
The line of thought is that solving mathematical problems numerically should enhance algorithmic thinking, and thereby the students' understanding of the scientific process. Secondly, mathematics is at least as important as before, but should be supplemented with the development, analysis, implementation, verification and validation of numerical methods. Finally, these methods are used in modeling and problem solving, with numerical methods and visualisation alongside traditional methods, in various science courses, from the physical sciences to the life sciences. Crucial ingredients for the success of the Computing in Science Education project have been the support from governing bodies, extensive cooperation across departmental boundaries, and the willingness of several university teachers and researchers to give priority to teaching reforms and course transformations. In addition to the above, over the years we have coordinated the use of computational exercises and numerical tools in most undergraduate courses. Furthermore, via the Computing in Science Education project and now the Center for Computing in Science Education, we help in updating the scientific staff's competence on computational aspects and give support (scientific, pedagogical and financial) to those who wish to revise their courses in a computational direction. This may include the organization of courses for university teachers. Summer students aid in developing and introducing computational exercises, and several new textbooks have been developed, from the basic mechanics course to a course in statistical physics \cite{AMS2015, AIV2018, AMSDS2019}. \subsection{The Physics Undergraduate Program at the University of Oslo} The layout of the physics bachelor's degree program at the University of Oslo is shown in table \ref{tab:FAUiO}. 
\begin{table} \caption{The bachelor's degree program in physics at the University of Oslo, Norway}\label{tab:FAUiO} \begin{footnotesize} \begin{tabular}{|l|l|l|l|} \hline \multicolumn{1}{|l}{ 6th Semester } & \multicolumn{1}{|l|}{ Elective } & \multicolumn{1}{|l|}{ Elective } & \multicolumn{1}{|l|}{ Elective } \\ \hline 5th Semester & FYS2160 Statistical Physics & FYS3110 Quantum Mechanics & Elective \\ \hline 4th Semester & FYS2130 Waves and Motion & FYS2140 Quantum Physics & FYS2150 Physics Laboratory \\ \hline 3rd Semester & FYS1120 Electromagnetism & MAT1120 Linear Algebra & AST2000 Intro to Astrophysics \\ \hline 2nd Semester & FYS-MEK1100 Mechanics & MEK1100 Vector Calculus & MAT1110 Calculus and Linear Algebra \\ \hline 1st Semester & MAT1100 Calculus & MAT-INF1100 Modeling and Computations & IN1900 Intro to Programming \\ \hline Credits & 10 ECTS & 10 ECTS & 10 ECTS \\ \hline \end{tabular} \end{footnotesize} \end{table} In the first semester the students encounter the first level of synchronization between computing courses and mathematics courses. As an example, consider the numerical evaluation of an integral by the trapezoidal rule. Integral calculus is typically discussed first in the calculus course MAT1100. Thereafter, the algorithm for computing the integral using the trapezoidal rule with step size $h$ on an interval $x \in [a,b]$, \[ \int_a^b f(x)\, dx \approx \frac{h}{2}\left[f(a)+2f(a+h)+\dots+2f(b-h)+f(b)\right], \] is discussed and developed in MAT-INF1100, the modeling and computations course that serves as an intermediate step between the standard calculus course and the programming course. Finally, the algorithm is implemented in IN1900, introduction to programming with scientific applications. We show here a typical Python code which exemplifies this. 
\begin{lstlisting}
from math import exp, log, sin

def Trapez(a, b, f, n):
    # composite trapezoidal rule with n subintervals of width h
    h = (b-a)/float(n)
    s = 0
    x = a
    for i in range(1, n, 1):
        x = x + h
        s = s + f(x)
    s = 0.5*(f(a) + f(b)) + s
    return h*s

def f1(x):
    return exp(-x*x)*log(1 + x*sin(x))

a = 1; b = 3; n = 1000
result = Trapez(a, b, f1, n)
print(result)
\end{lstlisting}
Here we have defined an integral given by
\[
I=\int_1^3 dx \exp{(-x^2)}\log{(1+x\sin{(x)})}.
\]
Coming back to the above learning outcomes, we would like to emphasize that Python offers an extremely versatile programming environment, allowing for the inclusion of analytical studies in a numerical program. Here we show an example code with the trapezoidal rule, using Python's symbolic package \textbf{SymPy} \cite{SymPy} to evaluate the integral $\int_0^1 x^2\,dx = 1/3$ exactly and to compute the relative error of the numerically evaluated result. This is shown in the following code snippet
\begin{lstlisting}
# define x as a symbol to be used by sympy
x = Symbol('x')
exact = integrate(function(x), (x, 0.0, 1.0))
print("Sympy integration=", exact)
\end{lstlisting}
where we have defined the function to integrate in the complete Python program that follows here.
\begin{lstlisting}
from math import *
from sympy import *

def Trapez(a, b, f, n):
    # composite trapezoidal rule with n subintervals of width h
    h = (b-a)/float(n)
    s = 0
    x = a
    for i in range(1, n, 1):
        x = x + h
        s = s + f(x)
    s = 0.5*(f(a) + f(b)) + s
    return h*s

# the integrand
def function(x):
    return x*x

a = 0.0; b = 1.0; n = 100
result = Trapez(a, b, function, n)
print("Trapezoidal rule=", result)
# define x as a symbol to be used by sympy
x = Symbol('x')
exact = integrate(function(x), (x, 0.0, 1.0))
# Find relative error
print("Relative error", abs((exact-result)/exact))
\end{lstlisting}
The following extended version of the trapezoidal rule allows us to plot the relative error as a function of the number of integration points, comparing with the exact result. 
By increasing the number of integration points to $10^8$ we arrive at a region where numerical errors start to accumulate, as seen in figure \ref{fig:error}.
\begin{lstlisting}
from math import log10
import numpy as np
from sympy import Symbol, integrate
import matplotlib.pyplot as plt

# function for the trapezoidal rule
def Trapez(a, b, f, n):
    h = (b-a)/float(n)
    s = 0
    x = a
    for i in range(1, n, 1):
        x = x + h
        s = s + f(x)
    s = 0.5*(f(a) + f(b)) + s
    return h*s

# the integrand
def function(x):
    return x*x

# define integration limits
a = 0.0; b = 1.0
# find the exact result from sympy;
# define x as a symbol to be used by sympy
x = Symbol('x')
exact = integrate(function(x), (x, a, b))
# set up the arrays for plotting the relative error
n = np.zeros(8); y = np.zeros(8)
# find the relative error as function of integration points
for i in range(1, 9, 1):
    npts = 10**i
    result = Trapez(a, b, function, npts)
    RelativeError = abs((exact-result)/exact)
    n[i-1] = log10(npts); y[i-1] = log10(RelativeError)
plt.plot(n, y, 'ro')
plt.xlabel('n')
plt.ylabel('Relative error')
plt.show()
\end{lstlisting}
The last example shows the potential of combining numerical algorithms with symbolic calculations, thereby allowing students to validate their algorithms. With concepts like unit testing, one has the possibility to test and verify several or all parts of the code. Validation and verification are then included \emph{naturally}. \begin{figure} \includegraphics[scale=0.8]{Figures/error.pdf} \caption{Log-log plot of the relative error as function of the number of integration points. Up to approximately $n=10^6$, the relative error follows the predicted mathematical error of the trapezoidal rule. For higher numbers of integration points, numerical round-off errors and loss of numerical precision give an increasing relative error.}\label{fig:error} \end{figure} The above example also allows the student to test the mathematical error of the algorithm for the trapezoidal rule by changing the number of integration points. 
The students are trained from day one to think about error analysis. Figure \ref{fig:error} clearly shows the region where the relative error starts increasing. The mathematical error of the trapezoidal rule goes as $O(h^2)$, where $h$ is the chosen numerical step size. It is proportional to the inverse of the number of integration points $n$, that is $h\propto 1/n$. Before numerical round-off errors and loss of numerical precision kick in (near $h\sim 10^{-6}$), we see that the relative error in the log-log plot has a slope which follows the mathematical error. There are several additional benefits here. The general learning outcomes on computing can be included in, for example, the following ways. We can easily bring in how to structure a code in terms of functions and modules, how to read input data flexibly from the command line, or how to write unit tests. The conventions and techniques outlined here will save students a lot of time when they extend their software incrementally over time, from simpler to more complicated problems. In the next subsection we show how algorithms for solving sets of ordinary differential equations and for finding eigenvalues can be reused in different courses with only minor modifications. \subsection{From Mathematics to Physics} We assume that our students know how to solve and study systems of ordinary differential equations with initial conditions only. Later in this section we will venture into two-point boundary value problems that can be studied and solved with eigenvalue solvers. Let us start with initial value problems and ordinary differential equations. Such equations appear in a wealth of physics applications. Typical examples students will encounter are the classical pendulum in a mechanics course, an RLC circuit in the course on electromagnetism, the modeling of the Solar system in an astrophysics course, and many other cases. 
The essential message is that, with properly scaled equations, students can use essentially the same algorithms to solve these problems, starting either with a simple modified Euler algorithm, a Runge-Kutta class of algorithms or the so-called Verlet class of algorithms, to mention a few. The idea is that algorithms students develop and use in one course can be reused in other courses. This allows students to make the relevant abstractions discussed above, opening up for a much wider range of applicabilities. Here we look at two familiar cases from mechanics and electromagnetism, the equations for the classical pendulum and those for an RLC circuit. When properly scaled, these equations are essentially the same. Scaling the equations, either in terms of dimensionless variables or appropriate variables, is an important aspect which allows the students to see the potential for abstractions and hopefully see how the problems studied in, say, a mechanics course can be transferred to other fields. The classical pendulum with damping and an external force, as it could appear in a mechanics course, is given by the following equation of motion for the angle $\theta$ as function of time $t$ \[ ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=A\cos(\omega t), \] where $m$ is its mass, $l$ the length, $\nu$ a damping factor and $A$ the amplitude of an applied external source with frequency $\omega$. The solution of this type of equation (second-order differential equations with given initial conditions) is something the students encounter in the first semester through the courses IN1900 and MAT-INF1100 at the University of Oslo. At Michigan State University there is now a compulsory course for physics majors that includes many of these elements. With this background, students are already familiar with the numerical solution and visualization of such equations. 
If we now move to a course on electromagnetism, we encounter almost the same equation for an RLC circuit, namely \[ L\frac{d^2Q}{dt^2}+\frac{Q}{C}+R\frac{dQ}{dt}=A\cos(\omega t), \] where $L$ is the inductance, $R$ the applied resistance, $Q$ the time-dependent charge and $C$ the capacitance. Let us consider first the classical pendulum equations with damping and an external force and define the scaled velocity $\hat{v}$ as \[ \frac{d\theta}{d\hat{t}} =\hat{v}, \] where we have defined a dimensionless time variable $\hat{t}$. With the equation for the velocity we can rewrite the second-order differential equation in terms of two coupled first-order differential equations, where the second equation represents the acceleration \[ \frac{d\hat{v}}{d\hat{t}} =A\cos(\hat{\omega} \hat{t})-\hat{v}\xi-\sin(\theta). \] We have scaled the equations with $\omega_0=\sqrt{g/l}$, $\hat{t}=\omega_0 t$ and $\xi = mg/\omega_0\nu$. The frequency $\omega_0$ defines a so-called natural frequency, given by the gravitational acceleration $g$ and the length of the pendulum $l$, and the scaled driving frequency is $\hat{\omega}= \omega/\omega_0$. In a similar way, our RLC circuit can now be rewritten in terms of two coupled first-order differential equations, \[ \frac{dQ}{d\hat{t}} =\hat{I}, \] and \[ \frac{d\hat{I}}{d\hat{t}} =A\cos(\hat{\omega} \hat{t})-\hat{I}\xi-Q, \] with $\omega_0=1/\sqrt{LC}$, $\hat{t}=\omega_0 t$ and $\xi = CR\omega_0$. Here we see that the natural frequency is defined in terms of the physical parameters $L$ and $C$. The equations are essentially the same; the main differences reside in the different scaling constants and the presence of the non-linear term $\sin(\theta)$ in the pendulum equation. The differential equation solver the students end up writing in the mechanics course (which normally comes before the course on electromagnetism) can then be reused in the electromagnetism course, with a great potential for further abstraction. 
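To make this reuse concrete, the following sketch implements one generic fourth-order Runge-Kutta integrator and feeds it both scaled systems; only the right-hand-side functions differ. The parameter values for $A$, $\hat{\omega}$ and $\xi$, as well as the initial conditions, are illustrative choices of our own, not taken from a specific course:

```python
import numpy as np

def rk4(rhs, y0, t):
    # generic fourth-order Runge-Kutta integrator; rhs(t, y) returns dy/dt
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = rhs(t[i], y[i])
        k2 = rhs(t[i] + 0.5 * h, y[i] + 0.5 * h * k1)
        k3 = rhs(t[i] + 0.5 * h, y[i] + 0.5 * h * k2)
        k4 = rhs(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

A, omega_hat, xi = 0.5, 2.0 / 3.0, 0.25   # illustrative dimensionless parameters

def pendulum(t, y):
    # scaled pendulum: dtheta/dt = v, dv/dt = A cos(w t) - xi v - sin(theta)
    theta, v = y
    return np.array([v, A * np.cos(omega_hat * t) - xi * v - np.sin(theta)])

def rlc(t, y):
    # scaled RLC circuit: identical structure, with sin(theta) replaced by Q
    Q, I = y
    return np.array([I, A * np.cos(omega_hat * t) - xi * I - Q])

t = np.linspace(0.0, 50.0, 5001)
sol_pendulum = rk4(pendulum, np.array([0.1, 0.0]), t)
sol_rlc = rk4(rlc, np.array([0.1, 0.0]), t)
```

For small initial angles the two solutions are nearly indistinguishable, which is itself a useful physics discussion point about the small-angle approximation.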
Let us now move to another frequently encountered problem in several physics courses, namely that of a two-point boundary value problem. In the examples below we will see again that if the equations are properly scaled, we can reuse the same algorithm for solving different physics problems. Here we will start with the equations for a buckling beam (a case which can be found in a mechanics course or a course on mathematical methods in physics). Thereafter, with a simple change of variables and constants, the same problem can be used to study a quantum mechanical particle confined to move in an infinite potential well. By simply changing the diagonal matrix elements of the discretized differential equation problem, we can study particles that move in a harmonic oscillator potential or other types of quantum-mechanical one-body or selected two-body problems. With slight modifications to the matrix that results from the discretization of a second derivative, we can study Poisson's equation in one dimension, a problem of relevance in electromagnetism. Let us start with the buckling beam. This is a two-point boundary value problem \[ R \frac{d^2 u(x)}{dx^2} = -F u(x), \] where $u(x)$ is the vertical displacement, $R$ is a material specific constant, $F$ is the applied force and $x \in [0,L]$ with $u(0)=u(L)=0$. We scale the equation with $x = \rho L$ and $\rho \in [0,1]$ and get (note that we change from $u(x)$ to $v(\rho)$) \[ \frac{d^2 v(\rho)}{d\rho^2} +K v(\rho)=0, \] which is, when discretized (see below), nothing but a standard eigenvalue problem with $K= FL^2/R$. Here we can assume that either the force $F$ or the material specific rigidity $R$ is unknown. If we replace $R=-\hbar^2/2m$ and $-F=\lambda$, we have the quantum mechanical variant for a particle moving in a well with infinite walls at the endpoints. 
The way to solve these equations numerically is to discretize the second derivative and the right hand side as \[ -\frac{v_{i+1} -2v_i +v_{i-1}}{h^2}=\lambda v_i, \] with $i=1,2,\dots, n$. Here $h$ is the step size, which is defined by the number of integration (or mesh) points. We need to add to this system the two boundary conditions $v(0) =v_0$ and $v(1) = v_{n+1}$, although they are not needed in the solution of the equations since their values are known. For all interior points $i=1,2,\dots, n$ the set of equations to solve results in a so-called tridiagonal Toeplitz matrix (a special case arising from the discretized second derivative) \[ \mathbf{A} = \frac{1}{h^2}\begin{bmatrix} 2 & -1 & & & & \\ -1 & 2 & -1 & & & \\ & -1 & 2 & -1 & & \\ & \dots & \dots &\dots &\dots & \dots \\ & & &-1 &2& -1 \\ & & & &-1 & 2 \\ \end{bmatrix} \] and with the corresponding vector $\mathbf{v} = (v_1, v_2, \dots,v_n)^T$ allows us to rewrite the differential equation as a standard eigenvalue problem \[ \mathbf{A}\mathbf{v} = \lambda\mathbf{v}. \] The tridiagonal Toeplitz matrix has analytical eigenpairs, thereby providing us with an invaluable check on the equations to be solved. If we stay with quantum mechanical one-body problems (or special interacting two-body problems), adding a potential along the diagonal elements allows us to reuse this problem for many types of physics cases. To see this, let us assume we are interested in the solution of the radial part of Schr\"odinger's equation for one electron. This equation reads \[ -\frac{\hbar^2}{2 m} \left ( \frac{1}{r^2} \frac{d}{dr} r^2 \frac{d}{dr} - \frac{l (l + 1)}{r^2} \right )R(r) + V(r) R(r) = E R(r). \] Suppose in our case $V(r)$ is the harmonic oscillator potential $(1/2)kr^2$ with $k=m\omega^2$ and $E$ is the energy of the harmonic oscillator in three dimensions. The oscillator frequency is $\omega$ and the energies are \[ E_{nl}= \hbar \omega \left(2n+l+\frac{3}{2}\right), \] with $n=0,1,2,\dots$ and $l=0,1,2,\dots$. 
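Before specializing further, we note that the analytical eigenpairs of the tridiagonal Toeplitz matrix above make an ideal verification case. Its eigenvalues are known to be $\lambda_j = \frac{2}{h^2}\left(1-\cos\frac{j\pi}{n+1}\right)$ for $j=1,\dots,n$, so a few lines of code (a sketch of our own, using \textbf{NumPy}) suffice to verify the numerical eigenvalue solver:

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)          # step size for n interior points on [0, 1]
# assemble the discretized second-derivative (tridiagonal Toeplitz) matrix
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
numerical = np.linalg.eigvalsh(A)   # eigenvalues, sorted in ascending order
j = np.arange(1, n + 1)
analytical = (2.0 / h**2) * (1.0 - np.cos(j * np.pi / (n + 1)))
# the two sets of eigenvalues agree to roughly machine precision
print(np.max(np.abs(numerical - analytical)))
```

This kind of check is precisely the verification practice listed among the learning outcomes above.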
Since we have made a transformation to spherical coordinates it means that $r\in [0,\infty)$. The quantum number $l$ is the orbital momentum of the electron. In order to find analytical solutions for this problem, we would substitute $R(r) = (1/r) u(r)$ (which gives $u(0)=u(\infty)=0$ and thereby easier boundary conditions) and obtain \[ -\frac{\hbar^2}{2 m} \frac{d^2}{dr^2} u(r) + \left ( V(r) + \frac{l (l + 1)}{r^2}\frac{\hbar^2}{2 m} \right ) u(r) = E u(r) . \] The boundary conditions are $u(0)=0$ and $u(\infty)=0$. In order to scale the equations, we introduce a dimensionless variable $\rho = (1/\alpha) r$ where $\alpha$ is a constant with dimension length and get \[ -\frac{\hbar^2}{2 m \alpha^2} \frac{d^2}{d\rho^2} v(\rho) + \left ( V(\rho) + \frac{l (l + 1)}{\rho^2} \frac{\hbar^2}{2 m\alpha^2} \right ) v(\rho) = E v(\rho) . \] Let us choose $l=0$ for the mere sake of simplicity. Inserting $V(\rho) = (1/2) k \alpha^2\rho^2$ we end up with \[ -\frac{\hbar^2}{2 m \alpha^2} \frac{d^2}{d\rho^2} v(\rho) + \frac{k}{2} \alpha^2\rho^2v(\rho) = E v(\rho). \] We multiply thereafter with $2m\alpha^2/\hbar^2$ on both sides and obtain \[ -\frac{d^2}{d\rho^2} v(\rho) + \frac{mk}{\hbar^2} \alpha^4\rho^2v(\rho) = \frac{2m\alpha^2}{\hbar^2}E v(\rho) . \] A natural length scale comes out automatically when scaling. To see this, note that $\alpha$ is the only constant left to determine; we fix it by requiring that \[ \frac{mk}{\hbar^2} \alpha^4 = 1. \] This defines a natural length scale in terms of the various physical constants that determine the equation. The final expression, inserting $k=m\omega^2$, is \[ \alpha = \left(\frac{\hbar}{m\omega}\right)^{1/2}. \] If we were to replace the harmonic oscillator potential with the attractive Coulomb interaction from the hydrogen atom, the parameter $\alpha$ would equal the Bohr radius $a_0$.
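As a quick numerical sanity check of the kind students can perform themselves, one can verify that the scaling condition $mk\alpha^4/\hbar^2=1$ reproduces the closed form $\alpha=\sqrt{\hbar/(m\omega)}$. The sketch below uses illustrative SI values for the constants (our choice, not prescribed by the text):

```python
import math

# Illustrative values (our assumption): electron-like mass, optical-range frequency
hbar = 1.054571817e-34   # J s
m = 9.1093837e-31        # kg
omega = 1.0e15           # rad/s
k = m * omega**2

# alpha obtained directly from the scaling condition m k alpha^4 / hbar^2 = 1
alpha_condition = (hbar**2 / (m * k)) ** 0.25

# closed form derived above: alpha = sqrt(hbar / (m omega))
alpha_closed = math.sqrt(hbar / (m * omega))

relative_difference = abs(alpha_condition - alpha_closed) / alpha_closed
```

The two expressions agree to machine precision, as they must, since inserting $k=m\omega^2$ into the scaling condition gives $\alpha^4=\hbar^2/(m^2\omega^2)$.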
This way students see the general properties of a two-point boundary value problem and can reuse the code they developed for a mechanics course in the subsequent quantum mechanical course. Defining \[ \lambda = \frac{2m\alpha^2}{\hbar^2}E, \] we can rewrite Schr\"odinger's equation as \[ -\frac{d^2}{d\rho^2} v(\rho) + \rho^2v(\rho) = \lambda v(\rho) . \] This is similar to the equation for a buckling beam, except for the potential term. In three dimensions with our scaling, the eigenvalues for $l=0$ are $\lambda_0=3,\lambda_1=7,\lambda_2=11,\dots .$ If we define first the diagonal matrix element \[ d_i=\frac{2}{h^2}+V_i, \] and the non-diagonal matrix element \[ e_i=-\frac{1}{h^2}, \] we can rewrite the Schr\"odinger equation as \[ d_iv_i+e_{i-1}v_{i-1}+e_{i+1}v_{i+1} = \lambda v_i, \] where $v_i$ is unknown and $i=1,2,\dots, n$. We can reformulate the latter equation as a matrix eigenvalue problem \[ \begin{bmatrix} d_1 & e_1 & 0 & 0 & \dots &0 & 0 \\ e_1 & d_2 & e_2 & 0 & \dots &0 &0 \\ 0 & e_2 & d_3 & e_3 &0 &\dots & 0\\ \dots & \dots & \dots & \dots &\dots &\dots & \dots\\ 0 & \dots & \dots & \dots &\dots &d_{n-1} & e_{n-1}\\ 0 & \dots & \dots & \dots &\dots &e_{n-1} & d_{n} \end{bmatrix} \begin{bmatrix} v_{1} \\ v_{2} \\ \dots\\ \dots\\ \dots\\ v_{n} \end{bmatrix}=\lambda \begin{bmatrix} v_{1} \\ v_{2} \\ \dots\\ \dots\\ \dots\\ v_{n} \end{bmatrix} \] or if we wish to be more detailed, we can write the tridiagonal matrix as \[ \begin{bmatrix} \frac{2}{h^2}+V_1 & -\frac{1}{h^2} & 0 & 0 & \dots &0 & 0 \\ -\frac{1}{h^2} & \frac{2}{h^2}+V_2 & -\frac{1}{h^2} & 0 & \dots &0 &0 \\ 0 & -\frac{1}{h^2} & \frac{2}{h^2}+V_3 & -\frac{1}{h^2} &0 &\dots & 0\\ \dots & \dots & \dots & \dots &\dots &\dots & \dots\\ 0 & \dots & \dots & \dots &\dots &\frac{2}{h^2}+V_{n-1} & -\frac{1}{h^2}\\ 0 & \dots & \dots & \dots &\dots &-\frac{1}{h^2} & \frac{2}{h^2}+V_{n} \end{bmatrix}.
\] The following Python code sets up the matrix to diagonalize by defining the minimum and maximum values of $r$ and the number of integration points. It plots the eigenfunctions of the three lowest eigenstates. \begin{lstlisting}
#Program which solves the one-particle Schrodinger equation
#for a potential specified in function
#potential().
from matplotlib import pyplot as plt
import numpy as np

#Function for initialization of parameters
def initialize():
    RMin = 0.0
    RMax = 10.0
    lOrbital = 0
    Dim = 400
    return RMin, RMax, lOrbital, Dim

# Harmonic oscillator potential
def potential(r):
    return 0.5*r*r

#Get the boundary, orbital momentum and number of integration points
RMin, RMax, lOrbital, Dim = initialize()

#Initialize constants
Step = RMax/(Dim)
DiagConst = 2.0 / (Step*Step)
NondiagConst = -1.0 / (Step*Step)
OrbitalFactor = lOrbital * (lOrbital + 1.0)

#Calculate arrays of mesh points and potential values
v = np.zeros(Dim)
r = np.zeros(Dim)
for i in range(Dim):
    r[i] = RMin + (i+1) * Step
    v[i] = potential(r[i]) + OrbitalFactor/(r[i]*r[i])

#Setting up a tridiagonal matrix and finding eigenvectors and eigenvalues
Matrix = np.zeros((Dim,Dim))
Matrix[0,0] = DiagConst + v[0]
Matrix[0,1] = NondiagConst
for i in range(1,Dim-1):
    Matrix[i,i-1] = NondiagConst
    Matrix[i,i]   = DiagConst + v[i]
    Matrix[i,i+1] = NondiagConst
Matrix[Dim-1,Dim-2] = NondiagConst
Matrix[Dim-1,Dim-1] = DiagConst + v[Dim-1]

# diagonalize and obtain eigenvalues, not necessarily sorted
EigValues, EigVectors = np.linalg.eig(Matrix)
# sort eigenvectors and eigenvalues
permute = EigValues.argsort()
EigValues = EigValues[permute]
EigVectors = EigVectors[:,permute]

# now plot the results for the three lowest lying eigenstates
for i in range(3):
    print(EigValues[i])
FirstEigvector = EigVectors[:,0]
SecondEigvector = EigVectors[:,1]
ThirdEigvector = EigVectors[:,2]
plt.plot(r, FirstEigvector**2 ,'b-',r, SecondEigvector**2 ,'g-',r, ThirdEigvector**2 ,'r-')
plt.axis([0,4.6,0.0, 0.025])
plt.xlabel(r'$r$')
plt.ylabel(r'Radial probability $r^2|R(r)|^2$')
plt.title(r'Radial probability distributions for three lowest-lying states')
plt.savefig('eigenvector.pdf')
plt.show()
\end{lstlisting} The last example shows the potential of combining numerical algorithms with analytical results (or eventually symbolic calculations), thereby allowing students to test their physics understanding. One can easily switch to other potentials by simply redefining the potential function. For example, a finite box potential can easily be defined as \begin{lstlisting}
# Finite depth and range box potential, here with depth 0.05 and range [0, 10]
def potential(r):
    if r >= 0.0 and r <= 10.0:
        V = -0.05
    else:
        V = 0.0
    return V
\end{lstlisting} Thereafter, the students can explore the role of the potential depth and the range of the potential. Analyzing the eigenvectors gives additional information about the spatial degrees of freedom in terms of different potentials. The possibility to visualize the results immediately, as shown in figure \ref{fig:eigenvector}, aids in providing students with a deeper understanding of the relevant physics. This example also contains many of the computing learning outcomes we discussed above, in addition to those related to the physics of a particular system. We see that, by proper scaling, the students can make further abstractions and easily explore other physics cases where no analytical solutions are known. With unit testing and analytical results they can validate and verify their algorithms. \begin{figure} \includegraphics[scale=0.8]{Figures/eigenvector.pdf} \caption{Plot of the eigenfunctions of the three lowest-lying eigenvalues for a harmonic oscillator problem in three dimensions.
The students can easily change the type of potential and explore the physics that arises from these potentials.}\label{fig:eigenvector} \end{figure} The above example allows the student to test the mathematical error of the algorithm for the eigenvalue solver by simply changing the number of integration points. Again, as discussed above in connection with the trapezoidal rule, the students get trained to develop an understanding of the error analysis and where things can go wrong. The algorithm can be tailored to any kind of one-particle problem used in quantum mechanics. A simple rewrite allows for reuse in linear algebra problems, such as the solution of Poisson's equation in electromagnetism, or the diffusion equation in one dimension. To see this, and how the same matrix can be used in a course in electromagnetism, let us consider Poisson's equation. We assume that the electrostatic potential $\Phi$ is generated by a localized charge distribution $\rho (\mathbf{r})$. In three dimensions the pertinent equation reads \[ \nabla^2 \Phi = -4\pi \rho (\mathbf{r}). \] With a spherically symmetric potential $\Phi$ and charge distribution $\rho (\mathbf{r})$ and using spherical coordinates, the relevant equation to solve simplifies to a one-dimensional equation in $r$, namely \[ \frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d\Phi}{dr}\right) = -4\pi \rho(r), \] which can be rewritten via a substitution $\Phi(r)= \phi(r)/r$ as \[ \frac{d^2\phi}{dr^2}= -4\pi r\rho(r). \] The inhomogeneous, or source, term is given by the charge distribution $\rho$ multiplied by $r$ and the constant $-4\pi$. We can rewrite this equation by letting $\phi\rightarrow u$ and $r\rightarrow x$. Scaling again the equations and replacing the right-hand side with a function $f(x)$, we can rewrite the equation as \[ -u''(x) = f(x). \] Our scaling gives us again $x\in [0,1]$ and the two-point boundary value problem with $u(0)=u(1)=0$.
With $n$ interior integration points, a step length defined as $h=1/(n+1)$, and replacing the continuous function $u$ with its discretized version $v$, we get the following equation \begin{equation*} -\frac{v_{i+1}+v_{i-1}-2v_i}{h^2} = f_i \hspace{0.5cm} \mathrm{for} \hspace{0.1cm} i=1,\dots, n, \end{equation*} where $f_i=f(x_i)$. Bringing up again the tridiagonal Toeplitz matrix, \[ \mathbf{A} = \frac{1}{h^2}\begin{bmatrix} 2& -1& 0 &\dots & \dots &0 \\ -1 & 2 & -1 &0 &\dots &\dots \\ 0&-1 &2 & -1 & 0 & \dots \\ & \dots & \dots &\dots &\dots & \dots \\ 0&\dots & &-1 &2& -1 \\ 0&\dots & & 0 &-1 & 2 \\ \end{bmatrix}, \] our problem becomes now a classical linear algebra problem \[ \mathbf{A}\mathbf{v}=\mathbf{f}, \] with the unknown function $\mathbf{v}$. Using standard LU decomposition algorithms \cite{GolubVanLoan} (here one can use the so-called Thomas algorithm, which reduces the number of floating point operations to $O(n)$) one can easily find the solution to this problem. These examples demonstrate how one can, with a discretized second derivative, solve physics problems that arise in different undergraduate courses using standard linear algebra and eigenvalue algorithms and ordinary differential equations, thereby allowing teachers to focus on the interesting physics. Many of these problems can easily be linked up with ongoing research. This opens up many interesting perspectives in physics education. We can bring basic research elements in at a much earlier stage in our education and perhaps even link with ongoing research during the first year of undergraduate studies. Instead of focusing on tricks and mathematical manipulations to solve the continuous problems for those few cases where an analytical solution can be found, the discretization of the continuous problem opens up the study of many more interesting and realistic problems.
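As an illustration of the Thomas algorithm mentioned above, the following sketch (ours) solves the scaled Poisson problem $-u''(x)=f(x)$ with $u(0)=u(1)=0$ in $O(n)$ operations, and validates the result against a case with a known closed-form solution, $f(x)=\pi^2\sin(\pi x)$ with exact solution $u(x)=\sin(\pi x)$:

```python
import math

def thomas_solve(n, f):
    """Solve -v'' = f on (0,1), v(0)=v(1)=0, on n interior points, h = 1/(n+1).
    Scaled rows of the system: -v[i-1] + 2 v[i] - v[i+1] = h^2 f(x_i)."""
    h = 1.0 / (n + 1)
    b = [h * h * f((i + 1) * h) for i in range(n)]
    d = [2.0] * n
    for i in range(1, n):              # forward elimination, O(n)
        d[i] -= 1.0 / d[i - 1]
        b[i] += b[i - 1] / d[i - 1]
    v = [0.0] * n
    v[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):     # back substitution, O(n)
        v[i] = (b[i] + v[i + 1]) / d[i]
    return v

# Validate against the analytical solution u(x) = sin(pi x)
n = 100
v = thomas_solve(n, lambda x: math.pi**2 * math.sin(math.pi * x))
error = max(abs(v[i] - math.sin(math.pi * (i + 1) / (n + 1)))
            for i in range(n))
```

Doubling $n$ should reduce the maximal error by roughly a factor of four, letting students verify the expected $O(h^2)$ behaviour of the discretized second derivative.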
However, we have seen that in order to verify and validate our codes, the existence of analytical solutions offers us an invaluable test of our algorithms and programs. The analytical results can either be included explicitly or via symbolic software like Python's Sympy package. Thus, computing stands indeed for solving scientific problems using all possible tools, including symbolic computing, computers and numerical algorithms, numerical experiments (as well as real experiments if possible) and analytical paper and pencil solutions. The cases we have presented here represent only a limited set of examples. A longer version of this article, with more examples and details on assessment programs, is under preparation as a textbook \cite{DannyMortenBook}. The possible learning outcomes we defined for various physics courses are often based on the above simple discretization. With basic knowledge of how to solve linear algebra problems, eigenvalue problems and differential equations, topics normally taught in mathematics and computational science courses, we can offer our students a much more challenging and interesting education. Furthermore, we give our students the competencies which are required by future employers, either in the private or the public sector. \section{Conclusions and Perspectives} In this contribution, we have outlined some of the basic elements that we feel are necessary to address in order to introduce computing in various undergraduate physics courses. Some of the conclusions we would like to emphasize include a proper definition of computing, the development of learning outcomes that apply to computational science, mathematics, and physics courses alike, as well as proper assessment programs. Collaboration across departments is necessary in order to achieve a synchronization between various topics and learning outcomes, as well as an early introduction to programming. Many universities require such courses as part of a physics degree.
Coordinating such a programming course with mathematics courses and other science courses results in a better coordination of both learning outcomes and computing skills and abilities. The experiences we have drawn from the University of Oslo and Michigan State University show that an early and compulsory programming course, which includes central scientific elements, is important in order to properly integrate a computational perspective in our physics education. The benefits are many; in particular, it allows us to make our research more visible in early undergraduate physics courses, enhancing research-based teaching with the possibility to focus more on understanding and increased insight. It also gives our candidates the skills and abilities that are requested by society at large, both from the private and the public sectors. With computing, we emphasize a broader and more up-to-date education with a problem-based orientation, often requested by potential employers. Furthermore, our experiences from both universities indicate that a discussion of computing across disciplines results in an increased impetus for broad cooperation in teaching and a broader focus on university pedagogical topics. We are now in the process of developing computing learning outcomes with examples for central physics courses. Together with a research-based assessment program, we will be able to answer central questions like whether the introduction of computing increases a student's insight and understanding of the underlying physics. \begin{acknowledgement} MHJ's work is supported by U.S. National Science Foundation Grant No.~PHY-1404159. MDC's work is supported by U.S. National Science Foundation Grants Nos.~DRL-1741575, DUE-1725520, DUE-1524128, DUE-1504786, and DUE-1431776. Both authors acknowledge support from the recently established Center for Computing in Science Education, University of Oslo, Norway. \end{acknowledgement}
\section{Introduction} Aggregating information coming from multiple sources is a long-standing problem in both knowledge representation and the study of multi-agent systems (see, e.g., \cite{vanHarmelen2007}). Depending on the chosen representation for the incoming pieces of knowledge or information, a number of competing approaches have seen the light in these literatures. Belief merging \cite{LiberatoreSchaerf1998,KoniecznyPinoPerezJLC2002,KoniecznyLangMarquisAIJ2004} studies the problem of aggregating propositional formulas coming from a number of different agents into a set of models, subject to an integrity constraint. Judgment and binary aggregation \cite{EndrissHBCOMSOC2016,DokowHolzmanJET2010,GrandiEndrissAIJ2013} ask individual agents to report yes/no opinions on a set of logically connected binary issues, called the agenda, to take a collective decision. Social welfare functions, the cornerstone problem in social choice theory (see, e.g., \cite{Arrow1963}), can also be viewed as mechanisms merging conflicting information, namely the individual preferences of voters expressed in the form of linear orders over a set of alternatives. Other examples include graph aggregation \cite{EndrissGrandiAIJ2017}, which has applications in multi-agent argumentation \cite{BoothEtAlKR2014,CaminadaPigozzi2011,ChenEndrissTARK2017} and clustering aggregation \cite{GionisEtAlTKDD2007}, as well as ontology merging \cite{Porello2014}. In this work we take a general perspective and we represent individual knowledge coming from multiple sources as a profile of databases, modelled as finite relational structures \cite{AbiteboulHV95,MaierUV84}. Our aim is to reconcile two possibly conflicting views of the problem of information fusion.
On the one hand, the study of information merging (typically knowledge or beliefs) in knowledge representation has focused on the design of rules that guarantee the consistency of the outcome, with the main driving principles inspired from the literature on belief revision.\footnote{We acknowledge, however, the work of \cite{DoyleWellmanAIJ1991, MaynardZhangLehmannJAIR2003}, which aggregates individual beliefs, modelled as plausibility orders, in an ``Arrovian'' fashion.} On the other hand, social choice theory has focused on agent-based properties, such as fairness and representativity of an aggregation procedure, paying attention as well to possible strategic behaviour by either the agents involved in the process or an external influencing source. While there have already been several attempts at showing how specific merging or aggregation frameworks could be simulated or subsumed by one another (see, e.g., \cite{GrandiEndrissIJCAI2011,DietrichList2007a,GregoireKonieczny2006,EveraereEtAl2015}), we believe that a more general perspective is needed to reconcile the two views described above. Perhaps the closest approach to ours is the work of Baral \emph{et al.} \cite{BaralEtAl1992}. In their paper, the authors consider the problem of merging information represented in the form of a first-order theory, taking a syntactic rather than a semantic approach, and focus on finding maximally consistent sets of the union of the individual theories received. In doing so, however, the authors privilege the knowledge representation approach, and have no control on the set of agents supporting a given maximally consistent set rather than another. Our starting point is a set of finite relational structures on the same signature, coming from a set of agents or sources, and our research problem is how to obtain a collective database summarising the information received. Virtually all of the settings mentioned above (beliefs, graphs, preferences, judgments...)
can be represented as databases, showing the generality of our framework. We propose a number of rules for database aggregation, inspired by existing ones proposed in the literature on computational social choice, and we evaluate them axiomatically. We privilege computationally friendly aggregators, for which the time to determine the collective outcome is polynomial in the time spent reading the individual input received. When integrity constraints are present, we study how to guarantee that a given aggregator ``lifts'' the integrity constraint from the individual to the collective level, i.e., the aggregated database satisfies the same constraints as the individual ones. We first analyse the problem of lifting first-order formulas in database aggregation theoretically, comparing the results obtained with the literature on lifting propositional constraints in binary aggregation. We provide characterisation results for a number of natural restricted languages, and we investigate which of the rules we introduced lift classical integrity constraints from database theory: functional dependencies, referential integrity constraints, and value constraints. Since databases are typically queried using formulas in first-order logic, a natural question to ask in a multi-agent setting is whether the aggregation of the individual answers to a query coincides with the answer to the same query on the aggregated database. We provide a partial answer to this important problem, by identifying sufficient conditions on the first-order query language for both the intersection and the union operator. The paper is organised as follows. In Section~\ref{sec:preliminaries} we introduce the basic definitions of databases and integrity constraints. In Sections~\ref{aggregators} and~\ref{sec:axioms} we introduce a number of database aggregation procedures, and we propose axiomatic properties for their study.
Sections~\ref{sec:lifting}, \ref{sec:characterisations}, and~\ref{sec:queries} contain our main results on the lifting of integrity constraints and aggregated query answering. Section~\ref{sec:conclusions} concludes the paper. \section{Preliminaries on Databases}\label{sec:preliminaries} In this section we introduce basic notions on databases that we will use in the rest of the paper. In particular, we adopt a relational perspective~\cite{AbiteboulHV95} and define a database as a finite relational structure over a database schema: \begin{definition}[Database Schema] A {\em (relational) database schema} $\mathcal D$ is a finite set $\{ P_1 / q_1,\dots,P_n / q_n \}$ of relation symbols $P$ with arity $q \in \mathbb{N}$. \end{definition} In the following we assume a countable domain $U$ of elements $u, u', \ldots$, for the interpretation of relation symbols in a database schema $\mathcal D$. \begin{definition}[Database Instance] \label{dbinstance} Given domain $U$ and database schema $\mathcal D$, a {\em $\mathcal D$-instance} over $U$ is a mapping $D$ associating each relation symbol $P \in \mathcal D$ with a finite $q$-ary relation over $U$, i.e., $D(P) \underset{{\small fin}}{\subset} U^{q}$. \end{definition} By Def.~\ref{dbinstance} a database instance is a finite (relational) model of a database schema. The {\em active domain} $\textit{adom}(D)$ of an instance $D$ is the set of all individuals in $U$ occurring in some tuple $\vec{u}$ of some predicate interpretation $D(P)$, that is, $\textit{adom}(D) = \bigcup_{P \in \mathcal D} \{ u \in U \mid u = u_i \text{ for some } \vec{u} \in D(P) \}$. Observe that, since $\mathcal D$ contains a finite number of relation symbols and each $D(P)$ is finite, so is $\textit{adom}(D)$. We denote the set of all instances on $\mathcal D$ and $U$ as $\mathcal D(U)$. Clearly, the formal framework for databases we adopt is quite simple, but still it is powerful enough to cover practical cases of interest \cite{MaierUV84}.
Here we do not discuss in detail the pros and cons of the relational approach to database theory and refer to the literature for further details \cite{AbiteboulHV95}. To specify the properties of databases, we make use of first-order logic with equality and no function symbols. Let $V$ be a countable set of {\em individual variables}. \begin{definition}[FO-formulas over $\mathcal D$]\label{def:fo} Given a database schema $\mathcal D$, the formulas $\varphi$ of the first-order language $\L_{\mathcal D}$ are defined by the following BNF: \begin{eqnarray*} \varphi & ::= & x = x'\mid P(x_1, \ldots ,x_{q}) \mid \lnot \varphi \mid \varphi \to \varphi \mid \forall x \varphi \end{eqnarray*} where $P \in \mathcal D$, $x_1, \ldots ,x_{q}$ is a $q$-tuple of terms and $x, x' $ are terms. \end{definition} We assume ``$=$'' to be a special binary predicate with fixed obvious interpretation. By Def.~\ref{def:fo}, $\L_\mathcal D$ is a first-order language with equality over the relational vocabulary $\mathcal D$ and with no function symbols. In the following we use the standard abbreviations $\exists$, $\wedge$, $\vee$, and $\neq$. Also, free and bound variables are defined as standard. For a formula $\varphi \in \L_{\mathcal D}$, we write $\varphi(x_1,\ldots,x_\ell)$, or simply $\varphi(\vec x)$, to list explicitly in arbitrary order all free variables $x_1,\ldots,x_\ell$ of $\varphi$. A {\em sentence} is a formula with no free variables. Notice that the only terms in our language $\L_{\mathcal D}$ are individual variables. We could add constants for individuals with some minor technical changes to the definitions and results in the paper. However, these do not impact the theoretical contribution and we prefer to keep the notation lighter. To interpret FO-formulas on database instances, we introduce {\em assignments} as functions $\sigma: V \mapsto U$.
Given an assignment $\sigma$, we denote by $\sigma^x_u$ the assignment such that {(i)}\xspace $\sigma^x_u(x) = u$; and {(ii)}\xspace $\sigma^x_u(x') = \sigma(x')$, for every $x' \in V$ different from $x$. We can now define the semantics of $\L_\mathcal D$. \begin{definition}[Satisfaction of FO-formulas]\label{def:fo-sem} Given a $\mathcal D$-instance $D$, an assignment $\sigma$, and an FO-formula $\varphi\in\L_{\mathcal D}$, we inductively define whether $D$ \emph{satisfies $\varphi$ under $\sigma$}, or $ (D, \sigma) \models \varphi$, as follows: \begin{tabbing} $ (D, \sigma)\models P(x_1,\ldots,x_{q})$ \ \ \ \= iff \ \ \= $\langle \sigma(x_1),\ldots,\sigma(x_{q}) \rangle \in D(P)$\\ $ (D, \sigma)\models x = x'$ \> iff \> $\sigma(x)=\sigma(x')$\\ $ (D, \sigma)\models \lnot\varphi$ \> iff \> $(D, \sigma) \not \models\varphi$\\ $ (D, \sigma)\models \varphi \to \psi$ \> iff \> $(D,\sigma) \not \models \varphi $ or $ (D, \sigma)\models\psi$\\ $ (D, \sigma)\models \forall x\varphi$ \> iff \> for every $u\in \textit{adom}(D)$, $ (D, \sigma^x_u) \models\varphi$ \end{tabbing} A formula $\varphi$ is {\em true} in $D$, written $D\models\varphi$, iff $ (D, \sigma) \models \varphi$, for all assignments $\sigma$. \end{definition} Observe that we adopt an {\em active-domain} semantics, that is, quantified variables range only over the active domain of $D$. This is standard in database theory \cite{AbiteboulHV95}, where $\textit{adom}(D)$ is assumed to be the ``universe of discourse''.\\ \textbf{Constraints.} It is well-known that several properties and constraints on databases can be expressed as FO-sentences. Here we consider some of these for illustrative purposes. \begin{definition}[Functional Dependency]\label{def:functionaldependency} A {\em functional dependency} is an expression of type $n_1, \ldots, n_k \mapsto n_{k+1}, \ldots, n_{q}$. 
A database instance $D$ satisfies a functional dependency $n_1, \ldots, n_k \mapsto n_{k+1}, \ldots, n_{q}$ for predicate symbol $P$ with arity $q$ iff for every $q$-tuple $\vec{u}$, $\vec{u}'$ in $D(P)$, whenever $u_i = u'_i$ for all $i \leq k$, then we also have $u_i = u'_i$ for all $k+1 \leq i \leq q$. If $k = 1$, we say that it is a {\em key dependency}. \end{definition} Clearly, any database instance $D$ satisfies a functional dependency $n_1, \ldots, n_k \mapsto n_{k+1}, \ldots, n_{q}$ iff it satisfies the following: \begin{eqnarray*} \forall \vec{x} , \vec{y} \left(P(\vec{x}) \land P(\vec{y}) \land \bigwedge_{i \leq k }(x_i = y_i) \to \bigwedge_{k+1 \leq i \leq q}(x_i = y_i)\right) \end{eqnarray*} \begin{definition}[Value Constraint] A {\em value constraint} is an expression of type $n_k \in P_v$, where $D(P_v)$ contains a list of admissible values. A database instance $D$ satisfies a value constraint $n_k \in P_v$ for predicate symbol $P$ with arity $q \geq k$ iff for every $q$-tuple $\vec{u}$ in $D(P)$, $u_k \in D(P_v)$. \end{definition} Also for value constraints, it is easy to check that an instance $D$ satisfies constraint $n_k \in P_v$ for symbol $P$ iff it satisfies the following: \begin{eqnarray*} \forall x_1, \dots, x_q (P(x_1,\dots,x_q) \to P_v(x_k)) \end{eqnarray*} \begin{definition}[Referential Integrity Constraint] A referential integrity constraint enforces the foreign key of a predicate $P_1$ to be the primary key of predicate $P_2$. A database instance satisfies a referential integrity constraint on the last $k$ attributes, and we denote it $(P_1\to P_2, k)$, if for every $q_{1}$-tuple $\vec{u}\in D(P_1)$ there exists a $q_{2}$-tuple $\vec{u}'\in D(P_2)$ such that for all $1\leq j \leq k$ we have that $u_{q_1-k+j}=u_j'$.
\end{definition} A referential integrity constraint can also be translated into a first-order formula as follows: \begin{eqnarray*} \forall \vec{x} [ P_1(\vec{x}) \rightarrow \exists \vec{y} (P_2(\vec{y}) \wedge \bigwedge_{j=1}^{k} (x_{q_1-k+j}=y_j)) ] \end{eqnarray*} \section{Aggregators} \label{aggregators} The main research question we investigate in this paper regards how to define an aggregated database instance from the instances of $\mathcal N=\{1,\dots,n\}$ agents. This question is typical in social choice theory, where judgements, preferences, etc., are aggregated according to some notion of rationality that will be introduced in Section~\ref{sec:lifting}. For the rest of the paper we fix a database schema $\mathcal D$ over a common domain $U$, and consider a {\em profile} $\vec{D} = (D_1, \dots, D_n)$ of $n$ instances over $\mathcal D$ and $U$. Then, we can define an aggregation procedure on such instances. \begin{definition}[Aggregation Procedure] Given database schema $\mathcal D$ and domain $U$, an {\em aggregation procedure} $F: \mathcal D(U)^n \to \mathcal D(U)$ is a function assigning to each tuple $\vec{D}$ of instances for $n$ agents an aggregated instance $F(\vec{D}) \in \mathcal D(U)$. Let $\mathcal F$ be the class of all aggregation procedures. \end{definition} We use $N_{\vec{u}}^{\vec{D}(P)} := \{ i \in \mathcal N \mid \vec{u} \in D_i(P) \}$ to denote the set of agents accepting tuple $\vec{u}$ for symbol $P$, under profile $\vec{D}$. Notice that considering a unique domain $U$ is not really a limitation of the proposed approach: instances $D_1, \ldots, D_n$, each on a possibly different domain $U_i$, for $i \leq n$, can all be seen as instances on $\bigcup_{i \in \mathcal N} U_i$. Hereafter we illustrate and discuss some examples of aggregation procedures: \textbf{Union} (or nomination): for every $P \in \mathcal D$, $F(\vec{D})(P)=\bigcup_{i \leq n} D_i(P) $.
Intuitively, every agent is seen as having partial but correct information about the state of the world. Union can be considered a good aggregator if databases represent the agents' knowledge bases (certain information). \smallskip \textbf{Intersection} (or unanimity): for every $P \in \mathcal D$, $F(\vec{D})(P)=\bigcap_{i \leq n} D_i(P) $. Here every agent is supposed to have a partial and possibly incorrect vision of the state of the world. \smallskip \textbf{Quota rules}: a {\em quota} rule is an aggregation rule $F$ defined via functions $q_P : U^q \to \{0,1, \ldots , n+1 \}$, associating each symbol $P$ and $q$-tuple with a quota, by stipulating that $\vec{u} \in F(\vec{D})(P)$ iff $|\{i \mid \vec{u}\in D_i(P)\}| \geq q_P(\vec{u})$. $F$ is called {\em uniform} whenever $q_P$ is the same constant function for all tuples and symbols. Intuitively, if a tuple $\vec{u}$ appears in at least $q_P(\vec{u})$ of the initial databases, then it is accepted. The (strict) majority rule is a quota rule for $q = \lceil (n+1)/2 \rceil$; while union and intersection are quota rules for $q = 1$ and $q = n$ respectively. We call the uniform quota rules for $q = 0$ and $q = n + 1$ {\em trivial rules}. \smallskip \textbf{Distance-based function}: The symmetric distance can be used to measure dissimilarity between databases, obtaining the following definition: \begin{eqnarray*} F(\vec{D})(P) & = & \operatornamewithlimits{argmin}_{A \underset{fin}{\subset} U^{q_P}} \sum_{i \in \mathcal N} (|D_i(P)\setminus A| + |A \setminus D_i(P)|) \end{eqnarray*} Intuitively, the symmetric distance minimizes the ``distance'' between the aggregated database $F(\vec{D})$ and each $D_i$, defined as the number of tuples in $D_i$ but not in $F(\vec{D})$, plus the number of tuples in $F(\vec{D})$ but not in $D_i$, calculated across all $i \in \mathcal N$.
\smallskip \textbf{Dictatorship of agent $i^* \in \mathcal N$}: we have that $F(\vec{D}) = D_{i^*}$, i.e., the dictator $i^*$ completely determines the aggregated database. \smallskip {\bf Oligarchy of coalition $C^* \subseteq \mathcal N$}: for every $P \in \mathcal D$, $F(\vec{D})(P) = \bigcap_{i \in C^*} D_{i}(P)$. Oligarchy reduces to dictatorship for singletons, and to intersection for $C^* = \mathcal N$. \smallskip Quota rules are inspired by their homonyms in judgment aggregation \cite{DietrichListJTP2007}, introduced as a generalisation of the classic majority rule. The union and the intersection rules are well-known in the area of modal epistemic logic, corresponding, respectively, to distributed knowledge and ``everybody knows that'' \cite{Hintikka1962}. Distance-based procedures have been widely studied and axiomatised in the area of logic-based belief merging \cite{KoniecznyPinoPerezJLC2002}, while dictatorships and oligarchies are classical notions from social choice theory. Clearly, other aggregation procedures can be conceived. In the following we focus on those above, as they are well-studied in the literature and have nice computational properties, such as being computable in polynomial time. \section{The Axiomatic Method}\label{sec:axioms} Aggregation procedures are best characterised by means of axioms. In particular, we consider the following properties, where relation symbols $P, P' \in \mathcal D$, profiles $\vec{D}, \vec{D}' \in \mathcal D(U)^n$, and tuples $\vec{u}$, $\vec{u}' \in U^+$ are all universally quantified. \smallskip \textbf{Independence ($I$)}: if $N_{\vec{u}}^{\vec{D}(P)} = N_{\vec{u}}^{\vec{D}'(P)}$ then $\vec{u} \in F(\vec{D})(P)$ iff $\vec{u} \in F(\vec{D}')(P)$. \smallskip Intuitively, if the same agents accept (resp.~reject) a tuple in two different profiles, then the tuple is accepted (resp.~rejected) in both aggregated instances.
The axiom of independence is a widespread requirement from social choice theory, and is arguably the main cause of most impossibility theorems, such as Arrow's seminal result \cite{Arrow1963}. From a computational perspective, independent rules are typically easier to compute than non-independent ones. Clearly, quota rules satisfy independence, while neither dictatorships nor oligarchies do. \smallskip \textbf{Unanimity ($U$)}: $F(\vec{D})(P) \supseteq \bigcap_{i \in \mathcal N} D_i(P)$. \smallskip That is, a tuple accepted by all agents also appears in the aggregated database (for the relevant relation symbol). In particular, all rules in Section~\ref{aggregators} satisfy unanimity. \smallskip \textbf{Groundedness ($G$)}: $F(\vec{D})(P) \subseteq \bigcup_{i \in \mathcal N} D_i(P)$. \smallskip By groundedness, any tuple appearing in the aggregated database must be accepted by some agent. All rules from Section~\ref{aggregators}, with the exception of the distance-based rule, satisfy this property. \smallskip \textbf{Anonymity ($A$)}: for every permutation $\pi : \mathcal N \to \mathcal N$, we have $F(D_1, \ldots , D_n ) = F (D_{\pi(1)} , \ldots , D_{\pi(n)})$. \smallskip Here the identity of agents is irrelevant to the aggregation procedure. Clearly, this is the case for all aggregators in Section~\ref{aggregators} but dictatorship and oligarchy. \smallskip \textbf{Positive Neutrality ($N^+$)}: if $N_{\vec{u}}^{\vec{D}(P)} = N_{\vec{u}'}^{\vec{D}(P)}$ then $\vec{u} \in F(\vec{D})(P)$ iff $\vec{u}' \in F(\vec{D})(P)$. \textbf{Negative Neutrality ($N^-$)}: if $N_{\vec{u}}^{\vec{D}(P)} = \mathcal N \setminus N_{\vec{u}'}^{\vec{D}(P)}$ then $\vec{u} \in F(\vec{D})(P)$ iff $\vec{u}' \not\in F(\vec{D})(P)$. Observe that both versions of neutrality differ from independence: here we consider two different tuples in the same profile, while independence deals with the same tuple in two different profiles.
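Axioms such as unanimity, groundedness and anonymity can also be tested mechanically on concrete profiles. The following Python sketch (our own illustration; the representation of instances as dictionaries and all names are ours) checks $U$, $G$ and $A$ for a given aggregator, using the majority rule as a running example:

```python
# Illustrative sketch: instances map relation symbols to sets of tuples;
# the checks below test the axioms U, G and A on one concrete profile.
from itertools import permutations

def symbols(profile):
    return {P for D in profile for P in D}

def majority(profile):
    """Uniform quota rule with quota ceil((n+1)/2)."""
    q = (len(profile) + 2) // 2
    return {P: {t for t in set().union(*(D.get(P, set()) for D in profile))
                if sum(t in D.get(P, set()) for D in profile) >= q}
            for P in symbols(profile)}

def satisfies_unanimity(F, profile):
    """U: tuples accepted by every agent appear in the outcome."""
    out = F(profile)
    return all(set.intersection(*(D.get(P, set()) for D in profile))
               <= out.get(P, set()) for P in symbols(profile))

def satisfies_groundedness(F, profile):
    """G: every tuple in the outcome is accepted by some agent."""
    out = F(profile)
    return all(out.get(P, set())
               <= set().union(*(D.get(P, set()) for D in profile))
               for P in symbols(profile))

def satisfies_anonymity(F, profile):
    """A: the outcome is invariant under permuting the agents."""
    return all(F(list(p)) == F(profile) for p in permutations(profile))
```

As expected, majority passes all three checks, while a dictatorship of the first agent fails the anonymity check as soon as two agents disagree.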
We can easily see that all aggregators introduced in Section~\ref{aggregators} satisfy positive neutrality and, with the exception of most quota rules (see below), negative neutrality as well. \smallskip \textbf{Systematicity ($S$)}: if $N_{\vec{u}}^{\vec{D}(P)} = N_{\vec{u}'}^{\vec{D}(P')}$ then $\vec{u} \in F(\vec{D})(P)$ iff $\vec{u}' \in F(\vec{D})(P')$. \smallskip Observe that systematicity is equivalent to the conjunction of neutrality and independence. \smallskip \textbf{Permutation-Neutrality ($N^{P}$)}: for every permutation $\rho : U \to U$ of the domain $U$, lifted in the obvious way to profiles, we have $F(\rho(\vec{D}))=\rho(F(\vec{D}))$. \smallskip Again, all aggregators but dictatorship and oligarchies satisfy permutation-neutrality. \textbf{Monotonicity ($M$)}: if $\vec u \in F(\vec{D})(P)$ and for every $i \in \mathcal N$, either $D_i(P) = D'_i(P)$ or $D_i(P) \cup \{\vec{u}\} \subseteq D'_i(P)$, then $\vec u \in F(\vec{D}')(P)$. \smallskip Intuitively, a monotonic aggregator keeps accepting a given tuple when the support for that tuple increases. \smallskip Combinations of the axioms above can be used to characterise some of the rules that we defined in Section~\ref{aggregators}. Some of these results, such as the following, lift known results in judgement (propositional) aggregation to databases. \begin{lemma} An aggregation procedure satisfies $A$, $I$, and $M$ iff it is a quota rule. \end{lemma} \begin{proof} The implication from right to left follows from the fact that quota rules satisfy independence $I$, anonymity $A$, and monotonicity $M$, as we remarked above. For the implication from left to right, observe that, to accept a given tuple $\vec{u}$ in $F(\vec{D})(P)$, an independent aggregation procedure will only look at the set of agents $i \in \mathcal N$ such that $\vec{u} \in D_i(P)$. If the procedure is also anonymous, then acceptance is based only on the number of individuals accepting the tuple.
Finally, by monotonicity, there will be some minimal number of agents required to trigger collective acceptance. That number is the quota associated with the tuple and the symbol at hand. \end{proof} If we add neutrality (both positive and negative), then we obtain the class of uniform quota rules. If we furthermore impose unanimity and groundedness, then this excludes the trivial quota rules. \begin{lemma} If the number of individuals is odd and $|\mathcal D| \geq 2$, an aggregation procedure $F$ satisfies $A$, $N^-$, $N^+$, $I$ and $M$ on the full domain $\mathcal D(U)^n$ if and only if it is the majority rule. \end{lemma} \begin{proof} By neutrality the quota must be the same for all tuples and all relation symbols. By negative neutrality the two sets $N_{\vec{u}}^{\vec{D}(P)}$ and $\mathcal N \setminus N_{\vec{u}}^{\vec{D}(P)}$ must be treated symmetrically. Hence, the only possibility is to have a uniform quota of $(n+1)/2$. \end{proof} The corresponding versions of these results have been shown in judgment and graph aggregation \cite{DietrichListJTP2007,EndrissGrandiAIJ2017}. Notice however that there are some notable differences w.r.t.~the literature. For instance, the axiom of neutrality is here split into a positive and a negative part. We conclude this section by showing the following equivalence between majority and distance-based rules. \begin{lemma} In the absence of integrity constraints, and for an odd number of agents, the distance-based rule coincides with the majority rule. \end{lemma} \begin{proof} By the definition of the distance-based rule, we have that \begin{eqnarray*} F(\vec{D})(P) & = & \operatornamewithlimits{argmin}_{A \underset{fin}{\subset} U^{q_P}} \sum_{i \in \mathcal N} (|D_i(P)\setminus A| + |A \setminus D_i(P)|) \end{eqnarray*} With a slight abuse of notation, if $A \subseteq U^{m}$, let $A(\vec{u})$ be its characteristic function.
Since the minimisation is not constrained, and all structures are finite, this is equivalent to: \begin{eqnarray*} F(\vec{D})(P) & = & \operatornamewithlimits{argmin}_{A \underset{fin}{\subset} U^{q_P}} \sum_{i \in \mathcal N} \sum_{\vec{u}\in U^{q_P}} |D_i(P)(\vec{u}) - A(\vec{u})| \\ & = & \operatornamewithlimits{argmin}_{A \underset{fin}{\subset} U^{q_P}} \sum_{\vec{u}\in U^{q_P}} \sum_{i \in \mathcal N} |D_i(P)(\vec{u}) - A(\vec{u})| \end{eqnarray*} Therefore, for each $\vec{u}$, if for a majority of the individuals in $\mathcal N$ we have that $\vec{u}\in D_i(P)$, then $\vec{u}\in A$ minimises the overall distance, and symmetrically for the case in which a majority of individuals are such that $\vec{u}\not \in D_i(P)$. \end{proof} \section{Lifting Constraints}\label{sec:lifting} In this section we analyse further the properties of the aggregation procedures introduced in Section~\ref{aggregators}. Specifically, we present a notion of {\em collective rationality} that aims to capture the appropriateness of a given aggregator $F$ w.r.t.~some constraint $\phi$ on the input instances $D_1, \ldots, D_n$. Hereafter let $\phi$ be a sentence in the first-order language $\L_{\mathcal D}$ associated to $\mathcal D$, interpreted as a common constraint that is satisfied by all $D_1, \ldots, D_n$. Here we are interested in the following notion: \begin{definition}[Collective Rationality] A constraint $\phi$ is lifted by an aggregation procedure $F$ if whenever $D_i \models \phi$ for all $i \in \mathcal N$, then also $F(\vec{D}) \models \phi$. An aggregation procedure $F : \mathcal D(U)^n \to \mathcal D(U)$ is {\em collectively rational} (CR) with respect to $\phi$ iff $F$ lifts $\phi$. \end{definition} Intuitively, an aggregator is CR w.r.t.~constraint $\phi$ iff it lifts, or preserves, $\phi$. \begin{example} We now provide an illustrative example of first-order collective (ir)rationality with the majority rule. 
Consider agents 1 and 2 with database schema $\mathcal D = \{ P/1, Q/2 \}$. Two database instances are given as $D_1 = \{ P(a), Q(a,b) \}$ and $D_2 = \{ P(a), Q(a,c) \}$. Clearly, both instances satisfy the integrity constraint $\phi = \forall x (P(x) \to \exists y Q(x,y))$. However, their aggregate $D = F(D_1,D_2) = \{ P(a) \}$, obtained by the majority rule, does not satisfy $\phi$. This example, which can be considered a {\em paradox} in the sense of \cite{GrandiEndrissAIJ2013}, shows that the majority rule is not collectively rational w.r.t.~every constraint in the language $\L_{\mathcal D}$, thus obtaining a first, simple negative result. \end{example} One natural question to ask about lifting of constraints is the following. \begin{question} Given an axiom AX, what is the class of constraints that are lifted by all aggregators $F$ satisfying AX? \end{question} To make this question more precise, consider the following definition. \begin{definition} Given a language $\L \subseteq \L_{\mathcal D}$, define $CR[\L]$ as the class of aggregation procedures that lift all $\phi \in \L$: \begin{eqnarray*} CR[\L] & ::= & \{F : \mathcal D(U)^n \to \mathcal D(U) \mid \text{$F$ is CR for all $\phi \in \L$}\} \end{eqnarray*} Moreover, an aggregator $F$ {\em satisfies a set $AX$ of axioms w.r.t.~language $\L$}, if $F$ satisfies the axioms in AX on set $\{ D \in \mathcal D(U) \mid D \models \phi \}$ for all constraints $\phi \in \L$. The class of all such aggregators is given as: \begin{eqnarray*} \mathcal F_{\L}[AX] & := & \{F : \mathcal D(U)^{n} \to \mathcal D(U) \mid \text{ $F$ satisfies $AX$ on} \\ & & \text{$\{ D \in \mathcal D(U) \mid D \models \phi \}$ for all $\phi \in \L$}\} \end{eqnarray*} \end{definition} The following lemmas extend results in \cite{GrandiEndrissAIJ2013} to the case of database aggregation. Hereafter, for a language $\L$ and operator $\bullet$, $\L^{\bullet}$ is the language obtained by closing formulas in $\L$ under $\bullet$. The proofs are immediate, so we omit them.
We only remark that point (3) follows from the fact that the constraints $\phi \in \L$ are assumed to be sentences. \begin{lemma} \label{aux1} For every language $\L \subseteq \L_{\mathcal D}$: \begin{enumerate} \item $CR[\L^{\land}] = CR[\L^{\equiv}] = CR[\L]$ \item $CR[\L \cup \{ \top \}] = CR[\L \cup \{ \bot \}] = CR[\L]$ \noindent Moreover, \item $CR[\L^{\forall}] = CR[\L^{\exists}] = CR[\L]$ \end{enumerate} \end{lemma} By Lemma~\ref{aux1} an aggregator $F$ is CR w.r.t.~a language $\L$ iff it is CR w.r.t.~the closure of $\L$ under either conjunction, or equivalence, or universal or existential quantification. Also, adding either $\top$ or $\bot$ does not change collective rationality. Furthermore, the following result extends Lemma 7 in \cite{GrandiEndrissAIJ2013}. Also in this case, proofs are immediate and therefore omitted. \begin{lemma} \label{aux2} For all languages $\L_1, \L_2 \subseteq \L_{\mathcal D}$, \begin{enumerate} \item If $\L_1 \subseteq \L_2$ then $CR[\L_2] \subseteq CR[\L_1]$ \item $CR[\L_1 \cup \L_2] = CR[\L_1] \cap CR[\L_2]$ \end{enumerate} \end{lemma} By Lemma~\ref{aux2}, collective rationality is anti-monotone w.r.t.~language inclusion, and an aggregator $F$ is CR w.r.t.~the union of languages iff it is CR w.r.t.~each language separately. The next results, which extend Lemma 8 in \cite{GrandiEndrissAIJ2013}, relate collective rationality with axioms.
\begin{lemma} \label{lemma8} For all languages $\L_1, \L_2 \subseteq \L_{\mathcal D}$, \begin{enumerate} \item If $\L_1 \subseteq \L_2$ then $\mathcal F_{\L_2}[AX] \subseteq \mathcal F_{\L_1}[AX]$\\ In particular, if $\top \in \L$ then $\mathcal F_{\L}[AX] \subseteq \mathcal F_{\{ \top \}}[AX]$ \item $\mathcal F_{\L}[AX1, AX2] = \mathcal F_{\L}[AX1] \cap \mathcal F_{\L}[AX2]$ \end{enumerate} \end{lemma} \begin{proof} As regards (1), if $F$ satisfies $AX$ on $\{ D \in \mathcal D(U) \mid D \models \phi \}$, for all $\phi \in \L_2$, and $\L_1 \subseteq \L_2$, then in particular it satisfies $AX$ on $\{ D \in \mathcal D(U) \mid D \models \phi \}$, for all $\phi \in \L_1$. The second part of (1) then follows immediately, as $\{ \top \} \subseteq \L$. As for (2), $F$ satisfies $AX1$ and $AX2$ on $\{ D \in \mathcal D(U) \mid D \models \phi \}$, for all $\phi \in \L$, iff both $F$ satisfies $AX1$ and $F$ satisfies $AX2$ on the same sets. \end{proof} However, not all results available at the propositional level extend to the first order. In particular, the following result means that Lemma 6 in \cite{GrandiEndrissAIJ2013} does not lift to the first order. \begin{lemma} \label{lemma6} There exist languages $\L_1$ and $\L_2$, both containing $\top$ and $\bot$, such that $\L_1 \neq \L_2$ but $CR[\L_1 ] = CR[\L_2]$. \end{lemma} \begin{proof} Consider languages $\L_1 = \{ \bot, \top \}$ and $\L_2 = \L_1 \cup \{ \forall x P(x) \}$ on $\mathcal D = \{ P/1 \}$. By Lemma~\ref{aux2}.(1), $CR[\L_2] \subseteq CR[\L_1]$. Now, suppose that $F \in CR[\L_1]$ and consider a profile $\vec{D}$ such that $D_i \models \forall x P(x)$ for all $i \in \mathcal N$. By definition, $F(\vec{D}) \in \mathcal D(U)$. We consider two alternatives: either $F(\vec{D})$ is empty, and then $F(\vec{D}) \models \forall x P(x)$ trivially; or $F(\vec{D})$ is not empty, and then, since $P$ is the only symbol in $\mathcal D$, the active domain of $F(\vec{D})$ coincides with $F(\vec{D})(P)$, so that $F(\vec{D}) \models \forall x P(x)$ as well. As a result, $CR[\L_1] \subseteq CR[\L_2]$.
\end{proof} By Lemma~\ref{lemma6}, the operator $CR[-]$ from languages to sets of aggregators is not injective in general. Symmetrically, we introduce an operator $LF[-]$ from sets of aggregators to languages. \begin{definition}[Lifted Language] Given a set $\mathcal G$ of aggregation procedures, let $LF[\mathcal G]$ be the language of the constraints that are lifted by all $F \in \mathcal G$: $$LF[\mathcal G] ::= \{ \varphi \in \L_{\mathcal D} \mid \text{$F$ is $CR$ w.r.t.~$\varphi$, for all $F \in \mathcal G$} \}$$ \end{definition} Clearly, $LF[\mathcal G]$ is the intersection of all $LF[\{ F \}]$, for $F \in \mathcal G$. Lemma~\ref{lemma6} has an impact on the following result, which corresponds to Proposition 9 in \cite{GrandiEndrissAIJ2013}. In particular, while in \cite{GrandiEndrissAIJ2013} we have equality for item (1), here we only have inclusion. \begin{proposition} Let $\L$ be a language containing $\top$ and $\bot$, and $\mathcal G$ a class of aggregators. Then, \begin{enumerate} \item $\L \subseteq LF[CR[\L]]$, and this inclusion is strict for some languages. \item $\mathcal G \subseteq CR[LF[\mathcal G]]$, and this inclusion is strict for some classes. \end{enumerate} \end{proposition} \begin{proof} As regards (1), inclusion $\L \subseteq LF[CR[\L]]$ is an immediate consequence of the definitions of $CR$ and $LF$. On the other hand, consider languages $\L_1 = \{ \bot, \top \}$ and $\L_2 = \L_1 \cup \{ \forall x P(x) \}$ in the proof of Lemma~\ref{lemma6}. We have $CR[\L_1] = CR[\L_2]$, and therefore $LF[CR[\L_1]] = LF[CR[\L_2]]$, but $\L_1 \subset \L_2$, and therefore $\L_1 \subset LF[CR[\L_1]]$. As for (2), inclusion $\mathcal G \subseteq CR[LF[\mathcal G]]$ is also an immediate consequence of the definitions of $CR$ and $LF$. Further, in Proposition 9 of \cite{GrandiEndrissAIJ2013} a class $\mathcal G$ is exhibited (essentially, one not containing generalised dictatorships) for which this inclusion is strict.
\end{proof} To conclude, the relationship between operators $CR[-]$ and $LF[-]$ can be represented as in Fig.~\ref{fig1}. \begin{figure} \begin{center} \begin{tikzpicture}[auto,node distance=3cm,->,>=stealth',shorten >=1pt,semithick] \tikzstyle{every state}=[fill=white,text=black,minimum size=0.3cm] \tikzstyle{every initial by arrow}=[initial text=] \node[] (t0) {$\L \subseteq \L_{\mathcal D}$}; \node[] (t1) [right of = t0] {$\mathcal G \subseteq \mathcal F$}; \path (t0) edge[bend left] node {$CR[-]$} (t1); \path (t1) edge[bend left] node {$LF[-]$} (t0); \end{tikzpicture} \caption{The operators $CR[-]$ and $LF[-]$. \label{fig1}} \end{center} \end{figure} The two operators go in opposite directions but, as the proposition above shows, they are not mutual inverses. \section{Characterisation Results}\label{sec:characterisations} In this section we show some correspondences between axiomatic properties and restrictions of the first-order language in which integrity constraints can be expressed, in line with previous work by Grandi and Endriss \cite{GrandiEndrissAIJ2013}. We then focus on the database-specific constraints introduced in Section~\ref{sec:preliminaries}, showing sufficient and necessary conditions for collective rationality of quota rules. To state the next result we consider a set $Con \subseteq U$ of constants, interpreted as themselves in each $D_i$, that is, $\sigma(c) = c$ for every $c \in Con$. Then, let $lit^+ \subseteq \L_{\mathcal D}$ be some language containing only positive literals of the form $P(c_1, \ldots, c_q)$, for $P\in \mathcal D$ and constants $c_1, \ldots, c_q$. \begin{theorem} \label{theor1} $\mathcal F_{lit^+}[U] \subseteq CR[lit^+]$; moreover, $\mathcal F_{lit^+}[U] \supseteq CR[lit^+]$ provided that $Con$ contains all individuals in the domain of $F$. \end{theorem} \begin{proof} As to inclusion $\subseteq$, we see that if all instances $D_1, \ldots, D_n$ satisfy a formula $P(c_1, \ldots, c_q)$ in $lit^+$, then $\vec{c} \in D_i(P)$ for every $i \in \mathcal N$.
By unanimity we have that $\bigcap_{i \in \mathcal N} D_i(P) \subseteq F(\vec{D})(P)$, and therefore $\vec{c} \in F(\vec{D})(P)$. Hence, $F$ is collectively rational on $lit^+$. As to $\supseteq$, suppose that $F \in CR[lit^+]$ and choose a profile $D_1, \ldots, D_n$ with $\vec{u} \in \bigcap_{i \in \mathcal N} D_i(P)$, that is, for every $i \in \mathcal N$, $D_i \models P(u_1, \ldots, u_q)$. Since we assumed that $Con$ contains all individuals in the domain of $F$, individuals $u_1, \ldots, u_q$ belong to $Con$ and formulas $P(u_1, \ldots, u_q)$ are in $lit^+$. Further, $F$ is CR on $D_1, \ldots, D_n$ and therefore $F(\vec{D}) \models P(u_1, \ldots, u_q)$, that is, $\vec{u} \in F(\vec{D})(P)$, which means that $F$ is unanimous. \end{proof} By Theorem~\ref{theor1}, provided that $Con$ covers the domain, an aggregator $F$ is collectively rational on a language $lit^+$ with positive literals only iff it is unanimous on the class of instances satisfying the very same positive literals. A symmetric result holds for the axiom of groundedness and any language $lit^- \subseteq \L_{\mathcal D}$ containing only {\em negative} literals of the form $\neg P(c_1, \ldots, c_q)$. The proof is similar, so we omit it. \begin{theorem} \label{theor2} $\mathcal F_{lit^-}[G] \subseteq CR[lit^-]$; moreover, $\mathcal F_{lit^-}[G] \supseteq CR[lit^-]$ provided that $Con$ contains all individuals in the domain of $F$. \end{theorem} From Theorems~\ref{theor1} and~\ref{theor2}, we immediately obtain the following corollary by the lemmas in Section~\ref{sec:lifting}, where $lit = lit^+ \cup lit^-$. \begin{corollary} $\mathcal F_{lit}[U,G] \subseteq CR[lit]$; moreover, $\mathcal F_{lit}[U,G] \supseteq CR[lit]$ provided that $Con$ contains all individuals in the domain of $F$.
\end{corollary} \begin{proof} As to inclusion $\subseteq$, by Lemma~\ref{lemma8}.(2), $\mathcal F_{lit}[U,G] = \mathcal F_{lit}[U] \cap \mathcal F_{lit}[G]$, and by Lemma~\ref{lemma8}.(1) $\mathcal F_{lit}[U] \cap \mathcal F_{lit}[G] \subseteq \mathcal F_{lit^+}[U] \cap \mathcal F_{lit^-}[G]$. Then, by Theorems~\ref{theor1} and~\ref{theor2}, $\mathcal F_{lit^+}[U] \cap \mathcal F_{lit^-}[G] \subseteq CR[lit^+] \cap CR[lit^-]$. Finally, by Lemma~\ref{aux2}.(1) $CR[lit^+] \cap CR[lit^-] \subseteq CR[lit]$. The other inclusion is proved similarly. \end{proof} Notice that, unlike in the propositional case \cite[Theorem 10]{GrandiEndrissAIJ2013}, here we need both axioms of unanimity and groundedness to preserve both positive and negative literals, while for propositional literals unanimity suffices. Hence, even simple results do not transfer immediately from the propositional to the first-order setting. Next, define $\L_{\leftrightarrow}$ as the language of equivalences $\forall \vec{x}, \vec{x}' (P(\vec{x}) \leftrightarrow P'(\vec{x}'))$ for relation symbols $P, P' \in \mathcal D$. We show the following: \begin{theorem}\label{theor3} $ CR[\L_{\leftrightarrow}] = \mathcal F_{\leftrightarrow}[N^{+}]$ \end{theorem} \begin{proof} As for inclusion $\supseteq$, pick an equivalence $\forall \vec{x}, \vec{x}'(P(\vec{x}) \leftrightarrow P'(\vec{x}'))$. This constraint forces, in each instance, relation symbols $P$ and $P'$ to share the same pattern of acceptance/rejection, and since aggregator $F$ is positively neutral, we get $F(\vec{D}) \models \forall \vec{x}, \vec{x}'(P(\vec{x}) \leftrightarrow P'(\vec{x}'))$. Therefore, the constraint given by the initial equivalence is lifted. \\ As for inclusion $\subseteq$, suppose that a profile $\vec{D}$ is such that $N_{\vec{u}}^{\vec{D}(P)} = N_{\vec{u}'}^{\vec{D}(P')}$.
This implies that for every $i \in \mathcal N$, $D_i \models \forall \vec{x}, \vec{x}'(P(\vec{x}) \leftrightarrow P'(\vec{x}'))$, and since $F$ is in $CR[\L_{\leftrightarrow}]$, $\vec{u} \in F(\vec{D})(P)$ iff $\vec{u}' \in F(\vec{D})(P')$. This holds for every such profile $\vec{D}$, proving that $F$ is positively neutral. \end{proof} By Theorem~\ref{theor3} an aggregator $F$ is collectively rational on language $\L_{\leftrightarrow}$ iff it is positively neutral on the class of instances satisfying all formulas in $\L_{\leftrightarrow}$. Let us now define the following class: \begin{definition}[Generalised dictatorship] An aggregation procedure $F : \mathcal D(U)^n \to \mathcal D(U)$ is a {\em generalised dictatorship} if there exists a map $g: \mathcal D(U)^n \to \mathcal N$ such that for every $\vec{D} \in \mathcal D(U)^n$, $F(\vec{D}) = D_{g(\vec{D})}$. Let $GDIC$ be the class of all generalised dictatorships. \end{definition} Generalised dictatorships include classical dictatorships, but also more interesting procedures known as \emph{most representative voter rules}, which select the individual input that best summarises a given profile. Clearly, since each single instance satisfies the given set of constraints, a generalised dictatorship is collectively rational with respect to the full first-order language. \begin{theorem} $GDIC \subset CR[\L_{\mathcal D}]$ \end{theorem} Observe that while for binary aggregation the theorem above is an equality \cite[Theorem 16]{GrandiEndrissAIJ2013}, this is not the case for database aggregation. This is due to the fact that the first-order language cannot specify uniquely a given database instance. The proof of this fact is rather immediate: consider a dictatorship of the first agent, modified by permuting all the elements in $U$. That is, $F(\vec{D})=\rho(D_1)$, where $\rho:U\to U$ is a fixed non-identity permutation. For a suitable profile, $D_1 \neq \rho(D_1)$ and $\rho(D_1) \neq D_i$ for all $i \in \mathcal N$, but all constraints that were satisfied by $D_1$ are also satisfied by $\rho(D_1)$, since first-order sentences are invariant under isomorphisms of instances.
Hence, this aggregator is collectively rational with respect to the full first-order language $\L_{\mathcal D}$, but is not a generalised dictatorship. We now turn our attention to integrity constraints proper to databases. We begin with functional dependencies. \begin{proposition} \label{prop1} A quota rule lifts a functional constraint iff $q_P > \frac{n}{2}$ for all relation symbols $P$ occurring in the functional constraint. \end{proposition} \begin{proof} By assumption, every instance $D_i$ satisfies the constraint. That is, for every tuple $(u_1,\dots, u_k)$, either there is a unique $(u_{k+1},\dots, \allowbreak u_q)$ such that $(u_1,\dots,u_q)=\vec{u}\in D_i(P)$, or there is none. Suppose now that the constraint is falsified by the collective outcome. That is, there are $\vec{u} \neq \vec{u}'$ such that both $\vec{u}\in F(\vec{D})(P)$ and $\vec{u}'\in F(\vec{D})(P)$, and $\vec{u}$ and $\vec{u}'$ coincide on the first $k$ coordinates. By definition of quota rules, this means that at least $q_P$ voters are such that $\vec{u}\in D_i(P)$, and at least $q_P$ possibly different voters had $\vec{u}'\in D_i(P)$. Since, by the functional constraint, each individual can have at most one of $\vec{u}$ and $\vec{u}'$ in $D_i(P)$, by the pigeonhole principle this is possible if and only if the quota $q_P \leq \frac{n}{2}$. \end{proof} As immediate applications of Prop.~\ref{prop1}, the intersection rule clearly lifts any functional dependency, while the union lifts none. To see the latter, it is sufficient to consider two database instances that associate different tuples to the same primary key. \begin{proposition}\label{prop2} An aggregation procedure $F$ lifts a value constraint if $F$ is grounded. \end{proposition} \begin{proof} Let $u_k\in D(P_v)$ be a value constraint, where for all $i,j\in \mathcal N$, we have that $D_i(P_v)=D_j(P_v)$. A grounded aggregation procedure is such that $F(\vec{D})(P)\subseteq \bigcup_{i \in \mathcal N} D_i(P)$.
Hence, for all $\vec{u}\in F(\vec{D})(P)$, there exists an $i\in \mathcal N$ such that $\vec{u}\in D_i(P)$. Since all individual databases satisfy the value constraint, we have that $u_k\in D_i(P_v)$, and therefore $u_k\in F(\vec{D})(P_v)\subseteq \bigcup_{i \in \mathcal N} D_i(P_v)$, showing that $F(\vec{D})$ satisfies the value constraint as well. \end{proof} The converse of Prop.~\ref{prop2} does not hold in general, since a non-grounded aggregator can easily be devised that still lifts a given value constraint. The last result in this section concerns again quota rules. \begin{proposition} \label{prop3} A quota rule lifts a referential constraint $(P_1\to P_2, k)$ iff $q_{P_2} = 1$. \end{proposition} \begin{proof} Let $\vec{u}\in F(\vec{D})(P_1)$. Since all the individual databases satisfy the integrity constraint, we know that for every $i \in \mathcal N$ there exists a $\vec{u}_i\in D_i(P_2)$ whose first $k$ coordinates coincide with the last $k$ coordinates of $\vec{u}$. Since the $\vec{u}_i$ may all be different, each of them may be supported by one single individual. Therefore, the referential constraint is lifted if and only if the quota relative to $P_2$ is sufficiently small, i.e., $q_{P_2} = 1$. \end{proof} As an immediate application of Prop.~\ref{prop3}, since intersection and union are quota rules, the union (with $q_{P_2} = 1$) lifts referential constraints, whereas the intersection does not for $n > 1$. As regards distance-based rules, we only remark that they lift all integrity constraints by their definition, provided that the minimisation is restricted to consistent databases. \section{Aggregation and Query Answering}\label{sec:queries} In this section we analyse one of the most common operations on databases, i.e., querying, in the light of (rational) aggregation. Observe that any open formula $\phi(x_1,\dots,x_\ell)$, with free variables $x_1,\dots,x_\ell$, can be thought of as a query \cite{AbiteboulHV95}.
Evaluating $\phi(x_1,\dots,x_\ell)$ on a database instance $D$ returns the set $ans(D, \phi)$ of tuples $\vec{u}=(u_1,\dots,u_\ell)$ such that the assignment $\sigma$, with $\sigma(x_i) = u_i$ for $i \leq \ell$, satisfies $\phi$, that is, $(D, \sigma) \models \phi$. Hereafter, with an abuse of notation, we often write simply $(D, \vec{u}) \models \phi$. Given the relevance of query answering in database theory, the following question is of obvious interest. \begin{question} What is the relationship between the answer $ans(F(\vec{D}), \phi)$ to query $\phi$ on the aggregated database $F(\vec{D})$, and the answers $ans(D_1, \phi), \allowbreak \ldots, ans(D_n, \phi)$ to the same query on each instance $D_1, \ldots, D_n$? \end{question} Clearly, given a query $\phi$, every aggregator $F$ on database instances induces an aggregation procedure $F^*$ on the query answers, as illustrated by the following diagram, where $D = F(\vec{D})$: \begin{center} \begin{tikzpicture}[auto, node distance=2cm, ->, >=stealth', shorten >=1pt, semithick] \tikzstyle{every place/.style}=[fill=white, text=black, minimum size=15pt] \tikzstyle{every initial by arrow}=[initial text=] \node (s0) {$D_1, \ldots, D_n$}; \node (s1) [right of=s0, node distance=5cm] {$D$}; \node (s2) [below of=s0] {$ans(D_1, \phi), \ldots, ans(D_n, \phi)$}; \node (s3) [below of=s1] {$ans(D,\phi)$}; \path (s0) edge node[above] {$F$} (s1); \path (s0) edge node[left] {$\phi$} (s2); \path (s1) edge node[right] {$\phi$} (s3); \path (s2) edge node[above] {$F^*$} (s3); \end{tikzpicture} \end{center} Hereafter we consider some examples to illustrate this question. \begin{example} \label{ex1} If we assume intersection as the aggregation procedure, it is easy to check that in general the answer to a query in the aggregated database is not the intersection of the answers for each single instance. To see this, let $D_1(P) = \{ (a,b) \}$ and $D_2(P) = \{ (a,d) \}$ and consider query $\phi = \exists y P(x,y)$. 
Clearly, $ans(D_1 \cap D_2, \phi)$ is empty, while $ans(D_1, \phi) \cap ans(D_2, \phi) = \{a\}$. Hence, in general $\bigcap_{i \in \mathcal N} ans(D_i, \phi) \not \subseteq ans(\bigcap_{i \in \mathcal N} D_i, \phi)$. The converse can also be the case. Consider instances $D_1$, $D_2$ such that $D_1(P) = \{(a,a), (a,b)\}$, $D_1(R) = \{ c \}$, and $D_2(P) = \{(a,a), (a,b)\}$, $D_2(R) = \{ d \}$, with query $\phi = \forall y P(x,y)$. The intersection $ans(D_1, \phi) \cap ans(D_2, \phi)$ of answers is empty. However, the answer w.r.t.~the intersection of databases is $ans(D_1 \cap D_2, \phi) = \{ a \}$, since the active domain of the intersection only includes elements $a$ and $b$. As a result, in general $ans(\bigcap_{i \in \mathcal N} D_i, \phi) \not \subseteq \bigcap_{i \in \mathcal N} ans(D_i, \phi)$. \end{example} Similar arguments can be used to show that the union of answers is in general different from the answer on the union of instances. These examples show that it is extremely difficult to find aggregators that commute for any first-order query $\phi \in \L_{\mathcal D}$. Hence, they naturally raise the question of syntactic restrictions on queries such that the aggregation procedure $F^* = \phi \circ F \circ \phi^{-1}$ on answers can be expressed explicitly in terms of $F$ (e.g., the intersection of answers is the answer to the query on the intersection): \begin{question} \label{quest1} Given aggregation procedures $F$ and $F^*$, is there a restriction of the query language for $\phi$ such that the diagram above commutes? \end{question} This problem is related to the following, more general question. \begin{question} \label{quest2} Given an aggregation procedure $F$ and a query language $\L$, what is the aggregation procedure $F^*$? Can $F^*$ be represented explicitly? \end{question} The following result provides a first, partial answer to Question~\ref{quest1}, in the case $F$ and $F^*$ are unions.
\begin{lemma}[Existential Fragment] \label{existential} Consider the positive existential fragment $\L^+_{\exists}$ of first-order logic defined as follows: \begin{eqnarray*} \phi & ::= & P(x_1, \ldots, x_q) \mid \phi \lor \phi \mid \exists x \phi \end{eqnarray*} The language $\L^+_{\exists}$ is lifted by unions, that is, for $F$ and $F^*$ equal to set-theoretical union, the diagram commutes for the query language $\L^+_{\exists}$. \end{lemma} \begin{proof} The proof is by induction on the structure of query $\phi$. For atomic $\phi = P(x_1, \ldots, x_q)$, $\vec{u} \in ans(\bigcup_{i \in \mathcal N} D_i,\phi)$ iff $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$, iff for some $i \in \mathcal N$, $(D_i, \vec{u}) \models \phi$, iff $\vec{u} \in ans(D_i,\phi)$ for some $i \in \mathcal N$, iff $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i,\phi)$. For $\phi = \psi \lor \psi'$, $\vec{u} \in ans(\bigcup_{i \in \mathcal N} D_i,\phi)$ iff $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$, iff $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \psi$ or $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \psi'$, iff, by induction hypothesis, $(D_i, \vec{u}) \models \psi$ or $(D_j, \vec{u}) \models \psi'$ for some $i, j \in \mathcal N$. In either case, $(D_k, \vec{u}) \models \psi \lor \psi'$ for some $k \in \mathcal N$, that is, $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i,\phi)$. On the other hand, $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i,\phi)$ iff $\vec{u} \in ans(D_i,\phi)$ for some $i \in \mathcal N$, iff $(D_i, \vec{u}) \models \psi$ or $(D_i, \vec{u}) \models \psi'$. In either case, by induction hypothesis $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$, that is, $\vec{u} \in ans(\bigcup_{i \in \mathcal N} D_i,\phi)$.
For $\phi = \exists x \psi$, $\vec{u} \in ans(\bigcup_{i \in \mathcal N} D_i,\phi)$ iff $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$, iff for some $u \in adom(\bigcup_{i \in \mathcal N} D_i)$, $(\bigcup_{i \in \mathcal N} D_i, \vec{u} \cdot u) \models \psi$, and therefore for some $i, j \in \mathcal N$, $u \in adom(D_j)$ and $(D_i, \vec{u} \cdot u) \models \psi$. Notice that if $(D_i, \vec{u} \cdot u) \models \psi$, then $u \in adom(D_i)$ as well, as $\psi$ belongs to the positive (existential) fragment of first-order logic. Hence, for some $i \in \mathcal N$, $u \in adom(D_i)$ and $(D_i, \vec{u} \cdot u) \models \psi$, that is, $\vec{u} \in ans(D_i,\phi)$ for some $i \in \mathcal N$. On the other hand, $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i,\phi)$ iff $\vec{u} \in ans(D_i,\phi)$ for some $i \in \mathcal N$, iff for some $u \in adom(D_i)$, $(D_i, \vec{u} \cdot u) \models \psi$, and hence $u \in adom(\bigcup_{i \in \mathcal N} D_i)$ and, by induction hypothesis, $(\bigcup_{i \in \mathcal N} D_i, \vec{u} \cdot u) \models \psi$. Hence, $\vec{u} \in ans(\bigcup_{i \in \mathcal N} D_i,\phi)$. \end{proof} By Lemma~\ref{existential} queries in $\L^+_{\exists}$ are preserved whenever both $F$ and $F^*$ are unions. Obviously, it would be of interest to determine the largest fragment $\L'$ of first-order logic preserved by unions. By the results in this section we know that $\L^+_{\exists} \subseteq \L' \subset \L_{\mathcal D}$. Further, we may wonder whether a result symmetric to Lemma~\ref{existential} holds for intersections and the positive universal fragment $\L^+_{\forall}$ of first-order logic defined as follows: \begin{eqnarray*} \phi & ::= & P(x_1, \ldots, x_q) \mid \phi \land \phi \mid \forall x \phi \end{eqnarray*} Unfortunately, in Example~\ref{ex1} we provided a formula $\phi = \forall y P(x, y)$ in $\L^+_{\forall}$ and instances $D_1$, $D_2$ such that $ans(D_1 \cap D_2, \phi) \not \subseteq ans(D_1, \phi) \cap ans(D_2, \phi)$.
Hence, for $F$ and $F^*$ equal to set-theoretical intersection, the diagram above does not commute for the query language $\L^+_{\forall}$. Nonetheless, we are able to prove a weaker but still significant result related to Question~\ref{quest2}. Specifically, the next lemma shows that if in the diagram above $F$ is the intersection and the query language is $\L^+_{\forall}$, then $F^*$ is unanimous, in the sense that $\bigcap_{i \in \mathcal N} ans(D_i, \phi) \subseteq ans(\bigcap_{i \in \mathcal N} D_i,\phi)$. \begin{lemma} \label{universal} Let the aggregator $F$ be the intersection and let the query language be $\L^+_{\forall}$. Then, the lifted aggregator $F^*$ is unanimous. \end{lemma} \begin{proof} We prove that $\bigcap_{i \in \mathcal N} ans(D_i, \phi) \subseteq ans(\bigcap_{i \in \mathcal N} D_i,\phi)$. So, if $\vec{u} \in \bigcap_{i \in \mathcal N} ans(D_i, \phi)$ then for every $i \in \mathcal N$, $(D_i, \vec{u}) \models \phi$. We now prove by induction on $\phi \in \L^+_{\forall}$ that if for every $i \in \mathcal N$, $(D_i, \vec{u}) \models \phi$, then $(\bigcap_{i \in \mathcal N} D_i, \vec{u}) \models \phi$. As to the base case for $\phi = P(x_1, \ldots, x_q)$ atomic, $(D_i, \vec{u}) \models P(x_1, \ldots, x_q)$ iff $\vec{u} \in D_i(P)$ for every $i \in \mathcal N$. In particular, $\vec{u} \in \bigcap_{i \in \mathcal N} D_i(P)$ as well, and therefore $(\bigcap_{i \in \mathcal N} D_i, \vec{u}) \models P(x_1, \ldots, x_q)$. As to the inductive case for $\phi = \psi \land \psi'$, suppose that $(D_i, \vec{u}) \models \phi$, that is, $(D_i, \vec{u}) \models \psi$ and $(D_i, \vec{u}) \models \psi'$ for every $i \in \mathcal N$. By induction hypothesis we obtain that $(\bigcap_{i \in \mathcal N} D_i, \vec{u}) \models \psi$ and $(\bigcap_{i \in \mathcal N} D_i, \vec{u}) \models \psi'$, i.e., $(\bigcap_{i \in \mathcal N} D_i, \vec{u}) \models \phi$.
Finally, if $(D_i, \vec{u}) \models \forall x \psi$ for every $i \in \mathcal N$, then for all $v \in \textit{adom}(D_i)$, $(D_i, \vec{u} \cdot v) \models \psi$. In particular, for all $v \in \textit{adom}(\bigcap_{i \in \mathcal N} D_i)$, $(D_i, \vec{u} \cdot v) \models \psi$ for every $i \in \mathcal N$, and by induction hypothesis, for all $v \in \textit{adom}(\bigcap_{i \in \mathcal N} D_i)$, $(\bigcap_{i \in \mathcal N} D_i, \vec{u} \cdot v) \models \psi$, i.e., $(\bigcap_{i \in \mathcal N} D_i, \vec{u}) \models \forall x \psi$. As a result, $\vec{u} \in ans(\bigcap_{i \in \mathcal N} D_i, \phi)$. \end{proof} A result symmetric to Lemma~\ref{universal} holds for language $\L^+_{\exists}$ and unions: \begin{lemma} \label{existential2} Let the aggregator $F$ be the union and let the query language be $\L^+_{\exists}$. Then, the lifted aggregator $F^*$ is grounded. \end{lemma} \begin{proof} We prove that $ans(\bigcup_{i \in \mathcal N} D_i,\phi) \subseteq \bigcup_{i \in \mathcal N} ans(D_i, \phi)$. So, if $\vec{u} \in ans(\bigcup_{i \in \mathcal N} D_i,\phi)$ then $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$. We now prove by induction on $\phi \in \L^+_{\exists}$ that if $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$, then for some $i \in \mathcal N$, $(D_i, \vec{u}) \models \phi$. As to the base case for $\phi = P(x_1, \ldots, x_q)$ atomic, $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models P(x_1, \ldots, x_q)$ iff $\vec{u} \in \bigcup_{i \in \mathcal N} D_i(P)$, iff $\vec{u} \in D_i(P)$ for some $i \in \mathcal N$. In particular, $(D_i, \vec{u}) \models P(x_1, \ldots, x_q)$ as well, and therefore $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i, \phi)$. As to the inductive case for $\phi = \psi \lor \psi'$, suppose that $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \phi$, that is, $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \psi$ or $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \psi'$.
In the first case, by induction hypothesis we have that $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i, \psi)$ i.e., for some $i \in \mathcal N$, $(D_i, \vec{u}) \models \psi$, and therefore $(D_i, \vec{u}) \models \phi$. Hence, $\vec{u} \in ans(D_i, \phi)$ for some $i \in \mathcal N$, that is, $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i, \phi)$. The case for $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \psi'$ is symmetric. Finally, if $(\bigcup_{i \in \mathcal N} D_i, \vec{u}) \models \exists x \psi$, then for some $v \in \textit{adom}(\bigcup_{i \in \mathcal N} D_i)$, $(\bigcup_{i \in \mathcal N} D_i, \vec{u} \cdot v) \models \psi$. In particular, by induction hypothesis, $\vec{u} \cdot v \in \bigcup_{i \in \mathcal N} ans(D_i, \psi)$, that is, $( D_i, \vec{u} \cdot v) \models \psi$ for some $i \in \mathcal N$. Further, since $\psi$ is a positive formula, $v \in \textit{adom}(D_i)$, and therefore, $(D_i, \vec{u} \cdot v) \models \phi$, i.e., $\vec{u} \in \bigcup_{i \in \mathcal N} ans(D_i, \phi)$. \end{proof} To conclude this section we discuss the results obtained so far. We said that Lemma~\ref{existential} can be seen as a (partial) answer to Question~\ref{quest1}. Similarly, Lemmas~\ref{universal} and~\ref{existential2} are related to Question~\ref{quest2}. Results along the lines of Lemmas~\ref{existential}-\ref{existential2} may find application in efficient query answering: it might be that in some cases, rather than querying the aggregated database $F(\vec{D})$, it is more efficient to query the individual instances $D_1, \ldots, D_n$ and then aggregate the answers. In such cases it is crucial to know which answers are preserved by the different aggregation procedures. The results provided in this section are intended as a first, preliminary step in this direction.
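The behavior discussed in this section can be checked mechanically on small instances. The sketch below is illustrative Python, not part of the formal framework: an instance is encoded as a dictionary from relation names to sets of tuples, a query as a nested tuple, and quantifiers range over the active domain.

```python
from itertools import product

def adom(db):
    """Active domain: every constant occurring in some tuple of the instance."""
    return {c for tuples in db.values() for t in tuples for c in t}

def union(d1, d2):
    return {P: d1.get(P, set()) | d2.get(P, set()) for P in d1.keys() | d2.keys()}

def intersection(d1, d2):
    return {P: d1.get(P, set()) & d2.get(P, set()) for P in d1.keys() | d2.keys()}

def holds(db, q, s):
    """Satisfaction (D, s) |= q, with quantifiers ranging over the active domain."""
    if q[0] == 'atom':                       # ('atom', P, (x1, ..., xq))
        return tuple(s[x] for x in q[2]) in db.get(q[1], set())
    if q[0] == 'or':
        return holds(db, q[1], s) or holds(db, q[2], s)
    if q[0] == 'and':
        return holds(db, q[1], s) and holds(db, q[2], s)
    if q[0] == 'exists':                     # ('exists', x, subquery)
        return any(holds(db, q[2], {**s, q[1]: u}) for u in adom(db))
    if q[0] == 'forall':                     # ('forall', x, subquery)
        return all(holds(db, q[2], {**s, q[1]: u}) for u in adom(db))
    raise ValueError(q)

def ans(db, free, q):
    """All tuples over the active domain satisfying q on its free variables."""
    return {u for u in product(sorted(adom(db)), repeat=len(free))
            if holds(db, q, dict(zip(free, u)))}
```

On the instances of Example~\ref{ex1}, this reproduces both the commutation of unions for the positive existential query $\exists y P(x,y)$ and the failure of intersections for $\forall y P(x,y)$.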
\section{Conclusions and Related Work}\label{sec:conclusions} In this paper we have proposed a framework for the aggregation of conflicting information coming from multiple sources in the form of finite relational databases. We proposed a number of aggregators inspired by the literature on social choice theory, and adapted a number of axiomatic properties. We then focused on two natural questions which arise when dealing with the aggregation of databases. First, we studied what languages for integrity constraints are lifted by some of the rules we proposed, i.e., what constraints are true in the aggregated database supposing that all individual inputs satisfy the same constraints. Second, we investigated first-order query answering in the aggregated databases, characterising some languages for which the aggregation of the answers to the individual databases corresponds to the answer to the query on the aggregated database. Our initial results shed light on the possible use of choice-theoretic techniques in database merging and integration, and open multiple interesting directions for future research. In particular, the connections to the literature on aggregation and merging can be investigated further. First, section~\ref{sec:characterisations} showcased results for which database aggregation behaves similarly to binary aggregation with integrity constraints (see \cite{GrandiEndrissAIJ2013}), but pointed out some crucial differences. In particular, there are natural classes of integrity constraints used in databases for which the equivalent in propositional logic, the language of choice for binary aggregation, would be tedious and lengthy. We were able to provide initial results on their preservation through aggregation. Second, the recent work of Endriss and Grandi \cite{EndrissGrandiAIJ2017} is also strongly related to our contribution.
Since graphs are a specific type of relational structures, our work directly generalises their graph aggregation framework to relations of arbitrary arity. However, the specificity of their setting allows them to obtain very powerful impossibility results, which are yet to be explored in the area of database aggregation. Third, to the best of our knowledge the problem of aggregated query answering is new in the literature on aggregation, albeit a similar problem has been studied in the aggregation of argumentation graphs \cite{ChenEndrissTARK2017}, a setting closer to that of graph aggregation. This direction also deserves further investigation.
\chapter{Motivation} \input{C1} \input{C2} \chapter{Zero Mode Counting via Sheaf Cohomology} \label{chapter:MasslessSpectraAndSheafCohomology} \input{C3} \chapter{From F.P. Graded \texorpdfstring{$\mathbf{S}$}{S}-Modules to Zero Mode Counting} \label{chapter:DetailsOnFPGradedSModules} \input{C4} \chapter{Toric F-Theory GUT-Models} \label{chapter:GUTModels} \input{C5} \input{C6} \input{C7}
\subsection{Key simulation improvements} Evaluation of the signal processing relies, in part, on an improved signal and noise simulation. The signal simulation has two key improvements compared to prior implementations. Previous implementations~\cite{larsoft} adopted a simplified model to compute the induction charge from all drifting electrons, in which the field response was extracted from an entire track at a single angle. An additional simplification was made to use a common, average field response independent of the transverse distance between the wire and the drifting charge. The improved simulation addresses these two issues by employing the field response calculations described in section~\ref{sec:field_resp}. It supports a long-range induction model that more correctly incorporates angle-dependent effects and records not only the current induced on the closest wire but also the current induced on the ten wires to either side. This allows for the accounting of contributions to the total induced current down to the sub-percent level. The simulation also takes into consideration the fine-grained variability that exists for different possible paths within a single wire region. The second improvement relates to the treatment of the complexity inherent in the initial distribution of energy depositions by particles interacting in the LAr and its resulting distribution of ionization electrons. In past implementations, the ionization electrons were grouped into spatial bins and the contribution to the induced current was extracted from each bin. To improve on this, the new simulation retains the identity of each energy deposition produced by a given step of the particle tracking simulation. The effects of diffusion and absorption are applied to each ionization point by associating with it a 2D Gaussian charge distribution. The distributions for all ionization points are kept distinct during drifting until they reach the wire planes.
There, they are sampled and interpolated onto the regular 2D grid defined in the transverse and longitudinal directions by the field response functions described in section~\ref{sec:field_resp}. At this point, a fine binning (\SI{0.5}{\us} $\times$ \SI{0.3}{\mm} for the MicroBooNE implementation) is applied. The simulation improvements described above highlight an important connection between the choice of signal simulation and the signal processing models which may be employed. The former primarily applies a convolution of the ionization charge distribution and the field (and electronics) response functions. The latter primarily performs their deconvolution. Prior simulation implementations used the same kernel in both the convolution and deconvolution processes. In the absence of noise and up to the spatial binning simulated, the prior simulation approach produces an exact recovery of the initial ionization charge distribution, which is not representative of real detector effects. The new approach supports a better approximation of reality by allowing the variation of field response across a wire region to be accounted for in the convolution. However, this variation cannot be accounted for in the deconvolution as there exists no \textit{a priori}\xspace knowledge of the fine-grained charge distribution. Quantification of this effect, i.e. the realistic performance of signal processing, is one of the goals of the evaluation shown here. \subsection{Simulation overview} \label{sec:sim} The TPC detector simulation spans detector physics ranging from measures of energy loss by particles traversing the detector to the digitized waveforms produced from a model of the front-end electronics. A data-driven, analytical simulation of the inherent electronics noise is also performed. Figure~\ref{fig:simevent} shows one example event produced by this simulation. 
\begin{figure}[htbp] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.99\textwidth]{figs/sim5GeVMuADC.png} \caption{Simulated waveforms.}\label{fig:simevent_wf} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.99\textwidth]{figs/sim5GeVMuCharge.png} \caption{Extracted charge distributions.}\label{fig:simevent_charge} \end{subfigure} \caption[Simulated isochronous event.]{Simulated waveforms and reconstructed charge from a nearly isochronous track of a GEANT4-based 5-GeV muon in liquid argon. The X-axis represents the channel (wire) from the three wire planes of MicroBooNE. The Y-axis represents the sampled time ticks (per \SI{0.5}{\micro\second}). (a) Simulated waveforms including inherent electronics noise in units of ADC counts. (b) Reconstructed charge in units of $e^-$ corresponding to the simulated waveforms in (a).} \label{fig:simevent} \end{figure} The signal simulation, i.e. the ADC waveform on a given channel, \begin{equation} \label{eq:sim-convolution} M = (Depo \otimes Drift \otimes Duct + Noise) \otimes Digit, \end{equation} is conceptually a convolution of five functions: \begin{description} \item[$Depo$] represents the initial distribution of the ionization electrons created by energy depositions in space and time. \item[$Drift$] represents a function that transforms an initial charge cloud to a distribution of electrons arriving at the wires. Physical processes related to drifting, including attenuation due to impurities, diffusion and possible distortions to the nominal applied electric field, are applied in this function. \item[$Duct$] is a family of functions, each of which is a convolution $F \otimes E$ of the field response function $F$ associated with the sense wire and the element of drifting charge, and the electronics response function $E$ corresponding to the shaping and amplification of the front-end electronics.
\item[$Digit$] models the digitization electronics according to a given sampling rate, resolution, dynamic voltage range, and baseline offset, resulting in an ADC waveform. \item[$Noise$] simulates the inherent electronics noise by producing a voltage-level waveform from a random sampling of a Rayleigh distributed amplitude spectrum and uniformly distributed phase spectrum. The noise spectra used are from measurements with the MicroBooNE detector after software noise filters, which have excess (non-inherent) noise effects removed. \end{description} These functions are defined over broad ranges and with fine-grained resolution. The characteristic scales are set by the variability and extent of the field response functions and the sampling rate of the digitizer. Their range is set by the size of the detector, which is related to the length of time over which one exposure is digitized. With sub-millimeter field variability, \SI{0.5}{\micro\second} sampling, a detector of several meters in extent, and a readout over several milliseconds, the size of each dimension of these functions is $10^3 - 10^5$. A naive implementation of the required convolution is not possible with commercial computing hardware. The remainder of this section describes each term in equation~\ref{eq:sim-convolution}, including the methods employed to reduce the dimensionality. \subsubsection{Initial distribution of charge depositions} \label{sec:sim-depo} The simulation takes as input a distribution of initial charge depositions: free ionization electrons from charged particles traversing the liquid argon medium. In order to reduce the size of the input, the simulation requires charge depositions to be defined as a set of discrete, localized depositions \begin{equation} \label{eq:sim-depo-def} Depo_i = (q_i, t_i, \vec{r}_i), \end{equation} where $q_i$ is the number of ionization electrons at the given point $(\vec{r}_i,t_i)$ in space and time.
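For concreteness, a deposition record of the form in equation~\ref{eq:sim-depo-def}, together with the conversion from an energy loss to a number of ionization electrons, might be sketched as follows. This is an illustrative sketch only: the W-value and Fano factor are commonly quoted argon numbers used here as assumptions, and the flat recombination survival fraction `recomb` is a hypothetical stand-in for the dE/dx-dependent recombination models cited in the text.

```python
import math
import random

W_ION = 23.6e-6   # assumed MeV per ionization pair in argon (illustrative)
FANO = 0.107      # assumed Fano factor for argon (illustrative)

def make_depo(dE, t, r, recomb=0.7, rng=random):
    """Build a deposition Depo_i = (q_i, t_i, r_i) from an energy loss dE (MeV)
    deposited at time t and position r. Fluctuations are applied in a Gaussian
    approximation; `recomb` is a hypothetical flat recombination fraction."""
    n_mean = dE / W_ION                                    # mean ion pairs
    n = rng.gauss(n_mean, math.sqrt(FANO * n_mean))        # Fano-suppressed spread
    q = rng.gauss(n * recomb, math.sqrt(n * recomb * (1.0 - recomb)))
    return (max(0, round(q)), t, r)
```

A 2.1 MeV loss then yields on the order of $6\times10^4$ surviving electrons under these assumed numbers.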
The scale at which charge deposition is localized must be chosen to balance computation time and to ensure that discreteness is smoothed out by subsequent convolution terms. In practice, this limitation is well matched to the form of results produced by tracking simulations such as those based on GEANT4~\cite{geant4} implemented in LArSoft (Liquid Argon Software)~\cite{larsoft}. The user may directly supply $q_i$ in terms of ionization electrons or in terms of the amount of energy lost by particles over a given step of the tracking simulation (i.e. $dE/dx$ over distance $dx$). If the latter is provided, the simulation will apply the appropriate Fano factor~\cite{fano_factor}, recombination~\cite{recombination1, recombination2, recombination3} and their associated statistical fluctuations in generating the ionization charge. \subsubsection{Drift transport and physics} \label{sec:sim-drift} The simulation supplies a full set of physical processes related to the transport of ionization electrons through liquid argon under the influence of an applied electric field. These processes include electron attenuation (corresponding to electron attachment on impurities in liquid argon), longitudinal and transverse diffusion~\cite{Li:2015rqa}, and transport. The default transport is performed neglecting distortions due to the build up of space charge~\cite{space-charge}. Initial depositions are typically provided as points, but after drifting they acquire characteristic widths ($\sigma_{\transverse},\sigma_{\longitudinal}$) in the transverse direction and along the nominal drift, respectively, forming a 3D Gaussian cloud \begin{equation} \label{eq:sim-drift} Depo_i \otimes Drift \to Depo_i(q_i,t_i+t_{drift}, \vec{r}_i\rvert_{x=x_{rp}}, \sigmaTi{i}, \sigmaLi{i}), \end{equation} where $\sigma \cong \sqrt{2\cdot D\cdot t_{drift}}$, $t_{drift}$ is the drift time, and $D$ is the transverse ($D_T$) or longitudinal ($D_L$) diffusion coefficient, respectively~\cite{Li:2015rqa}.
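A numerical sketch of the drift transform in expression~\ref{eq:sim-drift}, assuming a uniform field along $x$ and including the attenuation factor $e^{-t_{drift}/\tau}$ for attachment on impurities; the parameter names, units (cm and seconds), and numerical values used below are illustrative assumptions rather than detector parameters:

```python
import math

def drift_to_response_plane(depo, x_rp, v_drift, D_T, D_L, tau):
    """Transport a deposition (q, t, (x, y, z)) to the response plane at
    x = x_rp: attenuate by exp(-t_drift / tau), delay by t_drift, and
    attach Gaussian widths sigma = sqrt(2 * D * t_drift)."""
    q, t, (x, y, z) = depo
    t_drift = (x - x_rp) / v_drift               # drift time to the plane
    q_surv = q * math.exp(-t_drift / tau)        # attachment on impurities
    sigma_T = math.sqrt(2.0 * D_T * t_drift)     # transverse width
    sigma_L = math.sqrt(2.0 * D_L * t_drift)     # longitudinal width
    return (q_surv, t + t_drift, (x_rp, y, z), sigma_T, sigma_L)
```

For example, with an assumed drift speed of $1.1\times10^{5}$ cm/s and a 6 ms electron lifetime, a deposition 100 cm from the response plane arrives after roughly 0.9 ms with millimeter-scale Gaussian widths.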
The transport term of the convolution transforms each deposition as in expression~\ref{eq:sim-drift} independently until the center of the resulting distribution of ionization electrons reaches a \textit{response plane}. The response plane is normal to the nominal drift direction which is along the $x$ coordinate axis and is located near the wire planes. Its location $x_{rp}$ exactly coincides with the plane containing the starting locations (\SI{10}{\cm}) of the drift paths on which the field response functions are defined/simulated. Diffusion gives the initial deposition a finite extent in the transverse and longitudinal directions. Each deposition is then trifurcated for each wire plane. The transverse location and extent are transformed to measures $\rho$ along the pitch direction for the given plane. The longitudinal extent is transformed to a time-based measure using the nominal drift speed relative to the time at which the center of the deposition reached the response plane. This trifurcation transforms a single 3+1 space-time dimensional problem into three smaller 1+1 problems. The single remaining space coordinate is the transverse measure along the pitch direction of each wire plane. \subsubsection{Detector field and electronics response} \label{sec:sim-duct} The $Duct$ term is itself a convolution $F_j \otimes E$ of the field and electronics responses, respectively. The first function $F_j$ describes the current induced on a sense wire due to the passage of a nearby unit charge along a path $j$ starting a distance $\rho_j$ in the wire pitch direction. Given a wire at $\rho = 0$, these paths are shown in figure~\ref{fig:fielddemo}. The field response employed in the simulation is a 2D Garfield calculation, meaning that it is independent of the position along the wire direction for each wire plane. However, this 2D calculation is meant to serve as a good approximation of the realistic 3D field response averaged along the wire direction.
The second function $E$ includes two parts, as shown in figure~\ref{fig:resp1}, corresponding to the pre-amplifier and two RC filters. In addition, an intermediate gain of 1.2 is included. $E$ determines the voltage response of the front-end amplifiers to the instantaneous application of a unit charge at their input. In simulation, the electronics response is taken to be constant for all channels. The convolution $F_j \otimes E$ is independent of the distribution of drifted charge and can be calculated separately. Results of such a calculation are shown in figure~\ref{figs:overall_response}. Instead of performing the convolution through a direct integration in the time domain, a discrete Fourier transform (DFT) is applied to each term, followed by a multiplication in frequency space and a final inverse DFT. In the simulation, however, this result enters subsequent convolutions, and so the intermediate frequency-space result is cached for later reuse. The sampling period is fixed according to the electronics digitization in $E$, as the instantaneous current in $F_j$ must be integrated over each sample. In the discussion below, this convolution of field and electronics responses is referred to generically as a response function. Given a point charge, the changes of the waveform resulting from the longitudinal Gaussian diffusion and electronics responses (preamplifier, RC filters) are demonstrated in figure~\ref{fig:stepbystep}. Note that the RC filters have an effect on the collection plane's signal, though it is relatively small (less than \SI{1}{\percent}). The tail of the RC filter also shows up in the Y plane waveform of figure~\ref{fig:simevent_wf}. The simulation has the framework to incorporate multiple responses, e.g., different electronics responses corresponding to normal channels or mis-configured channels, as well as different field responses corresponding to shorted wire regions.
These specific responses are relevant to real MicroBooNE data as mentioned in~\cite{noise_filter_paper} and important in data/MC comparisons as elaborated in~\cite{SP2_paper}. \begin{figure}[htbp] \centering \includegraphics[width=0.85\textwidth]{figs/Ydemo.pdf} \caption{Breakdown of a full simulation of a point source of charge. The Y plane waveforms are taken as an example. The ADC baselines for Y plane are set to 409 ADC counts, \SI{200}{\mV}. An inset figure magnifies the long tail from the RC filters. The maximum magnitude of the negative tail is roughly \SI{0.7}{\percent} of the signal peak. Diffusion is a 2D smearing effect and the plot just shows the result of the central wire.} \label{fig:stepbystep} \end{figure} \subsubsection{Performing the convolution} As described above, the drifted charge distribution and response functions now cover a discrete two dimensional domain, where $\rho_j$ is a measure in the pitch direction relative to a given wire. That is, $j$ spans the number of nearby drift paths. The simulation has drift paths which cover the central wire region and ten wire regions to either side with ten paths per region. This gives a total of 210 paths per plane centered on each wire of interest. This span in $\rho$ is used to identify all depositions that have any portion of their total extent contained therein. It is worth remembering that these depositions are in a parameterized form following a 2D Gaussian distribution. These distributions must be discretized in order to be convolved with the corresponding discrete response function. There are two aspects to this discretization which are important to its performance and correctness. The first is due to the response functions covering more transverse span than just a single wire region. All depositions falling in the nominal 21 wire regions must be discretized for each wire. Most of these same depositions will also contribute to the neighboring wires.
Rediscretizing for each wire would entail a factor of 20 redundant calculations. Discretizing the entire domain would require a prohibitive amount of computer memory. To overcome computing limitations, the wires of a plane are processed in order of their pitch location starting from the smallest $\rho$. As the calculation advances to the next wire, it frees the memory associated with the wire region of the lowest $\rho$ and discretizes one new wire region at the high end. The second aspect relates to correctness. The coverage of $\rho$ by the response functions is very fine-grained. Neighboring drift paths are separated by \SI{0.3}{\milli\meter} (1/10 of the wire pitch in MicroBooNE). At this scale, the path-to-path response variation is typically small, within \SI{10}{\percent}. However, particularly for the induction planes, at any scale of such coverage of $\rho$ there is an aliasing effect that will occur for elongated charge distributions which fall in a line close to perpendicular to the wire planes. As shown in figure~\ref{fig:interp}, this effect is in a small but non-negligible phase space that must be mitigated. An interpolation is performed using the two nearest response functions at both $\rho_j$ and $\rho_{j+1}$. In order to calculate the interpolated response, for any wire plane using its own Cartesian coordinates, the deposited charge after drifting can be expressed as $q(t, z)$. This charge is to be convolved with the field response $F(t,x,y,z)$, for which $x$ is fixed at the starting location of the virtual response plane and $y$ is averaged out.
Consider $\rho_j < z < \rho_{j+1}$ and the two field responses $F_1(t) = F_j(t, \rho_j)$ and $F_2(t) = F_{j+1}(t, \rho_{j+1})$, the convolution is performed as follows: \begin{equation} \label{eq:field_interp} \int^{\rho_{j+1}}_{\rho_j} \{ q(t, z)\otimes(F_1(t)\cdot u(z) + F_2(t)\cdot (1-u(z))) \} dz, \end{equation} where $u(z)$ is the weighting function for interpolation between two calculated paths of field response. Since transverse and longitudinal diffusion are independent processes, $q(t, z)$ can be factorized as $q(t)\cdot Gaus(z)$. The width and center of the function $Gaus$ depend on the initial deposition location of the charge. Therefore, expression~\ref{eq:field_interp} can be simplified into an integral over $z$ and a convolution in $t$, \begin{equation} \label{eq:field_interp2} \int^{\rho_{j+1}}_{\rho_j} \{Gaus(z)u(z)\} dz \cdot q(t) \otimes F_1(t) + \int^{\rho_{j+1}}_{\rho_j} \{Gaus(z)(1-u(z))\}dz \cdot q(t) \otimes F_2(t). \end{equation} In practice, any charge deposition at $z$ between path $j$ and $j+1$ is redistributed to the two positions $\rho_j, \rho_{j+1}$ according to the weight $ \int^{\rho_{j+1}}_{\rho_j} \{Gaus(z)u(z)\} dz$. The weighting function $u(z)$ has two options at present, \begin{align} \label{eq:weight} {\rm Average:} &~ u(z) = 0.5,{~\rm or}\\ {\rm Linear:} &~ u(z) = \frac{z-\rho_{j}}{\rho_{j+1}-\rho_j}. \end{align} The weight integral can be analytically derived using a Gaussian function and the error function. Linear interpolation more closely reflects the underlying physics, as illustrated in figure~\ref{fig:interp}, because it leads to a continuous variation of the field response and smooth waveforms at the paths of each calculated field response.
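The weight integrals can indeed be written in closed form with the Gaussian and the error function. A sketch for the linear option follows; the function names are ours, and the only identity used beyond the error function is $\int_a^b (z-\mu)\,Gaus(z)\,dz = \sigma^2\,(Gaus(a)-Gaus(b))$:

```python
import math

def gauss_pdf(z, mu, sigma):
    """Normalized Gaussian density Gaus(z)."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gauss_mass(a, b, mu, sigma):
    """Integral of the Gaussian density over [a, b], via the error function."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((b - mu) / s) - math.erf((a - mu) / s))

def linear_weights(a, b, mu, sigma):
    """Closed-form w1 = int_a^b Gaus(z) u(z) dz and w2 = int_a^b Gaus(z) (1 - u(z)) dz
    for the linear weighting u(z) = (z - a) / (b - a)."""
    p = gauss_mass(a, b, mu, sigma)
    # int_a^b (z - mu) Gaus(z) dz = sigma^2 * (Gaus(a) - Gaus(b))
    m1 = sigma ** 2 * (gauss_pdf(a, mu, sigma) - gauss_pdf(b, mu, sigma))
    w1 = ((mu - a) * p + m1) / (b - a)
    return w1, p - w1
```

When the Gaussian is centered midway between the two paths, the two weights are equal by symmetry, as expected.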
\begin{figure}[htbp] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.99\textwidth]{figs/Interp_1.pdf} \caption{Average.} \label{fig:interp_1} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.99\textwidth]{figs/Interp_2.pdf} \caption{Linear interpolation.} \label{fig:interp_2} \end{subfigure} \caption{Simulated baseline-subtracted waveforms of an induction plane large angle track, near perpendicular to the wire plane. The track covers 6 paths / 5 sub-pitches within a wire region. The inset figure magnifies the waveform in the dashed box, displaying the fine grained field response as well as the cancellation of the bipolar field response at the paths of each calculated field response (boundaries of sub-wire regions). (a) Average weighting, leading to improper cancellation of the bipolar response. (b) Linear interpolation, leading to proper cancellation of the bipolar response.} \label{fig:interp} \end{figure} Following $Duct$, a final digitization step is applied, currently supplying a simple ADC model. It is parameterized by the ADC resolution, full scale voltage range and baselines to apply to the input voltage waveforms. For MicroBooNE, a 2-MHz, 12-bit ADC is used with the dynamic voltage range up to \SI{2000}{\mV}. \subsubsection{Truth values}\label{sec:MCtruth} A new set of input quantities of the simulation (MC truth) is constructed to compare the ionization electrons generated by Geant4 and propagated to the wire planes to those reconstructed after signal processing. MC truth retains track ID and PDG code (particle species) information (from the generator) to facilitate the assessment of the efficacy of downstream reconstruction algorithms. As described in section~\ref{sec:method_filter}, software filters are applied in signal processing to extract the ionization signal and mitigate high frequency noise. 
These filters smear the charge signal in the time and wire dimensions, or longitudinally and transversely, respectively. To treat the MC truth on an equal footing with the output of signal processing, the same high frequency time and wire software filters are applied directly to the ionization electrons as a function of time and wire in the construction of the MC truth signal. \subsubsection{Noise simulation}\label{sec:noise_model} The signal simulation described above produces ADC-level, time-domain waveforms in each channel. To simulate detector data, waveforms arising from inherent electronics noise are added to the signal waveform. An analytic method to simulate the noise is introduced below; it is applicable to a wide range of noise sources beyond the inherent electronics noise. This method was motivated by~\cite{noise_Milind}. The core of this method is to simulate the stochastic behavior of the noise in the frequency domain. The stochastic effect originates from the random occurrence of each noise pulse (e.g. due to thermal fluctuation), and the method is applicable under the condition that the occurrence times of the noise pulses are uniformly distributed. With this condition, the noise behavior in the frequency domain follows a random walk process, and the analytic mathematical description of the random walk can be used to model the noise. A single parameter, the mean frequency amplitude, is required for the noise simulation at each frequency. This parameter is extracted from data. Details of the method are described below. In general, the inherent electronics noise can be categorized into several types: white noise, flicker noise (pink noise), and Brownian noise (red noise)~\cite{op_amp_handbook, IEEE_noise, noise_webcast}. The corresponding power spectral densities are constant, proportional to $1/f$, and proportional to $1/f^2$, respectively.
The total noise divergence (at infinite frequency for white noise and at infinitesimal frequency for the latter two types of noise) does not occur because the detector devices themselves have cutoffs at low and high frequencies. Meanwhile, the power spectrum can be altered by the response of the dedicated electronics device. Given any kind of inherent electronics noise pulse, assume its function in the time domain is $i(t)$ and in the frequency domain is $I(\omega)$. For white noise $I(\omega)$ is $G(\omega) \cdot 1$ and for flicker noise $G(\omega) \cdot \frac{1}{\sqrt{\omega}}$, where $G(\omega)$ includes the normalization for a single noise pulse as well as the response of the electronics device. Consider a train of noise pulses: the function in time domain is \begin{equation} f(t) = \sum^{N}_{n=1}q_n \cdot i(t-t_n), \end{equation} and in frequency domain is \begin{equation}\label{eq:noisefreq} F(\omega) = \sum^{N}_{n=1}q_n \cdot I(\omega)\cdot e^{-j \omega t_n}, \end{equation} where $q_n$ is the sign (+ or -) of the noise pulse, $n$ is the index, $j$ is the imaginary unit, and $t_n$ is the occurrence time of each noise pulse, uniformly distributed in time. Absorbing $q_n$ and the phase of $I(\omega)$ into the phase term, equation~\ref{eq:noisefreq} can be rewritten as \begin{equation}\label{eq:randomwalk} F(\omega) = \sum^{N}_{n=1}|I(\omega)| \cdot e^{-j\theta_n(\omega)}, \end{equation} where $\theta_n(\omega)$ is uniformly distributed in $[0, 2\pi)$ if\footnote{In practice, the discrete Fourier transform naturally discards the information if $\omega \cdot t_N < 2\pi$ and the analysis technique, e.g. region of interest (ROI), can further suppress the low frequency noise. The improper simulation of low frequency noise which does not meet the condition $\omega \cdot t_N \gg 2\pi$ can be ignored.} $\omega \cdot t_N \gg 2\pi$. 
In the 2D complex plane, given a frequency $\omega$, equation~\ref{eq:randomwalk} follows a 2-dimensional random walk with the angle $\theta_n$ uniformly distributed over $[0, 2\pi)$. When the number of steps, $N$, is large enough, the probability density distribution of the distance ($r$) from the origin to the end point can be analytically described by~\cite{randomwalk} \begin{equation}\label{eq:rayleigh} R(r; \sigma) = \frac{r}{\sigma^2}e^{-\frac{r^2}{2\sigma^2}}, \end{equation} where $R(r; \sigma)$ is the Rayleigh distribution with the mean value of $\sigma\cdot \sqrt{\pi/2}$ and $\sigma^2 = 0.5\cdot N \cdot |I(\omega)|^2$. Then, we can represent equation~\ref{eq:randomwalk} in polar form as \begin{equation}\label{eq:noiserayleigh} F(\omega) = r(\omega) \cdot e^{-j\alpha_{\omega}}, \end{equation} where $r(\omega)$ follows the Rayleigh distribution and $\alpha_{\omega}$ is uniformly distributed in $[0, 2\pi)$. If $N$ is large enough, the $\alpha_{\omega}$'s are mutually independent for different frequencies. Equation~\ref{eq:rayleigh} is equivalent to two independent Gaussian distributions on the real and imaginary axes with the same standard deviation $\sigma$, which is the only parameter in the corresponding Rayleigh distribution. This feature can be employed to simulate the random walk (noise) and, because of the additive property of the Gaussian distribution, it implies that the sum of multiple random walks with different step lengths can also be described by equation~\ref{eq:noiserayleigh}. As a result, all sources of noise can be summed, and for each frequency the simulation can be done by sampling the Rayleigh distribution, which has a single parameter $\sigma_{total}$, to randomize the frequency amplitude with a uniformly distributed phase from 0 to 2$\pi$. The $\sigma_{total}(\omega)$, i.e. the mean frequency amplitude divided by $\sqrt{\pi/2}$, can be extracted from the data.
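The sampling procedure above can be sketched as follows (Python with NumPy). The flat mean-amplitude spectrum is only a placeholder for a measured spectrum, and the tick count and amplitude scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_ticks = 9600                        # assumed number of time samples
n_freq = n_ticks // 2 + 1             # number of rfft frequency bins

# Placeholder mean frequency amplitude per bin (a measured spectrum would
# be used in practice); sigma is the single Rayleigh parameter per bin.
mean_amp = np.full(n_freq, 10.0)
sigma = mean_amp / np.sqrt(np.pi / 2.0)

# Sample the amplitude from a Rayleigh distribution and the phase
# uniformly in [0, 2*pi) for every frequency bin.
r = rng.rayleigh(scale=sigma)
phase = rng.uniform(0.0, 2.0 * np.pi, n_freq)
spectrum = r * np.exp(1j * phase)

# An inverse Fourier transform gives the time-domain noise waveform.
noise = np.fft.irfft(spectrum, n=n_ticks)
```

Equivalently, one could draw two independent Gaussians for the real and imaginary parts of each bin; the Rayleigh-plus-phase form used here is the same distribution in polar coordinates.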
In this paper, the mean frequency amplitude was extracted from MicroBooNE data as shown in figure~\ref{fig:noiseamp}. An inverse Fourier transformation of the randomized noise frequency spectrum will provide the final noise waveform in the time domain. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{figs/Digitized_Amplitude.pdf} \caption{Mean frequency amplitude of the inherent electronics noise for different wire lengths is deduced from MicroBooNE data using ADC waveforms after noise filtering and mis-configured channel corrections~\cite{noise_filter_paper}. A small constant term, which is not associated with the pre-amplifier, is subtracted. The length of \SI{233}{\cm} is associated with collection plane wires, and the rest are associated with induction plane wires. The mean amplitude for intermediate wire lengths is obtained by interpolation.} \label{fig:noiseamp} \end{figure} \subsection{Quantitative evaluation of the signal processing}\label{sec:quan_eva} A qualitative demonstration of the signal processing performance is shown in figure~\ref{fig:simevent}. Several features of the raw and reconstructed tracks can be seen: the fine structure of the charge depositions along the track, the short tracks off the main trajectory, and the high charge density of the track in the collection plane, mainly due to the small angle of the track with respect to the Y wires. In this section, we present a quantitative evaluation of the signal processing. This evaluation addresses the intrinsic charge smearing in the time and wire dimensions, charge matching among the three wire planes, and reconstructed charge resolution, bias, and inefficiency for large angle tracks. Understanding these effects is critical for subsequent event reconstruction. Below are the definitions of these quantities. \begin{description} \item{\textit{Charge}} -- the number of ionization electrons. For a track (line charge), a constant charge density is used.
\item{\textit{Time smearing}} -- the standard deviation of the Gaussian distribution of the deconvolved charge time spectrum on a wire, specifically associated with a point source of charge. \item{\textit{Wire smearing}} -- the fraction of the integrated deconvolved charge on a wire with respect to the total deconvolved charge over the entire range of fired wires, specifically associated with a point source of charge. \item{\textit{Charge bias}} -- the mean fractional difference between the total deconvolved charge and the true charge. For a track (line charge), the mean of the distribution of each wire's integrated deconvolved charge over a range of wires is used to calculate the charge bias with respect to the true charge within one wire pitch. \item{\textit{Charge resolution}} -- the standard deviation of the total deconvolved charge relative to the mean deconvolved charge. In analogy to the definition of charge bias, for a track (line charge) the distribution of each wire's integrated deconvolved charge over a range of wires is used. \item{\textit{Charge inefficiency}} -- specifically associated with tracks (line charges), the fraction of the wires which have zero deconvolved charge (no ROI found). Note that these wires will not be involved in the calculation of charge bias or charge resolution. \end{description} \subsubsection{Basic performance of the signal processing}\label{sec:basic_perf} The deconvolved results in the three planes for a point source of charge are shown in figure~\ref{fig:point-charge}. Ten thousand electrons were simulated one meter from the wire planes. No noise was included; only diffusion and response functions were simulated. The electronics responses in simulation and deconvolution are identical. The results for two extreme positions of the point source of charge are presented, at the wire and at the wire region boundary, respectively.
\begin{figure}[htbp] \centering \begin{subfigure}[]{1.0\textwidth} \includegraphics[width=0.95\textwidth]{figs/ReconPointChargeatwire.pdf} \caption{Point source at the wire (\SI{0.0}{\milli\meter} transverse position relative to the closest wire).} \label{fig:point-charge-wire} \end{subfigure} \begin{subfigure}[]{1.0\textwidth} \includegraphics[width=0.95\textwidth]{figs/ReconPointChargeatbound.pdf} \caption{Point source at the wire region boundary (\SI{1.5}{\milli\meter} transverse position relative to the closest wire).} \label{fig:point-charge-bound} \end{subfigure} \caption[Simulated point charge.]{Point source charge after signal processing. Charge is shared across wires on a given plane. The fractional charge recovery relative to $1\times10^4$ electrons is indicated by the numbers in the top row, negative charge by the former number and the positive charge by the latter one, rounded to 0.01. Relative charge per wire is indicated in the lower row.} \label{fig:point-charge} \end{figure} \textbf{Time smearing} is about 2.7 ticks and 2.3 ticks for induction and collection planes, respectively, largely dictated by the high frequency software filters in the signal processing. A one meter drift span provides a longitudinal/time smearing of about 2 ticks. \textbf{Wire smearing} is indicated by the numbers in the second row in figure~\ref{fig:point-charge}. More than \SI{60}{\percent} of charge is extracted by the closest wire if the point source of charge is close to the wire. For the induction plane, about 50\% of the charge is extracted by both the closest wire and the adjacent wire if the point source of charge is close to the boundary of the wire region, due to the long range of induction as shown in figure~\ref{fig:fielddemo}. For the collection plane, this effect is smaller due to the predominant collection signal on the closest wire. \textbf{Charge bias} is at the \SI{1}{\percent} level. 
In this case, it originates from the mismatch between the field response used in the signal simulation (fine-grained) and in the deconvolution (average response). Positive charge (a fraction of the distorted deconvolved charge spectrum below zero) can also be reconstructed due to this mismatch. Diffusion over the 1-m drift is applied, which smears the charge deposition and reduces the mismatch between the average response and the position-dependent response within one wire pitch. As a result, the extracted charge across the three wire planes is reasonably well-matched. Charge bias arising from the residual position dependence within the wire region is inevitable. The deconvolved charge spectrum of any event topology can be obtained through a convolution with the point-charge ``response'', i.e. the deconvolved charge spectrum of a point charge as shown in figure~\ref{fig:point-charge}. For instance, it is anticipated that the charge bias should also be at the \SI{1}{\percent} level for a track of charge. However, including the electronics noise has a significant impact on the performance, as explained in the following section. \subsubsection{Charge resolution due to electronics noise}\label{sec:cr_elecnoise} Given an energy deposition, the overall charge resolution originates from the fluctuation of the number of ionization electrons in the signal formation. Several effects are considered, including the statistical fluctuation in the production of ionization electrons, recombination with argon ions, and absorption during drift due to impurities in the liquid argon. The total statistical fluctuation can be ignored relative to the impact of electronics noise. For instance, a MIP track corresponds to $\sim10^4$ ionization electrons within one wire pitch.
The statistical fluctuation in production is $\sqrt{10^4\cdot 0.1} \approx 32$ and in drifting is $\sqrt{10^4 \cdot 0.7 \cdot (1-0.7)} \approx 46$, where $0.1$ is the typical Fano factor~\cite{fano_factor} and $0.7$ is the typical survival probability of ionization electrons considering recombination and absorption~\cite{recombination1, recombination2, recombination3}. By contrast, the equivalent noise charge (ENC) after deconvolution is $\sim$1k electrons. Thus, the electronics noise is the main contributor to the charge resolution. Before deconvolution, the ENC is roughly the same for all three wire planes (about 300$\pm$50 electrons irrespective of wire length), as shown in~\cite{noise_filter_paper} and illustrated in figure~\ref{fig:noiseamp}. After deconvolution, the noise-induced charge is magnified from the ENC value, especially by the bipolar field response of the induction planes. When performing the deconvolution in the frequency domain for the induction planes, the bipolar response appears in the denominator. This response is suppressed at low frequencies (see figure~\ref{fig:induction_field}), thus amplifying the low frequency noise. Figure~\ref{fig:noisecharge} presents the noise-induced charge after signal processing for the full waveform. For the induction plane, there is a very large contribution from the lowest frequencies. By contrast, the collection plane has a deconvolved charge distribution similar to the original noise waveform, apart from a unit conversion from electrons to ADC counts ($\sim$182 $e^-$/ADC).
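This low-frequency amplification can be reproduced with a toy model (Python with NumPy). A derivative-of-Gaussian pulse stands in for the bipolar induction response here, an assumed shape rather than the real field response; dividing a white-noise spectrum by it boosts the lowest usable frequency bins by an order of magnitude relative to the bin where the response peaks:

```python
import numpy as np

n = 2048
t = np.arange(n, dtype=float)

# Toy bipolar response: derivative of a Gaussian (hypothetical stand-in
# for the induction-plane field response); its spectrum vanishes at DC.
center, width = 100.0, 10.0
gauss = np.exp(-0.5 * ((t - center) / width) ** 2)
resp = -(t - center) / width**2 * gauss

R = np.fft.rfft(resp)
mag = np.abs(R)
amp = np.zeros_like(mag)
good = mag > 1e-3 * mag.max()        # regularize near-zero bins (e.g. DC)
amp[good] = 1.0 / mag[good]          # noise amplification factor 1/|R(omega)|
k_peak = int(np.argmax(mag))         # frequency bin where the response peaks

# Deconvolve a white-noise waveform: the low-frequency bins of the
# resulting "charge" spectrum are strongly boosted.
rng = np.random.default_rng(7)
S = np.fft.rfft(rng.normal(size=n)) * amp
```

The same division with the collection plane's unipolar response, whose spectrum does not vanish at low frequency, would leave the noise spectrum nearly flat, matching the qualitative contrast described above.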
\begin{figure}[htbp] \centering \begin{subfigure}[]{0.49\textwidth} \includegraphics[width=0.95\textwidth]{figs/Vnoisecharge.pdf} \caption{Induction plane.} \label{fig:noisecharge_a} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \includegraphics[width=0.95\textwidth]{figs/Wnoisecharge.pdf} \caption{Collection plane.} \label{fig:noisecharge_b} \end{subfigure} \caption{Noise-induced charge distribution after signal processing for the full waveform. The top row is in the time domain. The bottom row is in the frequency domain after FFT.} \label{fig:noisecharge} \end{figure} ROI finding aims to identify a ROI window based on the `true' signal; therefore, the charge resolution due to noise can be obtained by analysis of various ROI windows. Two types of charge resolution will be studied here. The first type is the total charge resolution within the entire ROI window, which is related to the energy reconstruction for each wire, i.e. $dE/dx$. The second type is the single bin charge resolution, where one bin corresponds to the minimum time unit for the subsequent event reconstruction (e.g. 1 bin may account for multiple time ticks). Figure ~\ref{fig:charge_res} illustrates the total charge resolution and bin charge resolution for each wire plane as a function of ROI window length. \begin{figure}[htbp] \centering \begin{subfigure}[]{0.49\textwidth} \includegraphics[width=1\textwidth]{figs/Totalchargeres.pdf} \caption{Total charge resolution.} \label{fig:charge_res_a} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \includegraphics[width=1\textwidth]{figs/Binchargeres.pdf} \caption{Single bin charge resolution (1 bin = 4 ticks).} \label{fig:charge_res_b} \end{subfigure} \caption{Charge resolution (in units of electrons) due to the electronics noise. 1 tick = \SI{0.5}{\micro\second}. (a) Total charge resolution within the entire ROI window. (b) Single bin charge resolution within the minimum time unit for subsequent event reconstruction. 
See text for additional explanation of the variation of the single bin charge resolution for the induction plane.} \label{fig:charge_res} \end{figure} A linear baseline subtraction is needed for the induction planes to remove the distortion introduced by the large amount of low frequency components, as discussed in section~\ref{sec:method}. For the collection plane, this linear baseline subtraction is unnecessary. In fact, if the baseline subtraction were performed, it would introduce additional smearing. The underlying mathematics is implied by the charge spectrum in the frequency domain as shown in figure~\ref{fig:noisecharge}. The induction plane has many more low frequency components and stronger bin-to-bin correlations, whereas the collection plane has smaller bin-to-bin correlations. The single bin charge resolution is constant for the collection plane. For the induction plane, the single bin charge resolution increases with the ROI window length, since the impact of the baseline correction diminishes. In addition, the single bin charge resolution for the induction plane depends on the location of the bin within the ROI window. The bin resolution is zero at the boundaries of the ROI window and rises rapidly to a plateau in the intermediate region, like a step function. The ``flat top'' of the step has a shallow valley at the center of the ROI window, with deviations from flatness of up to \SI{20}{\percent}. For bins ranging from 1 to 6 ticks, the single bin charge resolution is roughly proportional to the number of time ticks in one bin. Based on figure~\ref{fig:charge_res}, given the signal length, the ROI window should be as small as possible while covering the signal in order to improve the resolution. In general, for a point source of charge or a line charge close to parallel to the wire plane (small $\theta_{xz}$), the ROI length is typically $\sim$20 ticks, driven by the time smearing.
Due to the charge smearing, the ability to identify the ROI boundaries is impacted by a locally low signal-to-noise ratio. The variation of the ROI window length in turn alters the charge bias (see section~\ref{sec:charge_bias}) and the noise-induced charge (the leading term in the charge resolution, as shown in figure~\ref{fig:charge_res}), introducing an additional smearing of the total deconvolved charge as part of the charge resolution. This additional smearing comprises up to \SI{10}{\percent} of the total charge resolution. \subsubsection{Charge bias due to thresholding in ROI finding}\label{sec:charge_bias} Waveform ROI finding is required due to noise, especially that magnified by deconvolution in the induction planes. As a consequence, a threshold on the signal strength, based on the noise RMS (see section~\ref{sec:finding_ROIs}), dictates the ROI window size on the charge spectrum. The ``true'' charge is smeared after signal processing, and a fraction of the deconvolved charge falls below the ROI threshold. Though an extension of the original ROI window is performed, signal loss can occur when this below-threshold charge is excluded from the final ROI window. This is the major source of charge bias, given that the use of the average field response in signal processing has limited impact (see section~\ref{sec:basic_perf}). For instance, if noise is included, the deconvolved charge in the distant adjacent wires in figure~\ref{fig:point-charge}, i.e. a few percent of the total charge shared by the $\pm$2 wires, will be overwhelmed by noise and entirely below the ROI threshold; thus, this charge is lost in the signal processing. Additional results in the context of line charges can be seen in figure~\ref{fig:trackcharge_rbe}.
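The mechanism can be illustrated with a toy threshold calculation (Python with NumPy; the smearing width, total charge, and threshold are hypothetical values, and no ROI extension is applied): the Gaussian tails below threshold drop out of the charge integral, biasing the recovered charge low by several percent.

```python
import numpy as np

ticks = np.arange(200, dtype=float)
sigma, total_q = 2.5, 1.0e4               # time smearing (ticks), electrons

# Gaussian-smeared deconvolved charge spectrum of a point-like deposit.
charge = total_q / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(
    -0.5 * ((ticks - 100.0) / sigma) ** 2)

threshold = 300.0                          # hypothetical ROI threshold (e-/tick)
roi = charge > threshold                   # naive ROI: above-threshold bins only
recovered = charge[roi].sum()
bias = recovered / total_q - 1.0           # negative: tail charge is lost
```

A lower threshold or a wider ROI extension recovers more of the tails, at the cost of admitting more noise-induced charge, which is the trade-off discussed above.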
\subsubsection{Inefficiency of line charge extraction} \label{sec:charge_inefficiency} Because of the charge bias and charge resolution, signal processing can be inefficient for specific event topologies, which means the deconvolved charge spectrum is entirely below the ROI threshold and no ROI window is created. This effect is especially pronounced in the induction planes for prolonged tracks with large $\theta_{xz}$, where the bipolar response causes a suppression of the signal, while at the same time increasing the noise. The hits on a track tend to share the same $\theta_{xz}$, and this can lead to gaps in the identified track or even the complete disappearance of the reconstructed track. Disconnected or absent tracks pose a challenge for pattern recognition with regard to track reconstruction as well as vertex identification. \subsubsection{Performance of line charge extraction} \label{sec:evaluation_result} To evaluate the signal processing performance of the line charge extraction, various topologies were simulated. As described in section~\ref{sec:topology_signal}, the nominal coordinate system is the detector (collection plane's) Cartesian coordinate system, with the $y$-axis in the vertical up direction, the $x$-axis in the drift field direction, and the $z$-axis in the beam direction. The line charge is located in the $x-z$ plane with $\theta_y=90^{\circ}$ and a set of varying $\theta_{xz}$'s which define the shape of the signal. The simulated charge density is $1.6\times10^4$ electrons per \SI{3}{\milli\meter} (wire pitch) along the trajectory from a 1-meter MIP track centered one meter from the wire plane. Diffusion and inherent electronics noise are applied in the simulation. Here one line charge corresponds to two angles: $\theta_{x'z'}$ for the induction planes (the same value for the U and V planes) and $\theta_{xz}$ for the collection plane.
In figure~\ref{fig:tracktopo}, $\overline{OA}$ and $\overline{OA'}$ are the projections of the track onto the wire pitch direction for the collection plane and the induction planes, respectively. In this case ($\theta_y=90^{\circ}$), $\overline{OA} = 2\cdot \overline{OA'}$ as a result of the $60^{\circ}$ rotation of the induction wires; therefore ${\rm tan}\theta_{x'z'} = 2\cdot {\rm tan}\theta_{xz}$, and the total deposited charge within one wire pitch, which is inversely proportional to the length of the projection onto the wire pitch direction, is scaled up by a factor of two for the induction planes. \begin{figure}[htbp] \centering \includegraphics[width=0.7\textwidth]{figs/tracktopo.pdf} \caption{Illustration of the line charge employed to evaluate the signal processing performance. Coordinates and angles are defined in figure~\ref{fig:geometry}. The line charge (red line) is located on the $x-z$ plane (collection plane coordinate, black axes). The corresponding projection on the $x'-z'$ plane (induction plane coordinate, blue axes), which is a 60$^{\circ}$ rotation of the $y(z)$-axis around the $x$-axis, is plotted as well. Projections on either wire pitch direction ($\overline{OA}$ and $\overline{OA'}$) are also indicated and $\overline{OA} = 2\cdot \overline{OA'}$.} \label{fig:tracktopo} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{figs/TrackChargeRBE.pdf} \caption{Charge bias, resolution, and inefficiency of the reconstructed charge in one wire pitch as a function of $\theta_{xz}$ ($\theta_{x'z'}$). The line charge is simulated assuming a 1-meter MIP track. Any wire/channel with no charge reconstructed is included in the inefficiency calculation and ignored in the resolution and bias calculations. The last bin (89.5$^{\circ}$) for the induction planes corresponds to full inefficiency of charge extraction and has no input for the resolution and bias calculations.
The Y-axis is relative to the `true' charge deposition within one wire pitch before drifting.} \label{fig:trackcharge_rbe} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.55\textwidth]{figs/UlineChargeWf.pdf} \includegraphics[width=0.55\textwidth]{figs/VlineChargeWf.pdf} \includegraphics[width=0.55\textwidth]{figs/YlineChargeWf.pdf} \caption{Average reconstructed charge distribution within one wire region for the three wire planes at various $\theta_{xz}$ values corresponding to figure~\ref{fig:trackcharge_rbe}.} \label{fig:trackcharge_wf} \end{figure} In general, the performance of the line charge extraction deteriorates with increasing $\theta_{xz}$ as shown in figure~\ref{fig:trackcharge_rbe}, where 0$^{\circ}$ corresponds to a track parallel to the wire plane (isochronous) and 90$^{\circ}$ to a track perpendicular to the wire plane. The shape of the reconstructed charge spectrum as shown in figure~\ref{fig:trackcharge_wf} is dominated by the Gaussian smearing for small $\theta_{xz}$, or by the track topology for large $\theta_{xz}$ in which case the contributions from the charge in neighboring wires are stretched in time and produce a triangular-like shape. A larger $\theta_{xz}$ corresponds to a wider (larger ROI window) and flatter (smaller signal-to-noise ratio) reconstructed charge distribution, therefore associated with a larger charge resolution and bias. As explained in section~\ref{sec:charge_inefficiency}, due to the bipolar response cancellation (see figure~\ref{fig:sim_perpendicularmip}) and magnified noise (see figure~\ref{fig:noisecharge}), the induction plane has considerably worse performance than the collection plane and is more sensitive to large $\theta_{xz}$. Specifically, the induction plane has a significant inefficiency of charge extraction for large $\theta_{xz}$. This inefficiency is related to diffusion which introduces an additional smearing. 
In figure~\ref{fig:trackcharge_rbe}, the result corresponds to the average performance considering diffusion along the track. The extreme case of a long perpendicular track ($\theta_{xz}=90^{\circ}$) is not shown in figure~\ref{fig:trackcharge_rbe}. For the induction planes, similar to the last point in this figure, a long perpendicular track has zero efficiency to extract charge, with the exception of the ends of the track, which retain non-zero efficiency. However, for physics analyses, the phase space around this extreme case, e.g. 89$^{\circ}$-90$^{\circ}$, can be ignored because, in practice, multiple scattering of the particle along a trajectory most likely results in a path deflection of a few degrees. Moreover, for a high energy neutrino beam experiment with a predominance of forward-going event topologies, the impacted phase space is even smaller. For the collection plane, there is still a high efficiency to reconstruct the charge if the track is perpendicular to the wire plane\footnote{If the track is prolonged in time, the waveform may be identified as a noise baseline shift and filtered out.}. Similar to a point source as shown in figure~\ref{fig:point-charge}, the reconstructed charge is shared by adjacent wires. \section{Reconstruction of drifted electron distribution}\label{sec:charge_extract} In this section, we describe the principle for reconstructing the charge distribution of drifted ionization electrons (section~\ref{sec:principle}) from the measured TPC signal waveform and the actual implementation of the algorithms applied in analyzing MicroBooNE data (section~\ref{sec:method}). \subsection{Signal extraction principles}\label{sec:principle} As described in section~\ref{sec:signal_form}, the raw digitized TPC signal is a convolution of the arriving ionization electron distribution, the field response describing the induced current on wires due to moving charge, and the overall electronics response.
The goal of TPC charge extraction is to unfold the field and electronics responses from the raw TPC signal and to recover the number of ionization electrons passing by each wire at each sampled time. \subsubsection{Deconvolution and software filters}\label{sec:deconvolution} The principal method to extract charge is deconvolution. This procedure in its one-dimensional (1D) form has been used in the data analysis of previous liquid argon experiments~\cite{icarus_sulej, Baller:2017ugz}. This technique has the advantages of being robust and fast. It is an essential step in the overall drifted-charge profiling process. Deconvolution is a mathematical technique to extract the \textit{original signal} $S(t)$ from the \textit{measured signal} $M(t')$. The measured signal is modeled as a convolution integral over the original signal $S(t)$ and a given detector \textit{response function} $R(t,t')$, which gives the instantaneous portion of the measured signal at some time $t'$ due to an element of original signal at time $t$: \begin{equation}\label{eq:decon_1} M(t') = \int_{-\infty}^{\infty} R(t,t') \cdot S(t) dt. \end{equation} If the detector response function is time-invariant, then $R(t,t') \equiv R(t'-t)$ and we can solve the above equation by performing a Fourier transformation, yielding $M(\omega) = R(\omega) \cdot S(\omega)$, where $\omega$ is in units of angular frequency. We can derive the signal in the frequency domain by taking the ratio of the measured signal and the given response function: \begin{equation}\label{eq:decon_2} S(\omega) = \frac{M(\omega)}{R(\omega)}. \end{equation} In principle, the original signal in the time domain $S(t)$ can then be obtained by applying the inverse Fourier transformation from the frequency domain $S(\omega)$. However in practice, there are two intermixed complicating effects. First, the measured signal $M(t')$ contains an additional contribution from various electronics noise sources~\cite{noise_filter_paper}. 
Second, in realistic detectors, the response function $R$ decreases substantially at high frequency (large $\omega$). These two factors lead to high-frequency components of the noise spectrum being artificially amplified through equation~\ref{eq:decon_2}. If left unchecked, the derived signal $S(\omega)$ would be completely overwhelmed by noise. To address this issue, a \textit{filter function} $F(\omega)$ is introduced, yielding \begin{equation}\label{eq:decon_filt} S(\omega) = \frac{M(\omega)}{R(\omega)} \cdot F(\omega). \end{equation} Its purpose is to attenuate the problematic high frequency noise. The addition of the filter function $F$ can be considered as a replacement of the response function $R$. In essence, the deconvolution replaces the real field and electronics response function with an effective software filter response function. The advantage of this procedure is most pronounced on the induction plane where the irregular bipolar field response function is replaced by a regular unipolar response function through the inclusion of the software filter. A common choice of software filter is the Wiener filter\footnote{A discussion of the application of the Wiener filter in the data unfolding problem can be found in~\cite{Tang:2017rob}.}~\cite{wiener}, which is based on the expected signal $\overline{S^2(\omega)}$ and noise $\overline{N^2(\omega)}$ frequency spectra: \begin{equation}\label{eq:wiener} F(\omega) = \frac{\overline{R^2(\omega)S^2(\omega)}}{\overline{R^2(\omega)S^2(\omega)} + \overline{N^2(\omega)}}. \end{equation} With this construction, the Wiener filter is expected to achieve the best signal to noise ratio with minimal mean square error (the sum of the variance and the squared bias) of the deconvolved distribution. However, naively applying the Wiener filter to TPC signal processing is problematic for three reasons. 
Firstly, as described in section~\ref{sec:topology_signal}, the TPC signal $S(\omega)$ varies substantially depending on the exact nature of the event topology. The electronics noise spectrum is also a function of the duration of the time window over which it is observed. A longer time window allows for observation of more low frequency noise components. Therefore, it is impractical to achieve a universal Wiener filter yielding the best signal-to-noise ratio for all signals of varying time windows. Secondly, given the definition of the Wiener filter in equation~\ref{eq:wiener}, we have $F(\omega = 0)<1$. Considering the addition of the filter as a replacement of the response function $R(t'-t)$, we can see that the Wiener filter does not conserve the total number of ionization electrons. Thirdly, as shown in equation~\ref{eq:decon_filt}, the filter acts as a replacement response function and smears the extracted ionization electron distribution along the drift time dimension. Since the drift time is equivalent to the drift distance, a filter that can alter the charge distribution in an extended (non-local) time range instead of in a short (local) one is undesirable. For induction wire planes, none of the ionization electrons are expected to be collected, which leads to a bipolar signal in the time domain and a low-frequency-suppressed signal in the frequency domain. A direct construction of the Wiener filter with this low-frequency suppression would lead to a non-local charge smearing. To overcome these shortcomings associated with the Wiener filter, we use a Wiener-inspired filter. Details are elaborated upon in section~\ref{sec:method_filter}. \subsubsection{2D deconvolution}\label{sec:2D_deconvolution} As described in section~\ref{sec:signal_form}, the induced current on the sense wire receives contributions not only from ionization charge passing by the sense wire, but also from ionization charge drifting in nearby wire regions.
Naturally, the contribution of charge farther from the target sense wire is smaller. Ignoring the variation of the strength of the field response within one wire region, equation~\eqref{eq:decon_1} can be expanded to \begin{equation}\label{eq:decon_2d_1} M_i(t_0) = \int_{-\infty}^{\infty} \left( ... + R_1(t_0-t)\cdot S_{i-1}(t) + R_0(t_0-t) \cdot S_i(t) + R_1(t_0-t) \cdot S_{i+1} (t) + ...\right) \cdot dt, \end{equation} where $M_i$ represents the measured signal from wire $i$. $S_{i-1}$, $S_i$, and $S_{i+1}$ represent the real signals within the wire regions of wire $i$ and its adjacent neighbors. $R_0$ represents an average response function for an ionization charge passing through the wire region of interest. The average is taken over all possible drift paths through the wire region. Similarly, $R_1$ represents the average response function for an ionization charge drifting through the next adjacent wire region. One can expand this definition to $n$ neighbors by introducing terms up to $R_n$. Equation~\eqref{eq:decon_2d_1} assumes translational invariance in the response function (i.e., $R$ does not depend on the actual location of the wire). In section~\ref{sec:MCtruth}, we will discuss the impact of ignoring position dependence of the field response at small scales within the wire region of interest. If we then apply a Fourier transform to both sides of equation~\eqref{eq:decon_2d_1}, we have: \begin{equation}\label{eq:decon_2d_2} M_i(\omega) = ...
+ R_1(\omega) \cdot S_{i-1}(\omega) + R_0(\omega) \cdot S_i(\omega) + R_1(\omega) \cdot S_{i+1} (\omega) + ..., \end{equation} which can be written in matrix notation as: \begin{equation} \begin{pmatrix} M_1(\omega)\\ M_2(\omega)\\ \vdots\\ M_{n-1}(\omega)\\ M_{n}(\omega) \end{pmatrix} = \begin{pmatrix} R_0(\omega) & R_1(\omega) & \ldots & R_{n-2}(\omega) & R_{n-1}(\omega) \\ R_1(\omega) & R_0(\omega) & \ldots & R_{n-3}(\omega) & R_{n-2}(\omega) \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ R_{n-2}(\omega) & R_{n-3}(\omega) & \ldots & R_0(\omega) & R_1(\omega) \\ R_{n-1}(\omega) & R_{n-2}(\omega) & \ldots & R_1(\omega) & R_0(\omega) \\ \end{pmatrix} \cdot \begin{pmatrix} S_1(\omega)\\ S_2(\omega)\\ \vdots\\ S_{n-1}(\omega)\\ S_{n}(\omega) \end{pmatrix} \label{eq:matrix_expansion} \end{equation} Now, if we assume that all response functions (i.e. the matrix $R$) are known, the problem converts into deducing the vector $S$ from the measured signal $M$ by an inversion of the matrix $R$. Provided that the wires of interest are distant from the wire plane edges, the matrix $R$ is symmetric and Toeplitz\footnote{A Toeplitz matrix is one for which each diagonal descending from left to right has all elements equal. Multiplication by a Toeplitz matrix is equivalent to an operation of discrete convolution.}, and the inverse problem can be solved by applying the discrete-space Fourier transformation techniques discussed in section~\ref{sec:deconvolution} to $M_i(\omega)$, $S_i(\omega)$, and $R_i(\omega)$ along the wire dimension. Therefore, instead of a 1D deconvolution involving only the time dimension, a two-dimensional (2D) deconvolution involving both time and wire dimensions is performed to recover the ionization electron distribution. An additional Wiener-inspired filter is applied to the wire dimension deconvolution in analogy to that of the time domain deconvolution. These software filters will be discussed in section~\ref{sec:method_filter}.
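The wire-dimension inversion can be sketched with a toy calculation. Assuming periodic boundaries (i.e. approximating the Toeplitz response matrix as circulant, which holds away from the wire plane edges) and purely illustrative response shapes, a 2D FFT reduces the matrix inversion to element-wise division:

```python
import numpy as np

def deconvolve_2d(measured, response):
    """Invert M = R (*) S, where (*) is a convolution over both the wire and
    time dimensions, by element-wise division in 2D frequency space. Periodic
    boundaries are assumed (circulant approximation of the Toeplitz matrix)."""
    return np.real(np.fft.ifft2(np.fft.fft2(measured) / np.fft.fft2(response)))

# Toy setup: 8 wires x 64 time ticks, a point charge in wire region 3 at tick 20.
n_wires, n_ticks = 8, 64
true = np.zeros((n_wires, n_ticks))
true[3, 20] = 1.0

# Toy responses: the wire of interest sees R0, each neighbor a weaker R1.
t = np.arange(n_ticks)
r0 = np.exp(-t / 5.0)
response = np.zeros((n_wires, n_ticks))
response[0] = r0            # R0: wire region of interest
response[1] = 0.3 * r0      # R1: first neighbor
response[-1] = 0.3 * r0     # R1: symmetric neighbor on the other side

measured = np.real(np.fft.ifft2(np.fft.fft2(true) * np.fft.fft2(response)))
recovered = deconvolve_2d(measured, response)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(recovered), recovered.shape))
print(peak)  # -> (3, 20)
```

Despite the measured signal being spread over three wires by the induction terms, the division in 2D frequency space restores the point charge to its original wire and tick.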
An example comparison of the 1D and 2D deconvolution results in a data event can be seen in figure~\ref{fig:1dvs2d}, demonstrating the charge recovery using the 2D deconvolution approach in contrast to the 1D method. Figure~\ref{fig:1dvs2d} highlights the signal dependence on topology and long-range induction. An evaluation of the 2D deconvolution signal processing performance, given the topological dependence of the signal and the long-range induction inherent to the signal, is presented in section~\ref{sec:evaluation}. \begin{figure}[htb] \centering \includegraphics[width=0.99\textwidth]{figs/noiseVs1DVs2D.pdf} \caption{ A neutrino candidate from MicroBooNE data (event 41075, run 3493) measured on the U plane. (a) Raw waveform after noise filtering in units of average baseline subtracted ADC scaled by 250 per \SI{3}{\us}. (b) Charge spectrum in units of electrons per \SI{3}{\us} after signal processing with 1D deconvolution. (c) Charge spectrum in units of electrons per \SI{3}{\us} after signal processing with 2D deconvolution.} \label{fig:1dvs2d} \end{figure} \subsubsection{Region of interest (ROI)} The 2D deconvolution procedure described in section~\ref{sec:2D_deconvolution} provides a robust and computationally efficient method to extract the distribution of ionization electrons. While successful for the collection plane, the procedure is still not optimal for the induction planes due to the bipolar nature of the measured induction plane signals. In order to illustrate this point, the average response functions on the closest wire for a point-like ionization charge are shown in figure~\ref{fig:induction_field}. The average response function includes both the average field response (averaged over all possible electron drift paths within the wire region as simulated by Garfield without diffusion) and the electronics response (\SI{2}{\us} peaking time).
The normalization of the overall response function is chosen so that the integral of the collection plane response function is unity, corresponding to a single electron. Figure~\ref{fig:induction_field_b} shows the frequency components of the average response functions for the three wire planes. All responses are suppressed at high frequency, where the filter is required to stabilize the deconvolved results (e.g. equation~\ref{eq:decon_filt}). Compared to the collection wire response, the induction wire responses additionally exhibit suppressions at low frequency. In particular, the zero-frequency component is equivalent to the time integral of the response function and, for the induction planes, should be close to zero as indicated by equation~\ref{eq:voltage}. \begin{figure}[htb] \centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/ave_resp_1.pdf} \caption{Average response in time domain.} \label{fig:induction_field_a} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/ave_resp_2.pdf} \caption{Average response in frequency domain.} \label{fig:induction_field_b} \end{subfigure} \caption{Examples of simulated average response functions for induction (black and red) and collection (blue) wires in the time (a) and frequency (b) domains.} \label{fig:induction_field} \end{figure} Similar to the situation at high frequencies, the suppression of the induction responses at low frequencies is problematic for the proposed deconvolution procedure. As shown in~\cite{noise_filter_paper}, the measured signal contains electronics noise, which is not necessarily suppressed at low frequencies. Therefore, following equation~\eqref{eq:decon_filt}, the low-frequency noise will be amplified in the deconvolution process. The amplification of low-frequency noise can be seen clearly in figure~\ref{figs:ROI_example_a}.
Left unmitigated, the amplification of low-frequency noise would lead to an unacceptable uncertainty in the charge estimation. In principle, the amplification of the low-frequency noise through the deconvolution process can be suppressed through the application of low-frequency (high-pass) filters, similar to the low-pass filters suppressing high-frequency noise. However, as explained in section~\ref{sec:deconvolution}, applying such a low-frequency filter would lead to an alteration of the charge distribution in extended (non-local) time ranges, which is not desirable. Instead, we turn to the technique of selecting a signal region of interest (ROI) in the time domain. The region of interest (ROI) technique was proposed~\cite{Baller:2017ugz} to reduce the data size and to speed up the deconvolution process. The idea is to limit the deconvolution to a small time window that is slightly bigger than the extent of the signal it contains. The entire event readout window (4.8 ms for MicroBooNE) is replaced by a set of ROIs. For induction wire signals, the ROI technique also limits the low-frequency noise. To illustrate this point, we consider a time window with $N$ samples. MicroBooNE samples at intervals of \SI{0.5}{\us} and the highest frequency that can be resolved is the Nyquist frequency of \SI{1}{\MHz}. After a discrete Fourier transform, the first bin above the zero-frequency bin lies at $\frac{\SI{1}{\MHz}}{N/2}$. The noise in the zero-frequency bin represents a baseline shift after the ROI is transformed back into the time domain. Therefore, once we identify the signal region and create a ROI just big enough to cover the signal, we can naturally suppress the low-frequency noise at the cost of having to correct for the baseline shift. This correction is performed through a linear interpolation between the two baselines determined by the samples at the start and end of the ROI window in the time domain.
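The baseline correction amounts to subtracting a straight line anchored at the first and last samples of the ROI. A minimal sketch, with an illustrative toy waveform (not MicroBooNE data), is:

```python
import numpy as np

def subtract_roi_baseline(waveform, start, end):
    """Remove the residual baseline inside an ROI by linear interpolation
    between the samples at the start and end of the ROI window."""
    roi = np.asarray(waveform[start:end + 1], dtype=float)
    baseline = np.linspace(roi[0], roi[-1], len(roi))
    return roi - baseline

# Toy example: a unipolar pulse sitting on a slowly drifting baseline.
ticks = np.arange(50)
pulse = np.where((ticks > 20) & (ticks < 30), 10.0, 0.0)
drift = 0.2 * ticks + 3.0        # slow baseline drift (low-frequency noise)
corrected = subtract_roi_baseline(pulse + drift, 0, 49)

# The bin contents at the ROI boundaries are exactly zero after correction.
print(corrected[0], corrected[-1])
```

Because the interpolation is anchored on the ROI endpoints, any residual shift that is approximately linear across the short ROI window is removed without touching the pulse shape.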
\subsection{Method}\label{sec:method} In this section, we describe the inputs and the detailed algorithms to extract the ionization charge spectrum from the digitized TPC wire plane signals, based on the principles described above. The algorithm is implemented and available at~\cite{WCT_SP}. In general, the full chain contains four major steps: \begin{itemize} \item {\bf Noise filtering: } \\ Apply specific noise filters to remove possible excess noise apart from the inherent electronics noise. The results of this step have been previously reported in~\cite{noise_filter_paper}. \item {\bf 2D deconvolution:} \\ Apply a 2D deconvolution to the digitized TPC wire signals, resulting in a deconvolved charge distribution. An average field response involving multiple sense wires is calculated and utilized as shown in section~\ref{sec:response_function}. Two types of deconvolved charge spectra are obtained, corresponding to different software filters for the time domain. A Wiener-inspired filter is applied to maximize the signal-to-noise ratio with better time resolution, and a Gaussian filter is applied to achieve a non-distorted charge spectrum except for a Gaussian smearing. Details will be explained in section~\ref{sec:method_filter}. \item {\bf ROI finding and refining:} \\ Perform ROI finding with the deconvolved charge distribution after the Wiener-inspired filter. The principles of these algorithms will be explained in sections~\ref{sec:finding_ROIs} and~\ref{sec:refining_ROIs}. In short, loose and tight low-frequency (high-pass) filters are combined to optimize the purity and efficiency of ROI finding. \item {\bf ROI application:} \\ Apply the identified ROI window to the deconvolved charge distribution after the Gaussian filter and extract the ionization charge, with a linear baseline subtraction for the induction planes based on the start/end bins of the ROI window.
\end{itemize} A flow chart of the full chain of the aforementioned signal processing can be seen in figure~\ref{fig:SPdiagram}. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figs/SPdiagram.pdf} \caption{A flow chart of the full chain of signal processing. See text for explanation.} \label{fig:SPdiagram} \end{figure} \subsubsection{Position-averaged response functions}\label{sec:response_function} Since the signal in a readout wire is recorded without any prior knowledge of the transverse position distribution of electrons within a wire region, the average response functions are used in the 2D deconvolution as discussed in section~\ref{sec:2D_deconvolution}. The average response function for a single electron within a particular wire region is obtained through the following summation: \begin{equation} R_i=\frac{0.5\times R^{0.0~mm}_i + R^{0.3~mm}_i + R^{0.6~mm}_i + R^{0.9~mm}_i + R^{1.2~mm}_i + 0.5\times R^{1.5~mm}_i}{5}, \end{equation} where $R_i^{z}$ represents the response function at the $i$th wire for an electron starting at transverse position $z$. The impact of applying the average response function in the 2D deconvolution will be explained in section~\ref{sec:MCtruth}. These average responses for 21 wires (the central wire plus ten wires on each side) are shown in figure~\ref{figs:decon_overall_response}. The left panel shows the response function in 2D on a ``Log10'' scale. The X-axis represents the wire number. The Y-axis represents the drift time with 1 tick of \SI{0.5}{\us} in each bin. The normalization is the same as that in figure~\ref{figs:overall_response}. The right panel shows the average response function on a linear scale for the first five wires. For the V and Y wire planes, the strength of the response function drops quickly for wires farther away from the central wire (i.e. negligible beyond \num{\pm4} wires). For the U induction wire plane, the strength of the response function is still sizable at \num{\pm4} wires.
This is due to the fact that the U induction wire plane is the first wire plane facing the active TPC volume without any shielding. \begin{figure}[!htbp] \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/decon_resp_2D_A.pdf} \caption{Wire number versus time in ``Log10'' scale.} \label{figs:decon_overall_response_a} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/decon_resp_2D_B.pdf} \caption{Response functions for each of the first five wires.} \label{figs:decon_overall_response_b} \end{subfigure} \caption{The position-averaged response functions after convolving the field response function and an electronics response function at \SI{2}{\us} peaking time in ``Log 10'' scale (a) and linear scale for the first 5 wires (b). } \label{figs:decon_overall_response} \end{figure} \subsubsection{Software filters}\label{sec:method_filter} \begin{figure}[!htbp] \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/filter_shape_1.pdf} \caption{Filters in the time domain.} \label{figs:filter_hf_a} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/filter_shape_2.pdf} \caption{Filters in the frequency domain.} \label{figs:filter_hf_b} \end{subfigure} \caption{Wiener-inspired (dashed line) and Gaussian (solid line) software filters are shown in (a) the time domain and (b) the frequency domain for each of the three wire planes. In the time domain, we commonly refer to the filter function as the smearing function. With the chosen functional forms, the local behavior of the filters is apparent from their shapes. } \label{figs:filter_hf} \end{figure} In this section, we describe the derivation of the software filters used in the signal processing. As mentioned in section~\ref{sec:deconvolution}, the chosen Wiener-inspired filter is based on the Wiener filter in equation~\ref{eq:wiener} constructed from simulation. 
The time window is chosen to be 100 $\mu$s, which generally performs well in a variety of cases with proper additional smearing of the signal. The signal is chosen to be an isochronous MIP track traveling perpendicular to the wire orientation. The total number of ionization electrons is assumed to be $1.6\times10^4$ per wire pitch. The RMS of the spread in the drift time due to diffusion is taken to be 1 mm, corresponding to the average drift distance in the MicroBooNE detector. The gain and peaking time are 14 mV/fC and 2 $\mu$s, matching the nominal running conditions~\cite{noise_filter_paper}. The electronics noise is simulated in an analytic way, as will be described in section~\ref{sec:noise_model}. The results following equation~\eqref{eq:wiener} were then fitted with the following functional form in order to exclude the low-frequency suppressions present in the original Wiener filters: \begin{equation}\label{eq:fit} F(\omega) = c\cdot e^{- \frac{1}{2} \cdot \left( \frac{\omega}{a} \right)^b}, \end{equation} where $a$, $b$, $c$ are free parameters to be determined by the fit. This functional form of the filter guarantees that the corresponding smearing function (filter) in the time domain is local. Given the fit, the filter is chosen to be \begin{equation} F(\omega) = \begin{cases} e^{- \frac{1}{2} \cdot \left( \frac{\omega}{a} \right)^b} & \omega >0 \\ 0 & \omega = 0, \\ \end{cases} \end{equation} with $a$ and $b$ being the same parameters as in equation~\ref{eq:fit} with $c$ removed. The modification of the filter takes into account the following considerations: \begin{itemize} \item The filter is explicitly zero at $\omega = 0$ in order to remove any DC component in the deconvolved signal. This removes information about the baseline. A new baseline is calculated and restored for the waveform after deconvolution. \item The above functional form of the filter leads to \begin{equation} \lim_{\omega \rightarrow 0} F(\omega) = 1.
\end{equation} This means that the integral of the corresponding smearing function in the time domain is unity, which does not introduce any extra factor in the overall normalization. \end{itemize} Figure~\ref{figs:filter_hf} shows the three Wiener-inspired filters and a Gaussian filter, \begin{equation} F(\omega) = \begin{cases} e^{- \frac{1}{2} \cdot \left( \frac{\omega}{a} \right)^2} & \omega >0 \\ 0 & \omega = 0, \\ \end{cases} \end{equation} for charge extraction. Compared to the Wiener-inspired filters, the Gaussian filter is expected to have a slightly worse signal-to-noise ratio. However, such a filter has advantages in calculating the charge and is better matched with a Gaussian hit finder that may be used in later stages of event reconstruction. In 2D deconvolution, a similar filter in the wire dimension is constructed using the Gaussian form \begin{equation} F(\omega_w) = e^{- \frac{1}{2} \cdot \left( \frac{\omega_w}{a} \right)^2} \end{equation} where the ``frequency'', $\omega_w$, is the Fourier transform over the wire number instead of time. Figure~\ref{figs:filter_wire_filter} shows the filters in the wire domain. Different parameters are chosen for induction and collection wire planes. \begin{figure}[!htbp] \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/wire_filter_shape_1.pdf} \caption{Wire filters in the wire number domain.} \label{figs:filter_wire_filter_a} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/wire_filter_shape_2.pdf} \caption{Wire filters in the ``frequency'' domain.} \label{figs:filter_wire_filter_b} \end{subfigure} \caption{ Filters using a Gaussian form are applied for the wire dimension shown in both the wire number (a) and ``frequency'' (b) domains. Different parameters are chosen for induction and collection wire planes. 
} \label{figs:filter_wire_filter} \end{figure} \subsubsection{Identification of signal ROIs}\label{sec:finding_ROIs} As shown in figure~\ref{figs:ROI_example_a}, the direct application of the deconvolution procedure significantly amplifies the low-frequency noise for the induction wire planes. In order to identify the signal regions of interest (signal ROIs or ROIs for short), additional low-frequency filters of the functional form \begin{equation} F_{\rm LF}(\omega) = 1 - e^{-\frac{1}{2}\left(\frac{\omega}{a}\right)^2}, \end{equation} are applied to the deconvolved charge distribution for the induction wire planes to search for ROIs. These low-frequency filters are not used for the collection wire plane signal. \begin{figure}[!htbp] \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/low_freq_filter_1.pdf} \caption{Low-frequency filters in the time domain.} \label{figs:filter_lf__a} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\textwidth]{figs/low_freq_filter_2.pdf} \caption{Low-frequency filters in the frequency domain.} \label{figs:filter_lf_a} \end{subfigure} \caption{ Low-frequency filters used in identifying ROIs for induction wire planes shown in both the time (a) and frequency (b) domains.} \label{figs:filter_lf} \end{figure} Figure~\ref{figs:filter_lf} shows the two low-frequency filters used to identify ROIs for induction wire planes. Due to the suppressions at low frequencies, the corresponding smearing functions in the time domain exhibit a long negative tail. The magnitude of the negative tail is larger for the tight low-frequency filter, whereas the tail extends to longer times for the loose low-frequency filter. Since such long-range behavior is not desired for the filters used to obtain the charge signal, these filters are exclusively used to identify ROIs.
\begin{figure}[!htbp] \centering \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{figs/ROI_1.pdf} \caption{Without low-frequency filter.} \label{figs:ROI_example_a} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{figs/ROI_2.pdf} \caption{With tight low-frequency filter.} \label{figs:ROI_example_b} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{figs/ROI_3.pdf} \caption{With loose low-frequency filter.} \label{figs:ROI_example_c} \end{subfigure} \caption{ Comparison of the deconvolved signal on the U induction plane (a) without the low-frequency filter, (b) with the tight low-frequency filter, and (c) with the loose low-frequency filter. } \label{figs:ROI_example} \end{figure} Figure~\ref{figs:ROI_example} shows the impact of these low-frequency filters. Without the filter, the low-frequency noise totally overwhelms the signal (figure~\ref{figs:ROI_example_a}). After applying the tight low-frequency filter (figure~\ref{figs:ROI_example_b}), the signal-to-noise ratio improves for short (in time) signals. However, the long signal at around 7000 time ticks is removed by the tight low-frequency filter. It is recovered by the loose low-frequency filter (figure~\ref{figs:ROI_example_c}). The deconvolved charge distribution for each channel within the entire readout window ($\sim$9600 ticks) is used to calculate the RMS (root mean square) noise, which is then used to set the threshold for ROI identification. The RMS is calculated using a 68\% quantile range relative to the mean value of the distribution. This RMS calculation is insensitive to the true signals in the deconvolved charge distribution. For the collection plane, the threshold is set at 5 times the RMS noise, which is about 300 electrons/tick on average.
For the induction planes, the deconvolved charge distribution with the tight low-frequency filter is employed to calculate the RMS noise, and the threshold is set at 3.5 times the RMS noise, which is about 350 electrons/tick and 500 electrons/tick on average for the U plane and V plane, respectively. ROIs are then extracted from these deconvolved signals. For the induction wire planes, there are two types of ROIs: tight and loose ROIs, which are extracted from the deconvolved signal after applying the tight and loose low-frequency filters, respectively. The goal of the tight ROIs is to achieve high purity in terms of containing real signal. However, it is expected that tight ROIs have a low efficiency, in particular for long (in time) signals. On the other hand, the goal of the loose ROIs is to achieve high efficiency in terms of containing real signal. The trade-off is that we expect the purity of the loose ROIs to be lower. ROIs are extracted by searching for signal above noise. Each ROI is then extended in time to cover the signal tails. For ROIs in the collection plane and tight ROIs in the induction planes, additional ROIs are created by examining the connectivity of the existing ones. Each of the loose ROIs is then compared with the tight ROIs on the same wire. If a loose ROI overlaps with a tight ROI, the loose ROI is extended to ensure the tight ROI is contained. If a tight ROI is not contained by a loose ROI, a new loose ROI with the exact range of the tight ROI is created. The operations above ensure that each tight ROI is contained by a loose ROI. Figure~\ref{figs:loosetight_ROI_example} shows the impact of including tight and loose ROIs for the induction plane signal processing.
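The containment logic between loose and tight ROIs can be sketched as follows (a simplified illustration; the function name and ROI boundaries are hypothetical, not taken from the actual implementation):

```python
def contain_tight_rois(loose, tight):
    """Ensure every tight ROI (start, end) is contained by a loose ROI:
    overlapping loose ROIs are extended; uncovered tight ROIs are promoted
    to new loose ROIs."""
    loose = [list(r) for r in loose]
    for t0, t1 in tight:
        overlaps = [r for r in loose if r[0] <= t1 and t0 <= r[1]]
        if overlaps:
            for r in overlaps:                 # extend to contain the tight ROI
                r[0], r[1] = min(r[0], t0), max(r[1], t1)
        else:                                  # create a new loose ROI
            loose.append([t0, t1])
    return sorted(tuple(r) for r in loose)

# A tight ROI overlapping a loose ROI extends it; an uncovered one is added.
print(contain_tight_rois(loose=[(10, 40)], tight=[(35, 50), (70, 80)]))
# -> [(10, 50), (70, 80)]
```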
\begin{figure}[!htbp] \centering \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/WithLooseandTightROIs.pdf} \label{figs:loosetight_ROI_example_b} \caption{Deconvolved signal with ``loose and tight ROIs''.} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/NoLooseROIs.pdf} \label{figs:loosetight_ROI_example_a} \caption{Deconvolved signal with ``tight ROIs''.} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.95\textwidth]{figs/looseandtightcomp_waveform.pdf} \label{figs:loosetight_ROI_example_c} \caption{Waveform comparison for the channel marked with a red line in (a) and (b).} \end{subfigure} \caption{A neutrino candidate from MicroBooNE data (event 41075, run 3493) on the U plane is shown in an event display with (a) and without (b) ``loose ROIs''. In (c), the raw baseline-subtracted waveform (black) and the deconvolved signal with (dotted red) and without (dashed red) ``loose ROIs'' are presented. } \label{figs:loosetight_ROI_example} \end{figure} \subsubsection{Refinement of ROIs}\label{sec:refining_ROIs} As explained previously, the loose ROIs for the induction wire plane are expected to have a high efficiency in containing the real signal, but low purity. Therefore, an additional refining procedure using connectivity information is applied to exclude fake ROIs. The applicability of this methodology to a variety of event topologies demonstrates its robustness. The basic components of the ROI refinement include: \begin{itemize} \item {\bf Clean ROIs: } \\ The motivation of this step is to remove fake ROIs---ROIs containing no signal. In particular, each loose and tight ROI is examined to ensure that part of the bin content inside the ROI is above a predefined threshold. ROIs failing this examination are removed. Loose ROIs are clustered according to connectivity information. 
For each loose ROI cluster, if none of its loose ROIs contain one or more tight ROIs, the cluster is removed entirely. \item {\bf Break ROIs:} \\ The motivation of this step is to separate a loose ROI into a few small ROIs. Sometimes a few separated tracks (e.g. near the neutrino interaction vertex) can be quite close to each other along the drift time direction. Often a single loose ROI would be created to contain these tracks given the presence of low-frequency noise. Therefore, a special algorithm is needed to identify this scenario and separate the ROIs. In particular, each loose ROI is examined to search for multiple independent peaks. If found, the loose ROI is separated into several loose ROIs. Figure~\ref{figs:break_ROI_example} shows the impact of the ``break ROIs'' step. \begin{figure}[!htbp] \centering \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/WithBreakROIs.pdf} \label{figs:break_ROI_example_b} \caption{Deconvolved signal with ``break ROIs''.} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/NoBreakROIs.pdf} \label{figs:break_ROI_example_a} \caption{Deconvolved signal without ``break ROIs''.} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/breakROIscomp_waveform.pdf} \label{figs:break_ROI_example_c} \caption{Waveform comparison for the channel marked with a red line in (a) and (b).} \end{subfigure} \caption{The same neutrino candidate from figure~\ref{figs:loosetight_ROI_example} is shown in an event display with (a) and without (b) ``break ROIs''. In (c), the raw baseline-subtracted waveform (black) and the deconvolved signal with (dotted red) and without (dashed red) ``break ROIs'' are presented. 
} \label{figs:break_ROI_example} \end{figure} \item {\bf Shrink ROIs:} \\ The motivation of this step is to reduce the length of a ROI that contains real signal but has been accidentally extended into a much broader time range due to the presence of low-frequency noise. In particular, the range of each loose ROI is reduced according to the tight ROIs it contains as well as those in the adjacent loose ROIs. Figure~\ref{figs:shrink_ROI_example} shows the effect of the ``shrink ROIs'' step. \begin{figure}[h!] \centering \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/WithShrinkROIs.pdf} \label{figs:shrink_ROI_example_b} \caption{Deconvolved signal with ``shrink ROIs''.} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.9\textwidth]{figs/NoShrinkROIs.pdf} \label{figs:shrink_ROI_example_a} \caption{Deconvolved signal without ``shrink ROIs''.} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \includegraphics[width=0.95\textwidth]{figs/shrinkROIscomp_waveform.pdf} \label{figs:shrink_ROI_example_c} \caption{Waveform comparison for the channel marked with a red line in (a) and (b).} \end{subfigure} \caption{ The same neutrino candidate from figure~\ref{figs:loosetight_ROI_example} is shown in an event display with (a) and without (b) ``shrink ROIs''. In (c), the raw baseline-subtracted waveform (black) and the deconvolved signal with (dotted red) and without (dashed red) ``shrink ROIs'' are presented. } \label{figs:shrink_ROI_example} \end{figure} \end{itemize} The overall ROI refinement takes an iterative approach by applying the above steps in sequence. The final remaining ROIs are then applied to the deconvolved signal without the low-frequency filter. A linear correction to the baseline is applied so that the bin content at the ROI boundaries is exactly zero.
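The peak-separation idea behind the ``break ROIs'' step can be illustrated with a simplified sketch that splits an ROI wherever its content dips below a threshold between independent peaks (the actual algorithm is more elaborate, and the function name is illustrative):

```python
def break_roi(roi_content, threshold):
    """Split one loose ROI into several smaller ROIs wherever its content
    dips below threshold, separating independent peaks."""
    rois, start = [], None
    for i, v in enumerate(roi_content):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            rois.append((start, i - 1))
            start = None
    if start is not None:
        rois.append((start, len(roi_content) - 1))
    return rois

# Two nearby tracks captured in a single loose ROI, separated by a dip.
roi_content = [0, 1, 8, 9, 7, 1, 0, 1, 6, 8, 5, 1, 0]
print(break_roi(roi_content, threshold=2))  # -> [(2, 4), (8, 10)]
```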
\section{LArTPC signal formation} \label{sec:signal_form} The formation of the TPC signal consists of three parts: i) the field response, i.e. the currents induced on the sense wires by the drifting of a point ionization charge, ii) the electronics response to the induced current waveform input to each channel in terms of amplification and shaping, and iii) the initial distribution of the ionization charge in the bulk of the detector, how this charge drifts in the applied electric field, and how it undergoes diffusion and absorption as it drifts. In the following sections, we describe each part in detail. \subsection{Field response}\label{sec:field_resp} When ionization electrons drift past the initial two induction wire planes toward the final collection wire plane, current is induced on nearby wires. Henceforth, we refer to the induced current on one wire due to a single electron charge as a field response function. The principle of current induction is described by Ramo's theorem~\cite{ramo}. An element of ionization charge $q$ in motion at a given location induces a current $i$ on some electrode (wire), \begin{equation}\label{eq:shockley_ramo} i = - q \vec{E}_w \cdot \vec{v}_q. \end{equation} This current is proportional to the inner product of a constructed weighting field $\vec{E}_w$ for a given wire and the drift velocity $\vec{v}_q$ of the charge at the given location. The weighting field $\vec{E}_w$ only depends on the geometry of the electrodes and can be calculated by removing the drifting charge, placing the targeted electrode at unity potential, and setting all other conductors to ground. For a single medium, the weighting potential is independent of the dielectric properties of the medium. For multiple media, the weighting potential must be calculated taking into account each material's dielectric properties.
This result is valid in the quasi-static approximation and in arbitrary linear media where the permittivity is independent of the potentials~\cite{ramo_extension, ramo_generalization}. A generalized form of the weighting potential considering non-linear effects can also be found in~\cite{ramo_extension} and~\cite{ramo_generalization}. The charge's drift velocity $\vec{v}_q$ is a function of the external electric field, which depends on the geometry of the electrodes as well as the applied drift and bias voltages and the liquid argon temperature. Figure~\ref{fig:fielddemo} shows electron drift paths in an applied electric field as well as equipotential lines of the weighting field for the U, V, and Y wire planes in MicroBooNE. The electron drift paths and the weighting fields are calculated using the Garfield~\cite{garfield} software. These calculations adopt a 2D model of a portion of MicroBooNE near a subset of wires. Some limitations of this model are discussed in section~\ref{sec:3D_field}.
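The qualitative difference between induction-like and collection-like signals follows directly from Ramo's theorem. The toy numerical sketch below uses illustrative weighting potentials (not Garfield-computed fields), a constant drift velocity, and the identity $\vec{\nabla} V_w = -\vec{E}_w$:

```python
import numpy as np

# Toy drift: an electron moves at constant speed from x = 10 toward a wire
# at x = 0; i = -q E_w . v = q dV_w/dt along the path (chain rule).
t = np.linspace(0.0, 10.0, 2001)
x = 10.0 - t                 # constant drift velocity dx/dt = -1
q = -1.0                     # electron charge in toy units

# Induction-like wire: weighting potential peaks as the charge passes x = 5.
vw_ind = np.exp(-0.5 * ((x - 5.0) / 1.0) ** 2)
# Collection-like wire: weighting potential rises monotonically to the wire.
vw_col = np.exp(-x / 2.0)

i_ind = q * np.gradient(vw_ind, t)    # bipolar current pulse
i_col = q * np.gradient(vw_col, t)    # unipolar current pulse

dt = t[1] - t[0]
print(abs(np.sum(i_ind) * dt))   # ~0: induced charge integrates to zero
print(abs(np.sum(i_col) * dt))   # ~1: the full electron charge is "seen"
```

The integral of the induction-like current vanishes because the charge passes by without terminating on the wire, reproducing the bipolar, low-frequency-suppressed induction response discussed in the signal-processing sections.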
\begin{figure} \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/uvy_field.pdf} \caption[]% {{\small Electron drift paths.}} \label{fig:efield} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/u_weighting.pdf} \caption[]% {{\small Weighting potential on a U wire.}} \label{fig:u_weigthing} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/v_weighting.pdf} \caption[]% {{\small Weighting potential on a V wire.}} \label{fig:v_weighting} \end{subfigure} \quad \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=1.0\textwidth]{figs/y_weighting.pdf} \caption[]% {{\small Weighting potential on a Y wire.}} \label{fig:v_weigthing} \end{subfigure} \caption[] {\small A demonstration of electron drift paths in the applied electric field (panel a) and weighting potentials (panels b, c, d) on individual wires of the 2D MicroBooNE TPC model, using the Garfield program. The coordinates for each plane are defined in section~\ref{sec:topology_signal} as shown in figure~\ref{fig:coordinate}. The $x$-axis is along the drift field direction and the $z$-axis is along the beam direction. The weighting potential is a dimensionless quantity, given as a value relative to the electric potential on the target wire. Values for the weighting potential are indicated in percentage on each equipotential line, ranging from 1\% for the farthest to 60\% for the closest illustrated.} \label{fig:fielddemo} \end{figure} Although equation~\ref{eq:shockley_ramo} fully describes a field response function for a given point charge at a given moment in time, the easiest way to understand the qualitative behavior of the field response function is through the integral of the induced current and its connection to Green's reciprocity theorem.
Let us consider a case in which a point charge $q_m$ is moving in an inter-electrode space. If we then assume that the charge $q_{m}$ is on an infinitesimal electrode and the sensing electrode is labeled as electrode I, then by Green's reciprocity, \begin{equation}\label{eq:repo} q_m \cdot V_m = Q_I \cdot V_I. \end{equation} Here $Q_I$ is the charge on the sensing electrode induced by $q_m$. $V_m$ is the potential at the location of $q_m$ introduced by the sensing electrode potential $V_I$. With equation~\ref{eq:repo}, we can derive the induced current as \begin{equation}\label{eq:repo1} i = \frac{dQ_I}{dt} = q_m \cdot \vec{\nabla} V_w \cdot \frac{d\vec{r}}{dt}, \end{equation} where the weighting potential\footnote{This is equivalent to the definition from Ramo's theorem, where the voltage on the electrode under consideration is set to 1 V and all others are set to 0 V.} is a dimensionless quantity defined as $V_w=V_m/V_I$. It is easy to see that the above equation recovers equation~\ref{eq:shockley_ramo}, where $\vec{\nabla} V_w$ corresponds to $-\vec{E}_w$ and $\frac{d\vec{r}}{dt}$ is the velocity of the charge $q_m$. Given equation~\ref{eq:repo1}, the integral of the induced current due to a charge $q_m$ moving along its drift path \begin{equation}\label{eq:voltage} \int i dt = q_m \cdot \left( V_w^{end} - V_w^{start}\right) \end{equation} is proportional to the difference of the weighting potential at the end and start of the path. For signal processing and signal simulation, the field response functions for a single ionization electron traveling over a number of possible discrete drift paths are calculated with the Garfield program~\cite{garfield} using a 2D model for MicroBooNE wires with a scheme illustrated in figure~\ref{fig:garfield_schem}.
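Equation~\ref{eq:voltage} can be checked numerically by integrating equation~\ref{eq:repo1} along an arbitrary drift path. In the sketch below, both the weighting potential and the path are toy one-dimensional functions chosen for illustration (not Garfield results):

```python
import numpy as np

# Numerical sketch of eq. (voltage): for ANY weighting potential and drift
# path, the time-integrated induced current depends only on the weighting
# potential at the path's endpoints. V_w below is an arbitrary smooth toy
# function, not a computed MicroBooNE weighting potential.

def V_w(x):
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))   # toy weighting potential

q = -1.0                                 # one electron, in units of e
t = np.linspace(0.0, 1.0, 20001)
x = t**2                                 # some non-uniform drift path x(t)
dVw_dx = np.gradient(V_w(x), x)          # gradient of V_w along the path
i = q * dVw_dx * np.gradient(x, t)       # eq. (repo1): i = q dVw/dx * dx/dt

# Trapezoidal time integral of the induced current
integral = float(np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t)))
expected = q * (V_w(x[-1]) - V_w(x[0]))  # eq. (voltage)
assert abs(integral - expected) < 1e-4
```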
The calculation utilizes a region that spans \SI{22.4}{\cm} (the upper boundary at \SI{20.4}{\cm} in front of the Y plane) along the nominal electron drift direction and \SI{30}{\cm} perpendicular to both the drift direction and the wire orientation. In the calculation, each wire plane contains 101 wires with \SI{150}{\micro\meter} diameter separated by a \SI{3}{\mm} wire pitch. The drift field (\SI{273}{\V/\cm}) is established by applying a negative voltage at the upper boundary of the simulated area. The nominal MicroBooNE operating bias voltages for each wire plane are used in the calculation. There are two stages in calculating the field response functions. The first is the calculation of the electron drift paths in the applied electric field, as shown in figure~\ref{fig:efield}. The second is the calculation of the weighting electric potentials, as shown in the remaining panels of figure~\ref{fig:fielddemo}. The induced current can then be calculated following equation~\eqref{eq:shockley_ramo}. The electron drift velocity as a function of electric field is taken from recent measurements~\cite{Li:2015rqa,lar_property}. For these single-electron simulations, diffusion is omitted. \begin{figure}[!htbp] \centering \includegraphics[width=0.8\textwidth]{figs/garfield_schem.pdf} \caption{Illustration of the 2D Garfield simulation scheme (dimensions not to scale), where black dots indicate individual wires. MicroBooNE's anode plane-to-plane spacing is \SI{3}{\mm}, with \SI{3}{\mm} wire pitch in each plane. The inset denotes the sub-pitch designation of electron drift paths whereupon the field response is calculated.} \label{fig:garfield_schem} \end{figure} For a single drift path calculation, the electron starts from a point \SI{10}{\cm} away from the wire plane above the central wire (shown as ``0 wire'' in figure~\ref{fig:garfield_schem}). The field response functions for that central wire and $\pm$10 wires on both sides (21 wires in total) are calculated.
The simulation is then repeated starting at different transverse locations, each shifted by \SI{0.3}{\mm} from the previous, spanning \SI{0}{\mm} to \SI{1.5}{\mm}. In total, 126 field responses (six electron positions $\times$ 21 readout wires) are calculated. In the described 2D scheme, the inter-plane wires are aligned. The shift in relative inter-plane 2D geometry is a 3D effect and has minimal impact on the calculated field response shape. Figure~\ref{figs:overall_response} shows the overall response functions for each wire of interest for the induction U (top panel), induction V (middle panel), and collection Y (bottom panel) wire planes, where the overall response is the field response convolved with the electronics response (to be described in section~\ref{sec:electronics_resp}). The X-axis in figure~\ref{figs:overall_response} is the initial transverse position of the ionization electron relative to the central wire of each plane, expressed in units of wire number.
% Each wire region ($\pm$0.5 wire pitch) is sampled by 11 electron drift paths with starting points that are regularly separated by \SI{0.3}{\mm}.
% The field response on the central wire for a single path is thus represented by a slice of this plot at a corresponding location on the X-axis for the given path.
The Y-axis is the drift time relative to the electron arriving at the V plane. Time is discretized in units of \SI{0.5}{\us} (one ``tick''), which corresponds to the analog-to-digital converter (ADC) sampling period employed by the MicroBooNE readout electronics. The normalization of the overall response function is chosen so that the integral of the response function of the closest wire of the collection Y plane is unity, which corresponds to a single electron.
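The bookkeeping of the response table described above can be sketched as a simple enumeration (the variable names are illustrative, not taken from any MicroBooNE software):

```python
# Index bookkeeping for the field-response table described above: six
# transverse starting positions (0 to 1.5 mm in 0.3 mm steps) times 21
# wires (the central wire and +-10 neighbours) gives 126 responses.

positions_mm = [0.3 * k for k in range(6)]        # 0.0, 0.3, ..., 1.5 mm
wires = list(range(-10, 11))                      # -10 ... +10
responses = [(p, w) for p in positions_mm for w in wires]
assert len(responses) == 126

# Time is discretized in 0.5 us ticks (the ADC sampling period); e.g. a
# 100 us stretch of waveform spans 200 ticks.
tick_us = 0.5
assert int(100.0 / tick_us) == 200
```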
To emphasize the shape of the field response functions, a special scale labeled ``Log10'' is used to set the color scale of the induced current $i$ (electrons per \SI{0.5}{\us}) in figure~\ref{figs:overall_response}: \begin{equation}\label{eq:log10} i \text{ in ``Log10'' } = \begin{cases} \log_{10}(i\cdot10^{5}), & \text{if } i>1\times10^{-5},\\ 0, & \text{if } -1\times10^{-5} \leq i \leq 1\times10^{-5},\\ -\log_{10}(-i\cdot10^{5}), & \text{if } i<-1\times10^{-5}. \end{cases} \end{equation} The 2D nature of the model used for the MicroBooNE wire planes in the Garfield calculation described above is an approximation of the actual detector. No detector edge effects are considered, and the 2D nature of the model implies that wires are effectively infinite in length and that any effects due to the wires crossing near each other cannot be included. An initial set of field calculations has been performed with custom software that utilizes a 3D model of the MicroBooNE wire planes. More discussion can be found in section~\ref{sec:3D_field}. \begin{figure}[!htbp] \centering \includegraphics[width=0.7\textwidth]{figs/resp_2D_new.pdf} \caption{The overall response functions after convolving the field response function and an electronics response function with a \SI{2}{\us} peaking time are shown in two dimensions. All plots are shown in ``Log10'' scale.} \label{figs:overall_response} \end{figure} Given the results shown above, we can conclude the following: \begin{itemize} \item As an ionization electron moves towards the closest induction plane wire, it climbs up the corresponding induction wire weighting potential; therefore, the induced current is negative, which corresponds to the positive voltage waveform shown in this paper following the MicroBooNE electronics readout convention.
\item As an ionization electron passes the induction wire plane and moves toward the collection wire plane, the induction wire weighting potential decreases and the induced current changes sign. This results in the bipolar shape for the induction plane signals. See examples in figure~\ref{fig:fieldstructure}. \item As an ionization electron drifts towards the wire on which it will eventually be collected (the central wire), the weighting function for that wire always increases. Consequently, collection wire signals are unipolar in shape. \item For an ionization electron originating from the cathode plane (i.e., where the weighting potential is zero), the integrated induction charge in an induction wire plane is nominally zero, since the electron ultimately ends up at a collection wire for which the corresponding induction wire weighting potential is also zero. Similarly, the integrated induction charge in the collection wire should be equal to the charge of one electron, as the corresponding collection wire weighting potential is unity. \item The above conclusion about the integrated induction charge is no longer true for an ionization electron generated inside the active LArTPC volume due to a non-zero weighting potential at the point of origin. The deviation from zero integrated charge is largest for the first induction U plane, as the V and Y wire planes are shielded by the U plane and thus their weighting fields do not extend as far into the volume. However, even for the most extreme case, these deviations are generally limited to a few percent for points of origin 10 cm away from the U wire plane. We should also note that the induced current due to the sudden creation of the ionization electron is balanced by the creation of the positive argon ion.\footnote{At creation, $\int i dt = \sum_m q_m \cdot V_w^{end} =0$, since $\sum_m q_m = 0$. However, immediately following creation, the ion drift velocity is a factor of $\mathcal{O}(10^6)$ smaller than the electron drift velocity.
Thus, drift of the Ar$^+$ contributes negligibly to the induced current.} \item The strength of the induced current on an induction plane wire due to an ionization electron is related to the maximum weighting potential that the electron can reach. Therefore, we expect the induction signal to increase as a given electron is allowed to pass closer to the wire. However, the bias voltage applied to each wire plane forces the electrons to deviate from regions of maximum induction wire weighting potential. This causes the induction signals to be generally smaller than those on the collection wires. \item Since the weighting field of the sense wire extends far beyond the sense wire region, an ionization electron drifting far away from a sense wire will still induce a current on the sense wire, although at a reduced strength. Therefore, the induced current depends on the local charge density, which is further determined by the event topology. \item The time duration of the induced current (field response) depends on the drift velocity of the ionization electron as well as the electrode geometry. For the U wire plane, the induced current becomes sizable when the ionization electron is within several cm of the wire plane and ends when the electron is collected by the Y wire plane. For the V wire plane, the induced current becomes sizable when the electron is about to pass the U wire plane and also ends when the electron is collected by the Y wire plane. For the Y plane, the induced current becomes sizable as the electron is about to pass the U wire plane. These field response functions are further modified for a cloud of electrons due to different drift paths. In particular, broadening is expected for the collection wire response function for a cloud of electrons. \item As illustrated in figure~\ref{fig:efield}, there exist variations in the drift paths of ionization electrons toward the collection wire.
This results in fine structures (see figure~\ref{fig:fieldstructure}) in the field response, which depend on the drift path as well as the weighting potential shown in figure~\ref{fig:fielddemo}. In particular, an electron traveling along a path equidistant from two collection wires (at the boundary of one wire pitch) will be collected with a few \si{\micro\second} delay compared to one that arrives at its collection wire more directly. The phase space for this delay is small, so it affects only a minority of the drifting charge. Integrated over a distribution of ionization electrons, this variation contributes to a broadening of the overall collection signal as shown in the bottom panel of figure~\ref{fig:parallelmip}. The second negative peak in the field response of the induction wire as shown in figure~\ref{fig:fieldstructure} originates from (and thus coincides with) the collection of the ionization electron by the collection wire. However, due to shielding from the V plane (see weighting potential in figure~\ref{fig:fielddemo}), this peak for the U plane is insignificant. \end{itemize} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figs/fieldstructure.pdf} \caption{Field responses (induced currents) from various paths of one drifting ionization electron for the three wire planes. The Y-axis is the integrated charge over \SI{0.1}{\micro\second}. Within the central wire pitch, a center path at \SI{0}{\milli\meter} (solid) and a boundary path at \SI{1.5}{\milli\meter} (dashed) are employed for this demonstration. See figure~\ref{fig:garfield_schem} for an illustration of the simulated geometry. The fine structures in the field responses depend on the path of the drifting ionization electron and the weighting potential as shown in figure~\ref{fig:fielddemo}.} \label{fig:fieldstructure} \end{figure} \subsection{Electronics response}\label{sec:electronics_resp} The induced current on the wire is received, amplified, and shaped by a pre-amplifier.
This process is described by the electronics response function. The impulse response function in the time domain is shown in figure~\ref{fig:resp1_preamp}. The MicroBooNE front-end cold electronics~\cite{cold} are designed to be programmable with four different gain settings (\SIlist[per-mode=symbol]{4.7;7.8;14;25}{\mV\per\femto\coulomb}) and four peaking time settings (\SIlist{0.5;1.0;2.0;3.0}{\us}). In MicroBooNE, the gain is roughly 3.5\% lower than expected and the peaking time is 10\% higher than expected~\cite{SP2_paper}. For a fixed gain setting, the peak of the impulse response is always at the same height, independent of the peaking time. The peaking time is defined as the time difference between the point at 5\% of the peak amplitude on the rising edge and the peak. The different gain settings allow for applications with differing ranges of input signal strength. The four peaking time settings are provided to satisfy the Nyquist criterion~\cite{nyquist} at different sampling rates. Two additional RC filters are employed to remove the baselines of the pre-amplifier and the intermediate amplifier. The intermediate amplifier provides an additional dimensionless gain of 1.2 to compensate for signal loss, without any further shaping or filtering. The time-domain impulse response is as follows (and is shown in figure~\ref{fig:resp1_rc}): \begin{align}\label{eq:rcfilter} {\rm Single~RC:} &~ h(t) = \delta(t) - \frac{1}{\tau}\cdot e^{-t/\tau}u(t),\\ {\rm RC \otimes RC:} &~ h(t) = \delta(t) + \left(\frac{t}{\tau}-2\right)\frac{1}{\tau}\cdot e^{-t/\tau}u(t), \end{align} where the time constant $\tau = RC$ and $\delta(t), u(t)$ are the delta function and the step function, respectively. The time constant is \SI{1}{\milli\second} in MicroBooNE, and the RC filter effect is visible only when the signal is large or long enough.
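As a cross-check of the impulse responses in equation~\ref{eq:rcfilter}, both filters can be discretized and their DC content verified to vanish, which is what removing the baseline means for these high-pass stages (the time grid and the unit-sample stand-in for $\delta(t)$ are illustrative choices):

```python
import numpy as np

# Discrete sketch of the RC high-pass impulse responses in eq. (rcfilter).
# The delta function is represented by a unit sample at t = 0; tau follows
# the 1 ms value quoted in the text and dt is the 0.5 us ADC tick.

tau = 1.0e3                               # time constant, us (1 ms)
dt = 0.5                                  # ADC tick, us
t = np.arange(0.0, 20.0 * tau, dt)

u = np.ones_like(t)                       # step function u(t) on t >= 0
delta = np.zeros_like(t)
delta[0] = 1.0                            # unit sample standing in for delta(t)

h_rc = delta - (dt / tau) * np.exp(-t / tau) * u                        # single RC
h_rcrc = delta + (t / tau - 2.0) * (dt / tau) * np.exp(-t / tau) * u    # RC (x) RC

# Both stages are AC-coupling (high-pass) filters: their impulse
# responses integrate to (approximately) zero, i.e. they pass no DC.
assert abs(float(h_rc.sum())) < 1e-2
assert abs(float(h_rcrc.sum())) < 1e-2
```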
After electronics shaping, the resulting signal waveform, as illustrated in figure~\ref{fig:parallelmip}, is digitally sampled at \SI{2}{\mega\hertz} by a 12-bit ADC with the input voltage ranging from 0 to \SI{2}{\volt}. More details regarding the performance of the MicroBooNE cold electronics can be found in~\cite{noise_filter_paper}. \begin{figure}[!h!tbp] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.99\textwidth]{figs/elec.pdf} \caption{Pre-amplifier response function.}\label{fig:resp1_preamp} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.99\textwidth]{figs/RC.pdf} \caption{RC filter response function.}\label{fig:resp1_rc} \end{subfigure} \caption[elec-resp]{MicroBooNE pre-amplifier electronics impulse response functions are shown for (a) four peaking time settings at 4.7 mV/fC gain and (b) a single RC filter and two independent RC filters (RC $\otimes$ RC).} \label{fig:resp1} \end{figure} \subsection{Topology-dependent TPC signals}\label{sec:topology_signal} As shown in figure~\ref{fig:fielddemo}, the weighting potential is not confined to just the region around one sense wire. An element of drifting ionization charge will induce signals on many wires in its vicinity. The signal on any one wire depends on the ensemble distribution of charge in its neighborhood. To illustrate this effect, we first consider the signal resulting from an isochronous (parallel to the anode plane) minimum ionizing particle (MIP) track perpendicular to each anode plane wire orientation. Figure~\ref{fig:parallelmip} shows the simulated central wire signal when the contributions of ionization charge at different positions beyond the central wire are considered. For all three wire planes, the contribution of long-range induction to the signal is non-negligible.
For the induction V and collection Y wire planes, the proportion of the wire signal is small for ionization charge beyond a couple of wires from the central wire. For the induction U wire plane, which is the first wire plane facing the active TPC region, the contribution to the wire signal can be sizable for ionization charge as far as 10 wires away from the central wire. The modification of the signal due to this long-range induction effect is relatively small for the collection wires, since the two induction planes provide shielding. The unipolar induction signal from any collected electrons on the collection plane is large compared to the bipolar contributions from electrons which collect on neighboring wires. On the other hand, signal distortion due to long-range induction can be sizable for the induction wires due to their smaller field response functions and the potential cancellation of multiple bipolar signals. This is also true for adjacent collection plane wires which did not actually collect the ionization charge. They effectively behave as induction wires. \begin{figure}[!h!tbp] \centering \includegraphics[width=0.625\textwidth]{figs/parallelMIP.pdf} \caption[parallel-mip]{Simulated baseline-subtracted TPC signals for an ideal isochronous MIP track traveling perpendicular to each wire plane orientation ($\theta_{xz}=0^{\circ}$ and $\theta_y=90^{\circ}$, i.e. along the $z$-axis, wire pitch direction, for each wire plane) for MicroBooNE wire plane geometry. The track is an ideal line source that runs perpendicular to all wires, spans the transverse domain of the simulation, and comprises $\approx$4400 ionization electrons per mm mimicking a MIP. Only the field response and pre-amplifier electronics response (\SI{2}{\us} peaking time and \SI{14}{\mV\per\femto\coulomb} gain) are included; diffusion is neglected. ``0 wire'' depicts the signal for charge drifting within one-half pitch distance of a central wire.
The ``[-N,+N] wire'' plots provide the contribution to the signal on the central wire from ionization electrons that drift in progressively more distant $N$ neighboring wire regions.} \label{fig:parallelmip} \end{figure} To describe the signal dependence on track topology and later for evaluation of the signal processing technique (section~\ref{sec:quan_eva}), the detector Cartesian coordinate system as well as each wire plane's coordinate system are defined. \begin{description} \item{\textit{Coordinate system for each wire plane}} -- As shown in figure~\ref{fig:coordinate}, for each wire plane, the $x$-axis is along the drift field direction, the $y$-axis is along the wire orientation, and the $z$-axis is along the wire pitch direction. The nominal (default) detector coordinate system is identical to the collection plane's coordinate system, for which the $y$-axis is vertical in the upwards direction and the $z$-axis is along the wire pitch direction. \item{\textit{Angles for topology description}} -- Based on the predefined coordinate system for each wire plane, two angles are also defined to describe the topology of the track. As shown in figure~\ref{fig:angle}, $\theta_y$ is the angle between the track and the $y$-axis, and $\theta_{xz}$ is the angle between the track's projection onto the $x-z$ plane and the $z$-axis. \end{description} Since the $x$ component determines the time extent of the track and the $z$ component determines the range of sense wires (channels), $\theta_{xz}$ alone determines the shape of the TPC signal, assuming the field response is identical along the $y$-axis (wire orientation). This assumption holds exactly in the current 2D field response calculation and corresponds to roughly the average of the 3D field response. The $y$ component is proportional to the length of the charge deposition projection on the wire direction. It simply scales the charge deposition within one wire pitch by $1/(\cos\theta_{xz}\cdot \sin\theta_y)$.
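The angle definitions above and the resulting charge-per-pitch scaling can be sketched as follows (the helper names are illustrative, not taken from any MicroBooNE software):

```python
import math

# Sketch of the angle definitions for a track direction (dx, dy, dz) in a
# wire plane's coordinate system (x: drift, y: wire orientation, z: wire
# pitch), together with the 1/(cos(theta_xz) * sin(theta_y)) scaling of
# the charge deposited per wire pitch.

def track_angles(dx, dy, dz):
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta_y = math.acos(dy / norm)    # angle to the y-axis (wire direction)
    theta_xz = math.atan2(dx, dz)     # angle of the x-z projection to the z-axis
    return theta_y, theta_xz

def pitch_scaling(theta_y, theta_xz):
    """Track length per wire pitch, relative to a track along the z-axis."""
    return 1.0 / (math.cos(theta_xz) * math.sin(theta_y))

# A track along the z-axis (isochronous, perpendicular to the wires):
ty, txz = track_angles(0.0, 0.0, 1.0)
assert abs(ty - math.pi / 2) < 1e-12 and abs(txz) < 1e-12
assert abs(pitch_scaling(ty, txz) - 1.0) < 1e-12

# A track at 45 degrees in the x-z plane deposits sqrt(2) times more
# charge per wire pitch:
ty, txz = track_angles(1.0, 0.0, 1.0)
assert abs(pitch_scaling(ty, txz) - math.sqrt(2.0)) < 1e-12
```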
As an example, figure~\ref{fig:sim_perpendicularmip} and figure~\ref{fig:sim_parallelmip} demonstrate the dependence of the TPC signal on $\theta_{xz}$ and $\theta_y$. Note that the discussion above refers to the individual coordinate systems and angles for each wire plane; a single track in the detector coordinate system has different angles with respect to each wire plane's coordinate system. \begin{figure}[htbp] \centering \begin{subfigure}[t]{0.8\textwidth} \includegraphics[width=1.0\textwidth]{figs/coordinate.pdf} \caption{Coordinates for collection and induction planes. The $y'$ ($z'$) axis is rotated by 60$^{\circ}$ around the $x$ axis from the $y$ ($z$) axis. The illustrated induction plane coordinate system corresponds to the V plane's; the U plane's is obtained by a rotation in the opposite direction.} \label{fig:coordinate} \end{subfigure} \begin{subfigure}[t]{0.8\textwidth} \includegraphics[width=1.0\textwidth]{figs/angle.pdf} \caption{Definition of two angles, $\theta_{xz}$ and $\theta_y$.} \label{fig:angle} \end{subfigure} \caption{Geometric coordinates and angles for topology description.} \label{fig:geometry} \end{figure} Compared to the collection plane signals, the induction plane signals can be much smaller due to the cancellation of the various bipolar signals from all nearby elements of an extended event topology. In particular, for a track traveling in a direction close to normal to the wire planes (commonly referred to as a \textit{prolonged track}), its induction plane signals will have low amplitude and a long duration in time (figure~\ref{fig:sim_perpendicularmip}). This amplitude can be comparable to noise levels. Having the lowest achievable inherent electronics noise~\cite{noise_filter_paper}, avoiding excess noise sources, and applying proper signal processing are crucial to resolve the induction plane signals.
Recovering these signals enables new opportunities to take full advantage of the LArTPC's capability and reduce its residual ambiguities in later reconstruction. In order to minimize the inherent electronics noise, MicroBooNE uses a custom-designed complementary metal-oxide-semiconductor (CMOS) analog front-end application-specific integrated circuit (ASIC)~\cite{asic} operating at cryogenic temperatures inside the liquid argon. The close proximity of the preamplifier to the sense wire minimizes the input capacitance. The low temperature of LAr further reduces the electronics noise of the ASIC. The residual equivalent noise charge (ENC) after the software noise filtering varies with wire length and is found to be around 400 electrons for the longest wires (4.7 m) in MicroBooNE. This noise level is significantly lower than in previous experiments that utilized warm front-end electronics. More details can be found in~\cite{noise_filter_paper}. \begin{figure}[!h!tbp] \centering \includegraphics[width=0.7\textwidth]{figs/sim_perpendicularMIP.pdf} \caption[sim-perpendicular-mip]{Simulated baseline-subtracted MicroBooNE TPC signals for a 1 meter long MIP track ($\approx$4400 ionization electrons per mm) traveling perpendicular to each wire plane orientation ($\theta_y=90^{\circ}$) with $\theta_{xz}$ varying in the $x-z$ plane with respect to the $z$-axis. Detector physics effects and the nominal MicroBooNE electronics response~\cite{SP2_paper} were included. The signal shape is solely determined by $\theta_{xz}$, independent of $\theta_y$.} \label{fig:sim_perpendicularmip} \end{figure} \begin{figure}[!h!tbp] \centering \includegraphics[width=0.7\textwidth]{figs/sim_parallelMIP.pdf} \caption[sim-parallel-mip]{Simulated baseline-subtracted MicroBooNE TPC signals for a 1 meter long MIP track ($\approx$4400 ionization electrons per mm), isochronous ($\theta_{xz}=0^{\circ}$) with $\theta_y$ varying with respect to the wire orientation.
Detector physics effects and the nominal MicroBooNE electronics response~\cite{SP2_paper} were included. For a given $\theta_{xz}$, $\theta_y$ only changes the signal amplitude.} \label{fig:sim_parallelmip} \end{figure} \section{Introduction}\label{sec:introduction} The liquid argon time projection chamber (LArTPC)~\cite{rubbia77,Chen:1976pp,willis74,Nygren:1976fe} is an innovative detector technology being actively developed worldwide. Several features of the LArTPC make it well adapted to the study of neutrinos and other rare processes. Argon is readily available commercially ($\sim$\SI{1}{\percent} by volume, the most abundant noble gas in the atmosphere). Free electrons have high mobility, low diffusion~\cite{Li:2015rqa}, and very long survival time in pure liquid argon (LAr), making it an attractive material for a TPC. In addition, LAr has a relatively high density and a high scintillation light yield. In the near term, the Short-Baseline Neutrino Program~\cite{Antonello:2015lea} will utilize three LArTPCs (MicroBooNE, SBND, and ICARUS) at Fermi National Accelerator Laboratory (Fermilab) to search for eV-scale sterile neutrino(s) and measure neutrino-argon interaction cross sections. In the long term, the long-baseline Deep Underground Neutrino Experiment (DUNE)~\cite{Acciarri:2015uup} is planning to use four large 10-kiloton LArTPC modules as far detectors to search for leptonic CP violation, determine the neutrino mass hierarchy, test the standard three-neutrino paradigm, search for proton decay, and potentially observe supernova neutrino bursts. The development of high-quality and fully-automated event reconstruction algorithms for LArTPC neutrino detectors is crucial to the success of the short-baseline and long-baseline physics programs and is an area of significant activity~\cite{uboone-pandora, uboone-dl, uboone-MCS, wirecell}. The robust recovery of the ionization signals from the LArTPC images is the critical first stage of LArTPC reconstruction.
The MicroBooNE detector~\cite{Acciarri:2016smi} is the first LArTPC in the Short-Baseline Neutrino Program~\cite{Antonello:2015lea} to be operational. It is a single-phase LArTPC built to observe interactions of neutrinos from the on-axis Booster~\cite{bnb} and off-axis NuMI~\cite{numi} beams at Fermi National Accelerator Laboratory in Batavia, IL. The TPC LAr volume is 2.56 m $\times$ 2.3 m $\times$ 10.4 m with an active mass of about 90 metric tons, housed in a foam-insulated evacuable cryostat vessel. At the anode end of the 2.56 m drift distance, there are three parallel wire readout planes~\cite{Acciarri:2017wp}. The first wire plane facing the cathode is labeled ``U'', and the second and third planes are labeled ``V'' and ``Y'', respectively. The wire pitch and the gap between two adjacent wire planes are both 3 mm. The 3456 wires in the Y plane are oriented vertically. The U and V planes each contain 2400 wires oriented $\pm$60$^\circ$ with respect to vertical. Behind the wire planes and external to the TPC, there is an array of 32 photomultiplier tubes~\cite{Conrad:2015xta} to detect scintillation light for triggering, timing, and other purposes. \begin{figure}[!h!tbp] \includegraphics[width=1.12\textwidth]{figs/LArTPC_Concept.pdf} \caption[TPC Basics]{Diagram illustrating the signal formation in a LArTPC with three wire planes~\cite{Acciarri:2016smi}. The signal on each plane produces a 2D image of the event. For simplicity, the signal in the U induction plane is omitted from the diagram. } \label{fig:tpccartoon} \end{figure} The TPC signal formation in MicroBooNE is illustrated in figure~\ref{fig:tpccartoon}. At a drift field of \SI{273}{\V/\cm} corresponding to a cathode high voltage of \SI{-70}{\kV}, the ionization electrons drift through the LAr detector volume along the electric field lines at a nominal speed of about \SI{1.10}{\mm/\us} toward the anode wire planes.
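A quick arithmetic check of the drift and readout numbers quoted above (nominal values only; actual operating conditions vary):

```python
# Nominal MicroBooNE numbers quoted in the text: 2.56 m drift distance,
# ~1.10 mm/us drift speed at 273 V/cm, and 0.5 us ADC ticks.

drift_length_mm = 2560.0       # 2.56 m drift distance
v_drift_mm_per_us = 1.10       # nominal drift speed
tick_us = 0.5                  # 2 MHz ADC sampling period

max_drift_us = drift_length_mm / v_drift_mm_per_us
assert 2300 < max_drift_us < 2350          # about 2.3 ms full drift

ticks_full_drift = max_drift_us / tick_us
assert 4600 < ticks_full_drift < 4700      # several thousand ticks per drift

# Wire channel total: one collection plane plus two induction planes.
assert 3456 + 2 * 2400 == 8256
```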
Different bias voltages, \SI{-110}{\V}, \SI{0}{\V}, and \SI{+230}{\V}, are applied to the U, V, and Y wire planes, respectively, to ensure all ionization electrons pass through the U and V planes before being collected by the Y plane~\cite{Bunemann_Cranshaw_Harvey_1949} at the nominal MicroBooNE operating voltage of \SI{-70}{\kV}. As ionization electrons drift towards and then past the wires of the U and V planes, currents with bipolar shape are induced on the U and V planes. In contrast, a unipolar-shaped current is induced on a wire of the Y plane as all nearby ionization charge is collected. The U and V wire planes are commonly referred to as the induction planes. Although also measuring induced current, the Y wire plane is commonly referred to as the collection plane. While the collection plane signal is mostly unipolar and large in amplitude with a Gaussian time profile, the induction plane signal is bipolar and small in amplitude with a complex time profile. The latter is due to the overlapping of many bipolar signal shapes as a distribution of drifting charge passes near the wires. Despite complications in the induction plane signal, the combination of the induction and collection wire planes is essential for tomographic event reconstruction~\cite{wirecell} in single-phase LArTPCs. Leveraging the induction signal in combination with the collection signal is important to fully exploit single-phase LArTPC capabilities. The implementation of the induction and collection wire readout planes is a unique feature of the current single-phase LArTPC detectors. An alternative readout scheme, 2D pixel readout, would not suffer from the complications of the induction plane signals. However, this alternative scheme is not yet practical for large detectors, though good progress has been made~\cite{pixel-readout}.
If the wire readout in MicroBooNE were replaced by a full 2D pixel readout, the total number of channels would be 2.7 million instead of 8,256, resulting in a significant increase in the cost of electronics. Furthermore, the power consumption of these electronics inside LAr would be a serious concern. Given the wire readout technology employed in the single-phase LArTPCs, reconstruction of the charge passing through the induction wire plane improves the correlation of signal between the multiple anode plane views and helps resolve degeneracies inherent in a projective wire geometry. The successful reconstruction of a 3D event topology generally requires robust signal extraction in multiple 2D projection views. Since the ionization electrons are not collected on any of the induction wire planes, they naturally provide additional non-destructive views of the ionization electrons from the charged particle tracks. A successful extraction of the ionization electron information from the complicated induction plane signals is essential for 3D event reconstruction in single-phase LArTPCs using tomographic reconstruction and is expected to further enhance 3D reconstruction for techniques that match the image in different 2D projection views. This paper is organized as follows. In section~\ref{sec:signal_form}, we review the process of TPC signal formation including the induced current generation, signal amplification and shaping, as well as the impact of noise. In section~\ref{sec:charge_extract}, we describe the principle and the algorithm implemented to extract ionization charge from the TPC signal. The performance of the TPC signal processing chain is then evaluated with a detailed TPC simulation in section~\ref{sec:evaluation}. The assessment of this signal processing technique on MicroBooNE data is provided in a dedicated accompanying paper~\cite{SP2_paper}. A discussion of some identified limitations of the current techniques is in section~\ref{sec:discussion}. 
A summary and prospects for future improvements are presented in section~\ref{sec:summary}. \section{Discussion}\label{sec:discussion} The signal processing and TPC simulation described and evaluated here allows for a detailed appraisal of cosmic and neutrino-induced TPC event reconstruction in MicroBooNE. For example, with this work, the study of cosmic ray reconstruction efficiencies in MicroBooNE, as shown in~\cite{uboone-mucs}, can be extended to address the reconstruction efficacy of tagged cosmic events at the raw waveform level in MicroBooNE. However, the current state of signal processing and TPC simulation is not without deficiencies, even though ideal implementations of the TPC and electronics designs are assumed. Some of these shortcomings and methods to remedy them are discussed in the following sections. \subsection{Limitations of 2D field responses}\label{sec:3D_field} As described above, this work relies on calculating field response functions using a simplified 2D model of the MicroBooNE detector. While these calculations are an improvement over previous ones in terms of including both fine-grained variations and more correct long-range induction, there exists some uncertainty and concern about any residual limitations. This section lists the potential limitations, discusses some remediation and enumerates the technical challenges to overcome. The current model assumes three parallel wire planes which extend to infinity in both transverse directions, with the calculated response limited transversely to ten wires on either side of the central wire of interest. As such, the model cannot accommodate detector edges nor variations in individual wire location, angle, or bias voltage. The 2D nature precludes accounting for any possible variation along the direction of the wires and in particular requires a somewhat arbitrary choice to be made for the transverse location of wires in one plane relative to the others. 
Also as described above, the induced current is appreciable over a greater range than just the wire region nearest to a given element of drifting charge. At the same time, beyond a distance of ten wire regions, the strength is reduced to a negligible level in most cases. This means that multiple sets of 2D field response functions can be used to model variations on a patchwork basis. Such an approach has been developed for modeling the MicroBooNE wires with bias voltages consistent with a short to ground. This improvement still suffers from the above limitations, and in particular must have the parameters governing bias voltage and relative wire positions tuned to match real data from the detector. One approach to address these limitations is to develop a new model which spans all three spatial dimensions. This approach goes beyond what the current Garfield 2D analytic calculation provides. The Finite Element Method can be applied to a 3D geometry of arbitrary construction. The required computation time for this method naively scales as the volume of the geometry relative to the feature size. Calculations spanning tens of centimeters with wires of \SI{0.15}{\mm} diameter become computationally challenging. This cost can be somewhat mitigated by applying adaptive grid techniques. Development of FEM-based 3D field calculations is an active area of investigation, but current calculations only span a subset of the total required volume. Another promising approach to calculate the electrostatic fields in 3D is to use the Boundary Element Method~\cite{bem}. Its computation cost scales as the surface area instead of the volume of the geometry. An initial investigation of this approach has been carried out~\cite{bempp} and some examples of its results are included below. 
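The scaling argument above can be made concrete with a rough order-of-magnitude estimate. The region size and the assumption that the \SI{0.15}{\mm} wire diameter sets the finest feature scale are illustrative only, not a real meshing calculation:

```python
# Rough comparison of how FEM (volume-like) and BEM (surface-like) cost
# estimates scale for a region tens of centimeters across, resolved at the
# wire-diameter scale. Orders of magnitude only; not a meshing calculation.
region_mm = 200.0      # ~20 cm span (assumed for illustration)
feature_mm = 0.15      # wire diameter sets the finest feature size

fem_cells = (region_mm / feature_mm) ** 3   # scales with volume
bem_elems = (region_mm / feature_mm) ** 2   # scales with surface area

print(f"FEM ~ {fem_cells:.1e} cells, BEM ~ {bem_elems:.1e} elements")
```

The ratio of the two estimates grows linearly with the region size, which is the essential advantage of the BEM noted in the text.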
Regardless of the method employed to solve the Laplace equation in 3D, there are various technical computing challenges related to the increase in the size of the resulting field response data set compared to that from 2D calculations. For example, the equivalent of defining a 1D linear grid of six drift path starting points as illustrated in figure~\ref{fig:garfield_schem} is to calculate paths starting on some 2D planar grid. Continuing to require discrete translational symmetry reduces the problem substantially. The minimum set of unique drift paths on the MicroBooNE wire crossing pattern (a hexagonal lattice) is an order of magnitude larger than the 1D linear grid case. For each drift path, one must still calculate one field response function for each wire within range for each plane. The simulation must then convolve the drifting charge distributions in 3D, in part by performing a 2D interpolation to the nearest drift paths. Figure~\ref{fig:larf-triangle} illustrates one possible grid which spans about four times the minimum triangle and has roughly \SI{0.1}{\mm} spacing. Detectors lacking the symmetry of the MicroBooNE wire crossing pattern will require a much larger minimum region. To model a region that lacks translational symmetry requires calculating a family of drift paths which span the entire region. Figure~\ref{fig:larf-paths} shows an example family of drift paths somewhat equivalent to those shown in figure~\ref{fig:efield} but calculated in 3D using the BEM. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{figs/larf-path-starts.png} \caption{A region of starting points for electron drift paths given a 3D model for MicroBooNE. Blue wires are in the U plane, red are in the Y plane. Potential starting points are represented by white balls. 
In this example, their separation is roughly \SI{0.1}{\mm} and they span a region about four times as large as the absolute minimum given the wire crossing symmetry.} \label{fig:larf-triangle} \end{figure} \begin{figure}[htbp] \centering \includegraphics[height=3.5cm,clip,trim=0cm 5cm 0cm 17cm]{figs/larf-upaths1.png} \includegraphics[height=4.5cm,clip,trim=10cm 0cm 10cm 0cm]{figs/larf-upaths8.png} \includegraphics[height=4.5cm,clip,trim=4cm 0cm 10cm 0cm]{figs/larf-upaths7.png} \includegraphics[height=4.5cm,clip,trim=4cm 0cm 10cm 0cm]{figs/larf-upaths6.png} \includegraphics[height=4.5cm,clip,trim=10cm 0cm 0cm 10cm]{figs/larf-upaths5.png} \caption{Various views of a family of possible drift paths through a 3D model of MicroBooNE wires. Paths are colored based on the local electrostatic potential. Path starting points are shown as white balls. Red wires make up the collection plane. The top image shows a view looking approximately across the detector, perpendicular to the collection wires. The bottom row shows views, in order of left to right: along a Y, V and U wire. The final view on the bottom is from behind the wire planes.} \label{fig:larf-paths} \end{figure} In addition to the residual discrepancy between the 2D approximation and the real 3D case, the current 2D Garfield calculation has limitations due to its own configuration. Close proximity of the cathode to the wire planes (in our calculation the cathode is \SI{20}{\cm} from the V plane) results in a squeezing of the weighting potential. The effect of this squeezing is most pronounced on the U plane, which has no shielding. Nevertheless, this squeezing effect is only considerable in the case of a long track at small $\theta_{xz}$. In this circumstance, the field response from a larger range of wires (greater than the $0\pm10$ wire in the current calculation) is required because the coherent summation of long-range induction from distant wires is non-negligible. 
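The long-range induction and weighting-potential effects discussed in this section follow from the Shockley--Ramo theorem: the instantaneous current induced on an electrode is the charge's drift velocity projected onto that electrode's weighting field. A minimal 1D sketch with a toy weighting potential (not the Garfield result) illustrates the bookkeeping:

```python
import numpy as np

# Shockley-Ramo theorem: induced current i = -q * v . E_w, where E_w is the
# weighting field of the electrode of interest. Toy 1D weighting potential
# rising from ~0 to ~1 near the wire; illustrative shape only.
x = np.linspace(0.0, 10.0, 1001)               # drift coordinate, arbitrary units
w = 1.0 / (1.0 + np.exp(-(x - 8.0) * 2.0))     # logistic weighting potential

q, v = 1.0, 1.0                                 # unit charge, constant drift speed
E_w = -np.gradient(w, x)                        # weighting field = -grad(w)
i = -q * v * E_w                                # induced current along the path

# Time-integrated induced current equals q * (w_end - w_start):
dx = x[1] - x[0]
Q = dx * (0.5 * i[0] + i[1:-1].sum() + 0.5 * i[-1]) / v
print(round(Q, 3))   # → 0.982
```

Because the weighting potential of a distant wire is small but nonzero over many wire pitches, a long track at small $\theta_{xz}$ sums many such small contributions coherently, which is why the $0\pm10$ wire range can become insufficient.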
A dedicated test-stand to calibrate the single-phase LArTPC field response to a point source charge would greatly aid in validating the residual 3D effects relative to a 2D field response calculation. \subsection{Limitations of the current ROI finding}\label{sec:limitation_ROI} ROI finding is a critical step in charge extraction and has direct consequences for charge bias and inefficiency. For the induction planes, a prolonged (large $\theta_{xz}$) track will have a large bias and inefficiency in charge extraction. An example of such a track is presented in figure~\ref{fig:roi-failure}. The ROI failure occurs on the U plane. The V and Y planes provide an indication of how the topology should have been reconstructed in the U plane view. Given that these highly inclined tracks are extended in time and have small amplitude, the associated ROIs tend not to be found. This effect is more serious at large $\theta_{xz}$ and for tracks with long drift time, where the effects of diffusion further reduce the amplitude. For short drift distances with small diffusion, ROI finding for large-angle tracks has had some success. Figures~\ref{fig:triangleEVD} and~\ref{fig:triangle} demonstrate the successful recovery of a large $\theta_{xz}$, time prolonged signal on the U plane. In the raw data waveform after noise filtering, the signal is buried within the resolution of the baseline. In this case, the 2D deconvolution procedure excels at extracting this low amplitude signal from noise. This example also highlights the dependence of the signal shape on the intra- and inter-wire variation of the weighting field; the reconstructed signal is quasi-triangular, as discussed in section~\ref{sec:evaluation_result}. A web-based interactive tool to explore the raw waveform, waveform after noise filtering, and the charge spectra after deconvolution can be found in~\cite{magnify}. 
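The deconvolution step referred to throughout can be illustrated in one dimension: divide the measured waveform by the response in the frequency domain, applying a filter to suppress noise-dominated frequencies. This is a minimal 1D sketch with an invented Gaussian response and filter; the actual procedure is the 2D (time and wire) version described earlier in the paper:

```python
import numpy as np

# 1D frequency-domain deconvolution with a Gaussian low-pass filter.
# Illustrative stand-in for the 2D (time and wire) deconvolution in the text.
n = 256
t = np.arange(n)

true_charge = np.zeros(n)
true_charge[100:110] = 50.0                        # ionization charge per tick

# Toy unipolar field + electronics response (NOT the MicroBooNE response).
response = np.exp(-0.5 * ((t - 10) / 3.0) ** 2)
measured = np.real(np.fft.ifft(np.fft.fft(true_charge) * np.fft.fft(response)))

freq = np.fft.fftfreq(n)
low_pass = np.exp(-0.5 * (freq / 0.15) ** 2)       # suppress high frequencies

deconv = np.real(np.fft.ifft(np.fft.fft(measured) / np.fft.fft(response) * low_pass))

# The filter is unity at zero frequency, so total charge is preserved:
print(round(deconv.sum(), 1), round(true_charge.sum(), 1))   # → 500.0 500.0
```

The low-pass filter is what keeps the division by small response values at high frequency from amplifying noise, at the cost of some smearing of the recovered charge distribution.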
\begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{figs/5366.pdf} \caption{An example topology from MicroBooNE data (event 31, run 5366) for which the signal processing failed to find an ROI on the U plane. The signal after 2D deconvolution is displayed. Black rectangles denote inactive TPC wires. (a) U plane view. (b) V plane view. (c) Y plane view.} \label{fig:roi-failure} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{figs/triangleEVD.pdf} \caption{A neutrino candidate from MicroBooNE data (run 3493, event 41075) measured on the U plane after (a) noise filtering (in units of average baseline subtracted ADC scaled by 250 per \SI{3}{\us}) and (b) 2D deconvolution (in units of electrons per \SI{3}{\us}). The vertical green lines denote the wires corresponding to the waveforms and charge spectra, respectively, in figure~\ref{fig:triangle}.} \label{fig:triangleEVD} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{figs/dataWfmTriangle.pdf} \caption{Noise-filtered waveforms (black, in units of average baseline subtracted ADC per \SI{3}{\us}) and charge spectra (red, in units of electrons per \SI{3}{\us}) corresponding to a U plane track with $\theta_{xz}\approx84^{\circ}$. Note the quasi-triangular shape of the reconstructed signal from the large-angle track. This signal is indistinguishable from the baseline in the noise-filtered waveform.} \label{fig:triangle} \end{figure} At present, the ROI finding considers the amplitude of the deconvolved signal compared with a certain predefined noise threshold as well as the connectivity of the ROI windows in wire and time dimensions. However, the shape of the reconstructed charge distribution may help predict or extrapolate the ROI window to fully cover the signal range, and the starting drift position could be used to deconvolve the corresponding diffusion smearing. 
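The ROI logic described above, thresholding the deconvolved waveform and keeping connected windows, can be sketched for a single channel. The threshold and padding values below are illustrative, not the tuned MicroBooNE parameters, and the real algorithm also uses connectivity in the wire dimension:

```python
import numpy as np

def find_rois(waveform, threshold, pad=3):
    """Return merged [start, end) windows where |waveform| exceeds threshold.

    Each above-threshold sample is padded on both sides; overlapping
    windows are merged, mimicking connectivity in the time dimension.
    (Illustrative sketch; not the tuned MicroBooNE algorithm.)
    """
    hits = np.flatnonzero(np.abs(waveform) > threshold)
    rois = []
    for h in hits:
        lo, hi = max(h - pad, 0), min(h + pad + 1, len(waveform))
        if rois and lo <= rois[-1][1]:
            rois[-1][1] = max(rois[-1][1], hi)   # merge with previous window
        else:
            rois.append([lo, hi])
    return [tuple(r) for r in rois]

wf = np.zeros(50)
wf[10:13] = 8.0          # a signal blip
wf[30] = 9.0             # an isolated spike
print(find_rois(wf, threshold=5.0))   # → [(7, 16), (27, 34)]
```

A fixed threshold is exactly what fails for prolonged, low-amplitude induction-plane signals: the deconvolved amplitude stays below threshold over the whole track, motivating the adaptive thresholds and second-pass processing discussed next.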
An adaptive ROI finding threshold would be useful to balance the noise level and charge extraction efficacy. In short, a second-pass signal processing may be attempted with more knowledge of the charge deposition that can only be obtained after downstream event reconstruction, e.g., start time matching, 3D object matching/imaging, clustering, tracking, etc. These considerations are the subject of ongoing work. \subsection{Impact of excess noise in real data}\label{sec:impact_noise} Excess noise, as introduced in~\cite{noise_filter_paper}, can be characterized and largely removed by noise filtering prior to signal processing. However, the noise filter changes the noise spectrum as well as the signal spectrum. For the MicroBooNE detector, the coherent noise removal in the noise filter has a large impact on signal processing. The coherent noise originates from a low-voltage regulator which serves a number of channels/wires and manifests itself at the low end of the frequency spectrum, e.g. $\leq$\SI{30}{\kHz}. For an isochronous track, the noise filter can mistake the signal as part of the coherent noise and distort the waveforms from a range of wires/channels. Unfortunately, signals from the induction plane, particularly the U plane, are especially affected and have a large fraction of their power spectrum overlapping with that of the coherent noise. This gives rise to an additional bias for the reconstructed charge for the U plane, a \SI{-10}{\percent} to \SI{-20}{\percent} effect. As the angle of the track increases with respect to the wire plane, this effect is mitigated dramatically, and can typically be ignored for tracks with angle greater than 5$^\circ$. \section{Summary}\label{sec:summary} This paper presents the principle of single-phase LArTPC signal formation and a method to perform ionization electron signal analysis and processing for wire readout schemes. Several components of the signal processing are described. 
Most notably, the implementation of a 2D deconvolution (in time and wire dimensions) is introduced as a natural complement to the field ranges and contours inherent to LArTPC signals. Also, the careful selection and refinement of signal regions of interest are emphasized for a successful extraction of the charge distribution from all sense wire planes. A full TPC signal simulation with long-range and fine-grained field responses has been developed and employed to evaluate the signal processing. Also, an analytic method has been introduced to simulate inherent electronics noise, which is a driving factor in signal processing performance. This motivates the design decision to develop and employ cold electronics~\cite{cold} for LArTPC applications. With the signal and noise simulation, a quantitative assessment of the signal processing method has been performed. For track-like topologies, the signal processing for the collection plane achieved better than \SI{5}{\percent} charge resolution, full efficiency, and negligible charge bias for track angle $\theta_{xz}\lesssim80^{\circ}$; the signal processing for the induction plane achieved better than \SI{10}{\percent} charge resolution, full efficiency, and negligible charge bias with $\theta_{xz}\lesssim40^{\circ}$. For the induction planes, both the charge bias and charge resolution increase significantly with an increase of track angle $\theta_{xz}$ greater than $40^{\circ}$. The signal processing has a significant increase in inefficiencies/failures for induction plane tracks with $\theta_{xz}\gtrsim75^{\circ}$. The simulation is meant not only for the evaluation of signal processing but also as a fundamental tool to understand the LArTPC detector by comparison with real data. The data-to-simulation comparison will be demonstrated in a forthcoming paper~\cite{SP2_paper}. Weak points of the current signal processing method are also presented. These shortcomings are strongly dependent on signal topology. 
Improvements to these deficiencies are informed by the full chain of event reconstruction and are an active area of development. Signal processing is the foundation of LArTPC event reconstruction paradigms such as Pandora multi-algorithm pattern recognition~\cite{uboone-pandora}, deep learning with convolutional neural networks~\cite{uboone-dl}, Wire-Cell tomography~\cite{wirecell}, and therefore the foundation of physics analyses. We present a procedure to extract signal from all planes, taking special precautions to counteract difficult circumstances innate to the induction plane signal. This technique makes a more robust input to the subsequent 3D reconstruction. The signal processing method outlined here brings closer the realization of the promised capability of LArTPC detector technology. \section{Identification and Filtering of MicroBooNE TPC Excess Noise}\label{sec:types} \begin{figure}[!h!tbp] \center \includegraphics[width=\fighalfwidth]{figs/final/noise_summary_1us.pdf} \includegraphics[width=\fighalfwidth]{figs/final/noise_summary_2us.pdf} \caption[excess]{Average noise magnitude in the frequency domain for 1 $\mu$s (left) and 2 $\mu$s (right) peaking times is shown. Three sources of excess noise are illustrated on the left panel: low-voltage regulator noise, cathode HV harmonic noise, and 900 kHz burst noise.} \label{excess_noise_summary} \end{figure} In Sec.~\ref{sec:cold_noise}, we described the inherent noise of the electronics, which is expected to be dominated by the noise associated with the first transistor. In MicroBooNE, we observed excess noise beyond the expected inherent noise. Figure~\ref{excess_noise_summary} shows the average spectrum in the frequency domain for the U plane from an example event for 1 $\mu$s (left) and 2 $\mu$s (right) peaking times. The three major excess noise sources are clearly visible. 
\begin{itemize} \item {\bf Noise induced by the low-voltage regulators:} \\ In Sec.~\ref{sec:coherent}, we describe the noise originating from the low-voltage regulators supplying the ASIC operating voltages. This noise, coherent for all the channels supplied by each regulator, shows up at low frequency and affects all wires. \item {\bf Noise induced by the cathode high voltage power supply:} \\ In Sec.~\ref{sec:harmonic}, we describe the noise induced on the anode wire plane by the voltage fluctuations in the cathode potential, such as ripple from the high voltage (HV) supply. This type of noise affects all the wires in the first induction plane, at a reduced level in the second induction wire plane, and at an even lower level in the collection wire plane. \item {\bf 900 kHz Burst Noise:} \\ In Sec.~\ref{sec:zig-zag}, we describe the observed burst noise with a frequency of around 900 kHz. This type of noise has a clear position dependence and a burst nature. While the exact source has not been clearly identified, evidence is pointing towards the PMT high voltage supply or laser system. \end{itemize} In the following sections, we describe the aforementioned excess noise sources in detail and compare them with the expectation of the inherent noise to illustrate their impact on the TPC performance. A software filter has been developed to largely remove the excess noise with a minimal impact on signal. \subsection{Low Frequency Noise from Voltage Regulator}~\label{sec:coherent} This low frequency noise is injected via the power input to the cold ASIC. More specifically, it is due to the voltage regulator used to provide a stable voltage for the cold ASIC. It is pronounced at low frequencies ($\lesssim 30~\textrm{kHz}$) as shown in Fig.~\ref{excess_noise_summary}. While the regulator noise is very low, it still exceeds the noise of the p-channel metal-oxide semiconductor (pMOS) input transistor. 
The source electrode of this transistor is driven by the regulator voltage with respect to the gate electrode that is connected to a sense wire. The resulting injected noise is thus proportional to the wire capacitance. This noise is typically found to be correlated across 192 channels on the same service board that contains the same low voltage regulators. Usually, the correlation is highest within the 48 channels covering 3 ASICs. To subtract the regulator coherent noise, at each time tick, the median value of the 48 channels is calculated and is used to form a correction waveform to be subtracted from each of the 48 channels. Before subtraction, the region in the correction waveform corresponding to the signal is identified using a threshold cut based on the ADC root-mean-square (RMS) value. Any region above 5 times the RMS, extended by seven to ten time ticks on both sides, is excluded. Instead of making the correction waveform completely zero in these regions, linear interpolation is used. It was also noticed that the wires on the service board's edges usually experience a larger contribution from the coherent noise. In order to further reduce the noise level for these edge wires, a scaling adjustment of the correction waveform is performed during waveform subtraction. The scaling factor is calculated as the ratio of the correlation coefficient of a single channel to the average correlation coefficient of the 48 channels. The bottom right plot of Fig.~\ref{freq_filt} shows the waveform after the coherent noise removal. As discussed in Sec.~\ref{sec:hardware}, the regulator noise has been largely suppressed with a hardware upgrade of the service boards. \subsection{The HV Power Supply Noise}~\label{sec:harmonic} As shown in Fig.~\ref{excess_noise_summary}, the examination of the noise on U-plane channels in the frequency domain reveals a series of single-frequency lines that appear to be odd harmonics of the 36 kHz ripple frequency of the HV power supply. 
The two main (highest and second highest) single-frequency noise components correspond to 36 kHz and 108 kHz, with the amplitudes of the other single-frequency components being much smaller. The peak magnitude of this harmonic noise in the first induction plane ``U'' is about 15 ADC counts. The magnitude of this harmonic noise is attenuated in the second induction plane ``V'' by about a factor of three due to the shielding by U-wires, and the noise on ``Y'' collection wires is further attenuated to a negligible level by shielding of both U and V wires. In order to understand how sensitive the anode plane is to tiny potential variations at the cathode 2.5 meters away, a simple estimation can be made. For a 2.5 m wire length with a sense wire capacitance of $\sim 20$~fF/m, the induced charge would be $\sim 0.05$~fC/mV, resulting in $\sim 300$~e$^-$/mV. The expected peak amplitude of the waveform is $\sim 15$ ADC counts which corresponds to a charge of $\sim 0.5$~fC. Therefore, only about 10 mV at the cathode can produce the observed noise, or about 5 parts in $10^8$ of the drift voltage. In the software filter, the harmonic noise is directly removed from the frequency domain. Fig.~\ref{freq_filt} shows the effect of noise filters on the single-frequency noise components. The top left panel shows an example waveform from a U-plane where it is difficult to differentiate the signal from the noise. The top right panel is the same waveform after filtering out the 36 kHz noise and the bottom left panel is after filtering out both the 36 kHz and 108 kHz noise. Two peaks from signals can be clearly identified after the noise filtration. As discussed in Sec.~\ref{sec:hardware}, the HV power supply noise was completely removed during the summer-shutdown detector hardware upgrades, after installation of an additional hardware filter in the high voltage cable connecting to the TPC cathode. 
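Removing the harmonic noise "directly from the frequency domain", as described above, amounts to notching out the FFT bins at the ripple harmonics. A minimal sketch, where the sampling rate, waveform length, and notch width are assumptions chosen for illustration rather than the actual filter parameters:

```python
import numpy as np

# Notch out fixed-frequency harmonic noise (36 kHz and 108 kHz lines) by
# zeroing the corresponding FFT bins. Sampling rate and notch width are
# illustrative; the real software filter operates on the full TPC data.
fs = 2.0e6                     # sampling rate in Hz (assumed for illustration)
n = 10000
t = np.arange(n) / fs

signal = np.exp(-0.5 * ((t - 2.5e-3) / 5e-5) ** 2)        # a Gaussian pulse
noise = 3.0 * np.sin(2 * np.pi * 36e3 * t) + 1.0 * np.sin(2 * np.pi * 108e3 * t)
wf = signal + noise

spec = np.fft.rfft(wf)
freq = np.fft.rfftfreq(n, d=1.0 / fs)
for f0 in (36e3, 108e3):
    spec[np.abs(freq - f0) < 500.0] = 0.0    # zero bins within 500 Hz of each line

filtered = np.fft.irfft(spec, n)
print(round(np.max(np.abs(filtered - signal)), 2))   # → 0.0
```

Because the pulse has negligible spectral content at the notched frequencies, the signal survives essentially unchanged; a real signal with power at those bins would incur a small bias, which is one reason the hardware fix described above is preferable.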
\begin{figure}[!h!tbp] \center \includegraphics[width=\fighalfwidth]{figs/final/raw_signal_new.pdf} \includegraphics[width=\fighalfwidth]{figs/final/raw_signal_after_36_new.pdf} \includegraphics[width=\fighalfwidth]{figs/final/raw_signal_after_both_new.pdf} \includegraphics[width=\fighalfwidth]{figs/final/raw_signal_after_full_filter_new.pdf} \caption[freq_filt]{A U-plane waveform from data (top left), the same waveform after filtering out 36 kHz noise (top right), and the same waveform after filtering out both 36 kHz and 108 kHz noise components (bottom left) are shown. The waveform after the full software noise filtering chain is shown in the bottom right. The 36 and 108 kHz noise is due to the HV supply ripple; this noise is about $\sim$3 times lower on the V plane and negligible on the Y plane.} \label{freq_filt} \end{figure} \subsection{900 kHz Noise (Burst Noise)}~\label{sec:zig-zag} \begin{figure}[!h!tbp] \center \includegraphics[width=\figwidth]{figs/final/zigzag_time_domain-1.pdf} \includegraphics[width=\figwidth]{figs/final/zigzag_time_domain-2.pdf} \caption[zigzag_time_domain]{The ``900 kHz noise'' of a V-plane channel is shown. The blue highlighted regions are identified burst-noise regions, and red is the actual channel waveform.} \label{zigzag_time_domain} \end{figure} The waveforms with the ``900 kHz noise'' are shown in Fig.~\ref{zigzag_time_domain}. The blue highlighted regions are identified as burst-noise regions using simple selection criteria: we look for consecutive time ticks alternating above and below the baseline, with this pattern repeating for at least 5 time ticks. Also, the absolute value of the difference in ADC counts measured between two consecutive ticks must be larger than 3 ADC counts. The oscillation (or ringing) has a mean period of $\sim$1.1 $\mu$s corresponding to $\sim$900 kHz, evident in the frequency spectrum in Fig.~\ref{excess_noise_summary}. 
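The burst-identification criteria quoted above (sign-alternating consecutive ticks for at least 5 ticks, with tick-to-tick differences above 3 ADC counts) can be sketched directly. The function name and window semantics are illustrative, not the production implementation:

```python
import numpy as np

def find_bursts(wf, baseline=0.0, min_len=5, min_step=3.0):
    """Return [start, end) ranges of zig-zag burst noise.

    A burst is a run of ticks alternating above/below `baseline` for at
    least `min_len` ticks, with consecutive-tick differences > `min_step`.
    (Sketch of the selection criteria described in the text.)
    """
    above = wf > baseline
    bursts, run_start = [], 0
    for i in range(1, len(wf) + 1):
        ok = (i < len(wf)
              and above[i] != above[i - 1]              # alternating polarity
              and abs(wf[i] - wf[i - 1]) > min_step)    # large enough swing
        if not ok:
            if i - run_start >= min_len:
                bursts.append((run_start, i))
            run_start = i
    return bursts

wf = np.zeros(30)
wf[10:20] = [5, -5, 6, -6, 5, -5, 6, -6, 5, -5]   # a 10-tick oscillation
print(find_bursts(wf))   # → [(9, 20)]
```

Flagging the burst window rather than filtering it is sufficient here, since (as noted below) the noise is negligible at the nominal 2 $\mu$s peaking time.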
The bursts of this noise are intermittent with the amplitude varying in time. This noise has been observed on the wires concentrated at one corner of the detector. The anti-aliasing filter can be used to suppress it by using a longer peaking time. As expected, this noise is prominent with 1 $\mu$s peaking time, but almost absent with the 2 $\mu$s peaking time. This high frequency noise can also be removed by applying a low-pass frequency filter in software. Given that this noise is negligible with the 2 $\mu$s peaking time, there is no need to apply the low-pass software filter in the nominal running configuration. Although the exact origin of this noise has not been identified, current evidence is pointing to the power switching circuits related to either the PMT high-voltage power supply or the calibration laser interlock system. The impact of this noise on the signal processing is negligible. \subsection{Impact of Noise Filtering on Signal}
\section{Introduction} \label{sec:intro} At the onset of a solar flare, magnetic reconnection releases pent up magnetic stress, energizing many thousands of tenuous magnetic flux tubes in the corona. Energy released in the corona is transported to the chromosphere, where a sharp rise in pressure causes plasma to ablate into the corona (``chromospheric evaporation'', \citealt{hirayama1974}), filling these flux tubes with hot, dense plasma, causing the extreme brightenings associated with flares. Simultaneously, due to conservation of momentum, the increased pressure also causes a down-flow of material deeper into the chromosphere (``chromospheric condensation'', \citealt{ichimoto1984}). While the process of energy transport in solar flares is well understood \citep{benz2008}, many fundamental questions remain unanswered, particularly concerning the multi-threaded nature of flares. The first open question is the number of coronal loops which form or over which energy is released during a flare, or whether there is even a well-defined number \citep{magyar2016}. There is no doubt whatsoever that flares do not release their energy over a monolithic loop, which is obvious from imaging \citep{svestka1982,aschwanden2001,sheeley2004}, from spectral considerations \citep{mcclements1989,mariska1993,hori1998,doschek2005,warren2016}, and from deficiencies in models \citep{reeves2002,warren2006,reep2016}. It is not trivial to count the number of loops in images of flares, nor define an algorithm to do so, since instruments fundamentally are limited by their spatial resolution, and since there is always a number of loops or other features along the line of sight within any given pixel on an imaging instrument. Nevertheless, numerous studies have measured the widths of coronal loops (\textit{e.g.} \citealt{antolin2012,brooks2012,brooks2013,winebarger2014,brooks2016}), in order to determine whether the observations are resolved. 
Recently, the analysis of \citet{aschwanden2017} found that images taken of loops with the \textit{High-resolution Coronal Imager} (\textit{Hi-C}, \citealt{cirtain2013}) are resolved, while images from instruments such as the \textit{Atmospheric Imaging Assembly} (\textit{AIA}, \citealt{lemen2012}) resolve some but not all loops. Most of these studies were undertaken in non-flaring active regions, however, which may be fundamentally different in topology from flaring loops. The next problem concerns the parameters of individual loops within a flaring arcade. It is implausible that each loop receives an equal amount of energy, but it is unclear what the distribution of energy might be, whether that distribution is stochastic in some way, how the released energy is partitioned amongst various transport mechanisms, \textit{etc}. Estimates of total flare energies, and how that energy is partitioned, are commonly performed (\textit{e.g.} \citealt{emslie2004,emslie2005,milligan2014}), but these are generally global estimates, or at best, limited to a certain volume of a flare which contains many loops. For example, measurements of energy flux delivered by an electron beam are routinely performed by the \textit{RHESSI} satellite \citep{lin2002}, which can both give an estimate of ribbon area and the total energy delivered to that ribbon (\textit{e.g.} \citealt{krucker2011}). Owing to finite spatial resolution (even more limited in HXRs), and to the optically-thin nature of the corona, however, no instrument is currently able to measure the fraction of energy that goes to each loop. Given the complexity of solar flares, the problem of understanding how energy is released on individual loops may seem insurmountable. Spectroscopic observations, however, hold many clues that can be used to piece together a coherent picture of solar flares. 
For example, hydrodynamic simulations of flares show that the transition region and chromospheric down-flows that occur during impulsive energy release must be short lived, between 50 to 75\,s \citep{fisher1987,fisher1989}, as the flows are quickly stopped by the higher density material at lower heights. In many flares, observations of the \ion{Si}{4} emission lines taken with the \textit{Interface Region Imaging Spectrograph} (\textit{IRIS}, \citealt{depontieu2014}) show down-flows that persist for many hundreds of seconds, in apparent contradiction with the theoretical prediction (see Section \ref{sec:single} for further discussion). The modeling by \citet{reep2016} indicates that a model with many loops being heated within a single \textit{IRIS} pixel can reproduce this behavior, however. This suggests that individual flare loops are woefully under-resolved by current instrumentation, and are not likely to be fully resolved in the foreseeable future. In this paper, we consider the implications of the up-flows observed in high temperature emission lines. Early flare observations rarely showed the strong up-flows expected from chromospheric evaporation. Instead, observed line profiles were generally dominated by a stationary component, even during the earliest part of the flare \citep{antonucci1982,mcclements1989,doschek1993,mariska1993,alexander1998}. With increasing spatial resolution, strong evaporative up-flows have been observed with increasing frequency \citep{czaykowska1999,milligan2009,doschek2013,brosius2013}. With its high spatial resolution, \textit{IRIS} routinely observes completely blue-shifted profiles of \ion{Fe}{21} \citep{tian2015,polito2015,polito2016,dudik2016,lee2017,li2017,brosius2017}. As we will see, the magnitude and duration of these high-temperature up-flows are sensitive to the details of the heating. 
We therefore examine various heating durations with hydrodynamic simulations in order to determine what values are consistent or inconsistent with the data. In Section \ref{sec:single}, we first examine a set of simulations of loops with various energy fluxes and heating durations, finding that heating durations of the order of 300\,s are most consistent with up-flows of \ion{Fe}{21}, though single loops cannot reproduce the pervasive red-shifts routinely seen in \ion{Si}{4} emission. In Section \ref{sec:multi}, we then develop a multi-threaded model to see what values can simultaneously reproduce the emission in both lines. We find that a large number of threads, with a similar median heating duration and a high median energy flux, are consistent with published observations. In Section \ref{sec:obs}, we then present observations of an M-class flare with both \ion{Fe}{21} and \ion{Si}{4} emission (where not saturated), and fit the model to this event, finding consistency with heating durations of 50--100\,s and high energy fluxes. Finally, we discuss the implications of this work in Section \ref{sec:discussion}. \section{Single Loop Model} \label{sec:single} In many observational studies of flares with \textit{IRIS}, red-shifts have been observed in transition region lines (\ion{Si}{4}, \ion{O}{4}, \ion{C}{2}) lasting for ten minutes to over an hour \citep{sadykov2015,brannon2015,li2015,polito2016,sadykov2016,warren2016,zhang2016,li_b_2017}, although not all authors commented on this behavior explicitly. Previously, studies with the \textit{Coronal Diagnostic Spectrometer} (\textit{CDS}, \citealt{harrison1995}) had detected similar red-shifts in \ion{O}{5} and similar lines \citep{czaykowska1999,brosius2004}, though they were not the primary focus of those works. 
The first explanation put forth for these red-shifts is that they are simple events of chromospheric condensation, where the expansion of plasma caused by a rise in pressure from chromospheric heating causes a down-flow of material that balances the momentum of the up-flows \citep{canfield1987,zarro1988}. This explanation suffices for short-lived red-shifts in H$\alpha$ \citep{ichimoto1984}, \ion{Mg}{2} \citep{graham2015}, \ion{Fe}{2} \citep{kowalski2017}, and other chromospheric lines. However, it was shown by \citet{fisher1989}, with both hydrodynamic loop modeling and analytic considerations, that condensations can only last for 50--75 seconds, regardless of the strength or duration of heating. The apparent contradiction between theory and observations can be resolved if one considers a multi-threaded model, where the observed emission is due to more than one loop \citep{reep2016}. In that work, it was found that a single loop cannot reproduce the persistent red-shifts seen in the flare reported in \citet{warren2016}, for any value of energy flux, or for repeated heating events on that loop. A multi-threaded model very naturally reproduces them: each successively heated loop dominates the signal, so that the overlap of many loops within one pixel can show red-shifts lasting longer than one minute. The simulations were also consistent with the observed emission measure distribution, peak temperatures, and density measurements. Unfortunately, as that flare was small, there was no detectable emission in \ion{Fe}{21}, which may prove useful as a diagnostic of heating duration. That work also assumed that each individual thread is heated for only 10 seconds, though there is no compelling reason to choose that duration. We now examine this assumption of heating duration by comparing Doppler shifts of synthesized spectral lines to observations of two particular lines: \ion{Si}{4} 1402.77\,\AA\ and \ion{Fe}{21} 1354.08\,\AA. 
The former line has been used as a diagnostic of condensation (\textit{e.g.} \citealt{warren2016}), while the latter is commonly used as a diagnostic of evaporation \citep{graham2015,polito2015,polito2016}. We begin with the simplest case: a single loop, with various heating durations and energy fluxes. Can any combination of energy flux and heating duration reproduce observed trends? \subsection{Simulation Set-up} \label{subsec:simulation_setup} In this work, we have run simulations with the field-aligned HYDrodynamics and RADiation code (HYDRAD, \citealt{bradshaw2003}), which solves the equations of conservation of mass, momentum, and energy along a full loop for a two-fluid plasma constrained to a magnetic flux tube with either uniform or expanding cross-section. The equations and details of the code can be found in \citet{bradshaw2013}. The code uses adaptive mesh refinement, which is important for resolving the transition region and areas with large gradients, including shocks in the corona. The code also solves the equations for non-equilibrium ionization (NEI) states, which can diverge significantly from the equilibrium values with impulsive heating \citep{bradshaw2003b,bradshaw2013b} and affect both the radiative losses and the ionization states of synthesized spectral lines, an important consideration for the work here (all ions in this paper are treated in full NEI). The chromospheric radiative losses are based on the prescription derived by \citet{carlsson2012}. In this work, we assume that each loop is vertical relative to the solar surface, semi-circular in shape, with a fixed length of $2L = 60$\,Mm and a uniform cross-section. We assume that the heating is due to a beam of non-thermal electrons depositing their energy in a thick-target plasma via Coulomb collisions, with a functional form following \citet{emslie1978} and \citet{hawley1994}, and implementation details in \citet{reep2013,reep2016b}. 
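The assumed geometry, a vertical semi-circular loop of total length $2L = 60$\,Mm, fixes the mapping between the field-aligned coordinate and height above the solar surface. A minimal sketch of this mapping (the function name and constants are our own, not HYDRAD's):

```python
import math

L_HALF = 30.0e8  # half-length L in cm (so that 2L = 60 Mm)

def height_above_surface(s):
    """Height (cm) at field-aligned coordinate s in [0, 2L] for a
    vertical, semi-circular loop of total length 2L.

    The loop is a half-circle of radius R = 2L/pi, so the angle swept
    from the left foot-point is theta = s/R and the height is
    h = R*sin(theta)."""
    R = 2.0 * L_HALF / math.pi  # loop radius
    theta = s / R               # arc angle from the foot-point
    return R * math.sin(theta)

# The apex (s = L) sits one loop radius, 2L/pi ~ 19.1 Mm, above the surface.
apex_height = height_above_surface(L_HALF)
```

The symmetry $h(s) = h(2L - s)$ reflects the symmetric beam injection on the two loop legs assumed below.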
We assume an electron spectrum injected at the apex of the loop (and acting symmetrically on each half of the loop) of the form: \begin{equation} \mathfrak{F}(E_{0}, t) = \frac{F_{0}(t)}{E_{c}^{2}}\ (\delta - 2) \times \begin{cases} 0 & \text{if } E_{0} < E_{c} \\ \Big(\frac{E_{0}}{E_{c}}\Big)^{- \delta} & \text{if } E_{0} \geq E_{c} \end{cases} \label{sharpdist} \end{equation} \noindent where $F_{0}$ is the energy flux carried by the beam (keV\,s$^{-1}$\,cm$^{-2}$), $E_{c}$ the low energy cut-off (keV), $E_{0}$ the initial kinetic energy of an electron (keV), and $\delta$ the spectral index. We take $E_{c} = 15$\,keV and $\delta = 5$ in this work, though they do vary for different flares (\textit{e.g.} \citealt{sui2007,kontar2008,hannah2008}) and likely from loop to loop, and a model trying to reproduce a specific event would need to use the values appropriate to that event. We assume that the temporal envelopes of the heating are triangular, with equal rising and falling times. We have run a total of 341 simulations with durations ranging from 1 to 1000\,s, in increments of 0.1 in log space, and peak energy flux values ranging from $10^{8}$ to $10^{11}$\,erg\,s$^{-1}$\,cm$^{-2}$. After the simulations have been run, we use a forward model to synthesize spectral lines as might be seen by \textit{IRIS} from the values of density, temperature, bulk velocity, and ionization fractions as a function of position and time along the loop(s). We follow the methodology of \citet{bradshaw2011}, where the emission is calculated along the loop and then binned according to the size of a pixel along the slit, as if the detector were looking down upon the loop at solar center (no longitudinal effects). The bulk flow velocities are converted into velocities along the line-of-sight in order to calculate Doppler shifts. We assume that the emission is optically-thin. We use the \textit{IRIS} response functions obtained from the SolarSoft functions in IDL. 
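Equation (\ref{sharpdist}) and the triangular temporal envelope can be written out numerically. The sketch below (function names are ours) makes explicit that the $(\delta - 2)/E_{c}^{2}$ prefactor normalizes the distribution so that the injected energy flux, $\int_{E_{c}}^{\infty} E_{0}\,\mathfrak{F}(E_{0}, t)\,dE_{0}$, equals the instantaneous flux $F_{0}(t)$:

```python
def beam_spectrum(E0, F0_t, Ec=15.0, delta=5.0):
    """Injected electron distribution of Eq. (sharpdist): a sharp-cutoff
    power law with spectral index delta, zero below the cut-off Ec (keV).
    The (delta - 2)/Ec^2 prefactor ensures that the integral of
    E0 * beam_spectrum over E0 >= Ec recovers the energy flux F0_t."""
    if E0 < Ec:
        return 0.0
    return (F0_t / Ec ** 2) * (delta - 2.0) * (E0 / Ec) ** (-delta)

def triangular_envelope(t, F_peak, duration):
    """Triangular heating envelope with equal rise and fall phases:
    zero at t = 0 and t = duration, peaking at F_peak at t = duration/2."""
    if t < 0.0 or t > duration:
        return 0.0
    half = duration / 2.0
    return F_peak * (t / half if t <= half else (duration - t) / half)
```

Integrating $E_{0}\,\mathfrak{F}$ numerically over $E_{0} \geq E_{c}$ recovers $F_{0}(t)$ to well under a percent, which is what the normalization enforces.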
In the multi-threaded modeling, we assume that the loops are all rooted within the same pixel, and therefore include only foot-point contributions to the emission. A more general model might include a coronal component as a sort of background contribution. \subsection{Simulations of Single Loops} \label{subsec:single_sims} We begin by showing the hydrodynamic evolution of coronal loops heated by an electron beam for various energy fluxes and heating durations. In Figure \ref{fig:density} we show a comparison between two simulations heated with the same energy flux, $F_{0} = 2 \times 10^{10}$\,erg\,s$^{-1}$\,cm$^{-2}$, but two different heating durations (\textit{i.e.} two different total energies; 25 seconds on top, 100 seconds on bottom). Each plot shows the electron density as a function of position (on a logarithmic scale), at the labeled times. The plots have been colored according to the bulk flow velocity at each position so that red marks the locations where plasma is down-flowing, blue where it is up-flowing, and white where there are no flows. The dotted white line on each plot shows the initial density profile. \begin{figure*} \begin{minipage}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{density_25sec.eps} \end{minipage} \begin{minipage}[b]{\linewidth} \centering \includegraphics[width=\linewidth]{density_100sec.eps} \end{minipage} \caption{The electron density as a function of position (logarithmic x-axis), at 6 selected times, for two different simulations with equal energy flux, $F_{0} = 2 \times 10^{10}$\,erg\,s$^{-1}$\,cm$^{-2}$, and heating durations of 25\,s (top) and 100\,s (bottom). The evolution of the up-flows is strongly dependent on the duration of the heating. The lines are color-coded according to the bulk flow velocity, where red marks down-flows, blue up-flows, and white no significant flows. The initial profile is shown as a dotted white line for comparison. 
Each plot has been truncated at a position just beyond the apex of the loop.} \label{fig:density} \end{figure*} In the first case with 25 seconds of heating (top plots in Figure \ref{fig:density}), the electron beam quickly deposits its energy into the chromosphere, and the temperature rises sharply. As the pressure grows, the plasma begins to expand, both upwards as an evaporation front and downwards as a condensation front. The condensations continue throughout the 25 seconds of heating, only stopping after the heating ends, as the temperature begins to fall due to strong radiation in the chromosphere. The evaporation front, however, flows unimpeded initially, quickly raising the coronal density to well over $10^{10}$\,cm$^{-3}$. After the heating ceases, the pressure in the chromosphere begins to dissipate, and the evaporation slows to a halt. After the evaporation fronts from each leg of the loop collide, density waves begin to slosh back and forth across the corona, carrying plasma back down to the chromosphere, all the while losing energy through increased radiative losses. In the second case (bottom plots in Figure \ref{fig:density}), the heating ramps up more slowly, peaking 50 seconds after the onset. During that time, the initial condensation front, clearly visible at 25 seconds into the simulation, begins to dissipate as it travels to deeper and denser parts of the chromosphere where the inertia is higher. Thus, the condensation front dissipates before the heating ceases, unlike the previous case. The evaporation front, however, continues unimpeded during the period of heating, as in the former case. Because the evaporation lasts longer, the coronal density becomes higher than in the previous case, as well. After 100 seconds, the heating stops, the evaporation begins to weaken, and the dominant flows are then due to density waves, as in the previous case. 
For these two cases of single loops (number of loops $N = 1$), in Figure \ref{fig:single_lines} we show the synthesized \ion{Si}{4} (red) and \ion{Fe}{21} (blue) line intensity and Doppler shifts along the line-of-sight as a function of time, including the effects of NEI, in the first pixel (\textit{i.e.} near the foot-point, see Figure 1 of \citealt{bradshaw2011}). The case with 25 seconds of heating is shown at the left, 100 seconds at the right. In the former, the plasma does not get hot enough to produce \ion{Fe}{21} emission above the noise level that \textit{IRIS} could detect, whereas in the latter there is weak emission that is initially strongly blue-shifted, but quickly slows. In both cases, the \ion{Si}{4} emission brightens as the plasma is heated, slowly falling off afterwards. The condensations observed in \ion{Si}{4} quickly decay in both cases. \begin{figure*} \begin{minipage}[t]{0.5\textwidth} \includegraphics[width=\linewidth]{lightcurve_25sec.eps} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \includegraphics[width=\linewidth]{lightcurve_100sec.eps} \end{minipage} \hspace{\fill} \caption{The synthesized \ion{Si}{4} (red) and \ion{Fe}{21} (blue) intensities and Doppler shifts near the foot-point of the loop for the simulations in Figure \ref{fig:density}, with the 25\,s case on the left and the 100\,s case on the right. The note $N = 1$ indicates that these plots are for single loops, rather than multi-threaded simulations.} \label{fig:single_lines} \end{figure*} The energy flux and heating duration both strongly affect the plasma evolution on flaring loops. Figure \ref{fig:apex_values} demonstrates this by showing the evolution of a number of loops. 
Each plot displays the apex density and temperature as a function of time in loops heated with a fixed energy flux $\log{F} = 9.3$ (left) and $10.3$ (right) and various heating durations ranging from 1 second (black) up to 1000 seconds (red), with intermediate durations color-coded across the rainbow. In order to strongly increase the density near the apex of the loop, there must be a strong evaporation front carrying plasma into the corona from the chromosphere. The longest heating durations produce the largest densities and temperatures, though the peak times are significantly delayed compared to shorter heating events, some of which barely cause a response in the plasma. \begin{figure*} \begin{minipage}[t]{0.5\textwidth} \includegraphics[width=\linewidth]{apex_F93.eps} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \includegraphics[width=\linewidth]{apex_F103.eps} \end{minipage} \hspace{\fill} \caption{The apex temperatures and densities as functions of time in loops heated with an energy flux $\log{F} = 9.3$ (left) and $10.3$ (right) for various heating durations, ranging from 1 to 1000\,s, in increments of 0.1 in log-space (black through red). The evolution of the plasma depends strongly on both the energy flux and heating duration.} \label{fig:apex_values} \end{figure*} There is a sharp divide between the left and right plots of Figure \ref{fig:apex_values} due to the difference in energy flux. In the former, only the longest heating durations produce explosive evaporation, while in the latter, all but the shortest events are explosive. This suggests that, for a given energy flux and low energy cut-off, the heating duration is also an important variable in determining the threshold for explosive evaporation. 
The threshold derived by \citet{fisher1985a,fisher1985b,fisher1985c} assumed a low energy cut-off $E_{c} = 20$\,keV and heating duration of 5\,s, showing that for those values, an energy flux $\gtrsim 10^{10}$\,erg\,s$^{-1}$\,cm$^{-2}$ drives explosive evaporation. \citet{reep2015} examined the effect of the low energy cut-off on the threshold for explosive evaporation, showing that significantly less energy is required to drive explosive evaporation for cut-offs less than 20\,keV, a result which was first observationally confirmed by \citet{gomory2016}. One motivation of the present work is to more closely examine the second assumption: how does the heating duration impact evaporation? We move on to a parameter survey in order to study the effects of heating duration on emission, by first assuming a fixed energy flux and variable heating duration. In Figure \ref{fig:const_flux}, we show the synthesized foot-point emission of \ion{Si}{4} and \ion{Fe}{21} from 6 simulations with energy flux $F_{0} = 5 \times 10^{10}$\,erg\,s$^{-1}$\,cm$^{-2}$, and heating durations of [3, 10, 50, 100, 316, 1000] seconds (left to right, top to bottom). 
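The heating durations quoted throughout the survey are rounded members of the 0.1-dex logarithmic grid described in Section \ref{subsec:simulation_setup} (\textit{e.g.} 316\,s $\approx 10^{2.5}$\,s). As a trivial sketch, that grid can be generated as:

```python
# 31 heating durations spanning 10^0 to 10^3 s in 0.1-dex steps;
# values quoted in the text (3, 10, 50, 100, 316, 1000 s) are
# rounded members of this grid (e.g. 10^2.5 ~= 316.2 s).
durations = [10.0 ** (0.1 * k) for k in range(31)]
```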
\begin{figure*} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constflux_3sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constflux_10sec.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constflux_50sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constflux_100sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constflux_300sec.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constflux_1000sec.eps} \end{minipage} \caption{Synthesized \ion{Si}{4} and \ion{Fe}{21} foot-point emission for 6 simulations with energy flux $F_{0} = 5 \times 10^{10}$\,erg\,s$^{-1}$\,cm$^{-2}$, and heating durations of [3, 10, 50, 100, 316, 1000] seconds (left to right, top to bottom). The light curves and Doppler shifts of these lines both strongly depend on the heating duration.} \label{fig:const_flux} \end{figure*} The differences in heating duration have various effects on the emission. First, in loops heated for a shorter amount of time, there is no visible emission in \ion{Fe}{21}, because the temperature has not increased enough to ionize the iron ions. This does not imply that there is no evaporation in the loop, only that the evaporation is not visible in the highest temperature lines. Second, in all cases, the red-shifts in \ion{Si}{4} cease in about one minute, while the blue-shifts in \ion{Fe}{21} take considerably longer to slow. Further, the time it takes for the blue-shifts to stop is longer for longer heating durations, with an approximately linear correlation. This can be explained quite readily: once the heating stops, the pressure in the chromosphere begins to fall, so that the expansion which causes evaporation stops. 
Third, the intensity of both lines increases with increasing heating duration, simply because more energy has been deposited into the loop. Fourth, the delay between the onset of heating and the formation of the \ion{Fe}{21} line grows with increasing heating duration, which is due to the assumed temporal envelope with a slow rise to the maximum heating rate. A constant heating rate for different durations would produce equivalent dynamics while both are still being heated, so that the delay in line formation would be equal. We next turn our attention towards studying loops that are all heated for the same duration, 316\,s, but with variable maximum energy fluxes: $\log{F_{0}} = $[9.0, 9.5, 10.0, 10.3, 10.5, 11.0]\,erg\,s$^{-1}$\,cm$^{-2}$. The synthesized light curves are shown in Figure \ref{fig:const_dur}, from left to right and top to bottom, as before. \begin{figure*} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constdur_F90.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constdur_F95.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constdur_F100.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constdur_F103.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constdur_F105.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve_constdur_F110.eps} \end{minipage} \caption{Synthesized \ion{Si}{4} and \ion{Fe}{21} foot-point emission for 6 simulations with heating duration of 316 seconds, and fluxes $\log{F_{0}} = $[9.0, 9.5, 10.0, 10.3, 10.5, 11.0]\,erg\,s$^{-1}$\,cm$^{-2}$ (left to right, top to bottom). The light curves and Doppler shifts of these lines both strongly depend on the energy flux. 
} \label{fig:const_dur} \end{figure*} In this case, we again find that there are important trends. First, for loops that have lower energy fluxes, there is no visible \ion{Fe}{21} emission, as the temperature is again too low. Second, for lower energy fluxes, \ion{Si}{4} can actually be blue-shifted, rather than red-shifted, which has been reported in \textit{IRIS} observations \citep{testa2014}, and suggested as a potential diagnostic of the presence of non-thermal electrons in nanoflares. For higher energy fluxes where the emission is red-shifted, the condensations once again damp in about a minute. Third, the intensities of both lines increase with increasing energy flux, as one might expect. Fourth, the delay between the onset of heating and the formation of \ion{Fe}{21} is reduced for larger energy fluxes, as the increased temperature and stronger evaporative flows more quickly ionize the iron ions. Finally, and perhaps most importantly, in all of the loops where the line forms, the blue-shifts in \ion{Fe}{21} damp in about the same amount of time, suggesting that the heating duration plays a more important role than the energy flux in determining how long evaporation continues. For this reason, it seems that the heating duration of loops can be diagnosed by the duration of evaporative up-flows. In all of these cases, however, we see that the condensation flows do not last for more than about one minute. In flares like the one reported in \citet{polito2016}, where there are both long-lasting red-shifts in \ion{Si}{4} and a slow decrease in the up-flow speed of \ion{Fe}{21}, one explanation could be that a single loop model is insufficient, even though the up-flows alone can be reproduced by a single-loop model with a long heating duration (as done in that paper). We therefore turn our attention towards a multi-threaded model where we attempt to explain the emission and flows in both lines simultaneously. 
\section{Multi-threaded Model} \label{sec:multi} We employ a multi-threaded hydrodynamic model in order to understand how the heating on individual loops produces the observed emission in both \ion{Si}{4} and \ion{Fe}{21}. A number of new variables are introduced in order to develop a model with more than one loop. The first, and most important, is the number of loops $N$ that are rooted within a pixel. This is essentially a free parameter; we do not know \textit{a priori} what values are reasonable. Second, there is a small, variable time delay between the energization of successive loops. We therefore assume that the delays follow a Poisson distribution, with an average delay of $r$ seconds between events. In our previous work \citep{reep2016}, we found that the number of loops times that average delay determines the duration of the red-shifts in \ion{Si}{4}, \textit{i.e.} $N \times r \gtrsim \tau_{\text{red-shifts}}$, which led to the conclusion that there were more than 60 loops rooted within an \textit{IRIS} pixel for the event in that study. It is improbable that the energy input on each loop is equal, and observations indicate that there is extreme variability from pixel to pixel, which suggests that there is an energy distribution \citep{warren2016}. The intensities of individual pixels fall on a power-law distribution, so we assume that the energy input is similarly described by a power law with index $\alpha$. This index likely varies with time, and from event to event. In this work, we assume the index is fixed in time, and allow the value to be taken as an input, which is equivalent to modifying the median energy flux of the electron beams injected onto the loops. Finally, unlike our previous work where we assumed a fixed heating duration of $10$\,s on all loops, we now allow it to vary. The event studied in \citet{reep2016} did not produce any \ion{Fe}{21} emission, so we could not constrain its value. 
Because the single loop model indicates that the duration and decay of the evaporative up-flows depend on the heating duration, we use \ion{Fe}{21} emission to constrain heating durations. We therefore wish to allow the heating durations to vary, and we examine two cases. In the first case, we assume all loops have the same heating duration. In the second case, for each loop we randomly select a duration from an assumed distribution with some minimum and maximum values. The distribution is chosen simply to illustrate the consequences of having variable heating durations, since we do not know whether there is a distribution, or what form it may take. We summarize the method by which we create light curves and velocity plots in Table \ref{table:method}. We wish to emphasize that there are multiple random variables, so the results can change even with the same input parameters, though similar trends are found irrespective of that randomness. Following this process, we first begin with the case where all loops have the same, fixed heating duration. We seek to determine whether the multi-threaded model can reproduce both persistent red-shifts in \ion{Si}{4} and the typical up-flow patterns in \ion{Fe}{21}. 
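Steps 2--5 of Table \ref{table:method}, drawing per-thread energy fluxes and start times, can be sketched as follows. The inverse-CDF power-law sampler and the use of exponential inter-arrival times (the waiting-time distribution of a Poisson process) are our own illustrative choices, since the text specifies only the index $\alpha$, the minimum flux $F_{\text{min}}$, and the mean delay $r$:

```python
import random

def draw_threads(N=200, r=5.0, alpha=-1.5, F_min=3.0e9, seed=None):
    """Draw per-thread parameters for the multi-threaded model:
    N energy fluxes from a power law p(F) ~ F^alpha for F >= F_min
    (requires alpha < -1), and cumulative start times with a mean
    waiting time r (s) between successive threads.  Inter-arrival
    times are drawn as exponential variates, one reading of
    'Poisson-distributed' delays."""
    rng = random.Random(seed)
    # Inverse-CDF sampling of the power law: P(>F) = (F/F_min)^(alpha+1),
    # so F = F_min * u^(1/(alpha+1)) with u uniform on (0, 1].
    fluxes = [F_min * (1.0 - rng.random()) ** (1.0 / (alpha + 1.0))
              for _ in range(N)]
    start_times, t = [], 0.0
    for _ in range(N):
        start_times.append(t)
        t += rng.expovariate(1.0 / r)
    return fluxes, start_times
```

For $\alpha = -1.5$ this sampler gives a median flux of $4\,F_{\text{min}}$, since $F = F_{\text{min}}\,u^{-2}$ and the median of $u$ is $1/2$.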
\begin{table*} \caption{The method by which we create the light curves for the multi-threaded model.} \begin{tabular}{c | l | p{0.5\textwidth}} Step \# & Action & Description \\ \hline 1 & Run simulations and forward model & Create a database of simulations with various values of energy fluxes and heating durations, meant to span the range of reasonable parameters\\ 2 & Choose multi-threaded parameters & Select values: total number of loops $N$, average waiting time $r$, index $\alpha$ of energy power-law distribution, and a minimum energy flux value $F_{\text{min}}$ \\ 3 & Randomly draw energy fluxes & Randomly draw $N$ energy fluxes from a power-law distribution with index $\alpha$ and minimum value $F_{\text{min}}$ \\ 4 & Randomly draw heating durations & In Section \ref{subsec:fixed_dur}, all heating durations are taken to be equal. In Section \ref{subsec:distr_dur}, randomly draw $N$ heating durations from an assumed distribution with some minimum and maximum values. \\ 5 & Randomly draw waiting times & Randomly draw $N$ waiting times from a Poisson distribution, and then offset the start times of each successive loop \\ 6 & Load simulation data & For each pair of energy flux and heating duration, load the appropriate simulation into IDL \\ 7 & Combine spectra & Add the intensities from each loop as a function of time as if they are all located within one pixel, offset each loop by the waiting times, and fit each summed line with a Gaussian at each time to calculate the Doppler velocity \\ 8 & Plot and analyze & \\ \end{tabular} \label{table:method} \end{table*} \subsection{Fixed Heating Duration} \label{subsec:fixed_dur} We first examine a few cases with equal values of $N$, $r$, $F_{\text{min}}$, and $\alpha$, and various heating durations. We start with the case where $N = 200$ loops, $r = 5$\,s, $F_{\text{min}} = 3 \times 10^{9}$\,erg\,s$^{-1}$\,cm$^{-2}$, and $\alpha = -1.5$. 
Figure \ref{fig:N200} shows results for 6 multi-threaded simulations, where we have set the heating durations to [1, 10, 50, 100, 316, 1000]\,s, from left to right and top to bottom. \begin{figure*} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N200_1sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N200_10sec.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N200_50sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N200_100sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N200_300sec.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N200_1000sec.eps} \end{minipage} \caption{Synthesized \ion{Si}{4} and \ion{Fe}{21} foot-point emission for 6 multi-threaded simulations with $N = 200$ loops, $r = 5$\,s, $F_{\text{min}} = 3 \times 10^{9}$\,erg\,s$^{-1}$\,cm$^{-2}$, and $\alpha = -1.5$, with heating durations of [1, 10, 50, 100, 316, 1000]\,s (left to right, top to bottom). Heating durations in the range 50--100\,s appear reasonable compared to observations. } \label{fig:N200} \end{figure*} First, consider the \ion{Si}{4} emission. In all but the shortest case, the intensity peaks slightly above $100$\,kDN, with considerable variability in time. In the case of short heating durations, the intensity quickly plummets once new loops stop forming, $\approx 1000$\,s after the start. For longer heating durations, the intensity varies more smoothly, and continues to rise even after the last loop has formed. In terms of the velocities, shorter heating durations come close to reproducing persistent red-shifts, with the $50$\,s case perhaps the most similar to observations like those of \citet{warren2016}. For longer heating durations, however, the red-shifts quickly disappear after the start time. 
The first few loops to be energized dominate the line signal, and because the heating continues for longer periods of time, the intensity does not fade quickly, though the condensations stop in about a minute, as found in the single loop case. Next, consider the \ion{Fe}{21} emission. With the shortest heating durations, the emission is simply not present because the plasma has not been heated to the formation temperature of \ion{Fe}{21}. For the values assumed, the emission just begins to be visible for heating durations $\gtrsim 20$\,s. In those cases where it forms, the intensity grows and the light curve becomes considerably smoother with increasing heating duration. The velocities measured all show the evaporation beginning at around 180--200\,km\,s$^{-1}$, and tending towards 0 at later times. The rate at which they tend to 0, however, clearly changes, and shorter heating durations more quickly drop in speed. The cases with longer durations look similar to the single loop case, because the first few loops dominate the signal as they continue to be heated. We briefly examine the effect that the number of loops has on these results. Choosing the 100-second heating duration case, we vary the number of loops $N$, while holding $N \times r \approx 1000$\,s. Figure \ref{fig:N_variable} shows 3 such cases, with $N = $[100, 250, 500] loops (and the case with 200 loops is the bottom left plot of Figure \ref{fig:N200}). In all four cases, the intensities of both lines are similar, though the variability decreases as the number of loops increases. The Doppler shifts of \ion{Fe}{21} also follow similar trends, with less variability for more loops. The only major difference is that the red-shifts in \ion{Si}{4} become more pronounced and long-lasting with an increased number of loops, which reiterates a result from \citet{reep2016}. 
This is because the newly heated loops at any given time dominate the signal of the line, and these loops are the ones currently experiencing chromospheric condensation. \begin{figure*} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N100_100sec.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N250_100sec.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{N500_100sec.eps} \end{minipage} \caption{Similar to Figure \ref{fig:N200}, for 3 cases with 100 seconds of heating and number of loops $N = $[100, 250, 500]. A larger number of loops primarily decreases variability in emission, and produces steadier \ion{Si}{4} red-shifts. } \label{fig:N_variable} \end{figure*} \subsection{Distribution of Heating Durations} \label{subsec:distr_dur} It is improbable that the heating durations of all the loops in an arcade are equal; more likely, they follow some distribution. We do not know what that distribution might be, so we briefly examine two cases: a uniform distribution and a power-law distribution, each with an upper and lower limit. We otherwise follow the same process listed in Table \ref{table:method}, except that in Step 4, we randomly draw a heating duration for each loop instead of using a fixed duration. Figure \ref{fig:distribution} shows a few examples. The top three plots use a uniform distribution of heating durations, while the bottom three use a power-law distribution of heating durations with slope $-1.0$. The left column assumes a duration range from 1--100\,s, the middle column 1--1000\,s, and the right column 10--300\,s. 
\begin{figure*} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve1-100.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve1-1000.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurve10-300.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurvePL1-100.eps} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurvePL1-1000.eps} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\linewidth]{lightcurvePL10-300.eps} \end{minipage} \caption{Synthesized \ion{Si}{4} and \ion{Fe}{21} foot-point emission for 6 multi-threaded simulations with $N = 200$ loops, $r = 5$\,s, $F_{\text{min}} = 3 \times 10^{9}$\,erg\,s$^{-1}$\,cm$^{-2}$, and $\alpha = -1.5$. The top three plots use a uniform distribution of heating durations, while the bottom three plots use a power-law distribution of slope $-1.0$. The left column assumes a duration range from 1--100\,s, the middle column 1 -- 1000\,s, and the right column 10--300\,s. A distribution with a median duration around 50\,s produces reasonable results in both lines.} \label{fig:distribution} \end{figure*} There are two important points to note. First, as before, extremely long heating durations suppress persistent red-shifts in \ion{Si}{4} (top center), but extremely short heating durations fail to produce \ion{Fe}{21} emission to any appreciable extent (bottom left). As many flares display both of these signatures (\textit{e.g.} \citealt{sadykov2015,battaglia2015,polito2016}), this suggests that the average heating duration must lie between $\approx 10$--$100$\,s or so. 
Second, both distributions produce reasonable results given a reasonable average heating duration and energy flux: compare the top left plot to the bottom right plot, with equal median energy fluxes and median durations of $\approx 50$ and $63$\,s, respectively. The power-law distribution, however, produces smoother Doppler shifts in \ion{Fe}{21}. A large parameter survey could shed more light on the distribution of heating durations. However, there are many uncertainties and assumptions that limit the usefulness of such an exercise. We therefore turn our attention to a flare observed with IRIS, and apply this model in order to explain the observed emission. In this way, we directly check the validity of the model. \section{\textit{IRIS} observation of the March 12, 2015 flare} \label{sec:obs} On March 12, 2015 the \textit{IRIS} spectrograph was observing the AR NOAA 12297 from 05:45 to 17:40~UT during a large sit-and-stare \emph{HOP 245} study with a cadence of about 5~s and exposure time of 4~s. Two M-class flares occurred between 11:30 and 12:30~UT, peaking at around 11:50~UT (M1.6) and 12:14~UT (M1.4) respectively, as shown in the \textit{GOES} soft X-ray light curves in Figure~\ref{fig:goes}. These flares were analyzed by several authors \citep[\textit{e.g.}][]{tian2016,brannon2016}. In this work, we focus on studying the time evolution of the chromospheric evaporation as observed in the \ion{Fe}{21} 1354.08~\AA~line (T~$\approx$~10$^{7}$~K) during the impulsive phase of the first M1.6-class flare, over the interval indicated by the two vertical pink lines in Figure~\ref{fig:goes}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{goes_sji.eps} \caption{Soft X-ray light curves of the M-class flares on March 12, 2015 as observed by the \textit{GOES} satellite in the 0.5--4~\AA~and 1--8~\AA~channels. 
The dotted pink lines define the time interval over which we measure the blue-shifts of the \ion{Fe}{21} line during the impulsive phase of the first flare.} \label{fig:goes} \end{figure} \textit{IRIS} Slit-Jaw Images (SJI) were taken in three different passbands centered around 1330~\AA, 1400~\AA~and 2832~\AA~with a cadence of about 15~s over a 120\arcsec~$\times$~119\arcsec~field-of-view. The flare occurred near disk center, at around [-190", -170"]. The three SJI channels are dominated by emission from the \ion{C}{2} 1335.78~\AA~(T~$\approx$~10$^{4.5}$~K), \ion{Si}{4} 1402.77~\AA~(T~$\approx$~10$^{4.9}$~K) and \ion{Mg}{2} wing (T~$\approx$~10$^{3.8}$~K), respectively. During flares, the SJI 1330~\AA~passband also includes some contribution from \ion{Fe}{21}~1354.08~\AA~emission. We used level 2 spectral and imaging data, which are processed for dark current subtraction, flat-field and geometry corrections. The orbital and absolute wavelength calibration of the spectral data was performed by measuring the centroid position of the \ion{O}{1}~1355.568~\AA~photospheric line included in the same spectral window as the \ion{Fe}{21} line. Images from the SDO/AIA telescope were also analyzed, to provide context for the observed event and information about the morphology of the flare loops. The AIA level 1 data were processed using the SolarSoft \emph{aia\_prep.pro} routine, which performs the co-alignment of images from different passbands and the adjustment of the telescope plate scale. The AIA~1600~\AA~images and SJI 1330~\AA~observations, both dominated by chromospheric plasma emission, were co-aligned by eye, giving an uncertainty of around 2 AIA pixels ($\approx$~2\arcsec). Figure~\ref{fig:sji} shows the SJI 1330~\AA~image (left) and the closest AIA 131~\AA~image (right) taken at about 11:42:30~UT, during the early impulsive phase of the M1.6-class flare. The SJI 1330~\AA~filter mainly shows chromospheric emission from the elongated flare ribbons.
During flares, the AIA 131~\AA~band is dominated by the hot \ion{Fe}{21} emission from the flare loops. The vertical dotted line in Fig.~\ref{fig:sji} indicates the IRIS spectrograph slit position of the sit-and-stare study. It is possible that some \ion{Fe}{21} emission from the hot loops might overlap with the footpoint emission along the line of sight. However, in the early impulsive phase of the flare the spectra at the ribbon are dominated by the blue-shifted component due to the evaporation, and this component can be easily separated from the almost stationary loop emission. \begin{figure*} \centering \includegraphics[width=\textwidth]{sji_aia_11_42.eps} \caption{\emph{Left panel:} \textit{IRIS} SJI image in the 1330~\AA~channel during the impulsive phase of the flare. The dotted white line indicates the slit position in the IRIS spectrograph sit-and-stare observation. The pink segment highlighted on the slit shows the location on the northern ribbon where we observe the blue-shifts of the \ion{Fe}{21} line over time. \emph{Right panel:} AIA 131~\AA~image closest in time to the SJI image on the left. The location of the \textit{IRIS} slit and the blue-shift location are also overlaid. See Movie 1.} \label{fig:sji} \end{figure*} The evolution of the flare as observed by these two imagers is best seen in Movie~1 associated with Fig.~\ref{fig:sji}. As shown in the movie, the IRIS slit crosses the northern flare ribbon and the flare loops throughout the observation, and part of the southern ribbon from about 11:58~UT. Increased emission from the ribbons is observed in the SJI images from $\approx$~11:41:30~UT, whereas the hot (above 10 MK) \ion{Fe}{21} 1354.08~\AA~line can be first clearly detected in the IRIS spectra from about 11:42:30~UT, at the time indicated by the first vertical pink line in Fig.~\ref{fig:goes}.
The \ion{Fe}{21} spectra at this time show that the line is very broad ($\geq$~1~\AA) and largely blue-shifted ($\approx$~200~km\,s$^{-1}$), suggesting ongoing chromospheric evaporation. The IRIS spectral data then show the \ion{Fe}{21} emission moving towards the flare loop top. As a result, the hot emission from the loops becomes progressively more intense, as can be best seen in Movie~1. At the same time, the non-thermal broadening and blue-shift of the line decrease gradually, in agreement with recent IRIS flare observations \citep[e.g.][]{polito2015, polito2016,graham2015}. This is also consistent with the standard solar flare scenario, suggesting that the flare loops are filled with evaporating plasma from the flare foot-points and ribbons. In Section \ref{subsec:obs_bs} we will analyze the time evolution of the evaporating hot plasma from the flare ribbons, as observed in the \ion{Fe}{21} 1354.08~\AA~spectra. The results will then be compared to the predictions of our flare simulations in Section \ref{subsec:comparison}. \subsection{Evolution of \ion{Si}{4} and \ion{Fe}{21} shifts} \label{subsec:obs_bs} The maximum blue-shift of the \ion{Fe}{21} is observed just above the intense FUV continuum emission from the flare ribbons, between the IRIS slit pixels 283 and 287. These slit pixels correspond to the location on the Sun which is highlighted in pink in Fig.~\ref{fig:sji}. The \ion{Si}{4} line is not saturated in these pixels, in contrast to the intense ribbon location. Fig.~\ref{fig:obs_spectra} shows the time evolution of the \ion{Si}{4} (left) and \ion{Fe}{21} (right) spectra in that location, as obtained by stacking together slices of the CCD images over time. We fitted a single Gaussian to the \ion{Si}{4} and \ion{Fe}{21} spectral windows for each of these 5 slit pixels over time using the Solarsoft routine \emph{xcfit\_block.pro}.
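\emph{xcfit\_block.pro} is an IDL/SolarSoft routine; purely as an illustration of the same single-Gaussian procedure, the Python sketch below fits a synthetic, noiseless spectrum (all line parameters here are assumed for illustration, not taken from the data) and converts the fitted centroid to a Doppler velocity:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.99792458e5           # speed of light [km/s]
REST_WL = 1354.08              # Fe XXI rest wavelength [Angstrom]

def gaussian(wl, amp, cen, sigma, bg):
    return amp * np.exp(-0.5 * ((wl - cen) / sigma) ** 2) + bg

def doppler_velocity(wl, spec, rest_wl=REST_WL):
    """Fit a single Gaussian to one spectrum and convert the centroid to a
    Doppler velocity in km/s (negative = blue-shift, as in the text)."""
    p0 = (spec.max() - spec.min(), wl[np.argmax(spec)], 0.3, spec.min())
    popt, _ = curve_fit(gaussian, wl, spec, p0=p0)
    return C_KMS * (popt[1] - rest_wl) / rest_wl

# Synthetic line blue-shifted by 200 km/s, for illustration
wl = np.linspace(1352.0, 1356.0, 200)
cen_true = REST_WL * (1.0 - 200.0 / C_KMS)
spec = gaussian(wl, 50.0, cen_true, 0.4, 5.0)
v = doppler_velocity(wl, spec)
```

Applying such a fit to each slit pixel and exposure yields the intensity and velocity time series discussed next.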
Figure~\ref{fig:obs_doppler} shows the intensity (top) and Doppler shift centroid velocity (bottom) of the lines as a function of time, as obtained by the fitting procedure. Different colors of the plot symbols represent the results for different IRIS slit pixels, from 283 to 287. Negative (positive) values of Doppler velocity indicate blue-(red-)shifts of the \ion{Si}{4} and \ion{Fe}{21} lines from their at-rest observed wavelengths of $\approx$~1402.77 and 1354.1~\AA, respectively. The reference wavelength of the \ion{Fe}{21} was measured during the gradual phase of the flare at the loop top, where the line is expected to be at rest. \begin{figure*} \centering \includegraphics[width=0.4\textwidth]{time_spectra_plot_Si_expanded.eps} \includegraphics[width=0.4\textwidth]{time_spectra_plot_expanded.eps} \caption{Time evolution of the \ion{Si}{4} and \ion{Fe}{21} spectral lines as observed just above the ribbon location for the March 12, 2015 flare. The plots are obtained by stacking together a slice (around the slit-pixels 284 and 285) of the CCD images of the spectral windows as a function of time. The spectra are plotted as a function of Doppler shift velocity, where negative (positive) values indicate blue-shifts (red-shifts) of the line from the rest wavelengths of 1402.77~\AA and 1354.08~\AA. The red lines indicate the start time for Figure \ref{fig:obs_doppler}. } \label{fig:obs_spectra} \end{figure*} Figure~\ref{fig:obs_doppler} shows that the blue-shift of the \ion{Fe}{21} line during the M-class flare under study gradually decreases (while its intensity increases) going towards the peak of the flare. The line becomes completely stationary in most of the pixels in about 4~min or more, at the time indicated by the second pink line in Fig.~\ref{fig:goes}.
Previous work by \cite{graham2015} and \cite{polito2016} found an evaporation duration of $\approx$~10~min for two different X-class flares \citep[see also e.g.][]{li2015}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{flare_int_time2.eps} \includegraphics[width=0.48\textwidth]{flare_vel_time2.eps} \caption{Intensity (top panel) and velocity (bottom panel) of the \ion{Si}{4} and \ion{Fe}{21} lines as a function of time for 5 pixels close to the location of the flare ribbon. Different colors indicate different pixels. Negative values of Doppler shift indicate blue-shifts. The start time is shown as a red line in Figure \ref{fig:obs_spectra}. } \label{fig:obs_doppler} \end{figure} \subsection{Comparison to the multi-threaded model} \label{subsec:comparison} We briefly compare the observations against the multi-threaded model. Using a power-law distribution of heating durations between 30 and 300\,s, we model two cases: $r = 5$ and $3$\,s, shown in Figure \ref{fig:model_doppler}. The modeled values are mostly consistent with the observations, with some differences. The intensities in both \ion{Si}{4} and \ion{Fe}{21} are approximately the same ($10^{5}$ and $10^{3}$\,DN at their peaks, respectively). The observed \ion{Si}{4} intensities appear to have a decreasing trend, however, suggesting that the energy input is decreasing with time, which is not accounted for in the simulations. The red-shifts in \ion{Si}{4} remain close to $30$\,km\,s$^{-1}$ for the time period under consideration. The \ion{Fe}{21} intensities grow with time, while the blue-shifts gradually decay over about 4--5\,min from a maximum of about $200$\,km\,s$^{-1}$. Finally, the initial observed intensities of \ion{Fe}{21} are higher and have a slightly more gradual change with time than the simulations. Overall, the basic premise of the model is consistent with the observations, although the match is far from perfect due to our lack of knowledge of the exact parameters on the Sun.
\begin{figure*} \centering \includegraphics[width=0.48\textwidth]{N200_30-300_a10_comparison.eps} \includegraphics[width=0.48\textwidth]{N333_30-300_a10_comparison.eps} \caption{The modeled foot-point emission from two sets of multi-threaded simulations with $r = 5$ (left) and $3$\,s (right), with a high median flux. Compare the intensities and Doppler shifts to the observed quantities in Figure \ref{fig:obs_doppler}, with which there is a broad consistency. } \label{fig:model_doppler} \end{figure*} \section{Discussion} \label{sec:discussion} In this work, we have shown that the heating duration plays a vital role in determining the observed Doppler shifts and intensities of the \ion{Fe}{21} and \ion{Si}{4} lines observed routinely by \textit{IRIS} in solar flares. In order to produce blue-shifts in \ion{Fe}{21} that decay over 5--10 minutes, as in \citet{graham2015}, it is necessary to heat a loop for a similar duration. If a loop were heated for a short time with a large enough energy flux, it is possible to produce \ion{Fe}{21} emission, but the evaporative up-flows only last slightly longer than that heating duration. The logic is simple: once the heating ceases, the over-pressure that causes the expansion of material also ceases. We therefore find that longer heating durations are required to explain the observations (within the multi-threaded modeling we use), and suggest that the duration of up-flows acts as a diagnostic of that heating duration. In earlier work, \citet{warren2006} came to a similar conclusion: heating durations on individual threads of $\approx 200$\,s are more consistent with GOES and Yohkoh soft X-ray light curves than heating durations of $\approx 20$\,s. For loops heated strongly enough to produce strong evaporative up-flows, there must also be down-flows due to the conservation of momentum. Therefore, we simultaneously expect to measure red-shifts in cooler lines like \ion{Si}{4}.
Indeed, we do find such red-shifts, though they are significantly longer-lived than the roughly 60\,s predicted by \citet{fisher1989}. A multi-threaded model, with multiple loops rooted in one pixel being heated in succession, naturally explains these pervasive red-shifts as the succession of chromospheric condensations on each successive loop. This was the primary result of \citet{reep2016}. When both \ion{Si}{4} and \ion{Fe}{21} emission are present in the data, we can constrain the number of loops within a pixel, the energy fluxes onto those loops, and the heating duration on those loops. This method can be generalized for a detector with a wider temperature coverage and similar cadence. For example, \ion{Fe}{23} up-flows behave similarly in some flares (\textit{e.g.} \citealt{brosius2013}), where the higher temperature of formation for that line could be used to place stronger limits on the energy flux. Further, the amount of plasma at temperatures exceeding 20--30\,MK requires large energy fluxes, so that the relative proportion of this super-hot component could act as another test for this model. Constraining the energy flux in this way is an important diagnostic, as there are no other direct ways to measure this value on individual loops, and the evolution of plasma on a loop is primarily determined by the energy input. We therefore summarize the results: \begin{enumerate} \item[(1)] The duration of chromospheric evaporation depends intimately on the heating duration. To produce evaporation lasting 5--10\,min requires heating durations nearly as long. \item[(2)] There is a distribution of heating durations, with average values $\approx 50$--$100$\,s consistent with the data. If the average value is too long, persistent red-shifts seen in \ion{Si}{4} are suppressed. If the average value is too short, \ion{Fe}{21} emission does not form in appreciable amounts, and the evaporation decays too quickly.
\end{enumerate} \leavevmode \newline \acknowledgments This research was performed while JWR held an NRC Research Associateship award at the US Naval Research Laboratory with the support of NASA. V.P. was supported by NASA grant NNX15AF50G, and by contract 8100002705 from Lockheed-Martin to SAO. Figure \ref{fig:apex_values} was produced with color-blind friendly IDL color tables kindly provided by Graham Kerr and Paul Wright (see \citealt{pjwright}). This research has made use of NASA's Astrophysics Data System. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA) and the University of Cambridge (UK). IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre.
\section{Appendix} \subsection{Interaction Hamiltonian and eigenenergies} \label{sec:interactionPotential} Our impurity-bath system can be considered an ultracold mixture experiment with extreme imbalance, where the interaction potentials are well known and impurities rapidly thermalize with the Rb bath, while impurity-impurity interaction is negligible. Important interaction energy scales in our system are set by the elastic (spin-maintaining) and spin-exchanging collisions between the Cs impurities of mass $m_{\mathrm{i}}$ and bath atoms of mass $m_{\mathrm{b}}$. The former lead to thermalization of the impurities within the bath, while the latter are dissipative processes, releasing energy into the system. Thermalization of an impurity in a quantum bath is not trivial, since the impurities' kinetic energy is dissipated by phonon scattering within the BEC \cite{Lausch2018}. However, for typical velocity and energy scales in our system, we can assume a particle-like character of elastic s-wave collisions, as discussed in the following. The dispersion relation $\epsilon_k = \hbar k v_c (1 + \nicefrac{\xi^2 k^2}{2})^{1/2}$ of the weakly interacting Rb BEC is shown in fig.~\ref{fig:S0}. Here, $k = p_b / \hbar$ is the wave vector of Bogoliubov excitations with momentum $p_b$, $v_c = \hbar / (\sqrt{2} \xi m_{\mathrm{b}})$ denotes the critical velocity and $\xi = 1 / \sqrt{8 \pi n a_{\mathrm{bb}}}$ is the BEC healing length at density $n$ with the boson-boson s-wave scattering length $a_{\mathrm{bb}} = 101\,a_0$ and Bohr radius $a_0$ \cite{Kempen2002}. In a classical bath at temperature $T$, the expectation value for relative collision velocities $\bar{v}$ is given by $\bar{v} = \sqrt{8 k_B T / (\pi \mu)}$ with reduced mass $\mu =\nicefrac{m_{\mathrm{i}} \, m_{\mathrm{b}}}{(m_{\mathrm{i}} + m_{\mathrm{b}})}$.
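Plugging in numbers (masses of $^{87}$Rb and $^{133}$Cs, the density $2.7\times10^{13}\,$cm$^{-3}$ and $T = 300\,$nK used in this appendix) reproduces the scales discussed here, including the elastic rate constant quoted below; a quick numerical sketch:

```python
import numpy as np

hbar = 1.054571817e-34         # [J s]
kB = 1.380649e-23              # [J/K]
amu = 1.66053907e-27           # [kg]
a0 = 5.29177211e-11            # Bohr radius [m]

m_b = 86.909 * amu             # 87Rb (bath)
m_i = 132.905 * amu            # 133Cs (impurity)
mu = m_i * m_b / (m_i + m_b)   # reduced mass

n = 2.7e19                     # BEC density, 2.7e13 cm^-3 in m^-3
a_bb = 101.0 * a0
xi = 1.0 / np.sqrt(8.0 * np.pi * n * a_bb)       # healing length, ~0.5 um
v_c = hbar / (np.sqrt(2.0) * xi * m_b)           # critical velocity, ~1 mm/s

T = 300e-9
v_bar = np.sqrt(8.0 * kB * T / (np.pi * mu))     # mean collision speed, ~1 cm/s

sigma = 4.0 * np.pi * (645.0 * a0) ** 2          # elastic cross section
G_el = sigma * v_bar * 1e6                       # rate constant [Hz cm^3]
# G_el ~ 1.6e-10 Hz cm^3; v_bar exceeds v_c by an order of
# magnitude, so collisions probe the particle-like branch.
```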
Typical relative collision velocities $\bar{v}$ and the corresponding relative collision energy $\bar{E} = 1/2 \, \mu \bar{v}^2 = 4 / \pi \times k_B T$ for fully thermalized impurities lie deep in the particle-like collision regime of the excitation spectrum. Therefore, when evaluating impurity-condensate collision rates, we do not expect significantly different behavior with respect to a fully classical, thermal bosonic bath. A significant change of the collisional properties can be expected for energies $k_B T$ on the order of $1/2 \, \mu v_c^2$ \cite{Lausch2018}, corresponding to $k \approx 1 / \xi$, thus allowing the collisional properties to be tuned by the choice of density, interaction and temperature of the BEC. This is in contrast to studies of Bose polarons in a similar system, where only the low-$k$ part of the impurity's spectral function is measured by RF spectroscopy, implying scattering with low-momentum Bogoliubov excitations only \cite{Jorgensen2016}. Thus, we use the effective elastic scattering length $a = 645\,a_0$ \cite{Takekoshi2012} to calculate the elastic collision rate $\Gamma_{\mathrm{el}} = \sigma \bar{v} \left<n\right>$ with the s-wave scattering cross section $\sigma = 4 \pi a^2$ for distinguishable particles. \begin{figure} \centering \includegraphics[scale=1]{./imgs/S0} % \caption{ Bogoliubov dispersion relation of a Rb BEC for a density of $n = \SI{2.7e13}{\centi \meter^{-3}}$ (red), as used in the measurement of Fig. 2 in the body of this work with $v_b = \hbar k / m_{\mathrm{b}}$. The dashed (dotted) line describes the wave-like, linear (particle-like, quadratic) excitation character for low (high) momenta. The corresponding speed of sound $v_c$ and the expectation value of the relative collision velocity $\bar{v}$ at $\SI{300}{\nano \kelvin}$ are shown together with the respective energy scales.
} \label{fig:S0} \end{figure} A comparison of the elastic collision rate constant ${G_\mathrm{el}} = \sigma \bar{v} = \SI{1.61e-10}{\hertz \, \centi \meter^3}$ (for $T=\SI{300}{\nano \kelvin}$) with the spin-exchange rate constant $G = \nicefrac{\Gamma_{\mathrm{se}}}{\left<n\right>}$ from our model yields information about the microscopic dynamics in the system. The ratio $\nicefrac{{G_\mathrm{el}}}{G} \approx 11$ at the Rb bath temperature of $\SI{300}{\nano \kelvin}$ means that, on average, 1 in 11 collisions between impurity and bath atoms results in a spin-exchange. Since only a few elastic collisions suffice for Cs atoms to thermalize in the Rb bath \cite{Davis1995, Hohmann2017}, fully thermalized Cs impurities in the BEC are assumed for modeling spin-exchange in the following. The thermalization also implies that the density distribution of each impurity within one lattice well is effectively two-dimensional due to the large axial trap frequency in the lattice: the energy level spacing in the lattice $\hbar \omega_{\mathrm{ax}} / k_B = \SI{3}{\micro \kelvin}$ is one order of magnitude above the BEC temperature of $\SI{300}{\nano \kelvin}$, therefore yielding negligible occupation of excited states in the lattice. \\ \textbf{Hamiltonian.} We consider a Cs (Rb) atom with total angular momentum $\mathbf{F}_{\mathrm{i}}$ ($\mathbf{F}_{\mathrm{b}}$). The quantum numbers are $F_{\mathrm{i}}=3$ (hyperfine ground state) or $F_{\mathrm{i}}=4$ for Cs and $F_{\mathrm{b}}=1$ for Rb with the projections onto the quantization axis $m_{F, \mathrm{i}}$ and $m_{F, \mathrm{b}}$, respectively. The full Hamiltonian of the interacting particles in the center-of-mass system reads \cite{Stoof1988} \begin{displaymath} H = \frac{\mathbf{p}^2}{2 \mu} + \sum_{j=\mathrm{i}, \mathrm{b}} H^{\mathrm{i}}_{j} + \hat{H}^{\mathrm{int}}. \end{displaymath} Here, the first term is the total kinetic energy in the system (with relative momentum $\mathbf{p}$).
$H^{\mathrm{i}}_j = V_j^{\text{HFS}} + V_j^{\text{Z}}$ is the internal energy of each collision partner $j$ (impurity $\mathrm{i}$ and bath atom $\mathrm{b}$) with hyperfine and Zeeman energy $V_j^{\text{HFS}}$ and $V_j^{\text{Z}}$, respectively. Finally, $\hat{H}^{\mathrm{int}}$ is the interaction term that originates from a central interaction potential. Due to the central character of the interaction, the total spin in the system $\mathbf{F} = \mathbf{F}_{\mathrm{i}} + \mathbf{F}_{\mathrm{b}}$ is conserved, and $F$ and the projection to the quantization axis $M$ are good quantum numbers. The impurity in state $\ket{\psi}_{\mathrm{i}}=\ket{F_{\mathrm{i}}, m_{F, \mathrm{i}}}$ and bath in state $\ket{\psi}_{\mathrm{b}}=\ket{F_{\mathrm{b}} m_{F, \mathrm{b}}}$ couple to $\ket{F_{\mathrm{i}} F_{\mathrm{b}}; F M}$ during the collision. In order to calculate eigenstates of the Hamiltonian and the collision rates in the system, the interaction $\hat{H}^{\mathrm{int}}$ is expressed in terms of the total spin $F$, ranging from $\left| F_{\mathrm{i}} - F_{\mathrm{b}} \right|$ to $F_{\mathrm{i}} + F_{\mathrm{b}}$, as \begin{equation} \hat{H}^{\mathrm{int}} = \left<n\right> \sum_{F=\left| F_{\mathrm{i}} - F_{\mathrm{b}} \right|}^{F_{\mathrm{i}} + F_{\mathrm{b}}} g_F \mathcal{P}_F \label{eq:interactionHamiltonianProjection} \end{equation} with the projection operators onto total $F$, $\mathcal{P}_F = \sum_{M=-F}^{F} \ket{F_{\mathrm{i}} F_{\mathrm{b}}; F, M}\bra{F_{\mathrm{i}} F_{\mathrm{b}}; F, M}$ and the spatial wave function overlap $\left<n\right>$.
$g_F = ({4 \pi \hbar^2 }/{\mu}) \, a_F$ is the coupling constant with s-wave scattering length $a_F$ in scattering channels with total spin $F$.\\ \textbf{Elastic collisions.} The central interaction potential leads to elastic collisions, where the atoms' internal state does not change. However, the energy expectation values are different for each combination of internal states of the impurity $\ket{\psi}_{\mathrm{i}}$ and bath atom $\ket{\psi}_{\mathrm{b}}$. For $B=0$ and $T=0$ the full Hamiltonian reduces to $\hat{H}^{\mathrm{int}}$, and the energy reads $E_{\ket{\psi}_{\mathrm{i}}} = \braket{\psi_{\mathrm{i}} \otimes \psi_{\mathrm{b}}|\hat{H}^{\mathrm{int}}|\psi_{\mathrm{i}} \otimes \psi_{\mathrm{b}} }$, with $\ket{\psi_{\mathrm{i}} \otimes \psi_{\mathrm{b}}} = \ket{{F_{\mathrm{i}} m_{F, \mathrm{i}}; F_{\mathrm{b}} m_{F, \mathrm{b}}}}$.\\ When considering the situation where an impurity atom is prepared in a quantum superposition of internal states $\ket{\psi}_{\mathrm{i}} = {\left(\ket{g} + i\ket{e} \right)} / {\sqrt{2}}$, the state-dependent interaction energy leads to a dephasing of the qubit. In the spin-echo measurement (see fig.~4 of main text) we use $\ket{g}=\ket{F_{\mathrm{i}}=3, m_{F, \mathrm{i}}=3}$ and $\ket{e}=\ket{F_{\mathrm{i}}=4, m_{F, \mathrm{i}}=3}$. Dephasing leads to information loss when the qubit is used as an information carrier, but could also be used to extract information about the bath for probing applications. While for the former, low dephasing rates are desired, for the latter a strong interaction is favorable. The state-dependent energies $E_{\ket{g}}$ and $E_{\ket{e}}$ are calculated for Rb in $\ket{\psi}_{\mathrm{b}} = \ket{F_{\mathrm{b}}=1, m_{F, \mathrm{b}}=1}$, as used in the measurement.
We evaluate the Clebsch-Gordan coefficients $\Braket{F_{\mathrm{i}} F_{\mathrm{b}}; F, M | {F_{\mathrm{i}} m_{F, \mathrm{i}} F_{\mathrm{b}} m_{F, \mathrm{b}}}}$ in eq.~(\ref{eq:interactionHamiltonianProjection}) and get \begin{align} E_{\ket{g}} &= \frac{4 \pi \hbar^2 n_{\mathrm{Rb}}}{\mu} a_{g, 4} \\ E_{\ket{e}} &= \frac{4 \pi \hbar^2 n_{\mathrm{Rb}}}{\mu} \left( \frac{1}{5}a_{e, 4} + \frac{4}{5} a_{e,5} \right). \end{align} Therefore, the impurity qubit dephases at the rate $\delta_{\ket{e}-\ket{g}} = \frac{E_{\ket{g}}-E_{\ket{e}}}{h}$, which reads \begin{equation} \delta_{\ket{e}-\ket{g}} = \frac{4 \pi \hbar^2 n_{\mathrm{Rb}}}{\mu h} \left(a_{g,4} -\frac{1}{5}a_{e, 4} - \frac{4}{5} a_{e,5} \right) \end{equation} with Planck's constant $h$. Thus, dephasing rates can be expressed in terms of scattering-length differences, here $\Delta a_{g-e} = a_{g,4} -\frac{1}{5}a_{e, 4} - \frac{4}{5} a_{e,5}$. The state-dependent scattering lengths are $a_{g, 4}=648\,a_0$ for the ground state and $a_{e, 4} = 570\,a_0$ and $a_{e, 5} = 626\,a_0$ in the excited state \cite{Tiemann2018}, yielding an effective scattering-length difference for our choice of qubit states of $\Delta a_{g-e} = 33\,a_0$. Analogously, the dephasing rates for all possible state combinations can be evaluated. For example, for an alternative qubit choice $\ket{g}=\ket{F_{\mathrm{i}}=3, m_{F, \mathrm{i}}=0}$ and $\ket{e}=\ket{F_{\mathrm{i}}=4, m_{F, \mathrm{i}}=0}$, this yields $\Delta a_{0-0}= 330 \, a_0$. \\ \textbf{Spin-exchange collisions.} In addition to elastic collisions, the central interaction potential also allows an exchange of angular momentum between the collision partners, i.e., spin-exchange, while maintaining the total projection $M=m_{F, \mathrm{i}} + m_{F, \mathrm{b}}$. At ultracold temperatures and finite magnetic fields the spin-exchange is unidirectional, determined by the eigenenergies (Zeeman energy) of impurity and bath atoms.
For the Cs-Rb combination, in each spin-exchange collision the energy $\nicefrac{E_{\mathrm{exo}}}{B} = h \cdot \SI{350}{\kilo \Hz \per G}$ is converted into kinetic energy, while transferring $\SI{1}{\hbar}$ from the impurity to a bath atom. The energy of $E_{\mathrm{exo}} = k_B \times \SI{4}{\micro \kelvin}$ (for $B=\SI{250}{\milli G}$) and angular momentum due to spin-exchange is transferred to a bath consisting of $>10^4$ atoms. For our strongly imbalanced mixture, this effectively changes neither the temperature nor the mean spin projection of the bath.\\ A calculation of the rate constant $G^{\ket{f}}_{\ket{i}}$ for a transition from state $\ket{i}=\ket{F_{\mathrm{i}} F_{\mathrm{b}}; m_{F, \mathrm{i}} m_{F, \mathrm{b}}}$ to state $\ket{f}=\ket{F_{\mathrm{i}} F_{\mathrm{b}}; m_{F, \mathrm{i}}^{\prime} m_{F, \mathrm{b}}^{\prime}}$ requires a full diagonalization of the system's Hamiltonian including kinetic, Zeeman and hyperfine energies. In general, it yields a dependence on the initial $m_F$ states of both collisional partners \cite{Stoof1988}. For the case of Rb in $m_{F, \mathrm{b}}=-1$ being discussed here, the rates of spin-exchange have been calculated theoretically \cite{Tiemann2018}. We expect a spin-exchange constant for an exchange of $1 \hbar$ between Cs and Rb of $\bar{G}_{1\hbar}=\SI{1.57(65)E-11}{\hertz \, \centi \meter^3}$ for the allowed transitions at low magnetic background fields $<\SI{1}{G}$ (for comparison to measurement, see sec. \ref{sec:rateConstant}). Here, the error bar gives the standard deviation for $G$ values in different $m_{F, \mathrm{i}}$ states. Note that spin-exchange is also allowed in quanta of $2\hbar$. Here, however, the rate constant $\bar{G}_{2\hbar}=\SI{1.22(76)E-12}{\hertz \, \centi \meter^3}$ is expected to be one order of magnitude lower than for spin-exchange of $1 \hbar$, so we neglect this process in our model.
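Several of the numbers quoted in this appendix can be checked with a few lines of arithmetic: the scattering-length difference for the $\ket{3,3}/\ket{4,3}$ qubit, the corresponding dephasing rate (evaluated at an assumed example density, since the density in the spin-echo measurement is not restated here), and the spin-exchange energy release at $B = 250\,$mG. A sketch:

```python
import math

hbar = 1.054571817e-34
h = 6.62607015e-34
kB = 1.380649e-23
amu = 1.66053907e-27
a0 = 5.29177211e-11

# Scattering-length difference for the |3,3>/|4,3> qubit (weights 1/5, 4/5)
a_g4, a_e4, a_e5 = 648.0, 570.0, 626.0           # quoted values, in units of a0
delta_a = a_g4 - a_e4 / 5.0 - 4.0 * a_e5 / 5.0   # ~33 a0, as quoted

# Dephasing rate for an assumed example density of 2.7e13 cm^-3
mu = (132.905 * 86.909) / (132.905 + 86.909) * amu   # Cs-Rb reduced mass
n = 2.7e19                                           # [m^-3]; an assumption
delta = 4.0 * math.pi * hbar**2 * n * delta_a * a0 / (mu * h)   # [Hz]

# Spin-exchange energy release at B = 250 mG, as an equivalent temperature
T_exo = h * 350e3 * 0.250 / kB                   # ~4 uK, as quoted
```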
\\ \subsection{Rb and Cs preparation} \begin{figure} \centering \includegraphics[width=88mm]{./imgs/S1} % \caption{(a) Potential landscape of the Rb dipole trap $U_{\mathrm{Rb}} / k_B$. Cs (blue) is prepared independently from the BEC (red) and transported to the BEC subsequently. Numbers give the respective $m_F$ states. $z$ is the main experiment axis, and $g$ gives the direction of gravity. (b) Position distribution of Cs within the BEC after transport (blue histogram) and the measured BEC line density $\rho_{\mathrm{l}}$ at the beginning of the interaction time after $\SI{6.5}{\milli \second}$ expansion time. } \label{fig:S1} \end{figure} The Rb BEC is created in an all-optical evaporation scheme in a crossed dipole trap at $\SI{1064}{\nano \meter}$ in the magnetic-field-insensitive $m_{F, \mathrm{b}}=0$ state. The atom number and condensate fraction are measured after a free expansion of $\SI{20}{\milli \second}$ in time-of-flight. After evaporation, we increase the Rb trapping potential by increasing the power of the axial dipole trap adiabatically from $U_{0, \mathrm{Rb}} / k_B = \SI{-3}{\micro \kelvin}$ to $U_{\mathrm{Rb}} / k_B = \SI{-30}{\micro \kelvin}$ (with Boltzmann constant $k_B$). The BEC is characterized after this adiabatic compression, yielding a total atom number of $(10-20) \times 10^3$ and a condensate fraction of about 0.30--0.35. Subsequently, Cs atoms are loaded from a high-gradient magneto-optical trap into an independent crossed dipole trap, sharing the main, horizontal dipole trap beam along the $z$ axis with the BEC (see fig. \ref{fig:S1}, (a)). Three Raman sideband cooling pulses \cite{Treutlein2001}, separated by short evolution times of $\pi/(2 \omega_\mathrm{rad})$ ($\omega_\mathrm{rad} = 2 \pi \times 600\,$Hz is the radial trapping frequency), cool the Cs atoms to approximately $\SI{2}{\micro \kelvin}$.
Rb atoms are transferred into one of the $m_{F, \mathrm{b}} = 0, \pm 1$ states by two Landau-Zener microwave sweeps, near-resonant to the $\SI{6.8}{\giga \hertz}$ hyperfine transition. Figure \ref{fig:S1} shows the potential landscape for Rb together with the Rb and Cs position distributions along the main experiment axis $z$ after preparation.\\ \textbf{The dipole trap.} The trap frequencies of the dipole trap in radial and axial direction for Rb are $\omega_r = 2 \pi \times \SI{700}{\hertz} $ and $\omega_z=2 \pi \times \SI{50}{\hertz}$, respectively. Note that Cs has nearly the same trapping frequencies due to the favorable ratio of mass and dipole force, leading to a negligible gravitational sag between both species below $\SI{50}{\nano \meter}$. The initial density of a BEC with condensate fraction $\eta$, temperature $T$ and total atom number $N$ in the trap is calculated as a bimodal distribution $n_{\mathrm{b}} = n_{\mathrm{th}}(r, z) + n_{\mathrm{TF}}(r, z)$. Here, $n_{\mathrm{th}}(r, z)$ is the density of the thermal background and $n_{\mathrm{TF}}(r, z)$ the Thomas-Fermi distributed condensed fraction \cite{Dalfovo1999}. The thermal density profile reads \newcommand{n_{0, \mathrm{th}} }{n_{0, \mathrm{th}} } \begin{equation} n_{\mathrm{th}}(r, z) = n_{0, \mathrm{th}} \exp{\left( -\frac{r^2}{2 \sigma_r^2} - \frac{z^2}{2 \sigma_z^2} \right)} \end{equation} with the thermal peak density $n_{0, \mathrm{th}} = \frac{(1-\eta)N}{(2\pi)^{\nicefrac{3}{2}} \sigma_r^2 \sigma_z}$ and the widths of the thermal cloud in radial and axial direction $\sigma_r = \sqrt{\frac{k_B T }{m_{\mathrm{b}}}} \frac{1}{\omega_r}$ and $\sigma_z = \sqrt{\frac{k_B T }{m_{\mathrm{b}}}} \frac{1}{\omega_z}$.
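The thermal widths and peak density defined above can be evaluated directly. A minimal sketch, assuming illustrative parameters $T=\SI{300}{\nano\kelvin}$, $N=15\times10^3$ and $\eta=0.35$ (the exact values vary between measurements):

```python
import math

# Illustrative thermal-cloud parameters of the Rb bath (assumed values).
k_B = 1.380649e-23            # J/K
m_b = 86.909 * 1.66054e-27    # Rb-87 mass, kg
T = 300e-9                    # K
omega_r = 2 * math.pi * 700.0 # rad/s, radial trap frequency
omega_z = 2 * math.pi * 50.0  # rad/s, axial trap frequency
N, eta = 15e3, 0.35           # total atom number, condensate fraction

v_th = math.sqrt(k_B * T / m_b)
sigma_r = v_th / omega_r      # radial thermal width, m
sigma_z = v_th / omega_z      # axial thermal width, m

# Thermal peak density from the normalization given in the text
n0_th = (1 - eta) * N / ((2 * math.pi) ** 1.5 * sigma_r ** 2 * sigma_z)
print(sigma_r, sigma_z, n0_th * 1e-6)  # widths in m, density in cm^-3
```

For these numbers the radial width comes out close to the $\sigma_r \approx \SI{1}{\micro\meter}$ quoted later for the Cs distribution at the same temperature.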
The Thomas-Fermi density profile $n_{\mathrm{TF}}(r, z)$ of a BEC reads \newcommand{n_{0, \mathrm{TF}} }{n_{0, \mathrm{TF}} } \begin{equation} n_{\mathrm{TF}}(r, z) = n_{0, \mathrm{TF}} \left( 1 - \frac{r^2}{R_r^2} - \frac{z^2}{R_z^2} \right) \end{equation} with peak density $n_{0, \mathrm{TF}} = \frac{15 \eta N}{8 \pi R_r^2 R_z}$ and Thomas-Fermi radii $R_{r, z} = \sqrt{\frac{2\mu_c}{m_{\mathrm{b}} \omega_{r, z}^2}}$ in radial and axial direction, respectively (the profile is set to zero where the bracketed expression is negative). Here, $\mu_c = \frac{15^\frac{2}{5}}{2} \left( \frac{\eta N a}{\bar a} \right)^\frac{2}{5} \hbar \bar\omega$ is the chemical potential of an interacting BEC with scattering length $a = 101\,a_0$ \cite{Kempen2002} and characteristic length $\bar a = \sqrt{\frac{\hbar}{m_{\mathrm{b}} \bar \omega }}$, $ \bar{\omega} = (\omega_r^2 \omega_z)^{1/3}$. Typically, our BEC has a calculated peak density on the order of $\SI{1e14}{\centi \meter^{-3}}$ and Thomas-Fermi radii of $R_r=\SI{1}{\micro \meter}$ and $R_z=\SI{10}{\micro \meter}$.\\ \textbf{Loss channels of Cs.} Depending on the Cs hyperfine state $F_{\mathrm{i}}=3,4$, different loss channels limit the lifetime of Cs atoms in the Rb BEC. For Cs in the $F_{\mathrm{i}}=3$ hyperfine ground state, three-body recombination of one Cs atom with two Rb atoms leads to loss at a rate $\Lambda_{\mathrm{3body}} = L_3 \left< n^2 \right>$ with $\left<n^2\right> = \int n_{\mathrm{b}}^2 n_{\mathrm{i}} \mathrm{d}^3 \mathbf{r}$. The value of $L_3 = \SI{28(1)e-26}{\hertz \, \centi \meter^6 }$ has been experimentally obtained for Rb in $m_{F, \mathrm{b}}=0$ in an independent measurement. When Cs is prepared in the excited hyperfine state $F_{\mathrm{i}}=4$, additional two-body recombination can occur at a rate $\Lambda_{\mathrm{2body}} = L_2 \left< n\right>$ with a loss coefficient $L_2 = \SI{4(2)e-12}{\centi \meter^3 \hertz}$ (determined for Rb in $m_{F, \mathrm{b}}=0$) and the (linear) density overlap $\left< n \right>$.
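The Thomas-Fermi parameters defined above can likewise be evaluated numerically. A minimal sketch with the same illustrative atom number and condensate fraction as before (assumed, not the exact experimental values):

```python
import math

# Physical constants and assumed trap/atom parameters (illustrative).
hbar = 1.054571817e-34
k_B = 1.380649e-23
a0 = 5.29177e-11               # Bohr radius, m
m_b = 86.909 * 1.66054e-27     # Rb-87 mass, kg

omega_r = 2 * math.pi * 700.0
omega_z = 2 * math.pi * 50.0
omega_bar = (omega_r**2 * omega_z) ** (1 / 3)

N, eta = 15e3, 0.35
a = 101 * a0                                 # Rb-87 scattering length
a_bar = math.sqrt(hbar / (m_b * omega_bar))  # characteristic length

# Chemical potential and Thomas-Fermi radii from the formulas in the text
mu_c = 15**0.4 / 2 * (eta * N * a / a_bar) ** 0.4 * hbar * omega_bar
R_r = math.sqrt(2 * mu_c / (m_b * omega_r**2))
R_z = math.sqrt(2 * mu_c / (m_b * omega_z**2))
n0_TF = 15 * eta * N / (8 * math.pi * R_r**2 * R_z)
print(R_r, R_z, n0_TF * 1e-6)  # radii in m, peak density in cm^-3
```

The result reproduces the typical scales quoted above: $R_r$ of about $\SI{1}{\micro\meter}$, $R_z$ of order $\SI{10}{\micro\meter}$, and a peak density of order $\SI{1e14}{\centi\meter^{-3}}$.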
Since the expected three-body loss rate $\Lambda_{\mathrm{3body}}=\SI{16}{\kilo \hertz}$ is of the same order of magnitude as the elastic collision rate $\Gamma_{\mathrm{el}} = \SI{36}{\kilo \hertz}$, we reduce the loss rate in order to observe Cs-BEC dynamics driven by elastic and spin-exchange collisions, rather than mere loss of Cs.\\ Therefore, before initiating the interaction of Cs and Rb, the axial confinement is lowered by switching off the axial dipole trap beam, so the axial trap frequency $\omega_z$ drops instantaneously to $\tilde{\omega}_z = 2\pi \times \SI{8}{\hertz}$. The Thomas-Fermi radii evolve according to $R_{r, z}(t) = \lambda_{r,z}(t) R_{r, z}(0)$ with time-dependent scaling factors $\lambda_{r,z}(t)$ \cite{Kagan1996, Castin1996}, given by \begin{equation} \ddot{\lambda}_{j=r,z} = \frac{\omega_{j}(0)^2}{\lambda_{j} \lambda_r^2 \lambda_z} - \omega_{j}^2(t) \lambda_{j}, \end{equation} with $\lambda_{r,z}(0)=1$ and $\dot{\lambda}_{r,z}(0)=0$ and the time-dependent trap frequencies $\omega_{r,z}(t)$. During the expansion, the Thomas-Fermi radius $R_z$ increases from initially $\SI{13}{\micro \meter}$ to $\SI{270}{\micro \meter}$ axially, while the peak density is reduced by almost a factor of ten within the total BEC expansion time of up to $\SI{26.5}{\milli \second}$. For the thermal background, the degrees of freedom do not couple in the quasi-harmonic trapping potential, so the radial position distribution remains unaffected. Since the expansion time of $<\SI{26}{\milli \second}$ is short with respect to the trap period of $\SI{125}{\milli \second}$, we assume free expansion along the $z$ axis with \cite{ketterle1999} \begin{equation} \sigma_z^2(t) = \sigma_z^2(0) + \frac{k_B T}{m_{\mathrm{b}}} t^2. \end{equation} We compare our model to a measurement of the line density $\rho_{\mathrm{l}}$ of the Rb bath (BEC and thermal background) and find good agreement up to interaction times of about $\SI{15}{\milli \second}$.
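The scaling equations above can be integrated numerically with a simple time-stepping scheme. A minimal sketch (semi-implicit Euler, illustrative trap parameters; details of the experiment such as the finite switch-off time are not modeled):

```python
import math

def scaling_evolution(omega_r0, omega_z0, omega_r_t, omega_z_t, t_end, dt=1e-6):
    """Integrate the Thomas-Fermi scaling equations
        lam_j'' = omega_j(0)^2 / (lam_j * lam_r^2 * lam_z) - omega_j(t)^2 * lam_j
    with a semi-implicit Euler scheme. Returns (lam_r, lam_z) at t_end."""
    lr, lz, vr, vz = 1.0, 1.0, 0.0, 0.0
    steps = int(t_end / dt)
    for n in range(steps):
        t = n * dt
        ar = omega_r0**2 / (lr * lr**2 * lz) - omega_r_t(t) ** 2 * lr
        az = omega_z0**2 / (lz * lr**2 * lz) - omega_z_t(t) ** 2 * lz
        vr += ar * dt
        vz += az * dt
        lr += vr * dt
        lz += vz * dt
    return lr, lz

omega_r = 2 * math.pi * 700.0
omega_z = 2 * math.pi * 50.0
omega_z_new = 2 * math.pi * 8.0  # residual axial trap after switch-off

# Quenching only the axial confinement: the cloud expands along z while
# the radial size adjusts adiabatically in the unchanged radial trap.
lr, lz = scaling_evolution(omega_r, omega_z,
                           lambda t: omega_r, lambda t: omega_z_new,
                           t_end=26.5e-3)
print(f"scaling factors after 26.5 ms: lam_r = {lr:.2f}, lam_z = {lz:.1f}")
```

For an unchanged trap, $\lambda_{r,z}=1$ is a stationary solution, which provides a simple sanity check of the integrator.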
For longer interaction times, $\rho_{\mathrm{l}}$ shows a bimodal density distribution, which our model cannot reproduce. We attribute the occurrence of this localized fraction to the emergence of a shallow lattice in the axial dipole trap beam due to an unwanted, partial retro-reflection of the trapping light on the glass cell, which might induce localization in the lowest Bloch band of that lattice \cite{Kolovsky2004}. We estimate a lattice depth on the order of $0.2 \, E_r$ (recoil energy $E_r=\nicefrac{(\hbar k)^2}{2 m_{\mathrm{b}}}$ with wave vector $k=\nicefrac{2 \pi}{\SI{1064}{\nano \meter}}$). Since lattice effects are only expected to occur along the $z$ direction, we expect the radial distribution to remain unaffected.\\ \begin{figure} \centering \includegraphics[width=85mm]{./imgs/S2} % \caption{Quasi \textit{in situ} atomic BEC density and extracted line density $\rho_{\mathrm{l}}$ of the BEC after $\SI{1.5}{\milli \second}$ time-of-flight (atom number $14 \times 10^3$ and condensate fraction 0.35). (a) - (d) show atomic density and extracted line densities (blue) for various interaction times $t_i$ given in the respective panel together with our model (red). The line density is extracted by vertically binning the measured atomic column density. The model takes the limited imaging resolution of the absorption imaging system of $\SI{10}{\micro \meter}$ into account.} \label{fig:S2} \end{figure} \textbf{Cs distribution.} Cs atoms are pinned to their position by the species-selective lattice, yielding trap frequencies of $\omega_r=2\pi \times \SI{715}{\hertz}$ and $\omega_z = 2 \pi \times \SI{63}{\kilo \hertz}$. Since the mean free path $\nicefrac{1}{( n_{\mathrm{b}}(t_i) \sigma )}$ of Cs impurities in the expanding BEC exceeds the radial size of the BEC for all interaction times $t_i$, we expect no localization effects in the Rb bath in radial direction.
At a bath temperature of $\SI{300}{\nano \kelvin}$, the Cs distribution has a spatial extent of $\sigma_r=\SI{1}{\micro \meter}$ in radial direction. Along the axial direction ($z$), impurities predominantly occupy the ground state of the species-selective lattice with a width of $\sigma_z \approx \SI{10}{\nano \meter}$. \subsection{Species-selective lattice potential} The species-selective lattice is formed by two counter-propagating, linearly polarized laser beams at a wavelength of $\lambda_{\mathrm{Lat}} = \SI{790}{\nano \meter}$, superimposed on the axial dipole trap along $z$. This wavelength realizes a tune-out trapping scheme, exploiting the coupling to both Rb $D$ lines \cite{LeBlanc2007}. A selectivity of $1800$ is achieved for Rb in the $m_{F, \mathrm{b}}=\pm1$ state, limited by vector light shifts \cite{Schmidt2016_2}. A small detuning of the laser frequency can be introduced, allowing the transport of Cs atoms in this conveyor-belt lattice \cite{Kuhr2001}. For the transport, we use a lattice potential of $\SI{150}{E_{r, \mathrm{i}}}$ for the impurity atoms with a residual potential of $\SI{0.05}{E_{r, \mathrm{b}}}$ for the bath atoms (with photon recoil energy $E_{r, \mathrm{i, b}}$ for impurity and bath atoms, respectively).\\ During transport ($\SI{10}{\milli \second}$ duration) and holding of Cs impurities in the species-selective lattice ($\SI{20}{\milli \second}$ duration), the lattice causes on average $0.25$ off-resonant photon-scattering events per BEC atom. All BEC characteristics given for the respective measurements are determined including this off-resonant photon scattering. \subsection{Impurity Spin Readout} While the $m_{F, \mathrm{i}}$ population in ultracold gases is routinely detected in Stern-Gerlach experiments during time-of-flight, we rely on \textit{in situ} fluorescence imaging of Cs atoms in the dipole trap, which excludes those standard methods.
Instead, our $m_{F, \mathrm{i}}$ mapping scheme is based on microwave transitions between the hyperfine ground states $F=3$ and $F=4$ of Cs (see fig.~\ref{fig:Landau-Zener}), while Cs atoms remain localized in the species-selective lattice. The population of a desired $\tilde{m}_{F, \mathrm{i}}$ state is measured in two steps. First, the $m_{F, \mathrm{i}} \neq \tilde{m}_{F, \mathrm{i}}$ states are transferred to $\ket{F_{\mathrm{i}}=4, m_{F, \mathrm{i}}^{\prime}}$ by independent Landau-Zener (LZ) microwave sweeps, near-resonant to the Cs $\SI{9.2}{\giga \hertz}$ clock transition. In order to guarantee adiabatic transfer for all $m_{F, \mathrm{i}}$ states, the Rabi frequency of the transition $\ket{F_{\mathrm{i}}=3, m_{F, \mathrm{i}}=3} \rightarrow \ket{F=4, m_{F, \mathrm{i}}^{\prime}=3}$ has been measured. The remaining Rabi frequencies $\Omega_{m_{F, \mathrm{i}} \rightarrow m_{F, \mathrm{i}}^{\prime}}$ were calculated from the ratio of their transition strengths $C_{m_{F, \mathrm{i}}}^{m_{F, \mathrm{i}}^{\prime}}$ to that of the $\Omega_{m_{F, \mathrm{i}}=3\rightarrow m_{F, \mathrm{i}}^{\prime}=3}$ transition (see fig. \ref{fig:Landau-Zener}). After the LZ transitions have been completed, the population in the $\ket{F=4}$ manifold is removed by a state-selective push-out light pulse on the $D_2, F=4 \rightarrow F=5$ cycling transition, leaving only $\tilde{m}_{F, \mathrm{i}}$ atoms in the trap. \begin{figure} \centering \includegraphics[width=75mm]{./imgs/LandauZenerScheme.pdf} \caption{Detection scheme of the $m_{F, \mathrm{i}}$ population. The population of one $\ket{F_{\mathrm{i}}=3, \tilde{m}_{F, \mathrm{i}}}$ state (here $\tilde{m}_{F, \mathrm{i}}=1$) is measured by transferring the population of all other $\ket{F_{\mathrm{i}}=3, m_{F, \mathrm{i}}}$ states in six independent Landau-Zener (LZ) microwave transitions to the excited hyperfine state. For the LZ sweeps, $\pi$ transitions ($\Delta m_{F, \mathrm{i}} = 0$) are used. For details, see text.
} \label{fig:Landau-Zener} \end{figure} \subsection{Spin-Evolution model} We model the evolution of the impurity spin with a rate equation, where spin-exchange at rate $\Gamma_{\mathrm{se}}$ and atom loss at rate $\Lambda$ due to inelastic three-body recombination change the population $N_{m_{F, \mathrm{i}}}$ in each $m_{F, \mathrm{i}}$ state according to \begin{equation} \begin{aligned} \dot{N}_{+3} &= -\left( \Gamma_{\mathrm{se}} + \Lambda \right) N_{+3} \\ \dot{N}_{ m_{F, \mathrm{i}} } &= -\left( \Gamma_{\mathrm{se}} + \Lambda \right) N_{m_{F, \mathrm{i}}} + \Gamma_{\mathrm{se}} N_{m_{F, \mathrm{i}}+1} \\ & \quad \text{for} \quad m_{F, \mathrm{i}} = -2, -1, 0, +1, +2 \\ \dot{N}_{-3} &= \Gamma_{\mathrm{se}} N_{-2} - \Lambda N_{-3}. \end{aligned} \label{eq:rateModel} \end{equation} \textbf{Average rates.} A common approach to solving the spin dynamics uses an average spin-exchange rate $\left<\Gamma_{\mathrm{se}}\right>$. Here, the time-dependent density overlap $\left<n\right>(t) = \int n_{\mathrm{i}}(\mathbf{r}) n_{\mathrm{b}}(\mathbf{r}, t) \mathrm{d}^3\bf{r}$ of the impurity atoms and the expanding BEC is calculated. This approach is used whenever expectation values for spin-exchange rates are given, both in the main body and in this appendix.\\ \textbf{Monte-Carlo approach.} The use of averaged rates $\left< \Gamma_{\mathrm{se}} \right>$, however, neglects the influence of the inhomogeneous density distributions of the impurities within the Rb bath and of the bath itself, which both lead to temporal fluctuations of the spin-exchange rate and hence to an effective broadening of the $m_{F, \mathrm{i}}$ distribution for increasing interaction durations $t_i$. In order to include the influence of the inhomogeneous distributions, we use a Monte-Carlo simulation, where the local density of the Rb cloud $n_{\mathrm{b}}(\mathbf{r}, t)$ is evaluated individually for each Cs impurity and time step in the Monte-Carlo sample.
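The Monte-Carlo treatment of the rate model, eq.~(\ref{eq:rateModel}), can be sketched as follows. This is a simplified illustration, not the full analysis: the bath is reduced to a static 1D Gaussian density profile and all parameter values are assumed for demonstration only.

```python
import math
import random

def spin_evolution_mc(G, Lambda, n_bath, n_atoms=200, t_end=0.1, dt=1e-4, seed=1):
    """Monte-Carlo version of the rate model: each impurity sees the local
    bath density at a randomly drawn position. The bath profile is
    simplified to a 1D Gaussian here (illustrative only)."""
    rng = random.Random(seed)
    sigma = 1e-4  # cm, assumed width of the impurity/bath distribution
    total = [0.0] * 7  # summed populations, ordered m_F = +3 ... -3
    for _ in range(n_atoms):
        N = [1.0] + [0.0] * 6  # each impurity starts in m_F = +3
        t = 0.0
        while t < t_end:
            r = rng.gauss(0.0, sigma)  # draw impurity position each step
            g = G * n_bath * math.exp(-r**2 / (2 * sigma**2))  # local rate
            dN = [0.0] * 7
            dN[0] = -(g + Lambda) * N[0]
            for k in range(1, 6):
                dN[k] = -(g + Lambda) * N[k] + g * N[k - 1]
            dN[6] = g * N[5] - Lambda * N[6]
            N = [x + dx * dt for x, dx in zip(N, dN)]
            t += dt
        total = [a + b for a, b in zip(total, N)]
    return total

pops = spin_evolution_mc(G=1.7e-11, Lambda=0.0, n_bath=1e13)
print([round(p, 2) for p in pops])
```

Without loss ($\Lambda = 0$), the total population is conserved exactly and the spin projection cascades from $m_F=+3$ towards $m_F=-3$, as in the rate model.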
The rate model (see eq.~(\ref{eq:rateModel})) is solved for a sample of $N_{\mathrm{tot}}$ independent realizations (impurity atoms), where in each integration step at time $t$ the position $\mathbf{r}_0(t)$ of the impurity atom $j$ is randomly drawn from its thermal distribution in the trap. This yields an impurity density $n_{\mathrm{i}}(\mathbf{r})=\delta(\mathbf{r} - \mathbf{r}_0)$ and therefore a density overlap with the BEC of $\left<n\right> = n_{\mathrm{BEC}}(\mathbf{r}_0)$ for each time step $t$, so the spin-exchange rate reads \begin{equation} {\Gamma_{\mathrm{se}}}(t) = G \, n_{\mathrm{BEC}}(\mathbf{r}_{0}(t)). \label{eq:localGamma} \end{equation} Independently solving the rate equation for each of the $N_{\mathrm{tot}}$ impurities yields the $m_{F, \mathrm{i}}$ populations $N^{(j)}_{m_{F, \mathrm{i}}}(t)$, from which the population in the ensemble is calculated as \begin{equation} N_{m_{F, \mathrm{i}}}(t) = \sum_{j=1}^{N_{\mathrm{tot}}} N^{(j)}_{m_{F, \mathrm{i}}}(t) \label{eq:spinEvolutionTotal} \end{equation} in each time step. When solving the rate model, the initial $m_{F, \mathrm{i}}$ populations are taken from the respective measurement. The Monte-Carlo model is used for all analyses of the spin evolution in the text body (Fig.~2, Fig.~3) as well as in this appendix (Fig.~S5). \subsection{Measuring the spin-exchange constant $G$} \label{sec:rateConstant} \begin{figure} \centering \includegraphics[width=75mm]{./imgs/S3} \caption{Extracting the spin-exchange coefficient $G$. (a) Measurement of the spin evolution of Cs atoms prepared in a thermal Rb bath (15(1)\,$\times 10^3$ atoms, temperature $T=\SI{1.25(10)}{\micro \kelvin}$) at $B=\SI{750}{\milli G}$. The population $N_{\mathrm{meas}}$ gives the total number of detected Cs atoms for one specific experimental setting ($t_i,\, m_F\,\mathrm{state}$). The data were taken in 4037 independent experimental runs with a duration of about $\SI{10}{\second}$ each.
(b) The total population decays due to three-body recombination during the interaction time $t_i$. Here, total population (\enquote{counts}) refers to the sum of all $m_{F, \mathrm{i}}$ populations for the respective interaction duration $t_i$. From the population decay, the loss rate $\Lambda_{\mathrm{3body}} = \nicefrac{1}{\SI{265}{\milli \second}}$ for the spin-exchange model (eq. \ref{eq:rateModel}) is determined. (c) Modeled $m_{F, \mathrm{i}}$ evolution from the Monte-Carlo simulation using the initial $m_{F, \mathrm{i}}$ population from the measurement (a) as the starting distribution. Here, the spin-exchange constant $G=\SI{1.71e-11}{\hertz \, \centi \meter^3}$ is used, which fits the measurement best (see (d)). (d) A $\chi^2$ optimization is used to extract the rate constant $G$. } \label{fig:supp3} \end{figure} We apply the Monte-Carlo model to determine the spin-exchange constant $G$ in our atomic mixture. To this end, a dilute cloud of Rb atoms is prepared in the $m_{F, \mathrm{b}}=-1$ state, so spin-exchange can be observed in a classical bath with a well-known density distribution. We combine Cs and Rb as described in \cite{Hohmann2017} and allow Cs to fully thermalize within the dilute Rb cloud. We measure the $m_{F, \mathrm{i}}$ dynamics for $\SI{300}{\milli \second}$ at a magnetic background field of $B = \SI{750}{\milli G}$ (see fig.~\ref{fig:supp3}) and observe spin-exchange with the bath, as well as atom loss, presumably due to three-body recombination. We compare models for different values of $G$. For each data point (pixel) in the measurement, a chi-squared value $\chi^2_{j}=\frac{(N_{j, \mathrm{exp}} - N_{j, \mathrm{mod}})^2}{\sigma_j^2}$ is calculated from the measured population $N_{j, \mathrm{exp}}$, the modeled population $N_{j, \mathrm{mod}}$ and the expected uncertainty $\sigma_j$. We extract a spin-exchange constant $G = \SI{1.71e-11}{\hertz \, \centi \meter^3}$ by minimizing the total $\chi^2 = \sum_j \chi^2_j$.
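The $\chi^2$ extraction can be illustrated with synthetic data: generate a decay curve with a known rate constant and recover it by scanning $G$ over a grid. All numbers below are illustrative, not the measured ones.

```python
import math

# Synthetic m_F = +3 decay data generated with a known rate constant.
n_bath = 1.0e13          # cm^-3, assumed density overlap
G_true = 1.7e-11         # Hz cm^3, assumed "true" value
times = [0.01 * k for k in range(11)]   # interaction times, s
sigma = 0.01             # assumed measurement uncertainty
data = [math.exp(-G_true * n_bath * t) for t in times]

def chi2(G):
    """Total chi-squared between data and the model for a trial G."""
    return sum((d - math.exp(-G * n_bath * t)) ** 2 / sigma**2
               for d, t in zip(data, times))

# Scan a grid of trial values and pick the minimum, as in the text.
grid = [g * 1e-12 for g in range(10, 26)]   # 1.0e-11 ... 2.5e-11 Hz cm^3
G_best = min(grid, key=chi2)
print(f"G_best = {G_best:.2e} Hz cm^3")
```

For noise-free synthetic data the scan recovers the grid point closest to the input value, confirming the procedure.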
The minimization yields a statistical uncertainty of $\Delta G_{\mathrm{stat}} = \SI{0.01e-11}{\hertz \, \centi \meter^3}$. We compare our result to the theoretically estimated spin-exchange constant of $\SI{1.57E-11}{\hertz \, \centi \meter^{3}}$ with an uncertainty of $\Delta G = \SI{0.61e-11}{\hertz \, \centi \meter^3}$ originating from the $m_{F, \mathrm{i}}$ dependency (see sec. \ref{sec:interactionPotential}) and find good agreement. Discrepancies between the measurement and our model occur mainly for short interaction durations, which we attribute to our assumption of $m_{F, \mathrm{i}}$-independent spin-exchange. In fact, the theoretical value of $\SI{1.57(61)E-11}{\hertz \, \centi \meter^{3}}$ carries a larger uncertainty $\Delta G$, due to the state dependency, than the statistical uncertainty $\Delta G_{\mathrm{stat}}$ of our $\chi^2$ fit. Therefore, when referring to $G$, we use the uncertainty of the theoretical value. In our mixture with Rb in $m_{F, \mathrm{b}}=-1$, we do not expect a strong dependency of the spin-exchange constant on the magnetic field \cite{Tiemann2018}, e.g. due to Feshbach resonances, so we use the extracted $G$ constant for modeling the spin evolution at $B=\SI{250}{\milli G}$. \subsection{Details on spin-echo measurement} \begin{figure} \centering \includegraphics[width=75mm]{./imgs/S4} \caption{Measurement of the spin-echo contrast. (a) The total number of Cs atoms for each detected interaction duration at the highest density (BEC) is shown. (b) Spin-echo fringe for a constant interaction time (here $T=\SI{100}{\micro \second}$) and varied pulse phase $\varphi$ of the last $\nicefrac{\pi}{2}$-pulse. The relative population $P_{\ket{g}}$ of the ground state $\ket{g}$ is determined by removing atoms in $\ket{e}$ from the trap with a resonant light pulse.
For the measurements in (c) and (d), the visibility $\nu$ is extracted from the populations $P_{\ket{g}}$ at phases $\varphi=\SI{0}{\degree}, \SI{180}{\degree}$ (see vertical lines) as $\nu = {|P_{\varphi=\SI{180}{\degree}}-P_{\varphi=\SI{0}{\degree}}|} / {(P_{\varphi=\SI{180}{\degree}}+P_{\varphi=\SI{0}{\degree}})}$. (c) For impurities prepared in a BEC (red circles), we measure a coherence time of $T_2=\SI{1.17\pm0.06}{\milli \second}$. When Cs is prepared in the BEC but the BEC is removed before the coherence measurement (blue triangles), we obtain $T_2=\SI{0.98\pm 0.03}{\milli \second}$. (d) Cs impurities are prepared in a thermal Rb bath (spin mixture) slightly above the critical temperature. We measure the same coherence time when Rb is present (red squares, $T_2=\SI{1.07\pm 0.08}{\milli \second}$) and when Rb is removed before the pulse-echo sequence (blue triangles, $T_2=\SI{1.07\pm 0.10}{\milli \second}$). The initial visibility is limited by imperfections in the initial $m_{F, \mathrm{i}}$ preparation and by spin-exchange before the start of the spin-echo measurement (in the thermal case), both yielding a measurement background. Error bars give statistical uncertainties in the atom number determination. } \label{fig:S4} \end{figure} We study the coherence properties of Cs impurities immersed in the BEC in a spin-echo sequence. The coherence of individual Cs atoms in a similar system, but without a bath, has been studied in the work of Kuhr \textit{et al.} \cite{Kuhr2005}.\\ Experimentally, Cs atoms are immersed in the BEC $\SI{6}{\milli \second}$ before the spin-echo sequence to ensure thermalization of the Cs atoms within the BEC. We create a quantum superposition $\ket{\psi}_{\mathrm{i}} = \nicefrac{1}{\sqrt{2}}(\ket{g}+i\ket{e})$ with $\ket{g} = \ket{F_{\mathrm{i}}=3, m_{F, \mathrm{i}}=3}$ and $\ket{e} = \ket{F_{\mathrm{i}}=4, m_{F, \mathrm{i}}^{\prime}=3}$.
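The fringe-visibility definition used in fig.~\ref{fig:S4} amounts to a one-line function; a minimal sketch with illustrative population values:

```python
def visibility(P_0, P_180):
    """Spin-echo fringe visibility from the ground-state populations
    measured at final-pulse phases 0 and 180 degrees (definition as in
    the figure caption above)."""
    return abs(P_180 - P_0) / (P_180 + P_0)

# Illustrative values: a fully modulated fringe and a partly reduced one.
print(visibility(0.0, 1.0))    # -> 1.0
print(visibility(0.25, 0.75))  # -> 0.5
```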
These two states were chosen because of their strong resonant Rabi coupling of $\Omega_0 = 2 \pi \times\SI{40}{\kilo \hertz}$ in our setup. During the measurement, we apply a magnetic background field of $\SI{250}{\milli G}$ along the $z$ axis. The coherence time is extracted by measuring the ground-state population after the last $\pi / 2$ pulse while varying the phase of that pulse, and analyzing the resulting decay of the fringe visibility (for details, see fig.~\ref{fig:S4}). First, we probe the coherence of individual impurities in a purely thermal Rb gas at a temperature of approximately $300\,$nK. When Rb is removed from the trap just before the spin-echo sequence, i.e. in the absence of impurity-bath interactions, we measure a coherence time of $T_2=\SI{1.07\pm 0.10}{\milli \second}$. By contrast, when Rb is present, we obtain $T_2=\SI{1.07\pm 0.08}{\milli \second}$. We compare the coherence time in a thermal bath to a situation where the Cs impurities are transported into a Rb BEC of comparable temperature, but much higher density. The BEC is prepared in the $m_{F, \mathrm{b}}=1$ state, preventing an influence of spin-exchange collisions on the coherence. Here, we extract a coherence time of $\SI{1.17\pm0.06}{\milli \second}$ when Rb is present during the pulse-echo sequence. This means that coherence is maintained despite elastic impurity-bath collisions at a rate of $\nicefrac{1}{\SI{140}{\micro \second}}$ at the highest density (BEC) during the spin-echo sequence. Finally, when the BEC is removed from the trap before the pulse-echo sequence, the coherence time slightly reduces to $T_2=\SI{0.98\pm0.03}{\milli \second}$, which we attribute to a heating effect from the push-out process: During the push-out, Rb is accelerated by a resonant light beam in radial direction.
After a push-out duration of $\SI{20}{\micro \second}$, we expect Cs and Rb to be fully separated ($\SI{3}{\micro \meter}$ distance), with Rb having acquired a kinetic energy of $k_B \times \SI{11}{\micro \kelvin}$. The acceleration of Rb enhances the collisional cross section, and at a density overlap of $\left<n\right> = \SI{2.7e13}{\centi \meter^{-3}}$, one in three Cs atoms undergoes a collision with Rb, leading to heating and thereby to dephasing. \\ In the following, we discuss different decoherence mechanisms; the results are further discussed in the body of this work. In our experiment, we do not expect longitudinal decay ($T_1$) on relevant time scales, since the transition $F_{\mathrm{i}}=4 \rightarrow F_{\mathrm{i}}=3$ is dipole-forbidden. However, longitudinal decoherence might be mimicked by two-body (hyperfine-changing) relaxation of the $F_{\mathrm{i}}=4$ state into the ground state $F_{\mathrm{i}}=3$, leading to atom loss and reduced contrast. For the calculated Cs-Rb overlap during the pulse-echo sequence of $\left<n\right> = \SI{2.7e13}{\centi \meter^{-3}}$, two-body loss is expected to yield a lifetime of the $F_{\mathrm{i}}=4$ state of $\tau_{\mathrm{2body}} > \SI{10(5)}{\milli \second}$, in agreement with our observation (see fig. \ref{fig:S4}~(b)), which is long compared to the extracted coherence time in the measurement. \\ The transverse coherence time $T_2$ of the atomic ensemble is limited by inhomogeneous, but quasi-constant dephasing ($T_2^{*}$) and by fluctuations of the dephasing ($T_2^{\prime}$) as $\nicefrac{1}{T_2} = \nicefrac{1}{T_2^{\prime}}+\nicefrac{1}{T_2^{*}}$ (see e.g. \cite{Kuhr2005}). In our measurement, we use a spin-echo technique, which cancels the quasi-constant dephasing, in order to find the fundamental limitation of the coherence. Therefore, the measured $T_2$ directly yields $T_2^{\prime}$.
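The sensitivity of the clock-transition detuning to magnetic-field noise, which enters the following estimate, can be sketched from the Cs ground-state Land\'e factors ($g_F = \mp 1/4$ for $F=3,4$):

```python
# Differential Zeeman shift of the Cs |F=3, m_F=3> <-> |F=4, m_F'=3>
# transition, linking a detuning fluctuation to magnetic-field noise.
mu_B_over_h = 1.399624e6   # Bohr magneton / h, Hz per gauss
gF3, gF4 = -0.25, 0.25     # Cs ground-state Lande g-factors
mF = 3

# Transition-frequency sensitivity d(nu)/dB for the pi transition m_F = 3
sensitivity = (gF4 - gF3) * mF * mu_B_over_h   # Hz/G

delta_detuning = 1e3                           # Hz, value from the text
dB = delta_detuning / sensitivity              # required field noise, G
print(f"dB = {dB * 1e3:.2f} mG")
```

This reproduces the $\Delta B \approx \SI{0.48}{\milli G}$ quoted for a $\SI{1}{\kilo\hertz}$ detuning fluctuation.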
$T_2^{\prime}$ is estimated from the temporal fluctuation $\Delta \delta = \sqrt{2} / T_2^{\prime}$ of the detuning $\delta(t) = \omega_0(t) - \omega_L(t)$ (with atomic transition frequency $\omega_0$ and microwave driving frequency $\omega_L$). In our system, we expect magnetic field fluctuations to be the main contributor to $\Delta \delta$. For our combination of Zeeman states $\ket{g}$ and $\ket{e}$, a fluctuation of $\Delta \delta = \SI{1}{\kilo \hertz}$ is induced by magnetic field fluctuations of $\Delta B = \SI{0.48}{\milli G}$, which equals roughly $\SI{0.1}{\percent}$ of the Earth's magnetic field in our laboratory. Additional dephasing sources are fluctuations connected to the finite temperature of the atoms, heating from the dipole traps \cite{Kuhr2005}, as well as atomic collisions.\\ \section*{Acknowledgements} We thank Michael Hohmann and Farina Kindermann for their help in initially constructing the experiment and for initial discussions, Steve Haupt for his support in preparing the measurements, and Eberhard Tiemann and Axel Pelster for helpful discussions. A.W. thanks Dieter Meschede for support in initiating the project. This work was funded in the early stage by the European Union via the ERC Starting Grant \enquote{QuantumProbe}; equipment was partially contributed by Deutsche Forschungsgemeinschaft via Sonderforschungsbereich (SFB) SFB/TRR185. D.M. acknowledges funding via SFB/TRR49, T.L. acknowledges funding by the Carl Zeiss Stiftung, and F.S. acknowledges funding by the Studienstiftung des deutschen Volkes.
\section{Introduction} \label{sec:intro} The solution of the many-body Schr\"odinger equation describing a system of interacting baryons is challenging because of the nonperturbative nature and the strong spin/isospin-dependence of realistic nuclear interactions. Quantum Monte Carlo (QMC) methods provide a powerful tool to tackle the nuclear many-body problem in a nonperturbative fashion. They have been proven to be remarkably successful in describing the properties of strongly correlated fermions in a large variety of physical conditions~\cite{Carlson:2015}. Historically, QMC methods have made use of phenomenological nuclear interactions, such as the Argonne $v_{18}$ (AV18) nucleon-nucleon $(N\!N)$ potential combined with Urbana/Illinois models for the three-nucleon $(3N)$ forces~\cite{Carlson:2015}. By construction, these potentials are nearly local, meaning that the dominant parts of the interaction depend only on the relative distance, spin, and isospin of the two interacting nucleons, and not upon their momenta. This feature has been one of the keys to success for the application of QMC algorithms to the study of nuclear systems. Green's function Monte Carlo (GFMC) and auxiliary field diffusion Monte Carlo (AFDMC) methods have employed these phenomenological potentials to accurately calculate properties of nuclei, neutron drops, and neutron-star matter~\cite{Carlson:2015,Gandolfi:2011,Gandolfi:2012,Maris:2013,Gandolfi:2014,Gandolfi:2014_epja,Buraczynski:2016,Buraczynski:2017}. Despite the large success of such models, phenomenological interactions are not free from drawbacks. They do not provide a systematic way to estimate theoretical uncertainties, and it is not clear how to improve their quality. 
In addition, some models of the $3N$ force yield an equation of state of neutron matter that is too soft~\cite{Sarsa:2003,Maris:2013}, with the consequence that the predicted neutron-star maximum mass is not compatible with the observation of heavy neutron stars~\cite{Demorest:2010,Antoniadis:2013}. An alternative approach to nuclear interactions that overcomes the limitations of the phenomenological models is provided by chiral effective field theory (EFT)~\cite{Epelbaum:2009,Machleidt:2011}. In chiral EFT, nuclear interactions are systematically derived in connection with the underlying theory of the strong interaction, by writing down the most general Lagrangian consistent with the symmetries of low-energy quantum chromodynamics (QCD) in terms of the relevant degrees of freedom at low energies: nucleons and pions. A power-counting scheme is then chosen to order the resulting contributions according to their importance. The result is a low-energy EFT in which nuclear forces are given as an expansion in the ratio of a soft scale (the pion mass or a typical momentum scale in the nucleus) to a hard scale (the chiral breakdown scale). The long-range part of the potential is given by pion-exchange contributions that are determined by the chiral symmetry of QCD and low-energy experimental data for the pion-nucleon system. The short-range terms are instead characterized by contact interactions described by so-called low-energy constants (LECs), which are fit to reproduce experimental data ($N\!N$ scattering data for the two-body part of the interaction, and few- and/or many-body observables for the many-body components). Among the advantages of such an expansion, compared to traditional approaches, are the ability to systematically improve the quality of the interaction order by order, the possibility to estimate theoretical uncertainties, the fact that many-body forces arise naturally, and that electroweak currents can be derived consistently.
In the last decade, intense efforts have been devoted to the development of chiral EFT interactions, as shown by the availability of different potentials in the literature~\cite{Epelbaum:2009,Machleidt:2011,Epelbaum:2015,Ekstrom:2015,Carlsson:2016,Entem:2017,Ekstrom:2017,Reinert:2017}, typically written in momentum space. It is only in recent years that chiral EFT interactions have been formulated equivalently in coordinate space. New potentials are now available, including next-to-next-to-leading-order (N$^2$LO) local interactions~\cite{Gezerlis:2013,Gezerlis:2014}, supplemented by consistent $3N$ potentials~\cite{Tews:2016,Lynn:2016}, as well as chiral interactions with explicit delta degrees of freedom~\cite{Piarulli:2015,Piarulli:2016,Piarulli:2018}. Local chiral interactions up to N$^2$LO can be written using the same operator structure as the phenomenological potentials, providing for the first time the opportunity to combine EFT-derived interactions and accurate QMC methods. The GFMC method has been used to study the ground state of light nuclei employing local chiral interactions~\cite{Gezerlis:2013,Gezerlis:2014,Lynn:2014,Tews:2016,Lynn:2016,Piarulli:2016,Lynn:2017,Chen:2017,Piarulli:2018}. The same potentials have been used in AFDMC calculations of pure neutron systems, ranging from few-body systems~\cite{Klos:2016,Zhao:2016,Gandolfi:2017} to pure neutron matter~\cite{Gezerlis:2013,Gezerlis:2014,Tews:2016}. More recently, the first AFDMC study of $p$-shell nuclei employing local chiral interactions has been reported~\cite{Lonardoni:2017afdmc}. In this work we provide a comprehensive description of the AFDMC algorithm for the study of ground-state properties of light and medium-mass nuclei employing local chiral interactions at N$^2$LO, extending the findings of Ref.~\cite{Lonardoni:2017afdmc}. The structure of this paper is as follows. In~\cref{sec:ham} we introduce the nuclear Hamiltonian employed in this work. 
In~\cref{sec:vmc,sec:afdmc} we review the main features of the employed QMC methods. \Cref{sec:wf} is devoted to the description of the employed trial wave functions. In~\cref{sec:res} we present our results for nuclei with $3\le A\le 16$. Finally, we give a summary in~\cref{sec:summ}. \section{Hamiltonian} \label{sec:ham} Nuclei are described as a collection of point-like particles of mass $m_N$ interacting via two- and three-body potentials according to the nonrelativistic Hamiltonian \begin{align} H=-\frac{\hbar^2}{2m_N}\sum_i \nabla_i^2+\sum_{i<j}v_{ij}+\sum_{i<j<k}V_{ijk} , \end{align} where the two-body interaction $v_{ij}$ also includes the Coulomb force. In QMC calculations, it is convenient to express the interactions in terms of radial functions multiplying spin and isospin operators. The commonly employed Argonne $v_8'$ (AV8$'$) potential~\cite{Wiringa:2002}, as well as the two-body part of the recently developed local chiral interactions~\cite{Gezerlis:2013}, can be expressed as: \begin{align} v_{ij} = \sum_{p=1}^8 v_p(r_{ij}) \mathcal O_{ij}^{p}, \label{eq:v_ij} \end{align} with \begin{align} \mathcal O_{ij}^{p=1,8} = \big[\mathbbm 1,\bm\sigma_i\cdot\bm\sigma_j,S_{ij},\vb{L}\cdot\vb{S}\big] \otimes\big[\mathbbm 1,\bm\tau_i\cdot\bm\tau_j\big], \end{align} where \begin{align} S_{ij}=3\,\bm\sigma_i\cdot\hat{\vb{r}}_{ij}\,\bm\sigma_j\cdot\hat{\vb{r}}_{ij}-\bm\sigma_i\cdot\bm\sigma_j, \end{align} is the tensor operator, and \begin{align} \vb{L}&=\frac{1}{2i}(\vb{r}_i-\vb{r}_j)\times(\bm\nabla_i-\bm\nabla_j), \label{eq:L} \\ \vb{S}&=\frac{1}{2}(\bm\sigma_i+\bm\sigma_j), \label{eq:S} \end{align} are the relative angular momentum and the total spin of the pair $ij$, respectively. The radial functions of~\cref{eq:v_ij} are fitted to $N\!N$ scattering data. 
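To make the operator structure above concrete, the following short numpy sketch (illustrative only, not part of any production QMC code) builds $\bm\sigma_i\cdot\bm\sigma_j$ and the tensor operator $S_{ij}$ for a single pair, with $\hat{\vb{r}}_{ij}$ taken along the $z$ axis, and verifies two standard properties: the spin singlet is an eigenstate of $\bm\sigma_i\cdot\bm\sigma_j$ with eigenvalue $-3$, and $S_{ij}$ annihilates it.

```python
import numpy as np

# Pauli matrices; the pair spin space is the 4-dimensional tensor product
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# sigma_i . sigma_j as a Kronecker product over the two spins
sigdot = sum(np.kron(s, s) for s in (sx, sy, sz))

# Tensor operator S_ij for rhat_ij along the z axis:
# S_ij = 3 (sigma_i . rhat)(sigma_j . rhat) - sigma_i . sigma_j
S12 = 3 * np.kron(sz, sz) - sigdot

# Spin singlet of the pair
up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

print(np.allclose(sigdot @ singlet, -3 * singlet))  # True: singlet eigenvalue -3
print(np.allclose(S12 @ singlet, 0))                # True: S_ij annihilates the singlet
print(abs(np.trace(S12)) < 1e-12)                   # True: the tensor operator is traceless
```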
At N$^2$LO, the operator structure of the local chiral interactions is the same as above, except that the $\vb{L}\cdot\vb{S}\,\bm\tau_i\cdot\bm\tau_j$ term is absent (see Ref.~\cite{Gezerlis:2014} for more details). The three-body force $V_{ijk}$ is written as a sum of contributions coming from two-pion exchange (TPE), plus shorter-range terms. In the case of local chiral interactions at N$^2$LO, $P$- and $S$-wave TPE contributions are included, and they are characterized by the same LECs involved in the two-body sector. The shorter-range part of the $3N$ force is instead parametrized by two contact terms, the LECs of which have been fit to the alpha particle binding energy and to the spin-orbit splitting in the neutron-$\alpha$ $P$-wave phase shifts (see Refs.~\cite{Lynn:2016,Lynn:2017} for more details). The chiral $3N$ interaction at N$^2$LO can be conveniently written as \begin{align} V=V_a^{2\pi,P}+V_c^{2\pi,P}+V^{2\pi,S}+V_D+V_E, \label{eq:v_ijk} \end{align} where the first three terms correspond to the TPE diagrams in $P$ and $S$ waves (Eqs. (A.1b), (A.1c) and (A.1a) of Ref.~\cite{Lynn:2017}, respectively). The subscripts $a$ and $c$ refer to the operator structure of such contributions, which can be written in terms of anticommutators or commutators, respectively. $V_D$ and $V_E$ involve contact terms. In this work we employ the form (A.2b) of Ref.~\cite{Lynn:2017} for $V_D$, and we consider two choices for $V_E$, namely $E\tau$ and $E\mathbbm1$ (Eqs.~(A.3a) and (A.3b) of Ref.~\cite{Lynn:2017}). 
By defining the following quantities: \begin{align} \delta_{R_0}(r)&=\frac{n}{4\pi R_0^3\Gamma(3/n)}\,e^{-(r/R_0)^n} \nonumber, \\ T(r)&=\left(1+\frac{3}{m_\pi r}+\frac{3}{m_\pi^2r^2}\right)\frac{e^{-m_\pi r}}{m_\pi r} T_c(r) \nonumber, \\ Y(r)&=\frac{e^{-m_\pi r}}{m_\pi r}\,Y_c(r) \nonumber, \\ Z(r)&=\frac{m_\pi r}{3}\Big(Y(r)-T(r)\Big) \nonumber, \\ Y_c(r)&=1-e^{-(r/R_0)^n} \nonumber, \\ T_c(r)&=\left(1-e^{-(r/R_0)^n}\right)^{n_t} \nonumber, \\ X_{i\alpha j\beta}&=(3\,\delta_{\alpha\gamma}\,\hat{r}_{ij}^\gamma\;\delta_{\beta\mu}\,\hat{r}_{ij}^\mu-\delta_{\alpha\beta})\,T(r_{ij}) +\delta_{\alpha\beta}\,Y(r_{ij}) \nonumber, \\ {\cal X}_{i\alpha j\beta}&=X_{i\alpha j\beta}(\vb{r}_{ij})-\delta_{\alpha\beta}\frac{4\pi}{m_\pi^3}\delta_{R_0}(r_{ij}) \nonumber, \\ {\cal Z}_{ij\alpha}&=Z(r_{ij})\,\delta_{\alpha\gamma}\,\hat{r}_{ij}^\gamma, \end{align} we can recast the contributions of~\cref{eq:v_ijk} in a form that is suitable for QMC calculations: \begin{widetext} \begin{align} V_a^{2\pi,P}&=A_a^{2\pi,P}\sum_{i<j<k}\sum_{\rm cyc}\Big\{\bm{\tau}_i\cdot\bm{\tau}_k,\bm{\tau}_j\cdot\bm{\tau}_k\Big\} \Big\{\sigma_i^\alpha\sigma_k^\gamma,\sigma_k^\mu\sigma_j^\beta\Big\} {\cal X}_{i\alpha k\gamma}\,{\cal X}_{k\mu j\beta} \nonumber \\ &=4\,A_a^{2\pi,P}\sum_{i<j}\bm{\tau}_i\cdot\bm{\tau}_j\,\sigma_i^\alpha\sigma_j^\beta \sum_{k\neq i,j}{\cal X}_{i\alpha k\gamma}\,{\cal X}_{k\mu j\beta} \nonumber \\ &=4\,A_a^{2\pi,P}\sum_{i<j}\bm{\tau}_i\cdot\bm{\tau}_j\,\sigma_i^\alpha\sigma_j^\beta \sum_{k\neq i,j}\left(X_{i\alpha k\gamma}-\delta_{\alpha\gamma}\frac{4\pi}{m_\pi^3}\delta_{R_0}(r_{ik})\right) \left(X_{k\mu j\beta}-\delta_{\mu\beta}\frac{4\pi}{m_\pi^3}\delta_{R_0}(r_{kj})\right) \nonumber \\ &=V_a^{XX}+V_a^{X\delta}+V_a^{\delta\delta} , \label{eq:anti} \\ V_c^{2\pi,P}&=A_c^{2\pi,P}\sum_{i<j<k}\sum_{\rm cyc}\Big[\bm{\tau}_i\cdot\bm{\tau}_k,\bm{\tau}_j\cdot\bm{\tau}_k\Big] \Big[\sigma_i^\alpha\sigma_k^\gamma,\sigma_k^\mu\sigma_j^\beta\Big] {\cal X}_{i\alpha k\gamma}\,{\cal X}_{k\mu 
j\beta} \nonumber \\ &=A_c^{2\pi,P}\sum_{i<j<k}\sum_{\rm cyc}\Big[\bm{\tau}_i\cdot\bm{\tau}_k,\bm{\tau}_j\cdot\bm{\tau}_k\Big] \Big[\sigma_i^\alpha\sigma_k^\gamma,\sigma_k^\mu\sigma_j^\beta\Big] \left(X_{i\alpha k\gamma}-\delta_{\alpha\gamma}\frac{4\pi}{m_\pi^3}\delta_{R_0}(r_{ik})\right) \left(X_{k\mu j\beta}-\delta_{\mu\beta}\frac{4\pi}{m_\pi^3}\delta_{R_0}(r_{kj})\right) \nonumber \\ &=V_c^{XX}+V_c^{X\delta}+V_c^{\delta\delta} , \label{eq:comm} \\ V^{2\pi,S}&=A^{2\pi,S}\sum_{i<j<k}\sum_{\rm cyc} \bm{\tau}_i\cdot\bm{\tau}_j\,\sigma_i^\alpha\sigma_j^\beta\,{\cal Z}_{ik\alpha}\,{\cal Z}_{jk\alpha} \nonumber \\ &=A^{2\pi,S}\sum_{i<j}\bm{\tau}_i\cdot\bm{\tau}_j\,\sigma_i^\alpha\sigma_j^\beta \sum_{k\neq i,j}{\cal Z}_{ik\alpha}\,{\cal Z}_{jk\alpha} , \label{eq:tm} \\ V_D&=A_D\sum_{i<j}\bm{\tau}_i\cdot\bm{\tau}_j\,\sigma_i^\alpha\sigma_j^\beta \sum_{k\neq i,j}{\cal X}_{i\alpha j\beta}\Big[\delta_{R_0}(r_{ik})+\delta_{R_0}(r_{jk})\Big] \nonumber \\ &=A_D\sum_{i<j}\bm{\tau}_i\cdot\bm{\tau}_j\,\sigma_i^\alpha\sigma_j^\beta \sum_{k\neq i,j}\left(X_{i\alpha j\beta}-\delta_{\alpha\beta}\frac{4\pi}{m_\pi^3}\delta_{R_0}(r_{ij})\right) \Big[\delta_{R_0}(r_{ik})+\delta_{R_0}(r_{jk})\Big] \nonumber \\ &=V_D^{X\delta}+V_D^{\delta\delta} , \label{eq:vd} \\ V_E&=A_E\sum_{i<j}\bm{\tau}_i\cdot\bm{\tau}_j\sum_{k\neq i,j}\delta_{R_0}(r_{ik})\delta_{R_0}(r_{jk}) , \label{eq:ve} \end{align} \end{widetext} where the sum over the coordinate projections (Greek indices) is implicit. \Cref{eq:ve} is the expression for the $E\tau$ parametrization of the contact term $V_E$. The $E\mathbbm1$ form is recovered by setting $\bm{\tau}_i\cdot\bm{\tau}_j=\mathbbm1$. 
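The regulator functions defined above can be checked numerically. In the sketch below the exponent $n=4$ and the cutoff $R_0=1.0\,\rm fm$ are illustrative choices; the key property is that the smeared delta function $\delta_{R_0}(r)$ integrates to one over all space, as required for it to regulate a contact interaction, while $Y_c(r)$ switches off the pion-exchange radial functions at short distance.

```python
import numpy as np
from math import gamma, pi

def delta_R0(r, R0=1.0, n=4):
    """Smeared delta function delta_{R0}(r); R0 = 1.0 fm, n = 4 illustrative."""
    return n / (4 * pi * R0**3 * gamma(3 / n)) * np.exp(-(r / R0)**n)

def Y_c(r, R0=1.0, n=4):
    """Short-distance regulator multiplying the Yukawa function Y(r)."""
    return 1 - np.exp(-(r / R0)**n)

# Midpoint-rule check that the smeared delta carries unit volume integral
dr = 1e-3
r = np.arange(dr / 2, 10.0, dr)
norm = np.sum(4 * pi * r**2 * delta_R0(r)) * dr
print(np.isclose(norm, 1.0))  # True

# The regulator vanishes at short distance and approaches 1 at long range
print(Y_c(1e-6) < 1e-12 and abs(Y_c(5.0) - 1.0) < 1e-12)  # True
```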
For the local chiral interactions at N$^2$LO, we have \begin{align} A_a^{2\pi,P}&=\frac{1}{2}\left(\frac{g_A}{f_\pi^2}\right)^2\left(\frac{1}{4\pi}\right)^2 \frac{m_\pi^6}{9}c_3 \nonumber, \\ A_c^{2\pi,P}&=-\frac{c_4}{2c_3}A_a^{2\pi,P} \nonumber, \\ A^{2\pi,S}&=\left(\frac{g_A}{2f_\pi}\right)^2\left(\frac{m_\pi}{4\pi}\right)^2\frac{4m_\pi^6}{f_\pi^2}c_1 \nonumber, \\ A_D&=\frac{m_\pi^3}{12\pi}\frac{g_A}{8f_\pi^2}\frac{1}{f_\pi^2\Lambda_\chi}c_D \nonumber, \\ A_E&=\frac{c_E}{f_\pi^4\Lambda_\chi}, \label{eq:lecs} \end{align} where $g_A=1.267$ is the axial-vector coupling constant, $f_\pi=92.4\,\rm MeV$ is the pion decay constant, $m_\pi=138.03\,\rm MeV$ is the averaged pion mass, $\Lambda_\chi=700\,\rm MeV$ is a heavy-meson scale, and $c_1,\,c_3,\,c_4,\,c_D,\,c_E$ are the LECs. Note that, using these definitions, the structure of the phenomenological Urbana IX (UIX) model is recovered by imposing $\delta_{R_0}(r)=0$, $n=2$, $n_t=2$, and $A_c^{2\pi,P}=\frac{1}{4}A_a^{2\pi,P}$ as well as $A_D=A_E=0$. Finally, we consider coordinate-space cutoffs $R_0=1.0\,\rm fm$ and $R_0=1.2\,\rm fm$, approximately corresponding to cutoffs in momentum space of $500\,\rm MeV$ and $400\,\rm MeV$~\cite{Lynn:2017}; see, however, also Ref.~\cite{Hoppe:2017}. \section{Review of the VMC method} \label{sec:vmc} In the variational Monte Carlo (VMC) method, given a trial wave function $\Psi_T$, the expectation value of the Hamiltonian $H$ is given by \begin{align} \!\!\!E_0\leq\langle H\rangle=\frac{\langle\Psi_T|H|\Psi_T\rangle}{\langle\Psi_T|\Psi_T\rangle} =\frac{\displaystyle\int\!dR\,\Psi_T^*(R)H\Psi_T(R)} {\displaystyle\int\!dR\,\Psi_T^*(R)\Psi_T(R)} , \label{eq:<h>} \end{align} where $R=\{\vb{r}_1,\dots,\vb{r}_A\}$ are the coordinates of the particles, and there is an implicit sum over all the particle spin and isospin states. 
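The variational principle above can be illustrated in a minimal setting. The Python sketch below (a hypothetical one-dimensional harmonic oscillator with a Gaussian trial function, entirely unrelated to the nuclear problem) estimates $\langle H\rangle$ by Metropolis sampling of $|\Psi_T|^2$ and averaging the local energy:

```python
import numpy as np

rng = np.random.default_rng(0)

def vmc_energy(alpha, nsteps=20000, step=1.0):
    """Metropolis estimate of <H> for psi_T = exp(-alpha x^2) and
    H = -(1/2) d^2/dx^2 + (1/2) x^2 (units hbar = m = omega = 1)."""
    x, e_sum = 0.0, 0.0
    for _ in range(nsteps):
        xp = x + step * rng.uniform(-1, 1)
        # Metropolis test on P(x) = |psi_T|^2 = exp(-2 alpha x^2)
        if rng.uniform() < np.exp(-2 * alpha * (xp**2 - x**2)):
            x = xp
        # local energy E_L(x) = -psi_T''/(2 psi_T) + x^2/2
        e_sum += alpha + x**2 * (0.5 - 2 * alpha**2)
    return e_sum / nsteps

# At alpha = 0.5 the trial function is the exact ground state: E_L is
# constant and the estimate equals E_0 = 0.5 with zero variance
print(abs(vmc_energy(0.5) - 0.5) < 1e-12)  # True
# Any other alpha gives a variational upper bound, <H> > E_0
print(vmc_energy(0.3) > 0.5)  # True
```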
$E_0$ is the energy of the true ground state with the same quantum numbers as $\Psi_T$, and the equality in~\cref{eq:<h>} holds only if the trial wave function is the exact ground-state wave function $\Psi_0$. In the VMC method, one typically minimizes the energy expectation value of~\cref{eq:<h>} with respect to changes in the variational parameters, in order to obtain $\Psi_T$ as close as possible to $\Psi_0$. The integral of~\cref{eq:<h>} can be rewritten as \begin{align} \langle H\rangle=\frac{\displaystyle\int dR\,P(R) \frac{H\Psi_T(R)}{\Psi_T(R)}}{\displaystyle\int dR\,P(R)} , \end{align} where $P(R)=|\Psi_T(R)|^2$ can be interpreted as a probability distribution of points $R$ in a $3A$-dimensional space. The above multidimensional integral can be solved using Monte Carlo sampling. In practice, a number $N_c$ of configurations $R_i$ are sampled using the Metropolis algorithm~\cite{Metropolis:1953}, and the energy of the system is estimated as the average of the local energy: \begin{align} \langle E\rangle=\frac{1}{N_c}\sum_{i=1}^{N_c} \frac{\langle R_i|H|\Psi_T\rangle}{\langle R_i|\Psi_T\rangle} , \end{align} where $\langle R|\Psi_T\rangle=\Psi_T(R)$. More details on the sampling procedure and on the calculation of statistical errors can be found, e.g., in Ref.~\cite{Ceperley:1995}. For spin/isospin-dependent interactions the generalization of~\cref{eq:<h>} is straightforward: \begin{align} \langle H\rangle=\frac{\displaystyle\int dR\,\sum_{S,S'}\Psi_T^*(R,S')H_{S,S'}\Psi_T(R,S)} {\displaystyle\int dR\,\sum_S|\Psi_T(R,S)|^2} , \end{align} where now the wave function also depends upon spin and isospin states $S=\{s_1,\dots,s_A\}$, and \begin{align} H_{S,S'}=\langle S'|S\rangle\left[-\frac{\hbar^2}{2m}\sum_i\nabla_i^2\right]+\langle RS'|V|RS\rangle . 
\end{align} In this case, the VMC method can be implemented by either explicitly summing over all the spin and isospin states \begin{align} &\langle H\rangle=\displaystyle\int dR\,E_L(R)P(R) ,\nonumber \\ &P(R)=\frac{\sum_S|\Psi_T(R,S)|^2}{\displaystyle\int dR\,\sum_S|\Psi_T(R,S)|^2} ,\nonumber \\ &E_L(R)=\frac{\sum_{S,S'}\Psi_T^*(R,S')H_{S,S'}\Psi_T(R,S)}{\sum_S|\Psi_T(R,S)|^2} , \end{align} or by sampling the spin and isospin states: \begin{align} &\langle H\rangle=\displaystyle\int dR\,\sum_S\,E_L(R,S)P(R,S) ,\nonumber \\ &P(R,S)=\frac{|\Psi_T(R,S)|^2}{\displaystyle\int dR\,|\Psi_T(R,S)|^2} ,\nonumber \\ &E_L(R,S)=\frac{\sum_{S'}\Psi_T^*(R,S')H_{S,S'}\Psi_T(R,S)}{|\Psi_T(R,S)|^2} . \end{align} The Metropolis algorithm can then be used to sample either $R$ from $P(R)$ in the former case, or $R$ and $S$ from $P(R,S)$ in the latter case. \section{Review of the AFDMC method} \label{sec:afdmc} Diffusion Monte Carlo (DMC) methods are used to project out the ground state with a particular set of quantum numbers. The starting point is a trial wave function $|\Psi_T\rangle$, typically the result of a VMC minimization, that is propagated in imaginary time $\tau$: \begin{align} |\Psi_0\rangle\propto \lim_{\tau\rightarrow\infty} e^{-(H-E_T)\tau}|\Psi_T\rangle , \end{align} where $E_T$ is a parameter that controls the normalization. For spin/isospin-independent interactions, the object to be propagated is given by the overlap between the wave function and a set of configurations in coordinate space $\langle R|\Psi_T\rangle=\Psi_T(R)$. 
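The filtering action of the imaginary-time propagation can be illustrated with a small matrix model (a hypothetical $6\times6$ ``Hamiltonian'' with known spectrum, not a nuclear system): repeated application of the short-time propagator projects a random trial state onto the lowest eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 6x6 "Hamiltonian" with known spectrum {0, 1, ..., 5}
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
evals = np.arange(6.0)
H = Q @ np.diag(evals) @ Q.T
e0 = evals[0]

# Short-time propagator exp(-(H - E_T) dtau), here built exactly by diagonalization
dtau, E_T = 0.1, e0
G = Q @ np.diag(np.exp(-(evals - E_T) * dtau)) @ Q.T

# Imaginary-time evolution filters a random trial state onto the ground state
psi = rng.normal(size=6)
for _ in range(500):
    psi = G @ psi
    psi /= np.linalg.norm(psi)

print(abs(psi @ H @ psi - e0) < 1e-10)  # True
```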
By using the completeness relation $\int dR |R\rangle\langle R|=\mathbbm1$, we can write the propagation in imaginary time as \begin{align} \langle R'|\Psi(\tau)\rangle=\displaystyle\int dR\,G(R',R,\tau)\,\langle R|\Psi_T(0)\rangle , \label{eq:imtimeprop} \end{align} where the propagator (or Green's function) $G$ is defined as the matrix element between the two points $R$ and $R'$ in the volume \begin{align} G(R',R,\tau)=\langle R'|e^{-(H-E_T)\tau}|R\rangle, \end{align} and $\langle R'|\Psi(\tau)\rangle$ approaches the true ground state at large imaginary time. In practice, it is not possible to directly compute the propagator $G(R',R,\tau)$. However, one can use the short-time propagator $G(R',R,\delta\tau)$: \begin{align} \langle R'|\Psi(\tau)\rangle= \int dR_n\,dR_{n-1}\ldots\,dR_1\,dR\,G(R',R_n,\delta\tau) \nonumber \\ \times G(R_n,R_{n-1},\delta\tau)\ldots\,G(R_1,R,\delta\tau) \langle R|\Psi_T(0)\rangle, \end{align} and then employ Monte Carlo techniques to sample the paths $R_i$ in the imaginary-time evolution. The method is accurate for small values of the time step $\delta\tau$, and the exact result can be determined by using different values of $\delta\tau$ and extrapolating to $\delta\tau\to0$. By using the Trotter formula~\cite{Trotter:1959} to order $\delta\tau^3$, the short-time propagator can be approximated with: \begin{align} G(R',R,\delta\tau)&\equiv\langle R'|e^{-(H-E_T)\delta\tau}|R\rangle \nonumber \\ &\approx \langle R'|e^{-(V-E_T)\frac{\delta\tau}{2}}e^{-T\delta\tau}e^{-(V-E_T)\frac{\delta\tau}{2}} |R\rangle , \end{align} where $T$ is the nonrelativistic kinetic energy, and $V$ is the employed potential. 
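The $O(\delta\tau^3)$ accuracy of the symmetric factorization can be checked numerically with two random non-commuting symmetric matrices standing in for $T$ and $V$ (a generic check, not specific to the nuclear Hamiltonian): halving the time step should reduce the single-step error by a factor of about eight.

```python
import numpy as np

rng = np.random.default_rng(2)

def expm_sym(A):
    """Matrix exponential of a real symmetric matrix via diagonalization."""
    w, U = np.linalg.eigh(A)
    return (U * np.exp(w)) @ U.T

# Random non-commuting symmetric matrices standing in for T and V
T = rng.normal(size=(4, 4)); T = (T + T.T) / 2
V = rng.normal(size=(4, 4)); V = (V + V.T) / 2

def trotter_error(dt):
    """Norm of the difference between the exact short-time propagator
    and its symmetric (Strang) factorization."""
    exact = expm_sym(-(T + V) * dt)
    split = expm_sym(-V * dt / 2) @ expm_sym(-T * dt) @ expm_sym(-V * dt / 2)
    return np.linalg.norm(exact - split)

# The single-step error is O(dt^3): halving dt reduces it by about 8x
ratio = trotter_error(0.02) / trotter_error(0.01)
print(7.5 < ratio < 8.5)  # True
```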
The propagator for the kinetic energy alone corresponds to the free-particle propagator: \begin{align} G_0(R',R)&=\langle R'|e^{-T\delta\tau}|R\rangle\nonumber \\ &=\left(\frac{m}{2\pi\hbar^2\delta\tau}\right)^{\frac{3A}{2}}e^{-\frac{m(R-R')^2}{2\hbar^2\delta\tau}} , \label{eq:g0} \end{align} which yields a Gaussian diffusion for the paths in coordinate space, with variance $\sigma^2 = 2 \frac{\hbar^2}{2m} \delta\tau$. The propagator for spin/isospin-independent potentials is simply given by: \begin{align} \langle R'|e^{-(V-E_T)\delta\tau}|R\rangle \approx \prod_{i<j}e^{-[V(r_{ij})-E_T]\delta\tau}\,\delta(R-R') , \label{eq:p_vr} \end{align} where each pair interaction can be simply evaluated as a function of the coordinates of the system, and the energy $E_T$ results in a normalization factor. Note that the addition of spin/isospin-independent three- and many-body interactions is straightforward. For spin/isospin-dependent interactions, the propagation of the potential becomes more complicated. In general, this is because quadratic operators like $\bm\sigma_i\cdot\bm\sigma_j$ generate amplitudes along the singlet and the triplet states of a pair. The propagator of~\cref{eq:p_vr} generalizes in this case to \begin{align} \langle R'|e^{-(V-E_T)\delta\tau}|R\rangle \rightarrow \langle R'S'|e^{-(V-E_T)\delta\tau}|RS\rangle \nonumber \\ \approx \langle S'|\prod_{i<j}e^{-(V(r_{ij})-E_T)\delta\tau}|S\rangle\,\delta(R-R') , \label{eq:p_vrs} \end{align} where now the matrix $\exp[-(V-E_T)\delta\tau]$ is not diagonal in the spin of each pair. One possible strategy to compute the propagator of~\cref{eq:p_vrs} is to include all the spin and isospin states in the trial wave function, as is done in GFMC calculations~\cite{Carlson:2015}. This, however, implies a number of wave-function components proportional to $2^A$, which currently limits GFMC calculations to $A=12$. 
The idea of the AFDMC method is to start from a trial wave function whose computational cost is polynomial with $A$, rather than exponential. Such a wave function can be written in the single-particle representation: \begin{align} \langle S|\Psi\rangle\propto\xi_{\alpha_1}(s_1)\,\xi_{\alpha_2}(s_2)\dots\,\xi_{\alpha_A}(s_A) , \label{eq:wf_s} \end{align} where $\xi_{\alpha_i}(s_i)$ are functions of the spinor $s_i$ with state $\alpha_i$. In the above expression, the radial orbitals are omitted for simplicity, and the antisymmetrization is trivial. A quadratic operator in the spin acting on the wave function above generates two different amplitudes: \begin{align} \langle S|\bm\sigma_1\cdot\bm\sigma_2|\Psi\rangle & =\langle S|2\,\mathcal P_{12}^\sigma-\mathbbm1|\Psi\rangle \nonumber \\ & =2\,\xi_{\alpha_1}(s_2)\,\xi_{\alpha_2}(s_1)\,\xi_{\alpha_3}(s_3)\dots\,\xi_{\alpha_A}(s_A) \nonumber \\ & \quad\,-\xi_{\alpha_1}(s_1)\,\xi_{\alpha_2}(s_2)\,\xi_{\alpha_3}(s_3)\dots\,\xi_{\alpha_A}(s_A) \nonumber \\ & =\langle S'|\Psi\rangle+\langle S''|\Psi\rangle . \end{align} In general, the action of all pairwise spin/isospin operators (or propagators) generates $2^A\binom{A}{Z}$ amplitudes (if charge conservation is imposed). Even though this number can be further reduced by assuming that the nucleus has good isospin~\cite{Carlson:2015}, the action of pairwise operators largely increases the number of components with respect to the initial wave function, thus losing the computational advantage of the polynomial scaling with $A$. However, linear spin/isospin operators do not break the single-particle representation. 
They simply imply rotations of the initial spinors, without generating new amplitudes, as for instance: \begin{align} \langle S|\sigma_1^\alpha|\Psi\rangle &= \sigma_1^\alpha\,\xi_{\alpha_1}(s_1)\,\xi_{\alpha_2}(s_2)\,\xi_{\alpha_3}(s_3)\dots\,\xi_{\alpha_A}(s_A) \nonumber \\ & =\xi_{\alpha_1}(s'_1)\,\xi_{\alpha_2}(s_2)\,\xi_{\alpha_3}(s_3)\dots\,\xi_{\alpha_A}(s_A) \nonumber \\ & =\langle S'|\Psi\rangle . \end{align} Quadratic operators can be linearized by using the Hubbard-Stratonovich transformation: \begin{align} e^{-\frac{1}{2}\lambda \mathcal O^2}=\frac{1}{\sqrt{2\pi}}\int dx\, e^{-\frac{x^2}{2}+\sqrt{-\lambda}x\mathcal O} , \label{eq:hs} \end{align} where the integration variables $x$ are usually called auxiliary fields, and the integral above can be computed with Monte Carlo techniques, i.e., by sampling points $x$ with probability distribution $P(x)\propto\exp(-x^2/2)$. By using the transformation of~\cref{eq:hs}, Hamiltonians involving up to quadratic operators in spin and isospin can be efficiently employed in the imaginary-time propagation of a trial wave function of the form of~\cref{eq:wf_s}, retaining the good polynomial scaling with $A$. 
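As a numerical sanity check of the transformation, one can take a single operator $\mathcal O=\sigma_x$ (chosen so that $\mathcal O^2=\mathbbm1$; the coupling value below is arbitrary, and either sign of $\lambda$ works) and average the linearized propagator over Gaussian-distributed auxiliary fields:

```python
import numpy as np

rng = np.random.default_rng(3)

# A quadratic operator with O^2 = 1: O = sigma_x
O = np.array([[0.0, 1.0], [1.0, 0.0]])
lam = 0.7                      # illustrative coupling
w, U = np.linalg.eigh(O)       # spectral decomposition of O

# Left-hand side of the transformation: exp(-lam O^2 / 2) = e^{-lam/2} * identity
lhs = np.exp(-lam / 2) * np.eye(2)

# Right-hand side: Gaussian average of the linearized propagator exp(sqrt(-lam) x O)
c = np.sqrt(complex(-lam))     # imaginary for lam > 0
x = rng.normal(size=200000)    # auxiliary fields sampled from exp(-x^2/2)
phases = np.exp(c * np.outer(x, w))       # exp(c x O) in the eigenbasis of O
rhs = (U * phases.mean(axis=0)) @ U.T

print(np.allclose(lhs, rhs, atol=1e-2))  # True within Monte Carlo error
```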
\subsection{Propagation of spin/isospin quadratic operators} \label{sec:p2} Let us consider the two-body interaction of~\cref{eq:v_ij} up to $p=6$: \begin{widetext} \begin{align} V_{NN}^6&=\sum_{i<j}\Big\{ \Big[v_1(r_{ij})+v_2(r_{ij})\,\bm\tau_i\cdot\bm\tau_j\Big]\mathbbm1 +\Big[v_3(r_{ij})+v_4(r_{ij})\,\bm\tau_i\cdot\bm\tau_j\Big]\bm\sigma_i\cdot\bm\sigma_j +\Big[v_5(r_{ij})+v_6(r_{ij})\,\bm\tau_i\cdot\bm\tau_j\Big]S_{ij}\Big\} , \nonumber \\ &=\sum_{i<j}v_1(r_{ij})+\sum_{i<j}\Big[v_2(r_{ij})\Big]\bm\tau_i\cdot\bm\tau_j +\sum_{i<j}\sum_{\alpha\beta}\Big[v_3(r_{ij})\,\delta_{\alpha\beta} +v_5(r_{ij})(3\,\hat{r}_{ij}^\alpha\,\hat{r}_{ij}^\beta-\delta_{\alpha\beta})\Big]\sigma_i^\alpha\sigma_j^\beta \nonumber \\ &\quad+\sum_{i<j}\sum_{\alpha\beta}\Big[v_4(r_{ij})\,\delta_{\alpha\beta} +v_6(r_{ij})(3\,\hat{r}_{ij}^\alpha\,\hat{r}_{ij}^\beta-\delta_{\alpha\beta})\Big]\bm\tau_i\cdot\bm\tau_j\,\sigma_i^\alpha\sigma_j^\beta , \nonumber \\ &=V_{SI}(R)+{1\over2}\sum_{i\ne j}A^{(\tau)}_{ij}\,\bm\tau_i\cdot\bm\tau_j +{1\over2}\sum_{i\ne j}\sum_{\alpha\beta}A^{(\sigma)}_{i\alpha j\beta}\,\sigma_i^{\alpha}\sigma_j^{\beta} +{1\over2}\sum_{i\ne j}\sum_{\alpha\beta}A^{(\sigma\tau)}_{i\alpha j\beta} \,\bm\tau_i\cdot\bm\tau_j\,\sigma_i^{\alpha}\sigma_j^{\beta} , \nonumber \\ &=V_{SI}(R)+V_{SD}(R), \label{eq:v_si+sd} \end{align} \end{widetext} where $V_{SI}(V_{SD})$ is the spin/isospin-independent(-dependent) part of the interaction, and $A^{(\tau)}_{ij}\;(A\times A)$, $A^{(\sigma)}_{i\alpha j\beta}\;(3A\times 3A)$, and $A^{(\sigma\tau)}_{i\alpha j\beta}\;(3A\times 3A)$ are real and symmetric matrices. 
As such, these matrices can be diagonalized: \begin{align} \sum_{j}A^{(\tau)}_{ij}\,\psi_{n,j}^{(\tau)}&=\lambda_n^{(\tau)}\,\psi_{n,i}^{(\tau)} , \nonumber \\ \sum_{j\beta}A^{(\sigma)}_{i\alpha j\beta}\,\psi_{n,j\beta}^{(\sigma)}&=\lambda_n^{(\sigma)}\,\psi_{n,i\alpha}^{(\sigma)} , \nonumber \\ \sum_{j\beta}A^{(\sigma\tau)}_{i\alpha j\beta}\,\psi_{n,j\beta}^{(\sigma\tau)}&=\lambda_n^{(\sigma\tau)}\,\psi_{n,i\alpha}^{(\sigma\tau)} , \end{align} and it is possible to define a new set of operators expressed in terms of their eigenvectors: \begin{align} \mathcal O_{n\alpha}^{(\tau)}&=\sum_{j}\tau_j^\alpha\,\psi_{n,j}^{(\tau)} , \nonumber \\ \mathcal O_{n}^{(\sigma)}&=\sum_{j\beta}\sigma_j^\beta\,\psi_{n,j\beta}^{(\sigma)} , \nonumber \\ \mathcal O_{n\alpha}^{(\sigma\tau)}&=\sum_{j\beta}\tau_j^\alpha\sigma_j^\beta\,\psi_{n,j\beta}^{(\sigma\tau)} , \end{align} such that the spin/isospin-dependent part of~\cref{eq:v_si+sd} can be recast as: \begin{align} V_{SD}(R)&= {1\over2}\sum_{\alpha=1}^3\sum_{n=1}^{A} \lambda_n^{(\tau)}\Big(\mathcal O_{n\alpha}^{(\tau)}\Big)^2 +{1\over2}\sum_{n=1}^{3A} \lambda_n^{(\sigma)}\Big(\mathcal O_n^{(\sigma)}\Big)^2 \nonumber \\ &\quad+{1\over2}\sum_{\alpha=1}^3\sum_{n=1}^{3A} \lambda_n^{(\sigma\tau)}\Big(\mathcal O_{n\alpha}^{(\sigma\tau)}\Big)^2 . \end{align} The potential written in this form contains only quadratic operators in spin/isospin. We can thus use the Hubbard-Stratonovich transformation of~\cref{eq:hs} to write the propagator of the $V_{NN}^6$ interaction acting on a configuration $|RS\rangle$ as: \begin{widetext} \begin{align} e^{-V_{NN}^6\delta\tau}|RS\rangle=e^{-V_{SI}(R)\delta\tau}\prod_{m=1}^{15A}\frac{1}{\sqrt{2\pi}} \displaystyle\int dx_m\,e^{-\frac{x_m^2}{2}}\,e^{\sqrt{-\lambda_m\delta\tau}\,x_m \mathcal O_m}|RS\rangle =|RS'\rangle , \end{align} \end{widetext} where 15 auxiliary fields are needed for each nucleon: 3 for the $\tau$ operators, 3 for $\sigma$, and 9 for $\sigma\tau$. 
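The recasting of $V_{SD}$ above is simply the spectral decomposition of a real symmetric coupling matrix, which can be verified numerically (matrix size and entries below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

dim = 12                       # e.g. 3A for A = 4 nucleons (illustrative)
A = rng.normal(size=(dim, dim)); A = (A + A.T) / 2   # real symmetric coupling matrix
lam, psi = np.linalg.eigh(A)                         # A psi_n = lam_n psi_n

# A vector of single-particle amplitudes standing in for the spin operators
s = rng.normal(size=dim)

# Pairwise quadratic form (1/2) s^T A s ...
v_direct = 0.5 * s @ A @ s
# ... equals the sum over squared "eigen-operators": (1/2) sum_n lam_n (psi_n . s)^2
v_eigen = 0.5 * np.sum(lam * (psi.T @ s)**2)

print(np.isclose(v_direct, v_eigen))  # True
```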
The propagation (rotation) of the spinors depends upon the sampling of the auxiliary fields $X=\{x_m\}$, as does the new spin/isospin configuration $S'\equiv S'(X)$. The full short-time propagator, which includes both kinetic and potential energies, can finally be expressed as: \begin{widetext} \begin{align} G(R',R,S'(X),S,\delta\tau)=\langle R'S'|\Big({m\over2\pi\hbar^2\delta\tau}\Big)^{3A\over2}e^{-{m(R-R')^2\over2\hbar^2\delta\tau}} e^{-(V_{SI}(R)-E_T)\delta\tau} \prod_{m=1}^{15A}{1\over\sqrt{2\pi}}\int dx_m\,e^{-{x_m^2\over2}} e^{\sqrt{-\lambda_m\delta\tau}\,x_m \mathcal O_m}|RS\rangle. \label{eq:prop} \end{align} \end{widetext} Note that the above expressions refer to the simple propagator $\exp[-T\delta\tau]\exp[-(V-E_T)\delta\tau]$. In practice, we sample the more accurate propagator $\exp[-(V-E_T)\delta\tau/2]\exp[-T\delta\tau]\exp[-(V-E_T)\delta\tau/2]$, which implies two sets of rotations in $\delta\tau/2$: the first depending on $R$, and the second on the diffused $R'$, for a total of 30 auxiliary fields per nucleon. Compared to the GFMC method, where only the coordinates are sampled and the spin and isospin states are explicitly included and summed, in AFDMC the spin and isospin degrees of freedom are also sampled, via the Hubbard-Stratonovich rotations. This largely reduces the computational cost of the imaginary-time propagation of a many-body wave function, allowing one to calculate nuclei up to \isotope[12]{C} more efficiently, and to go beyond $A=12$. \subsection{Propagation of spin-orbit operators} \label{sec:pls} The spin-orbit operator reads \begin{align} v_{LS}(r_{ij})=v_7(r_{ij})\,\vb{L}\cdot\vb{S} , \end{align} where $\vb{L}$ and $\vb{S}$ are defined in~\cref{eq:L,eq:S}, respectively. 
As shown in Ref.~\cite{Pieper:1998}, one way to evaluate the propagator for spin-orbit operators is to consider the expansion at first order in $\delta\tau$ \begin{align} e^{-v_7(r_{ij})\,\vb{L}\cdot\vb{S}\,\delta\tau}\approx\mathbbm1-v_7(r_{ij})\,\vb{L}\cdot\vb{S}\,\delta\tau , \end{align} acting on the free propagator $G_0$ of~\cref{eq:g0}. The resulting propagator is \begin{align} G_{LS}\approx e^{\sum_{i\ne j}\frac{1}{8i}\frac{2m}{\hbar^2}v_7(r_{ij})(\vb{r}_i-\vb{r}_j) \times(\bm\Delta\vb{r}_i-\bm\Delta\vb{r}_j)\cdot(\bm\sigma_i+\bm\sigma_j)} , \end{align} where $\bm\Delta\vb{r}_i=\vb{r}_i-\vb{r}_i'$ is the difference of the particle position before and after the action of the free propagator $G_0$. Note that the above propagator is only linear in the spin, i.e., it does not require any auxiliary field to be sampled. However, it can be shown that it induces spurious counter terms~\cite{Sarsa:2003}. These can be removed by using the modified propagator: \begin{align} G_{LS}\approx &\,e^{\sum_{i\neq j}\frac{1}{4i}\frac{m}{\hbar^2\delta\tau}v_7(r_{ij}) [\vb{r}_{ij}\times\bm\Delta\vb{r}_{ij}]\cdot\bm\sigma_i} \nonumber \\ &\times e^{-\frac{1}{2}\left[\sum_{i\neq j}\frac{1}{4i}\frac{m}{\hbar^2}v_7(r_{ij}) [\vb{r}_{ij}\times\bm\Delta\vb{r}_{ij}]\cdot\bm\sigma_i\right]^2} . \end{align} This alternative version of the spin-orbit propagator contains quadratic spin operators, and thus it requires additional Hubbard-Stratonovich fields to be sampled, but it is correct at order $\delta\tau$. \subsection{Propagation of three-body forces} \label{sec:p3} Several terms of the $3N$ interaction (\cref{eq:v_ijk}) can be directly included in the AFDMC propagator. These are $V_a^{2\pi,P}$, $V^{2\pi,S}$, $V_D$, and $V_E$ of~\cref{eq:anti,eq:tm,eq:vd,eq:ve}, which correspond to terms involving only quadratic spin and isospin operators. These have the same operator structure as the spin/isospin-dependent part of the two-body potential (\cref{eq:v_si+sd}). 
The dependence on the third particle $k$ enters only in the radial functions ${\cal X}_{i\alpha j\beta}$, ${\cal Z}_{ij\alpha}$, and $\delta_{R_0}(r)$, which can be absorbed in the definition of the matrices $A^{(\tau)}_{ij}$ and $A^{(\sigma\tau)}_{i\alpha j\beta}$. The structure of $V_c^{2\pi,P}$ contains instead cubic spin and isospin operators, and the Hubbard-Stratonovich transformation of~\cref{eq:hs} cannot be applied. It follows that these terms cannot be exactly included in the standard AFDMC propagation. It may be possible to invoke more complicated algorithms to sample them, but the short-time propagator would then need to be accurate at higher order in $\delta\tau$. However, their expectation value can always be calculated, and it can be used to derive an approximate three-body propagator for $V_c^{2\pi,P}$. Let us define an effective Hamiltonian $H'$ that can be exactly included in the AFDMC propagation: \begin{align} H'=H-V_c^{2\pi,P}+\alpha_1 V_a^{XX}+\alpha_2 V_D^{X\delta}+\alpha_3 V_E . \label{eq:h'} \end{align} The three constants $\alpha_i$ are adjusted in order to have: \begin{align} \langle V_c^{XX}\rangle & \approx\langle\alpha_1 V_a^{XX}\rangle , \nonumber \\ \langle V_c^{X\delta}\rangle & \approx\langle\alpha_2 V_D^{X\delta}\rangle , \nonumber \\ \langle V_c^{\delta\delta}\rangle & \approx\langle\alpha_3 V_E\rangle , \label{eq:pert} \end{align} where $\langle\,\cdots\rangle$ indicates the average over the wave function (see~\cref{sec:obs}), and the identifications are suggested by the similar ranges and functional forms. 
Once the ground state $\Psi_0'$ of $H'$ is calculated via the AFDMC imaginary-time propagation, the expectation value of the Hamiltonian $H$ is given by \begin{align} \!\!\langle H\rangle &\approx\langle\Psi_0'|H'|\Psi_0'\rangle+\langle\Psi_0'|H-H'|\Psi_0'\rangle \nonumber \\ & \approx\langle H'\rangle+\langle V_c^{2\pi,P}\!-\alpha_1V_a^{XX}\!-\alpha_2V_D^{X\delta}\!-\alpha_3V_E\rangle \nonumber \\ & \approx\langle H'\rangle+\langle V_{\rm pert}\rangle , \label{eq:<h3b>} \end{align} where the last term is evaluated perturbatively, meaning that its expectation value is calculated even though the operators it contains, in particular those of $V_c^{2\pi,P}$, are not included in the propagator. By suitably adjusting the constants $\alpha_i$ of~\cref{eq:pert}, we ensure that the correction $\langle V_{\rm pert}\rangle$ is small compared to $\langle H'\rangle$. A similar approach is used in the GFMC method to calculate the small nonlocal terms that are present in the AV18 interaction. In that case the difference $v_8'-v_{18}$ is calculated as a perturbation~\cite{Pudliner:1997}. \subsection{Importance sampling} \label{sec:is} Diffusion Monte Carlo algorithms, such as the GFMC and AFDMC methods, are much more efficient when importance sampling techniques are also implemented. In fact, sampling spatial and spin/isospin configurations according to $G(R',R,S'(X),S,\delta\tau)$ might not always be efficient. For instance, consider the case of a strongly repulsive interaction at short distances. In such a situation, sampling the spatial coordinates according to the kinetic energy only is not an optimal choice, because no information about the interaction enters the sampling of the paths, but only the weights associated with the configurations. As a result, an inefficiently sampled path might have a very small weight, making its contribution very small along the imaginary time. 
Suppose that we construct a positive definite wave function $\Psi_G$ close to that of the true ground state of the Hamiltonian $H$. $\Psi_G$ can be used to guide the imaginary-time evolution by defining a better propagator compared to that of~\cref{eq:imtimeprop}, to be used to sample coordinates and spin/isospin configurations: \begin{widetext} \begin{align} \langle\Psi_G|R'S'\rangle\langle R'S'|\Psi(\delta\tau)\rangle &=\displaystyle\int dR\,G(R',R,S'(X),S,\delta\tau)\,\langle\Psi_G|R'S'(X)\rangle\langle RS|\Psi_T(0)\rangle \nonumber \\ &=\displaystyle\int dR\,G(R',R,S'(X),S,\delta\tau)\frac{\langle\Psi_G|R'S'(X)\rangle}{\langle\Psi_G|RS\rangle} \langle\Psi_G|RS\rangle\langle RS|\Psi_T(0)\rangle . \end{align} \end{widetext} Note that if $\Psi_G$ is positive definite, the above propagation leaves the expectation values of the computed observables unchanged. In typical DMC calculations the modified propagator is sampled by shifting the Gaussian in the free propagator, and then including the local energy in the weight of the configuration (see, e.g., Ref.~\cite{Foulkes:2001}). A similar approach has also been used in AFDMC calculations in the past. However, in the latest implementation of the AFDMC method, a much more efficient way to implement the importance sampling propagator is used. The goal is to sample the modified propagator: \begin{align} G(R',R,S'(X),S,\delta\tau)\frac{\langle\Psi_G|R'S'(X)\rangle}{\langle\Psi_G|RS\rangle} . \end{align} We first sample a set of coordinate displacements $\Delta R$ according to~\cref{eq:prop} and a set of auxiliary fields $X$ from Gaussian distributions. Since the propagator $G$ implies Gaussian sampling for both the kinetic energy and the auxiliary fields, sampling $\Delta R$ and $X$ has the same probability as sampling $-\Delta R$ and $-X$. 
Driven by this observation, we calculate the ratios: \begin{align} w_1&=\frac{\langle\Psi_G|R+\Delta R,S'(X)\rangle}{\langle\Psi_G|RS\rangle}e^{-[V_{SI}(R+\Delta R)-E_T]\delta\tau}, \nonumber \\ w_2&=\frac{\langle\Psi_G|R-\Delta R,S'(X)\rangle}{\langle\Psi_G|RS\rangle}e^{-[V_{SI}(R-\Delta R)-E_T]\delta\tau}, \nonumber \\ w_3&=\frac{\langle\Psi_G|R+\Delta R,S'(-X)\rangle}{\langle\Psi_G|RS\rangle}e^{-[V_{SI}(R+\Delta R)-E_T]\delta\tau}, \nonumber \\ w_4&=\frac{\langle\Psi_G|R-\Delta R,S'(-X)\rangle}{\langle\Psi_G|RS\rangle}e^{-[V_{SI}(R-\Delta R)-E_T]\delta\tau}, \label{eq:w_i} \end{align} where $V_{SI}$ is the spin/isospin-independent part of the interaction. We then sample one of the above choices according to the ratios $w_i$. Finally, the total weight of the new configuration is given by \begin{align} W=\frac{1}{4}\sum_i w_i , \end{align} and $W$ is used for branching as in the standard DMC method~\cite{Carlson:2015}. \subsection{Observables} \label{sec:obs} The expectation value of an observable $\mathcal O$ is calculated by using the sampled configurations $R_iS_i$ as: \begin{align} \displaystyle\langle \mathcal O(\tau)\rangle=\frac{\displaystyle\sum_i \frac{\langle R_iS_i|\mathcal O|\Psi_T\rangle}{W}\frac{W}{\langle R_iS_i|\Psi_T\rangle}} {\displaystyle\sum_i\frac{W}{\langle R_iS_i|\Psi_T\rangle}}. \label{eq:obs} \end{align} The above expression is valid only for observables that commute with the Hamiltonian. For other observables, such as radii and densities, expectation values are often calculated from mixed estimates \begin{align} \langle\mathcal O(\tau)\rangle\approx2\frac{\langle\Psi_T|\mathcal O|\Psi(\tau)\rangle}{\langle\Psi_T|\Psi(\tau)\rangle} -\frac{\langle\Psi_T|\mathcal O|\Psi_T\rangle}{\langle\Psi_T|\Psi_T\rangle} , \label{eq:mix} \end{align} where the first term corresponds to the DMC expectation value, and the second term is the VMC one. 
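The error cancellation behind~\cref{eq:mix} can be demonstrated with a small linear-algebra model (an artificial observable and trial state, not a nuclear calculation): if the trial state differs from the exact one by $O(\epsilon)$, the mixed estimate carries an $O(\epsilon)$ error, while the extrapolated estimate is accurate to $O(\epsilon^2)$.

```python
import numpy as np

n = 4
psi0  = np.eye(n)[:, 0]    # "exact" ground state (artificial model)
delta = np.eye(n)[:, 1]    # orthogonal contamination direction

# An observable for which psi0 is not an eigenstate (constructed by hand)
O = (np.eye(n) + np.outer(psi0, delta) + np.outer(delta, psi0)
     + 2 * np.outer(delta, delta))
exact = psi0 @ O @ psi0

def errors(eps):
    """Errors of the mixed and extrapolated estimates for an imperfect
    trial state psi_T = psi0 + eps * delta."""
    psi_t = psi0 + eps * delta
    mixed = (psi_t @ O @ psi0) / (psi_t @ psi0)    # DMC-like mixed estimate
    var   = (psi_t @ O @ psi_t) / (psi_t @ psi_t)  # VMC estimate
    return abs(mixed - exact), abs(2 * mixed - var - exact)

err_mixed, err_extrap = errors(0.05)
print(err_extrap < err_mixed)                     # True: the O(eps) error cancels
print(0.2 < errors(0.025)[1] / err_extrap < 0.3)  # True: residual error is O(eps^2)
```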
\Cref{eq:mix} is valid for diagonal matrix elements, but it can be generalized to off-diagonal matrix elements, e.g., to transition matrix elements between different initial and final states (see Ref.~\cite{Pervin:2007}). Note that the extrapolation above is small for accurate wave functions. This is the case, for instance, for closed-shell nuclei and single operators. For open-shell systems, particularly for halo nuclei, the information encoded in the trial wave function may not be as accurate as that for simpler systems. This can result in a nonnegligible extrapolation of the mixed expectation value. An example of this behavior is provided by the nuclear radius, the VMC expectation value of which is typically larger than the DMC one for open-shell systems. One way to reduce the extrapolation of the mixed estimate for the radius is to use a penalty function during the optimization of the variational parameters in the trial wave function. This penalty function sets a constraint on the VMC radius so as to adjust its expectation value close to the DMC estimate, thus reducing the extrapolation. \subsection{Constrained and unconstrained evolution} \label{sec:cp} The fact that the weight $W$ is always real and positive and that $\Psi_T$ is complex makes the denominator of~\cref{eq:obs} quickly average to zero. This is the well-known sign problem in DMC methods. One way to avoid the sign problem is to use a constraint during the imaginary-time evolution. In practice, a configuration is given zero weight (thus it is dropped during branching) if its real part changes sign. In our implementation of the AFDMC method, we follow Ref.~\cite{Zhang:2003}.
In sampling the propagator, we calculate the weights $w_i$ of~\cref{eq:w_i} as \begin{align} \frac{\langle\Psi_G|R',S'(X)\rangle}{\langle\Psi_G|RS\rangle}\rightarrow \Re\left\{\frac{\langle\Psi_T|R',S'(X)\rangle}{\langle\Psi_T|RS\rangle}\right\}, \end{align} and we then apply the constraint by assigning zero weight to a move that results in a negative ratio. This is analogous to the constrained-path approximation~\cite{Zhang:1997}, but for complex wave functions and propagators. This constrained evolution does not suffer from a sign problem, but it makes the final result dependent on the choice of $\Psi_T$. Moreover, it implies that the calculated energy is not necessarily an upper bound to the true ground-state energy, as is the case for the fixed-node approximation in real space~\cite{Ortiz:1993,Foulkes:2001}. The results given by the constrained evolution can be improved by releasing the constraint and following the unconstrained evolution. After a set of configurations is generated using the constraint, the guiding function is taken as \begin{align} \langle\Psi_G|RS\rangle=\left|\Re\big\{\langle\Psi_T|RS\rangle\big\}\right|+\alpha\left|\Im\big\{\langle\Psi_T|RS\rangle\big\}\right|, \end{align} where $\alpha$ is a small arbitrary constant. This ensures that the ratio in the weights $w_i$ of~\cref{eq:w_i} is always positive and real. The propagation then continues according to the modified $\langle\Psi_G|RS\rangle$, and observables are calculated as before according to~\cref{eq:obs}. In several cases the expectation value $\langle O\rangle$ reaches a stable value independent of imaginary time before the signal-to-noise ratio goes to zero, and the result is exact within the statistical uncertainty. This is the case for light systems, $A\le4$. For larger nuclei the variance grows much faster as a function of the imaginary time, so that the unconstrained evolution cannot always be followed until $\langle O\rangle$ reaches a plateau.
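A minimal sketch of the constraint described at the beginning of this subsection: the complex ratio of trial wave functions is replaced by its real part, and a walker whose ratio turns negative receives zero weight and is dropped at branching. The numerical overlap values are purely illustrative.

```python
def constrained_ratio(psi_new, psi_old):
    """Real part of the ratio <Psi_T|R'S'(X)> / <Psi_T|RS>; a non-positive
    value means the walker has crossed the node and is given zero weight."""
    r = (psi_new / psi_old).real
    return r if r > 0.0 else 0.0

print(constrained_ratio(0.8 + 0.3j, 1.0 + 0.1j))   # kept (positive real part)
print(constrained_ratio(-0.5 + 0.2j, 1.0 + 0.1j))  # dropped: 0.0
```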
In these cases, the final result is extrapolated using an exponential fit as in Ref.~\cite{Pudliner:1997}. We found that a single-exponential form with free-sign coefficients yields the most stable fits in our case. Such a form has been used to obtain all the quoted results. Examples of unconstrained evolution are provided in~\cref{sec:constr}. \section{Trial wave function} \label{sec:wf} The AFDMC trial wave function we use takes the form: \begin{widetext} \begin{align} \langle RS|\Psi\rangle=\langle RS|\prod_{i<j}f^1_{ij}\,\prod_{i<j<k}f^{3c}_{ijk}\, \left[\mathbbm1+\sum_{i<j}\sum_{p=2}^6 f^p_{ij}\,\mathcal O_{ij}^p\, f_{ij}^{3p} +\sum_{i<j<k}U_{ijk}\right]|\Phi\rangle_{J^\pi,T} , \label{eq:psi} \end{align} \end{widetext} where $|RS\rangle$ represents the sampled $3A$ spatial coordinates and the $4A$ spin/isospin amplitudes for each nucleon, and the pair correlation functions $f^{p=1,6}_{ij}\equiv f^{p=1,6}(r_{ij})$ are obtained as the solution of Schr\"odinger-like equations in the relative distance between two particles, as explained in Ref.~\cite{Carlson:2015}. The two spin/isospin-independent functions $f^{3c}_{ijk}$ and $f^{3p}_{ij}$ are defined as \begin{align} f^{3c}_{ijk}&=1+q_1^c\,\vb{r}_{ij}\cdot\vb{r}_{ik}\,\vb{r}_{ji}\cdot\vb{r}_{jk}\, \vb{r}_{ki}\cdot\vb{r}_{kj}\,e^{-q_2^c(r_{ij}+r_{ik}+r_{jk})} , \nonumber \\ f_{ij}^{3p}&=\prod_k\left[1-q_1^p(1-\vb{r}_{ik}\cdot\vb{r}_{jk})\,e^{-q_2^p(r_{ij}+r_{ik}+r_{jk})}\right] , \end{align} and they are introduced to reduce the strength of the spin/isospin-dependent pair correlation functions when other particles are nearby~\cite{Pudliner:1997}. Finally, three-body spin/isospin-dependent correlations are also included as \begin{align} U_{ijk}=\sum_n \epsilon_n V_{ijk}^n(\alpha_n r_{ij},\alpha_n r_{ik},\alpha_n r_{jk}) , \end{align} where the terms $V^n_{ijk}$ are the same as the $3N$ interactions of~\cref{eq:v_ijk}, $\epsilon_n$ are potential quenching factors, and $\alpha_n$ are coordinate scaling factors. 
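As a concrete illustration, the central three-body correlation $f^{3c}_{ijk}$ can be evaluated directly from its definition. In the sketch below, $\vb{r}_{ab}=\vb{r}_a-\vb{r}_b$ is assumed, and the values of $q_1^c$ and $q_2^c$ are arbitrary placeholders for the optimized variational parameters.

```python
import numpy as np

def f3c(ri, rj, rk, q1c=0.1, q2c=0.5):
    """Central three-body correlation f^{3c}_{ijk}, evaluated literally
    from its definition with r_ab = r_a - r_b."""
    rij, rik = ri - rj, ri - rk
    rji, rjk = rj - ri, rj - rk
    rki, rkj = rk - ri, rk - rj
    dots = (rij @ rik) * (rji @ rjk) * (rki @ rkj)
    rsum = np.linalg.norm(rij) + np.linalg.norm(rik) + np.linalg.norm(rjk)
    return 1.0 + q1c * dots * np.exp(-q2c * rsum)

# For a right-angle geometry the polynomial prefactor vanishes and f3c = 1:
print(f3c(np.zeros(3), np.array([1.5, 0.0, 0.0]), np.array([0.0, 1.5, 0.0])))
```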
In the correlations above, we include the four terms $V_a^{2\pi,P}$, $V^{2\pi,S}$, $V_D$, and $V_E$. $V_c^{2\pi,P}$ can also be implemented in the trial wave function, but since its structure involves three-body spin/isospin operators, its inclusion results in a significantly larger computational cost. The term $|\Phi\rangle$ is taken as a shell-model-like wave function. It consists of a sum of Slater determinants constructed using single-particle orbitals: \begin{align} \langle RS|\Phi\rangle_{J^\pi,T} = \sum_n c_n\Big[\sum \mathcal C_{J\!M}\,\mathcal D\big\{\phi_\alpha(\vb{r}_i,s_i)\big\}_{J,M}\Big]_{J^\pi,T} , \end{align} where $\vb{r}_i$ are the spatial coordinates of the nucleons, and $s_i$ their spinors. $J$ is the total angular momentum, $M$ its projection, $T$ the total isospin, and $\pi$ the parity. The determinants $\mathcal D$ are coupled with Clebsch-Gordan coefficients $\mathcal C_{J\!M}$ in order to reproduce the experimental total angular momentum, total isospin, and parity $(J^\pi,T)$. The $c_n$ are variational parameters multiplying different components having the same quantum numbers. Each single-particle orbital $\phi_\alpha$ consists of a radial function multiplied by the spin/isospin trial states: \begin{align} \phi_\alpha(\vb{r}_i,s_i)=\Phi_{nj}(r_i)\left[Y_{l,m_l}(\hat{\vb{r}}_i)\chi_\gamma(s_i)\right]_{j,m_j} , \label{eq:phi} \end{align} where the spherical harmonics $Y_{l,m_l}(\hat{\vb{r}}_i)$ are coupled to the spin state $\chi_\gamma(s_i)$ in order to have single-particle orbitals in the $j$ basis.
The radial parts $\Phi(r)$ are obtained from the bound-state solutions of the Woods-Saxon wine-bottle potential: \begin{align} v(r)=V_s\left[\frac{1}{1+e^{(r-r_s)/a_s}}+\alpha_s\,e^{-(r/{\rho_s})^2}\right] , \end{align} where the five parameters $V_s$, $r_s$, $a_s$, $\alpha_s$, and $\rho_s$ can be different for orbitals belonging to different states, such as $1S_{1/2}$, $1P_{3/2}$, $1P_{1/2}$,\ldots, and they are optimized in order to minimize the variational energy. Finally, the spin/isospin trial states are represented in the $|p\uparrow\rangle$, $|p\downarrow\rangle$, $|n\uparrow\rangle$, $|n\downarrow\rangle$ basis $(|\chi_{\gamma=1,4}\rangle)$. The spinors are specified as: \begin{align} |s_i\rangle \equiv \left(\begin{array}{c} a_i \\ b_i \\ c_i \\ d_i \end{array}\right) =a_i|p\uparrow\rangle+b_i|p\downarrow\rangle+c_i|n\uparrow\rangle+d_i|n\downarrow\rangle , \end{align} and the trial spin/isospin states are taken to be: \begin{align} \chi_1(s_i)&=\langle s_i|\chi_1\rangle=\langle s_i|(1,0,0,0)\rangle=a_i,\nonumber \\ \chi_2(s_i)&=\langle s_i|\chi_2\rangle=\langle s_i|(0,1,0,0)\rangle=b_i,\nonumber \\ \chi_3(s_i)&=\langle s_i|\chi_3\rangle=\langle s_i|(0,0,1,0)\rangle=c_i,\nonumber \\ \chi_4(s_i)&=\langle s_i|\chi_4\rangle=\langle s_i|(0,0,0,1)\rangle=d_i . \end{align} Let us consider a system with $K$ states. 
According to the definitions above, a single Slater determinant $\mathcal D\equiv\mathcal D\big\{\phi_\alpha(\vb{r}_i,s_i)\big\}_{J,M}$ is constructed as: \begin{align} \mathcal D= \left| \begin{array}{cccc} a_1\phi_1(\vb{r}_1) & a_2\phi_1(\vb{r}_2) & \dots & a_A\phi_1(\vb{r}_A) \\ a_1\phi_2(\vb{r}_1) & a_2\phi_2(\vb{r}_2) & \dots & a_A\phi_2(\vb{r}_A) \\ \dots & \dots & \dots & \dots \\ b_1\phi_1(\vb{r}_1) & b_2\phi_1(\vb{r}_2) & \dots & b_A\phi_1(\vb{r}_A) \\ b_1\phi_2(\vb{r}_1) & b_2\phi_2(\vb{r}_2) & \dots & b_A\phi_2(\vb{r}_A) \\ \dots & \dots & \dots & \dots \\ d_1\phi_1(\vb{r}_1) & d_2\phi_1(\vb{r}_2) & \dots & d_A\phi_1(\vb{r}_A) \\ \dots & \dots & \dots & \dots \\ d_1\phi_K(\vb{r}_1) & d_2\phi_K(\vb{r}_2) & \dots & d_A\phi_K(\vb{r}_A) \\ \end{array} \right| . \end{align} For \isotope[16]{O}\,$(0^+,0)$, for instance, the number of states is four: one $1S_{1/2}$, two $1P_{3/2}$, and one $1P_{1/2}$. Each of them can accommodate two spins and two isospin states, and the full $\langle RS|\Phi\rangle_{0^+,0}$ wave function can be written as a single Slater determinant. For open-shell systems instead, many Slater determinants need to be included in order to have a good trial wave function with the proper $(J^\pi,T)$. For $A=6$ systems, e.g., including single-particle orbitals up to the $sd$-shell there are ten possible states: one $1S_{1/2}$, two $1P_{3/2}$, one $1P_{1/2}$, three $1D_{5/2}$, two $1D_{3/2}$, and one $2S_{1/2}$. These can be combined in 9 different Slater determinants in order to have the \isotope[6]{He}\,$(0^+,1)$ wave function, or in 32 Slater determinants to make \isotope[6]{Li}\,$(1^+,0)$. Finally, for \isotope[12]{C}\,$(0^+,0)$, by considering only $K=4$ as for \isotope[16]{O}\,$(0^+,0)$, the number of Slater determinants needed to build a $(0^+,0)$ wave function is already 119, making it computationally challenging to include $sd$-shell orbitals for $A=12$. 
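For $A=4$ particles occupying a single spatial orbital ($K=1$), one per spin/isospin channel, the determinant above reduces to a $4\times4$ matrix whose entries are products of spinor amplitudes and orbital values, and can be evaluated directly. In the sketch below, the Gaussian radial function is only an illustrative stand-in for an actual Woods-Saxon bound-state solution.

```python
import numpy as np

def phi1(r):
    # Illustrative nodeless orbital (stand-in for a Woods-Saxon solution).
    return np.exp(-0.5 * np.dot(r, r))

def slater(positions, spinors):
    """Determinant of the A x A matrix M[c, i] = amp_c(s_i) * phi_1(r_i),
    rows running over the four spin/isospin channels, columns over particles."""
    A = len(positions)
    M = np.empty((A, A))
    for i, (r, s) in enumerate(zip(positions, spinors)):
        M[:, i] = s * phi1(r)
    return np.linalg.det(M)

positions = [np.zeros(3), np.array([0.5, 0.0, 0.0]),
             np.array([0.0, 0.5, 0.0]), np.array([0.0, 0.0, 0.5])]
spinors = [np.eye(4)[c] for c in range(4)]  # one nucleon per channel
print(slater(positions, spinors))  # exp(-0.375) for this configuration
```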
The trial wave function of~\cref{eq:psi} contains a sum over pair correlation functions, meaning that only one pair of nucleons $ij$ is correlated at a time (linear correlations). This is different from the GFMC wave function~\cite{Carlson:2015}, where all pairs are correlated at the same time. In the AFDMC method, this same construction would, however, forbid the application of the Hubbard-Stratonovich transformation, justifying the choice of~\cref{eq:psi}. An improved AFDMC two-body wave function could include linear and quadratic pair correlations: \begin{widetext} \begin{align} \langle RS|\Psi\rangle_{2b}=\langle RS|\prod_{i<j}f^1_{ij}\, \left[\mathbbm1+\sum_{i<j}\sum_{p=2}^6 f^p_{ij}\,\mathcal O_{ij}^p\, +\sum_{i<j}\sum_{p=2}^6 f^p_{ij}\,\mathcal O_{ij}^p \sum_{\substack{k<l\\ij\ne kl}}\sum_{q=2}^6 f^q_{kl}\,\mathcal O_{kl}^q\,\right]|\Phi\rangle_{J^\pi,T} , \label{eq:psi2} \end{align} \end{widetext} where the sum over $kl$ includes all nucleon pairs except when $k=i$ and $l=j$. The $f^{p,q}_{ij}$ functions are solved for as before, and the operators $\mathcal{O}^{p,q}_{ij}$ are the same as in~\cref{eq:psi}. Although the two-body wave function of~\cref{eq:psi2} contains all quadratic correlations, most of the relevant physics is captured with a subset of these correlations, corresponding to the action of the $\mathcal O_{ij}^{p,q}$ operators on four distinct particles (so-called independent pair correlations). Since these correlations never act on the same particle, all the $\mathcal{O}^{p,q}_{ij}$ operators commute, removing the need for an explicit symmetrization of the wave function. Such a wave function could, in principle, improve the energy expectation value for large systems, but the computational cost of its evaluation is significantly higher than for a wave function with linear correlations only. 
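The dominance of the independent-pair terms among the quadratic correlations can be made concrete by counting pairs of pair indices; a short sketch (the counting of ordered pairs of distinct pairs is illustrative):

```python
from itertools import combinations

def pair_counts(A):
    """Count nucleon pairs, ordered pairs of distinct pairs (the quadratic
    term), and the subset acting on four distinct particles
    (independent pairs)."""
    pairs = list(combinations(range(A), 2))
    quadratic = sum(1 for ij in pairs for kl in pairs if kl != ij)
    independent = sum(1 for ij in pairs for kl in pairs
                      if set(ij).isdisjoint(kl))
    return len(pairs), quadratic, independent

print(pair_counts(16))  # (120, 14280, 10920): most quadratic terms are independent pairs
```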
In fact, the cost of computing expectation values of two-body operators on a two-body wave function of the form of~\cref{eq:psi2} is proportional to $A^4$ for linear correlations, and to $A^6$ for quadratic correlations. For this reason, in the present work we consider only linear two-body correlations in the wave function, and we present a test study of quadratic correlations in~\cref{sec:psi2}. \section{Results} \label{sec:res} \subsection{Test of constrained and unconstrained evolution} \label{sec:constr} As introduced in \cref{sec:cp}, the energy (and other observables) calculated with the AFDMC method during the constrained evolution is dependent on the choice of $\Psi_T$. This is shown in~\cref{tab:tr}, where the energy of \isotope[4]{He} is calculated for the Argonne $v_6'$ (AV6$'$) potential~\cite{Wiringa:2002} employing different trial wave functions. Full w.f. refers to the wave function of~\cref{eq:psi} where all the two-body correlations are included. Simple w.f. is instead a simplified wave function where only the central and $p=2,5$ operator correlations are used, the strength of the latter $(\mathcal O_{ij}^5=S_{ij}\,\bm\tau_i\cdot\bm\tau_j)$ being artificially reduced by a factor of 3 after the optimization process. At the variational level it is evident that the simplified wave function is not an optimal choice for $\Psi_T$, as the energy expectation value is much higher than for the fully optimized wave function. For both choices of $\Psi_T$, the constrained evolution lowers the energy, moving towards the GFMC reference value for the same potential (see~\cref{tab:av6c}), but the results are still inconsistent. It is only the unconstrained evolution that brings the results for both wave functions into agreement within statistical errors. This is also shown in~\cref{fig:tr_he4}, where the AFDMC energy is plotted as a function of imaginary time for the unconstrained evolution.
\setlength{\tabcolsep}{8pt} \begin{table}[tb] \centering \caption[]{\isotope[4]{He} ground-state energies for the AV6$'$ potential and different trial wave functions (see text for details). C(U) refers to the constrained(unconstrained) evolution. Errors are statistical. Results are in MeV.} \begin{tabular}{lcc} \hline\hline Energy & Simple w.f. & Full w.f. \\ \hline $E_{\rm VMC}$ & $-9.49(5)$ & $-23.35(1)$ \\ $E_{\rm AFDMC}^{\rm C}$ & $-25.28(3)$ & $-26.45(1)$ \\ $E_{\rm AFDMC}^{\rm U}$ & $-26.34(12)$ & $-26.31(4)$ \\ \hline\hline \end{tabular} \label{tab:tr} \end{table} \setlength{\tabcolsep}{8pt} \begin{figure}[b] \includegraphics[width=\linewidth]{ue_he4.pdf} \caption[]{Energy of \isotope[4]{He} as a function of imaginary time after releasing the constraint for the AV6$'$ potential. The two data sets refer to the two different wave functions of~\cref{tab:tr}. Red lines are exponential fits to the Monte Carlo results.} \label{fig:tr_he4} \end{figure} We report in~\cref{tab:av6c} the constrained and unconstrained energies for $A=3,4,6$ employing the AV6$'$ potential, in comparison with the GFMC results for the same interaction~\cite{Wiringa:2002}. It is interesting to note that constrained energies do not always satisfy the variational principle, as anticipated in~\cref{sec:cp}. This is seen, e.g., in \isotope[3]{H} and \isotope[4]{He}, for which the constrained energy is below the GFMC prediction, considered to be the exact solution for the given potential. However, once the unconstrained evolution is performed, the AFDMC and GFMC results agree within $1\%$ or less. \begin{table}[tb] \centering \caption[]{Ground state energies for $A=3,4,6$ employing the AV6$'$ potential. Errors are statistical. 
Results are in MeV.} \begin{tabular}{lccc} \hline\hline $\isotope[A]{Z}\,(J^\pi,T)$ & $E^{\rm C}_{\rm AFDMC}$ & $E^{\rm U}_{\rm AFDMC}$ & $E_{\rm GFMC}$ \\ \hline \isotope[3]{H}\,$(\frac{1}{2}^+,\frac{1}{2})$ & $-8.08(1)$ & $-7.95(2)$ & $-7.95(2)$ \\ \isotope[4]{He}\,$(0^+,0)$ & $-26.45(1)$ & $-26.31(4)$ & $-26.15(2)$ \\ \isotope[6]{Li}\,$(1^+,0)$ & $-28.09(4)$ & $-28.26(10)$ & $-28.37(4)$ \\ \hline\hline \end{tabular} \label{tab:av6c} \end{table} In~\cref{fig:tr_he6,fig:tr_o16} we show two examples of unconstrained calculations for larger systems, \isotope[6]{He} and \isotope[16]{O} respectively, employing realistic two- plus three-body interactions. We use the local chiral potential at N$^2$LO with cutoff $R_0=1.2\,\rm fm$ for \isotope[6]{He} and $R_0=1.0\,\rm fm$ for \isotope[16]{O}. The employed wave functions include all two- and three-body correlations, and for \isotope[6]{He} we include single-particle orbitals up to the $sd$-shell. In general, the larger the system, the shorter the imaginary-time evolution that can be followed before the variance becomes too large. This is particularly evident in \isotope[16]{O}, for which the unconstrained evolution can be satisfactorily performed up to $2.5\times10^{-4}\,\rm MeV^{-1}$, compared to the $4\times10^{-4}\,\rm MeV^{-1}$ for \isotope[6]{He} of~\cref{fig:tr_he6} with the same interaction, and to the $5\times10^{-3}\,\rm MeV^{-1}$ ($10^{-2}\,\rm MeV^{-1}$) for \isotope[4]{He} (\isotope[6]{Li}) with the AV6$'$ potential of~\cref{fig:tr_he4}. For example, at $\tau=2\times10^{-4}\,\rm MeV^{-1}$ the statistical error per nucleon is $0.01\,\rm MeV$ for \isotope[4]{He} and \isotope[6]{Li}, and $0.19\,\rm MeV$ for \isotope[16]{O}. This is a direct consequence of the quality of the employed wave function. For small nuclei, the wave function of~\cref{eq:psi} provides a good description of the system, and the energy expectation value of the constrained evolution is already close to the expected result.
In \isotope[6]{He} the difference between the constrained and unconstrained energy is of the order of $1\,\rm MeV$, roughly $3\%$ of the final result. In \isotope[16]{O} instead, the constrained energy is higher, and the unconstrained evolution lowers its value by about $25\,\rm MeV$, $\approx22\%$ of the total energy. This could be improved by employing more sophisticated wave functions including higher order correlations, such as in~\cref{eq:psi2}, and/or using more refined techniques to perform the unconstrained evolution. Studies along these directions are underway. \begin{figure}[htb] \includegraphics[width=\linewidth]{ue_he6.pdf} \caption[]{\isotope[6]{He} unconstrained evolution for the local chiral potential at N$^2$LO $(E\tau)$ with cutoff $R_0=1.2\,\rm fm$. Data points refer to the expectation value of $H'$, \cref{eq:h'}.} \label{fig:tr_he6} \end{figure} \begin{figure}[htb] \includegraphics[width=\linewidth]{ue_o16.pdf} \caption[]{\isotope[16]{O} unconstrained evolution for the local chiral potential at N$^2$LO $(E\tau)$ with cutoff $R_0=1.0\,\rm fm$. Data points refer to the expectation value of $H'$, \cref{eq:h'}.} \label{fig:tr_o16} \end{figure} \subsection{Test of quadratic two-body correlations} \label{sec:psi2} The results presented in the previous section are obtained using a trial wave function of the form of~\cref{eq:psi}, i.e., by retaining only two-body linear correlations in $\langle RS|\Psi\rangle$. We present in~\cref{tab:psi2} a test study on the effect of including quadratic correlations in the wave function on the energy expectation value. The energy expectation values for the constrained evolution have been calculated for \isotope[4]{He}, \isotope[16]{O}, and symmetric nuclear matter (SNM) with 28 particles in a box with periodic boundary conditions at saturation density $\rho_0=0.16$ fm$^{-3}$. We use the AV6$'$ potential with no Coulomb interaction for all the systems. 
Results are shown for the linear, independent pair, and full quadratic two-body correlations. \begin{table}[htb] \centering \caption[]{Energy per nucleon (in MeV) for \isotope[4]{He}, \isotope[16]{O}, and SNM at $\rho_0$. The employed potential is AV6$'$. No Coulomb interaction is considered here. Results are shown for the linear, independent pair, and full quadratic two-body correlations. Errors are statistical.} \begin{tabular}{cccc} \hline\hline System & Linear & Ind-Pair & Quadratic \\ \hline \isotope[4]{He} & $-6.79(1) $ & $-6.81(1) $ & $-6.78(1) $ \\ \isotope[16]{O} & $-7.23(6) $ & $-7.59(9) $ & $-7.50(9) $ \\ SNM & $-13.92(6)$ & $-14.80(7)$ & $-14.70(11)$ \\ \hline\hline \end{tabular} \label{tab:psi2} \end{table} Though there is little difference in \isotope[4]{He}, the constrained energies for both \isotope[16]{O} and SNM are lower when employing quadratic correlations, particularly for SNM. In \isotope[16]{O} the energy gain for the constrained evolution is only $\approx0.3(1)\,\rm MeV/A$, while in SNM this value increases up to $\approx0.8(1)\,\rm MeV/A$. Within statistical uncertainties, no difference in the results is found between independent pair and full quadratic correlations, though the latter have a higher computational cost. Note that the variational parameters in the trial wave function of~\cref{eq:psi2} were re-optimized for \isotope[4]{He}. In the case of \isotope[16]{O} and SNM instead, due to the cost of optimizing such parameters using the full wave function of~\cref{eq:psi2}, we used the same parameters obtained for the linear wave function of~\cref{eq:psi}. \subsection{Fit of the three-body interaction} The three-body interaction, which appears naturally in the chiral expansion at N$^2$LO, introduces two additional LECs that need to be fit to experimental data.
The choice considered here is to fit the LECs $c_D$ and $c_E$, multiplying the intermediate- and short-range parts of the $3N$ interaction respectively (see~\cref{eq:lecs}), to two uncorrelated observables as in Ref.~\cite{Lynn:2016}: the binding energy of \isotope[4]{He} and $n$-$\alpha$ scattering $P$-wave phase shifts. This choice probes properties of light nuclei (the \isotope[4]{He} binding energy) while also providing a handle on spin-orbit splitting via the splitting in the two $P$-wave $n$-$\alpha$ phase shifts. Furthermore, the $n$-$\alpha$ system is the lightest nuclear system featuring three interacting neutrons. It follows that this choice constrains $c_D$ and $c_E$ well, and also probes $T=3/2$ physics. \begin{figure}[b] \includegraphics[width=\linewidth]{nalpha.pdf} \caption[]{$P$-wave $n$-$\alpha$ elastic scattering phase shifts compared to an $R$-matrix analysis of experimental data~\cite{Hale}.} \label{fig:nalpha} \end{figure} The detailed fitting procedure is reported in Ref.~\cite{Lynn:2016}, where different parametrizations of the three-body force for different cutoffs were explored. No fit for the $E\mathbbm1$ parametrization and the softer cutoff $R_0=1.2\,\rm fm$ was reported at that time. However, in Ref.~\cite{Lonardoni:2017afdmc} a significant overbinding of \isotope[16]{O} was found for this softer cutoff and the $E\tau$ parametrization of the $3N$ interaction. Locally regulated chiral interactions spoil the Fierz rearrangement freedom that is typically exploited to select one of six possible operators in the contact interaction $V_E$; see Refs.~\cite{Lynn:2016,Huth:2017} for details. This means that observables will depend on the parametrization of the $3N$ interaction, and, as suggested in Ref.~\cite{Lynn:2016}, this is especially true for larger or denser nuclear systems.
Ref.~\cite{Lynn:2016} also showed that, of the two parametrizations, $E\tau$ is the more attractive and $E\mathbbm1$ the less attractive. It therefore becomes important to consider the $E\mathbbm1$ parametrization with the softer cutoff $R_0=1.2\,\rm fm$. This combination is thus explored in this work, together with the $E\mathbbm1$ parametrization for the $R_0=1.0\,\rm fm$ cutoff, and the $E\tau$ parametrization for both cutoffs. In~\cref{fig:nalpha} we report the $P$-wave $n$-$\alpha$ phase shifts for the four different combinations of operator structure and cutoff considered in this work. The corresponding values of $c_D$ and $c_E$ are shown in~\cref{tab:3bfit}. \begin{table}[tb] \centering \caption[]{LECs $c_D$ and $c_E$ for different cutoffs and parametrizations of the $3N$ force.} \begin{tabular}{cccc} \hline\hline $3N$ & $R_0\,(\rm fm)$ & $c_D$ & $c_E$ \\ \hline $E\tau$ & $1.0$ & $0.0$ & $-0.63$ \\ & $1.2$ & $3.5$ & $0.09$ \\ $E\mathbbm1$ & $1.0$ & $0.5$ & $0.62$ \\ & $1.2$ & $-0.75$ & $0.025$ \\ \hline\hline \end{tabular} \label{tab:3bfit} \end{table} \subsection{Test of the three-body calculation} The energies reported in~\cref{fig:tr_he6,fig:tr_o16} correspond to the expectation values of the effective Hamiltonian $H'$, \cref{eq:h'}. These need to be adjusted with the perturbative correction of~\cref{eq:<h3b>}---also extracted from the unconstrained evolution---in order to obtain the final results reported in~\cref{tab:10,tab:12}. Once the optimal set of parameters $\alpha_i$ is found, these corrections are small, almost consistent with zero within Monte Carlo statistical uncertainties, as shown in~\cref{tab:pert}. \setlength{\tabcolsep}{4pt} \begin{table}[htb] \centering \caption[]{Energy expectation values of~\cref{eq:<h3b>} for $A\ge6$. Errors are statistical.
Results are in MeV.} \begin{tabular}{lccccc} \hline\hline $\isotope[A]{Z}\,(J^\pi,T)$ & $3N$ & $R_0\,(\rm fm)$ & $\langle H'\rangle$ & $\langle V_{\rm pert}\rangle$ & $\langle H\rangle$ \\ \hline \isotope[6]{He}\,$(0^+,1)$ & $E\tau$ & $1.0$ & $-28.3(4)$ & $0.1(2)$ & $-28.4(4)$ \\ & & $1.2$ & $-29.1(1)$ & $0.2(1)$ & $-29.3(1)$ \\ & $E\mathbbm1$ & $1.0$ & $-28.5(5)$ & $-0.3(2)$ & $-28.2(5)$ \\ & & $1.2$ & $-27.3(3)$ & $0.1(2)$ & $-27.4(4)$ \\ \hline \isotope[6]{Li}\,$(1^+,0)$ & $E\tau$ & $1.0$ & $-31.2(4)$ & $0.3(3)$ & $-31.5(5)$ \\ & & $1.2$ & $-31.9(3)$ & $0.4(1)$ & $-32.3(3)$ \\ & $E\mathbbm1$ & $1.0$ & $-30.9(4)$ & $-0.2(2)$ & $-30.7(4)$ \\ & & $1.2$ & $-30.0(3)$ & $-0.1(2)$ & $-29.9(4)$ \\ \hline \isotope[12]{C}\,$(0^+,0)$ & $E\tau$ & $1.0$ & $-75(2)$ & $3(1)$ & $-78(3)$ \\ \hline \isotope[16]{O}\,$(0^+,0)$ & $E\tau$ & $1.0$ & $-115(5)$ & $2(1)$ & $-117(5)$ \\ & & $1.2$ & $-265(25)$ & $-2(6)$ & $-263(26)$ \\ & $E\mathbbm1$ & $1.0$ & $-114(6)$ & $1(2)$ & $-115(6)$ \\ & & $1.2$ & $-113(5)$ & $-2(2)$ & $-111(5)$ \\ \hline\hline \end{tabular} \label{tab:pert} \end{table} \setlength{\tabcolsep}{10pt} The final result $\langle H\rangle$ is, however, nearly independent of variations of the $\alpha_i$ parameters, even for larger systems. This is shown in~\cref{tab:alpha}, where the $\alpha_i$ are arbitrarily changed in \isotope[16]{O} within $5$--$10\%$ with respect to the optimal values, given in the first row for each cutoff. This results in $\lesssim4\%$ variations of the total energy, compatible with the overall Monte Carlo statistical uncertainties. Note that, in order to save computing time, this test has been done using the constrained evolution. However, the optimal constrained expectation values $\langle V_{\rm pert}\rangle$ are consistent with the unconstrained ones of~\cref{tab:pert}. \setlength{\tabcolsep}{1.5pt} \begin{table}[htb] \centering \caption[]{Contributions to the energy expectation value of~\cref{eq:<h3b>} in \isotope[16]{O}.
The parametrization $E\tau$ of the $3N$ force is used for different cutoffs. $\langle V_{\rm pert}\rangle$ is extracted from a mixed estimate, as in~\cref{eq:mix}. For each cutoff, the first line represents the optimal choice for $\alpha_i$. Energies (in MeV) are the result of the constrained evolution. Errors are statistical.} \begin{tabular}{ccccc} \hline\hline $R_0\,(\rm fm)$ & $(\alpha_1,\alpha_2,\alpha_3)$ & $\langle H'\rangle$ & $\langle V_{\rm pert}\rangle$ & $\langle H\rangle$ \\ \hline $1.0$ & $(2.05,-3.80,-0.95)$ & $-90.0(3)$ & $1.8(5)$ & $-91.8(6)$ \\ & $(2.50,-3.30,-1.20)$ & $-125.1(6)$ & $-33.9(8)$ & $-92.2(1.0)$ \\ & $(1.95,-4.00,-0.90)$ & $-83.3(2)$ & $5.9(9)$ & $-89.2(1.0)$ \\ & $(1.80,-4.20,-0.85)$ & $-75.6(3)$ & $13.9(1.4)$ & $-89.4(1.5)$ \\ \hline $1.2$ & $(1.80,0.45,8.00)$ & $-171(2)$ & $-2(1)$ & $-169(2)$ \\ & $(1.90,0.50,8.50)$ & $-197(3)$ & $-25(2)$ & $-172(3)$ \\ & $(1.70,0.40,7.50)$ & $-147(1)$ & $15(1)$ & $-162(1)$ \\ \hline\hline \end{tabular} \label{tab:alpha} \end{table} \setlength{\tabcolsep}{10pt} Unless specified otherwise, in the following, all ground-state energies correspond to the final expectation value $\langle H\rangle$, extracted from the unconstrained Monte Carlo results for $\langle H'\rangle$ with an exponential fit, and adjusted with the perturbative correction of~\cref{eq:<h3b>} when $3N$ forces are employed. \subsection{Ground-state energies and charge radii} We consider local chiral Hamiltonians at leading-order (LO), next-to-leading-order (NLO), and N$^2$LO, the latter including both two- and three-body forces. At each order we can assign theoretical uncertainties to observables coming from the truncation of the chiral expansion, see, e.g., Ref.~\cite{Epelbaum:2015epja}. 
For an observable $X$ at N$^2$LO, the theoretical uncertainty is obtained as \begin{align} \Delta X^{\text{N}^2\text{LO}}=&\max(Q^4\times|X^{\text{LO}}|,\nonumber \\ &\phantom{\max(\,}Q^2\times|X^{\text{NLO}}-X^{\text{LO}}|, \nonumber \\ &\phantom{\max(\,}Q^{\phantom{2}}\times|X^{\text{N}^2\text{LO}}-X^{\text{NLO}}|), \label{eq:err} \end{align} where we take $Q=m_\pi/\Lambda_b$ with $m_\pi\approx140\,\rm MeV$ and $\Lambda_b=600$~MeV, as in Ref.~\cite{Lonardoni:2017afdmc}. The expectation value of the charge radius is derived from the point-proton radius using the relation: \begin{align} \left\langle r_{\rm ch}^2\right\rangle= \left\langle r_{\rm pt}^2\right\rangle+ \left\langle R_p^2\right\rangle+ \frac{A-Z}{Z}\left\langle R_n^2\right\rangle+ \frac{3\hbar^2}{4M_p^2c^2}, \label{eq:rch} \end{align} where $r_{\rm pt}$ is the calculated point-proton radius, $\left\langle R_p^2\right\rangle=0.770(9)\,\rm{fm}^2$~\cite{Beringer:2012} the squared charge radius of the proton, $\left\langle R_n^2\right\rangle=-0.116(2)\,\rm{fm}^2$~\cite{Beringer:2012} the squared charge radius of the neutron, and $(3\hbar^2)/(4M_p^2c^2)\approx0.033\,\rm{fm}^2$ the Darwin-Foldy correction~\cite{Friar:1997}. For \isotope[6]{He} a spin-orbit correction $\left\langle r_{\rm so}^2\right\rangle=-0.08\,\rm{fm}^2$~\cite{Ong:2010} is also included. The point-nucleon radius $r_{\rm pt}$ is calculated as \begin{align} \left\langle r_N^2\right\rangle=\frac{1}{{\cal N}}\big\langle\Psi\big|\sum_i\mathcal P_{N_i} |\vb{r}_i-\vb{R}_{\rm cm}|^2\big|\Psi\big\rangle, \end{align} where $\vb{R}_{\rm cm}$ is the coordinate of the center of mass of the system, ${\cal N}$ is the number of protons or neutrons, and \begin{align} \mathcal P_{N_i}=\frac{1\pm\tau_{z_i}}{2}, \label{eq:proj} \end{align} is the projection operator onto protons or neutrons. The charge radius is a mixed expectation value, and it requires the calculation of both VMC and DMC point-proton radii, according to~\cref{eq:mix}.
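Both~\cref{eq:err} and~\cref{eq:rch} are straightforward to evaluate numerically. The sketch below uses the constants quoted in the text; the order-by-order energies and the VMC and mixed point-proton radii used as inputs are hypothetical.

```python
def n2lo_uncertainty(x_lo, x_nlo, x_n2lo, Q=140.0 / 600.0):
    """Truncation uncertainty at N2LO with Q = m_pi / Lambda_b."""
    return max(Q**4 * abs(x_lo),
               Q**2 * abs(x_nlo - x_lo),
               Q * abs(x_n2lo - x_nlo))

def extrapolated(o_mixed, o_vmc):
    """Mixed-estimate extrapolation 2<O>_mixed - <O>_VMC."""
    return 2.0 * o_mixed - o_vmc

def charge_radius_sq(r_pt_sq, A, Z, r_so_sq=0.0,
                     Rp2=0.770, Rn2=-0.116, darwin_foldy=0.033):
    """<r_ch^2> in fm^2 from the point-proton mean-square radius,
    including proton/neutron sizes and the Darwin-Foldy correction."""
    return r_pt_sq + Rp2 + (A - Z) / Z * Rn2 + darwin_foldy + r_so_sq

# Hypothetical order-by-order energies (MeV):
print(n2lo_uncertainty(-20.0, -25.0, -26.0))
# Charge radius (fm) from hypothetical mixed and VMC point-proton
# mean-square radii (fm^2) for A=4, Z=2:
r2_pt = extrapolated(2.16, 2.20)
print(charge_radius_sq(r2_pt, A=4, Z=2) ** 0.5)
```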
Regardless of the employed optimization of the variational wave function (free or constrained), the extrapolation of the mixed estimate $\left\langle r_{\rm ch}^2\right\rangle$ is small, and the final results for different optimizations typically agree within statistical uncertainties. The ground-state energies and charge radii for light systems $(A=3,4)$ employing local chiral potentials at N$^2$LO are shown in~\cref{tab:afdmc-gfmc}. Results with the $3N$ force ($E\tau$ parametrization) and without it are shown for different choices of the cutoff $R_0$. For all these systems, we used the same parameters $\alpha_i$ for the propagation of the $3N$ force, determined in order to minimize the perturbative correction of~\cref{eq:<h3b>}. The agreement with the GFMC results of Refs.~\cite{Lynn:2016,Lynn:2017}, where the $3N$ interactions are fully included in the propagation, is within a few percent both at the two- and three-body level, providing a good benchmark for the AFDMC propagation technique described in~\cref{sec:p3}. \begin{table*}[htb] \centering \caption[]{Ground-state energies and charge radii for $A=3,4$ employing local chiral potentials at N$^2$LO. The $E\tau$ parametrization of the $3N$ force is used. Errors are statistical.
GFMC results are from Refs.~\cite{Lynn:2014,Lynn:2016}.} \begin{tabular}{cclcccc} \hline\hline Nucleus & Cutoff & Potential & \multicolumn{2}{c}{AFDMC} & \multicolumn{2}{c}{GFMC} \\ $\isotope[A]{Z}\,(J^\pi,T)$ & $R_0\,(\rm fm)$ & & $E\,(\rm MeV)$ & $r_{\rm ch}\,(\rm fm)$ & $E\,(\rm MeV)$ & $r_{\rm ch}\,(\rm fm)$ \\ \hline \isotope[3]{H}\,$(\frac{1}{2}^+,\frac{1}{2})$ & $1.0$ & $N\!N$ & $-7.54(4)$ & $1.75(2)$ & $-7.55(1)$ & $1.78(2)$ \\ & & $3N$ $E\tau$ & $-8.33(7)$ & $1.72(2)$ & $-8.34(1)$ & $1.72(3)$ \\ & $1.2$ & $N\!N$ & $-7.76(3)$ & $1.74(2)$ & $-7.74(1)$ & $1.75(2)$ \\ & & $3N$ $E\tau$ & $-8.27(5)$ & $1.73(2)$ & $-8.35(4)$ & $1.72(4)$ \\ \hline \isotope[3]{He}\,$(\frac{1}{2}^+,\frac{1}{2})$ & $1.0$ & $N\!N$ & $-6.89(5)$ & $2.02(2)$ & $-6.78(1)$ & $2.06(2)$ \\ & & $3N$ $E\tau$ & $-7.55(8)$ & $1.96(2)$ & $-7.65(2)$ & $1.97(2)$ \\ & $1.2$ & $N\!N$ & $-7.12(3)$ & $1.98(2)$ & $-7.01(1)$ & $2.01(1)$ \\ & & $3N$ $E\tau$ & $-7.64(4)$ & $1.95(5)$ & $-7.63(4)$ & $1.97(1)$ \\ \hline \isotope[4]{He}\,$(0^+,0)$ & $1.0$ & $N\!N$ & $-23.96(8)$ & $1.72(2)$ & $-23.72(1)$ & $1.73(1)$ \\ & & $3N$ $E\tau$ & $-27.64(13)$ & $1.68(2)$ & $-28.30(1)$ & $1.65(2)$ \\ & $1.2$ & $N\!N$ & $-25.17(5)$ & $1.69(1)$ & $-24.86(1)$ & $1.69(1)$ \\ & & $3N$ $E\tau$ & $-28.37(8)$ & $1.65(1)$ & $-28.30(1)$ & $1.64(1)$ \\ \hline\hline \end{tabular} \label{tab:afdmc-gfmc} \end{table*} In~\cref{fig:ene} we present the ground-state energies per nucleon of nuclei with $3\le A\le16$ for cutoffs $R_0=1.0\,\rm fm$ and $R_0=1.2\,\rm fm$, respectively. Results at LO, NLO, and N$^2$LO for both $E\tau$ and $E\mathbbm1$ parametrizations of the $3N$ force are shown. Error bars are estimated by including both the Monte Carlo uncertainties and the errors given by the truncation of the chiral expansion, the latter being the dominant ones. For the harder interaction $(R_0=1.0\,\rm fm)$, the predicted binding energies at N$^2$LO are in good agreement with experimental data all the way up to $A=16$. 
No differences, within theoretical uncertainties, are found for the two different parametrizations of the $3N$ force. \begin{figure*}[htb] \includegraphics[width=0.48\linewidth]{e_10.pdf}\qquad\includegraphics[width=0.48\linewidth]{e_12.pdf} \caption[]{Ground-state energies per nucleon for $3\le A\le16$ with local chiral potentials: (a) $R_0=1.0\,\rm fm$ cutoff (left panel), (b) $R_0=1.2\,\rm fm$ cutoff (right panel). Results at different orders of the chiral expansion and for different $3N$ parametrizations are shown. Smaller error bars (indistinguishable from the symbols up to $A=6$) indicate the statistical Monte Carlo uncertainty, while larger error bars are the uncertainties from the truncation of the chiral expansion. LO and N$^2$LO $E\tau$ results for \isotope[16]{O} with $R_0=1.2\,\rm fm$ are outside the displayed energy region. Updated from Ref.~\cite{Lonardoni:2017afdmc}.} \label{fig:ene} \end{figure*} \isotope[12]{C} in the $E\tau$ parametrization is slightly underbound. This is most likely a consequence of the employed wave function, which yields too high an energy for the constrained evolution. This could be due to the complicated clustering structure of \isotope[12]{C}, not included in $\Psi_T$, which would require a much longer unconstrained propagation to filter out the corresponding low-lying excitations from $\Psi_T$. For $A=6$ the wave function is constructed using up to $sd$-shell single-particle orbitals. For \isotope[12]{C} instead, coupling $p$-shell orbitals alone already results in a sum of 119 Slater determinants. Including orbitals in the $sd$-shell could in principle result in a better wave function for this open-shell system, but it would sizably increase the number of determinants to consider, making the calculation prohibitively time consuming. Another possible improvement would be to include quadratic terms in the pair correlations, as shown in~\cref{eq:psi2}. 
However, first attempts in \isotope[16]{O} led to just a $\approx6(2)\,\rm MeV$ reduction of the total energy in a simplified scenario (see~\cref{tab:psi2}), with a noticeably increased computational cost. For the softer interaction $(R_0=1.2\,\rm fm)$, NLO and in particular LO results are typically more bound compared to the $R_0=1.0\,\rm fm$ case. Both parametrizations of the $3N$ force render the N$^2$LO energies compatible with the experimental values up to $A=6$, and consistent with those obtained with the hard potential. \begin{figure*}[htb] \includegraphics[width=0.48\linewidth]{rch_10.pdf}\qquad\includegraphics[width=0.48\linewidth]{rch_12.pdf} \caption[]{Charge radii for $3\le A\le16$ with local chiral potentials: (a) $R_0=1.0\,\rm fm$ cutoff (left panel), (b) $R_0=1.2\,\rm fm$ cutoff (right panel). The legend and error bars are as in~\cref{fig:ene}. Updated from Ref.~\cite{Lonardoni:2017afdmc}.} \label{fig:rch} \end{figure*} For the heaviest system considered here, \isotope[16]{O}, the picture is quite different. At LO, the system is dramatically overbound $(\approx -1\,\rm GeV)$, which would imply very large theoretical uncertainties at NLO and N$^2$LO coming from the prescription of~\cref{eq:err}. Within these uncertainties, NLO and N$^2$LO two-body energies are compatible with the corresponding results for the hard interaction (see~\cref{tab:10,tab:12}). However, the contribution of the $3N$ force at N$^2$LO largely depends upon the employed operator structure. The $E\tau$ parametrization for the soft potential is very attractive, adding almost $10\,\rm MeV$ per nucleon to the total energy, and thus predicting a significant overbinding with a ground-state energy of $\approx -260\,\rm MeV$. The $E\mathbbm1$ parametrization is instead less attractive, resulting in $\approx 0.30\,\rm MeV$ per nucleon more binding with respect to the two-body case, compatible with the energy expectation values for the hard potential. 
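The per-nucleon $3N$ contributions quoted here follow directly from the total \isotope[16]{O} energies collected in the tables for $A\ge6$. A quick numerical check (the energies are hard-coded from the $R_0=1.2\,\rm fm$ results; variable names are ours):

```python
# 16O unconstrained total energies (MeV), soft cutoff R0 = 1.2 fm:
# N2LO NN only, N2LO with 3N E-tau, N2LO with 3N E-1
A = 16
e_nn, e_etau, e_e1 = -106.0, -263.0, -111.0

# Extra binding per nucleon brought by the 3N force
print((e_etau - e_nn) / A)  # almost 10 MeV per nucleon of extra attraction
print((e_e1 - e_nn) / A)    # roughly 0.3 MeV per nucleon of extra attraction
```

The two prints reproduce the "almost $10\,\rm MeV$ per nucleon" ($E\tau$) and "$\approx 0.30\,\rm MeV$ per nucleon" ($E\mathbbm1$) figures quoted in the text.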
\Cref{fig:rch} shows the charge radii at different orders of the chiral expansion and for different cutoffs and parametrizations of the $3N$ force. The agreement with experimental data for the hard interaction at N$^2$LO is remarkably good all the way up to oxygen. One exception is \isotope[6]{Li}, for which the charge radius is somewhat underpredicted. However, a similar conclusion is found in GFMC calculations employing the AV18+IL7 potential, where charge radii of lithium isotopes are underestimated~\cite{Carlson:2015}. For the soft interaction, the description of charge radii resembles order by order that for the hard potential up to $A=6$, with the N$^2$LO results in agreement with experimental data, except for \isotope[6]{Li} (also shown in~\cref{tab:afdmc-gfmc}). The picture changes again for $A=16$. The charge radius of \isotope[16]{O} turns out to be close to $2.2\,\rm fm$ with the $E\tau$ parametrization of the $3N$ force, smaller than that of \isotope[6]{Li} for the same potential, but consistent with the significant overbinding predicted for $A=16$. The oxygen charge radius for the $E\mathbbm1$ parametrization is instead closer to the experimental value. The details of LO, NLO, and N$^2$LO calculations for $A\ge6$ are reported in~\cref{tab:10,tab:12} for $R_0=1.0\,\rm fm$ and $R_0=1.2\,\rm fm$, respectively. Results for the constrained and unconstrained evolution energies are both shown, together with the charge radii. Both Monte Carlo uncertainties and theoretical errors coming from the truncation of the chiral expansion are reported (where available). At N$^2$LO the two-body energy is shown together with that of the two different parametrizations of the $3N$ force ($E\tau$ and $E\mathbbm{1}$). The full calculation of \isotope[12]{C} at N$^2$LO required on the order of $10^6$ CPU hours (on Intel Broadwell cores @ 2.1GHz) for a single cutoff $(1.0\,\rm fm)$ and $3N$ parametrization $(E\tau)$. 
Due to the high computational cost, no attempts were made for the $E\mathbbm{1}$ parametrization of the $3N$ force or for the $1.2\,\rm fm$ cutoff. \begin{table*}[htb] \centering \caption[]{Ground-state energies and charge radii for $A\ge6$ with local chiral potentials. Results at different orders of the chiral expansion and for different $3N$ parametrizations are shown. Energy results are shown for both the constrained ($E_{\rm C}$) and unconstrained ($E$) evolutions. The first error is statistical, the second is based on the EFT expansion uncertainty. The employed cutoff is $R_0=1.0\,\rm fm$.} \begin{tabular}{llccc} \hline\hline $\isotope[A]{Z}\,(J^\pi,T)$ & Potential & $E_{\rm C}\,(\rm MeV)$ & $E\,(\rm MeV)$ & $r_{\rm ch}\,(\rm fm)$ \\ \hline \isotope[6]{He}\,$(0^+,1)$ & LO & $-42.1(1)$ & $-41.3(1)(9.6)$ & $1.67(4)(39)$ \\ & NLO & $-18.19(7)$ & $-20.0(3)(5.0)$ & $2.33(5)(15)$ \\ & N$^2$LO $N\!N$ & $-22.24(4)$ & $-23.1(2)(1.2)$ & $2.11(4)(5)$ \\ & N$^2$LO $3N$ $E\tau$ & $-26.58(6)$ & $-28.4(4)(2.0)$ & $1.99(4)(8)$ \\ & N$^2$LO $3N$ $E\mathbbm{1}$ & $-26.33(8)$ & $-28.2(5)(1.9)$ & $2.01(4)(7)$ \\ & exp & & $-29.3$ & $2.068(11)$~\cite{Mueller:2007} \\ \hline \isotope[6]{Li}\,$(1^+,0)$ & LO & $-42.8(1)$ & $-42.4(1)(9.9)$ & $2.03(6)(47)$ \\ & NLO & $-19.2(2)$ & $-21.5(3)(4.9)$ & $2.76(8)(17)$ \\ & N$^2$LO $N\!N$ & $-24.3(1)$ & $-25.5(4)(1.1)$ & $2.46(4)(7)$ \\ & N$^2$LO $3N$ $E\tau$ & $-28.9(1)$ & $-31.5(5)(2.3)$ & $2.33(4)(10)$ \\ & N$^2$LO $3N$ $E\mathbbm{1}$ & $-28.9(1)$ & $-30.7(4)(2.1)$ & $2.33(4)(10)$ \\ & exp & & $-32.0$ & $2.589(39)$~\cite{Nortershauser:2011} \\ \hline \isotope[12]{C}\,$(0^+,0)$ & LO & $-131.5(2)$ & $-131(1)(31)$ & $1.66(4)(39)$ \\ & NLO & $-31.1(2)$ & $-41(2)(21)$ & $3.25(5)(37)$ \\ & N$^2$LO $N\!N$ & $-63.5(2.4)$& $-66(3)(6)$ & $2.66(4)(14)$ \\ & N$^2$LO $3N$ $E\tau$ & $-70.2(5)$ & $-78(3)(9)$ & $2.48(4)(18)$ \\ & N$^2$LO $3N$ $E\mathbbm{1}$ & $-$ & $-$ & $-$ \\ & exp & & $-92.2$ & $2.471(6)$~\cite{Sick:1982} \\ \hline 
\isotope[16]{O}\,$(0^+,0)$ & LO & $-251.7(2)$ & $-247(1)(58)$ & $1.44(3)(34)$ \\ & NLO & $-37.3(2)$ & $-49(2)(46)$ & $3.27(5)(43)$ \\ & N$^2$LO $N\!N$ & $-72.8(2)$ & $-87(3)(11)$ & $2.76(5)(12)$ \\ & N$^2$LO $3N$ $E\tau$ & $-91.8(6)$ & $-117(5)(16)$ & $2.71(5)(13)$ \\ & N$^2$LO $3N$ $E\mathbbm{1}$ & $-85.8(5)$ & $-115(6)(15)$ & $2.72(5)(11)$ \\ & exp & & $-127.6$ & $2.730(25)$~\cite{Sick:1970} \\ \hline\hline \end{tabular} \label{tab:10} \end{table*} \begin{table*}[htb] \centering \caption[]{Same as~\cref{tab:10} but for the $R_0=1.2\,\rm fm$ cutoff.} \begin{tabular}{llccc} \hline\hline $\isotope[A]{Z}\,(J^\pi,T)$ & Potential & $E_{\rm C}\,(\rm MeV)$ & $E\,(\rm MeV)$ & $r_{\rm ch}\,(\rm fm)$ \\ \hline \isotope[6]{He}\,$(0^+,1)$ & LO & $-55.65(6)$ & $-54.9(2)(12.8)$ & $1.31(2)(31)$ \\ & NLO & $-21.41(6)$ & $-21.8(1)(7.7)$ & $2.08(4)(18)$ \\ & N$^2$LO $N\!N$ & $-24.25(5)$ & $-24.3(1)(1.8)$ & $2.02(4)(4)$ \\ & N$^2$LO $E\tau$ & $-28.37(5)$ & $-29.3(1)(1.8)$ & $1.92(4)(4)$ \\ & N$^2$LO $E\mathbbm{1}$ & $-26.98(8)$ & $-27.4(4)(1.8)$ & $2.00(4)(4)$ \\ & exp & & $-29.3$ & $2.068(11)$~\cite{Mueller:2007} \\ \hline \isotope[6]{Li}\,$(1^+,0)$ & LO & $-56.84(3)$ & $-56.0(1)(13.1)$ & $1.59(2)(37)$ \\ & NLO & $-23.64(8)$ & $-25.2(2)(7.2)$ & $2.47(4)(21)$ \\ & N$^2$LO $N\!N$ & $-26.76(3)$ & $-27.0(2)(1.7)$ & $2.41(4)(5)$ \\ & N$^2$LO $E\tau$ & $-30.8(1)$ & $-32.3(3)(1.7)$ & $2.24(4)(6)$ \\ & N$^2$LO $E\mathbbm{1}$ & $-29.2(1)$ & $-29.9(4)(1.7)$ & $2.29(4)(5)$ \\ & exp & & $-32.0$ & $2.589(39)$~\cite{Nortershauser:2011} \\ \hline \isotope[16]{O}\,$(0^+,0)$ & LO & $-1158.8(5)$ & $-1110(31)(259)$ & $1.15(5)(27)$ \\ & NLO & $-72.3(1)$ & $-77.5(7)(240.8)$ & $2.65(5)(35)$ \\ & N$^2$LO $N\!N$ & $-98.6(1)$ & $-106(4)(56)$ & $2.47(5)(8)$ \\ & N$^2$LO $E\tau$ & $-169(2)$ & $-263(26)(56)$ & $2.17(5)(11)$ \\ & N$^2$LO $E\mathbbm{1}$ & $-99.5(4)$ & $-111(5)(56)$ & $2.55(5)(8)$ \\ & exp & & $-127.6$ & $2.730(25)$~\cite{Sick:1970} \\ \hline\hline \end{tabular} \label{tab:12} \end{table*} As 
shown in~\cref{tab:10,tab:12}, the overbinding in \isotope[16]{O} happens only when the $3N$ force is included with the $E\tau$ parametrization for $R_0=1.2\,\rm fm$. The alternative combinations of three-body operators and cutoffs considered in this work predict instead binding energies compatible with the experimental value. A close look at the energy contributions coming from the $3N$ force in \isotope[6]{Li} and \isotope[16]{O} (\cref{tab:v3}) clearly shows the issue. A large negative $V_D$ contribution in \isotope[16]{O} for the soft $E\tau$ potential drives the system to a strongly bound state. In fact, even though the total energy at the two-body level is similar to that of the other soft potentials for $A=16$, the individual expectation values of the kinetic energy and the two-body potential are considerably larger, consistent with a very compact system. The $3N$ force then adds $\approx 13\,\rm MeV$ per nucleon, roughly half coming from the likewise increased TPE contribution, and half from $V_D$. In the case of the $R_0=1.0\,\rm fm$ cutoff instead, the $3N$ force in both parametrizations adds only $<3\,\rm MeV$ per nucleon to the total two-body energy, with similar TPE contributions and a balance between $\langle V_D\rangle$ and $\langle V_E\rangle$. This also holds in \isotope[6]{Li} for $R_0=1.2\,\rm fm$, but the balance is broken for the soft $E\tau$ potential in \isotope[16]{O}. The main reason for such behavior can be attributed to the large value of $c_D$ for this potential (see~\cref{tab:3bfit}), which is particularly effective for $A>6$. As has been discussed briefly above and in more detail in Refs.~\cite{Lynn:2016,Huth:2017}, locally regulated chiral interactions spoil the Fierz rearrangement freedom used to select one of the six possible operators that are consistent with the symmetries of the theory for the contact interaction at N$^2$LO, $V_E$. This means that results at finite cutoff depend on this choice. 
However, these additional regulator artifacts are absorbed by higher-order LECs in chiral EFT~\cite{Huth:2017}. Furthermore, the dependence is typically within the truncation uncertainties (an exception occurs for denser or heavier systems, such as neutron matter beyond saturation density or, as shown above, \isotope[16]{O}). In these cases, since the next order in chiral EFT at which $3N$ contact interactions appear is next-to-next-to-next-to-next-to-leading order, which remains a significant challenge at this point, one can use instead the parametrization $V_{E\mathcal{P}}$ of the contact interaction introduced in Ref.~\cite{Lynn:2016}, which projects onto triples with total spin $S=1/2$ and total isospin $T=1/2$. These are the triples that survive in the infinite (momentum-space) cutoff limit, and thus this parametrization partially restores the Fierz rearrangement freedom. However, the $V_{E\mathcal{P}}$ parametrization involves spin/isospin operators beyond quadratic order, which makes its direct inclusion in the AFDMC propagation challenging. We leave the exploration of this parametrization to future work. \setlength{\tabcolsep}{5pt} \begin{table*}[htb] \centering \caption[]{Expectation values of the N$^2$LO energy contributions in \isotope[6]{Li} and \isotope[16]{O}. All energies (in MeV) are mixed estimates from the constrained evolution: $2\,\langle\mathcal O_{\rm DMC}\rangle - \langle\mathcal O_{\rm VMC}\rangle$. 
Errors are statistical.} \begin{tabular}{cclcccccccc} \hline\hline System & $R_0\,(\rm fm)$ & Potential & $E_{\rm kin}$ & $v_{ij}$ & $E_{\rm kin}+v_{ij}$ & $V_{ijk}$ & $V^{2\pi,P}$ & $V^{2\pi,S}$ & $V_D$ & $V_{E}$ \\ \hline \isotope[6]{Li} & 1.0 & $N\!N$ & $116.8(4)$ & $-151.2(4)$ & $-34.4(8)$ & & & & & \\ & 1.0 & $3N$ $E\tau$ & $135.3(7)$ & $-165.6(5)$ & $-30.2(1.2)$ & $-11.1(3)$ & $-13.3(3)$ & $-0.43(1)$ & $0$ & $2.67(2)$ \\ & 1.0 & $3N$ $E\mathbbm1$ & $135.5(6)$ & $-165.8(6)$ & $-30.3(1.2)$ & $-11.3(2)$ & $-13.3(2)$ & $-0.42(1)$ & $-0.89(2)$ & $3.38(4)$ \\ [0.2cm] & 1.2 & $N\!N$ & $110.3(3)$ & $-145.4(3)$ & $-35.1(6)$ & & & & & \\ & 1.2 & $3N$ $E\tau$ & $129.3(6)$ & $-160.1(5)$ & $-30.8(1.1)$ & $-11.8(3)$ & $-6.1(2)$ & $-0.39(1)$ & $-4.6(1)$ & $-0.63(1)$ \\ & 1.2 & $3N$ $E\mathbbm1$ & $118.8(4)$ & $-154.0(3)$ & $-35.2(7)$ & $-5.5(1)$ & $-5.6(1)$ & $-0.26(1)$ & $0.08(1)$ & $0.27(1)$ \\ \hline \isotope[16]{O} & 1.0 & $N\!N$ & $319(1)$ & $-453(1)$ & $-134(2)$ & & & & & \\ & 1.0 & $3N$ $E\tau$ & $370(1)$ & $-500(1)$ & $-130(2)$ & $-44(1)$ & $-55(1)$ & $0.85(1)$ & $0$ & $8.50(4)$ \\ & 1.0 & $3N$ $E\mathbbm1$ & $367(1)$ & $-497(1)$ & $-131(2)$ & $-41(1)$ & $-54(1)$ & $0.72(1)$ & $-4.03(5)$ & $15.7(1)$ \\ [0.2cm] & 1.2 & $N\!N$ & $377(1)$ & $-528(2)$ & $-151(3)$ & & & & & \\ & 1.2 & $3N$ $E\tau$ & $556(4)$ & $-712(3)$ & $-156(7)$ & $-202(3)$ & $-101(2)$ & $-0.72(9)$ & $-94(2)$ & $-5.43(3)$ \\ & 1.2 & $3N$ $E\mathbbm1$ & $377(1)$ & $-529(1)$ & $-152(2)$ & $-26(1)$ & $-34(1)$ & $0.94(1)$ & $4.53(8)$ & $1.90(1)$ \\ \hline\hline \end{tabular} \label{tab:v3} \end{table*} \setlength{\tabcolsep}{10pt} \subsection{Charge form factors and Coulomb sum rules} One- and two-body point-nucleon densities are calculated as \begin{align} \!\!\rho_{N}(r) &=\frac{1}{4\pi r^2}\big\langle\Psi\big|\sum_i \mathcal P_{N_i}\delta(r-|\vb{r}_i-\vb{R}_{\rm cm}|)\big|\Psi\big\rangle, \label{eq:rho_N} \\ \!\!\rho_{NN}(r)&=\frac{1}{4\pi r^2}\big\langle\Psi\big|\sum_{i<j}\mathcal 
P_{N_i}\mathcal P_{N_j}\delta(r-|\vb{r}_i-\vb{r}_j|)\big|\Psi\big\rangle, \label{eq:rho_NN} \end{align} where $\mathcal P_{N_i}$ is the projection operator of~\cref{eq:proj}. With the current definitions, $\rho_N$ and $\rho_{NN}$ integrate to the number of nucleons and the number of nucleon pairs, respectively. As opposed to the charge radius, densities are not observables themselves, but the one-body densities can be related to physical quantities experimentally accessible via electron-nucleon scattering processes, such as the longitudinal elastic (charge) form factor. In fact, the charge form factor can be expressed as the ground-state expectation value of the one-body charge operator~\cite{Mcvoy:1962}, which, ignoring small spin-orbit contributions in the one-body current, results in the following expression: \begin{align} F_L(q)=\frac{1}{Z}\frac{G_E^p(Q_{\rm el}^2)\,\tilde{\rho}_p(q)+G_E^n(Q_{\rm el}^2)\,\tilde{\rho}_n(q)}{\sqrt{1+Q_{\rm el}^2/(4 m_N^2)}}, \label{eq:ff} \end{align} where $\tilde{\rho}_{N}(q)$ is the Fourier transform of the one-body point-nucleon density defined in~\cref{eq:rho_N}, and $Q^2_{\rm el}=\vb{q}^2-\omega_{\rm el}^2$ is the squared four-momentum transfer, with $\omega_{\rm el}=\sqrt{q^2+m_A^2}-m_A$ the energy transfer corresponding to the elastic peak, $m_A$ being the mass of the target nucleus. $G_E^N(Q^2)$ are the nucleon electric form factors, for which we adopt Kelly's parametrization~\cite{Kelly:2004}. \begin{figure}[htb] \includegraphics[width=\linewidth]{ff_li6_e1.pdf} \caption[]{Charge form factor in \isotope[6]{Li}. The solid blue (red) line is the AFDMC result for the N$^2$LO $E\mathbbm1$ potential with cutoff $R_0=1.0\,(1.2)\,\rm fm$. Lighter shaded areas indicate the uncertainties from the truncation of the chiral expansion. Darker shaded areas are the theoretical error bands only taking into account NLO and N$^2$LO results. Black triangles are the VMC one-body results for AV18+UIX~\cite{Wiringa:1998}. 
The experimental data are taken from Ref.~\cite{Li:1971}.} \label{fig:ff_li6} \end{figure} \begin{figure}[b] \includegraphics[width=\linewidth]{ff_c12_et.pdf} \caption[]{Charge form factor in \isotope[12]{C}. In blue are the AFDMC results for the $E\tau$ parametrization of the $3N$ force and cutoff $R_0=1.0\,\rm fm$. Black triangles are the GFMC one-body results for AV18+IL7~\cite{Lovato:2013}. The experimental data are taken from Ref.~\cite{Devries:1987}. Updated from Ref.~\cite{Lonardoni:2017afdmc}.} \label{fig:ff_c12} \end{figure} \begin{figure}[htb] \includegraphics[width=\linewidth]{ff_o16_e1.pdf} \caption[]{Charge form factor in \isotope[16]{O}. In blue (red) are the AFDMC results as in~\cref{fig:ff_li6}. Black triangles are the cluster-VMC one-body results for AV18+UIX~\cite{Lonardoni:2017cvmc}. Experimental data are from I. Sick, based on Refs.~\cite{Sick:1970,Schuetz:1975,Sick:1975}.} \label{fig:ff_o16} \end{figure} The charge form factors of \isotope[6]{Li}, \isotope[12]{C}, and \isotope[16]{O} are shown in~\cref{fig:ff_li6,fig:ff_c12,fig:ff_o16}, respectively. In all the plots, the blue (red) curve is the AFDMC result for the N$^2$LO $E\mathbbm1$ potential ($E\tau$ for \isotope[12]{C}), with cutoff $R_0=1.0\,(1.2)\,\rm fm$. Monte Carlo error bars are typically of the size of the lines within the momentum range considered here. Lighter shaded areas indicate the uncertainties from the truncation of the chiral expansion, according to~\cref{eq:err}. Darker shaded areas are instead the theoretical error bands only considering the last term of the prescription, i.e., taking into account the NLO and N$^2$LO results only. AFDMC results are compared to experimental data and to available Monte Carlo calculations employing the phenomenological potentials and one-body charge operators only. No two-body operators are included in the calculation of the charge form factors in the current work. 
However, as shown in Refs.~\cite{Wiringa:1998,Lovato:2013,Mihaila:2000} for the three different systems, such operators give a measurable contribution only for $q>2\,\rm fm^{-1}$, as they mainly encode relativistic corrections. The charge form factor of \isotope[6]{Li} for the $E\mathbbm1$ interaction is compatible with experimental data at low momentum for both cutoffs, with larger theoretical uncertainties for the soft potential. Results for the $E\tau$ parametrization show a similar behavior. The discrepancy for $q\gtrsim2\,\rm fm^{-1}$ is due to the missing two-body currents. In fact, AFDMC results for local chiral forces are compatible with the VMC one-body results for AV18+UIX~\cite{Wiringa:1998} up to high momentum. A similar physical picture emerges for both \isotope[12]{C} and \isotope[16]{O}, for which the positions of the first diffraction peaks in the form factors are well reproduced by the hard potentials within the theoretical error bands, and deviations from the experimental data occur at high momentum only. For the soft $E\mathbbm1$ interaction instead, the description of the charge form factor in \isotope[16]{O} is less accurate, with the position of the first diffraction peak overestimated and the slope of $F_L(q)$ at $q=0$ underestimated, consistent with the smaller charge radius compared to the experimental value. The difference with respect to the experimental results is, however, not as dramatic as for the soft $E\tau$ potential (see Ref.~\cite{Lonardoni:2017afdmc}), and it is mostly covered by the very large theoretical error bands. These, in particular, are dominated by the LO contributions to the theoretical error estimate, as shown by the difference between the lighter and darker bands in the form factor. 
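At the one-body level, and neglecting the nucleon electric form factors and the relativistic factor in~\cref{eq:ff}, $F_L(q)$ reduces to the normalized Fourier transform of the point-proton density. A minimal Python sketch with a purely illustrative Gaussian density (a toy model, not an AFDMC output; for a Gaussian, $F_L(q)=e^{-q^2b^2/4}$ exactly, which provides a built-in check):

```python
import numpy as np

def rho_tilde(q, r, rho):
    """Fourier transform of a spherical one-body density:
    rho~(q) = 4 pi * integral dr r^2 j0(q r) rho(r), j0(x) = sin(x)/x."""
    f = r**2 * np.sinc(q * r / np.pi) * rho  # np.sinc(x) = sin(pi x)/(pi x)
    return 4.0 * np.pi * 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(r))

# Illustrative Gaussian point-proton density normalized to Z
Z, b = 8, 1.8  # toy width (fm) for a 16O-like nucleus
r = np.linspace(0.0, 12.0, 2001)
rho_p = Z * np.exp(-(r / b)**2) / (np.pi**1.5 * b**3)

q_vals = np.linspace(0.0, 3.0, 61)  # fm^-1
F_L = np.array([rho_tilde(q, r, rho_p) / Z for q in q_vals])
# F_L[0] is 1 by normalization; the position of the first diffraction
# minimum would be set by the density profile in a realistic calculation.
```

In the actual calculation, $\tilde\rho_p$ and $\tilde\rho_n$ are built from the AFDMC densities of~\cref{eq:rho_N} and dressed with Kelly's $G_E^N$ parametrization.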
Finally, it is interesting to note that for all three systems, the local chiral interactions with the hard cutoff $R_0=1.0\,\rm$ fm give the same physical description of the charge form factor as the phenomenological potentials, provided that one-body charge operators only are included in the calculation. Two-body densities are related to the Coulomb sum rule, which is defined as the energy integral of the electromagnetic longitudinal response function. As with the charge form factor, the Coulomb sum rule can be written as a ground-state expectation value~\cite{Mcvoy:1962}, leading to the relation: \begin{align} S_L(q)=&\frac{1}{Z} \frac{1}{G_E^{p\,2}(Q_{\rm qe}^2)}\frac{1}{1+Q_{\rm qe}^2/(4 m_N^2)} \nonumber \\ & \times\Big\{ G_E^{p\,2}(Q_{\rm qe}^2)\,\Big[\tilde{\rho}_{pp}(q)+Z\Big] \nonumber \\ & +G_E^{n\,2}(Q_{\rm qe}^2)\,\Big[\tilde{\rho}_{nn}(q)+(A-Z)\Big] \nonumber \\ & +2\,G_E^p(Q_{\rm qe}^2)\,G_E^n(Q_{\rm qe}^2)\,\tilde{\rho}_{np}(q) \nonumber \\ & -\Big[G_E^p(Q_{\rm qe}^2)\,\tilde{\rho}_p(q)+G_E^n(Q_{\rm qe}^2)\,\tilde{\rho}_n(q)\Big]^2 \Big\}, \label{eq:sl} \end{align} where $\tilde{\rho}_{\rm{NN}}(q)$ is the Fourier transform of the two-body point-nucleon densities defined in \cref{eq:rho_NN}, and $Q^2_{\rm qe}=\vb{q}^2-\omega^2_{\rm qe}$, with $\omega_{\rm qe}$ the energy transfer corresponding to the quasielastic peak. Although the Coulomb sum rule is not directly an experimental observable (experimental information can be however extracted from the longitudinal response function, as done in Ref.~\cite{Lovato:2016} for \isotope[12]{C}), it is still an interesting quantity for the study of integral properties of the response of a nuclear many-body system to an external probe. \begin{figure}[b] \includegraphics[width=\linewidth]{sl_10_et.pdf} \caption[]{Coulomb sum rule for $4\le A\le16$. Lines refer to AFDMC results for the N$^2$LO $E\tau$ potential with cutoff $R_0=1.0\,\rm fm$. 
Solid symbols are the GFMC one- plus two-body results for AV18+IL7~\cite{Lovato:2013,Lonardoni:2017cvmc}. Shaded areas indicate the statistical Monte Carlo uncertainty.} \label{fig:sl} \end{figure} We report in~\cref{fig:sl} the Coulomb sum rule for $4\le A\le16$ using the N$^2$LO $E\tau$ potential with cutoff $R_0=1.0\,\rm fm$. The GFMC results for \isotope[4]{He} and \isotope[12]{C}~\cite{Lovato:2013,Lonardoni:2017cvmc} employing the AV18+IL7 potential are also shown for comparison. The discrepancy between the AFDMC and GFMC results above $\approx 3\,\rm fm^{-1}$ is due to the missing two-body currents in the present calculation. For lower momenta the description of the sum rule is remarkably consistent with that provided by phenomenological potentials. Moreover, the results for \isotope[12]{C} are compatible with the available experimental data as extracted in Ref.~\cite{Lovato:2016}, as shown already in Ref.~\cite{Lonardoni:2017afdmc}. All $p$-shell nuclei show a similar profile for $S_L(q)$, with a peak around $1.6\,\rm fm^{-1}$, slightly more pronounced for open-shell systems $(A=6,12)$. The same observations hold for the $E\mathbbm1$ parametrization of the $3N$ force and for both cutoffs, with the Coulomb sum rule of \isotope[4]{He} and \isotope[6]{Li} very close to those shown in~\cref{fig:sl}. An exception is the case of \isotope[16]{O}, for which $S_L(q)$ is largely different for the soft cutoff, consistent with the results for the charge form factor, as already shown in Ref.~\cite{Lonardoni:2017afdmc}. \section{Summary} \label{sec:summ} We presented a detailed description of the AFDMC method for nuclei, with particular attention given to the construction of the trial wave function, the propagation of $3N$ forces, and the constrained/unconstrained imaginary-time evolution. We reported a series of test results for these technical aspects of the algorithm. 
We performed AFDMC calculations of nuclei with $3\leq A\leq16$ using local chiral EFT interactions up to N$^2$LO, completing and expanding the results of Ref.~\cite{Lonardoni:2017afdmc}. Both two- and three-body potentials have been considered, the latter described by two different operator structures, namely $E\tau$ and $E\mathbbm1$. Two coordinate-space cutoffs, $R_0=1.0\,\rm fm$ and $R_0=1.2\,\rm fm$, have been used, with results presented at each order of the chiral expansion and for each $3N$ parametrization. To this aim, a new fit of the three-body LECs $c_D$ and $c_E$ has been presented for the $E\mathbbm1$ parametrization with the soft cutoff $R_0=1.2\,\rm fm$. Binding energies and charge radii were shown for all the systems, and results for the charge form factor in \isotope[6]{Li}, \isotope[12]{C}, and \isotope[16]{O} were also reported. For all these observables, the AFDMC results were quoted with statistical Monte Carlo errors and theoretical errors coming from the truncation of the chiral expansion. Finally, the Coulomb sum rule for systems with $4\leq A\leq 16$ was also shown. The outcomes of this work confirm that local chiral interactions fit to few-body observables give a very good description of the ground-state properties of nuclei up to \isotope[16]{O}. This is true for both harder and softer interactions, even though the latter imply larger theoretical uncertainties coming from LO contributions to the truncation error estimate. We found that the overbinding in \isotope[16]{O} for the soft $E\tau$ parametrization of the $3N$ force is generated by large attractive contributions driven by the large value of the LEC $c_D$. Therefore, it will be very interesting to explore further $3N$ fits and operator choices in heavier nuclei as well as dense matter. \acknowledgments{We thank I.~Tews, A.~Lovato, A.~Roggero, and R.~F.~Garcia Ruiz for many valuable discussions. The work of D.L. was supported by the U.S. 
Department of Energy, Office of Science, Office of Nuclear Physics, under the FRIB Theory Alliance Grant Contract No. DE-SC0013617 titled ``FRIB Theory Center - A path for the science at FRIB'', and by the NUCLEI SciDAC program. The work of S.G. and J.C. was supported by the NUCLEI SciDAC program, by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract No. DE-AC52-06NA25396, and by the LDRD program at LANL. K.E.S. was supported by the National Science Foundation grant PHY-1404405. The work of J.E.L. and A.S. was supported by the ERC Grant No.~307986 STRONGINT and the BMBF under Contract No.~05P15RDFN1. Computational resources have been provided by Los Alamos Open Supercomputing via the Institutional Computing (IC) program, by the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-05CH11231, and by the Lichtenberg high performance computer of the TU Darmstadt.}
\section{Introduction}\label{s:intro} Recent progress in low-temperature plasma physics, both in experiments and in applications \cite{plasma-road-map-17, doe-report-17}, creates an urgent need for accurate simulations of the plasma-solid interface. Even though there have been remarkable recent advances in both plasma modeling and surface science simulations, the combination of the two is still at an early stage. Current simulations in low-temperature plasma physics often omit plasma-surface processes or treat them phenomenologically. Let us take as an example the treatment of neutrals. In advanced kinetic simulations based on the Boltzmann equation, e.g.~\cite{hagelaar_2005,donko_2016}, or particle-in-cell (PIC-MCC) simulations, e.g.~\cite{ebert_2016,becker_2017}, neutrals are treated as a homogeneous background, and their interaction with surfaces is not included in the description. However, the effect of energetic neutrals may be crucial for secondary electron emission (SEE), as was demonstrated in PIC simulations of Derszi \textit{et al.}, where neutrals above a threshold energy of 23\,eV were traced \cite{derszi_2015}. The second example is the impact of the properties of the surface, such as surface roughness, oxidation, or coverage by an adsorbate, on the behavior of the plasma. Using realistic surface properties--as they emerge upon contact with a plasma--drastically alters the plasma-surface interaction compared to the case of an ideal (i.e. clean and perfect) surface. This has been studied in great detail for the case of SEE by Phelps and Petrovic \cite{phelps_1999}, and it was taken into account in PIC simulations via modified cross sections in Ref.~\cite{derszi_2015}. In this work it was found that a realistic (``dirty'' \cite{phelps_1999}) surface gives rise to a significant increase of the ion density, even far away from the electrode. 
The data of Ref.~\cite{phelps_1999} suggest that there remain substantial uncertainties in the values of the SEE coefficient. In a real plasma treatment experiment, a ``clean'' surface may correspond to the initial state of an electrode which, ultimately, turns into a ``dirty'' metal that is covered by adsorbates or an oxide layer. Similarly, Li \textit{et al.} studied the effect of surface roughness on the field emission by including a phenomenological geometric enhancement factor \cite{li_2013}. The third example is related to plasma electrons hitting a solid surface. The standard assumption in simulations is that these electrons are lost without reflection, e.g. \cite{sheehan_2013}, and only recently a microscopic calculation of the electron sticking coefficient was performed by Bronold and Fehske \cite{bronold_prl15}. They also studied the charge transfer when a strontium ion from the plasma approaches a gold surface \cite{pamperin_prb15}. \begin{figure} \begin{center} \hspace{-0.cm}\includegraphics[width=0.5\textwidth]{new_picture_without_arrows.png} \end{center} \caption{Sketch of the plasma-solid interface, which comprises the plasma sheath and the plasma-facing layers of the solid \cite{interface}. Among the relevant processes are diffusion, adsorption (``sticking'') and desorption of neutrals, penetration (stopping) of ions, and electron transfer between solid and plasma. Typically, in plasma simulations the effect of the surface is described by empirical parameters such as the SEE coefficient, sticking coefficients, sputter rates, etc. The mutual influence of the plasma on the solid and vice versa is a major challenge for a predictive theoretical treatment and requires a combination of various theoretical approaches, see Fig.~\ref{fig:theory}.} \label{fig:psi} \end{figure} The latter quantum-mechanics-based approaches are very promising, but they are still at a very early stage of development. 
While they clearly indicate the importance of an accurate treatment of plasma-surface processes, they cannot yet make reliable predictions. The reason is that a huge variety of complex physical and chemical processes occur at the plasma-solid interface, including secondary electron emission, sputtering, neutralization and stopping of ions, adsorption and desorption of neutral particles, as well as chemical reactions; for an illustration, see Fig.~\ref{fig:psi}. Moreover, the typical particle densities in the plasma and the solid differ by many orders of magnitude, leading to completely different physics on the two sides of the interface: low-density gas-like behavior in the plasma versus quantum dynamics of electrons in the solid. Furthermore, the density gap gives rise to a huge gap in the relevant space and time scales, cf.~Fig.~\ref{fig:theory}. \begin{description} \item[The first step] to tackle these problems is to have a look at the theoretical approaches that have been developed in solid state physics to describe a surface that is exposed to a plasma. These methods are based on density functional theory (DFT) and various additional many-body methods that allow one to treat correlated materials. However, these methods typically focus on the ground state properties of the solid. In contrast, in the presence of a plasma, particle and energy fluxes to and from the solid occur, giving rise to nonequilibrium effects and high excitation. Therefore, \item[as the second step,] one has to consider nonequilibrium methods that describe the solid and the plasma-solid interface under these conditions. These include time-dependent \textit{ab initio} (quantum) methods: time-dependent density functional theory, nonequilibrium Green functions, and quantum kinetic theory. However, these approaches are extremely time consuming and allow one to cover only small systems for a few femtoseconds.
Therefore, \item[the third step] consists in additional simplifications, mostly in eliminating the quantum effects from the dynamics of the interface. This leads to semi-classical molecular dynamics (MD) simulations for the heavy particles, where all quantum effects are ``absorbed'' into effective pair interaction potentials or force fields. With accurate force fields (typically based on \textit{ab initio} DFT simulations), the resulting MD simulations are very accurate and of first principle character (fully solving Newton's equations). This is still very challenging computationally, because a stable solution of these equations requires a time step of about one femtosecond. Therefore, there is no straightforward way to reach experimentally relevant time and length scales, even on supercomputers. This leads to \item[step four:] invoking additional physical ideas that allow one to either \textit{accelerate} or \textit{extend} these first principles MD simulations, without compromising the accuracy, to the time scales of interest. Even though this may seem impossible, a number of powerful and successful concepts have been developed in statistical physics, many-body physics, quantum chemistry and surface science. One of the goals of this paper is to present an overview of those concepts that might be of relevance for plasma-surface simulations in the near future. \end{description} This paper is organized as follows. In Sec.~\ref{s:challenges} we give a brief summary of the theoretical methods that are required to accurately simulate plasma-surface processes and discuss their problems. In Sec.~\ref{s:concepts} we give a brief overview of the acceleration approaches that are of potential relevance for plasma-surface interaction. The first group of methods---acceleration of phase space sampling---is discussed in Sec.~\ref{s:phase_space}, whereas the second group---coarse graining approaches---is the topic of Sec.~\ref{s:coarse_graining}.
Then we discuss in more detail one of the latter approaches---Dynamical freeze out of dominant modes---in Sec.~\ref{s:freezout}. The conclusions are given in Sec.~\ref{s:conclusion1}. \begin{figure*} \begin{center} \hspace{-0.cm}\includegraphics[width=0.78\textwidth]{theory_psst.pdf} \end{center} \vspace{-2.0cm} \caption{Theoretical methods for the description of the plasma-solid interface \cite{interface}, as sketched in Fig.~\ref{fig:psi}. Some of the processes of interest are listed in the figure. Note the dramatically different length scales and the very different properties of plasma and solid, requiring totally different methods to be applied on the plasma and the solid side. Standard methods for the bulk solid are density functional theory (DFT), the Bethe-Salpeter equation (BSE), dynamical mean field theory (DMFT), and quantum Monte Carlo (QMC). To simulate surface processes (central box), additional non-adiabatic (time-dependent) approaches are required: molecular dynamics (MD), kinetic Monte Carlo (KMC), quantum kinetics, Born-Oppenheimer MD (BO-MD), time-dependent DFT (TDDFT), nonequilibrium Green functions (NEGF) and \textit{ab initio} NEGF (AI-NEGF). To account for the complex interactions between plasma and solid, the corresponding methods have to be properly linked: plasma simulations should provide the momentum-dependent fluxes $\textbf{J}^p_a$ of all species ``a'' to the surface, whereas surface simulations deliver the corresponding fluxes $\textbf{J}^s_a$ that leave the surface. Bulk solid simulations provide the band structure $\epsilon_\lambda$ and reactive force fields (FF), whereas surface simulations return the updated surface morphology ``SM'', chemical modifications etc. For details see Sec.~\ref{s:challenges}.
This paper focuses on the MD approach and on the question of how to increase its efficiency, and provides examples for the fluxes of atoms $\textbf{J}^s_A$ (Sec.~\ref{ss:md-re}) and the time-dependent surface morphology (Sec.~\ref{ss:spa}).} \label{fig:theory} \end{figure*} \section{Challenges in the simulation of plasma-solid interaction}\label{s:challenges} An accurate simulation of plasma-surface processes, first of all, requires a reliable description of the solid (step one above). To this end, one first needs to obtain the ground state properties of the solid---the energy spectrum (band structure) and the Kohn-Sham orbitals---which is done by density functional theory (DFT) simulations, cf. the right part of Fig.~\ref{fig:theory}. However, DFT is known to have problems, in particular, in treating materials with strong electronic correlations, including various oxides. Here, many-body approaches are being used that include the Bethe-Salpeter equation (BSE), dynamical mean field theory (DMFT) or quantum Monte Carlo (QMC). If the solid comes in contact with a low-temperature plasma (step two above), energetic electrons and ions may excite the electrons of the solid and the lattice. This is not captured by ground state DFT but requires time-dependent extensions, cf. the approaches listed in the central box of Fig.~\ref{fig:theory}. Recently, some elementary processes, such as the impact of ions (stopping power), the neutralization of ions at a surface, and chemical reactions, were studied by \textit{ab initio} quantum simulations. This includes Born-Oppenheimer MD (coupling of density functional theory for the electrons to MD for the ions) and time-dependent DFT (TDDFT), e.g. \cite{brenig-pehlke_08,zhao_prb_15}. Nonequilibrium Green functions (NEGF) simulations are an alternative that allows for a more accurate treatment of electronic correlations \cite{balzer_prb_16, schluenzen_cpp_16, balzer_prl_18}.
For completeness, we also mention \textit{ab initio} NEGF (AI-NEGF)---a recent combination of ground state DFT and NEGF \cite{marini}. However, TDDFT, NEGF and AI-NEGF simulations are extremely CPU-time demanding and can treat only small systems for short time scales. For example, a Born-Oppenheimer MD simulation requires a time step around 0.1\,fs, which allows one to treat on the order of $100\dots 1000$ atoms for $1\dots 100$ picoseconds during a week of simulations on massively parallel hardware, e.g. \cite{hutter_12}. The computational demand of TDDFT and NEGF is several orders of magnitude larger. At the same time, for many processes an explicit quantum modeling of the electron dynamics is not necessary. This concerns, in particular, the dynamics of neutral particles on a surface: diffusion, adsorption and desorption, or many chemical reactions. Here, often a semi-classical MD simulation is performed (step three above)---a technique that is well developed in surface science and in theoretical chemistry, e.g. \cite{gross-book}. Similarly, MD simulations are well established in low-temperature plasmas, e.g. to compute first principle structural properties of dust particles \cite{com-plasmas_14} or the diffusion coefficient in a strongly correlated magnetized plasma \cite{ott_prl_11}. In each case, the quality of the MD results depends on the accuracy of the effective pair potentials or force fields, which are usually derived from microscopic quantum simulations or adjusted to reproduce experimental data. These MD simulations are not \textit{ab initio} anymore (they neglect quantum effects in the dynamics), but still carry first principle character (they solve Newton's equations exactly), so they will be referred to as first principle MD simulations below. Typically, they require a time step of the order of $1$\,fs and can treat huge systems. For example, Ref.~\cite{nakano_08} reported simulations of a system containing $10^{11}$ atoms that reach times of the order of several milliseconds.
However, this is presently only possible on the largest supercomputers or on dedicated hardware, e.g.~\cite{piana_13}. Despite these impressive records, it is clear that in the near future MD simulations for plasma-surface processes will remain many orders of magnitude short of the time scales and system sizes needed to compare with experiments. In plasma physics, these are minutes and (at least) micrometers, respectively. Therefore, additional strategies are needed. One way is, of course, the use of additional approximations leading to simplified models, at the expense of accuracy and reliability. Here, we discuss another approach, which aims at retaining the first principles character of the MD simulations (step four above). The idea is to invoke additional information on the system properties that allows one to effectively accelerate the simulations and/or to extend them to larger scales \textit{without losing accuracy}. There exists a variety of acceleration strategies including hyperdynamics \cite{voter_97}, metadynamics \cite{parrinello_pnas_02} or temperature accelerated dynamics \cite{Sorensen2000}. A more recent concept is collective variable driven hyperdynamics \cite{Bal2015}, which was reported to achieve, for some applications, speed-ups of nine orders of magnitude. Another approach, developed by the present authors \cite{abraham_jap_16, abraham_cpp_18}, uses a \textit{selective acceleration of some relevant processes} and also achieved speed-ups exceeding a factor $10^9$. Another direction of development does not aim at accelerating the \textit{ab initio} simulations but at extending them to longer times by a suitable combination with analytical models \cite{franke_prb_10, paper1}. These methods will be called below \textit{Dynamical freeze out of dominant modes} (DFDM).
The goal of this article is to present an overview of these very diverse acceleration/extension developments, to discuss their respective strengths and limitations, and to outline future improvements and extensions for applications in plasma-surface interaction. \begin{figure} \begin{center} \hspace{-0.cm}\includegraphics[width=0.52\textwidth]{acceleration.pdf} \end{center} \vspace{-.80cm} \caption{Main potential strategies to accelerate/extend first principles (semiclassical) MD simulations for plasma-surface applications. These strategies are primarily applicable to the dynamics of neutral particles. The left column contains common approaches in surface science MD. The right column sketches strategies that are motivated by many-body theory and plasma physics. The red items are discussed in detail in this paper.} \label{fig:scheme} \end{figure} \section{Concepts to accelerate and/or extend \textit{ab initio} MD simulations}\label{s:concepts} As discussed in the introduction, first principles MD is based on the use of accurate pair potentials or force fields. The steepness of these force fields leads to a rather small time step, of the order of one femtosecond, that has to be used to achieve convergent simulations. As a consequence, the total simulation duration falls far short of experimentally relevant times of seconds and even minutes, and, therefore, acceleration strategies are of high interest. This problem is not specific to plasma-surface interaction; it also occurs in the study of phase transitions, in the chemistry of macromolecules, in biology, and in surface physics and surface chemistry. These diverse communities have developed a large number of strategies to accelerate MD simulations, to improve the treatment of rare events, or to extrapolate to longer times or larger systems. These strategies can be loosely grouped into two classes, which are depicted in Fig.~\ref{fig:scheme}.
The first group (left column) includes methods that accelerate MD simulations by overcoming bottlenecks, such as rare events or trajectories being trapped in local potential minima. The second group of methods has been termed ``coarse graining'' approaches (right column in Fig.~\ref{fig:scheme}). Here the idea is to average over fast processes or small length scales that are not of interest for the physical observables. This is complementary to the first group and promising for plasma-surface simulations. \section{Acceleration of phase space sampling}\label{s:phase_space} We start by discussing the concepts listed in the left column of Fig.~\ref{fig:scheme}. The approaches discussed in this section are used to treat systems which reside in local energy minima for a long time before any event of interest occurs. Metadynamics is presented in section~\ref{ss:metadyn}. The methods presented in sections~\ref{ss:hd}--\ref{ss:ParRep}---hyperdynamics, temperature accelerated dynamics and parallel replica dynamics---have been proposed by Voter and co-workers. They have in common that they aim at an effective reduction of the waiting time between successive infrequent events. A different approach has been introduced in Ref.~\cite{abraham_jap_16}. It relies on treating the diffusive motion of atoms on surfaces exclusively with Langevin dynamics. This method allows for a selective process acceleration and is discussed in Sec.~\ref{ss:spa}. In the following, we give a brief overview of these methods; for further details, we refer to recent reviews \cite{abraham_epjd_17,neyts2017molecular,Neyts2012mdmc,Perez2009_voter,Voter2002}. \subsection{Metadynamics}\label{ss:metadyn} Metadynamics is a technique to enhance the computation of a multidimensional free energy surface (FES) of a many-body system. In most cases the FES is far too complicated to be directly computed.
In 2002, Laio and Parrinello introduced a method that allows for an efficient computation of the FES by means of molecular dynamics~\cite{parrinello_pnas_02}, which has become a widely used tool in computational (bio-)physics, chemistry and materials science. An overview is given, e.g., in Ref.~\cite{laio_ropp_08}. The first key idea (and fundamental assumption) of the method is that the free energy $F$ of a system with a set of coordinates $\mathbf{x}$, the potential $V(\mathbf{x})$ and the inverse temperature $\beta = 1 / (k_\mathrm{B}T)$ can be expressed as a function of just a few collective variables $\mathbf{S} = ( S_1, \ldots, S_d)$ according to \begin{equation} F = - \frac{1}{\beta} \ln \left( \int \exp\left[-\beta V(\mathbf{x}) \right] \delta[\mathbf{S}-\mathbf{S}(\mathbf{x})] \, \mathrm{d}\mathbf{x} \right) \,. \end{equation} If the collective variables provide an adequate representation of the whole configuration space, the FES can be efficiently explored by performing an MD simulation in which a second key idea is applied: a history-dependent small ``bias potential'' $\Delta V$ is added, i.e. $V \to V + \Delta V$. This potential successively forces the system to leave every minimum it encounters in the FES, thus avoiding bottlenecks due to rare events. In a very simple fashion, this bias potential can be constructed as a sum of weighted Gaussian functions, \begin{equation} \Delta V(\mathbf{S},t)=w \sum_{t'\in \mathcal{T}} \exp \left(-\sum_{i=1}^d \frac{[S_i(t)-S_i(t')]^2}{2(\Delta s_i)^2} \right) \,, \end{equation} where the set $\mathcal{T}$ comprises all times $t'$ before the time $t$ at which the sum of Gaussian functions has been extended by one term. The interval between successive creations of new Gaussian functions, as well as their weights $w$ and widths $\Delta s_i$, should be chosen such that a compromise between computation time and accuracy is achieved.
After a sufficiently long simulation time, the potential landscape $V+\Delta V$ levels out and becomes flat. This can be recognized in the simulation by the ``diffusive'' behavior of the considered collective variables \cite{laio_ropp_08}. Then the negative of the final bias potential $\Delta V$ provides an accurate estimator of the free energy $F$. In contrast to the methods discussed below, the use of metadynamics alone does not yield correct state-to-state dynamics. Nevertheless, it can be utilized for that purpose if it is combined with other methods, such as the collective variable driven hyperdynamics (CVHD) described in section~\ref{ss:hd}. Metadynamics is a versatile method, and it allows for a relatively easy implementation. However, the choice of the collective variables can be very difficult. \subsection{Hyperdynamics}\label{ss:hd} Using hyperdynamics \cite{Voter1997}, the state-to-state dynamics of an infrequent event system is accelerated by adding a space-dependent bias potential $\Delta V(\mathbf{r})$ to the potential energy surface $V(\mathbf{r})$. Thereby, the energy barriers between different states are reduced so that transitions occur more often. For the applicability of the method, it is required that both the unbiased and the biased system dynamics obey the so-called transition state theory (TST) \cite{Marcelin1915_tst,Eyring1935_TST}. Furthermore, the bias potential $\Delta V(\mathbf{r})$ must be zero at all dividing surfaces, and it must be chosen such that the correct relative probabilities of the transitions are maintained. The construction of an appropriate bias potential can be an elaborate task in many cases. In the original publication~\cite{Voter1997}, the diffusion of an Ag$_{10}$ cluster on an Ag(111) surface was investigated by constructing $\Delta V$ as a function of the lowest eigenvalue of the Hessian matrix. In this way, boost factors of roughly \num{8e3} were achieved.
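The bookkeeping behind hyperdynamics is simple: while the trajectory evolves on the biased surface, the physical time advances by $\Delta t\, e^{\Delta V(\mathbf{r})/k_\mathrm{B}T}$ per MD step, and the boost factor is the average of this exponential. A schematic sketch with made-up numbers (a toy bias and a random stand-in for the trajectory, not the parameters of the Ag cluster study):

```python
import math
import random

random.seed(1)
kT = 0.025   # eV, roughly room temperature
dt = 1.0     # fs, MD time step

def delta_v(x):
    """Toy bias potential: positive inside the basin and zero at the
    dividing surface (here |x| = 1), as hyperdynamics requires."""
    return 0.2 * max(0.0, 1.0 - x * x)   # eV

# stand-in for a trajectory sampling positions inside the basin
hypertime, nsteps = 0.0, 10_000
for _ in range(nsteps):
    x = random.uniform(-1.0, 1.0)
    hypertime += dt * math.exp(delta_v(x) / kT)

boost = hypertime / (nsteps * dt)   # accumulated boost factor
```

Even this modest 0.2 eV bias yields a boost of several hundred, which illustrates why carefully constructed bias potentials can reach the large factors quoted in the text.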
Another approach introduced by Fichthorn \textit{et al.}~\cite{Fichthorn2009}, the so-called bond-boost method, is to let $\Delta V$ be a function of the nearest-neighbor bond lengths in a solid. In a study of the diffusion of Cu atoms on a Cu(001) surface in the temperature range between \SI{230}{K} and \SI{600}{K}, boost factors of up to $10^6$ could be achieved \cite{Miron2003bondboost}. Even higher boost factors of up to $10^9$ were obtained for the same system using the CVHD method introduced by Bal and Neyts \cite{Bal2015}. The idea of this method is to use the concept of metadynamics for an incremental build-up of a bias potential depending on just one collective variable, i.e., a variable that describes the relevant processes in the system. The CVHD method can be applied whenever the requirements of hyperdynamics are fulfilled and an appropriate collective variable can be found. Even though the latter may be difficult in some cases, the CVHD method has the potential to be applied to many kinds of different systems. For example, it has already been used to study the folding of a polymer chain model \cite{Bal2015} and the pyrolysis and oxidation of $n$-dodecane \cite{Bal2016fuel}. \subsection{Temperature accelerated dynamics}\label{ss:tad} The idea of temperature accelerated dynamics (TAD) \cite{Sorensen2000} is to make transitions occur more often by performing the simulation at an elevated temperature $T_\mathrm{high}$ instead of the temperature of interest $T_\mathrm{low}$. Because this procedure alone would induce wrong ratios of escape probabilities, an additional mechanism is applied to correct for this. The method can only be applied if the system obeys the harmonic TST (HTST). Thus, it is more restrictive than hyperdynamics and parallel replica dynamics (section~\ref{ss:ParRep}), for which the harmonic approximation is not necessary. TAD is carried out by performing ``basin constrained molecular dynamics'' (BCMD) for each system state. 
Whenever a transition occurs at $T_\mathrm{high}$, the escape path and the corresponding escape time are stored, but the involved particles are reflected back to their initial energy basin and the dynamics is continued. For each observed transition at time $t_\mathrm{high}$, the transition time is extrapolated to the lower temperature according to \begin{equation} t_\mathrm{low} = t_\mathrm{high} \exp \left\{ E_\mathrm{a} \left(\frac{1}{k_\mathrm{B}T_\mathrm{low}} - \frac{1}{k_\mathrm{B}T_\mathrm{high}} \right) \right \}\,, \end{equation} where $E_\mathrm{a}$ is the energy barrier of the transition. The BCMD routine is stopped when the simulation time reaches \begin{equation} \tilde{t}_\mathrm{high} = \frac{\ln(1/\delta)}{\nu_\mathrm{min}} \left( \frac{\nu_\mathrm{min}t_\mathrm{low,min}}{\ln(1/\delta)} \right)^{T_\mathrm{low} / T_\mathrm{high}}\,, \end{equation} where $t_\mathrm{low,min}$ is the minimum of the extrapolated transition times at $T_\mathrm{low}$, $\nu_\mathrm{min}$ is a guess for a lower bound of the pre-exponential factors occurring in the rate expressions for all possible transitions, and $\delta$ is a pre-defined bound on the probability that a transition observed after $\tilde{t}_\mathrm{high}$ would replace the transition at the current minimum $t_\mathrm{low,min}$. As a result of the procedure, the simulation time is advanced by $t_\mathrm{low,min}$, and the process corresponding to this time is executed. The boost factor that can be achieved by means of TAD depends critically on the ratio of $T_\mathrm{high}$ to $T_\mathrm{low}$. As one cannot choose arbitrarily high values of $T_\mathrm{high}$ without violating the requirements of HTST, the method is particularly effective for systems at low temperature. For example, a boost factor of $10^7$ was achieved in a simulation of the growth of a Cu(100) surface at $T_\mathrm{low} = \SI{77}{K}$ \cite{Voter2002}.
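The two formulas above are easily put into code; the following sketch uses illustrative numbers (a hypothetical 0.5 eV barrier), not the parameters of the cited studies:

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def extrapolate(t_high, e_a, temp_low, temp_high):
    """Map a transition time observed at temp_high to the
    corresponding time at temp_low (first formula above)."""
    return t_high * math.exp(e_a * (1.0 / (KB * temp_low)
                                    - 1.0 / (KB * temp_high)))

def stop_time(t_low_min, nu_min, delta, temp_low, temp_high):
    """BCMD stopping time (second formula above)."""
    ln = math.log(1.0 / delta)
    return (ln / nu_min) * (nu_min * t_low_min / ln) ** (temp_low / temp_high)

# a 0.5 eV barrier crossed after 1 ns at 600 K corresponds to
# tens of microseconds at 300 K
t_low = extrapolate(1e-9, 0.5, 300.0, 600.0)
```

The exponential sensitivity of `extrapolate` to the barrier height and the temperature ratio is precisely the origin of the large boost factors quoted in the text.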
A recent example for a simulation at a higher temperature of $T_\mathrm{low} = \SI{500}{K}$ can be found in Ref.~\cite{Georgieva2011tad}, where the sputter deposition of Mg--Al--O films was studied. \subsection{Parallel replica dynamics}\label{ss:ParRep} While standard parallelization techniques are usually applied to extend the accessible system sizes, parallel replica dynamics (ParRep) allows one to use parallel computing to extend the time scales, too \cite{Voter1998}. Among the three methods hyperdynamics, TAD and ParRep, ParRep is the most accurate one, and a higher boost can be trivially achieved by increasing the number of processors $N_\mathrm{p}$ \cite{Perez2009_voter,Neyts2012mdmc}. ParRep can be applied to any infrequent event system with first-order kinetics, i.\,e., with exponentially distributed first-escape times of all occurring processes, \begin{equation} f(t) = \lambda \exp \left(-\lambda t \right) \,. \end{equation} The ParRep procedure starts by replicating and dephasing the system on each available processor. Each copy of the system is propagated independently and in parallel on each processor until a transition is detected on one of the processors. Then, the system clock is advanced by the sum of the $N_\mathrm{p}$ individual simulation times, and the global system state is set to the state reached after the observed transition. Subsequently, a short serial run is performed to allow for the occurrence of correlated events. After that, the whole procedure is repeated. As the number of processors is one limiting factor of the achievable speed-up, the ParRep method is often less effective than hyperdynamics and TAD. Nevertheless, it is simple to implement, and it can be combined with other acceleration methods. Therefore, ParRep has become a valuable tool in the field of computational materials science \cite{Perez2015parrep}. 
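The time accounting of ParRep rests on this first-order kinetics: the minimum of $N_\mathrm{p}$ independent exponential waiting times is itself exponentially distributed with an $N_\mathrm{p}$-fold rate, so the global clock may be advanced by the summed replica times. A minimal sketch of this bookkeeping (hypothetical escape rate; the dephasing and correlated-event stages are omitted):

```python
import random

random.seed(2)

def parrep_step(rate, n_proc):
    """One ParRep cycle: all replicas run until the first transition;
    the global clock advances by the summed replica times."""
    times = [random.expovariate(rate) for _ in range(n_proc)]
    t_wall = min(times)           # wall-clock time until the first escape
    t_global = n_proc * t_wall    # every replica contributed t_wall
    return t_wall, t_global

# on average the global time per event is 1/rate, while each processor
# only waits about 1/(n_proc * rate) of wall-clock time
events = [parrep_step(rate=0.1, n_proc=8) for _ in range(20_000)]
mean_wall = sum(t for t, _ in events) / len(events)
mean_global = sum(tg for _, tg in events) / len(events)
```

With a rate of $0.1$ per unit time and eight replicas, the mean global time per event stays at $1/\lambda = 10$ while the wall-clock wait per event shrinks eightfold, which is the essence of the method.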
For example, it has been applied to simulate the diffusion of H$_2$ in crystalline C$_{60}$ \cite{Uberuaga2003ex_parrep}, the diffusion of lithium in amorphous polyethylene oxide \cite{Duan2005ex_parrep}, and the crack-tip behavior in metals \cite{Warner2007ex_parrep}. \subsection{Selective process acceleration (SPA). MD simulation of gold film growth on a polymer surface}\label{ss:spa} In some cases, it is reasonable to assume that the motion of atoms on a surface or in a medium is approximately Brownian. This type of motion can be generated by solely performing Langevin dynamics for the particles of interest, while the other atoms and molecules of the background medium do not have to be explicitly included in the simulations. Here we consider, as an example, the deposition of gold atoms onto a polymer surface. The MD simulations tracked each individual atom, its diffusion on the surface, the emergence and growth of clusters and, eventually, the coalescence of the latter. A typical example is presented in Fig.~\ref{fig:abraham-snap}, which shows the cluster configuration at an early time (top) and at a later time (bottom). The influence of the plasma environment is mostly due to the impact of energetic ions. This leads to the formation of surface defects that trap incoming atoms and prevent their diffusion. The figure compares the cases of a weak plasma effect (right column, the fraction of atoms trapped equals $\gamma=0.001$) and a stronger effect (left column, $\gamma=0.05$). In the former case, a small number of large clusters is formed, due to cluster coalescence, whereas in the latter case the film is much more homogeneous, containing a much larger number of smaller clusters \cite{abraham_diss_18}. We underline that the lower snapshots in the figure refer to a film thickness of about one nanometer, which requires a deposition time of about two minutes. This is impossible to achieve with first principle MD simulations.
The key to achieving this extreme simulation duration, and to comparing with experiments, was selective process acceleration (SPA). The main ideas of the approach are explained in the following. \begin{figure}[h] \begin{center} \hspace{-0.2cm}\includegraphics[width=0.49\textwidth]{cluster_snap.png} \end{center} \caption{Time evolution of the morphology of a gold film deposited on a polystyrene substrate, from accelerated MD simulations. The figures show the 3D cluster configuration in real space. Top row: early time, corresponding to a film thickness of $0.03$\,nm. Bottom row: later time, see also Fig.~\ref{fig:abraham}. The influence of the plasma is varied from the left to the right column. Right: defect fraction due to energetic ions, $\gamma=0.001$. Left: $\gamma=0.05$. From Ref.~\cite{abraham_diss_18}. } \label{fig:abraham-snap} \end{figure} In the MD simulations, the isotropic Langevin equations of motion for all gold particles with mass $m$ and spatial coordinates $\mathbf{r}=(\mathbf{r}_1, \mathbf{r}_2, \ldots)$ are solved: \begin{equation} m \ddot{\mathbf{r}} = - \nabla U(\mathbf{r}) - \frac{m}{t_\mathrm{damp}} \dot{\mathbf{r}} + \sqrt{\frac{2m k_\mathrm{B}T}{t_\mathrm{damp}}} \mathbf{R}\,, \label{eq:eqmotion_langevin} \end{equation} where the potential $U$ describes the interaction between the gold particles. For this potential, \textit{ab initio} force field data are used (the MD simulations used the LAMMPS package). Further, $t_\mathrm{damp}$ has the role of a damping parameter, and $\mathbf{R}$ is a delta-correlated Gaussian random process. This random force and the viscous damping simulate the effect of the polymer on the heavy gold particles. If only neighborless atoms are considered, i.\,e., $\nabla U \equiv 0 $, the combination of the last two terms on the right side of Eq.~(\ref{eq:eqmotion_langevin}) induces a diffusive motion with the diffusion coefficient \begin{equation} \label{eq:diffusioncoeff} D=\frac{k_\mathrm{B}T\, t_\mathrm{damp}}{m}\,.
\end{equation} Thus, the use of Langevin dynamics allows one to control the speed of surface and bulk diffusion by choosing a specific combination of the temperature $T$ and the damping parameter $t_\mathrm{damp}$. An anisotropic diffusive motion can be generated if one generalizes Eq.~(\ref{eq:eqmotion_langevin}) by separately defining damping parameters $t_\mathrm{damp}^x$, $t_\mathrm{damp}^y$ and $t_\mathrm{damp}^z$ for each of the three spatial directions. Beyond that, it is possible to add a spatial dependence to the diffusion coefficient if one lets the damping parameters depend on the position of the particle. Based on the above considerations, Abraham \textit{et al}.~\cite{abraham_jap_16} developed a procedure to simulate the growth of nanogranular gold structures on a thin polymer film. Instead of simulating the polymer with explicit particle models, their method relies on performing Langevin dynamics for the gold atoms, with the simulation box being partitioned into three parts representing the upper part of the polymer bulk (I), the surface of the polymer (II) and the region above the surface (III). By choosing appropriate ratios of the damping parameters, one can make sure that the atoms spend most of the time in the surface layer (II), where they perform a random walk which is restricted to a small range of possible $z$-coordinates. The use of Langevin dynamics is restricted to regions (I) and (II); in region (III), the dynamics is purely microscopic. This allows one to add particles to the system by creating them at the top of the simulation box and assigning them a negative initial velocity. Therefore, it is possible to perform the simulation with values of the deposition rates $J_\mathrm{sim}$ and diffusion coefficients $D_\mathrm{sim}$ that are much higher than the values in typical experiments.
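The relation $D=k_\mathrm{B}T\,t_\mathrm{damp}/m$ can be verified numerically: free particles ($\nabla U \equiv 0$) integrated with the Langevin equation above indeed diffuse with the prescribed coefficient. A self-contained sketch in reduced units (a simple Euler-Maruyama integrator written for this illustration, not the LAMMPS setup of the actual simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

m, kT, t_damp = 1.0, 1.0, 1.0        # reduced units, so D = 1
dt, nsteps, npart = 0.01, 5000, 4000
D_theory = kT * t_damp / m

x = np.zeros(npart)
v = np.zeros(npart)
noise = np.sqrt(2.0 * kT / (m * t_damp) * dt)
for _ in range(nsteps):
    # Euler-Maruyama step of the free (grad U = 0) Langevin equation
    v += -v / t_damp * dt + noise * rng.standard_normal(npart)
    x += v * dt

t_total = nsteps * dt                # 50 damping times
D_est = float(np.mean(x ** 2)) / (2.0 * t_total)   # 1D Einstein relation
```

With these parameters, $D_\mathrm{est}$ reproduces $D_\mathrm{theory}=1$ to within a few percent; making $t_\mathrm{damp}$ direction- or position-dependent implements the anisotropic and spatially resolved variants described above.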
In Ref.~\cite{abraham_jap_16}, it was argued that the simulations yield an adequate description of a real experimental deposition process if the ratio $J_\mathrm{sim}/D_\mathrm{sim}$ is equal to the ratio $J_\mathrm{exp}/D_\mathrm{exp}$ of the corresponding quantities in the experiment. The idea behind this is that---at least at the early stage of the deposition process---the growth should be essentially determined by the average distance an atom travels on the surface between successive depositions of atoms. Hence, the absolute time of the process is assumed to be irrelevant. \begin{figure}[h] \begin{center} \hspace{-0.2cm}\includegraphics[width=0.5\textwidth]{md_au_ps_morphology.pdf} \end{center} \caption{Number density of gold clusters on a polymer film as a function of the effective film thickness. The data are taken from Ref.~\cite{abraham_jap_16}, where results of MD simulations with SPA are compared with data from the GISAXS experiments of Ref.~\cite{Schwartzkopf2015}. The upper horizontal axis of the plot shows the impressive effective simulation time reached by \textit{accelerating the deposition of atoms and the diffusion} of atoms on the surface. } \label{fig:abraham} \end{figure} The results presented in Ref.~\cite{abraham_jap_16} were obtained with a time step of $\SI{1}{fs}$ and a damping parameter for the diffusion in the $x$- and $y$-directions of $\SI{1}{ps}$. The temperature and the deposition rate were set to match the conditions of the experimental results in Ref.~\cite{Schwartzkopf2015} for the sputter deposition of gold on polystyrene. Using these parameters, the direct MD simulation time for the growth of a thin gold film is roughly $10^9$ times shorter than the corresponding time in the experiment. In other words, the duration of the MD simulations could be extended by nine orders of magnitude.
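The rescaling rule behind this argument reduces to one line of arithmetic; the numbers below are purely illustrative and are not the parameters of the cited experiments:

```python
def accelerated_rate(j_exp, d_exp, d_sim):
    """Deposition rate for the simulation such that
    J_sim / D_sim = J_exp / D_exp."""
    return j_exp * d_sim / d_exp

# hypothetical numbers: diffusion accelerated by nine orders of magnitude
j_exp = 1.0e-2                  # experimental deposition rate (arbitrary units)
speedup = 1.0e9
j_sim = accelerated_rate(j_exp, d_exp=1.0, d_sim=speedup)

# the physical process time shrinks by the same factor:
t_exp = 120.0                   # two minutes of real deposition
t_sim = t_exp / speedup         # of the order of 1e-7 s, accessible to MD
```

Accelerating diffusion and deposition by the same factor thus maps minutes of real deposition onto sub-microsecond simulated times, while keeping the morphology-determining ratio $J/D$ fixed.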
To verify the validity of such a dramatic shift of the time scales, comprehensive tests of the method were performed, see also Refs.~\cite{abraham_diss_18,abraham_epjd_17} for a discussion. In particular, as one accelerates only selected processes, i.e., the deposition of atoms and the diffusion of atoms on the surface, one has to make sure that the neglect of other processes, e.\,g., the relaxation of a cluster structure, does not lead to artifacts in the simulation results. In Ref.~\cite{abraham_jap_16}, the method was tested by comparing several quantities describing the evolution of the gold film morphology with the results of time-resolved in situ grazing incidence small-angle x-ray scattering (GISAXS) experiments of Schwartzkopf \textit{et al.} \cite{Schwartzkopf2015}. It turned out that many of the experimentally observed features could be reproduced for film thicknesses up to \SI{3}{nm}. This thickness corresponds to an impressive effective simulation time of \SI{367}{s} which is directly suited for comparison with measurements. As an example of the compared quantities, figure~\ref{fig:abraham} shows the number density of metal clusters on the polymer surface. The comparison between experimental data and simulation results shows very good agreement, at least up to a time of about \SI{350}{s}. For longer times, the simulations start to deviate from the measurements, indicating that the procedure is no longer applicable. The present approach of selective acceleration of dominant processes can be generalized to other systems as well. A recent application concerned the deposition and growth of bi-metallic clusters on a polymer surface \cite{abraham_cpp_18} where the acceleration allowed one to study the very slow process of demixing of the two metals. Applications of this approach to various plasma processes should be possible as well. One effect that has already been studied is the creation of defects by ion impact.
The main effect is trapping of clusters \cite{abraham_jap_15} at the defect locations which reduces their diffusion and limits cluster coalescence, see Fig.~\ref{fig:abraham-snap} above. In addition to the deposition of neutral atoms, the method also allows one to describe the impact of ions and the growth of charged clusters. In concluding this section we mention that similar problems of rare events appear not only in MD simulations but also in statistical approaches such as Monte Carlo simulations. Some strategies are discussed in Ref. \cite{boening_prl_08}, where further references are given as well. \begin{table*}{\Large $\;\qquad$ {\bf Hierarchy of scales and relaxation processes in many-body systems}}\\[3ex] \begin{tabular}{|c|c|c|c|} \hline &&&\\ Time and & \textbf{Stage}, Effects & Quantities & Theory \\ length scales &&&\\ [2ex] \hline \hline &&&\\ \textbf{IV} & {\bf Equilibrium} & $n^{\rm EQ}_a$, $T^{\rm EQ}$, $p^{\rm EQ}$ & Equilibrium theory \\ && $p=p(n_1, n_2 \dots,T)$ & Equation of state\\ $t > t_{\rm hyd}$&& $n_a=n_a(n_1,n_2\dots,T)$ & Mass action law \\ & Correlated equilibrium, or& $p=p^{\rm ideal}+p^{\rm cor},\;$ etc.&\\[2ex] $l > l_{\rm hyd}$ & Stationary nonequilibrium state & $n_a({\cal U})$, $T({\cal U})$, $p({\cal U})$ & Quasi-equilibrium theory \\ &in an external field ${\cal U}(\textbf{R})$ &&\\[2ex] \hline &&&\\ \textbf{III} & {\bf Hydrodynamic Stage} & $n_a({\bf R}t)$, $\textbf{u}_a({\bf R}t)$, $T_a({\bf R}t) $ & Hydrodynamic equations \\[2ex] $t\in[t_{\rm rel},t_{\rm hyd}]$ & Local equilibrium & $f_a=f^{\rm EQ}_a \Big(n({\bf R}t),\textbf{u}({\bf R}t),T({\bf R}t)\Big)$ & Gas-dynamic equations\\ $l\in[l_{\rm mfp},l_{\rm hyd}]$ & & & Reaction-diffusion eqs. \\ & Correlation corrections & $\:n_a({\bf R}t) = n_a^{\rm ideal}({\bf R}t) + n_a^{\rm cor}({\bf R}t)$ & Rate equations\\ && etc. 
& Master equation\\ [2ex] \hline &&&\\ \textbf{II} & {\bf Kinetic Stage} & $ f_a({\bf p}{\bf R},t) $ & Kinetic theory/ \\ & & $f_a(t_0)$& Relaxation time\\ $t\in[t_{\rm cor},t_{\rm rel}]$ & Functional hypothesis && approximation \\ & Equilibrium correlations& $g_{ab}=g^{\rm EQ}_{ab}(\{f(t)\})$ & Markov limit (M) +\\ $l\in[l_{\rm cor}, l_{\rm mfp}]$ & Kinetic energy conservation&$\;\,\quad = g^{\rm M}_{ab,0}+g^{\rm M}_{ab,1}+ \dots$ & Correlation corrections\\ [2ex] \hline &&&\\ \textbf{I} & {\bf Initial Stage} & $g_{ab}({\bf p}_a{\bf R}_a{\bf p}_b{\bf R}_b,t)$ & Generalized \\ &Initial correlations & $g_{ab}(t_0)$ &kinetic equations \\ $t\in[t_0,\tau_{\rm cor}]$&Correlation buildup & & Correlation time approx.\\ &Total energy conservation && \\ $l < l_{\rm cor}$ &Higher correlations & $g_{abc}$, $g_{abcd}, \dots$ & first principle simulations \\ [2ex] \hline \end{tabular}\\[2ex] \caption{Characteristic scales and relaxation processes in correlated many--particle systems (schematic). Typical examples are the relaxation of electrons in a plasma following local ionization or excitation by a short electric field pulse or the thermalization of atoms from the plasma on a solid surface. Beginning at the initial time $t_0$, the evolution goes (from bottom to top) through several time stages and extends from small to larger length scales. This can be viewed as successive \textit{coarse graining}, cf. Fig.~\ref{fig:scheme}. Accordingly, the relevant observables and concepts for a statistical description change. For explanations and details, see Sec.~\ref{s:coarse_graining}. Adapted from Ref.~\cite{bonitz_qkt}. } \label{tab1} \end{table*} \section{Coarse graining approaches}\label{s:coarse_graining} \subsection{General idea} We now discuss the approaches listed in the right column of Fig.~\ref{fig:scheme}. 
The main idea of the coarse graining approaches is to perform an analysis of the different length and time scales that exist in a many-particle system -- such as the plasma-surface system -- driven out of equilibrium [one example could be a dense system of ions that is excited by an electric field pulse. Another example could be an ensemble of neutrals from the plasma that impact a solid surface and equilibrate there due to collisions with the lattice atoms]. Depending on the type of excitation and on the properties of the system, the relaxation towards equilibrium typically proceeds in a number of steps. Even though this is a highly complex process in general, it is often possible to identify a sequence of relaxation processes or even a hierarchy. A situation typical of gases and plasmas is sketched in table~\ref{tab1}. Here, four relaxation stages are distinguished which are separated by the following characteristic time scales: the hydrodynamic time scale $t_{\rm hyd}$, as well as the kinetic time scales $t_{\rm rel}$ (relaxation time) and $\tau_{\rm cor}$ (correlation time). The latter is directly related to the equilibration of the pair ($g_{ab}$), triple ($g_{abc}$), and higher correlations; $t_{\rm rel}$ denotes the time necessary to establish a Maxwell (equilibrium) distribution $f^{\rm EQ}$, and $t_{\rm hyd}$ is associated with the decay of density, velocity or temperature fluctuations or inhomogeneities or similar large-scale excitations. At each of these stages the system is adequately described by a specific set of quantities, and their dynamics is governed by specific equations: hydrodynamic equations (Stage III), kinetic equations (Stage II) and generalized non-Markovian kinetic equations or \textit{ab initio} simulations (Stage I), respectively. Even though each of these equations is an approximation to the full many-body equations, they are accurate within their respective stages and time scales.
Thereby, the accuracy of these models is typically not limited by the equations themselves (e.g. resulting from the omission of higher order terms) but by the parameters entering these models. For example, these parameters are the transport or hydrodynamic coefficients (determined by the distribution function) in hydrodynamic equations. In the case of kinetic equations, these parameters are cross sections or collision integrals. Of course, approximation schemes are used in practice for these parameters. But here we consider a different (although hypothetical) situation: \textit{if these input quantities were known exactly, the underlying model equations would be exact}, within their respective range of applicability~\cite{comment_exactness}. Of course, this requires the application of rigorous coarse graining procedures for the derivation of these equations, some examples and properties of which we will discuss in Secs.~\ref{ss:averaging}, \ref{ss:environment} and \ref{ss:bbgky}. The existence of formally exact coarse grained equations is at the heart of this work and is explored in the remainder of this paper. In particular, our main idea is to obtain these exact input quantities for models from first principle MD simulations and thereby realize our goal of significantly extending the duration of first principle simulations. Since the coarse grained equations (Stages II-IV) emerge dynamically in the course of the equilibration, this novel method is called \textit{Dynamical freeze out of dominant modes} (DFDM). We demonstrate it for simple examples in Sec.~\ref{s:freezout}. \subsection{Time dependencies during equilibration} An observation of interest for the following discussion is that the distribution function $f_a(\textbf{R},\textbf{p},t)$ [$a$ could denote ions or neutrals, in the examples above] is still time- and space-dependent, even after its equilibration.
However, this dependence is only implicit and arises exclusively from the (slower) time evolution and weaker space dependence of the macroscopic fields entering the function \begin{equation} f_a(\textbf{R},\textbf{p},t) = f_a^{\rm EQ}[n(\textbf{R},t),\textbf{u}(\textbf{R},t), T(\textbf{R},t)], \end{equation} i.e., the density $n$, the mean velocity $\textbf{u}$ and the mean energy (temperature $T$). By contrast, the momentum dependence is fixed by the Maxwellian form. This applies to the hydrodynamic Stage (III), cf.~Tab.~\ref{tab1}, whereas on the kinetic stage (II) the distribution carries, in addition, an explicit time dependence. This concept is based on the \textit{functional hypothesis} of Bogolyubov \cite{bogoljubov}. Close to the border of the two stages, i.e. for $t$ slightly below $t_{\rm rel}$, the deviations of $f$ from the equilibrium distribution function $f^{\rm EQ}$ are small, and the time derivative of $f$ [the collision integral, see Eq.~(\ref{eq:col-integral})] can be approximated by the linear relation \begin{equation} I_f(t) \approx - \frac{1}{t_{\rm rel}} \left[ f(t)-f^{\rm EQ} \right]. \label{eq:rta} \end{equation} This is nothing but the familiar \textit{relaxation time approximation} (or Bhatnagar--Gross--Krook (BGK) collision integral \cite{bgk-integral}), where the relaxation time is usually taken from equilibrium models of the system which have limited accuracy. In cases where \textit{ab initio} simulation data are available, as we assume here, the MD simulation results can be used to extract an \textit{ab initio} relaxation time which transforms Eq.~(\ref{eq:rta}) into an exact relation. This also requires using the MD data for the equilibrium distribution that may be modified in the presence of correlations between the particles. This improved description also allows for a more accurate modeling of the hydrodynamic stage. We mention for the sake of completeness that a similar crossover can also be studied between stages I and II.
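The extraction of an \textit{ab initio} relaxation time from simulation data can be illustrated as follows: according to Eq.~(\ref{eq:rta}), the deviation $f(t)-f^{\rm EQ}$ decays exponentially, so $t_{\rm rel}$ follows from a log-linear fit of the decay. The synthetic data in the sketch below stand in for actual MD results (an assumption).

```python
import numpy as np

# Sketch: extract an effective relaxation time t_rel by fitting the
# exponential decay implied by the relaxation time approximation:
# f(t) - f_EQ ~ exp(-t/t_rel). The synthetic decay data below stand in
# for MD results (an assumption).
t_rel_true = 2.0
t = np.linspace(0.0, 5.0, 50)
delta_f = np.exp(-t / t_rel_true)          # deviation f(t) - f_EQ

# log-linear least-squares fit of the decay rate
slope = np.polyfit(t, np.log(delta_f), 1)[0]
t_rel_fit = -1.0 / slope
```

Applied to real simulation data, the fit window should be restricted to $t$ slightly below $t_{\rm rel}$, where the linearization in Eq.~(\ref{eq:rta}) is justified.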
In that case, the pair correlation function reaches its equilibrium form $g^{\rm EQ}$ around the correlation time $t\sim \tau_{\rm cor}$ and retains only an implicit time dependence via the distribution functions $g(t)=g^{\rm EQ}[f(t)]$ (cf.~Tab.~\ref{tab1}) thereafter. Similar to the relaxation time approximation, here one can use a \textit{correlation time approximation} to describe the final approach to the equilibrium correlations \cite{bonitz_pla_96,bonitz_qkt}. \subsection{Averaging over time and/or length scales}\label{ss:averaging} As a simple example, we consider the second order differential equation \begin{eqnarray} \frac{\mathrm{d}^2{\hat A}}{\mathrm{d}t^2} + \gamma \frac{\mathrm{d}{\hat A}}{\mathrm{d}t} + {\hat k}(t){\hat A} = 0\, , \label{eq:fast_oscill} \end{eqnarray} which represents e.g.\ a Newtonian equation of motion. Here, $\gamma$ is a dissipation coefficient, and ${\hat k}(t)$ denotes a quickly fluctuating force constant or frequency. An example is a dust particle in a complex plasma that experiences collisions with plasma neutrals, ions or electrons. Another example could be a large molecule on a solid surface that undergoes collisions with the lattice atoms. A frequent situation is that ${\hat k}(t)=k+\delta \hat{k}$, where $k\equiv \langle {\hat k}\rangle$, and the brackets denote time averaging over the period of the fast oscillations or rapid dynamics of the light particles. Correspondingly, the dynamics of the variable $\hat A$ can be split into a slowly changing term and a rapidly oscillating contribution according to ${\hat A}(t)=A+\delta \hat{A}$, where $A=\langle {\hat A}\rangle$ obeys the equation of motion \begin{eqnarray} \frac{\mathrm{d}^2 A}{\mathrm{d}t^2} + \gamma \frac{\mathrm{d} A}{\mathrm{d}t} + k\,A = - \langle \delta{\hat k}(t) \delta{\hat A}(t) \rangle = I_A\,.
\label{eq:slow_oscill} \end{eqnarray} This equation describes the ``coarse grained dynamics'' where the fast ``random fluctuations'' seem to be eliminated; indeed, the left-hand side of this equation has the same form as the original equation~(\ref{eq:fast_oscill}). However, the fast processes leave a trace on the slow dynamics via the new term on the right side, which is the correlation function of two fluctuating quantities. If the right-hand side is known, equation~(\ref{eq:slow_oscill}) is still exact. In fact, it is not difficult to derive the equation of motion for $\langle \delta{\hat k}(t) \delta{\hat A}(t) \rangle$ by first finding the equation for $\delta{\hat A}(t)$, cf. e.g. \cite{klimontovich_sp, bonitz_un}. However, it is easy to see that this equation is not closed either and couples to a new quantity, $\langle \delta{\hat k}(t)\delta{\hat k}(t) \delta{\hat A}(t) \rangle$, which obeys its own equation of motion. Thus, an infinite hierarchy of equations emerges which is the direct consequence of the averaging procedure. The common solution is to decouple this hierarchy by invoking additional assumptions on the fast dynamics. A common approximation is to assume $\tau^{\delta {\hat k}}_{\rm cor} / t_{A} \to 0$, which means that the correlation time of the rapid process (i.e. of the light particles) is vanishingly small compared to the characteristic time scale of $A$ (the heavy particle). In that case, the term on the right side of Eq.~(\ref{eq:slow_oscill}) becomes a delta correlated random process, and one recovers a Langevin-type equation with a Gaussian white noise term. The above model is representative of a large variety of problems containing multiple time scales. A similar situation appears in the case of multiple length scales, where a \textit{spatial averaging} leads to a coarse grained description.
A further example for a coarse grained description is \textit{many-body dynamics in phase space}, which is described by a generalized distribution function ${\hat N}(\textbf{r},\textbf{p},t)$ -- the microscopic phase space density introduced by Klimontovich \cite{klimontovich_sp}. This distribution function obeys a mean-field type equation \begin{eqnarray} \left\{\frac{\partial}{\partial t} + \textbf{v}\cdot\nabla_\textbf{r} + \hat{\textbf{F}}\cdot \nabla_\textbf{p} \right\}{\hat N}(\textbf{r},\textbf{p},t) = 0\, , \label{eq:nhat} \end{eqnarray} where $\hat{\textbf{F}}$ denotes the total force on the particles which includes all external and induced forces. In general, the force also contains rapid and slow contributions. Thus, we can write again $\hat{\textbf{F}}=\textbf{F}+\delta\hat{\textbf{F}}$, and the above coarse graining procedure can be repeated. In fact, this procedure leads to the well-known Boltzmann-type kinetic equations for the one-particle distribution function $ f(\textbf{r},\textbf{p},t) \equiv \langle{\hat N}(\textbf{r},\textbf{p},t)\rangle$. It has the same form as Eq.~(\ref{eq:nhat}) and reads \begin{eqnarray} \left\{\frac{\partial}{\partial t} + \textbf{v}\cdot\nabla_\textbf{r} + \textbf{F}\cdot \nabla_\textbf{p} \right\}f(\textbf{r},\textbf{p},t) = I_f(\textbf{r},\textbf{p},t)\, . \label{eq:kin_eq} \end{eqnarray} However, it contains, in addition, the collision integral $I_f$ on the right-hand side, which is directly determined by the correlation function \begin{eqnarray} I_f(\textbf{r},\textbf{p},t) = - \langle \delta\hat{\textbf{F}}\cdot \nabla_\textbf{p}\,\delta{\hat N}\rangle\, \label{eq:col-integral} \end{eqnarray} of the fluctuations. As before, the collision term is an unknown quantity, and one can derive an equation of motion for it. Again it includes a higher-order correlation function, and the whole system turns into an infinite hierarchy of equations.
The standard solution is to use physical approximations for the choice of the collision integral $I_f$ using e.g.\ the Boltzmann collision integral or improvements such as the Balescu--Lenard integral~\cite{bonitz_qkt}. Now, let us discuss how the idea of the present paper can be applied here. In cases where we have first principle simulation data at our disposal, there exists a straightforward way to decouple the hierarchies of equations discussed above. When performing MD simulations of a classical system, every observable can be computed. For example, it is possible to evaluate the right-hand side of Eq.~(\ref{eq:slow_oscill}) during an MD simulation. However, the result for $\delta {\hat k}(t)\delta {\hat A}(t)$ is random, depending on the choice of the initial conditions for the particle trajectories. The simple solution consists in running many independent MD simulations. Thereby, the initial conditions have to be chosen with a proper probability distribution given by the ensemble over which the averaging in the collision term $I_A$ is performed. Then, the first principle MD result for $I_A$ follows by averaging over $M$ realizations according to \begin{eqnarray} I_A(t) = -\lim_{M \to \infty}\frac{1}{M}\sum_{l=1}^M \left(\delta {\hat k}(t)\delta {\hat A}(t)\right)^{(l)}\,. \label{eq:mean_ia} \end{eqnarray} This method is in principle exact for a classical system. The efficiency crucially depends on the cost of the MD simulation and on the number $M$ of trajectories needed to obtain a converged result. At the same time, the fluctuation around the result (\ref{eq:mean_ia}) gives an estimate of the statistical uncertainty. A similar approach can be applied to the kinetic equation (\ref{eq:kin_eq}) and the collision integral $I_f$, Eq.~(\ref{eq:col-integral}).
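The ensemble average of Eq.~(\ref{eq:mean_ia}) can be illustrated with the fluctuating oscillator of Eq.~(\ref{eq:fast_oscill}): integrate $M$ independent realizations of the noisy force constant and average $\delta\hat k\,\delta\hat A$ at each time step. All parameter values in the sketch below are illustrative assumptions.

```python
import numpy as np

# Sketch of the ensemble average in Eq. (mean_ia): integrate the
# fluctuating oscillator A'' + gamma A' + k_hat(t) A = 0 for M
# independent realizations of delta_k(t) and average delta_k * delta_A.
# All parameter values are illustrative assumptions.
gamma, k_mean, dt, n_steps, M = 0.1, 1.0, 0.01, 200, 400
rng = np.random.default_rng(1)

A_all = np.empty((M, n_steps))
k_all = np.empty((M, n_steps))
for l in range(M):
    a, v = 1.0, 0.0
    for n in range(n_steps):
        k_inst = k_mean + 0.3 * rng.normal()     # fluctuating k_hat(t)
        k_all[l, n] = k_inst
        A_all[l, n] = a
        # semi-implicit Euler step (stable for the oscillator)
        v += (-gamma * v - k_inst * a) * dt
        a += v * dt

A_mean = A_all.mean(axis=0)                       # slow variable A(t)
I_A = -((k_all - k_mean) * (A_all - A_mean)).mean(axis=0)
```

Increasing $M$ reduces the statistical error of $I_A$ as $M^{-1/2}$; the scatter between sub-ensembles provides the uncertainty estimate mentioned in the text.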
Of course, the question arises as to what is gained by this approach compared to performing only MD simulations without resorting to the many-body equation (\ref{eq:slow_oscill}) or the kinetic equation (\ref{eq:kin_eq}) at all. The point is that accurate MD simulations are typically substantially more costly than the latter approaches. Therefore, an advantageous compromise between accuracy and computational effort consists in performing MD simulations for short time scales and in continuing the simulation by solving a kinetic equation for longer times. For the latter, the use of the MD input for the collision integral, as explained by Eq.~(\ref{eq:mean_ia}), could yield a significant increase in accuracy, and it could eventually offer a way to extend first principle simulations to longer times. We return to this issue in section~\ref{s:freezout}. Notice that similar approaches exist also for quantum systems. An example is the ``Stochastic Mean field'' approach, cf. Ref.~\cite{lacroix_prb_14} and references therein. Instead of MD simulations, here time-dependent Hartree-Fock simulations are performed over which the averaging is carried out. The result of this procedure turned out to be very encouraging when compared to exact simulation results, at least for short and intermediate time scales \cite{lacroix_prb_14}. \subsection{Averaging over ``environmental'' degrees of freedom}\label{ss:environment} Another application of coarse graining concepts is frequently used for particles in contact with a reservoir or ``bath''. Again this can be heavy dust particles in contact with lighter plasma particles or a macromolecule in a plasma or on a surface.
The complete state of the system of $N$ particles and $N_B$ bath particles is described by the phase space distribution function $F(\textbf{R},\textbf{P};\textbf{R}_B, \textbf{P}_B)$, where we use the compact notation $\textbf{R}=\textbf{r}_1, \textbf{r}_2\dots \textbf{r}_N$ and $\textbf{P}=\textbf{p}_1, \textbf{p}_2\dots \textbf{p}_N$ for the particles, with similar notations for the bath particles; we restrict ourselves to classical particles. It is easy to formulate the equations of motion for the whole system, but the solution for the $N+N_B$ particles is extremely costly. More importantly, one is typically not interested in the details of the dynamics of the bath particles, where usually $N_B\gg N$. Therefore, the standard procedure is to switch to a ``reduced'' description, which resolves only the degrees of freedom of the system, by integrating over the bath parameters according to \begin{eqnarray} f(\textbf{R},\textbf{P}) \equiv \int d \textbf{R}_B d \textbf{P}_B \,F(\textbf{R},\textbf{P};\textbf{R}_B, \textbf{P}_B)\,. \nonumber \end{eqnarray} If the bath is in thermodynamic equilibrium at temperature $T_B$, this coarse graining procedure transforms the dynamics of the system of particles from the microcanonical ensemble into the canonical or grand-canonical ensemble. The corresponding equations of motion can then be solved by Langevin MD simulation or by using a Nos\'e--Hoover thermostat \cite{nose}. Here, the main assumption is of course the equilibrium of the bath. This neglects the influence of the dynamics of the system particles on the bath particles, which might be questionable, in particular under strong excitation conditions. In that case, an alternative strategy consists in performing short-time MD simulations including the bath dynamics by using different realizations of the bath over which an averaging is performed, in analogy to Eq.~(\ref{eq:mean_ia}). This accurate procedure is computationally expensive and cannot be extended to long times.
Thus, strategies can be developed where one switches to the equilibrium description of the bath at times exceeding the thermalization time $t_B^{\rm rel}$ of the bath. A similar procedure is used in the simulation of disordered systems where an average over different realizations of the disorder is performed. Such a strategy should be applicable to certain plasma-surface simulations such as scattering of plasma particles from a surface or diffusion of an adsorbate atom or molecule on a surface. In that case it can be justified to treat the surface as a ``bath'' at sufficiently long time scales. In contrast, the initial time period of the interaction of a plasma particle with the surface requires a full dynamic treatment of the adsorbate and the surface atoms. A similar idea is realized in Sec.~\ref{ss:md-re}. \subsection{Reduced distribution functions. BBGKY hierarchy}\label{ss:bbgky} Finally, we discuss an important approach to treat correlated many-particle systems which is based on the concept of reduced distribution functions. A typical example is the dynamics of electrons following a strong excitation and their subsequent thermalization due to electron-electron collisions. The dynamics of a classical $N$-particle system can be treated by first principle MD simulations or, equivalently, by the $N$-particle distribution function $F_N(x_1, x_2, \dots x_N)$, where $x_i=(\textbf{r}_i, \textbf{p}_i)$ and $F_N$ obeys the Liouville equation. A reduced description is obtained in terms of the $s$-particle distribution functions \begin{eqnarray} f_s(x_1\dots x_s) &= &\frac{N!}{(N-s)!} \nonumber \\ && \times \int d x_{s+1}\dots d x_N\,F_N(x_1\dots x_N)\, . \label{eq:Liouvilleequation} \end{eqnarray} The equation of motion for $f_s$ follows directly from the Liouville equation for $F_N$ by integration over the remaining variables, as in the definition (\ref{eq:Liouvilleequation}) of $f_s$.
The resulting equation for $f_s$ is not closed but involves contributions from $f_{s+1}$, giving rise to a hierarchy of equations---the BBGKY (Bogoliubov--Born--Green--Kirkwood--Yvon) hierarchy, which is discussed in detail e.g.\ in Ref.~\cite{bonitz_qkt}. If the system is non-interacting, the $s$-particle distribution is a product of $s$ single-particle factors, e.g.\ $f_2(x_1, x_2)=f_1(x_1) f_1(x_2)$. Here, we do not consider quantum exchange effects. In the case of correlations between the particles, this relation is generalized to \begin{eqnarray} f_2(x_1, x_2)=f_1(x_1) f_1(x_2) + g_2(x_1, x_2)\, , \label{eq:g2} \end{eqnarray} where $g_2$ is the pair correlation function. The BBGKY hierarchy is decoupled by invoking physically motivated approximations, e.g. for the pair correlation function, \begin{eqnarray} g_2(x_1, x_2,t) \to g^{app}_2([f_1];x_1, x_2,t) \, , \label{eq:bbgky_decoup} \end{eqnarray} which is a given functional of the one-particle distribution function. As a result one obtains a closed equation for $f_1$---the kinetic equation---where the pair correlation function determines the collision integral $I_f=I_f[g_2^{app}]$, which coincides with the previous result given by Eq.~(\ref{eq:col-integral}). In the spirit of the present paper, we underline that the resulting equation for $f_1$ is a dramatic simplification compared to the description of the full dynamics of all $N$ particles, in particular when $N$ is macroscopically large. This is the result of a very efficient coarse graining procedure. However, the quality of the result depends on the accuracy of the approximation $g^{app}_2([f_1];x_1, x_2,t)$. As discussed above, reliable approximations exist for limiting cases, e.g.\ when the problem contains small parameters (such as for weak interaction) and for long times $t\ge \tau_{\rm cor}$. However, for the initial time period, $0\le t < \tau_{\rm cor}$, the standard (Markovian) results for $g^{app}_2$ are known to fail, e.g.~\cite{bonitz_qkt, bonitz_pla_96}.
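While Eq.~(\ref{eq:g2}) is formulated in phase space, the spatial part of the pair correlation is readily estimated from MD configurations by histogramming pair distances. The sketch below does this for a random (ideal-gas) configuration in a periodic box, for which the radial pair distribution is close to unity; box size, particle number and binning are illustrative assumptions.

```python
import numpy as np

# Sketch: estimate the radial pair distribution of a classical
# configuration by histogramming minimum-image pair distances.
# Box size, particle number and binning are illustrative assumptions.
rng = np.random.default_rng(2)
L, N = 10.0, 500
pos = rng.uniform(0.0, L, size=(N, 3))            # ideal-gas configuration

# minimum-image pair distances in a periodic box
diff = pos[:, None, :] - pos[None, :, :]
diff -= L * np.round(diff / L)
r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(N, k=1)]

bins = np.linspace(0.1, L / 2, 25)
hist, edges = np.histogram(r, bins=bins)
shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
n_pairs_ideal = 0.5 * N * (N - 1) / L ** 3 * shell
g = hist / n_pairs_ideal                          # g(r) ~ 1 for ideal gas
```

For a correlated (interacting) configuration, the deviation of $g$ from unity is precisely the spatial footprint of the correlation function $g_2$ entering Eq.~(\ref{eq:g2}).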
Here, the concept of the present paper can be utilized again: perform MD simulations at short time scales, use their result to reconstruct the exact functions $g_2([f_1],t)$, and extend this result to longer times using the kinetic equation for $f_1$. Such a procedure has not been realized so far for kinetic equations. For this reason it is interesting to look at other examples where this concept has been tested. \section{Dynamical freeze out of dominant modes (DFDM)}\label{s:freezout} Let us now turn to the final approach listed in Fig.~\ref{fig:scheme} under coarse graining concepts, right column. \subsection{Coupled DFT-master equation approach for molecule diffusion on a metal surface}\label{ss:dft-me} Franke and Pehlke performed extensive density functional theory (DFT) simulations of the diffusion of a 1,4-butanedithiol molecule on a gold surface \cite{franke_prb_10}. They found the local adsorption energy minima of the molecule and then applied the nudged elastic band (NEB) approach \cite{henkelmann_jcp_00} to compute the transition rates between them. This allowed them to record the entire ``network'' of atomic scale diffusion paths of the molecule on the surface. A large number of processes is analyzed in Ref.~\cite{franke_prb_10}. Here, we are not concerned with the details of the associated diffusion hops, cf.~Ref.~\cite{franke_prb_10}, but focus on the coarse graining idea.
In order to connect the \textit{ab initio} short-time diffusion simulations to the long-time behavior, Franke and Pehlke considered a master equation~\cite{franke_prb_10} \begin{eqnarray} \frac{dp_i(t)}{dt} &=& \sum\limits_{j\ne i} \left\{ \Gamma_{j\to i}\, p_j(t) - \Gamma_{i\to j}\, p_i(t) \right\},\; \label{eq:master} \\ & &0 \le p_i(t) \le 1, \quad \sum_i p_i(t)=1\,, \nonumber \end{eqnarray} where $i$ is a multi-index numbering the configurations of the molecule which have a probability $p_i(t)$, and $\Gamma_{a\to b}$ are the transition rates (probability per unit time) from state $a$ to state $b$. The first term on the right side of Eq.~(\ref{eq:master}) describes processes which increase the probability to realize state $i$ (``gain''), whereas the second term describes the analogous loss processes. For the computation of the transition rates the authors used standard transition state theory (TST)~\cite{vineyard_57}, \begin{eqnarray} \Gamma_{a\to b} = \nu^0_{a\to b} \, e^{-\Delta E_{a\to b}/k_BT}\, . \label{eq:tst} \end{eqnarray} Here, $\Delta E_{a\to b}$ is the energy barrier for the transition between states $a$ and $b$ which is computed using DFT, and $\nu^0$ is the attempt frequency which is of the order of $10^{12}$\,s$^{-1}$ \cite{franke_prb_10}. The authors of this reference consider two stages of the evolution: the initial stage, corresponding to stage I in table~\ref{tab1}, and the asymptotic hydrodynamic state, corresponding to stage III. In stage I the dynamics depend strongly on the initial configuration of the molecule, and the diffusion, being anisotropic, retains a memory of the initial state. In contrast, one expects spatially isotropic motion of the molecule in stage III, where averaging over many initial configurations is assumed, since all memory of the initial state has been lost.
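The coupling of Eqs.~(\ref{eq:master}) and (\ref{eq:tst}) can be sketched for a toy system of three configurations: TST rates are built from energy barriers, and the master equation is integrated explicitly until the probabilities become stationary. The barriers and the attempt frequency below are illustrative assumptions, not the DFT values of Franke and Pehlke.

```python
import numpy as np

# Sketch: explicit Euler integration of the master equation (eq:master)
# with TST rates (eq:tst) for a toy three-state system; barriers and
# attempt frequency are illustrative assumptions, not DFT values.
kB_T = 0.025                                   # eV, room temperature
nu0 = 1.0e12                                   # attempt frequency, 1/s
barriers = np.array([[0.0, 0.30, 0.35],        # dE[a, b] in eV
                     [0.28, 0.0, 0.32],
                     [0.33, 0.30, 0.0]])

Gamma = nu0 * np.exp(-barriers / kB_T)         # rates a -> b
np.fill_diagonal(Gamma, 0.0)                   # no self-transitions

p = np.array([1.0, 0.0, 0.0])                  # start in state 0
dt = 1.0e-9                                    # small vs. fastest rate
for _ in range(50000):
    gain = Gamma.T @ p                         # first term of eq:master
    loss = Gamma.sum(axis=1) * p               # second term of eq:master
    p += dt * (gain - loss)
```

The gain-loss structure conserves the normalization $\sum_i p_i=1$ at every step; after many multiples of the inverse rates the probabilities settle into the stationary long-time distribution.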
Correspondingly, it is expected that the standard diffusion equation \begin{eqnarray} \frac{\partial n(\textbf{r},t)}{\partial t} = D \Delta n(\textbf{r},t)\, , \label{eq:diffusion} \end{eqnarray} holds, with the well-known time-dependent solution of the initial value problem for the initial condition $n(\textbf{r},0) = N\delta(\textbf{r}-\textbf{r}_0)$, \begin{eqnarray} n(\textbf{r},t) = \frac{N}{4\pi D t}\,e^{-(\textbf{r}-\textbf{r}_0)^2/(4Dt)}\,. \label{eq:diff-solution} \end{eqnarray} The solution (\ref{eq:diff-solution}) was recovered in Ref.~\cite{franke_prb_10} by mapping the $p_i(t)$ onto the associated spatial coordinates of the center of mass of the molecule, giving rise to the space-dependent probability density $P(\textbf{r},t) = n(\textbf{r},t)/N$, which is proportional to the particle density $n(\textbf{r},t)$, where $N$ is the total number of molecules (number of initial configurations). In the long-time limit, an exponential temperature dependence of the diffusion coefficient results, i.e., an Arrhenius law \begin{eqnarray} D(T) = a_0^2\, \nu \, e^{-\Delta E/(k_BT)}\, , \label{eqDTArrhenius} \end{eqnarray} which made it possible to recover the effective attempt frequency $\nu$ and the effective diffusion energy barrier $\Delta E$. In (\ref{eqDTArrhenius}), $a_0$ is the surface lattice constant. To summarize, \textit{ab initio} results for the elementary diffusion motions of a molecule on a surface have been obtained in this example. While this provides a complete microscopic picture of surface diffusion, this information is far too detailed for many purposes, in particular, for comparisons with measurements. In order to characterize the mobility of the molecules, the main interest concerns the long-time behavior, which is governed by much simpler physics described by the classical diffusion equation (\ref{eq:diffusion}).
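The extraction of $\nu$ and $\Delta E$ from the Arrhenius law (\ref{eqDTArrhenius}) amounts to a linear fit of $\ln D$ versus $1/T$. The sketch below demonstrates this with synthetic diffusion coefficients; the values of $a_0$, $\nu$ and $\Delta E$ are illustrative assumptions, not the results of Ref.~\cite{franke_prb_10}.

```python
import numpy as np

# Sketch: recover the effective attempt frequency nu and barrier dE of
# the Arrhenius law (eqDTArrhenius) from diffusion coefficients D(T) by
# a linear fit of ln D versus 1/T. The input data are synthetic; a0, nu
# and dE below are illustrative assumptions.
kB = 8.617e-5                # Boltzmann constant, eV/K
a0, nu_true, dE_true = 1.0, 1.0e12, 0.30

T = np.array([250.0, 300.0, 350.0, 400.0])          # temperatures, K
D = a0**2 * nu_true * np.exp(-dE_true / (kB * T))   # "measured" D(T)

# ln D = ln(a0^2 nu) - (dE/kB) * (1/T): slope gives dE, intercept gives nu
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
dE_fit = -slope * kB
nu_fit = np.exp(intercept) / a0**2
```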
This reduced dynamics \textit{emerges dynamically} during the course of the evolution of the system due to self-averaging effects. The main advantage of this approach is that the involved diffusion coefficient can be obtained exactly from \textit{ab initio} DFT data instead of using standard approximate results from transport models. At the same time, the present combination of DFT and a master equation approach is in principle able to provide additional information, beyond that presented in Ref.~\cite{franke_prb_10}. First of all, it would be possible to establish the equilibration time scale $t_{\rm rel}$, when the system reaches the isotropic diffusion regime (stage III in table~\ref{tab1}). Furthermore, it should be possible to investigate the transient behavior being relevant at shorter times (stage II) as well, and this might give rise to a modified diffusion equation. Finally, we note that similar master equation based approaches have been applied to the computation of chemical reaction rates, cf. \cite{maranzana_pccp_07, robertson_pccp_07} and references therein. In the context of plasma-surface interaction, diffusion and chemical reaction rates in the presence of a plasma are of high interest. Therefore, the present approach might be useful to derive improved surface diffusion and reaction models that take into account the influence of a plasma. \subsection{Discussion of the validity of the master equation}\label{ss:valid-me} At this point, a first assessment of the concept of coupling first principles data with an analytical model is given. The central question concerns, of course, the validity limits of the master equation. Let us summarize the corresponding requirements. \begin{description} \item[i)] First, TST, which is the basis for the transition rates~(\ref{eq:tst}), assumes that the surface is in thermal equilibrium. \item[ii)] All energy barriers have to be large compared to the thermal energy.
\item[iii)] The transition probabilities in the master equation (\ref{eq:master}) depend linearly on the current occupation probabilities, i.e., the rates $\Gamma_{a\to b}$ are independent of all $p_i(t)$. \item[iv)] The master equation is Markovian, i.e., the transition probabilities depend only on the current state of the system (no memory). \item[v)] The transition rates $\Gamma_{a\to b}$ in the master equation are time-independent. \end{description} Let us now ask how important these conditions are and which of them can eventually be relaxed if first principle simulation data were at our disposal. First of all, conditions i) and ii) can easily be dropped and the rates can be replaced by numerical simulation results, $\Gamma_{a\to b} \longrightarrow \Gamma^{\rm sim}_{a\to b}$. Second, the restrictions iii)--v) on the master equation can be dropped as well in favor of numerical results $\Gamma^{\rm sim}_{a\to b}(\{p_i\},t)$, which are updated during the simulation. Obviously, these rates generally depend on the probabilities as well. In fact, such a generalized version of the master equation, which takes into account memory of the previous system states, is nothing but a generalized non-Markovian kinetic equation~\cite{bonitz_qkt}, which can be derived rigorously from the fundamental equations of many-body physics, such as the BBGKY-hierarchy, cf. Sec.~\ref{ss:bbgky}. The necessary requirements for such a generalized master equation are: \begin{enumerate} \item[A.] The microstates of the many-body system are mapped onto a complete set of events ``i'' with well defined probabilities, i.e. $\sum_i p_i(t)=1$, and $0\le p_i(t)\le 1,$ for all $i$ and all times. \item[B.] There has to be a rigorous and numerically stable procedure to identify these states and to assure ergodicity. \item[C.]
There has to be a consistent, stable and sufficiently accurate procedure to determine the corresponding probabilities and transition rates ``on the fly'' during a simulation. \end{enumerate} A first example of how to realize such a procedure is given in the following section~\ref{ss:md-re}. \subsection{Coupled molecular dynamics--rate equation approach for atom adsorption on a metal surface}\label{ss:md-re} Filinov \textit{et al.} \cite{paper1, paper2} presented a first application of the procedure outlined above to the adsorption dynamics and sticking probability of argon atoms on a platinum surface. They performed semi-classical MD simulations of the atom dynamics using \textit{ab initio} pair potentials. These simulations yield the complete information on the particle trajectories $x_i(t)=\{\textbf{r}_i(t), \textbf{p}_i(t)\}$ with $i=1\dots N$. These trajectories depend on the initial conditions such as the incident energy $E_i(0)$, angle $\theta_i(0)$, and position $\textbf{r}_i(0)$. All observables of interest can in principle be computed from these trajectories (microstates) without resorting to additional approximations such as in TST (\ref{eq:tst}). \begin{figure}[h] \begin{center} \hspace{-0.cm}\includegraphics[width=0.48\textwidth]{Rst0t0depend.pdf} \end{center} \vspace{-.5cm} \caption{Sticking coefficient of argon atoms from first principle MD simulations, as a function of time for two incident angles $\Theta$ and several impact energies $E_i$, for a lattice temperature $T_s=300$~K. The experimental data (symbols \cite{expPt}) are placed at the equilibration time, $t_D=1.5t_0=10$~ps, obtained in the simulations. For details see Ref.~\cite{paper2}.} \label{fig:fig13_paper2} \end{figure} Using the procedure described below, Filinov \textit{et al.} computed the sticking coefficient of argon atoms as a function of time and obtained very good agreement with experiments \cite{paper2}, as is illustrated in Fig.~\ref{fig:fig13_paper2}.
Interestingly, the results provide energy and angle resolved data for the probability that an impacting atom will be adsorbed at the surface (or be reflected). This is valuable input information for microscopic plasma simulations. Let us now return to the idea of this paper: how to extend these simulations to longer times. Even though, for the present problem of computing the sticking probability, MD simulations of $10 \dots 40$~ps duration are sufficient and no extensions are required, it is very instructive to analyze the potential of this scheme. First, it is clear that macroscopic properties such as the sticking coefficient do not require the complete microscopic information of all particle trajectories. Therefore, we attempt to map this microscopic information onto a finite set of many-body states that are distinguished by the energy of the atoms. The kinetic energy $E_{\text{p}}$ contains two orthogonal contributions and reads \numparts \begin{eqnarray} E_{\text{p}} &=& E_{\text{p}}^{\perp} + E_{\text{p}}^{\parallel}\,, \qquad E_{\text{p}}^{\perp, \parallel} = \frac{(p^{\perp, \parallel})^2}{2m} \,. \end{eqnarray} \endnumparts In addition, each particle moves in the potential landscape of all surface atoms $V$, giving rise to the total energy \begin{equation} E_t(\vec r)=E_{\text{p}}(\vec r)+V(\vec r)\,. \label{eq:etot} \end{equation} All trajectories of the atoms can be uniquely classified by their energy at every time: 1.) There are atoms with positive surface-normal energy, i.e., $E_{\text{p}}^{\perp} + V >0$. These particles are desorbed from the surface and their fraction is denoted as $N_C$ (continuum states). 2.) The remaining atoms have $E_{\text{p}}^{\perp} + V \le 0$ and fall into two fractions. The first fraction has a positive total energy, i.e., $E_t>0$. These atoms can freely move across the surface and are denoted by $N_Q$ (fraction of ``quasi-trapped'' atoms). 3.) The remaining atoms have a negative total energy ($E_t<0$).
That is, they are trapped in local potential minima, and their fraction is denoted by $N_T$. An example of such a trajectory is depicted in Fig.~\ref{fig:alex}a. There, an atom approaches the surface (being in a continuum state) and, by colliding with the surface atoms, rapidly loses a large fraction of its energy, becoming trapped. Afterwards it regains energy in another collision with the surface (thereby being transferred to a quasi-trapped state) until it is eventually desorbed (returning to a continuum state C). The main advantage of this mapping approach is that a large statistical ensemble of trajectories is available for sufficiently many atoms, leading to excellent accuracy of the results. We underline that this classification of all atoms into just three categories (three ``states'') is unique, i.e., $N_Q(t)+N_T(t)+N_C(t)=1$, in agreement with condition A listed in section~\ref{ss:valid-me}. In fact, these fractions can be understood as occupation probabilities $p_i$ with $i=\{C, Q, T\}$ of the three distinct macro-states. The three fractions of atoms are time-dependent and change during the time the atoms spend on the surface. The time dependence is governed by the system of rate equations \cite{paper1} \numparts \begin{eqnarray} \dot{N}_{Q} &=& - (T_{TQ}+T_{CQ})N_{Q} + T_{QT} N_{T},\label{eq1}\\ \dot{N}_{T} &=& - (T_{QT}+T_{CT}) N_{T} + T_{TQ} N_{Q},\\ \dot{N}_{C} &=& - (\dot{N}_{Q} + \dot{N}_{T})=T_{CT}N_{T}+ T_{CQ}N_{Q} \, , \label{eq:REQ} \end{eqnarray} \endnumparts which is, in fact, just an example of the master equation discussed in Sec.~\ref{ss:valid-me}, and the transition rates $T_{\alpha \beta}$ $[\alpha, \beta = \{C, Q, T\}]$ are just the coefficients $\Gamma_{\alpha \to \beta}$ occurring in the latter. \begin{figure*}[h] \begin{center} \hspace{-0.cm}\includegraphics{Nqtanalg45.pdf} \end{center} \vspace{-1.0cm} \caption{Illustration of the combined MD-rate equations approach for atom sticking \cite{paper1}.
\textbf{a)}: Example of an Ar-atom trajectory at a platinum surface at temperature $T_s=300$~K. $z(t)$: height above the surface (in Angstroms). $E(t)$: total energy of the atom (in 10 meV). Incident conditions (energy and angle): $E_i=21.6$~meV and $\theta_i=45^\circ$. Depending on its energy, the atom belongs to one of the three categories: continuum (C), quasi-trapped (Q) or trapped (T). \textbf{b)}: Time evolution of the three dominant transition rates (in units $t_0^{-1}$). The rate $T_{\ensuremath{\alpha}\ensuremath{\beta}}$ denotes the transition $\beta \rightarrow \ensuremath{\alpha}$. The smallest rate, $T_{CT}\approx 0.093$, is not shown. \textbf{c)}: Fractions of trapped and quasi-trapped atoms as a function of time. Dashed and dotted lines: MD simulation data. $N_1$ and $N_2$ are the associated solutions of the rate equations using stationary transition rates. The time unit is $t_0=6.53$~ps. } \label{fig:alex} \end{figure*} In the terms used in Tab.~\ref{tab1}, this system of rate equations corresponds to the methods listed for the hydrodynamic stage (stage III). But nothing, of course, prevents us from using these equations also on earlier time scales, i.e., for the kinetic and initial stages (stages II and I, respectively). As discussed in Sec.~\ref{ss:valid-me}, to be applicable at earlier times, we have to permit, in that case, a dependence of the rates on time and on the individual probabilities, i.e., $T_{\alpha \beta}=T_{\alpha \beta}(N_C, N_Q, N_T;t)$. Furthermore, it was shown in Ref.~\cite{paper1} that these rates depend sensitively on the energy distribution function of the gas atoms $F(E^\perp, E^\parallel;t)$, which itself evolves with the time the atoms spend on the surface before they are desorbed. In Ref.~\cite{paper1} it was demonstrated in detail how the complete information from the MD trajectories can be used to explicitly reconstruct the rates $T_{\alpha \beta}$ for all times.
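The mapping of each atom onto one of the three macro-states C, Q and T described above can be sketched as follows. This is a minimal illustration with hypothetical energy values, not the actual analysis code of Refs.~\cite{paper1,paper2}:

```python
def classify(E_perp, E_par, V):
    """Map an atom's instantaneous energies onto one of the three
    macro-states: continuum (C), quasi-trapped (Q), trapped (T).
    E_perp, E_par: kinetic energy normal/parallel to the surface;
    V: potential energy. All values share the same (arbitrary) units."""
    E_tot = E_perp + E_par + V
    if E_perp + V > 0:
        return "C"      # positive surface-normal energy: can desorb
    elif E_tot > 0:
        return "Q"      # bound normally, but free to move laterally
    else:
        return "T"      # trapped in a local potential minimum

# Illustrative snapshots (hypothetical numbers, say in meV):
atoms = [(30.0, 5.0, -20.0),   # E_perp + V > 0        -> C
         (10.0, 25.0, -30.0),  # bound, but E_tot > 0  -> Q
         (5.0, 10.0, -40.0)]   # E_tot < 0             -> T
states = [classify(*a) for a in atoms]
print(states)   # -> ['C', 'Q', 'T']
```

Applying this function to every atom at every time step yields the fractions $N_C(t)$, $N_Q(t)$ and $N_T(t)$, which by construction sum to one.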
This means that conditions B and C of Sec.~\ref{ss:valid-me} are also realized. Furthermore, the analysis revealed that the gas atom--surface interaction proceeds in two stages. At first, the energy distribution functions thermalize within the relaxation time $t_{\rm rel} \approx (20 \dots 40)$~ps, which depends on the energy and angle of incidence of the atom. During this period of time the transition rates also reach their equilibrium form, $T_{\alpha \beta}(t) \to T^{\rm EQ}_{\alpha \beta}$, and remain practically constant for times $t > t_{\rm rel}$. This can be seen in~figure~\ref{fig:alex}b. This first time interval corresponds to stages I and II in table~\ref{tab1}. The subsequent temporal evolution corresponds to stage III and turns out to be accurately described by the rate equations (\ref{eq1})-(\ref{eq:REQ}) with constant transition rates. This behavior is verified by a comparison of MD simulation data with the analytical solutions of the rate equations (\ref{eq1})-(\ref{eq:REQ}). The corresponding results are shown in figure~\ref{fig:alex}c. While the analytical results differ qualitatively from the MD simulation results at short times, the analytical fractions of trapped and quasi-trapped atoms practically coincide with the MD data for times larger than about $5t_0\approx 33$~ps. This is approximately the time at which the transition rates have saturated (Fig.~\ref{fig:alex}b). In other words, the set of three occupation probabilities (fractions) $N_C, N_Q$ and $N_T$ is sufficient to capture the entire sticking and desorption properties of the gas atoms for times $t\ge t_{\rm rel}$. These three collective variables have emerged dynamically during the temporal evolution and the associated coarse graining dynamics. Thus, the system of rate equations is sufficient to describe the mean adsorption/desorption dynamics of a sufficiently large ensemble of atoms at longer times.
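The stage-III description, i.e., the rate equations (\ref{eq1})-(\ref{eq:REQ}) with constant rates, is straightforward to integrate numerically. In the minimal sketch below, all rate values are hypothetical placeholders (in units of $t_0^{-1}$) except $T_{CT}\approx 0.093$, which is quoted in the caption of figure~\ref{fig:alex}:

```python
# Constant (saturated) transition rates; only T_CT = 0.093 is quoted in
# the text, the other three values are hypothetical placeholders.
T_TQ, T_QT, T_CQ, T_CT = 0.8, 0.5, 0.3, 0.093

def rhs(NQ, NT):
    # Right-hand sides of the rate equations for N_Q and N_T;
    # N_C then follows from the conservation law N_C = 1 - N_Q - N_T.
    dNQ = -(T_TQ + T_CQ) * NQ + T_QT * NT
    dNT = -(T_QT + T_CT) * NT + T_TQ * NQ
    return dNQ, dNT

NQ, NT = 0.4, 0.6          # hypothetical initial fractions, N_C(0) = 0
dt, steps = 1e-3, 20000    # integrate up to t = 20 t0 with RK4
for _ in range(steps):
    k1 = rhs(NQ, NT)
    k2 = rhs(NQ + dt / 2 * k1[0], NT + dt / 2 * k1[1])
    k3 = rhs(NQ + dt / 2 * k2[0], NT + dt / 2 * k2[1])
    k4 = rhs(NQ + dt * k3[0], NT + dt * k3[1])
    NQ += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    NT += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

NC = 1.0 - NQ - NT
print(NQ, NT, NC)          # N_Q and N_T decay; N_C grows toward 1
```

Independent of the placeholder values, the solution conserves the total probability and drives all atoms into the continuum state at long times, as expected for pure desorption.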
The solution of these equations is computationally cheap and allows one to propagate the system, in principle, to arbitrarily long times. This could be of relevance for experiments where the surface properties change in time, e.g. due to an AC field or in the course of continuous sputter deposition. Note that the transition rates $T^{\rm EQ}_{\alpha \beta}$ obtained from the MD simulations are not approximations but have, in principle, first principle quality. \section{Conclusion and outlook}\label{s:conclusion1} The dramatic increase of computation power holds high promise for an improved simulation of plasma-surface interaction processes. This has the potential for major advances of this field because most current models are phenomenological, using surface coefficients that are poorly known both experimentally and theoretically. Moreover, these parameters, even if they exist, may carry an (unknown) dependence on the surface conditions or the plasma parameters. Here, we discussed the application of first principle simulations to plasma-surface interaction, where we concentrated on semiclassical molecular dynamics simulations in which electronic degrees of freedom are not explicitly resolved. This is well justified for processes involving neutral particles of low energy where excitation or ionization can be neglected. Such MD simulations using accurate force fields as an input have been successfully used for more than two decades. However, the required small time steps (of the order of one femtosecond) prohibit, in most cases, reaching time scales for which experimental data are available or which are of relevance for low-temperature plasma experiments. In this paper we discussed strategies for overcoming this limitation, cf. Fig.~\ref{fig:scheme}.
As a first concept, we briefly reviewed acceleration techniques and presented one recent example---\textit{Selective Process Acceleration (SPA)}~\cite{abraham_jap_16}---which is capable of achieving a boost factor of more than $10^9$. This was applied to cluster growth on a polymer substrate during sputter deposition. It was demonstrated that a controlled increase of the deposition and diffusion rates, such that their ratio remains constant and on the level of the experiment, allows for a remarkable acceleration without loss of accuracy, for times up to about four minutes. These simulations can be an important piece of future plasma-surface simulations, using plasma data for the deposition rates [the flux $\textbf{J}_A^p$ in Fig. \ref{fig:theory}] as an input and delivering the time-dependent modifications of the surface morphology [labeled ``SM'' in Fig. \ref{fig:theory}] as an output for surface physics simulations. Our second and main focus was on coarse graining techniques that attempt to combine two (or more) descriptions of different spatial and temporal resolution. Such concepts have existed for many years in physics, chemistry, material science and technology and are often summarized under the headline \textit{multiscale modeling}. For example, recent progress in the field of chemistry has been reviewed in the Nobel lectures of Levitt, Karplus and Warshel, who shared the Nobel prize in chemistry in 2013. The method we have been advocating in this paper---\textit{Dynamical Freeze out of Dominant Modes} (DFDM)---concentrates on the idea of extending first principle MD simulations to long time scales without loss of accuracy. This method requires the derivation of model equations that are formally exact at sufficiently long time scales. Fortunately, generations of researchers have provided us with ample candidates for such equations, which include kinetic equations, hydrodynamic equations, rate equations or a master equation.
An (incomplete) overview was presented in table~\ref{tab1}. While these models are traditionally used within certain approximation schemes for the relevant parameters, here we suggest avoiding any approximation. Instead we suggest using \textit{exact input data} for the relevant transport coefficients or transition rates and providing them by MD simulations. The idea of DFDM was demonstrated for the example of atom scattering from a metal surface, Sec.~\ref{ss:md-re}. It was shown that the use of MD input data in a system of rate equations allows one to describe the sticking dynamics practically exactly. The first principle MD solutions go over smoothly into the results of the rate equations, which can be extended to macroscopic time scales. The main requirement for our approach to be feasible is that the time scale after which the model is valid is short enough to be accessible by MD simulations. In the case of atom sticking this time is the relaxation time $t_{\rm rel}$. Recalling again the overview given in Fig.~\ref{fig:theory}, these simulations are capable of using the fluxes $\textbf{J}^p_A$ of atoms as an input from a plasma simulation and of returning, as an output, the energy or momentum resolved fluxes $\textbf{J}^s_A$ of atoms that leave the surface. There are various ways to extend the present idea. If the surface is inhomogeneous, a straightforward generalization would be to include the space dependence in the densities and the rates. Then, the rate equations turn into hydrodynamic equations. Furthermore, the effects of a plasma environment, such as characteristic particle fluxes or an adsorbate-covered surface, are straightforwardly included in our scheme, as discussed in Ref.~\cite{paper1}. To go beyond the problem of neutral atom sticking, the present theoretical idea can be straightforwardly extended to other analytical models, which are listed in table~\ref{tab1}.
One example is hydrodynamic equations for the particle densities and fluxes, e.g. in the plasma bulk or in the sheath. Instead of using an approximate decoupling, e.g. by using a model equation of state, one can again use MD data as an input. Furthermore, the idea of DFDM can also be extended to quantum systems where the dynamics are treated by quantum molecular dynamics or time-dependent DFT. These \textit{ab initio} input data can again be linked to macroscopic model equations such as the diffusion equation, as in Ref.~\cite{franke_prb_10}, hydrodynamic equations or quantum hydrodynamics, e.g. \cite{michta_cpp15, zhandos_pop18}. Finally, let us return to the overview of theoretical methods for the plasma--solid interface that was sketched in Fig.~\ref{fig:theory}. While, in this paper, we have concentrated on molecular dynamics, the box in the center indicates that there is a much broader arsenal of tools available. In fact, there is no single method that allows one to describe all processes. In particular, for the description of electrons and ions crossing the interface, semiclassical MD fails, and quantum approaches are necessary. This concerns the neutralization of low-energy ions, e.g. \cite{pamperin_prb15}, and their stopping in the solid, as well as the electron dynamics across the interface, e.g. \cite{bronold_prl15}. Here nonequilibrium quantum methods such as time-dependent DFT or nonequilibrium Green functions simulations, e.g. \cite{zhao_prb_15, balzer_prb_16, balzer_lnp_13}, have to be used. These methods are extremely expensive, and the goal will have to be to use their results as input to simpler approaches such as quantum kinetic equations \cite{bonitz_qkt} or improved molecular dynamics simulations. Moreover, to properly capture the influence of the plasma on the solid, these surface simulations have to be linked to fluid or kinetic simulations of the plasma, as indicated by the arrows in Fig.~\ref{fig:theory}.
This connection may be summarized by adding a fifth step to the list of Sec.~\ref{s:intro}, meaning that plasma simulations will have to be supplied with accurate fluxes of electrons, ions and neutrals leaving the surface and, at the same time, will have to provide the fluxes that impact the solid to the surface simulations. As a result of the fluxes across the interface, the specific plasma conditions are expected to influence the surface properties, such as surface roughness, morphology or chemical reactivity. Ultimately, an integrated modeling of the plasma and the solid surface will be required \cite{interface} to overcome the trial and error character of many experiments and to achieve a predictive modeling of the relevant processes. \section*{Acknowledgements} We acknowledge discussions with E. Pehlke and B. Hartke. \section*{References}
\section*{Introduction} Let $G$ be a residually finite group with normal subgroups $G\supset G_1\supset G_2\supset\cdots$ such that $\cap_{n\in\mathbb N}G_n=\{e\}$. Let $X_n=G/G_n$ be the finite quotient group, with the natural action of $G$ denoted by $(g,x)\to gx$, $g\in G$, $x\in X_n$. Let $\nu_n=|X_n|$, where $|A|$ denotes the number of elements in the set $A$. A sequence $(\gamma_n)_{n\in\mathbb N}$ of integers such that $1\leq\gamma_n\leq \nu_n$ for any $n\in\mathbb N$ will be referred to as a {\it growth function}. It is well known that the regular representation $\lambda_G$ of $G$ is weakly contained in the direct sum of the quasi-regular representations $\lambda_n$, which factorize through the quotients $G/G_n$, i.e. $\|\lambda_G(a)\|\leq\sup_{n\in\mathbb N}\|\lambda_n(a)\|$ for any $a\in \mathbb C[G]$. In view of the abundance of exotic group norms \cite{Brown-Guentner,Wiersma}, we were interested in finding intermediate norms between the norms $\|\lambda_G(\cdot)\|$ and $\sup_{n\in\mathbb N}\|\lambda_n(\cdot)\|$ (in the case of non-amenable $G$ --- otherwise these two norms coincide). We have not succeeded, but we have constructed a family $\pi_\gamma$, $\gamma\in Z$, of representations on Hilbert spaces $H_\gamma$, where $Z$ is a partially ordered set of growth functions, such that $\pi_\gamma$ is weakly equivalent to $\lambda$ for the minimal growth function ($\gamma_n=1$ for any $n\in\mathbb N$), and to $\oplus_{n\in\mathbb N}\lambda_n$ for the maximal growth function ($\gamma_n=\nu_n$ for any $n\in\mathbb N$). The most interesting case is that of intermediate growth. We prove an estimate on the norm of $\pi_\gamma$ when the growth function $\gamma$ is sufficiently slow. We also determine, in two cases, whether the trivial representation is weakly contained in $\pi_\gamma$.
For a property $(\tau)$ group $G$ the answer is negative for all growth functions slower than the maximal one, but for a certain choice of finite index subgroups of a free group the answer is positive for some $\gamma$ of intermediate growth. We were not able to check whether the norms $\|\pi_\gamma(\cdot)\|$ are all different. We hope they are; if not, then there should be a boundary growth function such that slower growth functions give the regular norm, and faster ones give the norm $\sup_{n\in\mathbb N}\|\lambda_n(\cdot)\|$. \section{Construction of a family of representations} In this section we modify Calkin's construction \cite{Calkin} to obtain representations of certain quotient $C^*$-algebras. Let $l^2(X_n)$, $n\in\mathbb N$, be the finite-dimensional Hilbert spaces with respect to the atomic measure $\mu$ given by $\mu(x)=1$ for any $x\in X_n$. Set $\sigma=\oplus_{n\in\mathbb N}\lambda_n$, and let $\mathcal A=C^*_{\sigma}(G)$ be the $C^*$-algebra generated by all $\sigma(g)$, $g\in G$. Note that $\sigma$ is a representation of $G$ on the Hilbert space $\oplus_{n\in\mathbb N}l^2(X_n)$, and there is a canonical inclusion $\mathcal A\subset\prod_{n\in\mathbb N}\mathbb B(l^2(X_n))\subset\mathbb B(\oplus_{n\in\mathbb N}l^2(X_n))$. Fix some non-principal ultrafilter $\mathcal U$ on $\mathbb N$ and let $\tau_n$ be the trace on $\mathbb B(l^2(X_n))$ normalized by $\tau_n(\id)=1$. Let $J\subset\prod_{n\in\mathbb N}\mathbb B(l^2(X_n))$ be the ideal consisting of all sequences $(m_n)_{n\in\mathbb N}$, $m_n\in\mathbb B(l^2(X_n))$, such that $\lim_{\mathcal U}\tau_n(m_n^*m_n)=0$. Then $I=J\cap \mathcal A$ is an ideal in $\mathcal A$.
Following Calkin, set $$ \widetilde{H}=\{\xi=(\xi_n)_{n\in\mathbb N}:\xi_n\in l^2(X_n),\ \sup_{n\in\mathbb N} \|\xi_n\|<\infty\} $$ and define a sesquilinear form on $\widetilde{H}$ by $\langle \xi,\eta\rangle=\lim_{\mathcal U}\langle\xi_n,\eta_n\rangle$, where $\xi_n,\eta_n\in l^2(X_n)$, $\xi=(\xi_n)_{n\in\mathbb N},\eta=(\eta_n)_{n\in\mathbb N}\in\widetilde{H}$. Set $\widetilde{H}_0=\{\xi\in\widetilde{H}:\lim_{\mathcal U}\langle\xi_n,\xi_n\rangle=0\}\subset\widetilde{H}$. When passing to the quotient linear space $\widetilde{H}/\widetilde{H}_0$, the sesquilinear form becomes positive-definite, hence defines an inner product. The completion of $\widetilde{H}/\widetilde{H}_0$ with respect to the corresponding norm is a (non-separable) Hilbert space $\widehat H$. The sequence $\lambda_n$, $n\in\mathbb N$, defines a representation $\widehat \lambda$ of the $C^*$-algebra $\mathcal A$ on $\widetilde H$ by $\widehat\lambda(g)(\xi_n)=(\lambda_n(g)\xi_n)$, and it is well known, due to Calkin \cite{Calkin}, that this representation annihilates the ideal $I_0$ in $\mathcal A$ consisting of sequences $(m_n)_{n\in\mathbb N}$ that vanish at infinity, i.e. $\lim_{n\to\infty}\|m_n\|=0$. This ideal is smaller than $I$. One of our aims is to find a subspace $H_1\subset \widehat H$ invariant under $\widehat{\lambda}$ such that $\widehat{\lambda}|_{H_1}$ annihilates $I$. Recall that $\nu_n=|X_n|$ and, by definition, $\frac{\nu_{n+1}}{\nu_n}\geq 2$, so $\lim_{n\to\infty}\nu_n=\infty$. Let $Z$ denote the set of all non-decreasing integer-valued sequences $\gamma=(\gamma_n)_{n\in\mathbb N}$ such that $1\leq \gamma_n\leq \nu_n$ for any $n\in\mathbb N$. For $\gamma,\gamma'\in Z$, we write $\gamma\leq\gamma'$ if there is $\varepsilon>0$ such that $\varepsilon\gamma_n\leq\gamma'_n$ for any $n\in\mathbb N$, and $\gamma\sim\gamma'$ if $\gamma\leq\gamma'$ and $\gamma'\leq\gamma$.
The set $Z$ contains the minimal element $\iota$ such that $\iota_n=1$ for any $n\in\mathbb N$, and the maximal element $\nu$ ($\nu_n=|X_n|$ for any $n\in\mathbb N$). We write $\gamma\prec\gamma'$ if $\lim_{\mathcal U}\frac{\gamma_n}{\gamma'_n}=0$. Fix $\gamma\in Z$. Let $\widetilde H^{(k)}_\gamma\subset\widetilde H$ be the subset of all sequences $(\xi_n)$ such that $|\supp \xi_n|\leq k\gamma_n$ for each $n$. This is not a linear subspace, but $\widetilde H_\gamma=\cup_{k=1}^\infty \widetilde H^{(k)}_\gamma$ is. Indeed, if $(\xi_n)\in\widetilde H^{(k)}_\gamma$ and $(\xi'_n)\in\widetilde H^{(k')}_\gamma$ then $(\xi_n+\xi'_n)\in\widetilde H^{(k+k')}_\gamma$. Let $H_\gamma$ denote the closure of $(\widetilde{H}_\gamma+\widetilde H_0)/\widetilde H_0\subset\widehat H$. Note that if $\gamma,\gamma'\in Z$, $\gamma\leq\gamma'$, then $H_\gamma\subset H_{\gamma'}$, hence $H_\gamma=H_{\gamma'}$ if $\gamma\sim\gamma'$. If $\gamma=\nu$ then $H_\nu=\widehat H$. \section{Some properties of the Hilbert spaces $H_\gamma$} \begin{lem} Let $\gamma\prec\gamma'$. Then $H_\gamma\neq H_{\gamma'}$. \end{lem} \begin{proof} Assume the contrary, and let $E'_n\subset X_n$, $|E'_n|=\gamma'_n$. Take $\xi=(\xi_n)\in\widetilde{H}$ with $\xi_n=\frac{1}{\sqrt{\gamma'_n}}\chi_{E'_n}$, where $\chi_E$ denotes the characteristic function of the set $E$. Then, for any $\varepsilon\in(0,1)$ there exist $k\in\mathbb N$ and $\eta\in \widetilde H^{(k)}_{\gamma}$ such that $\|\xi-\eta\|<\varepsilon$. Let $E_n=\supp\eta_n$, $|E_n|\leq k\gamma_n$. Then \begin{eqnarray*} \frac{\gamma'_n-k\gamma_n}{\gamma'_n}&\leq& \frac{|E'_n\setminus E_n|}{\gamma'_n}=\sum_{x\in E'_n\setminus E_n}|\xi_n(x)|^2\\ &\leq&\sum_{x\in E'_n\setminus E_n}|\xi_n(x)|^2+\sum_{x\in E_n}|\xi_n(x)-\eta_n(x)|^2=\|\xi_n-\eta_n\|^2<\varepsilon^2, \end{eqnarray*} hence $k\frac{\gamma_n}{\gamma'_n}>1-\varepsilon^2$. Then $0=k\lim_{\mathcal U}\frac{\gamma_n}{\gamma'_n}\geq 1-\varepsilon^2$, a contradiction with $\gamma\prec\gamma'$.
\end{proof} \begin{remark}\label{Remark} Note that the same argument shows that $H_\gamma$ is strictly greater than the closure of the union of all $H_{\gamma'}$ over all $\gamma'$ such that $\gamma'\prec\gamma$. \end{remark} \begin{lem}\label{orthogonal} Let $\eta=(\eta_n)_{n\in\mathbb N}\in\widetilde H$. If $\eta\in \widetilde H_1$ and $\lim_{\mathcal U}\|\eta_n\|_\infty=0$ then $\eta\in\widetilde H_0$. \end{lem} \begin{proof} It suffices to prove the lemma in the case when $\eta\in\widetilde H^{(k)}_1$ for some $k\in\mathbb N$. This means that $|\supp\eta_n|\leq k$ for any $n\in\mathbb N$. Then $\|\eta_n\|^2\leq k\|\eta_n\|_\infty^2$. \end{proof} Remark \ref{Remark} shows that the family of Hilbert spaces is not lower semicontinuous. Now let us show that it is upper semicontinuous. \begin{prop} For any $\gamma^0\in Z$ one has $H_{\gamma^0}=\cap_{\gamma\succ\gamma^0}H_\gamma$. \end{prop} \begin{proof} Let $\xi\in \cap_{\gamma\succ\gamma^0}H_\gamma$ be represented by a sequence $(\xi_n)_{n\in\mathbb N}$. For any $\varepsilon>0$ and for any $n\in\mathbb N$, consider all $\eta_n\in l^2(X_n)$ such that $\|\xi_n-\eta_n\|<\varepsilon$. Among all these $\eta_n$ one can find $\eta'_n$ with the minimal possible size of support. Set $\alpha^\varepsilon_n=|\supp\eta'_n|$. Since $\xi\in \cap_{\gamma\succ\gamma^0}H_\gamma$, for any $\varepsilon>0$ and for any $\gamma\succ\gamma^0$ there exist $k\in\mathbb N$ and $\zeta\in \widetilde H^{(k)}_\gamma$ such that $\|\xi-\zeta\|<\varepsilon$. In particular, $|\supp\zeta_n|\leq k\gamma_n$ for all $n\in\mathbb N$. It follows from $\|\xi-\zeta\|<\varepsilon$ that there exists $\mathbb A\in\mathcal U$ such that $\|\xi_n-\zeta_n\|<\varepsilon$ for any $n\in\mathbb A$. By the minimality of $\alpha^\varepsilon_n$, $\alpha^\varepsilon_n\leq|\supp\zeta_n|\leq k\gamma_n$ for any $n\in\mathbb A$.
Thus, for any $\varepsilon>0$ and any $\gamma\succ\gamma^0$ there exist $k\in\mathbb N$ and $\mathbb A\in\mathcal U$ such that $\alpha^\varepsilon_n\leq k\gamma_n$ for any $n\in\mathbb A$. Suppose that $\lim_{\mathcal U}\frac{\alpha^\varepsilon_n}{\gamma^0_n}=\infty$ for some $\varepsilon>0$. Then set $\gamma'_n=\sqrt{\alpha^\varepsilon_n\gamma^0_n}$. By assumption, $\lim_{\mathcal U}\frac{\gamma'_n}{\gamma^0_n}=\lim_{\mathcal U}\sqrt{\frac{\alpha^\varepsilon_n}{\gamma^0_n}}=\infty$, hence $\gamma'\succ\gamma^0$. As we have already shown, there exist $k\in\mathbb N$ and $\mathbb A\in\mathcal U$ such that $\alpha^\varepsilon_n\leq k\gamma'_n=k\sqrt{\alpha^\varepsilon_n\gamma^0_n}$ for any $n\in\mathbb A$, hence $\alpha^\varepsilon_n\leq k^2\gamma^0_n$. This contradicts our assumption. Thus, for any $\varepsilon>0$ we have $\lim_{\mathcal U}\frac{\alpha^\varepsilon_n}{\gamma^0_n}<\infty$, i.e. for any $\varepsilon>0$ there exist $C\in\mathbb R$ and $\mathbb A\in\mathcal U$ such that $\alpha^\varepsilon_n\leq C\gamma^0_n$ for any $n\in\mathbb A$. Set $\eta_n=\left\lbrace\begin{array}{cl}\eta'_n&\mbox{if\ }n\in\mathbb A;\\0&\mbox{if\ }n\notin\mathbb A.\end{array}\right.$ Then $\eta=(\eta_n)_{n\in\mathbb N}\in H_{\gamma^0}$, and $\|\xi-\eta\|=\lim_{\mathcal U}\|\xi_n-\eta_n\|<\varepsilon$. As $\varepsilon$ is arbitrary, this implies that $\xi\in H_{\gamma^0}$ and we are done. \end{proof} \begin{lem}\label{invariant} The subspaces $H_\gamma$ are invariant under $\widehat{\lambda}$. \end{lem} \begin{proof} Obvious: translation by $g\in G$ of functions in $l^2(X_n)$ does not change the size of their supports. \end{proof} Thus, we can restrict the representation $\widehat\lambda$ to $H_\gamma$ for any $\gamma\in Z$. Denote this restriction by $\pi_\gamma=\widehat\lambda|_{H_\gamma}$. Note that if $\gamma\leq\gamma'$ then $H_\gamma\subset H_{\gamma'}$, hence the representation $\pi_{\gamma'}$ contains the representation $\pi_\gamma$.
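As an aside, the order relations on growth functions introduced above can be illustrated on a finite truncation. The sketch below hypothetically takes $G=\mathbb Z$ with $G_n=2^n\mathbb Z$ (so $\nu_n=2^n$) and replaces the ultrafilter limit by the ordinary behavior of the tail; it is only a numerical illustration of the definitions, not part of the theory:

```python
# Finite truncations of three growth functions for nu_n = 2^n:
# the minimal element iota, an intermediate gamma_n = n, and the
# maximal element nu.
n_max = 60
iota  = [1] * n_max                        # minimal growth
gamma = [n + 1 for n in range(n_max)]      # intermediate growth
nu    = [2 ** (n + 1) for n in range(n_max)]  # maximal growth

def leq(a, b, eps=1.0):
    # gamma <= gamma': there exists eps > 0 with eps*a_n <= b_n for all n
    # (here checked for one chosen witness eps on the truncation).
    return all(eps * an <= bn for an, bn in zip(a, b))

def strictly_slower(a, b):
    # gamma "prec" gamma': a_n / b_n -> 0 (here: tail ratio is tiny).
    return a[-1] / b[-1] < 1e-6

print(leq(iota, gamma), leq(gamma, nu))    # both hold
print(strictly_slower(gamma, nu))          # gamma is strictly slower
print(strictly_slower(iota, iota))         # no sequence is slower than itself
```

Of course, a finite truncation can only witness, not prove, the asymptotic relations; the ultrafilter limit is essential in the actual construction.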
\section{Case of maximal growth} \begin{thm} The representations $\pi_\nu$ and $\oplus_{n\in\mathbb N}\lambda_n$ are weakly equivalent. \end{thm} \begin{proof} When $\gamma=\nu$ is maximal, there are no restrictions on the size of supports of vectors in $\widetilde H$, so $\pi_\nu=\widehat\lambda$. The classical result of J. W. Calkin \cite{Calkin} states that the kernel of the representation $\widehat\lambda$ of $\mathcal A$ on $\widehat H$ is $\mathcal A\cap\mathbb K(\oplus_{n\in\mathbb N}l^2(X_n))$ (recall that $\mathcal A$ is generated by $\oplus_{n\in\mathbb N}\lambda_n(g)$, $g\in G$). As $\lambda_n$ is a subrepresentation of any $\lambda_m$ with $m>n$, each $\lambda_n$ appears infinitely often in $\sigma=\oplus_{n\in\mathbb N}\lambda_n$, hence $\|\widehat\lambda(\cdot)\|=\sup_{n\in\mathbb N}\|\lambda_n(\cdot)\|$. \end{proof} \section{Case of very slow growth} \begin{thm} There exists a growth function $\gamma$ with $\lim_{n\to\infty}\gamma_n=\infty$ such that $\pi_{\gamma'}$ is weakly equivalent to the regular representation $\lambda$ of $G$ for any $\gamma'\leq\gamma$. In particular, $\pi_\iota$ is weakly equivalent to $\lambda$. \end{thm} \begin{proof} It suffices to prove the theorem in the case when $G$ is finitely generated. Let $q_n:G\to G/G_n=X_n$ denote the quotient maps, let $l$ (resp. $l_n$) be the word length function on $G$ (resp. on $X_n$) with respect to a fixed set $S\subset G$ (resp. $q_n(S)\subset X_n$) of generators, and let $d(g,h)=l(g^{-1}h)$ (resp. $d_n(x,y)=l_n(x^{-1}y)$) determine a left invariant metric on $G$ (resp. on $X_n$). Let $B_r\subset G$ denote the ball of radius $r$. Let $a\in\mathbb C[G]$, $\supp a\subset B_r$. For any $\gamma\in Z$ and for any $\varepsilon>0$ there exist $k\in\mathbb N$ and $\xi\in\widetilde H^{(k)}_\gamma$ such that $\|\xi_n\|=1$ for any $n\in\mathbb N$ and \begin{equation}\label{norme} \|\pi_\gamma(a)\|^2<\|\pi_\gamma(a)\xi\|^2+\varepsilon. \end{equation} Let $\supp\xi_n=A\subset X_n$.
By assumption, $|A|\leq k\gamma_n$ for any $n\in\mathbb N$. Our aim is to show that although $A$ may be scattered over $X_n$, one can replace $\xi_n$ by other functions $\zeta_n$ such that $\|\pi_\gamma(a)\|^2<\|\pi_\gamma(a)\zeta\|^2+\varepsilon$, $\|\zeta_n\|=1$, $n\in\mathbb N$, and such that $\supp\zeta_n$ lies in a ball of a controlled radius. Decompose the set $A$ into a disjoint union of subsets $A=A_1\sqcup\cdots\sqcup A_m$ such that \begin{itemize} \item[(A1)] each $A_i$, $i=1,\ldots,m$, is $3r$-connected: any two points $x,y\in A_i$ can be joined by a chain $x=x_0,x_1,\ldots,x_s=y$ of points of $A_i$ with $d_n(x_{j-1},x_j)<3r$, $j=1,\ldots,s$; \item[(A2)] if $x\in A_i$, $y\in A_j$, $i\neq j$, then $d_n(x,y)\geq 3r$. \end{itemize} Such a decomposition exists: e.g., take the $A_i$ to be the connected components of the graph on $A$ whose edges join pairs of points at distance less than $3r$. Set $\xi_n|_{A_i}=\eta_i$. Then $\|\eta_1\|^2+\cdots+\|\eta_m\|^2=\|\xi_n\|^2=1$ and, since $\supp a^*a\subset B_{2r}$, (A2) implies that $\langle\lambda_n(a^*a)\eta_i,\eta_j\rangle=0$ when $i\neq j$, hence $$ \|\lambda_n(a)\xi_n\|^2=\sum_{i=1}^m\langle\lambda_n(a^*a)\eta_i,\eta_i\rangle. $$ Suppose that for any $i=1,\ldots,m$ one has \begin{equation}\label{assum} \langle\lambda_n(a^*a)\eta_i,\eta_i\rangle\leq (\|\lambda_n(a^*a)\|-\varepsilon)\|\eta_i\|^2. \end{equation} Then, summing (\ref{assum}) up, we get $$ \|\lambda_n(a)\xi_n\|^2=\sum_{i=1}^m\langle\lambda_n(a^*a)\eta_i,\eta_i\rangle\leq(\|\lambda_n(a^*a)\|-\varepsilon)\sum_{i=1}^m\|\eta_i\|^2=\|\lambda_n(a^*a)\|-\varepsilon. $$ Passing to the limit over $\mathcal U$, we get $$ \|\pi_\gamma(a)\xi\|^2\leq\|\pi_\gamma(a)\|^2-\varepsilon, $$ which contradicts (\ref{norme}). Thus, for any $n\in\mathbb N$, there exists $i$ such that $$ \langle\lambda_n(a^*a)\eta_i,\eta_i\rangle>(\|\lambda_n(a^*a)\|-\varepsilon)\|\eta_i\|^2. $$ Note that $|\supp\eta_i|=|A_i|\leq|A|\leq k\gamma_n$, hence, by (A1), there exists a ball $\widetilde B_{r'}(z_n)$ of radius $r'=k\gamma_n\cdot 3r$ centered at some $z_n\in X_n$, such that $\supp\eta_i\subset \widetilde B_{r'}(z_n)$ (we use tilde to distinguish balls in $X_n$ from balls in $G$). Set $\zeta_n=\frac{\eta_i}{\|\eta_i\|}$.
Then $\supp\zeta_n\subset \widetilde B_{r'}(z_n)$, $|\supp\zeta_n|\leq k\gamma_n$ for any $n\in\mathbb N$, and there exists $\mathbb A\in\mathcal U$ such that $\|\lambda_n(a)\zeta_n\|>\|\lambda_n(a)\|-\varepsilon$ for any $n\in\mathbb A$. Note that $y\in \widetilde B_{r'}(z_n)$ if $l_n(z_n^{-1}y)\leq r'$. Let $\alpha_n=\min_{g\in G_n,g\neq e}l(g)$. It follows from $\cap_{n=1}^\infty G_n=\{e\}$ that $\lim_{n\to\infty}\alpha_n=\infty$. The following statement is folklore. \begin{lem} Let $g\in G$, $z=q_n(g)$, and let $B_{R}(g)\subset G$ be the ball of radius $R$ centered at $g$. Then $q_n$ maps $B_R(g)$ isometrically onto $\widetilde B_R(z)$ for any $R<\alpha_n/4$. \end{lem} \begin{proof} Let $h,k\in B_R(g)$. Then $d(h,k)=l(h^{-1}k)\leq 2R$. Note that for any $x\in G$, $l(x)\geq l_n(q_n(x))$, and $l_n(q_n(x))=\min_{y\in G_n}l(xy)$. Suppose that there exists $y\in G_n$, $y\neq e$, such that $l(xy)<l(x)$. Then, by the triangle inequality, $l(xy)\geq l(y)-l(x)$, hence $l(y)-l(x)<l(x)$, or, equivalently, $l(x)>l(y)/2$. Taking $x=h^{-1}k$, we get $2R\geq d(h,k)=l(x)>l(y)/2\geq \alpha_n/2$, hence $R>\alpha_n/4$ --- a contradiction. Thus, $d_n(q_n(h),q_n(k))=l_n(q_n(x))=l(x)=d(h,k)$. \end{proof} Let $g_n\in G$ satisfy $q_n(g_n)=z_n$, let $R<\alpha_n/4$, and let $u_n:l^2(\widetilde B_R(z_n))\to l^2(G)$ be an isometry defined by $$ u_n(\varphi_n)(h)=\left\lbrace\begin{array}{cl}\varphi_n(q_n(h))&\mbox{\ if\ }h\in B_R(g_n);\\0&\mbox{\ if\ }h\notin B_R(g_n).\end{array}\right. $$ If the supports of $\varphi_n$ and of $\lambda_n(g)\varphi_n$ lie in $\widetilde B_R(z_n)$ then $u_n\lambda_n(g)\varphi_n=\lambda(g)u_n\varphi_n$. Both $\zeta_n$ and $\lambda_n(a)\zeta_n$ have supports in the ball $\widetilde B_{r'+r}(z_n)$, hence, for $r'+r<\alpha_n/4$ one has $u_n\lambda_n(a)\zeta_n=\lambda(a)u_n\zeta_n$, so $\|\lambda_n(a)\zeta_n\|=\|\lambda(a)\zeta'_n\|$, where $\zeta'_n=u_n\zeta_n\in l^2(G)$.
Thus, \begin{equation}\label{es1} \|\pi_\gamma(a)\|^2<\|\lambda_n(a)\zeta_n\|^2+\varepsilon=\|\lambda(a)\zeta'_n\|^2+\varepsilon \end{equation} for any $n\in\mathbb A$ when $r'+r<\alpha_n/4$. For any $\varepsilon>0$, there exists $\mathbb A'\in\mathcal U$ such that \begin{equation}\label{es2} \|\lambda(a)\zeta'_n\|^2<\|\lambda(a)\|^2+\varepsilon \end{equation} for any $n\in\mathbb A'$. Choose $\gamma$ such that $\lim_{\mathcal U}\frac{\gamma_n}{\alpha_n}=0$. Since $r'=3rk\gamma_n$, there exists $\mathbb A''\in\mathcal U$ such that $r'+r< \alpha_n/4$ holds for any $n\in\mathbb A''$. As the set $\mathbb A\cap\mathbb A'\cap\mathbb A''$ lies in $\mathcal U$, hence is not empty, it follows from (\ref{es1}) and (\ref{es2}) that $$ \|\pi_\gamma(a)\|^2<\|\lambda(a)\|^2+2\varepsilon $$ holds for any $a\in\mathbb C[G]$ with $\supp a\subset B_r$. As $r$ and $\varepsilon$ are arbitrary, we conclude that $\|\pi_\gamma(a)\|\leq\|\lambda(a)\|$ for any $a\in\mathbb C[G]$. \end{proof} \section{Case of small growth} Let $U_{n-1}:l^2(X_{n-1})\to l^2(X_n)$ be the isometry defined by $$ U_{n-1}(\xi_{n-1})(x)=\frac{1}{\sqrt{\nu_n/\nu_{n-1}}}\xi_{n-1}(q_n(x)), $$ where $q_n:G/G_n\to G/G_{n-1}$ is the quotient map and $\nu_n=|X_n|$. This allows us to consider $H_{n-1}$ as a subspace of $H_n$. Note that $\lambda_{n-1}$ is a subrepresentation of $\lambda_n$. Set $\rho_n=\lambda_n\ominus\lambda_{n-1}$. This is a representation of $G$ on $l^2(X_n)\ominus l^2(X_{n-1})$. \begin{thm} Let $\gamma\prec\gamma'$, where $\gamma'_n=\frac{\nu_n}{\nu_{n-1}}$. Then $\|\pi_\gamma(a)\|\leq \limsup_{n\to\infty}\|\rho_n(a)\|$ for any $a\in\mathbb C[G]$. \end{thm} \begin{proof} We are going to construct a Hilbert space $L$ such that $H_\gamma\subset L\subset H_\nu=\widehat H$ and a representation $\rho$ of $G$ on $L$ such that $\rho=\widehat\lambda|_L$ and $\|\rho(a)\|=\limsup_{n\to\infty}\|\rho_n(a)\|$ for any $a\in\mathbb C[G]$. This would obviously imply that $\rho$ contains $\pi_\gamma$, hence the claim of the theorem.
In order to construct $L$ let us consider the set $\widetilde L$ of all sequences $(\xi_n)_{n\in\mathbb N}$ such that $\xi_n\in l^2(X_n)\ominus l^2(X_{n-1})$ and the norms $\|\xi_n\|$ are uniformly bounded, with the degenerate inner product as before, which becomes positive definite after taking the quotient modulo sequences with $\lim_{\mathcal U}\|\xi_n\|^2=0$. Then $\widetilde L\subset\widetilde H$, and we define $L$ as the closure of $(\widetilde L+\widetilde H_0)/\widetilde H_0$ in $\widehat H$. Then $\rho(g)(\xi_n)=(\rho_n(g)\xi_n)$, $g\in G$, defines the representation $\rho$ on $L$. Let $\sigma_k$, $k\in\mathbb N$, be the sequence of irreducible representations of $G$ that appear as direct summands in $\lambda_1,\lambda_2,\ldots$. Let $\sigma_k$ be a subrepresentation of $\lambda_{n-1}$. As the multiplicity of $\sigma_k$ in $\lambda_{n-1}$ and in $\lambda_n$ is the same and equals its dimension, it is not contained in $\rho_n$. Thus, each $\sigma_k$ appears only in one of $\rho_1,\rho_2,\ldots$. Let $\mathcal B$ be the $C^*$-algebra generated by all $\oplus_{n\in\mathbb N}\rho_n(g)$, $g\in G$, in $H'=\oplus_{n\in\mathbb N}\bigl(l^2(X_n)\ominus l^2(X_{n-1})\bigr)$. Then $\|\rho(a)\|$ equals the norm of $a$ in $\mathcal B/\mathcal B\cap\mathbb K(H')$ by Theorem 5.3 of \cite{Calkin}. Since each $\sigma_k$ appears only in one of $\rho_1,\rho_2,\ldots$, the latter norm equals $\limsup_{k\to\infty}\|\sigma_k(a)\|=\limsup_{n\to\infty}\|\rho_n(a)\|$. To finish the argument, it remains to show that $H_\gamma\subset L$. Let $P_n:l^2(X_n)\to l^2(X_n)\ominus l^2(X_{n-1})$ be the orthogonal projection, $Q_n=1-P_n$. We claim that $\lim_{\mathcal U}\|\xi_n-P_n\xi_n\|=0$ for any $\xi\in H_\gamma$. Let $\xi\in\widetilde H^{(k)}_\gamma$ for some $k\in\mathbb N$. Let $x\in X_{n-1}=G/G_{n-1}$, let $\delta_x\in l^2(X_{n-1})$ be the corresponding delta-function, and set $\eta_x=U_{n-1}\delta_x\in l^2(X_n)$. Then $Q_n\xi_n=\sum_{x\in X_{n-1}}\langle\eta_x,\xi_n\rangle\eta_x$.
Note that $\eta_x(y)=\left\lbrace\begin{array}{cl}\frac{1}{\sqrt{\nu_n/\nu_{n-1}}}&\mbox{\ if\ }q_n(y)=x;\\0&\mbox{\ if\ }q_n(y)\neq x.\end{array}\right.$ Note also that $\supp\eta_x\cap\supp\eta_y=\emptyset$ when $x\neq y$. Let $A_x=\supp\xi_n\cap\supp\eta_x\subset X_n$. Then $\supp\xi_n=\sqcup_{x\in X_{n-1}}A_x$, hence $\sum_{x\in X_{n-1}}|A_x|\leq k\gamma_n$. By the Cauchy--Schwarz inequality, \begin{eqnarray*} \|Q_n\xi_n\|^2&=&\sum_{x\in X_{n-1}}|\langle\eta_x,\xi_n\rangle|^2\leq \sum_{x\in X_{n-1}}\left(\sum_{y\in A_x}|\eta_x(y)|^2\right)\left(\sum_{y\in A_x}|\xi_n(y)|^2\right)\\ &\leq& \sum_{x\in X_{n-1}}|A_x|\frac{\nu_{n-1}}{\nu_n}\|\xi_n\|^2\leq k\gamma_n\frac{\nu_{n-1}}{\nu_n}\|\xi_n\|^2. \end{eqnarray*} So, $\lim_{\mathcal U}\|Q_n\xi_n\|=0$, hence $\xi\in L$. \end{proof} \section{Case of intermediate growth} Here we consider the case of intermediate growth and show two different kinds of behaviour of the representations $\pi_\gamma$ --- for property $(\tau)$ groups and for free groups. \subsection{Case of property $(\tau)$ groups} Recall that property $(\tau)$ is a generalization of Kazhdan's property (T) and means that the trivial representation is isolated among the finite-dimensional representations. For more details about property $(\tau)$ we refer to \cite{Lubotzky}. \begin{thm} Let $G$ be a finitely generated property $(\tau)$ group, and let $\gamma\prec \nu$. Then the trivial representation is not weakly contained in $\pi_\gamma$. \end{thm} \begin{proof} Let $S\subset G$ be a finite symmetric generating set, and let $x=\frac{1}{|S|}\sum_{g\in S}g\in\mathbb C[G]$. Suppose the contrary: for any $\varepsilon>0$ there exists $\xi^{(k)}\in\widetilde H_\gamma$ such that $\|\xi^{(k)}\|=1$ and $\|\pi_\gamma(x)\xi^{(k)}-\xi^{(k)}\|<\varepsilon$. Without loss of generality we may assume that $\xi^{(k)}\in\widetilde H^{(k)}_\gamma$, and that $\|\xi^{(k)}_n\|=1$ for each $n\in\mathbb N$, where $\xi^{(k)}=(\xi^{(k)}_n)_{n\in\mathbb N}$.
A fixed $\varepsilon$ determines $k$ such that $$ \|\pi_\gamma(x)\xi^{(k)}-\xi^{(k)}\|=\lim_{\mathcal U}\|\lambda_n(x)\xi^{(k)}_n-\xi^{(k)}_n\|<\varepsilon. $$ Then there exists $\mathbb A\in\mathcal U$ such that \begin{equation}\label{eqq1} \|\lambda_n(x)\xi^{(k)}_n-\xi^{(k)}_n\|<\varepsilon \end{equation} for any $n\in\mathbb A$. Note that each $\lambda_n$ contains exactly one copy of the trivial representation, with the representation space spanned by the single vector $\xi^0_n=\frac{1}{\sqrt{|X_n|}}\chi_{X_n}$. By property $(\tau)$, there exists $\delta>0$ (which does not depend on $n$) such that $\|\lambda_n(x)\eta_n-\eta_n\|\geq \delta\|\eta_n\|$ for any $\eta_n\in l^2(X_n)$ orthogonal to $\xi^0_n$. Let $\xi^{(k)}_n=\alpha\xi^0_n+\eta_n$, where $\eta_n\perp \xi^0_n$. Then $$ \delta\|\eta_n\|\leq \|\lambda_n(x)\eta_n-\eta_n\| =\|\lambda_n(x)\xi^{(k)}_n-\xi^{(k)}_n\|<\varepsilon, $$ for any $n\in\mathbb A$, hence $\|\eta_n\|<\frac{\varepsilon}{\delta}$ when $n\in\mathbb A$. Then $|\alpha|=\sqrt{1-\|\eta_n\|^2}>\sqrt{1-\frac{\varepsilon^2}{\delta^2}}$, and \begin{equation}\label{estimate1} \|\xi^{(k)}_n-\xi^0_n\|=\sqrt{(1-\alpha)^2+\|\eta_n\|^2}<\sqrt{2\frac{\varepsilon}{\delta}} \end{equation} when $n\in\mathbb A$ and $\varepsilon\leq\delta$. But $|\supp\xi^{(k)}_n|\leq k\gamma_n$ for any $n\in\mathbb N$, hence $|\langle\xi^{(k)}_n,\xi^0_n\rangle|\leq\sqrt{\frac{k\gamma_n}{|X_n|}}$. So, by assumption, $\lim_{n\to\infty}\langle\xi^{(k)}_n,\xi^0_n\rangle=0$, which contradicts (\ref{estimate1}) when $\varepsilon<\delta/4$. \end{proof} \subsection{Case of free groups} Let $G=\mathbb F_2$ be the free group on two generators. Here we show that $H_\gamma$ may weakly contain the trivial representation for a certain sequence of finite index subgroups and for certain intermediate $\gamma$. We follow here \cite{Arzhantseva-Guentner}. 
Let $G_0=\mathbb F_2$, $G_1=\mathbb F_2^{(2)}$, $G_{n+1}=G_n^{(2)}$ be the iterated subgroups generated by the squares of all elements of the previous group. There is a nice description of $X_n=\mathbb F_2/G_n$ in \cite{Arzhantseva-Guentner} in terms of the Cayley graphs of the $X_n$. The Cayley graph of $\mathbb F_2/G_0$ is the wedge of two circles with a single vertex, and the Cayley graphs $Cay(X_n)$ of $\mathbb F_2/G_n=X_n$ are constructed from it inductively. Let $V_n$ and $E_n$ denote the set of vertices and edges of the Cayley graph of $X_n$, let $T_n$ be a maximal tree in $Cay(X_n)$, and let $e_1,\ldots,e_{r_n}$ be the edges not in $T_n$. Then $V_{n+1}=V_n\times(\mathbb Z/2)^{r_n}$, $E_{n+1}=E_n\times(\mathbb Z/2)^{r_n}$. Let $e\in E_n$, $\alpha\in(\mathbb Z/2)^{r_n}$. Let $e$ connect the vertices $v,w\in V_n$. If $e\in T_n$ then the edge $(e,\alpha)\in E_{n+1}$ connects $(v,\alpha)$ with $(w,\alpha)$. If $e=e_i$, $1\leq i\leq r_n$, then $(e,\alpha)$ connects $(v,\alpha)$ with $(w,\alpha+\bar e_i)$, where $\bar e_i=(0,\ldots,0,1,0,\ldots,0)\in(\mathbb Z/2)^{r_n}$ has $1$ as its $i$-th component. For a Cayley graph $Cay(X_n)$ and a subset $A$ of its vertices we denote by $|\partial A|$ the number of edges of $Cay(X_n)$ that connect a point of $A$ with a point of $X_n\setminus A$. \begin{lem}\label{lem-free} There exists a sequence $A_n\subset X_n$ such that \begin{equation}\label{l1} \lim_{n\to\infty}|A_n|=\infty; \end{equation} \begin{equation}\label{l3} \lim_{n\to\infty}\frac{|A_n|}{|X_n|}=0; \end{equation} \begin{equation}\label{l2} \lim_{n\to\infty}\frac{|\partial A_n|}{|A_n|}=0. \end{equation} \end{lem} \begin{proof} Let $1<k_n<r_n$. Set $$ A_{n+1}=\{(v,\alpha):v\in V_n,\alpha\in(\mathbb Z/2)^{r_n}, \alpha_i=0\mbox{\ for\ }k_n<i\leq r_n\}. $$ Then $|A_{n+1}|=|X_n|2^{k_n}$, and (\ref{l1}) holds for any $k_n\geq 1$.
Note that (\ref{l3}) means that $$ \lim_{n\to\infty}\frac{|A_{n+1}|}{|X_{n+1}|}=\lim_{n\to\infty}2^{k_n-r_n}=0, $$ which is equivalent to $\lim_{n\to\infty}(r_n-k_n)=\infty$. Let us evaluate $|\partial A_{n+1}|$. If $e\in T_n$ then both ends of $(e,\alpha)$ are either in $A_{n+1}$ or in $V_{n+1}\setminus A_{n+1}$, so let $e=e_i$ be one of $e_1,\ldots,e_{r_n}$. If $(e,\alpha)\in\partial A_{n+1}$ then $i>k_n$, so $|\partial A_{n+1}|=2(r_n-k_n)2^{k_n}$. Thus, $\frac{|\partial A_{n+1}|}{|A_{n+1}|}=\frac{2(r_n-k_n)}{|X_n|}$. Since both $(r_n)_{n\in\mathbb N}$ and $(|X_n|)_{n\in\mathbb N}$ grow faster than an iterated exponential, one can easily find a sequence $(k_n)_{n\in\mathbb N}$ such that \begin{itemize} \item $\lim_{n\to\infty}(r_n-k_n)=\infty;$ \item $\lim_{n\to\infty}\frac{(r_n-k_n)}{|X_n|}=0$ \end{itemize} (the latter implies (\ref{l2})). \end{proof} \begin{thm} There exists $\gamma$ with $\lim_{n\to\infty}\gamma_n=\infty$ and $\lim_{n\to\infty}\frac{\gamma_n}{|X_n|}=0$ such that $\pi_\gamma$ weakly contains the trivial representation. \end{thm} \begin{proof} Let $A_n\subset X_n$ be as in Lemma \ref{lem-free}. Set $\gamma_n=|A_n|$, and $\xi_n=\frac{1}{\sqrt{\gamma_n}}\chi_{A_n}$. Then $\xi=(\xi_n)_{n\in\mathbb N}\in \widetilde H_\gamma$. If $g\in S\subset G$ is one of the generators then \begin{eqnarray*} \|\lambda_n(g)\xi_n-\xi_n\|^2&\leq&\frac{1}{\gamma_n}(|g(A_n)\setminus A_n|+|A_n\setminus g(A_n)|)\\ &=&\frac{1}{\gamma_n}(|g(A_n)\setminus A_n|+|g^{-1}(A_n)\setminus A_n|)\leq\frac{2|\partial A_n|}{\gamma_n}, \end{eqnarray*} hence $\xi$ is almost invariant for $\pi_\gamma$ by Lemma \ref{lem-free}. \end{proof}
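As a quick sanity check on the growth claims used in this proof, the vertex counts $|X_n|$ and ranks $r_n$ can be tabulated directly. A minimal sketch, under the bookkeeping read off from the inductive construction of $Cay(X_{n+1})$ above: starting from the wedge of two circles ($|V_0|=1$, $|E_0|=2$), a maximal tree has $|V_n|-1$ edges, so $r_n=|E_n|-(|V_n|-1)$, and $|V_{n+1}|=|V_n|\,2^{r_n}$, $|E_{n+1}|=|E_n|\,2^{r_n}$:

```python
# Sizes of the quotients X_n = F_2 / G_n for the iterated-squares tower.
# Recursion (a sketch based on the inductive Cayley-graph description above):
#   r_n = |E_n| - (|V_n| - 1)   edges outside a maximal tree of Cay(X_n),
#   |V_{n+1}| = |V_n| * 2**r_n,  |E_{n+1}| = |E_n| * 2**r_n.
V, E = 1, 2                      # wedge of two circles: 1 vertex, 2 loop edges
sizes, ranks = [], []
for n in range(4):
    r = E - (V - 1)              # rank r_n
    sizes.append(V)
    ranks.append(r)
    V, E = V * 2**r, E * 2**r

# |X_1| = 4 (indeed F_2/G_1 = (Z/2)^2), |X_2| = 128, and r_2 = 129 already;
# |X_3| = 128 * 2**129 confirms the iterated-exponential growth.
assert sizes[:3] == [1, 4, 128] and ranks[:3] == [2, 5, 129]
```

For instance, the (hypothetical) choice $k_n=r_n-n$ then gives $r_n-k_n=n\to\infty$ while $(r_n-k_n)/|X_n|=n/|X_n|\to 0$, as required in Lemma~\ref{lem-free}.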
\section{Preliminary Lemmas and Proof Sketch of Theorem~\ref{thm:expectation_bound_appendix}} \label{sec:lemmas} In this section, we outline the proof of Theorem~\ref{thm:expectation_bound}, our upper bound for the case of totally bounded metric spaces. The proof of the more general Theorem~\ref{thm:unbounded_upper_bound} for unbounded metric spaces, which is given in the next section, builds on this. We begin by providing a few basic lemmas; these lemmas are not fundamentally novel, but they will be used in the subsequent proofs of our main upper and lower bounds, and also help provide intuition for the behavior of the Wasserstein metric and its connections to other metrics between probability distributions. The proofs of these lemmas are given later, in Appendix~\ref{app:proofs}. Our first lemma relates Wasserstein distance to the notion of resolution of a partition. \begin{lemma} Suppose $\S \in \SS$ is a countable Borel partition of $\Omega$. Let $P$ and $Q$ be Borel probability measures such that, for every $S \in \S$, $P(S) = Q(S)$. Then, for any $r \geq 1$, $W_r(P, Q) \leq \operatorname{Res}(\S)$. \label{lemma:measures_agree_on_partition} \end{lemma} Our next lemma gives simple lower and upper bounds on the Wasserstein distance between distributions supported on a countable subset $\X \subseteq \Omega$, in terms of $\Diam(\X)$ and $\Sep(\X)$. Since our main results will utilize coverings and packings to approximate $\Omega$ by finite sets, this lemma will provide a first step towards approximating (in Wasserstein distance) distributions on $\Omega$ by distributions on these finite sets. Indeed, the lower bound in Inequality~\eqref{ineq:countable_support_bound} will suffice to prove our lower bounds, although a tighter upper bound, based on the upper bound in~\eqref{ineq:countable_support_bound}, will be necessary to obtain tight upper bounds. 
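As a numerical sanity check of these bounds, consider a hypothetical three-point example on the real line with $r=1$, where $W_1$ can be computed exactly via the classical CDF formula $W_1(P,Q)=\int_{\mathbb R}|F_P-F_Q|$ (here $\Sep(\X)$ is taken as the minimum pairwise distance between support points; the bounds also hold a fortiori under smaller conventions for $\Sep$):

```python
# Sanity check of the Sep/Diam bounds (r = 1) on a hypothetical finite
# subset of the real line; W_1 is computed via the CDF formula
# W_1(P, Q) = \int |F_P - F_Q| dx, valid for distributions on R.
xs = [0.0, 1.0, 3.0]             # support points: Sep = 1, Diam = 3
p  = [0.5, 0.5, 0.0]             # density of P w.r.t. counting measure
q  = [0.0, 0.5, 0.5]             # density of Q

l1 = sum(abs(a - b) for a, b in zip(p, q))        # sum_x |P({x}) - Q({x})| = 1.0
Fp = [sum(p[:i + 1]) for i in range(3)]           # CDF of P at the atoms
Fq = [sum(q[:i + 1]) for i in range(3)]           # CDF of Q at the atoms
w1 = sum(abs(Fp[i] - Fq[i]) * (xs[i + 1] - xs[i]) for i in range(2))  # = 1.5

sep, diam = 1.0, 3.0
assert sep * l1 <= w1 <= diam * l1                # 1.0 <= 1.5 <= 3.0
```

The inequality is strict on both sides here, since the metric is neither discrete nor of constant distance between support points.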
\begin{lemma} Suppose $(\Omega, \rho)$ is a metric space, and suppose $P$ and $Q$ are Borel probability distributions on $\Omega$ with countable support; i.e., there exists a countable set $\X \subseteq \Omega$ with $P(\X) = Q(\X) = 1$. Then, for any $r \geq 1$, \begin{equation} (\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right| \leq W_r^r(P,Q) \leq (\Diam(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|. \label{ineq:countable_support_bound} \end{equation} \label{lemma:countable_support_bound} \end{lemma} \begin{remark} Recall that the term $\sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|$ in Inequality~\eqref{ineq:countable_support_bound} is the $\L_1$ distance \[\|p - q\|_1 := \sum_{x \in \X} \left| p(x) - q(x) \right|\] between the densities $p$ and $q$ of $P$ and $Q$ with respect to the counting measure on $\X$, and that this same quantity is twice the total variation distance \[TV(P,Q) := \sup_{A \subseteq \Omega} \left| P(A) - Q(A) \right|.\] Hence, Lemma~\ref{lemma:countable_support_bound} can be equivalently written as \[\Sep(\Omega) \left( \|p - q\|_1 \right)^{1/r} \leq W_r(P,Q) \leq \Diam(\Omega) \left( \|p - q\|_1 \right)^{1/r}\] and as \[\Sep(\Omega) \left( 2 TV(P,Q) \right)^{1/r} \leq W_r(P,Q) \leq \Diam(\Omega) \left( 2 TV(P,Q) \right)^{1/r},\] bounding the $r$-Wasserstein distance in terms of the $\L_1$ and total variation distance. As noted in Example~\ref{ex:discrete_bound}, equality holds in \eqref{ineq:countable_support_bound} precisely when $\rho$ is the unit discrete metric given by $\rho(x,y) = 1_{\{x \neq y\}}$ for all $x,y \in \Omega$. On metric spaces that are discrete (i.e., when $\Sep(\Omega) > 0$), the Wasserstein metric is (topologically) at least as strong as the total variation metric (and the $\L_1$ metric, when it is well-defined), in that convergence in Wasserstein metric implies convergence in total variation (and $\L_1$, respectively). On the other hand, on bounded metric spaces, the converse is true. 
In either of these cases, \emph{rates} of convergence may differ between metrics, although, in metric spaces that are both discrete \textit{and} bounded (e.g., any finite space), we have $W_r \asymp TV^{1/r}$. \label{remark:Wasserstein_L1_TV} \end{remark} To obtain tight bounds as discussed below, we will require not only a partition of the sample space $\Omega$, but a nested sequence of partitions, defined as follows. \begin{definition}[Refinement of a Partition, Nested Partitions] Suppose $\S, \T \in \SS$ are partitions of $\Omega$. $\T$ is said to be a \emph{refinement of $\S$} if, for every $T \in \T$, there exists $S \in \S$ with $T \subseteq S$. A sequence $\{\S_k\}_{k \in \N}$ of partitions is called \emph{nested} if, for each $k \in \N$, $\S_k$ is a refinement of $\S_{k + 1}$. \end{definition} While Lemma~\ref{lemma:countable_support_bound} gave a simple upper bound on the Wasserstein distance, the factor of $\Diam(\Omega)$ turns out to be too large to obtain tight rates for a number of cases of interest (such as the $D$-dimensional unit cube $\Omega = [0,1]^D$, discussed in Example~\ref{ex:unit_cube_lower_bound}). The following lemma gives a tighter upper bound, based on a hierarchy of nested partitions of $\Omega$; this allows us to obtain tighter bounds (than $\Diam(\Omega)$) on the distance that mass must be transported between $P$ and $Q$. Note that, when $K = 1$, Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} reduces to a trivial combination of Lemmas~\ref{lemma:measures_agree_on_partition} and \ref{lemma:countable_support_bound}; indeed, these lemmas are the starting point for proving Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} by induction on $K$. Note that the idea of such a ``multi-resolution'' upper bound has been utilized extensively, and numerous versions have been proven before (see, e.g., Fact 6 of \citet{do2011sublinearTimeEMD}, Lemma 6 of \citet{fournier2015rate}, or Proposition 1 of \citet{weed2017sharp}).
Most of these versions have been specific to Euclidean space; to the best of our knowledge, only Proposition 1 of \citet{weed2017sharp} applies to general metric spaces. However, that result also requires that $(\Omega,\rho)$ is totally bounded (more precisely, that $m_x^\infty(P) < \infty$, for some $x \in \Omega$). \begin{lemma} Let $K$ be a positive integer. Suppose $\{\S_k\}_{k \in \N}$ is a nested sequence of countable Borel partitions of $(\Omega,\rho)$. Then, for any $r \geq 1$ and Borel probability measures $P$ and $Q$ on $\Omega$, \begin{equation} W_r^r(P, Q) \leq (\operatorname{Res}(\S_0))^r + \sum_{k = 1}^K \left( \operatorname{Res}(\S_k) \right)^r \left( \sum_{S \in \S_{k + 1}} \left| P(S) - Q(S) \right| \right). \label{ineq:multiresolution_bound} \end{equation} \label{lemma:nested_partitions_Wasserstein_bound} \end{lemma} Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} requires a sequence of partitions of $\Omega$ that is not only multi-resolution but also nested. While the $\epsilon$-covering number implies the existence of small partitions with small resolution, these partitions need not be nested as $\epsilon$ becomes small. For this reason, we now give a technical lemma that, given any sequence of partitions, constructs a \textit{nested} sequence of partitions of the same cardinality, with only a small increase in resolution. \begin{lemma} Suppose $\S$ and $\T$ are partitions of $(\Omega,\rho)$, and suppose $\S$ is countable. Then, there exists a partition $\S'$ of $(\Omega,\rho)$ such that: \begin{enumerate}[label=\alph*)] \item $|\S'| \leq |\S|$. \item $\operatorname{Res}(\S') \leq \operatorname{Res}(\S) + 2\operatorname{Res}(\T)$. \item $\T$ is a refinement of $\S'$.
\end{enumerate} \label{lemma:fine_refinement} \end{lemma} Lemmas~\ref{lemma:nested_partitions_Wasserstein_bound} and \ref{lemma:fine_refinement} are the main tools needed: together they bound the expected Wasserstein distance $\E[W_r^r(P, \hat P)]$ of the empirical distribution from the true distribution by a sum of its expected errors on the elements of a nested sequence of partitions of $\Omega$. Then, we will need to control the total expected error across these partition elements, which we will show behaves similarly to the $\L_1$ error of the standard maximum likelihood (mean) estimator of a multinomial distribution from its true mean. Thus, the following result of \citet{han2015minimax} will be useful. \begin{lemma}[Theorem 1 of \citep{han2015minimax}] Suppose $(X_1,...,X_K) \sim \operatorname{Multinomial}(n,p_1,...,p_K)$. Let \[Z := \|X - n p\|_1 = \sum_{k = 1}^K \left| X_k - n p_k \right|.\] Then, $\E \left[ Z/n \right] \leq \sqrt{(K - 1)/n}$. \label{lemma:multinomial_expectation} \end{lemma} Finally, we are ready to prove Theorem~\ref{thm:expectation_bound_appendix}. \begin{customthm}{\ref{thm:expectation_bound}} Let $(\Omega,\rho)$ be a metric space on which $P$ is a Borel probability measure. Let $\hat P$ denote the empirical distribution of $n$ IID samples $X_1,...,X_n \stackrel{IID}{\sim} P$, given by \[\hat P(S) := \frac{1}{n} \sum_{i = 1}^n 1_{\{X_i \in S\}}, \quad \forall S \in \Sigma.\] Then, for any sequence $\{\epsilon_k\}_{k \in \{0\} \cup [K]} \in (0,\infty)^{K + 1}$ with $\epsilon_0 = \Diam(\Omega)$, \[\E \left[ W_r^r(P, \hat P) \right] \leq \epsilon_K^r + \frac{1}{\sqrt{n}} \sum_{k = 1}^K \left( \sum_{j = k - 1}^K 2^{j - k} \epsilon_j \right)^r \sqrt{N(\epsilon_k) - 1}.\] \label{thm:expectation_bound_appendix} \end{customthm} \begin{proof} By recursively applying Lemma~\ref{lemma:fine_refinement}, there exists a sequence $\{\S_k\}_{k \in [K]}$ of partitions of $(\Omega,\rho)$ satisfying the following conditions: \begin{enumerate} \item for each $k \in [K]$, $|\S_k| = N(\epsilon_k)$.
\item for each $k \in [K]$, $\displaystyle \operatorname{Res}(\S_k) \leq \sum_{j = k}^K 2^{j - k} \epsilon_j$. \item $\{\S_k\}_{k \in [K]}$ is nested. \end{enumerate} Note that, for any $k \in [K]$, the random vector $\left(n\hat P(S)\right)_{S \in \S_k}$ follows an $n$-multinomial distribution over $|\S_k|$ categories, with mean vector $\left(nP(S)\right)_{S \in \S_k}$; i.e., \[\left(n\hat P(S)\right)_{S \in \S_k} \sim \operatorname{Multinomial}\left(n,\left(P(S)\right)_{S \in \S_k}\right).\] Thus, by Lemma~\ref{lemma:multinomial_expectation}, for each $k \in [K]$, \[\E \left[ \sum_{S \in \S_k} \left| P(S) - \hat P(S) \right| \right] \leq \sqrt{\frac{|\S_k| - 1}{n}} = \sqrt{\frac{N(\epsilon_k) - 1}{n}}.\] Thus, by Lemma~\ref{lemma:nested_partitions_Wasserstein_bound}, \begin{align*} \E \left[ W_r^r(P, \hat P) \right] & \leq \E \left[ \epsilon_K^r + \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{j - k} \epsilon_j \right)^r \left( \sum_{S \in \S_k} \left| P(S) - \hat P(S) \right| \right) \right] \\ & \leq \epsilon_K^r + \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{j - k} \epsilon_j \right)^r \E \left[ \sum_{S \in \S_k} \left| P(S) - \hat P(S) \right| \right] \\ & \leq \epsilon_K^r + \frac{1}{\sqrt{n}} \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{j - k} \epsilon_j \right)^r \sqrt{N(\epsilon_k) - 1}. \end{align*} \end{proof} \section{Proof Sketch of Theorem~\ref{thm:unbounded_upper_bound_appendix}} In this section, we prove our more general upper bound, Theorem~\ref{thm:unbounded_upper_bound_appendix}, which applies to potentially unbounded metric spaces $(\Omega,\rho)$, assuming that $P$ is sufficiently concentrated (i.e., has at least $\ell > 0$ finite moments). The basic idea is to partition the potentially unbounded metric space $(\Omega,\rho)$ into countably many totally bounded subsets $B_1,B_2,...$, and to decompose the Wasserstein error into its error on each $B_i$, weighted by the probability $P(B_i)$.
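As a small illustration of this decomposition, the shell probabilities $P(B_k)$ can be computed in closed form and compared against the Markov-type tail bound used in the proof. All choices below are hypothetical: $\Omega = [1,\infty)$ with base point $x_0 = 0$, geometric shell radii, and $P$ a Pareto distribution with $\ell = 2$ finite moments ($m_{2,x_0}^2(P) = \E[X^2] = \alpha/(\alpha-2) = 3$ for shape $\alpha = 3$):

```python
# Shell decomposition sketch (hypothetical setup): Omega = [1, inf), x0 = 0,
# shells B_k = {x : w_k <= |x| < w_{k+1}} with w_0 = 0, w_k = 2**(k-1) for k >= 1,
# and P = Pareto(3), so Markov's inequality gives P(B_k) <= m^2 / w_k**2.
alpha, m2sq = 3, 3.0                                  # m2sq = E[X^2]
cdf = lambda x: 1.0 - x ** -alpha if x >= 1 else 0.0  # Pareto(3) CDF
w = [0.0] + [2.0 ** (k - 1) for k in range(1, 13)]    # w_0 = 0, then 1, 2, 4, ...
P_B = [cdf(w[k + 1]) - cdf(w[k]) for k in range(12)]  # P(B_k), k = 0, ..., 11

# The shells (plus the remaining tail) partition the sample space:
assert abs(sum(P_B) + (1.0 - cdf(w[-1])) - 1.0) < 1e-12
# Markov tail bound, used to control the contribution of distant shells:
for k in range(1, 12):
    assert P_B[k] <= m2sq / w[k] ** 2
```

The point of the bound is visible in the numbers: $P(B_k)$ decays like $w_k^{-3}$, faster than the $w_k^{-2}$ guaranteed by Markov, so the weighted per-shell errors remain summable.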
Specifically, fixing an arbitrary base point $x_0$, $B_1,B_2,...$ will be spherical shells, such that $x_0 \in B_1$, and both the distance between $B_i$ and $x_0$, as well as the size (covering number) of $B_i$, increase with $i$. For large $i$, the assumption that $P$ has $\ell$ bounded moments implies (by Markov's inequality) that $P(B_i)$ is small, whereas, for small $i$, we adapt our previous result Theorem~\ref{thm:expectation_bound_appendix} in terms of the covering number. To carry out this approach, we will need two new lemmas. The first decomposes the Wasserstein distance into the sum of its distances on each $B_i$, and can be considered an adaptation of Lemma 2.2 of \citet{lei2018convergence} (for Banach spaces) to general metric spaces. \begin{lemma} Fix a reference point $x_0 \in \Omega$ and a non-decreasing real-valued sequence $\{w_k\}_{k \in \N}$ with $w_0 = 0$ and $\lim_{k \to \infty} w_k = \infty$. For each $k \in \N$, define \[B_k := \left\{x \in \Omega : w_k \leq \rho(x_0, x) < w_{k + 1} \right\}.\] Then, there exists a constant $C_r$ depending only on $r$ such that, for any Borel probability measures $P$ and $Q$ on $\Omega$, \[W_r^r(P,Q) \leq C_r \sum_{k = 0}^\infty \left( \min \left\{ P(B_k), Q(B_k) \right\} W_r^r(P_{B_k},Q_{B_k}) + w_k^r \left| P(B_k) - Q(B_k) \right| \right),\] where, for any sets $A, B \subseteq \Omega$, \[P_A(B) = \frac{P(A \cap B)}{P(A)}\] (under the convention that $\frac{0}{0} = 0$) denotes the conditional probability of $B$ given $A$, under $P$. \label{lemma:sigma_finite_partition} \end{lemma} The second lemma is a more nuanced variant of Lemma~\ref{lemma:multinomial_expectation} (albeit, leading to slightly looser constants). When $i$ is large, the covering number of $B_i$ can become quite large, but the total probability $P(B_i)$ is quite small. Whereas Lemma~\ref{lemma:multinomial_expectation} depends only on the size of the partition, the following result will allow us to control the total error using both of these factors.
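Both mean-absolute-deviation bounds used in this argument --- the multinomial bound of Lemma~\ref{lemma:multinomial_expectation} and the binomial bound stated next --- are easy to check by simulation. A sketch with illustrative parameters and a fixed seed (here $p$ denotes the binomial success probability):

```python
import numpy as np

# Monte Carlo check of the two deviation bounds:
#   multinomial: E ||X - np||_1 <= sqrt(n (K - 1)),
#   binomial:    E |X - np|     <= n * min{2p, sqrt(p/n)}.
rng = np.random.default_rng(0)
n, K, trials = 100, 5, 20000

X = rng.multinomial(n, [1.0 / K] * K, size=trials)    # shape (trials, K)
Z = np.abs(X - n / K).sum(axis=1).mean()              # estimate of E ||X - np||_1
assert Z <= np.sqrt(n * (K - 1))                      # roughly 16 vs bound 20

for p in (0.3, 0.001):                                # moderate and tiny p
    Y = rng.binomial(n, p, size=trials)
    mad = np.abs(Y - n * p).mean()                    # estimate of E |X - np|
    assert mad <= n * min(2 * p, np.sqrt(p / n))
```

Note how, for tiny $p$, the $2p$ branch of the binomial bound is the active one; this is exactly the regime of distant shells $B_i$ with small $P(B_i)$, which Lemma~\ref{lemma:multinomial_expectation} alone cannot exploit.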
\begin{lemma}[Theorem 1 of \citet{berend2013binomialMAD}] Suppose $X \sim \operatorname{Binomial}(n,p)$. Then, we have the bound \begin{equation} \E \left[ \left| X - n p \right| \right] \leq n \min \left\{ 2p, \sqrt{p/n} \right\} \label{ineq:binomial_MAD} \end{equation} on the mean absolute deviation of $X$. \label{lemma:binomial_MAD} \end{lemma} Finally, we are ready to prove our main upper bound result for unbounded metric spaces. \begin{customthm}{\ref{thm:unbounded_upper_bound}}[General Upper Bound for Unbounded Metric Spaces] Let $x_0 \in \Omega$ and suppose $m_{\ell,x_0}(P) \in [1, \infty)$. Let $J$ be a positive integer. Fix two real-valued sequences $\{w_k\}_{k \in \N}$ and $\{\epsilon_j\}_{j \in [J]}$, of which $\{w_k\}_{k \in \N}$ is non-decreasing with $w_0 = 0$ and $\lim_{k \to \infty} w_k = \infty$, and $\{\epsilon_j\}_{j \in [J]}$ is non-increasing. For each $k \in \N$, define \[B_k(x_0) := \left\{ y \in \Omega : w_k \leq \rho(x_0, y) < w_{k + 1} \right\}.\] Then, \begin{align*} \E \left[ W_r^r(P, \hat P) \right] & \leq m_{\ell,x_0}^\ell \sum_{k \in \N} \biggl( w_k^{-\ell} \left( \epsilon_J \right)^r + 2^r w_k^{r - \ell/2} \min \left\{ 2w_k^{-\ell/2}, \sqrt{\frac{1}{n}} \right\} \\ & \hspace{2cm} + \sum_{j = 1}^J \left( \sum_{t = j}^J 2^{t - j} \epsilon_t \right)^r \min \left\{ 2w_k^{-\ell}, \sqrt{\frac{w_k^{-\ell}}{n} N(B_k,\rho,\epsilon_j)} \right\} \biggr). \end{align*} \label{thm:unbounded_upper_bound_appendix} \end{customthm} \begin{proof} As in the proof of Theorem~\ref{thm:expectation_bound_appendix}, by recursively applying Lemma~\ref{lemma:fine_refinement}, for each $k \in \N$, we can construct a nested sequence $\{\S_{k,j}\}_{j \in [J_k]}$ of partitions of $B_k$ such that, for each $j \in [J_k]$, \begin{equation} |\S_{k,j}| = N(B_k,\rho,\epsilon_{k,j}) \quad \text{ and } \quad \operatorname{Res}(\S_{k,j}) \leq \sum_{t = 0}^j 2^{j - t} \epsilon_{k,t}.
\label{eq:recursive_fine_refinement} \end{equation} Since each of $P_{B_k}$ and $\hat P_{B_k}$ is supported only on $B_k$, plugging the bound of Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} into the bound in Lemma~\ref{lemma:sigma_finite_partition} gives \begin{align*} & W_r^r(P, \hat P) \\ & \leq \sum_{k \in \N} \min \left\{ P(B_k), \hat P(B_k) \right\} \left( \left( \operatorname{Res}(\S_{k,0}) \right)^r + \sum_{j = 1}^{J_k} \left( \operatorname{Res}(\S_{k,j}) \right)^r \sum_{S \in \S_{k,j + 1}} \left| P_{B_k}(S) - \hat P_{B_k}(S) \right| \right) \\ & \hspace{1cm} + 2^r w_k^r \left| P(B_k) - \hat P(B_k) \right| \\ & \leq \sum_{k \in \N} 2^r w_k^r \left| P(B_k) - \hat P(B_k) \right| + P(B_k) \left( \operatorname{Res}(\S_{k,0}) \right)^r + \sum_{j = 1}^{J_k} \left( \operatorname{Res}(\S_{k,j}) \right)^r \sum_{S \in \S_{k,j + 1}} \left| P(S) - \hat P(S) \right|. \end{align*} Since $n\hat P(S) \sim \operatorname{Binomial}(n, P(S))$ for each $S$, for each $k \in \N$ and $j \in [J_k]$, Lemma~\ref{lemma:binomial_MAD} followed by the Cauchy--Schwarz inequality gives \begin{align*} \E \left[ \sum_{S \in \S_{k,j}} \left| P(S) - \hat P(S) \right| \right] & \leq \sum_{S \in \S_{k,j}} \min \left\{ 2P(S), \sqrt{P(S)/n} \right\} \\ & \leq \min \left\{ 2P(B_k), \sqrt{\frac{P(B_k)}{n} |\S_{k,j}|} \right\}. \end{align*} Therefore, taking expectations (over $X_1,...,X_n$), applying Inequality~(\ref{eq:recursive_fine_refinement}), and applying Lemma~\ref{lemma:binomial_MAD} once more gives \begin{align*} \E \left[ W_r^r(P, \hat P) \right] & \leq \sum_{k \in \N} P(B_k) \left( \epsilon_{k,0} \right)^r + 2^r w_k^r \min \left\{ 2P(B_k), \sqrt{P(B_k)/n} \right\} \\ & \hspace{1cm} + \sum_{j = 1}^{J_k} \left( \sum_{t = 0}^j 2^{j - t} \epsilon_{k,t} \right)^r \min \left\{ 2P(B_k), \sqrt{\frac{P(B_k)}{n} N(B_k,\rho,\epsilon_{k,j})} \right\}.
\end{align*} Now note that, by Markov's inequality, \begin{equation} P(B_k) \leq \pr_{X \sim P} \left[ \rho(x_0, X) \geq w_k \right] = \pr_{X \sim P} \left[ \rho^\ell (x_0, X) \geq w_k^\ell \right] \leq \frac{m_{\ell,x_0}^\ell(P)}{w_k^\ell}. \end{equation} Therefore, since $m_{\ell,x_0}(P) \geq 1$, so that $m_{\ell,x_0}^\ell \geq m_{\ell,x_0}^{\ell/2}$, \begin{align*} \E \left[ W_r^r(P, \hat P) \right] & \leq m_{\ell,x_0}^\ell \sum_{k \in \N} w_k^{-\ell} \left( \epsilon_J \right)^r + 2^r w_k^r \min \left\{ 2w_k^{-\ell}, \sqrt{w_k^{-\ell}/n} \right\} \\ & \hspace{1cm} + \sum_{j = 1}^{J} \left( \sum_{t = j - 1}^{J} 2^{t - j + 1} \epsilon_t \right)^r \min \left\{ 2w_k^{-\ell}, \sqrt{\frac{w_k^{-\ell}}{n} N(B_k,\rho,\epsilon_j)} \right\}, \end{align*} proving the theorem. \end{proof} \section{Proofs of Lemmas} \label{app:proofs} \begin{customlemma}{\ref{lemma:measures_agree_on_partition}} Suppose $\S \in \SS$ is a countable Borel partition of $\Omega$. Let $P$ and $Q$ be Borel probability measures such that, for every $S \in \S$, $P(S) = Q(S)$. Then, for any $r \geq 1$, $W_r(P, Q) \leq \operatorname{Res}(\S)$. \end{customlemma} \begin{proof} This fact is intuitively obvious; clearly, there exists a transportation map $\mu$ from $P$ to $Q$ that moves mass only within each $S \in \S$, and therefore moves no mass further than $\operatorname{Res}(\S)$. For completeness, we give a formal construction. Let $\mu : \Sigma^2 \to [0,1]$ denote the coupling that is conditionally independent given any set $S \in \S$ with $P(S) = Q(S) > 0$ (that is, for any $A, B \in \Sigma$, $\mu((A \times B) \cap (S \times S)) P(S) = P(A \cap S) Q(B \cap S)$).\footnote{The existence of such a measure can be verified by the Hahn-Kolmogorov theorem, similarly to that of the usual product measure (see, e.g., Section IV.4 of \citet{doob2012measure}).} It is easy to verify that $\mu \in \Pi(P,Q)$.
Since $\S$ is a countable partition and $\mu$ is only supported on $\bigcup_{S \in \S} S \times S$, writing $\delta := \operatorname{Res}(\S)$, \begin{align*} W_r(P, Q) & \leq \left( \int_{\Omega \times \Omega} \rho^r(x,y) \, d\mu(x,y) \right)^{1/r} \\ & = \left( \sum_{S \in \S} \int_{S \times S} \rho^r(x,y) \, d\mu(x,y) \right)^{1/r} \\ & \leq \left( \sum_{S \in \S} \int_{S \times S} \delta^r \, d\mu(x,y) \right)^{1/r} \\ & = \delta \left( \sum_{S \in \S} \mu(S \times S) \right)^{1/r} = \delta \left( \sum_{S \in \S} \frac{P(S) Q(S)}{P(S)} \right)^{1/r} = \delta \left( \sum_{S \in \S} Q(S) \right)^{1/r} = \delta. \end{align*} \end{proof} \begin{customlemma}{\ref{lemma:countable_support_bound}} Suppose $(\Omega, \rho)$ is a metric space, and suppose $P$ and $Q$ are Borel probability distributions on $\Omega$ with countable support; i.e., there exists a countable set $\X \subseteq \Omega$ with $P(\X) = Q(\X) = 1$. Then, for any $r \geq 1$, \[(\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right| \leq W_r^r(P,Q) \leq (\Diam(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|.\] \end{customlemma} \begin{proof} The term $\sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|$ (twice the total variation distance between $P$ and $Q$) measures the amount of mass that must be transported to transform between $P$ and $Q$. Hence, the result is intuitively fairly obvious; all mass moved has a cost of at least $\Sep(\X)$ and at most $\Diam(\X)$. However, for completeness, we give a more formal proof below. To prove the lower bound, suppose $\mu \in \Pi(P, Q)$ is any coupling between $P$ and $Q$.
For $x \in \X$, \[\mu(\{x\} \times \{x\}) + \mu(\{x\} \times (\Omega \sminus \{x\})) = \mu(\{x\} \times \Omega) = P(\{x\})\] and, similarly, \[\mu(\{x\} \times \{x\}) + \mu((\Omega \sminus \{x\}) \times \{x\}) = \mu(\Omega \times \{x\}) = Q(\{x\}).\] Since $P(\{x\}), Q(\{x\}) \in [0,1]$, it follows that \[\mu(\{x\} \times (\Omega \sminus \{x\})) + \mu((\Omega \sminus \{x\}) \times \{x\}) \geq \left| P(\{x\}) - Q(\{x\}) \right|.\] Therefore, since $\rho(x,y) = 0$ whenever $x = y$ and $\rho(x, y) \geq \Sep(\X)$ whenever $x \neq y \in \X$, \begin{align*} \int_{\Omega \times \Omega} \rho^r(x, y) \, d\mu(x,y) & = \int_{\X \times \X} \rho^r(x, y) \, d\mu(x,y) \\ & = \sum_{x \in \X} \int_{\{x\} \times (\Omega \sminus \{x\})} \rho^r(x, y) \, d\mu(x,y) + \int_{(\Omega \sminus \{x\}) \times \{x\}} \rho^r(x, y) \, d\mu(x,y) \\ & \geq (\Sep(\X))^r \sum_{x \in \X} \mu(\{x\} \times (\Omega \sminus \{x\})) + \mu((\Omega \sminus \{x\}) \times \{x\}) \\ & \geq (\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|. \end{align*} Taking the infimum over $\mu$ on both sides gives \[(\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right| \leq W_r^r(P, Q).\] To prove the upper bound, since $\rho$ is upper bounded by $\Diam(\X)$ on $\X \times \X$, it suffices to construct a coupling $\mu$ that only moves mass into or out of each given point, but not both; that is, for each $x \in \X$, \[\min\{\mu(\{x\} \times (\Omega \sminus \{x\})), \mu((\Omega \sminus \{x\}) \times \{x\})\} = 0.\] One way of doing this is as follows. Fix an ordering $x_1,x_2,...$ of the elements of $\X$.
For each $i \in \N$, define \[X_i := \sum_{\ell = 1}^i (P(x_\ell) - Q(x_\ell))_+ \quad \text{ and } \quad Y_i := \sum_{\ell = 1}^i (Q(x_\ell) - P(x_\ell))_+,\] and further define \[j_i := \min \{ j \in \N : X_i \leq Y_j \} \quad \text{ and } \quad k_i := \min \{ k \in \N : X_k \geq Y_i \}.\] Then, for each $i \in \N$, move $X_i$ mass from $\{x_1,...,x_i\}$ to $\{x_1,...,x_{j_i}\}$ and move $Y_i$ mass from $\{x_1,...,x_i\}$ to $\{x_1,...,x_{k_i}\}$. As $i \to \infty$, by construction of $X_i$ and $Y_i$, the total mass moved in this way is \[\mu((\X \times \X) \sminus \{(x,x) : x \in \X\}) = \lim_{i \to \infty} \left( X_i + Y_i \right) = \sum_{x \in \X} \left| P(x) - Q(x) \right|.\] \end{proof} \begin{customlemma}{\ref{lemma:nested_partitions_Wasserstein_bound}} Let $K$ be a positive integer. Suppose $\{\S_k\}_{k = 0}^{K}$ is a sequence of nested countable Borel partitions of $(\Omega,\rho)$, with $\S_0 = \{\Omega\}$. Then, for any $r \geq 1$ and any Borel probability distributions $P$ and $Q$ on $\Omega$, \[W_r^r(P, Q) \leq (\operatorname{Res}(\S_K))^r + \sum_{k = 1}^K \left( \operatorname{Res}(\S_{k - 1}) \right)^r \left( \sum_{S \in \S_k} \left| P(S) - Q(S) \right| \right).\] \end{customlemma} \begin{proof} Our proof follows the same ideas as, and slightly generalizes, the proof of Proposition 1 in \citet{weed2017sharp}.
Intuitively, to prove Lemma~\ref{lemma:nested_partitions_Wasserstein_bound}, it suffices to construct a transportation map that, at each scale $k$, moves only the mass needed to equalize $P$ and $Q$ on the cells of $\S_k$, and moves that mass no further than the diameter of a cell of $\S_{k - 1}$. To this end, for each $k \in [K]$, recursively define \[P_k := P - \sum_{j = 1}^{k - 1} \mu_j \quad \text{ and } \quad Q_k := Q - \sum_{j = 1}^{k - 1} \nu_j,\] where, for each $k \in [K]$, $\mu_k$ and $\nu_k$ are Borel measures on $\Omega$ defined for any $E \in \Sigma$ by \[\mu_k(E) := \sum_{S \in \S_k : P_k(S) > 0} \left( P_k(S) - Q_k(S) \right)_+ \frac{P_k(E \cap S)}{P_k(S)}\] and \[\nu_k(E) := \sum_{S \in \S_k : Q_k(S) > 0} \left( Q_k(S) - P_k(S) \right)_+ \frac{Q_k(E \cap S)}{Q_k(S)}.\] By construction of $\mu_k$ and $\nu_k$, each $\mu_k$ and $\nu_k$ is a non-negative measure, $\sum_{k = 1}^K \mu_k \leq P$, and $\sum_{k = 1}^K \nu_k \leq Q$. Furthermore, for each $k \in [K - 1]$, for each $S \in \S_k$, $\mu_{k + 1}(S) = \nu_{k + 1}(S)$, and \[\mu_k(\Omega) = \nu_k(\Omega) \leq \sum_{S \in \S_k} \left| P(S) - Q(S) \right|.\] Consequently, although $\mu_k$ and $\nu_k$ are not probability measures, we can slightly generalize the definition of Wasserstein distance by writing \[W_r^r \left( \mu_k, \nu_k \right) := \mu_k(\Omega) \inf_{\tau \in \Pi \left( \frac{\mu_k}{\mu_k(\Omega)}, \frac{\nu_k}{\nu_k(\Omega)}\right)} \E_{(X,Y) \sim \tau} \left[ \rho^r \left( X, Y \right) \right]\] (or $W_r^r(\mu_k, \nu_k) = 0$ if $\mu_k = \nu_k = 0$). In particular, this is convenient because one can easily show that, by construction of the sequences $\{P_k\}_{k \in [K]}$ and $\{Q_k\}_{k \in [K]}$, \begin{equation} W_r^r(P, Q) \leq W_r^r \left( P_K, Q_K \right) + \sum_{k = 1}^K W_r^r \left(\mu_k, \nu_k \right).
\label{ineq:decomposition} \end{equation} For each $k \in [K]$, Lemma~\ref{lemma:countable_support_bound} implies that \begin{align*} W_r^r(\mu_k,\nu_k) & \leq \sum_{S \in \S_{k - 1}} \left( \Diam(S) \right)^r \sum_{T \in \S_k : T \subseteq S} \left| P(T) - Q(T) \right| \\ & \leq \left( \operatorname{Res}(\S_{k - 1}) \right)^r \sum_{S \in \S_{k - 1}} \sum_{T \in \S_k : T \subseteq S} \left| P(T) - Q(T) \right| \\ & = \left( \operatorname{Res}(\S_{k - 1}) \right)^r \sum_{T \in \S_k} \left| P(T) - Q(T) \right|. \end{align*} Furthermore, since $P_K$ and $Q_K$ agree on each $S \in \S_K$, Lemma~\ref{lemma:measures_agree_on_partition} gives \[W_r^r \left( P_K, Q_K \right) \leq \left( \operatorname{Res}(\S_K) \right)^r.\] Plugging these last two inequalities into Inequality~\eqref{ineq:decomposition} gives the desired result: \[W_r^r(P, Q) \leq \left( \operatorname{Res}(\S_K) \right)^r + \sum_{k = 1}^K \left( \operatorname{Res}(\S_{k - 1}) \right)^r \sum_{S \in \S_k} \left| P(S) - Q(S) \right|.\] \end{proof} \begin{customlemma}{\ref{lemma:fine_refinement}} Suppose $\S$ and $\T$ are partitions of $(\Omega,\rho)$, and suppose $\S$ is countable. Then, there exists a partition $\S'$ of $(\Omega,\rho)$ such that: \begin{enumerate}[label=\alph*)] \item $|\S'| \leq |\S|$. \item $\operatorname{Res}(\S') \leq \operatorname{Res}(\S) + 2\operatorname{Res}(\T)$. \item $\T$ is a refinement of $\S'$. \end{enumerate} \end{customlemma} \begin{proof} Enumerate the elements of $\S$ as $S_1,S_2,...$. Define $S_0' := \emptyset$, and then, for each $i \in \{1,2,...\}$, recursively define \[S_i' := \left. \left( \bigcup_{T \in \T : T \cap S_i \neq \emptyset} T \right) \middle \sminus \left( \bigcup_{j = 1}^{i - 1} S_j' \right) \right.,\] and set $\S' = \{S_1',S_2',...\}$. Clearly, $|\S'| \leq |\S|$ (equality need not hold, as some $S_i'$ may be empty).
By the triangle inequality, for each $i$, \[\Diam(S_i') \leq \Diam \left( \bigcup_{T \in \T : T \cap S_i \neq \emptyset} T \right) \leq \Diam(S_i) + 2\operatorname{Res}(\T) \leq \operatorname{Res}(\S) + 2\operatorname{Res}(\T).\] Finally, since $\T$ is a partition and we can write \[S_i' = \left. \left( \bigcup_{T \in \T : T \cap S_i \neq \emptyset} T \right) \middle \sminus \left( \bigcup_{j = 1}^{i - 1} \bigcup_{T \in \T : T \cap S_j' \neq \emptyset} T \right) \right.,\] each $T \in \T$ is either contained in or disjoint from each $S_i'$; that is, $\T$ is a refinement of $\S'$. \end{proof} \section{Proof of Lower Bound} In this section, we provide a proof of our main lower bound, Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} in the main text. The proof consists of two main steps. First, we show that the minimax error of estimation in Wasserstein distance can be lower bounded by a product of two terms, one depending on the packing radius $R$ and the other depending on the minimax risk of estimating a particular discrete (i.e., multinomial) distribution under $\L_1$ loss. The second step is then to apply a minimax lower bound on the risk of estimating a multinomial distribution under $\L_1$ loss. These two steps respectively rely on two lemmas, Lemma~\ref{lemma:wasserstein_projections} and Lemma~\ref{lemma:multinomial_minimax_lower_bound}, given below. The first lemma implies that, when a distribution $P$ is supported on a finite subset $\D$ of the sample space, there exists an estimator $\hat P_\D$ of $P$ that is supported on $\D$ and is minimax optimal, up to a small constant factor. While this fact is relatively obvious for measure-theoretic metrics such as $\L_p$ distances, it is somewhat less obvious for Wasserstein distances, which also depend on metric properties of the space. This observation is key to lower bounding the minimax rate in terms of the minimax rate for estimating a discrete distribution. \begin{lemma}[Wasserstein Projections] Let $(\X,\rho)$ be a metric space and let $\D \subseteq \X$ be finite.
Let $\P$ denote the family of all Borel probability distributions on $\X$, and let \[\P_\D := \{P \in \P : P(\D) = 1\}\] denote the set of distributions supported only on $\D$. Suppose $P \in \P_\D$ and $Q \in \P$. Then, \[\mathop{\arg\!\min}_{\tilde Q \in \P_\D} W_r(Q, \tilde Q) \neq \emptyset \quad \text{ and, for any } \quad Q' \in \mathop{\arg\!\min}_{\tilde Q \in \P_\D} W_r(Q, \tilde Q),\] we have $W_r(P, Q') \leq 2W_r(P, Q)$. \label{lemma:wasserstein_projections} \end{lemma} \begin{proof} Let $\{\S_x\}_{x \in \D}$ denote the Voronoi diagram of $\X$ with respect to $\D$; that is, for each $x \in \D$, let \[\S_x := \{y \in \X : x \in \mathop{\arg\!\min}_{z \in \D} \rho(z,y) \}.\] Since $\{\S_x\}_{x \in \D}$ is a finite cover of $\X$, we can disjointify it (see Remark~\ref{remark:disjointification}) while retaining the property that, for every $x \in \D$ and every $y \in \S_x$, $\rho(x,y) = \min_{z \in \D} \rho(z,y)$; hence, we assume without loss of generality that $\{\S_x\}_{x \in \D}$ is a partition of $\X$. Then, there is a unique distribution $Q' \in \P_\D$ such that, for each $x \in \D$, $Q'(\{x\}) = Q(\S_x)$. It is easy to see by definition of the Voronoi diagram that $Q' \in \mathop{\arg\!\min}_{\tilde Q \in \P_\D} W_r(Q, \tilde Q)$; the transportation map $\mu_* \in \Pi(Q,Q')$ such that each $\mu_*(\S_x \times \{x\}) = Q(\S_x)$ clearly minimizes \[\E_{(X,Y) \sim \mu} \left[ \rho^r(X,Y) \right]\] over all $\mu \in \bigcup_{\tilde Q \in \P_\D} \Pi(Q, \tilde Q)$. Moreover, since $P \in \P_\D$, by the triangle inequality and the definition of $Q'$, $W_r(P, Q') \leq W_r(P, Q) + W_r(Q, Q') \leq 2 W_r(P, Q)$. \end{proof} The second lemma is a simple minimax lower bound for the problem of estimating the mean vector of a multinomial distribution, under $\L_1$ loss. \begin{lemma}[Minimax Lower Bound for Mean of Multinomial Distribution] Suppose $k \leq 32 n$.
Let $p \in \Delta^k$, and suppose $X_1,...,X_n \stackrel{IID}{\sim} \operatorname{Categorical}(p_1,...,p_k)$ are distributed IID according to a categorical distribution on $[k]$, with mean vector $p$. Then, we have the following minimax lower bound for estimating $p$ under $\L_1$ loss: \[\inf_{\hat p} \sup_{p \in \Delta^k} \E \left[ \|p - \hat p\|_1 \right] \geq \frac{3\log 2}{4096} \sqrt{\frac{k - 1}{n}},\] where the infimum is taken over all estimators (i.e., all (potentially randomized) functions $\hat p : [k]^n \to \Delta^k$ of the data). \label{lemma:multinomial_minimax_lower_bound} \end{lemma} Note that, while the above result is phrased for categorical distributions to simplify notation in the proof, the result is equivalent to a statement for multinomial distributions, since $\sum_{i = 1}^n X_i \sim \text{Multinomial}(n,p_1,...,p_k)$ and $X_1,...,X_n$ are assumed to be IID and therefore exchangeable. \begin{proof} We follow a standard procedure for proving minimax lower bounds based on Fano's inequality, as outlined in Section 2.6 of \citet{tsybakov2009introduction}. Let $p_0 = \left( 1/k, ..., 1/k \right) \in \Delta^k$ denote the uniform vector in $\Delta^k$. Let $\I := \left[ \lfloor \frac{k}{2} \rfloor \right]$. For each $j \in \I$, define $\phi_j : [k] \to \R$ by \[\phi_j := 1_{\{2j - 1\}} - 1_{\{2j\}},\] and, for each $\tau \in \{-1,1\}^\I$, define \[p_\tau := p_0 + \frac{c}{k} \sum_{j \in \I} \tau_j \phi_j,\] where \[c = \frac{1}{16} \sqrt{\frac{k - 1}{n}\log 2} \leq \frac{1}{2}.\] Note that, since $c \leq 1$ and, for each $j \in \I$, $\sum_{\ell \in [k]} \phi_j(\ell) = 0$, each $p_\tau \in \Delta^k$. Observe that, for any $\tau,\tau' \in \{-1,1\}^\I$, we have \[\|p_\tau - p_{\tau'}\|_1 = \frac{4 c \omega(\tau,\tau')}{k}, \quad \text{ where } \quad \omega(\tau,\tau') = \sum_{i \in \I} 1_{\{\tau_i \neq \tau_i'\}}\] denotes the Hamming distance between $\tau$ and $\tau'$.
By the Varshamov-Gilbert bound (see, e.g., Lemma 2.9 of \citet{tsybakov2009introduction}), there exists a subset $T \subseteq \{-1,1\}^\I$ such that $\log |T| \geq \frac{\lfloor k/2 \rfloor \log 2}{8}$ and, for every $\tau, \tau' \in T$, \[\omega(\tau,\tau') \geq \frac{|\I|}{8} = \frac{\lfloor k/2 \rfloor}{8}, \quad \text{ and hence } \quad \|p_\tau - p_{\tau'}\|_1 \geq c \frac{\lfloor k/2 \rfloor}{2k}.\] Also, for any $\tau \in T$, \begin{align*} D_{KL}(p_\tau^n,p_0^n) & = n D_{KL}(p_\tau,p_0) \\ & = n \sum_{j \in [k]} p_{\tau,j} \log \left( \frac{p_{\tau,j}}{p_{0,j}}\right) \\ & = n \sum_{j \in \I} \left( p_{\tau,2j - 1} \log \left( \frac{p_{\tau,2j - 1}}{1/k} \right) + p_{\tau,2j} \log \left( \frac{p_{\tau,2j}}{1/k} \right) \right) \\ & = \frac{n |\I|}{k} \left( (1 - c) \log \left( 1 - c \right) + (1 + c) \log \left( 1 + c \right) \right). \end{align*} One can check (e.g., by Taylor expansion) that, for any $c \in (0,1/2)$, \[(1 - c) \log \left( 1 - c \right) + (1 + c) \log \left( 1 + c \right) < 2c^2.\] Thus, since $|\I| \leq k/2$, \[D_{KL}(p_\tau^n,p_0^n) \leq \frac{2 n |\I| c^2}{k} \leq n c^2.\] It follows from the choice of $c$ (and noting that, by the assumption that $k \leq 32n$, $c \in (0,1/2)$) that \[\frac{1}{|T|} \sum_{\tau \in T} D_{KL}(p_\tau^n, p_0^n) \leq nc^2 \leq \frac{\lfloor k/2 \rfloor \log 2}{128} \leq \frac{1}{16} \log |T|.\] Therefore, by Fano's method for lower bounds (see, e.g., Theorem 2.5 of \citet{tsybakov2009introduction}), with $\alpha = 1/16$ and \[s := \frac{c}{16} \leq c \frac{\lfloor k/2 \rfloor}{4k} \leq \frac{1}{2} \|p_\tau - p_{\tau'}\|_1,\] we have \begin{align*} \inf_{\hat p} \sup_{p \in \Delta^k} \E \left[ \|p - \hat p\|_1 \right] & \geq \inf_{\hat p} \sup_{p \in \Delta^k} c \frac{\lfloor k/2 \rfloor}{4k} \pr \left[ \|p - \hat p\|_1 \geq c \frac{\lfloor k/2 \rfloor}{4k} \right] \\ & \geq c \frac{\lfloor k/2 \rfloor}{4k} \frac{3}{16} \\ & \geq \frac{3\log 2}{4096} \sqrt{\frac{k - 1}{n}}.
\end{align*} \end{proof} \begin{customthm}{\ref{thm:Wasserstein_distribution_estimation_lower_bound}} Let $(\Omega,\rho)$ be a metric space, and let $\P$ denote the set of Borel probability measures on $(\Omega,\rho)$. Then, \[\inf_{\hat P : \X^n \to \P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P(X_1,...,X_n)) \right] \geq c_r \sup_{k \in [32n]} R^r(k) \sqrt{\frac{k - 1}{n}},\] where \[c_r = \frac{3\log 2}{4096\cdot 2^r}\] is independent of $n$, and the infimum is taken over all estimators (i.e., all (potentially randomized) functions $\hat P : \X^n \to \P$ of the data). \label{thm:Wasserstein_distribution_estimation_lower_bound_appendix} \end{customthm} \begin{proof} Let $k \leq 32n$, and let $\D$ be an $R(k)$-packing of $(\Omega,\rho)$ with $|\D| = k$. Let $\P_\D$ denote the class of (discrete) distributions over $\D$. By restricting the supremum to $\P_\D$, and then applying Lemma~\ref{lemma:wasserstein_projections}, Lemma~\ref{lemma:countable_support_bound}, Lemma~\ref{lemma:multinomial_minimax_lower_bound}, and the definition of the packing radius (in that order), \begin{align*} \inf_{\hat P : \X^n \to \P} \sup_{P \in \P} \E \left[ W_r^r(\hat P, P) \right] & \geq \inf_{\hat P : \X^n \to \P} \sup_{P \in \P_\D} \E \left[ W_r^r(\hat P, P) \right] \\ & \geq \frac{1}{2^r} \inf_{\hat P : \X^n \to \P_\D} \sup_{P \in \P_\D} \E \left[ W_r^r(\hat P, P) \right] \\ & \geq \left( \frac{\Sep(\D)}{2} \right)^r \inf_{\hat P : \X^n \to \P_\D} \sup_{P \in \P_\D} \E \left[ \|\hat P - P\|_1 \right] \\ & \geq \frac{3\log 2}{4096 \cdot 2^r} \left( \Sep(\D) \right)^r \sqrt{\frac{|\D| - 1}{n}} \\ & \geq \frac{3\log 2}{4096\cdot 2^r} R^r(k) \sqrt{\frac{k - 1}{n}}. \end{align*} The theorem follows by taking the supremum over $k \leq 32n$ on both sides.
\end{proof} \section{Introduction} The Wasserstein metric is an important measure of distance between probability distributions, based on the cost of transforming either distribution into the other through mass transport, under a base metric on the sample space. Originating in the optimal transport literature,\footnote{The Wasserstein metric has been variously attributed to Monge, Kantorovich, Rubinstein, Gini, Mallows, and others; see Chapter 3 of \citet{villani2008optimalTransport} for detailed history.} the Wasserstein metric has, owing to its intuitive and general nature, been utilized in such diverse areas as probability theory and statistics, economics, image processing, text mining, robust optimization, and physics~\citep{villani2008optimalTransport,fournier2015rate,esfahani2015robustOptimization,gao2016distributionallyRobust}. In the analysis of image data, the Wasserstein metric has been used for various tasks such as texture classification and face recognition~\citep{sandler2011NMFImageAnalysis}, reflectance interpolation, color transfer, and geometry processing~\citep{solomon2015imageOptimalTrans}, image retrieval~\citep{rubner2000imageRetrieval}, and image segmentation~\citep{ni2009imageSegmentation}, and, in the analysis of text data, for tasks such as document classification~\citep{kusner2015documentDistances} and machine translation~\citep{zhang2016machineTranslation}.
In contrast to a number of other popular notions of dissimilarity between probability distributions, such as $\L_p$ distances or Kullback-Leibler and other $f$-divergences~\citep{morimoto1963divergences,csiszar1964divergences,ali1966divergences}, which require distributions to be absolutely continuous with respect to each other or to a base measure, Wasserstein distance is well-defined between \emph{any} pair of probability distributions over a sample space equipped with a metric.\footnote{Hence, we use ``distribution estimation'' in this paper, rather than the more common ``density estimation''.} As a particularly important consequence, Wasserstein distances between discrete (e.g., empirical) distributions and continuous distributions are well-defined, finite, and informative (e.g., can decay to $0$ as the distributions become more similar). Partly for this reason, many central limit theorems and related approximation results~\citep{ruschendorf1985wasserstein,johnson2005central,chatterjee2008normalApproximation,rio2009upper,rio2011asymptotic,chen10SteinsMethod,reitzner2013central} are expressed using Wasserstein distances. Within machine learning and statistics, this same property motivates a class of so-called \emph{minimum Wasserstein distance estimates}~\citep{del1999CLT,del2003correction,bassetti2006minimum,bernton2017inferenceUsingWasserstein} of distributions, ranging from exponential distributions~\citep{baillo2016exponentialWasserstein} to more exotic models such as restricted Boltzmann machines (RBMs)~\citep{montavon2016wassersteinRBMs} and generative adversarial networks (GANs)~\citep{arjovsky2017wassersteinGAN}. 
This class of estimators also includes $k$-means and $k$-medians, where the hypothesis class is taken to be discrete distributions supported on at most $k$ points~\citep{pollard1982quantization}; more flexible algorithms such as hierarchical $k$-means~\citep{ho2017multilevel} and $k$-flats~\citep{tseng2000kFlats} can also be expressed in this way, using a more elaborate hypothesis class. PCA can also be expressed and generalized to manifolds using Wasserstein distance minimization~\citep{boissard2015template}. These estimators are conceptually equivalent to empirical risk minimization, leveraging the fact that Wasserstein distances between the empirical distribution and distributions in the relevant hypothesis class are well-behaved. Moreover, these estimates often perform well in practice because they are free of both tuning parameters and strong distributional assumptions. For many of the above applications, it is important to understand how quickly the empirical distribution converges to the true distribution in Wasserstein distance, and whether there exist distribution estimators that converge more quickly. For example, \citet{canas2012learning} use bounds on Wasserstein convergence to prove learning bounds for $k$-means, while \citet{arora2017generalization} use the slow rate of convergence in Wasserstein distance in certain cases to argue that GANs based on Wasserstein distances fail to generalize with fewer than exponentially many samples in the dimension. To this end, the {\bf main contribution} of this paper is to identify, in a wide variety of settings, the minimax convergence rate for the problem of estimating a distribution using Wasserstein distance as a loss function. Our setting is very general, relying only on metric properties of the support of the distribution and the number of finite moments the distribution has; some diverse examples to which our results apply are given in Section~\ref{sec:examples}.
Specifically, we assume only that the distribution has some number of finite moments in a given metric. We then prove bounds on the minimax convergence rates of distribution estimation, utilizing covering numbers of the sample space for upper bounds and packing numbers for lower bounds. It may at first be surprising that positive results can be obtained under such mild assumptions; this highlights that the Wasserstein metric is quite a weak metric (see our Lemma 11 and the subsequent remark for discussion of this). Moreover, our results imply that, without further assumptions on the population distribution, the empirical distribution is typically minimax rate-optimal. Note that, while there has been previous work on upper bounds (discussed in Section~\ref{sec:related_work}), this paper is the first to study minimax lower bounds for this problem. \textbf{Organization: } The remainder of this paper is organized as follows. Section~\ref{sec:notation} provides notation required to formally state both the problem of interest and our results, while Section~\ref{sec:related_work} reviews previous work studying convergence of distributions in Wasserstein distance. Sections~\ref{sec:upper_bounds} and \ref{sec:lower_bounds} respectively contain our main upper and lower bound results. Since the proofs of the upper bounds are fairly long, Appendices A and B provide high-level sketches of the proofs, followed by detailed proofs in Appendix C. The lower bound is proven in Appendix D. Finally, in Section~\ref{sec:examples}, we apply our upper and lower bounds to identify minimax convergence rates in a number of concrete examples. Section~\ref{sec:conclusion} concludes with a summary of our contributions and suggested avenues for future work. \section{Notation and Problem Setting} \label{sec:notation} For any positive integer $n \in \N$, $[n] = \{1,2,...,n\}$ denotes the set of the first $n$ positive integers.
For sequences $\{a_n\}_{n \in \N}$ and $\{b_n\}_{n \in \N}$ of non-negative reals, $a_n \lesssim b_n$ and, equivalently, $b_n \gtrsim a_n$, indicate the existence of a constant $C > 0$ such that $\limsup_{n \to \infty} \frac{a_n}{b_n} \leq C$. $a_n \asymp b_n$ indicates $a_n \lesssim b_n \lesssim a_n$. \subsection{Problem Setting} For the remainder of this paper, fix a metric space $(\Omega,\rho)$, over which $\Sigma$ denotes the Borel $\sigma$-algebra, and let $\P$ denote the family of all Borel probability distributions on $\Omega$. The main object of study in this paper is the Wasserstein distance on $\P$, defined as follows: \begin{definition}[$r$-Wasserstein Distance] Given two Borel probability distributions $P$ and $Q$ over $\Omega$ and $r \in [1,\infty)$, the $r$-\emph{Wasserstein distance} $W_r(P,Q) \in [0,\infty]$ between $P$ and $Q$ is defined by \[W_r(P,Q) := \inf_{\mu \in \Pi(P,Q)} \left( \E_{(X,Y) \sim \mu} \left[ \rho^r \left( X, Y \right) \right] \right)^{1/r},\] where $\Pi(P,Q)$ denotes all couplings between $X \sim P$ and $Y \sim Q$; that is, \[\Pi(P,Q) := \left\{ \mu : \Sigma^2 \to [0,1] \middle| \text{ for all } A \in \Sigma, \mu(A \times \Omega) = P(A) \text{ and } \mu(\Omega \times A) = Q(A) \right\}\] is the set of joint probability measures over $\Omega \times \Omega$ with marginals $P$ and $Q$. \end{definition} Intuitively, $W_r(P,Q)$ quantifies the $r$-weighted total cost of transforming mass distributed according to $P$ to be distributed according to $Q$, where the cost of moving a unit mass from $x \in \Omega$ to $y \in \Omega$ is $\rho(x,y)$. $W_r(P,Q)$ is sometimes defined in terms of equivalent (e.g., dual) formulations; these formulations will not be needed in this paper. $W_r$ is symmetric in its arguments and satisfies the triangle inequality, and, for all $P \in \P$, $W_r(P, P) = 0$. Thus, $W_r$ is always a pseudometric. Moreover, it is a proper metric (i.e., $W_r(P,Q) = 0 \Rightarrow P = Q$) if and only if $\rho$ is as well.
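To make the definition concrete: on the real line with $\rho(x,y) = |x - y|$, the infimum over couplings between two empirical measures with the same number of atoms is attained by the monotone coupling, which matches order statistics. The following minimal sketch (the function name is ours, not part of the paper's formal development) computes $W_r$ this way:

```python
def empirical_wasserstein(xs, ys, r=1.0):
    """W_r between two empirical measures (mass 1/n at each point) on the
    real line, computed via the monotone coupling, which matches the i-th
    smallest atom of xs to the i-th smallest atom of ys."""
    if len(xs) != len(ys):
        raise ValueError("both samples must have the same number of atoms")
    n = len(xs)
    # Cost of the sorted matching, which is optimal for any r >= 1 in 1-D.
    cost = sum(abs(x - y) ** r for x, y in zip(sorted(xs), sorted(ys)))
    return (cost / n) ** (1.0 / r)
```

For atoms $\{0,1,2\}$ and $\{1,2,4\}$, the sorted matching gives $W_1 = 4/3$ and $W_2 = \sqrt{2}$.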
This paper studies the following problem: \textbf{Formal Problem Statement:} Suppose $(\Omega,\rho)$ is a known metric space. Suppose $P$ is an unknown Borel probability distribution on $\Omega$, from which we observe $n$ IID samples $X_1,...,X_n \stackrel{IID}{\sim} P$. We are interested in studying the minimax rates at which $P$ can be estimated from $X_1,...,X_n$, in terms of the ($r^{th}$ power of the) $r$-Wasserstein loss. Specifically, we are interested in deriving finite-sample upper and lower bounds, in terms of only properties of the space $(\Omega,\rho)$, on the quantity \begin{equation} \inf_{\hat P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r \left( P, \hat P(X_1,...,X_n) \right) \right], \label{exp:minimax_error} \end{equation} where the infimum is taken over all estimators $\hat P$ (i.e., (potentially randomized) functions $\hat P : \Omega^n \to \P$ of the data). In the sequel, we suppress the dependence of $\hat P = \hat P(X_1,...,X_n)$ on the data in the notation. \subsection{Definitions for Stating our Results} Here, we give notation and definitions needed to state our main results in Sections~\ref{sec:upper_bounds} and \ref{sec:lower_bounds}. Let $2^\Omega$ denote the power set of $\Omega$. Let $\SS \subseteq 2^{2^{\Omega}}$ denote the family of all Borel partitions of $\Omega$: \[\SS := \left\{ \S \subseteq \Sigma \quad : \quad \Omega \subseteq \bigcup_{S \in \S} S \quad \text{ and } \quad \forall S \neq T \in \S, S \cap T = \emptyset \right\}.\] We now define some metric notions that will later be useful for bounding Wasserstein distances: \begin{definition}[Diameter and Separation of a Set, Resolution of a Partition] For any set $S \subseteq \Omega$, the \emph{diameter $\Diam(S)$ of $S$} is defined by $\Diam(S) := \sup_{x,y \in S} \rho(x,y)$, and the \emph{separation $\Sep(S)$ of $S$} is defined by $\Sep(S) := \inf_{x \neq y \in S} \rho(x,y)$.
If $\S \in \SS$ is a partition of $\Omega$, then the \emph{resolution $\operatorname{Res}(\S)$ of $\S$} defined by $\operatorname{Res}(\S) := \sup_{S \in \S} \Diam(S)$ is the largest diameter of any set in $\S$. \end{definition} We now define the covering and packing number of a metric space, which are classic and widely used measures of the size or complexity of a metric space \citep{dudley1967coveringNumbers,haussler1995sphere,zhou2002covering,zhang2002covering}. Our main convergence results will be stated in terms of these quantities, as well as the packing radius, which acts, approximately, as the inverse of the packing number. \begin{definition}[Covering Number, Packing Number, and Packing Radius of a Metric Space] The \emph{covering number $N : (0,\infty) \to \N$ of $(\Omega,\rho)$} is defined for all $\epsilon > 0$ by \[N(\epsilon) := \min \left\{ |\S| : \S \in \SS \quad \text{ and } \quad \operatorname{Res}(\S) \leq \epsilon \right\}.\] The \emph{packing number $M : (0,\infty) \to \N$ of $(\Omega,\rho)$} is defined for all $\epsilon > 0$ by \[M(\epsilon) := \max \left\{ |S| : S \subseteq \Omega \quad \text{ and } \quad \Sep(S) \geq \epsilon \right\}.\] Finally, the \emph{packing radius $R : \N \to [0,\infty]$} is defined for all $n \in \N$ by \[R(n) := \sup\{ \Sep(S) : S \subseteq \Omega \quad \text{ and } \quad |S| \geq n\}.\] Sometimes, we use the covering or packing number of a metric space, say $(\Theta, \tau)$, other than $(\Omega,\rho)$; in such cases, we write $N(\Theta;\tau;\epsilon)$ or $M(\Theta;\tau;\epsilon)$ rather than $N(\epsilon)$ or $M(\epsilon)$, respectively. For specific $\epsilon > 0$, we will also refer to $N(\Theta;\tau;\epsilon)$ as the \emph{$\epsilon$-covering number of $(\Theta,\tau)$}. \end{definition} \begin{remark} The covering and packing numbers of a metric space are closely related. In particular, for any $\epsilon > 0$, we always have \begin{equation} M(\epsilon) \leq N(\epsilon) \leq M(\epsilon/2). 
\label{ineq:covering_packing_relationship} \end{equation} The packing number and packing radius also have a close approximate inverse relationship. In particular, for any $\epsilon > 0$ and $n \in \N$, we always have \begin{equation} R(M(\epsilon)) \geq \epsilon \quad \text{ and } \quad M(R(n)) \geq n. \label{ineq:packing_number_radius_relationship} \end{equation} However, it is possible that $R(M(\epsilon)) > \epsilon$ or $M(R(n)) > n$. \end{remark} Finally, when we consider unbounded metric spaces, we will require some concentration conditions on the probability distributions of interest, to obtain useful results. Specifically, we use an appropriately generalized version of the moment of the distribution: \begin{remark} We defined the covering number slightly differently from usual (using partitions rather than covers). However, the given definition is equivalent to the usual definition, since (a) any partition is itself a cover (i.e., a set $\C \subseteq 2^\Omega$ such that $\Omega \subseteq \bigcup_{C \in \C} C$), and (b), for any countable cover $\C := \{C_1,C_2,...\} \subseteq 2^\Omega$, there exists a partition $\S \in \SS$ with $|\S| \leq |\C|$ and each $S_i \subseteq C_i$, defined recursively by $S_i := C_i \sminus \bigcup_{j = 1}^{i - 1} C_j$. $\S$ is often called the \emph{disjointification} of $\C$. \label{remark:disjointification} \end{remark} \begin{definition}[Metric Moments of a Probability Distribution] For any $\ell \in [0,\infty]$, probability measure $P \in \P$, and $x \in \Omega$, the \emph{$\ell^{th}$ metric moment $m_{\ell,x}(P)$ of $P$ around $x$} is defined by \[m_{\ell,x}(P) := \left( \E_{Y \sim P} \left[ \left( \rho(x,Y) \right)^\ell \right] \right)^{1/\ell} \in [0,\infty],\] using the appropriate limit if $\ell = \infty$.
The chosen reference point $x$ only affects constant factors since, \[\text{ for all } x,x' \in \Omega, \quad \left| m_{\ell,x}^\ell(P) - m_{\ell,x'}^\ell(P) \right| \leq \left( \rho(x,x') \right)^\ell.\] Note that, if $\Omega$ has linear structure with respect to which $\rho$ is translation-invariant (e.g., if $(\Omega,\rho)$ is a Fr\'echet space), we can state our results more simply in terms of $m_\ell(P) := \inf_{x \in \Omega} m_{\ell,x}(P)$. As an example, if $\Omega = \R$ and $\rho(x,y) = |x - y|$, then $m_2(P)$ is precisely the standard deviation of $P$. \end{definition} \section{Related Work} \label{sec:related_work} A long line of work~\citep{dudley1969speed,ajtai1984optimalMatchings,canas2012learning,dereich2013constructive,boissard2014mean,fournier2015rate,weed2017sharp,lei2018convergence} has studied the rate of convergence of the empirical distribution to the population distribution in Wasserstein distance. In terms of upper bounds, the most general and tight upper bounds are the recent works of \citep{weed2017sharp} and \citep{lei2018convergence}. As we describe below, while these two papers overlap significantly, neither supersedes the other, and our upper bound combines the key strengths of those in \citep{weed2017sharp} and \citep{lei2018convergence}. The results of \citep{weed2017sharp} are expressed in terms of a particular notion of dimension, which they call the \textit{Wasserstein dimension} $s$, since they derive convergence rates of order $n^{-r/s}$ (matching the $n^{-r/D}$ rate achieved on the unit cube $[0,1]^D$). The definition of $s$ is complex (e.g., it depends on the sample size $n$), but \citep{weed2017sharp} show that, in many cases, $s$ converges to certain common definitions of the intrinsic dimension of the support of the distribution. 
This paper overcomes three main limitations of \citep{weed2017sharp}: \begin{enumerate}[nolistsep,leftmargin=2em] \item The upper bounds of \citep{weed2017sharp} apply only to totally bounded metric spaces. In contrast, our upper bounds permit unbounded metric spaces under the assumption that the distribution $P$ has some finite moment $m_\ell(P) < \infty$. The results of \citep{weed2017sharp} correspond to the special case $\ell = \infty$. \item Their main upper bound (their Proposition 10) only holds when $s > 2r$, with constant factors diverging to infinity as $s \downarrow 2r$. Hence, their rates are loose when $r$ is large or when the data have low intrinsic dimension. In contrast, our upper bound is tight even when $s \leq 2r$. \item As we discuss in our Example~\ref{ex:Lipschitz_Ball}, the upper bound of \citep{weed2017sharp} becomes loose as the Wasserstein dimension $s$ approaches $\infty$, limiting its utility in infinite-dimensional function spaces. In contrast, we show that our upper and lower bounds match for several standard function spaces. \end{enumerate} Intuitively, we find that the finite-sample bounds of \citep{weed2017sharp} are tight when the intrinsic dimension of the data lies in an interval $[a, b]$ with $2r < a < b < \infty$, but they can be loose outside this range. In contrast, we find our results give tight rates for a larger class of problems. On the other hand, \citep{lei2018convergence} focuses on the case where $\Omega$ is a (potentially unbounded and infinite-dimensional) Banach space, under moment assumptions on the distributions. Thus, while the results of \citep{lei2018convergence} cover interesting cases such as infinite-dimensional Gaussian processes, they do not demonstrate that convergence rates improve when the intrinsic dimension of the support of $P$ is smaller than that of $\Omega$ (unless this support lies within a \textit{linear} subspace of $\Omega$). 
As a simple example, if the distribution is in fact supported on a finite set of $k$ linearly independent points, the bound of \citep{lei2018convergence} implies only a much slower convergence rate, whereas we give a bound of order $O(\sqrt{k/n})$. Although we do not delve into this here, our results (unlike those of \citep{lei2018convergence}) should also benefit from the multi-scale behavior discussed in Section 5 of \citep{weed2017sharp}; namely, much faster convergence rates are often observed for small $n$ than for large $n$. This may help explain why algorithms such as functional $k$-means~\citep{garcia2015functionalKMeans} work in practice, even though the results of \citep{lei2018convergence} imply only a slow convergence rate of $O\left( (\log n)^{-p} \right)$, for some constant $p > 0$, in this case. Under similarly general conditions, \citep{sriperumbudur2010integralProbabilityMetrics,sriperumbudur2012empirical} have studied the related problem of estimating the Wasserstein distance between two unknown distributions given samples from those two distributions. Since one can estimate Wasserstein distances by plugging in empirical distributions, our upper bounds imply upper bounds for Wasserstein distance estimation. These bounds are tighter, in several cases, than those of \citep{sriperumbudur2010integralProbabilityMetrics,sriperumbudur2012empirical}; for example, when $\X = [0,1]^D$ is the Euclidean unit cube, we give a rate of $n^{-1/D}$, whereas they give a rate of $n^{-\frac{1}{D + 1}}$. Minimax rates for this problem are currently unknown, and it is presently unclear to us under what conditions recent results on estimation of $\L_1$ distances between discrete distributions~\citep{jiao2017minimaxL1} might imply an improved rate as fast as $\left( n \log n \right)^{-1/D}$ for estimation of Wasserstein distance.
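To make the plug-in idea above concrete, consider the one-dimensional case: for cost $|x - y|^r$ with $r \geq 1$, the optimal coupling between two empirical measures with the same number of atoms is the monotone (sorted) matching, so the plug-in estimate of $W_r^r$ requires only a sort. The following sketch is our own illustration, not code from any of the cited works:

```python
import numpy as np

def plugin_wasserstein_rr(x, y, r=1.0):
    """Plug-in estimate of W_r^r(P, Q) from two equal-size samples on the
    real line: for cost |x - y|^r with r >= 1, the optimal coupling of two
    n-atom empirical measures matches sorted samples."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    if len(x) != len(y):
        raise ValueError("this sketch assumes equal sample sizes")
    return float(np.mean(np.abs(x - y) ** r))

rng = np.random.default_rng(0)
n = 10_000
a, b = rng.uniform(0.0, 1.0, n), rng.uniform(0.0, 1.0, n)
c = rng.uniform(0.3, 1.3, n)
print(plugin_wasserstein_rr(a, b))  # same distribution: near 0
print(plugin_wasserstein_rr(a, c))  # W_1(U[0,1], U[0.3,1.3]) = 0.3, so near 0.3
```

As the discussion above suggests, the error of such a plug-in estimate is driven by the rate at which the empirical distributions converge to their population counterparts.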
To the best of our knowledge, minimax lower bounds for distribution estimation under Wasserstein loss remain unstudied, except in the very specific case when $\Omega = [0,1]^D$ is the Euclidean unit cube and $r = 1$~\citep{liang2017well}. As noted above, most previous works have focused on studying the convergence rate of the empirical distribution to the true distribution in Wasserstein distance. For this rate, several lower bounds have been established, matching known upper bounds in many cases. However, many distribution estimators besides the empirical distribution can be considered. For example, it is tempting (especially given the infinite dimensionality of the distribution to be estimated) to try to reduce variance by techniques such as smoothing or importance sampling~\citep{bucklew2013introduction}. Our lower bound results, given in Section~\ref{sec:lower_bounds}, imply that the empirical distribution is already minimax optimal, up to constant factors, in many cases. \section{Upper Bounds} \label{sec:upper_bounds} In this section, we present our main upper bounds on the convergence rate of the empirical distribution to the true distribution in Wasserstein distance. We begin by presenting a simpler result for the case of totally bounded metric spaces, followed by a more complex but general result for arbitrary metric spaces under finite-moment assumptions on the distribution. \begin{theorem} Let $(\Omega,\rho)$ be a metric space on which $P$ is a Borel probability measure. Let $\hat P$ denote the empirical distribution of $n$ IID samples $X_1,...,X_n \stackrel{IID}{\sim} P$, given by \begin{equation} \hat P(S) := \frac{1}{n} \sum_{i = 1}^n 1_{\{X_i \in S\}}, \quad \forall S \in \Sigma.
\label{eq:empirical_distribution} \end{equation} Then, for any non-increasing sequence $\{\epsilon_k\}_{k \in [K]} \in (0,\infty)^K$ with $\epsilon_0 = \Diam(\Omega)$, \[\E \left[ W_r^r(P, \hat P) \right] \leq \epsilon_K^r + \frac{1}{\sqrt{n}} \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{K - j} \epsilon_j \right)^r \sqrt{N(\epsilon_k) - 1}.\] \label{thm:expectation_bound} \end{theorem} In the proof of the above theorem, the sequence $\{\epsilon_k\}_{k \in [K]} \in (0,\infty)^K$ gives the resolutions of a sequence of increasingly fine partitions of $\Omega$. The basic idea of the proof is to recursively bound the error over each partition at resolution $\epsilon_j$ in terms of $\epsilon_j$ and the error over the partition of resolution $\epsilon_{j + 1}$. The parameter $K$ restricts us to a particular finite resolution, with optimal value typically increasing with $n$. Note that this ``multi-resolution'' proof approach has been utilized in several special cases in prior work. Our Theorem~\ref{thm:expectation_bound} is most comparable to the upper bound (Proposition 10) of \citet{weed2017sharp}. Theorem~\ref{thm:expectation_bound} requires $(\Omega,\rho)$ to be totally bounded in order for $N(\epsilon)$ to be finite. Next, we present a more complex bound, which, under the additional assumption that $P$ has some number of finite moments, is often finite even when $(\Omega,\rho)$ is \textit{not} totally bounded. The key idea of the proof is to partition $\Omega = \bigcup_{k = 0}^\infty B_k$ into bounded subsets, over each of which we can apply a bound similar to Theorem~\ref{thm:expectation_bound}. Thus, instead of the covering number $N(\epsilon)$ of $(\Omega,\rho)$, this result uses covering numbers $N(B_k,\rho,\epsilon)$ of a partition $\Omega = \bigcup_{k = 0}^\infty B_k$ into totally bounded subsets. \begin{theorem}[General Upper Bound for Unbounded Metric Spaces] Let $x_0 \in \Omega$ and suppose $m_{\ell,x_0}(P) \in [1, \infty)$.
Let $J \in \N$. Fix two real-valued sequences $\{w_k\}_{k \in \N}$ and $\{\epsilon_j\}_{j \in [J]}$, of which $\{w_k\}_{k \in \N}$ is non-decreasing with $w_0 = 0$ and $\lim_{k \to \infty} w_k = \infty$, and $\{\epsilon_j\}_{j \in [J]}$ is non-increasing. For each $k \in \N$, define $B_k(x_0) := \left\{ y \in \Omega : w_k \leq \rho(x_0, y) < w_{k + 1} \right\}$. Then, \begin{align*} \E \left[ W_r^r(P, \hat P) \right] & \leq m_{\ell,x_0}^\ell(P) \sum_{k \in \N} w_k^{-\ell} \left( \epsilon_J \right)^r + 2^r w_k^{r - \ell/2} \min \left\{ 2w_k^{-\ell/2}, \sqrt{\frac{1}{n}} \right\} \\ & \hspace{2cm} + \sum_{j = 1}^J \left( \sum_{t = j}^J 2^{J - t} \epsilon_t \right)^r \min \left\{ 2w_k^{-\ell}, \sqrt{\frac{w_k^{-\ell}}{n} N(B_k,\rho,\epsilon_j)} \right\}. \end{align*} \label{thm:unbounded_upper_bound} \end{theorem} In the above, $w_k$ corresponds to radii of the partition of $\Omega = \bigcup_{k = 0}^\infty B_k$ into a sequence of ``spherical shells'', whereas $\epsilon_j$, as in the previous result, corresponds to resolutions of partitions of the $B_k$'s. As with $K$ in the previous result, $J$ is used to ensure that we restrict ourselves to a particular finite resolution. The $\min$ terms appear because, for large $k$, the error is controlled by the fact that $P(B_k)$ is small (due to the moment assumption), rather than using a covering of $B_k$. \section{Lower Bounds} \label{sec:lower_bounds} In this section, we provide a minimax lower bound (over the family $\P$ of all Borel distributions on $\Omega$) for density estimation in Wasserstein distance (that is, the quantity \begin{equation} \inf_{\hat P : \Omega^n \to \P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r \left( P, \hat P \right) \right], \label{exp:distribution_estimation_minimax} \end{equation} where the infimum is over all estimators $\hat P$ of $P$ (i.e., all (potentially randomized) functions $\hat P : \Omega^n \to \P$)).
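As a quick numerical illustration of the regime these bounds cover (our own sketch, not part of the paper's analysis): take $P$ uniform on $[0,1]$ with $r = 1$ and $D = 1$, so $2r > D$ and the expected error of the empirical distribution should scale as $n^{-1/2}$. On the real line, $W_1(P, \hat P) = \int_0^1 |\hat F(t) - t| \, dt$, which can be computed exactly from the sorted sample:

```python
import numpy as np

def w1_to_uniform(sample):
    """Exact W_1 between the empirical measure of `sample` and the
    uniform distribution on [0,1], via W_1 = integral of |F_hat - F|."""
    x = np.sort(sample)
    n = len(x)
    knots = np.concatenate(([0.0], x, [1.0]))
    total = 0.0
    for i in range(n + 1):
        a, b = knots[i], knots[i + 1]
        c = i / n  # value of the empirical CDF on (a, b)
        if c <= a:    # integrand is t - c on [a, b]
            total += ((b - c) ** 2 - (a - c) ** 2) / 2
        elif c >= b:  # integrand is c - t on [a, b]
            total += ((c - a) ** 2 - (c - b) ** 2) / 2
        else:         # integrand crosses zero at t = c
            total += ((c - a) ** 2 + (b - c) ** 2) / 2
    return total

rng = np.random.default_rng(1)

def mean_w1(n, trials=200):
    return np.mean([w1_to_uniform(rng.uniform(0, 1, n)) for _ in range(trials)])

m100, m400 = mean_w1(100), mean_w1(400)
print(m400 / m100)  # close to (400/100)**(-1/2) = 0.5
```

Quadrupling $n$ roughly halves the mean error, consistent with the $n^{-1/2}$ rate predicted for this case.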
Our bound depends primarily on the packing radius $R$ of $(\Omega,\rho)$, and, presently, we handle only the case without finite-moment assumptions on $P$. However, we show in the next section that this often implies tight lower bounds when enough (roughly, $\ell \geq \max\{D,2r\}$) moments exist. \begin{theorem} Let $(\Omega,\rho)$ be a metric space, on which $\P$ is the set of Borel probability measures. Then, \[\inf_{\hat P : \Omega^n \to \P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P(X_1,...,X_n)) \right] \geq c_r \sup_{k \in [32n]} R^r(k) \sqrt{\frac{k - 1}{n}},\] where $c_r = \frac{3\log 2}{2^{r + 12}}$ depends only on $r$. \label{thm:Wasserstein_distribution_estimation_lower_bound} \end{theorem} \section{Example Applications} \label{sec:examples} Our theorems in the previous sections are quite abstract and have many tuning parameters. Thus, we conclude by exploring applications of our results to cases of interest. In each of the following examples, $P$ is an unknown Borel probability measure over the specified $\Omega$, from which we observe $n$ IID samples. For upper bounds, $\hat P$ denotes the empirical distribution~\eqref{eq:empirical_distribution} of these samples. \begin{example}[Finite Space] Consider the case where $\Omega$ is a finite set, over which $\rho$ is the discrete metric given, for some $\delta > 0$, by $\rho(x, y) = \delta 1_{\{x \neq y\}}$, for all $x,y \in \Omega$. Then, for any $\epsilon \in (0,\delta)$, the covering number is $N(\epsilon) = |\Omega|$.
Thus, setting $K = 1$ and sending $\epsilon_1 \to 0$ in Theorem~\ref{thm:expectation_bound} gives \[\E \left[ W_r^r(P, \hat P) \right] \leq \delta^r \sqrt{\frac{|\Omega| - 1}{n}}.\] On the other hand, $R(|\Omega|) = \delta$, and so, setting $k = |\Omega|$ in Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} yields \[\inf_{\hat P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P) \right] \gtrsim \delta^r \sqrt{\frac{|\Omega| - 1}{n}}.\] \label{ex:discrete_bound} \end{example} \begin{example}[Euclidean Space, Euclidean Metric] Consider the case where $\Omega = \R^D$ and $\rho$ is the Euclidean metric. Assuming $\ell > r$, using the fact that $N \left( B_k, \rho, \epsilon \right) \leq \left( \frac{3 w_k}{\epsilon} \right)^D$~\citep{pollard1990empirical} and plugging $\epsilon_j = 2^{-2j}$ and $w_k = 2^k$ into Theorem~\ref{thm:unbounded_upper_bound} gives (after a straightforward but very tedious calculation) a constant $C_{D,r,\ell}$ depending only on $D$, $r$, and $\ell$ such that \begin{equation} \E \left[ W_r^r(P, \hat P) \right] \leq C_{D,\ell,r} m_\ell^\ell(P) \left( n^{-\frac{\ell - r}{\ell}} + 2^{-2Jr} + \frac{1}{\sqrt{n}} \sum_{j = 1}^J 2^{(D - 2r)j} \right). \label{ineq:general_Euclidean_bound} \end{equation} Of these three terms, the first depends only on the number $\ell$ of finite moments $P$ is assumed to have and the order $r$ of the Wasserstein distance, whereas the second and third terms depend on choosing the parameter $J$. The optimal choice of $J$ scales with the sample size $n$ at a rate depending on the quantity $D - 2r$. Specifically, if $D = 2r$, then setting $J \asymp \frac{1}{4r} \log_2 n$ gives a rate of $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-1/2} \log n$.
If $D \neq 2r$, then~\eqref{ineq:general_Euclidean_bound} reduces to \[\E \left[ W_r^r(P, \hat P) \right] \leq C_{D,\ell,r} m_\ell^\ell(P) \left( n^{-\frac{\ell - r}{\ell}} + 2^{-2Jr} + \frac{1}{\sqrt{n}} \cdot \frac{2^{(D - 2r)J} - 1}{2^{D - 2r} - 1} \right).\] Then, if $D < 2r$, sending $J \to \infty$ gives $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-1/2}$. Finally, if $D > 2r$, then setting $J \asymp \frac{1}{2D} \log_2 n$ gives $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-\frac{r}{D}}$. To summarize, \[\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + \left\{ \begin{array}{ll} n^{-1/2} & \text { if } 2r > D \\ n^{-1/2} \log n & \text { if } 2r = D \\ n^{-r/D} & \text { if } 2r < D \end{array} \right.\] (reproducing Theorem 1 of \citep{fournier2015rate}). On the other hand, it is easy to check that the packing radius $R$ satisfies $R(n) \geq n^{-1/D}$ and $R(2) \geq \sqrt{D}$. Thus, Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} with $k = n$ and $k = 2$ yields \[\inf_{\hat P} \sup_{P \in \P} \E \left[ W_r^r(\hat P, P) \right] \gtrsim \max \left\{ (n + 1)^{-r/D}, D^{r/2} n^{-1/2} \right\}.\] Together, these bounds give the following minimax rates for density estimation in Wasserstein loss: \[\inf_{\hat P} \sup_{P \in \P} \E \left[ W_r^r(\hat P, P) \right] \asymp \left\{ \begin{array}{ll} n^{-1/2} & \text { if } \ell > 2r > D \\ n^{-r/D} & \text { if } 2r < D, \ell > \frac{Dr}{D - r} \end{array} \right.\] When $2r = D$ and $\ell > 2r$, our upper and lower bounds are separated by a factor of $\log n$. The main result of \citep{ajtai1984optimalMatchings} implies that, for the case $D = 2$ and $r = 1$, the empirical distribution converges as $n^{-1/2} \log n$, suggesting that the $\log n$ factor in our upper bound may be tight.
Further generalization of Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} is needed to give lower bounds when both $D, \ell \leq 2r$ or when $D > 2r$ and $\ell \leq \frac{Dr}{D - r}$. \label{ex:unit_cube_lower_bound} \end{example} The next example demonstrates how the rate of convergence in Wasserstein metric depends on properties of the metric space $(\Omega,\rho)$ at both large and small scales. Specifically, if we discretize $\Omega$, then the phase transition at $2r = D$ disappears. \begin{example} Suppose $\Omega = \mathbb{Z}^D$ is a $D$-dimensional grid of integers and $\rho$ is the $\ell_\infty$ metric (given by $\rho(x,y) = \max_{j \in [D]} |x_j - y_j|$). Since $\Z^D \subseteq \R^D$ and the $\ell_\infty$ and Euclidean metrics are equivalent up to a factor of $\sqrt{D}$, the upper bounds from Example~\ref{ex:unit_cube_lower_bound} clearly apply, up to a factor depending on $D$. However, we also have the fact that, whenever $\epsilon < 1$, $N(B_k,\rho,\epsilon) \lesssim w_k^D$. Therefore, setting $J = 0$, $\epsilon_0 = 0$, and $w_k = 2^k$ in Theorem~\ref{thm:unbounded_upper_bound} gives, for a constant $C_{D,\ell,r}$ depending only on $D$, $\ell$, and $r$, \begin{align*} \E \left[ W_r^r(P, \hat P) \right] & \leq C_{D,\ell,r} m_\ell^\ell(P) \left( n^{-\frac{\ell - r}{\ell}} + \sum_{k \in \N} \sqrt{\frac{2^{(D - \ell)k}}{n}} \right). \end{align*} When $\ell > D$, this reduces to $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-1/2}$, giving a tighter rate than in Example~\ref{ex:unit_cube_lower_bound} when $2r \leq D < \ell$. To the best of our knowledge, no prior results in the literature imply this fact. \end{example} Finally, we consider distributions over an infinite-dimensional space of smooth functions.
\begin{example}[H\"older Ball, $\L_\infty$ Metric] Suppose that, for some $\alpha \in (0,1]$, \[\Omega := \left\{ f : [0,1]^D \to [-1,1] \quad \middle| \quad \forall x,y \in [0,1]^D, \quad |f(x) - f(y)| \leq \|x - y\|_2^\alpha \right\}\] is the class of unit $\alpha$-H\"older functions on the unit cube and $\rho$ is the $\L_\infty$-metric given by \[\rho(f,g) = \sup_{x \in [0,1]^D} |f(x) - g(x)|, \quad \text{ for all } f,g \in \Omega.\] The covering and packing numbers of $(\Omega,\rho)$ are well-known to be of order $\exp \left( \epsilon^{-D/\alpha} \right)$ \citep{devore1993approximation}; specifically, there exist positive constants $0 < c_1 < c_2$ such that, for all $\epsilon \in (0,1)$, \[c_1 \exp \left( \epsilon^{-D/\alpha} \right) \leq M(\epsilon) \leq N(\epsilon) \leq c_2 \exp \left( \epsilon^{-D/\alpha} \right).\] Since $\Diam(\Omega) = 2$, applying Theorem~\ref{thm:expectation_bound} with $K = 1$ and \[\epsilon_1 = \left( 2\log n - (\alpha r/D) \log \log n \right)^{-\frac{\alpha}{D}} \quad \text{ gives } \quad \E \left[ W_r^r(P, \hat P) \right] \lesssim \left( \log n \right)^{\frac{- \alpha r}{D}}.\] Conversely, Inequality~\eqref{ineq:packing_number_radius_relationship} implies $R(n) \geq \left( \log(n/c_1) \right)^\frac{-\alpha}{D}$, and so setting $k = n$ in Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} gives \[\inf_{\hat P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P) \right] \gtrsim \left( \frac{1}{\log(n/c_1)} \right)^\frac{\alpha r}{D},\] showing that distribution estimation over $(\P,W_r^r)$ has the extremely slow minimax rate $\left( \log n \right)^\frac{-\alpha r}{D}$. Although we considered only $\alpha \in (0,1]$ (due to the notational complexity of defining higher-order H\"older spaces), analogous rates hold for all $\alpha > 0$. Also, since our rates depend only on covering and packing numbers of $\Omega$, identical rates can be derived for related Sobolev and Besov classes.
Note that the Wasserstein dimension used in the prior work \citep{weed2017sharp} is of order $\frac{D}{\alpha} \log n$, and so their upper bound (their Proposition 10) gives a rate of $n^{-\frac{\alpha r}{D \log n}} = \exp \left( -\frac{\alpha r}{D} \right)$, which fails to converge as $n \to \infty$. \label{ex:Lipschitz_Ball} \end{example} One might wonder why we are interested in studying Wasserstein convergence of distributions over spaces of smooth functions, as in Example~\ref{ex:Lipschitz_Ball}. Our motivation comes from the fact that smooth function spaces have been widely used for modeling images and other complex naturalistic signals \citep{mallat1999wavelet,peyre2011numerical,sadhanala2016totalVariation}. Empirical breakthroughs have recently been made in generative modeling, particularly of images, based on the principle of minimizing Wasserstein distance between the empirical distribution and a large class of models encoded by a deep neural network~\citep{montavon2016wassersteinRBMs,arjovsky2017wassersteinGAN,gulrajani2017improved}. However, little is known about theoretical properties of these methods; while there has been some work studying the optimization landscape of such models~\citep{nagarajan2017gradient,liang2018interaction}, we know of far less work exploring their \textit{statistical} properties. Given the extremely slow minimax convergence rate we derived above, it must be the case that the class of distributions encoded by such models is far smaller or sparser than $\P$. An important avenue for further work is thus to explicitly identify stronger assumptions that can be made on distributions over interesting classes of signals, such as images, to bridge the gap between empirical performance and our theoretical understanding. \section{Conclusion} \label{sec:conclusion} In this paper, we derived upper and lower bounds for distribution estimation under Wasserstein loss.
Our upper bounds generalize prior results and are tighter in certain cases, while our lower bounds are, to the best of our knowledge, the first minimax lower bounds for this problem. We also provided several concrete examples in which our bounds imply novel convergence rates. \subsection{Future Work} We studied minimax rates over the very large class $\P$ of all distributions with some number of finite moments. It would be useful to understand how minimax rates improve when additional assumptions, such as smoothness, are made (see, e.g., \citep{liang2017well} for somewhat improved upper bounds under smoothness assumptions when $(\Omega,\rho)$ is the Euclidean unit cube). Given the slow convergence rates we found over $\P$ in many cases, studying minimax rates under stronger assumptions may help to explain the relatively favorable empirical performance of popular distribution estimators based on empirical risk minimization in Wasserstein loss. Moreover, while rates over all of $\P$ are of interest only for very weak metrics such as the Wasserstein distance (as stronger metrics may be infinite or undefined), studying minimax rates under additional assumptions will allow for a better understanding of the Wasserstein metric in relation to other commonly used metrics. \newpage \subsubsection*{Acknowledgments} This work was partly supported by an NSF Graduate Research Fellowship DGE-1252522 to S.S. {\small \bibliographystyle{plainnat}
\section{Introduction} Solar activity has a strong influence on the modulation of the flux of galactic cosmic rays (GCRs) arriving at Earth, whose transport through the heliosphere is one of the topics of major interest in space physics, presenting several unresolved questions. Space weather physics is experiencing fast-growing interest nowadays because of evidence that environmental conditions in near-Earth space have direct and indirect impacts on technology and the global economy\,\citep{Schrijver2015}. One of the most puzzling modulations of the cosmic-ray flux at ground level is the Forbush Decrease (FD): a rapid reduction in the observed galactic cosmic ray intensity followed by a slow exponential-like recovery\,\citep{Usoskin2008}. This phenomenon was initially reported by S.E. Forbush --and almost simultaneously by V.F. Hess \& A. Demmelmair-- in 1937\,\citep[see e.g.][]{Forbush1937, victor-demmelamair1937}. Later, in the 1950's, the works of Simpson, Fonger \& Treiman\,\citep{SimpsonFongerTreiman1953}, and Singer\,\citep{Singer1954}, showed that FDs are related to solar activity interacting with the interplanetary medium. More recently, Lockwood showed a dependence of the magnitude of the FD upon the vertical cutoff rigidity\,\citep{Lockwood1971}. FDs can be classified into two groups: recurrent and non-recurrent FDs. While the first group\,\citep{Lockwood1971} has a symmetric profile and is well associated with co-rotating high-speed solar wind streams\,\citep{Cane2000}, non-recurrent FDs have sudden onsets with a maximum depression within a day and a more gradual recovery\,\citep{Cane2000, Belov-etal2014}, and are related to the interaction of GCRs, Interplanetary Coronal Mass Ejections (ICMEs) and a perturbed geomagnetic field (perturbed by its interaction with the same ICME).
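Since the cutoff rigidity mentioned above plays a central role in geomagnetic modulation studies, we recall the standard definition of magnetic rigidity, $R = pc/(Ze)$: particles with the same rigidity follow identical trajectories in a given magnetic field. The following sketch is our own illustration using standard relativistic kinematics (the numerical values are not taken from this paper):

```python
import math

PROTON_REST_MASS_GEV = 0.938272  # proton rest energy, GeV

def rigidity_gv(kinetic_energy_gev, charge_number=1,
                rest_mass_gev=PROTON_REST_MASS_GEV):
    """Magnetic rigidity R = pc / (Z e), in GV, for a nucleus of charge
    number Z, from its kinetic energy T via pc = sqrt(T^2 + 2 T m c^2)."""
    pc = math.sqrt(kinetic_energy_gev ** 2
                   + 2.0 * kinetic_energy_gev * rest_mass_gev)
    return pc / charge_number

# A 10 GeV proton has rigidity of about 10.9 GV; only primaries above the
# local geomagnetic cutoff reach the atmosphere at a given site.
print(round(rigidity_gv(10.0), 2))  # 10.9
```

For reference, typical vertical cutoff rigidities range from below 1\,GV near the geomagnetic poles to roughly 15\,GV near the equator, so the same primary may or may not reach the atmosphere depending on the site.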
Understanding these very complex phenomena depends upon: in situ measurements of the interaction between GCRs and ICMEs; tracking the propagation of GCRs through the Geomagnetic Field (GF hereafter); taking into account their interaction with the atmosphere, and also the variation of the particles produced by the latter interaction (hereafter secondaries) at ground level. As the secondaries are produced by the interaction of GCRs with the atmosphere (the GCRs are hereafter denoted as primaries), the modulation of secondary particles needs to be monitored and carefully corrected by taking into account several atmospheric factors that could modify the flux of secondaries at Earth's surface (the atmospheric density profile is one such factor, since it determines the absorption of secondary particles)\,\citep{ThePierreAugerCollaboration2011, 2012AdSpR..49.1563D, Asorey2013, Auger2015,Suarez2015}. Furthermore, a series of complete and detailed simulations is needed to characterize the expected flux at the detector altitude. This kind of simulation must take into account several important factors, such as GF conditions (i.e., estimations of the magnetic rigidity of the GCRs), the interaction of GCRs with the atmosphere, the variations in atmospheric depth, and the detector response. Direct solar wind observations using spacecraft can provide insight into the interplanetary magnetic field, but its global structure cannot be completely monitored by on-board measurements, because they can only detect local variations along the trajectory of the probe within the solar wind. On the other hand, ground observatories with detectors spread over very large areas, indirectly registering low-energy GCRs, can provide important, alternative and complementary information about the broader structure of the interplanetary magnetic fields and their influence on the GCR flux.
In this context, a global network of both types of observatories, combining different techniques to monitor the development of FDs at different geomagnetic latitudes and rigidity cutoffs, will enrich future studies on the detection of solar modulation of GCRs\,\citep[see e.g.][]{Abbasi2008,Auger2015}. The Latin American Giant Observatory (LAGO)\,\citep{Allard-etal2008, Asorey2015a} has developed a program to understand the influence of space weather phenomena on the flux of GCRs. This program, called LAGO Space Weather (LAGO-SW)\,\citep{Asorey2013}, includes a precise simulation that takes into account the geomagnetic corrections and a detailed measurement of the modulation of the flux of secondaries, and evaluates whether this modulation has possible causal correlations with space weather phenomena, like FDs\,\citep{Suarez2015}. Nowadays, computational capabilities allow the extension of the usual approach, which is to consider only the components of the GCR flux locally and include geomagnetic effects by an effective rigidity cutoff for vertical primaries\,\citep{MasiasDasso2014}. The detailed simulations described in this work include these effects over different arrival directions during dynamic events affecting the geomagnetic field and atmospheric conditions. We have generalized previous attempts by including not only secular, but also transient variations of the directional geomagnetic rigidity cutoff. This paper is organized as follows: in Section \ref{LAGO}, the Latin American Giant Observatory and its space weather program are briefly described; then, in Section \ref{chain}, we present our space weather simulation chain, focusing on geomagnetic corrections of the primary flux; in Section \ref{results} we discuss our main results under secular conditions and later under geomagnetic disturbances for two LAGO sites: Bucaramanga-Colombia and San Carlos de Bariloche-Argentina.
Finally, in Section \ref{conclusions} some final remarks and future projects are considered. \section{The LAGO Space Weather Program} \label{LAGO} The Latin American Giant Observatory (LAGO) is an extended astroparticle observatory on a continental scale, promoting training and research in astroparticle physics in Latin America and covering three main areas: the search for the high-energy component of gamma-ray bursts (GRBs) at high-altitude sites, space weather phenomena, and background radiation at ground level\,\citep{Asorey2015a}. The LAGO detection network consists of ground-level water-Cherenkov particle detectors (WCDs), spanning several sites located at significantly different latitudes and various altitudes --from Mexico to Patagonia, and from mean sea level up to more than 5000\,m of altitude. After the installation of new detectors at the Antarctic Peninsula\,\citep{DassoEtal2015}, LAGO will cover a large range of geomagnetic rigidity cutoffs and atmospheric absorption/depths\,\citep{Sidelnik2015}. The current/planned distribution and status of the LAGO detection network is shown in Figure \ref{lago-sites}. This network of detectors is operated by the LAGO Collaboration: a non-centralized and distributed collaborative network of more than 80 scientists from institutions of ten Latin American countries (Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guatemala, Mexico, Peru and Venezuela) and Spain. The LAGO Collaboration uses WCDs at all sites, due to their proven reliability, low cost, and high efficiency in detecting all components present in atmospheric extensive showers\,\citep{Asorey2015a, Sidelnik2015, Galindo2015, DassoEtal2015}.
\begin{figure}[h] \centering \includegraphics[width=32pc]{lago-map-rc_small2_0.pdf} \caption{The left panel indicates the geographical distribution and altitudes of LAGO water-Cherenkov detectors: operational sites are represented by blue triangles, sites in deployment by orange squares, and planned sites by red circles. The right panel shows the vertical rigidity cutoff at each LAGO site.} \label{lago-sites} \end{figure} As explained in \citet{Asorey2015a}, the LAGO scientific and academic objectives are organized in different programs. The LAGO Space Weather Program is devised to study variations in the flux of secondary particles at ground level and their relation to the heliospheric modulation of GCRs. The LAGO detector network determines the flux of secondary particles in different bands of energy deposited in the detector, by using pulse shape discrimination techniques. This is what we have called the multi-spectral analysis technique\,\citep{Suarez2015}. The total energy thresholds for the detection of secondary particles are $\simeq 0.4$\,MeV for gammas, $\simeq 0.8$\,MeV for electrons and $\simeq 160$\,MeV for muons. By combining all the data measured at the different locations, LAGO provides simultaneous and detailed information on the temporal evolution of the secondary flux at different geomagnetic locations. This can help to get a better understanding of the small and large space-time scales of the disturbances produced by different space weather phenomena\,\citep{Suarez2015}. Any attempt to estimate the expected flux of secondaries at any detector of the LAGO network should be based on a detailed simulation that takes into account all possible sources of flux variation.
This complex approach comprises processes occurring at different spatial and time scales, following this conceptual scheme: \begin{center} \begin{tabular}{ccccc} GCR Flux & $\xrightarrow{\mathrm{Heliosphere}}$ & Modulated Flux & $\xrightarrow{\mathrm{Magnetosphere}}$ & Primaries $\rightarrow \cdots$ \\ $\cdots \rightarrow$ Primaries & $\xrightarrow{\mathrm{Atmosphere}}$ & Secondaries & $\xrightarrow{\mathrm{Detector}}$ & Signals. \end{tabular} \end{center} As can be appreciated, the above simulation pipeline considers three important factors with different spatial and time scales: the geomagnetic effects, the development of the extensive air showers in the atmosphere, and the detector response at ground level. According to the above scheme, in this work we focus on the stages covering from the modulated flux to the flux of secondary particles at ground level. Our {\it Simulation Chain} can be depicted in three main consecutive blocks: \begin{enumerate} \item The effects of the geomagnetic field (GF) on the propagation of charged particles, contributing to the background radiation at ground level, are characterized by the directional rigidity cutoff, $R_\mathrm{C}$, at each LAGO site, calculated using the MAGNETOCOSMICS code\,\citep{Desorgher2004} applying the backtracking technique\,\citep[see e.g.][]{MasiasDasso2014}. The geomagnetic field at any point on Earth is determined by using the International Geomagnetic Reference Field, version 11 (IGRF-11)\,\citep{IGRF11} for the near-Earth GF ($r<5 R_{\oplus}$) --where $r$ is the distance from the Earth's center and $R_{\oplus}$ is the Earth radius ($6371$\,km)-- and through the Tsyganenko magnetic field model version 2001 (TSY01 hereafter)\,\citep{TSY01a} to describe the outer GF ($r > 5 R_{\oplus}$). \item The second step of the chain is based on the CORSIKA code\,\citep{Corsika}.
Extensive air showers produced during the interaction of cosmic rays with the atmosphere are simulated in great detail to obtain a very comprehensive set of secondaries at ground level. \item Finally, a GEANT4 model\,\citep{Agostinelli2003} of the detector response to the different types of secondary particles is being implemented\,\citep{Otiniano2015, Vargas2015, CalderonAsoreyNunez2015} and will be reported in the near future. \end{enumerate} \section{The space weather simulation chain} \label{chain} The propagation of charged particles through the GF has been studied since the 1960s, with a focus on understanding how the penumbra region changes with the geographical position\,\citep[see e.g.][]{shea-smart-mccracken-1965, Carmicheal-etal1969, smart-shea2012}. In this section we discuss our novel approach to understanding the penumbra region and our proposal for a new method to calculate the magnetic rigidity as a function of time. We shall also describe in detail how geomagnetic effects on the low energy flux of primaries can be inferred from observations of secondary particles at ground level by means of the following procedure: \begin{enumerate} \item To find a magnetic rigidity function, $R_\mathrm{m}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, t, \theta, \phi)$, at a particular geographical position --i.e. latitude (Lat), longitude (Lon) and altitude above sea level (Alt)--, time ($t$) and arrival direction of the particle --i.e. zenith ($\theta$) and azimuth ($\phi$) angles; \item To calculate the flux of primaries at the top of the atmosphere ($\approx 112$\,km above sea level, a.s.l.), filtered by the magnetic rigidity function $R_\mathrm{m}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, t, \theta, \phi)$; \item To estimate the flux of secondaries at ground level produced by the interactions of the impinging GCRs with the atmosphere. \end{enumerate} The following subsections develop all the details of the above-mentioned steps.
\subsection{Magnetic rigidity as a function of time} \label{sub:rm-time} The direction of the velocity of a GCR ($\hat{I}=\vec{v}/|\vec{v}|$) changes along the particle trajectory inside the dynamic GF, $\vec{B}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, t)$, according to the equation \begin{linenomath*} \begin{equation}\label{def:rm} \frac{d\hat{I}}{ds} = \frac{1}{R_\mathrm{m}} \left( \hat{I} \times \vec{B}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, t) \right)\, , \end{equation} \end{linenomath*} where $s$ is the path length along the particle trajectory and $R_\mathrm{m}=pc/(Ze)$ is the magnetic rigidity, with $p$ the particle momentum, $c$ the speed of light, $Z$ the atomic number and $e$ the electric charge of the electron. The variation of $\hat{I}$ is weighted by $R_\mathrm{m}$, and therefore a GCR is able to arrive at a specific geographical point --i.e. latitude ($\mathrm{Lat}$), longitude ($\mathrm{Lon}$) and altitude ($\mathrm{Alt}$)-- under a given configuration of $\vec{B}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, t)$ along its trajectory, only if its $R_\mathrm{m}$ has the right value. Thus, we can write $R_\mathrm{m}$ as \begin{linenomath*} \begin{equation} \label{def:newrm} R_\mathrm{m} \equiv R_\mathrm{m}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt},t)\,. \end{equation} \end{linenomath*} Following standard definitions\,\citep[see e.g.][]{Cooke-etal1991}, particles with \textit{allowed} $R_\mathrm{m}$ will reach a given geographical position, while those with \textit{forbidden} $R_\mathrm{m}$ will not.
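To illustrate the backtracking idea behind equation (\ref{def:rm}), the direction equation can be integrated numerically. The following is a minimal sketch, not the MAGNETOCOSMICS implementation: `B_field` is a user-supplied model of $\vec{B}$ (in tesla), and the rigidity is assumed to be expressed in tesla-metres (rigidity in volts divided by $c$), so that the units of $ds$, $\vec{B}$ and $R_\mathrm{m}$ are mutually consistent.

```python
import numpy as np

def trace_direction(I0, B_field, R_m, ds=1.0e3, n_steps=1000):
    """Euler integration of equation (def:rm): dI/ds = (1/R_m) (I x B).

    I0      : initial unit vector of the velocity direction
    B_field : callable returning B (tesla) at a position r (metres)
    R_m     : magnetic rigidity expressed in tesla-metres
    ds      : path-length step in metres
    """
    I = np.asarray(I0, dtype=float)
    I /= np.linalg.norm(I)
    r = np.zeros(3)                      # position along the trajectory
    for _ in range(n_steps):
        I = I + (ds / R_m) * np.cross(I, B_field(r))
        I /= np.linalg.norm(I)           # keep I a unit vector
        r = r + ds * I
    return r, I
```

For backtracking, the sign of the charge (and hence of $R_\mathrm{m}$) is reversed and the particle is propagated outwards from the observation point; a trajectory is \textit{allowed} if it escapes the magnetosphere.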
With these considerations, three different ranges of $R_\mathrm{m}$ can be defined: \begin{itemize} \item \textit{Forbidden} range: a continuous range which goes from zero to the first \textit{allowed} value of $R_\mathrm{m}$, say $R_\mathrm{L}$; \item \textit{Allowed} range: containing all the rigidities above a certain $R_\mathrm{m}$ value, say $R_\mathrm{U}$, for which all the rigidities contained in this range are \textit{allowed}; \item \textit{Penumbra} range: the range ($R_\mathrm{L} < R_\mathrm{m} < R_\mathrm{U}$) connecting the \textit{allowed} and \textit{forbidden} ranges. \end{itemize} The \textit{penumbra} is characterized by a single, effective rigidity value\,\citep{shea-smart-mccracken-1965, SmartShea2009}, which is used to establish whether a GCR arrives, or not, at the particular geographical point. This value is called \textit{the rigidity cutoff} ($R_\mathrm{C}$) and can be defined as \begin{linenomath*} \begin{equation} \label{def:basic-rc} R_\mathrm{C} = R_\mathrm{U} - \sum_{k=R_\mathrm{L}}^{R_\mathrm{U}} \Delta {R_k}^{\mathrm{allowed}}\, , \end{equation} \end{linenomath*} where $\Delta{R_k}$ is the resolution of the $R_\mathrm{m}$ calculation. Strictly speaking, $R_\mathrm{U}$ and $R_\mathrm{L}$ depend on time, the arrival direction, the geographical position and the altitude; thus, we should consider that, at a geographical point, \begin{linenomath*} \begin{equation} \label{def:renewrm} R_\mathrm{m} = R_\mathrm{m}\left( \mathrm{Lat},\mathrm{Lon},\mathrm{Alt},t,\theta,\phi \right)\, . \end{equation} \end{linenomath*} It is important to note that definition (\ref{def:basic-rc}) makes the implicit assumption that all the particle trajectories contribute equally in the penumbra region, i.e., the flat GCR spectrum approximation, following \citet{Dorman-etal2008}.
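The standard cutoff of equation (\ref{def:basic-rc}) can be sketched as follows, assuming the backtracking has already produced an allowed/forbidden flag on a uniform rigidity grid (the inputs here are hypothetical, not the MAGCOS interface):

```python
import numpy as np

def rigidity_cutoff(r_grid, allowed):
    """Effective rigidity cutoff, equation (def:basic-rc).

    r_grid  : uniformly spaced rigidity values (GV), step dR
    allowed : boolean array, True where the trajectory is allowed;
              a penumbra (mixed region) is assumed to exist.
    """
    allowed = np.asarray(allowed, dtype=bool)
    dR = r_grid[1] - r_grid[0]
    first_allowed = np.flatnonzero(allowed)[0]     # index of R_L
    last_forbidden = np.flatnonzero(~allowed)[-1]  # R_U is one step above
    r_U = r_grid[last_forbidden] + dR
    # subtract the allowed intervals inside the penumbra [R_L, R_U]
    n_allowed = np.count_nonzero(allowed[first_allowed:last_forbidden + 2])
    return r_U - dR * n_allowed

# toy penumbra: an allowed island at 3.0-3.4 GV, fully allowed above 5.0 GV
r = np.arange(0.0, 10.0, 0.1)
mask = np.zeros(r.size, dtype=bool)
mask[30:35] = True     # allowed island inside the penumbra
mask[50:] = True       # allowed region above R_U = 5.0 GV
print(rigidity_cutoff(r, mask))   # -> approx. 4.4 GV
```

The single value returned is meaningful only under the flat GCR spectrum approximation mentioned above.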
In this approximation, the very complex problem of allowed trajectories in the penumbra region is simply replaced by an effective cutoff, calculated only for vertical primaries. We have refined this approximation by considering the penumbra not as a sharp cutoff, but as a relatively smooth transition between the forbidden and the allowed regions. In our approach, we extend the concept of the effective rigidity cutoff, assuming that it can be approximated by a cumulative probability function (CPF). The next subsection outlines the method we have implemented to calculate $R_\mathrm{m}$ and to characterize the penumbra region as a CPF. \subsubsection{Magnetic Rigidity Calculation} \label{subsubsec:rm-calcu} We performed the $R_\mathrm{m}$ calculation by the backtracking technique\,\citep[see e.g.][]{MasiasDasso2014}, via the MAGNETOCOSMICS (MAGCOS) code, with a resolution of $0.01$\,GV, considering two conditions: secular and dynamic geomagnetic field effects. For secular conditions we used the configuration of the geomagnetic field at 6 UTC on April 26$^{th}$, 2005, because at this time the registered Disturbance Storm Time index (Dst index hereafter,\\\noindent https://www.ngdc.noaa.gov/stp/geomag/dst.html) was zero, with a variability of $0.79$\,nT from 0 UTC to 12 UTC of the same day (http://wdc.kugi.kyoto-u.ac.jp/dst\_final/200504/index.html). For the dynamic GF contribution, we calculated $R_\mathrm{m}$ according to the GF configuration for each hour of May 2005, setting the following parameters: dynamic pressure, Dst index, interplanetary magnetic field components $B_x$ and $B_z$, and the Tsyganenko parameters G1 and G2 of the TSY01 model\,\citep{TSY01a}.
These parameters were taken from the Virtual Radiation Belt Observatory\,\citep{Weigel-etal2009}; $R_\mathrm{m}$ values were calculated for zenith angles from $0^\circ$ to $90^\circ$, with $\Delta\theta = 15^\circ$, and azimuth angles (for each $\theta$) from $0^\circ$ to $360^\circ$, with $\Delta\phi=15^\circ$, for both sites. \subsubsection{Interpreting the Penumbra Region} \label{subsubsec:inter-penum} Instead of the standard simplifying assumption for the penumbra region, we build a cumulative probability function (CPF), valid from $R_\mathrm{L}$ to $R_\mathrm{U}$, which replaces the usual concept of $R_\mathrm{C}$ (defined in equation (\ref{def:basic-rc})). We denote this CPF as $P(R_\mathrm{m}(\theta))$, representing the probability that a cosmic ray with rigidity $R_\mathrm{m}$ arrives at some geographical position with zenith angle $\theta$ at time $t$. We take into account the following considerations: \begin{itemize} \item the backtracking technique performed by MAGCOS is a deterministic method, which implies that it is not possible to calculate a statistical set of $R_\mathrm{m}$ values for a specific arrival direction, i.e., a pair ($\theta$,$\phi$); \item for each zenith angle we consider $24$ uniform ranges in azimuth with an angular amplitude of $15^{\circ}$ each, i.e. for each zenith angle we have $24$ different penumbra regions; and \item for each $\theta$, the associated set of $R_\mathrm{m}$ in the $24$ penumbra regions has a global minimum value ($R_\mathrm{Lmin}$ hereafter) and a global maximum value ($R_\mathrm{Umax}$ hereafter).
\end{itemize} Accordingly, it is possible to adopt a frequentist approach, assuming a probability function defined as: \begin{linenomath*} \begin{equation} \label{def:pdf} \wp(R_\mathrm{m}(\theta)) = \frac{\#R_\mathrm{m_{\mathrm{allowed}}}(\theta)} {\#R_\mathrm{m_{\mathrm{tot-allowed}}}(\theta)}\, , \end{equation} \end{linenomath*} where, for each $\theta$, we have averaged over the azimuth angle $\phi$ within each penumbral range: $\wp$ is the fraction of the number ($\#$) of allowed $R_\mathrm{m}$ values ($R_\mathrm{m_{allowed}}(\theta)$) over the total number of $R_\mathrm{m}$ values calculated for the $\theta$ set ($R_\mathrm{m_{tot-allowed}}(\theta)$). Equation (\ref{def:pdf}) implies that the domain interval for $R_\mathrm{m_{tot-allowed}}$ is \begin{linenomath*} \begin{equation} \label{def:domain-cdf} \mathcal{D}(R_{\mathrm{m}_{\mathrm{tot-allowed}}}) = \left\{ R_\mathrm{m}(\theta): \left( R_\mathrm{m}\geq R_{\mathrm{Lmin}}, \, R_\mathrm{m} \leq R_{\mathrm{Umax}} \right) \right\}\, . \end{equation} \end{linenomath*} Thus, from (\ref{def:pdf}) and (\ref{def:domain-cdf}) we define the cumulative distribution function for a GCR arriving at the observation point with rigidity $R_\mathrm{m}(\theta)$ as \begin{linenomath*} \begin{equation} \label{def:cdf} P(R_\mathrm{m}(\theta)) = \sum_{\mathcal{R}_\mathrm{m} = R_\mathrm{Lmin}}^{\mathcal{R}_\mathrm{m} = R_\mathrm{m}}\wp(\mathcal{R}_\mathrm{m}(\theta))\, . \end{equation} \end{linenomath*} Notice that equation (\ref{def:cdf}) implies that a GCR with $R_\mathrm{m} > R_\mathrm{Umax}$ has a probability of $1$ of arriving at the observation point through the zenith angle $\theta$, whereas a GCR with $R_\mathrm{m}<R_\mathrm{Lmin}$ has a probability of $0$ of arriving at the same point with the same angle. Conventionally, $R_\mathrm{C}$ is interpreted as a unique value in the penumbra region, which separates only two possibilities for an incoming particle with zenith angle $\theta$: arriving or not arriving.
If a charged particle has $R_\mathrm{m}~>~R_\mathrm{C}$, it is considered to arrive at the geographical point; in the opposite case, $R_\mathrm{m}~<~R_\mathrm{C}$, it does not arrive. With our approach, it is clear that a GCR with rigidity $R_\mathrm{m}$ and zenith angle $\theta$ reaches the geographical point if $P(R_\mathrm{m}(\theta))~=~1$, whereas one with $P(R_\mathrm{m}(\theta))~=~0$ does not. But if $R_\mathrm{m}$ belongs to the penumbra region, it does not meet either of the above criteria, because $P(R_\mathrm{m}(\theta))$ is between 0 and 1. To reduce this value of $P(R_\mathrm{m}(\theta))$ to arriving or not arriving, i.e., 1 or 0, we implement a Metropolis Monte Carlo algorithm as follows: for a $P(R_\mathrm{m}(\theta))$ value different from $1$ and $0$, we draw a uniform random number $0~<~P_\mathrm{temp}~<~1$. Then, \begin{itemize} \item if $P(R_\mathrm{m}(\theta)) \geq P_\mathrm{temp}$, then $P(R_\mathrm{m}(\theta))=1$; otherwise, \item if $P(R_\mathrm{m}(\theta)) < P_\mathrm{temp}$, then $P(R_\mathrm{m}(\theta))=0$. \end{itemize} Therefore, we interpret the rigidity cutoff $R_\mathrm{C}$ as a function of the cumulative distribution function, i.e., \begin{linenomath*} \begin{equation} \label{def:rc-sec-cdf} R_\mathrm{C} = R_\mathrm{C}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, \theta, P(R_\mathrm{m}(\theta)) ) \, . \end{equation} \end{linenomath*} Now, from the dynamic magnetic rigidity definition we perform the same type of calculation, but including the time ($t$) dependence, by evaluating equation (\ref{def:renewrm}) for the conditions at particular moments.
After applying the same procedure, we obtain the dynamic rigidity cutoff of a site as \begin{linenomath*} \begin{equation} \label{def:rc-dyn} R_\mathrm{C} = R_\mathrm{C}(\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}, \theta, t, P(R_\mathrm{m}(\theta), t)) \, , \end{equation} \end{linenomath*} where $P(R_\mathrm{m}(\theta),t)$ represents the cumulative distribution function calculated under the GF conditions at the moment $t$, i.e. \begin{linenomath*} \begin{equation} \label{def:cdf-dyn} P(R_\mathrm{m}(\theta), t) = \sum_{\mathcal{R}_\mathrm{m} = R_\mathrm{Lmin}}^{\mathcal{R}_\mathrm{m} = R_\mathrm{m}}\wp(\mathcal{R}_\mathrm{m}(\theta), t)\,. \end{equation} \end{linenomath*} At this point, we introduce three types of rigidity cutoff, labeled by three different indices, i.e., $R_\mathrm{C} \rightarrow \mathrm{R}_{\mathrm{C}(i)}\,$: \begin{itemize} \item $i=0$: the standard definition of the rigidity cutoff, i.e. equation (\ref{def:basic-rc}); \item $i=1$: the rigidity cutoff under secular conditions --i.e. typical time scales greater than one year-- as a function of the cumulative distribution function, i.e. equations (\ref{def:cdf}) and (\ref{def:rc-sec-cdf}); \item $i=2$: the rigidity cutoff under dynamic conditions, as a function of the cumulative distribution function for the UTC time-stamp, i.e. equations (\ref{def:rc-dyn}) and (\ref{def:cdf-dyn}). \end{itemize} With these two new types of rigidity cutoff, we shall redefine, in the next sections, several physical parameters associated with $\mathrm{R}_{\mathrm{C}(i)}$. \begin{figure}[!ht] \centering \includegraphics[trim = 70mm 25mm 70mm 10mm, clip, angle=-90, width=35pc]{rc-usual-pcf-bga.pdf} \caption{These two plots show the representation of the penumbra region.
For both, we have calculated --according to the method presented in Section \ref{subsubsec:rm-calcu}-- the magnetic rigidity for a cosmic ray arriving at Bucaramanga, Colombia, with a zenith angle of $15^\circ$, an azimuth angle of $300^\circ$ and $t$ corresponding to May $13^{\mathrm{th}}$, $2005$. In the left plot, we show the standard representation of the penumbra region, i.e. equation (\ref{def:basic-rc}), where the violet bars represent the intervals of allowed rigidities. The right plot displays our interpretation of the penumbra region {\bf averaged over all the azimuth angles}, i.e., equations (\ref{def:pdf}) and (\ref{def:cdf}): the probability of a cosmic ray reaching Bucaramanga, Colombia, with $\theta=15^\circ$ on May $13^{\mathrm{th}}$, 2005, versus magnetic rigidity $R_\mathrm{m}$.} \label{rigidity-comparison} \end{figure} In Figure \ref{rigidity-comparison}, we show an example of the refinement of the estimation of the magnetic rigidity at Bucaramanga, Colombia, on May 13$^\mathrm{th}$, 2005. In the left plot, we display the results of the standard method to calculate the rigidity cutoff (equation (\ref{def:basic-rc})). It is clear that even in the ``not allowed'' zones (between $10.2$\,GV and $11.2$\,GV) there are several trajectories that could contribute to the flux at the observation point. This could be particularly important when the background flux needs to be determined at high altitude sites, such as the LAGO site at Mount Chacaltaya at $5250$\,m a.s.l., or even for the determination of the expected flux of secondaries impacting aircraft\,\citep{Pinilla2015,AsoreyEtAl2017A}. In the same Figure \ref{rigidity-comparison} (right plot), we illustrate our new method, displaying $P(R_\mathrm{m}(\theta))$ for different magnetic rigidities, considering a primary with $\theta=15^\circ$.
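The construction of the penumbral CPF and the Metropolis-style acceptance of Section \ref{subsubsec:inter-penum} can be sketched as follows. This is a simplified illustration with hypothetical inputs: the array `allowed_by_phi` is assumed to hold the allowed/forbidden flags of the 24 azimuth bins, on a common rigidity grid, for one zenith angle.

```python
import numpy as np

rng = np.random.default_rng(1)

def penumbra_cpf(allowed_by_phi):
    """Equations (def:pdf)-(def:cdf): average the allowed flags of the
    24 azimuth bins for a fixed zenith angle, and accumulate them into a
    cumulative probability of arrival versus rigidity."""
    allowed = np.asarray(allowed_by_phi, dtype=float)  # shape (24, n_rigidity)
    pdf = allowed.mean(axis=0)       # fraction of azimuth bins allowed, wp(R_m)
    pdf /= pdf.sum()                 # normalise so that P(R_Umax) = 1
    return np.cumsum(pdf)            # P(R_m), rising from 0 to 1

def arrives(P_value):
    """Metropolis-style decision for a rigidity inside the penumbra:
    the particle is kept if P(R_m(theta)) >= a uniform random number."""
    return P_value >= rng.uniform(0.0, 1.0)
```

Rigidities below $R_\mathrm{Lmin}$ map to $P=0$ (always rejected) and those above $R_\mathrm{Umax}$ to $P=1$ (always accepted); intermediate values are resolved stochastically by `arrives`.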
With our method for determining the local directional rigidity cutoff, it is possible to refine the calculation of the flux of particles at any observation point while taking into account GF disturbances, either on long time scales (secular conditions) or during short-term transient phenomena. \subsection{Estimation of the Primary Flux filtered by $\mathrm{R}_{\mathrm{C}(i)}$} \label{sub:step1} The second step in our simulation chain is to estimate the GCR flux arriving at some geographical point ($112$\,km a.s.l., $\mathrm{Lat}, \mathrm{Lon}$) in the area $\mathrm{d}S$, during time $\mathrm{d}t$, in the solid angle $\mathrm{d}\Omega=2\pi \sin(\theta) \mathrm{d}\theta$ --$\theta$ being the zenith angle--, within the energy interval $\mathrm{d}E$ and with a minimum allowed primary momentum of \begin{linenomath*} \begin{equation} \label{def:fil-rc} p_{\mathrm{min}} = \frac{Ze}{c} \mathrm{R}_{\mathrm{C}(i)}\, , \end{equation} \end{linenomath*} with $Z$ the atomic number. Equation (\ref{def:fil-rc}) allows us to filter out primaries with insufficient $R_\mathrm{m}$ to arrive at the point ($\mathrm{Lat}, \mathrm{Lon}, \mathrm{Alt}$). We estimated the GCR flux, $\Phi$, at an altitude of $112$\,km a.s.l., in accordance with the Linsley atmospheric model\,\citep{NOAA1976}, as at this altitude the mass overburden vanishes\,\citep{Corsika}, and we approximate $\Phi$ by a simple power law of the form: \begin{linenomath*} \begin{equation} \label{gcr-spectrum-noindex} \Phi(E,Z,A,\Omega)= \frac{\mathrm{d}N(E)}{\mathrm{d}S\, \mathrm{d}\Omega \, \mathrm{d}t \, \mathrm{d}E} \simeq j_0(Z,A) \left(\frac{E}{E_0}\right)^{\alpha(E,Z,A)}\,, \end{equation} \end{linenomath*} where the spectral index $\alpha(E,Z,A)$ can be considered constant with respect to the energy, i.e. $\alpha(E,Z,A)~\approx~\alpha(Z,A)$, from $10^{11}$\,eV to $10^{15}$\,eV\,\citep{Letessieretal2011}, and $E_0$ has a value of $10^{12}$\,eV.
For each type of GCR considered, $\alpha(Z,A)$ is individualized by its mass number ($A$) and atomic number ($Z$). Finally, $j_0(Z,A)$ is the normalization parameter. Both the spectral indices and $j_0$ have been obtained from the compilation produced by \citet{Wiebel-Sooth-etal1998}. We calculated $\Phi$ using the fact that multiple observations have confirmed that at low energies ($E\lesssim 5.5\times 10^{19}$\,eV) the GCR flux can be considered isotropic\,\citep[see e.g.][]{Abraham2007}; in this case, equation (\ref{gcr-spectrum-noindex}) is integrated to obtain the expected number of primaries for each nucleus $(Z,A)$ as: \begin{linenomath*} \begin{equation} \label{number-by-chem} N(Z,A,\theta) = \left ( \pi \Delta S \Delta t \sin^2(\theta) \right ) j_0(Z,A) \frac{\left(E/E_0\right)^{\alpha(Z,A) + 1}}{\alpha(Z,A) + 1} \Big|_{E_{\min}}^{E_{\max}}\, , \end{equation} \end{linenomath*} with $E_{\max} - E_{\min} \equiv \Delta E$ the energy range, which, in our case, spans from a few GeV ($E_{\min}$) up to $10^6$\,GeV ($E_{\max}$)\,\citep{Asorey2011}. It is clear that the first factor depends only on the zenith angle $\theta$, so we define $\mathcal{N}(\theta) \equiv \pi \Delta S \Delta t \sin^2(\theta)$. Thus, equation (\ref{number-by-chem}) can be expressed as \begin{linenomath*} \begin{equation} \label{number-by-chem-0} N(Z,A,\theta) = \mathcal{N}(\theta) j_0(Z,A) \frac{\left(E/E_0\right)^{\alpha^\prime(Z,A)}}{\alpha^\prime(Z,A)} \Big|_{E_{\min}}^{E_{\max}}\, , \end{equation} \end{linenomath*} where $\alpha^\prime(Z,A)=\alpha(Z,A) + 1$. For the calculation of $N(Z,A,\theta)$ we have used $\Delta S~=~1$\,m$^2$ and $\Delta t=86400$\,s, i.e., at least one day of primary flux per square meter for each primary in the range $1\leq Z \leq 26$, for $\theta$ from $0^\circ$ to $90^\circ$. $N(Z,A,\theta)$ is then filtered via $E_{\min}$ according to equation (\ref{def:fil-rc}).
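The integral in equation (\ref{number-by-chem-0}) can be evaluated directly, as in the following minimal sketch. Here $j_0$ and $\alpha$ for each nucleus are assumed to come from the compilation cited above, with $j_0$ carrying the normalisation at $E_0 = 10^3$\,GeV.

```python
import numpy as np

def n_primaries(j0, alpha, E_min, E_max, theta,
                dS=1.0, dt=86400.0, E0=1.0e3):
    """Expected number of primaries, equation (number-by-chem-0).

    j0    : flux normalisation at E0 (per m^2 sr s GeV)
    alpha : (negative) spectral index of the nucleus (Z, A)
    E_min : lower limit in GeV, set by the rigidity cutoff via (def:fil-rc)
    E_max : upper limit in GeV (10^6 GeV in the text)
    theta : zenith angle in radians
    """
    a = alpha + 1.0                              # alpha'
    geom = np.pi * dS * dt * np.sin(theta) ** 2  # N(theta)
    return geom * j0 * ((E_max / E0) ** a - (E_min / E0) ** a) / a

# raising E_min (i.e. a higher rigidity cutoff) lowers the expected number
n_low = n_primaries(1.0, -2.7, E_min=10.0, E_max=1.0e6, theta=np.pi / 4)
n_high = n_primaries(1.0, -2.7, E_min=20.0, E_max=1.0e6, theta=np.pi / 4)
```

The geomagnetic filtering of the next subsection acts entirely through the $E_{\min}$ argument, one value per primary species, arrival direction and GF condition.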
This means that we can identify three different kinds of primary fluxes, one for each GF condition $R_{\mathrm{C}(i)}$: \begin{itemize} \item $\Phi_{(0)}$ for $R_{\mathrm{C}(0)}$, i.e. $p_\mathrm{\min} c = Ze \times R_{\mathrm{C}(0)}$; \item $\Phi_{(1)}$ for $R_{\mathrm{C}(1)}$, i.e. $p_\mathrm{\min} c = Ze \times R_{\mathrm{C}(1)}$; \item $\Phi_{(2)}$ for $R_{\mathrm{C}(2)}$, i.e. $p_\mathrm{\min} c = Ze \times R_{\mathrm{C}(2)}$. \end{itemize} Thus, the number of particles given by (\ref{number-by-chem-0}) is susceptible to $R_\mathrm{m}$ corrections through the modification of the local rigidity cutoff, and it can be re-written as \begin{linenomath*} \begin{equation} \label{number-by-chem-index} N_{(i)} = \mathcal{N}(\theta) j_0(Z,A) \frac{\left(E/E_0\right)^{\alpha^\prime(Z,A)}}{\alpha^\prime(Z,A)} \Big|_{E_{\min(i)}}^{E_{\max}}\, . \end{equation} \end{linenomath*} Here, the subindex $i$ of any quantity denotes the type of effect included. As the value of $E_{\max}$ we have used $10^6$\,GeV, because at these energies the flux is so low that it cannot affect the distribution of the secondary background at ground level. It is important to stress that, for a given point, $E_{\min(i)}$ depends on the primary $Z$, the arrival direction ($\theta$) and time, i.e., $E_{\min(i)} \equiv E_{\min(i)}(Z,\theta,t)$. When GF corrections are calculated with our method, the new expected primary flux $\Phi_{(1)}$ obtained from equation (\ref{number-by-chem-index}) depends on the arrival direction of each primary\,\citep{Dorman-etal2008}. \subsection{Estimation of the flux of secondary particles at ground level corrected by the Geomagnetic Field} \label{sub:step2} The next step of the simulation chain is the correction for the effect of the geomagnetic field on the flux of secondary particles at ground level.
As mentioned in Section \ref{LAGO}, one of the main objectives of this simulation chain is to calculate the expected flux of secondaries at the detector level at any site of the LAGO network. Once the primary flux $\Phi_{(i)}$ is calculated, the next step is to determine the interactions of those primaries with the atmosphere. This simulation step is performed with the CORSIKA code\,\citep{Corsika} (currently CORSIKA v7.3500, compiled with the following options: QGSJET-II-04\,\citep{Ostapchenko2011}; GHEISHA-2002; EGS4; curved and external atmosphere; and volumetric detector). The local geomagnetic field values $B_x$ and $B_z$, needed by CORSIKA to account for GF effects during particle propagation in the atmosphere, are obtained from the IGRF-11 model. Secondary particles are tracked down to the lowest energy threshold allowed by CORSIKA for each type (currently, $E_s\geq 5\times 10^{-2}$\,GeV for $\mu^\pm$ and hadrons (excluding $\pi^0$), and $E_s \geq 5\times10^{-5}$\,GeV for $e^\pm, \pi^0$ and $\gamma$) to get the most comprehensive distribution of secondaries at each site. In this work, the atmosphere at each site was simulated by using profiles of the applicable MODTRAN atmospheric model\,\citep{Kneizys1996} provided with CORSIKA: for the Bucaramanga site we use a tropical profile, and for San Carlos de Bariloche a midlatitude summer profile. Currently, the LAGO Collaboration is developing and validating a method to obtain the local atmospheric profiles for each LAGO site during different weather conditions based on the Global Data Assimilation System (GDAS)\,\citep{NOAA2009}, as differences have been observed between generic MODTRAN models and balloon measurements at the planned LAGO site in Antarctica\,\citep{DassoEtal2015}. A large number of primary showers needs to be simulated (typical values are of several billion showers for 24\,h of flux at each site). A set of local clusters has been deployed and tuned for this particular calculation.
These clusters are maintained at several institutions of the LAGO Collaboration. This simulation chain has also been integrated into a dedicated Virtual Organization, {\textit{lagoproject}}, as part of the European Grid Infrastructure (EGI, http://www.egi.eu) activities. The Grid implementation of CORSIKA was deployed in two ``flavors'', which run either using the GridWay Metascheduler (http://www.gridway.org/doku.php)\,\citep{HuedoMonteroLlorente2001} or through a Catania Science Gateway interface\,\citep{BarberaFargettaRotondo2011}. Massive calculations can be executed with the former, via the Montera\,\citep{RodriguezMayoLlorente2013}, GWpilot\,\citep{RubioEtal2015} or GWcloud\,\citep{RubioHuedoMayo2015} frameworks. In the Science Gateway approach, a user can seamlessly run a code on different infrastructures by accessing a unique web-based entry point with an identity provider. Users only have to upload the input data or invoke a PID (a persistent identifier or reference to a digital set of files) and click on the run icon. The final result is retrieved whenever the job has ended. The underlying infrastructure is completely transparent to the user, and the system decides on which sites and computing platforms the code will be compiled and run\,\citep{RodriguezEtal2015, AsoreyEtal2016}. To deal with the computational complexity introduced by the refinement described in the previous subsection, we built a library for each site containing the simulated particles, starting from a very low primary momentum threshold of $\sim (350 \times Z)$\,MeV$/c$ (i.e. $1$\,GeV of total energy for protons). Each secondary impinging on the detector is tagged with information from its parent primary particle, which allows the calculation of its magnetic rigidity $R_\mathrm{m}$.
Then, because each secondary at ground level comes from some primary impinging on the atmosphere, from the $R_{\mathrm{C}(i)}$ obtained for each condition according to equations (\ref{def:fil-rc}) and (\ref{number-by-chem-index}), we are able to determine whether each secondary would reach the detector under that particular GF condition. \section{Results for the LAGO sites of Bucaramanga, Colombia and San Carlos de Bariloche, Argentina} \label{results} \subsection{Magnetic Rigidity} As mentioned before, we applied our simulation chain to the locations of two LAGO sites: Bucaramanga, Colombia (BGA) and San Carlos de Bariloche, Argentina (BRC). Results of the standard rigidity cutoff calculation, $R_{\mathrm{C}(0)}$, are displayed in Figure \ref{figRcSecBGABRC} for each site. As expected, there is a strong dependence of $R_{\mathrm{C}(0)}$ on the arrival direction at both cities, which induces a noticeable decrease in the number of GCRs producing secondary particles at ground level. For Bucaramanga, it is interesting to mention the odd behavior of the rigidity cutoff for large azimuth ($250^{\circ} \lesssim \phi\lesssim 300^{\circ}$) and zenith ($45^{\circ} \lesssim \theta \lesssim 90^{\circ}$) angles. We have backtracked several incoming trajectories and discovered that this anomaly in the rigidity cutoff seems to be associated with the deflection of particles with low $R_\mathrm{m}$, whose trajectories cross zones with high gradients of the GF\,\citep{Suarez2015a}. \begin{figure}[h] \centering \includegraphics[trim = 70mm 25mm 70mm 10mm, clip, angle=-90, width=35pc]{rc-sec-ze-az_bga-brc.pdf} \caption{Calculation of $R_{\mathrm{C}(0)}$ (GF secular conditions, equation (\ref{def:basic-rc})) at the edge of the atmosphere ($112$\,km a.s.l.), as a function of the azimuth angle $\phi$ and for different zenith angles $\theta$. This calculation is presented for two LAGO sites: Bucaramanga, Colombia (left) and San Carlos de Bariloche, Argentina (right).
The strong dependence of $R_{\mathrm{C}(0)}$ on the arrival direction at both cities is evident.} \label{figRcSecBGABRC} \end{figure} The cumulative probability distributions (\ref{def:cdf}), as functions of the magnetic rigidity for various zenith angles, are displayed in Figure \ref{asymptotic-rc-stat}. There, lower magnetic rigidities are associated with particle trajectories having small zenith angles. Notice that the two plots are qualitatively different, which is probably evidence of the complexity of the GF at the two very distinct latitudes. \begin{figure}[h] \centering \includegraphics[trim = 70mm 25mm 70mm 10mm, clip, angle=-90, width=35pc]{asymp-rc-stat-may13-2005_bga-brc.pdf} \caption{Magnetic rigidity calculated in the penumbra region as a cumulative distribution function, according to equation (\ref{def:cdf}), for different arrival zenith angles at the sites of Bucaramanga (left) and San Carlos de Bariloche (right).} \label{asymptotic-rc-stat} \end{figure} \subsection{Primary Flux Corrected by GF} As explained in Section \ref{subsubsec:inter-penum}, once the penumbral CPFs are obtained, it is possible to refine the calculation of the expected primary flux $\Phi_{(i)}$ and the corresponding flux of background secondaries $\Xi_{(i)}$ at ground level. In Figure \ref{en-sec-enerpri-con}, the GCR fluxes $\Phi_{(0)}$ and $\Phi_{(1)}$ are displayed for both LAGO sites, BGA and BRC. Only those primaries producing secondaries at ground level are shown. In both cases, the influence of the GF corrections is only significant at lower energies, $\sim 15$\,GeV. As expected, instead of a sharp cutoff as in the standard case, a smooth cutoff is observed, corresponding to the different rigidity cutoffs in the different regimes, and the flux of primaries is affected in accordance with Figure \ref{asymptotic-rc-stat}, i.e., the effect is bigger for BGA (rigidities up to $\sim50$\,GV) than for BRC (rigidities up to $\sim30$\,GV).
\begin{figure}[h] \centering \includegraphics[trim = 70mm 25mm 70mm 10mm, clip, angle=-90, width=35pc]{pri-wsec-corr-espect_bga-brc.pdf} \caption{ Expected GCR flux at 112 km a.s.l. producing secondaries at ground level with only the standard corrections ($\Phi_{(0)}$, blue crosses) and with the proposed method in secular conditions ($\Phi_{(1)}$, green squares), for Bucaramanga (left) and San Carlos de Bariloche (right). The influence of the GF corrections is evident at the low energy limits.} \label{en-sec-enerpri-con} \end{figure} The primary flux interacts with the atmosphere and produces the secondary flux $\Xi_{(i)}$ at ground level. These interactions were simulated with CORSIKA, obtaining a very comprehensive distribution of particles at the detector level. To estimate the response of the WCD to each type of secondary particle, this flux is analyzed using a detailed Geant4 simulation of the LAGO detector, which, together with the first preliminary results, was presented by \citet{Otiniano2015}. \subsection{Secondary Flux Corrected by GF} Figure \ref{secondary-flux} displays the simulated spectra of secondaries $\Xi_{(1)}$ (under secular conditions) at both cities. Noticeable peaks in the distributions of secondary neutrons and protons are evident at both sites. At these low altitudes, a muon hump is also visible in the distribution spectra, and this is typically used as a calibration point for WCDs\,\citep[see e.g.][]{AEtchegoyen-etal2005, Asorey2013}.
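Obtaining per-species spectra such as those of Figure \ref{secondary-flux} amounts to histogramming the simulated secondaries by particle type in logarithmic momentum bins. A schematic sketch with toy records (a real analysis would read the full CORSIKA particle output; names and binning are illustrative):

```python
from collections import defaultdict
import math

def spectra_by_type(secondaries, bins_per_decade=4):
    """Count secondaries per particle type in logarithmic momentum bins.
    `secondaries` is an iterable of (particle_type, momentum_GeV_c) pairs."""
    hist = defaultdict(lambda: defaultdict(int))
    for ptype, p in secondaries:
        k = math.floor(bins_per_decade * math.log10(p))  # log-momentum bin index
        hist[ptype][k] += 1
    return hist

# toy records standing in for the CORSIKA output at detector level
toy = [("muon", 1.2), ("muon", 1.5), ("neutron", 0.3), ("photon", 0.01)]
h = spectra_by_type(toy)
```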
\begin{figure}[h] \centering \includegraphics[trim = 70mm 25mm 70mm 10mm, clip, angle=-90, width=35pc]{sec-espect-corr_bga-brc.pdf} \caption{The expected secondary spectra at detector level $\Xi_{(1)}$ are shown for Bucaramanga (BGA, left) and San Carlos de Bariloche (BRC, right), for different types of secondaries: photons (purple), electrons and positrons (blue), muons (orange), pions (red), neutrons (pink), protons (gold), and the total spectrum of secondaries (green).} \label{secondary-flux} \end{figure} By defining the \textit{flux percentage difference}, $\Delta \Xi_{i-j}$, it is possible to get a better understanding of the energy range where the geomagnetic corrections are more important, especially when dynamic variations are considered. Thus \begin{linenomath*} \begin{equation} \label{FluxDiference} \mathrm{Difference}_{\%} \equiv \Delta \Xi_{i-j} = 100 \left ( \frac{\Xi_{(i)} -\Xi_{(j)}}{\Xi_{(i)}} \right ) \%\, , \end{equation} \end{linenomath*} where $i,j$ are the indices corresponding to the configurations of the GF introduced in section \ref{subsubsec:inter-penum}. To evaluate the impact of this new method, the differences between cases $(0)$ (standard calculation) and $(1)$ (new method), as a function of the secondary particle momentum, are illustrated in Figure \ref{en-sec-espec-rigi-ground}. The presence of a peak at $\sim500$\,MeV/c is evident for both sites in the $\Delta \Xi_{0-1}$ distribution, located between $100$\,MeV/c and $\sim 3$\,GeV/c. For lower energies, the difference is somewhat larger at BGA than at BRC, as expected from Figure \ref{figRcSecBGABRC}. When we explored the particle composition of the secondaries at these momenta in more detail, we found that these differences are dominated by secondary neutrons\,\citep{Suarez2015a}, with a reduction of the order of $35\%$.
This result of our simulation agrees with the fact that variations in the flux registered by Neutron Monitors are a proxy of the changing conditions in the near-Earth space environment. For energies higher than $\sim10$\,GeV, corrections are not important. \begin{figure}[h] \centering \includegraphics[trim = 70mm 25mm 70mm 10mm, clip, angle=-90, width=35pc]{sec-espec-rigi-ground_bga-brc.pdf} \caption{Percentage flux difference of secondaries at the ground between the flux calculated for the standard definition of the rigidity cutoff (equation (3)) and for the rigidity cutoff under secular conditions (equation (8)), i.e. $\Delta \Xi_{0-1}$: BGA (left) and BRC (right). It clearly peaks at $\sim 500$\,MeV/c and is framed between $100$\,MeV/c and $3$\,GeV/c.} \label{en-sec-espec-rigi-ground} \end{figure} Finally, this tool allows the study of the impact of dynamic conditions of the GF on the distribution of secondary particles by comparing secular conditions of the GF with the evolution of the GF states as a function of time, $\Delta \Xi_{1-2}$. This is possible because the calculations performed with this method focus on the background GCR flux: we did not consider the solar particle event during the geomagnetic storm of May 13-17, 2005, but only the influence of the state of the GF, at each UTC time, on the background GCR flux. This is shown in Figure \ref{en-sec-filt-dst-all-a}, where the time evolution of $\Delta \Xi_{1-2}$ is displayed at both sites for May of 2005. We have selected this particular month because the strong geomagnetic storm of May 13-17, 2005, generated intense disturbances in the GF\,\citep[see e.g.][] {Adekoya2012, Bisi-etal2010, Galav-etal2014}. Three particular cases are shown: the total flux of secondary particles, $\Delta \Xi_{1-2}$, the muon flux $\Delta \Xi_{1-2}^\mu$ and the neutron flux $\Delta \Xi_{1-2}^n$.
It is clear that, besides the time coincidence of the flux variations, the effect is more significant at Bucaramanga than at Bariloche, and that the neutron flux at ground level is the component most affected by the GF activity, which reinforces the known sensitivity of this particular constituent for the observation of geomagnetic disturbances\,\citep[see e.g.][]{Belov2005}. As a reference, in the background of each sub-plot of figure \ref{en-sec-filt-dst-all-a} (gray line), the flux of neutrons at ground level registered by two Neutron Monitors (NM) with rigidities similar to those of both sites is shown; i.e. $R_{\mathrm{C}(0)}$ of $10.75$\,GV for the NM of ESOI and $8.28$\,GV for the NM of Mexico. For comparison, the $R_{\mathrm{C}(0)}$ for BGA is $11$\,GV and for BRC is $8.1$\,GV. Both NMs show a decrease between $300$ and $400$ in elapsed UTC time, which coincides with our simulation results. Because we have simulated the effect of the GF on the GCR flux, i.e., we do not simulate solar particle events, it is possible, with our approach, to estimate the contribution of the GF topology to a Forbush decrease event. \newpage \begin{figure}[htb!] \centering \includegraphics[trim = 20mm 55mm 20mm 55mm, clip, width=35pc]{sec-fill-dst-family_bga-brc.pdf} \caption{Time evolution of the expected flux of secondaries during dynamic conditions of the GF, $\Delta \Xi_{1-2}$, for May of 2005 at Bucaramanga (left) and at San Carlos de Bariloche (right). In the background, in gray, the neutron flux registered by two Neutron Monitors with similar $R_{\mathrm{C}(0)}$ is shown: for BGA, ESOI with $R_{\mathrm{C}(0)}=10.75$\,GV (left); for BRC, Mexico with $R_{\mathrm{C}(0)}=8.28$\,GV (right). The data for the neutron monitors were taken from http://www.nmdb.eu. The first row corresponds to the total flux of particles at ground level, while in the second row we illustrate the evolution of the muon flux $\Delta \Xi_{1-2}^\mu$. The third one displays the neutron flux $\Delta \Xi_{1-2}^n$.
Note the scale difference on the y-axis of each plot.} \label{en-sec-filt-dst-all-a} \end{figure} \newpage \section{Final remarks} \label{conclusions} In this paper, we have presented the LAGO space weather chain of simulations devised to obtain precise calculations of the secondary particle flux at ground level that can be used at every geographical position. It takes into account geomagnetic corrections for both secular (long-term phenomena with typical time scales longer than a year) and transient events (with typical time scales of hours to days). We shall consider all calculations performed without this new method as a first approximation to the more precise determination of the real flux of secondary particles, calculated when the effects of the geomagnetic field are fully considered. Variations of the flux of secondaries at two LAGO sites with different latitude/longitude (Bucaramanga, $7^\circ\,8^{'}$N, $73^\circ\, 0^{'}$W, and San Carlos de Bariloche, $41^\circ\,9^{'}$S, $71^\circ\, 18^{'}$W) were calculated for both secular geomagnetic conditions and under transient events, during the geomagnetically active month of May 2005. Our simulations show that the secondary flux is sensitive to the latitude and that the secondary neutrons at ground level are the flux component most affected by variations of the geomagnetic field during space weather phenomena. While our calculation relies on the isotropy of the GCR flux, it is important to note that during certain Forbush decreases (FDs), small anisotropies in the flux of primaries could be induced by the configuration of the incoming magnetic cloud and the disturbances of the geomagnetic field during these particular events. Indeed, a $\sim 1\%$ anisotropy in the flux of secondary muons was observed at ground level during the Forbush decrease of December 13, 2006\,\citep{Kane2006, Fushishita-etal2010}.
However, since our WCDs are not sensitive to the arrival direction of secondary particles, we will not be able to detect such a small effect as long as the total flux of secondaries remains constant. Several dedicated clusters and a Grid-based implementation have been deployed for these calculations. A dedicated Virtual Organization, {\textit{lagoproject}}, part of the European Grid Infrastructure (EGI, http://www.egi.eu) activities, has been created, and available Grid tools have been adapted and implemented to run CORSIKA in a way that is completely transparent to the user. The standard definition of the penumbra region for magnetic rigidities generates a complex structure of particle trajectories: permitted, prohibited and quasi-trapped orbits, which does not allow one to derive all values of $R_\mathrm{m}$\,\citep{SmartEtal2006}. Currently, calculations of the rigidity cutoff tend not to consider the effects involved in the penumbra, and always use a single effective value (equation (\ref{def:basic-rc})) to account for and characterize all the complexity of the effects involved\,\citep{SmartShea2009}. In this paper the concept of the rigidity cutoff $R_\mathrm{C}$ has been generalized as a time-dependent function of the cumulative probability distribution (see equation (\ref{def:cdf-dyn})). With this refinement, in the penumbra region, we obtain a non-vanishing probability for an incoming particle (with zenith angle $\theta$) to contribute to the flux of primaries at the observation point. Combining the data measured at different locations of the LAGO detection network with those obtained from the detailed simulation performed by this space weather chain, we are now capable of providing a better understanding of the temporal evolution and of the small- and large-scale disturbances of space weather phenomena.
\acknowledgments The authors thank the anonymous referees and the AGU Space Weather Editorial Office for their enlightening suggestions and criticism, which have helped to make this work clearer and more focused. We appreciate the support of Vicerrector\'{\i}a Investigaci\'on y Extensi\'on Universidad Industrial de Santander for its permanent sponsorship, and acknowledge the financial support of Departamento Administrativo de Ciencia, Tecnolog\'ia e Innovaci\'on of Colombia (COLCIENCIAS) under contracts FP44842-051-2015 and FP44842-661-2015. The authors acknowledge the support of COLCIENCIAS, CONICET and MINCyT for funding the bilateral cooperation Argentina-Colombia, grant AR:CO-15/02 CO:729-2015. HA and MSD acknowledge the support from Inn\'ovate Per\'u, grant 398-PNICP-PIBA-2014. Significant parts of the calculations needed for this work were performed with the computational support of the Centros de Supercomputaci\'on y C\'alculo Cient\'ifico de la Universidad Industrial de Santander. We also acknowledge the NMDB database (www.nmdb.eu), funded under the European Union's FP7 programme (contract no. 213007), for providing the data. The neutron monitor of the Emilio Segre Observatory is supported by the collaboration ICRC-ESO (Tel Aviv University and Israel Space Agency, Israel) and University ``Roma Tre'' with IFSI-CNR (Italy). Neutron monitor data of Mexico City are provided by the Cosmic Ray Group of the Geophysical Institute at the Universidad Aut\'onoma de M\'exico (UNAM). The authors are grateful to the LAGO and Pierre Auger Observatory Collaboration members (http://lagoproject.org/collab.html) for their continuous engagement and support. \newpage
\section{Introduction} Ultrastrong coupling between artificial atoms and electromagnetic cavity modes is achieved when the coupling strength $\lambda$ becomes comparable to, or even exceeds, the resonator frequency $\omega$. This regime, which is nowadays experimentally addressed in circuit QED~\cite{bourassa,gross,mooij,lupascu,semba}, is of interest both for the development of quantum technologies and for fundamental physics. Indeed, strong matter-field coupling is a prerequisite for the implementation of fast quantum protocols. On the other hand, in the ultrastrong coupling regime strongly correlated matter-light states emerge~\cite{lupascu,semba}. A prominent phenomenon in ultrastrong matter-field coupling is the dynamical Casimir effect (DCE), that is, the generation of photons from the vacuum due to time-dependent boundary conditions or, more generally, as a consequence of the \emph{nonadiabatic} change of some parameters of a system \cite{moore,dodonov,noriRMP} (this latter case is usually referred to as parametric DCE \cite{dodonov}). The DCE has been discussed in several contexts, for instance in Bose-Einstein condensates~\cite{jaskula}, in exciton-polariton condensates~\cite{koghee}, for multipartite entanglement generation in cavity networks~\cite{solano2014}, in relation to several forms of quantum correlations~\cite{johansson2015,adesso2015,savasta2015}, in the generation of exotic field states~\cite{exotic}, for quantum communication protocols~\cite{casimirqip}, and in quantum thermodynamics~\cite{frigo}; the DCE can also be amplified via optimal control techniques~\cite{DCEoptimal}. Moreover, pioneering experimental demonstrations of the DCE have been reported in superconducting circuit QED~\cite{norinature,lahteenmaki}. In contrast with standard QED, here we consider a single (cavity) mode rather than an infinite number of modes. Moreover, the quantization volume (of the cavity) is fixed and the limit of infinite volume is not taken at the end.
Finally, the interaction is not switched on and off adiabatically; rather, we focus on \emph{transient} phenomena associated with the nonadiabatic switching of the matter-field coupling. That is, we are considering \emph{finite-time} QED, a problem barely considered in the literature~\cite{nomoto}. The quantum Rabi model~\cite{micromaser,QRM}, which describes the dipolar light-matter coupling, with the addition of a switchable coupling, is the ideal testing ground to explore finite-time QED in the ultrastrong coupling regime. In this paper, we examine applications of the Picard and Magnus expansions to both the semiclassical and the quantum Rabi model, with a time-dependent coupling. While the dynamics of these models can be addressed numerically via a Runge-Kutta integration of the equations of motion, perturbative methods can shed light on the physical mechanisms and elementary processes which govern the dynamics. We first investigate the Picard series, which allows an intuitive diagrammatic representation. Such a series, truncated to low orders, provides a rather accurate description only for short interaction times (and not too strong coupling). In particular, we show that regular oscillations in the mean number of photons can be ascribed to the coherent generation (DCE) and destruction (anti-DCE~\cite{antiDCE,motazedifard}) of photons. Such oscillations take place at a frequency $2\omega$ that can be predicted by the first-order Picard expansion, and are a clear dynamical ``smoking gun'' of the ultrastrong coupling regime. We then examine the Magnus expansion, and show that through concatenation it can be used as an efficient numerical integrator. In particular, we study the Fourier spectrum of the motion for the semiclassical Rabi model and show that it has a characteristic structure, with a single peak at the Rabi frequency $\Omega$ and doublets at frequencies $2n\omega\pm\Omega$, with $n=1,2,3,...$.
The doublets, which are a feature beyond the RWA, are explained on the basis of the Picard series. Finally, we discuss analogies between the semiclassical Rabi model and the Mathieu equation. \section{The finite-time Rabi model} We consider both the semiclassical and the quantum finite-time Rabi models, describing the interaction of a two-level atom (qubit) with the electromagnetic field~\cite{micromaser}. In both cases, the Hamiltonian reads \begin{equation} H(t)=H_0+H_I(t), \end{equation} where $H_0$ refers to the free evolution of the qubit and the field, and $H_I(t)$ describes a time-modulated qubit-field coupling, which extends over a finite time $0\le t\le \tau$. In the semiclassical Rabi model, which describes, within the dipole approximation, the interaction between the qubit and a classical monochromatic field (hereafter we set the reduced Planck constant $\hbar=1$), \begin{equation} \begin{array}{c} {\displaystyle H_0=-\frac{1}{2}\,\omega_q\sigma_z, } \\ \\ {\displaystyle H_I(t)=f(t)\,[2\Omega \cos(\omega t+\phi)]\,\sigma_x, } \end{array} \end{equation} where $\omega_q$ and $\omega$ are the qubit and field frequency, respectively, $\Omega$ is the (Rabi) frequency of the field-induced oscillations between the two levels $|g\rangle$ and $|e\rangle$, the Pauli matrices $\sigma_k$ ($k=x,y,z$) are written in the $\{|g\rangle,|e\rangle\}$ basis, and the function $f(t)$ modulates the qubit-field coupling. Hereafter, for simplicity's sake we shall assume the phase $\phi= 0$, the resonant case $\omega_q=\omega$, and a sudden switch on/off of the coupling: $f(t)=1$ for $0\le t\le \tau$, $f(t)=0$ otherwise.
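The sudden-switch dynamics defined above can be integrated directly. A minimal fourth-order Runge-Kutta sketch of the resonant semiclassical model (an illustration under the sign convention $\sigma_z|g\rangle=+|g\rangle$, not the code used for the figures):

```python
import math

def rk4_rabi(omega, Omega, tmax, steps):
    """Integrate i d|psi>/dt = H(t)|psi> for the resonant semiclassical Rabi
    model, H = -(omega/2) sigma_z + 2*Omega*cos(omega t) sigma_x, starting
    from |g>.  Returns (C_g, C_e) at t = tmax.  Plain RK4 sketch."""
    def deriv(t, cg, ce):
        d = 2.0 * Omega * math.cos(omega * t)      # time-dependent drive
        return (-1j * (-0.5 * omega * cg + d * ce),
                -1j * (d * cg + 0.5 * omega * ce))
    h = tmax / steps
    cg, ce = 1.0 + 0j, 0.0 + 0j                    # initial state |g>
    t = 0.0
    for _ in range(steps):
        k1g, k1e = deriv(t, cg, ce)
        k2g, k2e = deriv(t + h/2, cg + h/2*k1g, ce + h/2*k1e)
        k3g, k3e = deriv(t + h/2, cg + h/2*k2g, ce + h/2*k2e)
        k4g, k4e = deriv(t + h, cg + h*k3g, ce + h*k3e)
        cg += h/6 * (k1g + 2*k2g + 2*k3g + k4g)
        ce += h/6 * (k1e + 2*k2e + 2*k3e + k4e)
        t += h
    return cg, ce
```

For $\Omega/\omega\ll 1$ the result follows the RWA prediction $|C_e|^2=\sin^2(\Omega t)$, with small superposed oscillations at frequency $2\omega$ coming from the counter-rotating terms.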
In the case of the quantum Rabi model, which describes the interaction between the qubit and a single mode of the quantized field, \begin{equation} \begin{array}{c} {\displaystyle H_0=-\frac{1}{2}\,\omega_{q} \sigma_z + \omega\left(a^\dagger a +\frac{1}{2}\right), } \\ \\ {\displaystyle H_I(t)=f(t)\,[\lambda \,\sigma_+\,(a^\dagger+a) +\lambda^\star \sigma_-\,(a^\dagger+a)], } \end{array} \label{eq:noRWAparam} \end{equation} where $\sigma_\pm = \frac{1}{2}\,(\sigma_x\mp i \sigma_y)$ are the raising and lowering operators for the qubit (so that $\sigma_+=|e\rangle\langle g|$ and $\sigma_-=|g\rangle\langle e|$): $\sigma_+ |g\rangle = |e\rangle$, $\sigma_+ |e\rangle = 0$, $\sigma_- |g\rangle = 0$, $\sigma_- |e\rangle = |g\rangle$. The operators $a^\dagger$ and $a$ for the field create and annihilate a photon: $a^\dagger |n\rangle=\sqrt{n+1}|n+1\rangle$, $a |n\rangle=\sqrt{n}|n-1\rangle$, $|n\rangle$ being the Fock state with $n$ photons. For the sake of simplicity, from now on we consider a real coupling strength, $\lambda\in\mathbb{R}$, the resonant case $\omega_q=\omega$, and a time-dependent modulation set as above for the semiclassical model. The rotating wave approximation (valid for $\lambda\to0$) is obtained by neglecting the term $\sigma_+ a^\dagger$, which simultaneously excites the qubit and creates a photon, and the term $\sigma_- a$, which de-excites the qubit and annihilates a photon. In this limit, the Hamiltonian (\ref{eq:noRWAparam}) reduces to the Jaynes-Cummings Hamiltonian \cite{micromaser} with a time-dependent modulation. In the RWA the swapping time needed to transfer an excitation from the qubit to the field or vice versa ($|e\rangle |0\rangle\leftrightarrow |g\rangle |1\rangle$) is $\tau_s=\pi/2\lambda$, {and no DCE is possible since the total number of excitations in the system is conserved}. Within the RWA, the (Rabi) frequency of the oscillations between the states $|e\rangle |n-1\rangle$ and $|g\rangle |n\rangle$ is $\Omega_n=\lambda\sqrt{n}$.
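The quantum model can be made concrete by truncating the Fock space. A sketch that builds the Hamiltonian matrix of equation (\ref{eq:noRWAparam}) with $f(t)=1$, real $\lambda$ and $\omega_q=\omega$, in the basis $\{|l,n\rangle\}$ (the indexing scheme and sign convention $\sigma_z|g\rangle=+|g\rangle$ are illustrative choices):

```python
import math

def quantum_rabi_hamiltonian(lam, omega, nmax):
    """H = -(omega/2) sigma_z + omega(a†a + 1/2) + lam*sigma_x*(a† + a),
    truncated to Fock states n < nmax; couplings out of the truncated
    space are dropped.  Returns (H, idx) with idx[(l, n)] the basis index."""
    idx = {(l, n): 2 * n + (0 if l == "g" else 1)
           for n in range(nmax) for l in ("g", "e")}
    dim = 2 * nmax
    H = [[0.0] * dim for _ in range(dim)]
    for n in range(nmax):
        H[idx[("g", n)]][idx[("g", n)]] = -0.5 * omega + omega * (n + 0.5)
        H[idx[("e", n)]][idx[("e", n)]] = +0.5 * omega + omega * (n + 0.5)
        if n + 1 < nmax:
            v = lam * math.sqrt(n + 1)   # sigma_+ a†: |g,n> -> |e,n+1> (counter-rotating)
            H[idx[("e", n + 1)]][idx[("g", n)]] = v
            H[idx[("g", n)]][idx[("e", n + 1)]] = v
        if n >= 1:
            v = lam * math.sqrt(n)       # sigma_+ a: |g,n> -> |e,n-1> (kept in the RWA)
            H[idx[("e", n - 1)]][idx[("g", n)]] = v
            H[idx[("g", n)]][idx[("e", n - 1)]] = v
    return H, idx
```

The counter-rotating element connects $|g,0\rangle$ directly to $|e,1\rangle$, which is the matrix-level origin of the photon generation from the vacuum discussed below in the text.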
In the interaction picture, the Hamiltonian reads $\tilde{H}_I(t)=U^\dagger(t)H_I(t) U(t)$, where $U(t)=e^{-iH_0t}$. From now on we shall omit tildes and always refer to the interaction picture. For the semiclassical Rabi model, \begin{equation} H_I(t)=\Omega f(t)\left[(1+e^{-2i\omega t}) \sigma_- + (1+e^{2i\omega t}) \sigma_+\right], \label{eq:HIsemi} \end{equation} while in the quantum Rabi model \begin{equation} H_I(t)=\lambda f(t)\,[\sigma_- ae^{-2i\omega t}+\sigma_+a+ \sigma_- a^\dagger +\sigma_+ a^\dagger e^{2i\omega t}]. \label{eq:HIquantum} \end{equation} In both cases, the RWA is recovered if we neglect the counter-rotating terms at frequency $2\omega$. \section{Picard series} The solution to the time-dependent Schr\"odinger equation $i\,|\dot{\psi}(t)\rangle=H_I(t)\,|\psi(t)\rangle$ can be approximated by the Picard iterative process. We start by writing the associated integral equation \begin{equation} |\psi(t)\rangle=|\psi(0)\rangle-i\int_0^t H_I(t') \,|\psi(t')\rangle \,dt'. \end{equation} Iterating the process we obtain \begin{eqnarray} \begin{array}{c} {\displaystyle |\psi(t)\rangle=|\psi(0)\rangle-i\int_0^{t} H_I(t') \left[|\psi(0)\rangle\right. } \\ {\displaystyle \left. -i\int_0^{t'} H_I(t'')\,|\psi(t'')\rangle \,dt''\right]dt', } \end{array} \end{eqnarray} and so on. Hence we can write \begin{equation} |\psi(t)\rangle=\sum_{n=0}^\infty |\psi^{(n)}(t)\rangle, \end{equation} with the zeroth-order approximation $|\psi^{(0)}(t)\rangle=|\psi(0)\rangle$, the first-order correction \begin{equation} |\psi^{(1)}(t)\rangle=-i\int_0^t H_I(t') \,|\psi^{(0)}(t')\rangle \,dt', \label{eq:firstorder} \end{equation} and so on, with the $n$-th-order correction given by \begin{equation} |\psi^{(n)}(t)\rangle=-i\int_0^t H_I(t') \,|\psi^{(n-1)}(t')\rangle \,dt'. \label{eq:nthorder} \end{equation} \subsection{Semiclassical Rabi model} We expand the state vector in the $\{|g\rangle,|e\rangle\}$ basis for the qubit: $|\psi(t)\rangle=C_g(t)|g\rangle+C_e(t)|e\rangle$.
For concreteness, we consider the initial state $|\psi(0)\rangle=|g\rangle$ (however, the considerations of this subsection would not change for a different initial state). It is instructive to consider first the RWA limit, in which we easily obtain the exact solution to the Schr\"odinger equation, $|\psi(t)\rangle = \cos(\Omega t)|g\rangle-i \sin(\Omega t)|e\rangle$, corresponding to Rabi oscillations between the two states $|g\rangle$ and $|e\rangle$. In this case, the $n$-th order Picard expansion of $|\psi(t)\rangle$ coincides with the result obtained from the $n$-th order Taylor expansion of the exact coefficients $C_g(t)=\cos(\Omega t)$ and $C_e(t)=-i \sin(\Omega t)$: \begin{eqnarray} \begin{array}{c} {\displaystyle |\psi^{(0)}(t)\rangle=|g\rangle, \; |\psi^{(1)}(t)\rangle=-i (\Omega t)|e\rangle, } \\ {\displaystyle |\psi^{(2)}(t)\rangle=-\frac{(\Omega t)^2}{2!}|g\rangle, \; |\psi^{(3)}(t)\rangle=i \frac{(\Omega t)^3}{3!}|e\rangle,...\,. } \end{array} \end{eqnarray} Including the counter-rotating terms, we obtain \begin{eqnarray} \begin{array}{c} {\displaystyle |\psi^{(0)}(t)\rangle=|g\rangle, } \\ {\displaystyle |\psi^{(1)}(t)\rangle=\left[\frac{\Omega}{2 \omega}\left (1 - e^{2 i \omega t} \right)- i \, (\Omega \, t) \right] | e \rangle, } \\ {\displaystyle |\psi^{(2)}(t)\rangle=\left[ \frac{\Omega^{2}}{4 \omega^{2}}\left (-1 + e^{2i \omega t }\right)\right. } \\ {\displaystyle \left. - i \frac{ \Omega}{2 \omega} \, (\Omega t) e^ {- 2 i \omega t } - \frac{(\Omega t)^{2}}{2} \right] | g \rangle, } \\ {\displaystyle |\psi^{(3)}(t)\rangle= \left[ \frac{\Omega^{3}}{8 \omega^{3}} \left( \frac{5}{2} - e^{ - 2 i \omega t } - e^{2 i \omega t } - \frac{1}{2} \, e^{4 i \omega t } \right)\right. } \\ {\displaystyle +\frac{\Omega^{2}}{4 \omega^{2}} \,(\Omega t) \left(1 - e^{ - 2 i \omega t } + e^{ 2 i \omega t }\right) } \\ {\displaystyle \left. 
-\frac{\Omega}{4 \omega} \,(\Omega t)^{2} \, \left( 1 - e^{ 2 i \omega t }\right) +i \frac{(\Omega t)^{3}}{6}\right] | e \rangle,...\,. } \end{array} \end{eqnarray} From these expressions, it is clear that besides the RWA terms (Taylor expansions of $\cos(\Omega t)$ and $\sin(\Omega t)$), we have terms proportional to $e^{2in\omega t}$, multiplied by powers of $\Omega t$. We will discuss in Sec.~\ref{sec:Maghnussemi} the signatures of these terms in the frequency domain. An example of the comparison between the exact (numerical) solution of the semiclassical Rabi model and the truncated Picard series is shown in Fig.~\ref{fig:noRWAsemiclassical}. It is clear that the Picard expansion is suitable only for short times. Indeed, with the expansion up to thirty-third order we can faithfully reproduce the exact dynamics only up to less than two Rabi periods. From these plots we can also appreciate small (beyond-RWA) oscillations, superposed on the main Rabi oscillations. The amplitude and frequency of these small oscillations will be discussed in Sec.~\ref{sec:Maghnussemi}. \begin{figure}[h] \includegraphics[angle=0.0, width=8cm]{fig1.pdf} \caption{(Color online) Comparison (for ${\rm Re}[C_g]$) between the numerical solution of the semiclassical Rabi model (dashed red line) and the Picard series (solid blue line) up to third (top left), eleventh (top right), twenty-first (bottom left), and thirty-third (bottom right) order, for $\Omega/\omega=0.1$.} \label{fig:noRWAsemiclassical} \end{figure} \subsection{Quantum Rabi model} \label{sec:PicardQRM} In this subsection, we review in more detail the Picard expansion for the finite-time quantum Rabi model introduced in Ref.~\cite{exotic}. We expand the state vector in the $\{|l,n\rangle\}$ basis ($l=g,e$; $n=0,1,2,...$) as $|\psi(t)\rangle=\sum_{l,n} C_{l,n}(t) |l,n\rangle$. For every term in the Hamiltonian (\ref{eq:HIquantum}) it is possible to give a diagrammatic representation (see Fig.~\ref{fig:vertices}).
The interaction vertex is represented by a full circle, a photon by a wavy line, and the qubit in the ground (excited) state by a straight line (two parallel straight lines). Time flows from bottom to top. The vertex corresponding to the term proportional to $\sigma_+ a$ in the Hamiltonian tells us that we start with the qubit in the ground state and one photon. As a consequence of the qubit-field interaction, the photon is absorbed and the qubit is promoted to its excited state. The term $\sigma_- a^\dagger$ de-excites the atom while creating a photon, $\sigma_- a$ simultaneously destroys a photon and de-excites the atom, and $\sigma_+ a^\dagger$ simultaneously creates a photon and excites the atom. The last two terms are responsible for the anti-DCE and DCE effects, respectively. \begin{figure}[h] \includegraphics[angle=0.0, width=7cm]{fig2.pdf} \caption{ Vertices associated to the terms in Hamiltonian (\ref{eq:HIquantum}). The vertices in the bottom line correspond to terms neglected within the RWA.} \label{fig:vertices} \end{figure} We focus on the initial condition $|\psi(0)\rangle=|g,0\rangle$, corresponding to both the qubit and the field in their ground state. Within the RWA, which conserves the total number of excitations $N_T = \sigma_+ \sigma_- + a^\dagger a$, no excitations are possible and $|\psi(t)\rangle=|g,0\rangle$ at all times. On the other hand, the dynamics is nontrivial when the terms beyond the RWA are included, since one can simultaneously excite the qubit and create a photon, $\sigma_+ a^\dagger |g,0\rangle=|e,1\rangle$. The generation of photons from the vacuum is due to the nonadiabatic change of a system parameter (the switching of the qubit-field coupling constant) and is a manifestation of the (parametric) DCE \cite{dodonov}. To the zeroth-order approximation, $|\psi^{(0)}(t)\rangle=|g,0\rangle$.
Such a state is diagrammatically represented as a vertical single line (see the left diagram in Fig.~\ref{fig:order0-1}), meaning that the qubit remains in its ground state $|g\rangle$, while no photons are emitted. The two horizontal lines in Fig.~\ref{fig:order0-1} (left) (as well as in all other diagrams in this paper) mean that the interaction is switched on at time $t=0$ (lower line) and switched off at time $t=\tau$ (upper line). That is, these lines outline the fact that we are dealing with finite-time QED. To compute the first-order terms, we first observe that $H_I(t') |\psi^{(0)}(t')\rangle= \lambda e^{2i\omega t'} \sigma_+ a^\dagger |g,0\rangle =\lambda e^{2i\omega t'}|e,1\rangle$. After integrating $H_I(t') |\psi^{(0)}(t')\rangle$ from $t'=0$ to $t'=t$ according to Eq.~(\ref{eq:firstorder}), we obtain \begin{equation} |\psi^{(1)}(t)\rangle= \frac{\lambda}{2\omega}\,\left(1-e^{2i\omega t}\right) |e,1\rangle. \end{equation} The diagrammatic representation of the first-order contribution is shown in Fig.~\ref{fig:order0-1} (right): the system starts from the state $|g,0\rangle$ and performs a transition to the state $|e,1\rangle$, with the qubit left in the excited state $|e\rangle$ and the emission of a single (real) photon. Note that this diagram is beyond the RWA, since energy is not conserved: both the qubit and the field start from their ground states and are eventually excited. \begin{figure}[h] \includegraphics[angle=0.0, width=8cm]{fig3.pdf} \caption{ Diagrammatic representation of the zeroth- (left) and first-order (right) contributions in the Picard series for the quantum Rabi model, with initial condition $|\psi(0)\rangle=|g,0\rangle$.} \label{fig:order0-1} \end{figure} To obtain the second-order contributions, we apply $H_I(t')$ to the first-order correction $|\psi^{(1)}(t')\rangle$. Since $|\psi^{(1)}(t')\rangle\propto |e,1\rangle$, we obtain terms proportional to $\sigma_- a |e,1\rangle= |g,0\rangle$ and $\sigma_- a^\dagger |e,1\rangle= \sqrt{2}\,|g,2\rangle$.
These contributions are represented by the diagrams of Fig.~\ref{fig:order2}. Note that in the first case (left diagram) the photon is virtual, while in the second (right diagram) two real photons are emitted. After integrating $H_I(t') |\psi^{(1)}(t')\rangle$ over time according to Eq.~(\ref{eq:nthorder}) (with $n=2$), we obtain \begin{eqnarray} \begin{array}{c} {\displaystyle |\psi^{(2)}(t)\rangle = i\,\frac{\lambda^2}{2\omega}\left[ t+\frac{i}{2\omega}\left(1-e^{-2i\omega t}\right)\right]|g,0\rangle} \\ {\displaystyle +\, i\,\frac{\sqrt{2} \lambda^2}{2\omega}\left[ -t+\frac{i}{2\omega}\left(1-e^{2i\omega t}\right)\right]|g,2\rangle.} \end{array} \label{eq:dia2} \end{eqnarray} It is interesting to remark that in the latter term the $\sqrt{2}$ factor is due to the stimulated emission of the second photon by the first one. \begin{figure}[h] \includegraphics[angle=0.0, width=8cm]{fig4.pdf} \caption{ Same as in Fig.~\ref{fig:order0-1}, but for the second-order contributions.} \label{fig:order2} \end{figure} To obtain the third-order contribution, we apply $H_I(t')$ to $|\psi^{(2)}(t')\rangle$. As a result, from the term proportional to $|g,0\rangle$ in $|\psi^{(2)}(t')\rangle$ we obtain a term proportional to $|e,1\rangle$ (top left diagram in Fig.~\ref{fig:order3}), while from the term proportional to $|g,2\rangle$ we obtain two terms, one proportional to $|e,3\rangle$ (top right diagram in Fig.~\ref{fig:order3}) and one proportional to $|e,1\rangle$ (bottom diagram in Fig.~\ref{fig:order3}).
After integrating $H_I(t') |\psi^{(2)}(t')\rangle$ over time, we obtain \begin{eqnarray} \begin{array}{c} {\displaystyle |\psi^{(3)}(t)\rangle = \frac{\lambda^{3}}{4 \omega^{3}} \left[ - 1 + e^{2 i \omega t} - i (\omega t) \left(1 + e^{2 i \omega t}\right)\right] |e,1\rangle }\\ {\displaystyle +\sqrt{\frac{3}{2}} \frac{\lambda^{3}}{8 \omega^{3}} \left[ 1 - e^{4 i \omega t} + 4 i e^{2 i \omega t} (\omega t)\right]|e,3\rangle }\\ {\displaystyle +\frac{\lambda^{3}}{4 \omega^{3}} \left[1 - e^{2 i \omega t} + 2 i (\omega\,t) - 2 (\omega t)^{2}\right] |e,1\rangle, } \end{array} \end{eqnarray} where the three terms of this equation correspond, respectively, to the top left, top right, and bottom diagram of Fig.~\ref{fig:order3}. The perturbative treatment outlined in this subsection can be easily iterated to higher orders. \begin{figure}[h] \includegraphics[angle=0.0, width=8cm]{fig5.pdf} \caption{ Same as in Fig.~\ref{fig:order0-1}, but for the third-order contributions.} \label{fig:order3} \end{figure} As an example of application of the Picard expansion, we compute the mean number of generated photons as a function of the qubit-field coupling constant $\lambda$ and of the interaction time $t$. We can see from Fig.~\ref{fig:picard-3d} that the fourth-order plot (top panel) is in good agreement with the exact solution (bottom panel), provided $\lambda$ and $t$ are not too large. \begin{figure}[h] \includegraphics[angle=0.0, width=8cm]{fig6.pdf} \caption{(Color online) Mean number of generated photons $\langle n \rangle$ as a function of the coupling strength $\lambda$ (in units of $\omega$) and of the interaction time $t$, measured in units of the swapping time $\tau_s = \pi/2 \lambda$. The fourth-order Picard expansion (top) is compared with the numerical results (bottom).} \label{fig:picard-3d} \end{figure} For $\lambda/\omega=0.1$, the Picard expansion is compared (up to the fourth order) with the exact numerical solution in Fig.~\ref{fig:npicardexact}.
It can be seen that the Picard series truncated to the fourth order can reproduce the behavior of $\langle n \rangle$ up to $t/\tau_s\approx 0.5$. On the other hand, for this value of $\lambda$ the amplitude and time of the first peak can be estimated already from the first-order expansion. To the first order, \begin{equation} \langle n \rangle (t) =\left(\frac{\lambda}{\omega}\right)^2 \sin^2(\omega t), \label{eq:taup} \end{equation} corresponding to the first peak at time $\tau_p$, with $\tau_p/\tau_s=\lambda/\omega$, and peak value $\langle n\rangle (\tau_p)=(\lambda/\omega)^2$. As shown in Fig.~\ref{fig:npertexact}, this analytical prediction for $\tau_p$ is in good agreement with the numerical results up to $\lambda/\omega\approx 0.3$. \begin{figure}[h] \includegraphics[angle=0.0, width=8cm]{fig7.pdf} \caption{(Color online) Mean number of photons as a function of time, with the numerical results (dashed red line) compared with the Picard expansion (solid blue line) truncated to the first (top left), second (top right), third (bottom left), and fourth (bottom right) order, for $\lambda/\omega=0.1$.} \label{fig:npicardexact} \end{figure} \begin{figure}[h] \includegraphics[angle=0.0, width=7.5cm]{fig8.pdf} \caption{(Color online) Time $\tau_p$ of the first peak in $\langle n \rangle (t)$: comparison of the first-order (dashed line) with the numerical results (triangles).} \label{fig:npertexact} \end{figure} The oscillations in $\langle n \rangle (t)$, due to the coherent generation (DCE) and destruction (anti-DCE) of photons, are a clear dynamical ``smoking gun'' of the ultrastrong coupling regime. Such oscillations, as shown in Figs.~\ref{fig:picard-3d} and \ref{fig:npicardexact}, are, for relatively small values of $\lambda/\omega$, regular. At small times, a quasi-periodic behavior with frequency $2\omega$ is clearly seen, and also predicted by first-order perturbation theory, Eq.~(\ref{eq:taup}). 
This result might be interesting for experimental investigations in that clear features of the DCE are observable with short interaction times and relatively small interaction strengths. \section{Magnus expansion} The Magnus expansion starts by assuming that an exponential form for the (unitary) time-evolution operator $U(t)$ (defined by $|\psi(t)\rangle=U(t)|\psi(0)\rangle$) exists: \begin{equation} U(t)=e^{\Omega(t)}, \quad \Omega(0)=0, \label{eq:magnus1} \end{equation} with a series expansion for $\Omega$: \begin{equation} \Omega(t)=\sum_{n=1}^\infty \Omega^{(n)}(t). \label{eq:magnus2} \end{equation} An approximate expression for the time-evolution operator is obtained by truncation of the Magnus expansion. The first few terms in such an expansion are \begin{eqnarray} \begin{array}{c} {\displaystyle \Omega^{(1)}(t)=\int_0^t dt_1 A(t_1), } \\ {\displaystyle \Omega^{(2)}(t)=\frac{1}{2}\int_0^t dt_1 \int_0^{t_1} dt_2 [A(t_1),A(t_2)], } \\ {\displaystyle \Omega^{(3)}(t)=\frac{1}{6}\int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3 } \\ {\displaystyle ([A(t_1),[A(t_2),A(t_3)]]+[A(t_3),[A(t_2),A(t_1)]]), } \end{array} \label{eq:Magnusgeneric} \end{eqnarray} where we have defined the (anti-Hermitian) operator $A(t)=-iH_I(t)$, with $H_I$ the Hamiltonian in the interaction picture. For a derivation of the terms $\Omega^{(n)}(t)$ see, \emph{e.g.}, Ref.~\cite{Blanes2009}. Note that, since the expansion is for $\Omega$ and not for $U$, the Magnus expansion provides, in contrast to the Picard series, a \emph{unitary perturbation theory}. This is one of the most appealing features of the Magnus expansion. The Magnus expansion, including conditions for the convergence of the Magnus series and several applications of the method, such as its use as a numerical integrator, is reviewed in Ref.~\cite{Blanes2009}. Hereafter, we shall discuss applications of the Magnus expansion to the Rabi model.
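To make the structure of Eq.~(\ref{eq:Magnusgeneric}) concrete, the following sketch evaluates $\Omega^{(1)}$ and $\Omega^{(2)}$ by direct nested quadrature for a toy $2\times 2$ anti-Hermitian generator (an illustrative stand-in, not the Hamiltonian of this paper) and checks that $e^{\Omega^{(1)}+\Omega^{(2)}}$ is unitary, which is the key feature of truncating the exponent rather than the propagator:

```python
import numpy as np

def A(t, g=0.1, w=1.0):
    """Anti-Hermitian generator A(t) = -i H_I(t) for a toy 2x2
    interaction-picture Hamiltonian (illustrative stand-in only)."""
    H = 0.5 * g * np.array([[0.0, 1 + np.exp(-2j * w * t)],
                            [1 + np.exp(2j * w * t), 0.0]])
    return -1j * H

def magnus12(T, steps=2000):
    """Omega^(1) and Omega^(2) by nested midpoint quadrature."""
    dt = T / steps
    ts = (np.arange(steps) + 0.5) * dt
    As = np.array([A(t) for t in ts])
    O1 = As.sum(axis=0) * dt
    O2 = np.zeros((2, 2), complex)
    inner = np.zeros((2, 2), complex)   # running integral of A(t2) for t2 < t1
    for A1 in As:
        O2 += A1 @ inner - inner @ A1   # commutator [A(t1), int_0^t1 A(t2) dt2]
        inner += A1 * dt
    return O1, 0.5 * O2 * dt

def expm_antiherm(O):
    """exp(O) for anti-Hermitian O via the Hermitian matrix iO."""
    vals, vecs = np.linalg.eigh(1j * O)
    return vecs @ np.diag(np.exp(-1j * vals)) @ vecs.conj().T

O1, O2 = magnus12(T=5.0)
U = expm_antiherm(O1 + O2)
# the truncated exponent is still anti-Hermitian, so U is exactly unitary
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```

Since the commutator of anti-Hermitian matrices is again anti-Hermitian, every truncation of the series in the exponent yields a unitary $U$, unlike a truncated Picard series.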
\subsection{Semiclassical Rabi model} \label{sec:Maghnussemi} We write explicitly the first three terms of the Magnus expansion for the semiclassical Rabi model. Let $\Omega^{(n)}_{ij}=\langle i | \Omega^{(n)}|j\rangle$, with $i,j=g,e$, denote the matrix elements of $\Omega^{(n)}$ in the $\{|g\rangle,|e\rangle\}$ basis. From Eq.~(\ref{eq:Magnusgeneric}), using the semiclassical Rabi Hamiltonian (\ref{eq:HIsemi}) we obtain \begin{eqnarray} \begin{array}{c} {\displaystyle \Omega^{(1)}_{gg}(t)=\Omega^{(1)}_{ee}(t)=0, } \\ {\displaystyle \Omega^{(1)}_{ge}(t)= -\frac{\Omega}{2\omega} \left( 1-e^{-2i\omega t} +2i\omega t \right) =-[\Omega^{(1)}_{eg}(t)]^\star, } \end{array} \end{eqnarray} \begin{eqnarray} \begin{array}{c} {\displaystyle \Omega^{(2)}_{ge}(t)=\Omega^{(2)}_{eg}(t)=0, } \\ {\displaystyle \Omega^{(2)}_{gg}(t)= \frac{i \Omega^2}{4 \omega^2} \left( - 2\omega t \cos(2\omega t)+ \sin(2\omega t) \right) } \\ {\displaystyle =-[\Omega^{(2)}_{ee}(t)], } \end{array} \end{eqnarray} \begin{eqnarray} \begin{array}{c} {\displaystyle \Omega^{(3)}_{gg}(t)=\Omega^{(3)}_{ee}(t)=0, } \\ {\displaystyle \Omega^{(3)}_{ge}(t)= \frac{\Omega^3}{8 \omega^3} \left[ - 3 + i \omega t + \frac{4}{3} \, \omega^{2} t^{2} \right. } \\ {\displaystyle +\left( \frac{3}{2} + 2 i \omega t - \frac{2}{3} \,\omega^{2} t^{2} \right) e^{-2i\omega t} } \\ {\displaystyle + \left( \frac{7}{6} - \frac{4}{3} \, i \omega t - \frac{2}{3} \,\omega^{2} t^{2} \right) e^{2i\omega t} } \\ {\displaystyle \left. +\left( \frac{1}{3} + \frac{1}{3} \, i \omega t \right) \, e^{-4i\omega t} \right] =-[\Omega^{(3)}_{eg}(t)]^\star. } \end{array} \end{eqnarray} Within the RWA, the semiclassical Rabi model is, in the interaction picture, time-independent, and therefore the Magnus expansion reduces to its first-order term, $\Omega(t)=\Omega^{(1)}(t)=-i H_I t$. 
On the other hand, when the terms beyond RWA are taken into account, in general $[A(t_1),A(t_2)]\ne 0$ if $t_1\ne t_2$ and therefore we must also consider higher-order terms in the Magnus expansion. As an example, in Fig.~\ref{fig:magnusrk} (left panel) we compare the Magnus expansion, truncated to the fourth order, with the numerical integration of the Schr\"odinger equation via a fourth-order Runge-Kutta method. If we compare these results with those obtained by means of the Picard series (see Fig.~\ref{fig:noRWAsemiclassical}), it is clear that the Magnus expansion allows us to address much longer evolution times already at small orders. On the other hand, the convergence of the Magnus expansion is not guaranteed at all times. More precisely, a sufficient condition \cite{moan,casas2007} for the convergence of the Magnus expansion is that \begin{equation} \int_0^t ||A(t')||_2 d t' < \pi, \label{eq:magnusconvergence} \end{equation} where $||A||_2$ is the square root of the largest eigenvalue of $A^\dagger A$. In the example of Fig.~\ref{fig:magnusrk}, this criterion ensures convergence for times $t<t_c$, with $\Omega t_c\approx 5.1$ (vertical dashed line in the figure). For $t>t_c$, the strong oscillations and the discrepancy between the Magnus expansion truncated to the fourth order and the exact numerical solution suggest a different numerical approach. That is, we concatenate truncated Magnus expansions. With this approach, we can address arbitrarily long time scales. For instance, Fig.~\ref{fig:magnusrk} (right panel) shows the good agreement between the numerical solution and the concatenation of five first-order Magnus expansions.
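Criterion (\ref{eq:magnusconvergence}) is easy to evaluate numerically. In the sketch below we assume, purely for illustration, an off-diagonal coupling $f(t)=\Omega/2$ in the two-level form of the model, for which $||A(t)||_2=\Omega|\cos(\omega t)|$, and bisect for the critical time $t_c$ at which the integral reaches $\pi$:

```python
import numpy as np

Omega, omega = 0.1, 1.0  # Omega/omega = 0.1 as in the figures

def norm_integral(t, steps=20000):
    """Midpoint approximation of int_0^t ||A(t')||_2 dt', with the assumed
    spectral norm ||A(t)||_2 = Omega * |cos(omega t)|."""
    ts = (np.arange(steps) + 0.5) * t / steps
    return np.sum(Omega * np.abs(np.cos(omega * ts))) * t / steps

# the integrand is non-negative, so the integral is increasing in t:
# bisect for norm_integral(t_c) = pi
lo, hi = 0.0, 200.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if norm_integral(mid) < np.pi else (lo, mid)
t_c = 0.5 * (lo + hi)
print(Omega * t_c)  # ~5, of the order of the value quoted in the text
```

With this assumed norm, $\int_0^t||A||_2\,dt'\approx\Omega(2/\pi)t$ for $\omega t\gg 1$, so $\Omega t_c\approx\pi^2/2\approx 4.9$, of the same order as the value $\Omega t_c\approx 5.1$ quoted above.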
\begin{figure}[h] \includegraphics[angle=0.0, width=8.5cm]{fig9.pdf} \caption{(Color online) Comparison (for ${\rm Re}[C_g]$) between the numerical solution of the semiclassical Rabi model (dashed red line) and the Magnus expansion (solid blue line), up to fourth order (left) or iterating five times the first order expansion (right), for $\Omega/\omega=0.1$. The dashed line shows the time ($\Omega t\approx 5.1$) up to which convergence of the Magnus expansion is guaranteed by criterion (\ref{eq:magnusconvergence}).} \label{fig:magnusrk} \end{figure} To further assess the validity of the Magnus expansion, we follow the dynamics up to 30 Rabi periods ($\Omega t=60\pi$), by concatenating ${\cal N}$ times the fourth-order Magnus expansion, and then compute the Fourier transforms $F$ of $C_g$ and $C_e$. As an example, we show in Fig.~\ref{fig:fft-semiclassical} $F[{\rm Re}({C}_g)]$, for different values of ${\cal N}$. We can see that ${\cal N}=3\times 10^3$ allows us to reproduce the main features of the Fourier spectrum: for that purpose, more than $10^5$ time steps are necessary when using the Runge-Kutta method (see the bottom right panel of Fig.~\ref{fig:fft-semiclassical}). The Magnus expansion can then be used as a numerical integrator, more efficient for this problem than the Runge-Kutta method, as it allows much longer time steps. \begin{figure}[h] \includegraphics[angle=0.0, width=8.5cm]{fig10.pdf} \caption{(Color online) Fourier transform $F$ of ${\rm Re}({C}_g)$ (arbitrary units in the plot), obtained from integration of the semiclassical Rabi model up to $\Omega t=60 \pi$, with $\Omega/\omega=0.1$, iterating the fourth-order Magnus expansion ${\cal N}=5$ (top left), 100 (top right), 500 (bottom left), and 3000 (bottom right) times. The dashed red curve in the bottom right panel is instead obtained by fourth-order Runge-Kutta integration of the equations of motion, with $5.12\times 10^5$ points.
The dashed line ${\rm Log}[|F[{\rm Re}(C_g)]|]=a-b(w/\Omega)$, with $a\approx 0.106$ and $b\approx 0.136$, fits the decay of the peaks in the Fourier transform.} \label{fig:fft-semiclassical} \end{figure} The Fourier spectrum has characteristic double-peaks. More precisely, Fig.~\ref{fig:fft-semiclassical} exhibits a single peak at the Rabi frequency $\Omega$, and doublets at frequencies $2n\omega \pm \Omega$, with $n=1,2,3,...$. Such features can be qualitatively explained as follows. The peak at frequency $\Omega$ corresponds to Rabi oscillations and already exists within the RWA. On the other hand, the doublets are structures beyond RWA, which can be conveniently understood from the Picard series. At each order of the Picard expansion, we integrate in time terms proportional to $e^{\pm 2i\omega t}$ times the wave-function at the previous order. We therefore generate new harmonics at higher frequency as we increase the perturbative order in the Picard series. Terms proportional to $e^{\pm 2 i n \omega t}$ multiply the Rabi oscillations, proportional to $e^{i\Omega t}$, and therefore in conclusion we generate harmonics at frequencies $2 n\omega \pm \Omega$. Note that each integration in time of $e^{\pm 2i\omega t}$ implies a decay of the weight of the corresponding harmonic by a factor $1/(2\omega)$. If we write the Schr\"odinger equation for the semiclassical Rabi model (\ref{eq:HIsemi}) as \begin{equation} \left [ \begin{array}{c} {\dot C}_{g}(t) \\ {\dot C}_{e}(t) \end{array} \right ] = - i f(t) \left [ \begin{array}{cc} 0 & 1 + e^{- 2 i \omega t} \\ 1 + e^{ 2i\omega t} & 0 \end{array} \right ] \left [ \begin{array}{c} C_{g}(t) \\ C_{e}(t) \end{array} \right ], \label{eq:schrodingerRabi} \end{equation} we can clearly see that at each order of the Picard series we improve the approximation for either $C_g$ or $C_e$. Therefore, we need two steps in the Picard expansion to improve $C_g$ (or $C_e$) and generate new harmonics. 
Since this implies two integrations in time, the harmonics at frequencies $2n\omega\pm \Omega$ are scaled by a factor $[\Omega/(2\omega)]^2$ with respect to the harmonics at frequencies $2(n-1)\omega\pm \Omega$. This estimate is in good agreement with the numerical results of Fig.~\ref{fig:fft-semiclassical}. Indeed, for $\Omega/\omega=0.1$ the decay of the first peaks in the Fourier transform is fitted by an exponential law, ${\rm Log}[|F[{\rm Re}(C_g)]|]=a-b(w/\Omega)$, with $a\approx 0.106$ and $b\approx 0.136$. This implies that the ratio between the amplitude of nearby doublets is approximately equal to $10^{-2(\omega/\Omega) b}\approx 1/525$, not far from $[\Omega/(2\omega)]^2=1/400$. A more precise calculation appears difficult, since at each perturbative order new harmonics are generated but also the weight of the already existing harmonics is modified. Note that in the Magnus series, since we have an exponential approximation theory (i.e., we consider $e^{\Omega}$, with a truncated expansion for $\Omega$), higher-order harmonics are visible already at the lowest orders. The above discussion can be visualized by means of the analog circuit reported in Fig.~\ref{fig:circuitRabi}. It corresponds to two orders in the Picard expansion, and each integration brings a factor $\Omega/(2\omega)$. The signal ($C_g$ and $C_e$) can be reinjected and at each loop the approximation is improved, adding each time two more orders in the Picard series. \begin{figure}[h] \includegraphics[angle=0.0, width=8.5cm]{fig11.pdf} \caption{Schematic drawing of an analog circuit for the integration of the Schr\"odinger equation (\ref{eq:schrodingerRabi}) for the semiclassical Rabi model.} \label{fig:circuitRabi} \end{figure} Finally, we point out that there is an interesting analogy, in particular with respect to the occurrence of doublets, between the semiclassical Rabi model and the Mathieu equation in an appropriate range of parameters, see Appendix~\ref{app:mathieu}. 
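The arithmetic behind the comparison of the fitted doublet decay with the Picard estimate quoted above can be reproduced in a few lines (a pure consistency check of the numbers in the text):

```python
# Consistency check of the doublet-decay numbers quoted in the text.
Omega_over_omega = 0.1
b = 0.136  # fitted slope of Log|F[Re(C_g)]| versus w/Omega

fitted_ratio = 10 ** (-2 * b / Omega_over_omega)  # 10^(-2 (omega/Omega) b)
picard_ratio = (Omega_over_omega / 2) ** 2        # [Omega/(2 omega)]^2

print(round(1 / fitted_ratio))  # 525
print(round(1 / picard_ratio))  # 400
```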
\subsection{Quantum Rabi model} The Magnus expansion can also be applied to the quantum Rabi model, using Eqs.~(\ref{eq:magnus1}), (\ref{eq:magnus2}) and (\ref{eq:Magnusgeneric}). For the sake of simplicity, we do not report explicit expressions for $\Omega^{(n)}(t)$. As the Hilbert space is infinite-dimensional, we cannot use convergence criteria like Eq.~(\ref{eq:magnusconvergence}), since the eigenvalues of $A^\dagger A$ are not upper bounded. On the other hand, for any given initial condition the Hilbert space actually explored by the dynamics is finite. For instance, if initially both the field and the qubit are prepared in their ground state, as discussed in Sec.~\ref{sec:PicardQRM} the mean number of photons does not grow indefinitely but oscillates due to coherent generation (DCE) and destruction (anti-DCE) of photons. Hence, we expect convergence of the Magnus expansion for sufficiently short integration times. This expectation is borne out by numerical data, as shown in Fig.~\ref{fig:magnusqrk}. \begin{figure}[h] \includegraphics[angle=0.0, width=8.5cm]{fig12.pdf} \caption{(Color online) Comparison (for ${\rm Re}[C_{g,0}]$) between the numerical solution of the quantum Rabi model (dashed red line) and the Magnus expansion (solid blue line), up to fourth order (left) or iterating a hundred times the fourth order expansion (right), for $\lambda/\omega=0.12$. Note that the initial condition we used, $C_{g,0}(t=0)=1$, is such that within the RWA the dynamics is trivial, $C_{g,0}(t)=1$ at all times.} \label{fig:magnusqrk} \end{figure} \section{Conclusions} In this paper, we have applied the Picard and Magnus expansions to the regime of ultrastrong matter-field coupling, in the paradigmatic Rabi model. The Picard series, truncated to low orders, is suitable only for short interaction times.
On the other hand, we have shown that the Magnus expansion, through concatenation, is an efficient numerical integrator, in that it allows time steps much longer than in the Runge-Kutta method. We have highlighted clear features of the dynamics in the ultrastrong coupling regime, and in particular of the dynamical Casimir effect. Regular oscillations in the mean number $\langle n \rangle$ of photons take place, due to the coherent generation (DCE) and destruction (anti-DCE) of photons. This result provides a clear ``smoking gun'' of the DCE, which might be of interest for experimental investigations in circuit QED, in that the above oscillations are observable with short interaction times and relatively small interaction strengths. We have shown that the Fourier spectrum of motion in the semiclassical Rabi model exhibits a peak at the Rabi frequency $\Omega$ and doublets at frequencies $2n\omega\pm\Omega$, with $n$ a positive integer. While the Rabi frequency is trivially obtained by solving the Rabi model within the rotating wave approximation, the doublets are features beyond the RWA. Both the oscillations in $\langle n \rangle$ and the doublets can be explained by means of the Picard series. The Fourier analysis can also be extended to the quantum Rabi model, where similar, though more complicated, doublet structures are found. Finally, the analogy with the Mathieu equation highlights the fact that doublets are a general feature of time-modulated systems. {\it Acknowledgments:} We acknowledge support by the INFN through the project ``QUANTUM''.
\section{Introduction}\label{sec::intro} With the exponential rise in data demand far exceeding the capacity of the traditional macro-only cellular network operating in sub-6 GHz bands, network densification using mm-wave base stations (BSs) is becoming a major driving technology for the 5G wireless evolution~\cite{dehos2014millimeter}. While heterogeneous cellular networks (HetNets) with low power small cell BSs (SBSs) overlaid with traditional macro BSs improve the spectral efficiency of the access link (the link between a user and its serving BS), mm-wave communication can further boost the data rate by offering high bandwidth. {That said, one of the main hindrances in the way of large-scale deployment of small cells is that} the existing high-speed optical fiber backhaul network that connects the BSs to the network core is not scalable to the extent of ultra-densification envisioned for small cells~\cite{quek2013small,tipmongkolsilp2011evolution,DhillonCaireBackhaul}. However, with recent advancement in mm-wave communication with highly directional beamforming~\cite{rangan2014millimeter,GhoshMMwave}, it is possible to replace the so-called {\em last-mile fibers} for SBSs by establishing fixed mm-wave backhaul links between the SBS and the macro BS (MBS) equipped with fiber backhaul, also known as the anchored BS (ABS), thereby achieving Gigabits per second (Gbps) range data rate over backhaul links~\cite{GaoMassiveMimomm-waveBackhaul}. While mm-wave fixed wireless backhaul is targeted to be a part of the first phase of the commercial roll-out of 5G~\cite{mm-waveMagazineDahlmamn}, 3GPP is exploring a more ambitious solution of integrated access and backhaul (IAB) where the ABSs will use {the} same spectral resources and infrastructure of mm-wave transmission to serve cellular users in access as well as the SBSs in backhaul~\cite{accessbackhaul3gpp}.
In this paper, we develop a tractable analytical framework {for} IAB-enabled mm-wave cellular networks using tools from stochastic geometry and obtain some design insights that will be useful for the ongoing pre-deployment studies on IAB. \subsection{Background and related works} Over recent years, stochastic geometry has emerged as a powerful tool for modeling and analysis of cellular networks operating in sub-6 GHz~\cite{haenggi2012stochastic}. The locations of the BSs and users are commonly modeled as independent Poisson point processes (PPPs) over an infinite plane. This model, initially developed for the analysis of traditional macro-only cellular networks~\cite{AndrewsTractable}, was further extended for the analysis of HetNets in~\cite{dhillon2012modeling,mukherjee2012distribution,Prasanna_globecomm,jo2012heterogeneous}. In follow-up works, this PPP-based HetNet model was used to study many different aspects of cellular networks such as load balancing, BS cooperation, multiple-input multiple-output (MIMO), energy harvesting, and many more. Given the activity this area has seen over the past few years, any attempt towards summarizing all key relevant prior works here would be futile. Instead, it would be far more beneficial for the interested readers to refer to dedicated surveys and tutorials~\cite{elsawy2013stochastic,elsawy2016modeling,andrews2016primer,mukherjee2014analytical} that already exist on this topic. While these initial works were implicitly done for cellular networks operating in the sub-6 GHz spectrum, tools from stochastic geometry have also been leveraged further to characterize their performance in the mm-wave spectrum~\cite{AndrewsMMWave,Direnzo_mmwave,MillimeterWaveBai2015,mmWaveHetNetTurgut}.
These mm-wave cellular network models specifically focus on the mm-wave propagation characteristics which significantly differ from those of the sub-6 GHz bands~\cite{rangan2014millimeter}, such as the severity of blocking of mm-wave signals by physical obstacles like walls and trees, directional beamforming using antenna arrays, and interference being dominated by noise~\cite{Kulkarni_backhaul_asilomar}. These initial modeling approaches were later extended to study different problems specific to mm-wave cellular networks, such as cell search~\cite{liAndrews2017directional}, antenna beam alignment~\cite{HeathAlkhateeb2017BeamAssociation}, {and cell association in the mm-wave spectrum~\cite{Elshaer_cell_association}.} With this brief introduction, we now shift our attention to the main focus of this paper which is IAB in mm-wave cellular networks. In what follows, we provide the rationale behind mm-wave IAB and how stochastic geometry can {be used for its performance evaluation.} For traditional cellular networks, it is reasonable to assume that the capacity achieved by the access links is not limited by the backhaul constraint on the serving BS since all BSs have access to the high capacity wired backhaul. As expected, the backhaul constraint was ignored in almost all prior works on stochastic geometry-based modeling and analysis of cellular networks. However, with increasing network densification with small cells, it may not be feasible to connect every SBS to the wired backhaul network, which is limited by cost, infrastructure, maintenance, and scalability. These limitations motivated a significant body of research works on the expansion of cellular networks by deploying relay nodes connected to the ABS by wireless backhaul links, {e.g. see}~\cite{backhaul_survey_1}.
Among different techniques of wireless backhaul, 3GPP included layer 3 relaying as a part of the long term evolution advanced (LTE-A) standard in Release 10~\cite{relay3gpp1,relay3gpp2} for coverage extension of the cellular network. Layer 3 relaying follows the principle of the IAB architecture, which is often synonymously referred to as {\em self-backhauling}, where the relay nodes have the functionality of an SBS and the ABS multiplexes its time-frequency resources to establish access links with the users and wireless backhaul links with SBSs that may not have access to wired backhaul~\cite{johansson2016self}. However, despite being part of the standard, layer 3 relays have never really been deployed on a massive scale in 4G, mostly due to the spectrum shortage in sub-6 GHz. For instance, in urban regions with high capacity demands, the operators are not willing to {relinquish} {any part of the cellular bandwidth (a costly and scarce resource)} for wireless backhaul. However, with recent advancement in mm-wave communication, IAB has gained substantial interest since the spectral bottleneck will not be a primary concern once the high bandwidth in the mm-wave spectrum (at least 10x the cellular BW in sub-6 GHz) is exploited. Some of the notable industry initiatives driving mm-wave IAB are mm-wave small cell access and backhauling (MiWaveS)~\cite{miWaveS} and 5G-Crosshaul~\cite{crosshaul}. In 2017, 3GPP also {started working on} a new study item to investigate the performance of IAB-enabled mm-wave cellular networks~\cite{accessbackhaul3gpp}. Although backhaul is becoming a primary bottleneck of cellular networks, there is very little existing work on stochastic geometry-based analyses considering the backhaul constraint~\cite{QuekBackhaul,suryaprakash2014analysis,SinghAndrews2014}.
While these works are focused on traditional networks in sub-6 GHz, contributions on IAB-enabled mm-wave HetNets are even sparser, except for an extension of the PPP-based model~\cite{SinghKulkarniSelfBackhaul}, where the authors modeled wired and wirelessly backhauled BSs and users as three independent PPPs. In \cite{Ganti-self-backhaul,tabassum2016analysis}, a similar modeling approach was used to study IAB in sub-6 GHz using full duplex BSs. The fundamental shortcoming of these PPP-based models is the assumption of independent locations of the BSs and users, which are spatially coupled in actual networks. For instance, in reality, the users form spatial clusters, commonly known as {\em user hotspots}, and the centers of the user hotspots are targeted as the potential cell-sites of the short-range mm-wave SBSs~\cite{Saha_J1}. Not surprisingly, such spatial configurations of users and BSs are at the heart of the 3GPP simulation models~\cite{saha20173gpp}. To address this shortcoming of the analytical models, in this paper, we propose the {\em first 3GPP-inspired} {\em stochastic geometry-based} finite network model for the performance analysis of HetNets with IAB. The key contributions are summarized next. \subsection{Contributions and outcomes} \label{subsec::contributions} \subsubsection{New tractable model for IAB-enabled mm-wave HetNet} We develop a realistic and tractable analytical framework to study the performance of IAB-enabled mm-wave HetNets. Similar to the models used in 3GPP-compliant {simulations}~\cite{accessbackhaul3gpp}, we consider a two-tier HetNet where a {circular macrocell with the ABS at the center} is overlaid by numerous low-power small cells. The users are assumed to be non-uniformly distributed over the macrocell forming hotspots and the SBSs are located at the geographical centers of these user hotspots.
{The non-uniform distribution of the users and the spatial coupling of their locations with those of the SBSs means that the analysis of this setup is drastically different from the state-of-the-art PPP-based models. Further, the consideration of a single macrocell (justified by the noise-limited nature of mm-wave communications), allows us to glean crisp insights into the coverage zones, which further facilitate a novel analysis of load on ABS and SBSs\footnote{In our discussion, BS load refers to the number of users connected to the BS.}.} Assuming that the total system BW is partitioned into two splits for access and backhaul communication, we use this model to study the performance of three backhaul BW partition strategies, namely, (i) {\em equal partition}, where each SBS gets equal share of BW irrespective of its load, (ii) {\em instantaneous load-based partition}, where the ABS frequently collects information from the SBSs on their instantaneous loads and partitions the backhaul BW proportional to the {instantaneous} load on each SBS, and (iii) {\em average load-based partition}, where the ABS collects information from the SBSs on {their average loads} and partitions the backhaul BW proportional to the average load on each SBS. \subsubsection{New load modeling and downlink rate analysis} For the purpose of performance evaluation and comparisons between the aforementioned strategies, we evaluate the downlink rate coverage probability i.e. probability that the downlink data rate experienced by a randomly selected user will exceed a target data rate. As key intermediate steps of our analysis we {characterize} the two essential components of rate coverage, which are (i) {signal-to-noise-ratio} ($\mathtt{SNR}$)-coverage probability, and (ii) the distribution of ABS and SBS load, {which} directly impacts the amount of resources allocated by the serving BS to the user of interest. 
We compute the {probability mass functions} (PMFs) of the ABS and the SBS loads assuming the number of users per hotspot is fixed. We {then} relax this fixed user assumption by considering independent Poisson distribution on the number of users in each hotspot. Due to a significantly different spatial model, our approach of load modeling is quite different from the load-modeling in PPP-based networks~\cite{SinghAndrews2014}. \subsubsection{System design insights} Using the proposed analytical framework, we obtain the following system design insights. \begin{itemize} \item We {compare} the three backhaul BW partition strategies in terms of three metrics, (i) rate coverage probability, (ii) median rate, and (iii) $5^{th}$ percentile rate. {Our numerical results indicate that for a given combination of the backhaul BW partition strategy and the performance metric of interest, there exists an optimal access-backhaul BW split for which the metric is maximized.} \item Our results demonstrate that the optimal access-backhaul partition fractions for median and $5^{th}$ percentile rates are not very sensitive to the choice of backhaul BW partition strategies. Further, the median and $5^{th}$ percentile rates are invariant to system BW. \item For given infrastructure and spectral resources, the IAB-enabled network outperforms the macro-only network with no SBSs up to a critical volume of total cell-load, beyond which the performance gains disappear and its performance converges to that of the macro-only network. Our numerical results also indicate that this critical total {cell-load} increases almost linearly with the system BW. 
\end{itemize} \section{System Model}\label{sec::system::model} \subsection{mm-wave Cellular System Model} \begin{figure} \centering \subfigure[User and BS locations]{ \includegraphics[width=.98\linewidth]{./Fig/system_model_1.pdf} \label{fig::system::model::1}} \subfigure[Resource Allocation]{ \label{fig::system::model::2} \includegraphics[width=.99\linewidth]{./Fig/system_model_2.pdf} } \caption{Illustration of the system model.} \label{fig::system::model} \end{figure} \subsubsection{BS and user locations} Inspired by the spatial configurations used in 3GPP {simulations}~\cite{accessbackhaul3gpp,saha20173gpp} for a typical outdoor deployment scenario of a two-tier HetNet, we assume that $n$ SBSs are deployed inside a circular macrocell of radius $R$ (denoted by $b({\bf 0},R)$) with the macro BS at its center. We assume that this BS is connected to the core network with high speed optical fiber and {is hence} an ABS. Note that, in contrast to the infinite network models (e.g. the PPP-based networks defined over $\mathbb{R}^2$) which are suitable for interference-dominated networks (such as conventional cellular networks in sub-6 GHz), we are limiting the complexity of the system model by considering a single macrocell. This assumption is justified by the noise-limited nature of mm-wave communications~\cite{Kulkarni_backhaul_asilomar}. {Moreover, as will be evident in the sequel, this setup will allow us to glean crisp insights into the properties of this network despite a more general user distribution model (discussed next) compared to the PPP-based model.} We model a user hotspot at ${\bf x}$ as $b({\bf x},R_{\rm s})$, i.e., a circle of radius $R_{\rm s}$ centered at ${\bf x}$.
We assume that the macrocell contains $n$ user hotspots, located at $\{{\bf x}_i\equiv (x_i,\varphi_i), i=1,\dots,n\}$, which are distributed uniformly at random in $b({\bf 0},R-R_{\rm s})$.\footnote{{For notational simplicity, we use $x\equiv\|{\bf x}\|,\ \forall\ {\bf x}\in{\mathbb{R}}^2$.}} Thus, $\{{\bf x}_i\}$ is a sequence of independently and identically distributed (i.i.d.) random vectors with the distribution of ${\bf x}_i$ being: \begin{align}\label{eq::sbs::distribution} f_{\bf X}({\bf x}_i)=\begin{cases}\frac{x_i}{\pi(R-R_{\rm s})^2}, &\text{when }0<x_i\leq R- R_{\rm s}, 0<\varphi_i\leq 2\pi,\\ 0, &\text{otherwise.} \end{cases} \end{align}The marginal probability density function (PDF) of $x_i$ is obtained as: $f_{X}(x_i)=2x_i/(R-R_{\rm s})^2$ for $0<x_i\leq R- R_{\rm s}$ and $\varphi_{i}$ is a uniform random variable in $(0,2\pi]$. Note that this construction ensures that all hotspots lie entirely inside the macrocell, i.e., $b({\bf x}_i,R_{\rm s})\cap b({\bf 0},R)^c = \emptyset,\ \forall\ i$. {We assume that the number of users in the hotspot centered at ${\bf x}_i$ is $N_{{\bf x}_i}$, where} {\sc case}~$1$: $N_{{\bf x}_i}=\bar{m}$ is fixed and equal for all $i=1,\dots, n$ and {\sc case}~$2$: $\{N_{{\bf x}_i}\}$ is a sequence of i.i.d. Poisson random variables with mean $\bar{m}$. These ${N}_{{\bf x}_i}$ users are assumed to be located {\em uniformly at random} {independently of each other} in each hotspot. Thus, the location of a user belonging to the hotspot at ${\bf x}_i$ is denoted by ${\bf x}_i+{\bf u}$, where ${\bf u} \equiv (u,\xi)$ is a random vector in $\mathbb{R}^2$ with PDF: \begin{align}\label{eq::users::distribution} f_{\bf U}({\bf u})=\begin{cases}\frac{u}{\pi R_{\rm s}^2}, &\text{when }0< {u}\leq R_{\rm s}, 0<\xi\leq 2\pi\\ 0, &\text{otherwise.} \end{cases} \end{align} {The marginal PDF of $u$} is: $f_{U}(u)=2u/R_{\rm s}^2$ for $0<u\leq R_{\rm s}$ and $\xi$ is a uniform random variable in $(0,2\pi]$. 
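The spatial model of \eqref{eq::sbs::distribution} and \eqref{eq::users::distribution} is straightforward to sample by inverse-CDF of the radial coordinate ($x=(R-R_{\rm s})\sqrt{U}$ and $u=R_{\rm s}\sqrt{U}$ for $U$ uniform on $[0,1)$). The sketch below, with illustrative parameter values, also verifies that every hotspot, and hence every user, lies inside the macrocell:

```python
import numpy as np

rng = np.random.default_rng(0)
R, R_s, n, m_bar = 100.0, 10.0, 20, 5   # illustrative values only

def uniform_in_disk(radius, size, rng):
    """Radial inverse-CDF sampling: pdf f(r) = 2r/radius^2 => r = radius*sqrt(U)."""
    r = radius * np.sqrt(rng.random(size))
    phi = rng.random(size) * 2 * np.pi
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=-1)

# hotspot centers (= SBS locations), uniform in b(0, R - R_s)
centers = uniform_in_disk(R - R_s, n, rng)

# CASE 1: m_bar users per hotspot, uniform in b(x_i, R_s)
users = np.concatenate(
    [c + uniform_in_disk(R_s, m_bar, rng) for c in centers])

# every hotspot b(x_i, R_s) lies entirely inside the macrocell b(0, R)
print(np.linalg.norm(users, axis=1).max() < R)  # True
```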
We assume that the SBSs are deployed at the centers of the user hotspots, i.e., at $\{{\bf x}_i\}$. The ABS provides wireless backhaul to these SBSs over mm-wave links. See Fig.~\ref{fig::system::model::1} for an illustration. Having defined the spatial distribution of SBSs and users, we now define the {\em typical user} for which we will compute the rate coverage probability. The typical user is a user chosen uniformly at random from the network. The hotspot to which the typical user belongs is termed the {\em representative hotspot}. We denote the center of the representative hotspot by ${\bf x}$, where ${\bf x}={\bf x}_n$ without loss of generality, and the location of the typical user by ${\bf x}+{\bf u}$. For {\sc case}~$1$, the number of users in the representative cluster is $N_{\bf x}=N_{{\bf x}_n}=\bar{m}$. For {\sc case}~$2$, although the $N_{{\bf x}_i}$ are i.i.d. Poisson, $N_{\bf x}$ {does not follow the same distribution} since the typical user is more likely to belong to a hotspot with a larger number of users~\cite{Qin2017}. If $n\to \infty$, $N_{\bf x}$ follows a weighted Poisson distribution with PMF $ {\mathbb{P}}(N_{\bf x} = k ) = \frac{\bar{m}^{k-1}e^{-\bar{m}}}{(k-1)!}$, where $k\in{\mathbb{Z}}^+$. It can be easily shown that if $N_{\bf x}$ follows this weighted Poisson distribution, then $N_{\bf x}$ is distributed as $N_{{\bf x}_n}+1$. Hence, for $n\to \infty$, one can obtain the distribution of $N_{\bf x}$ by first choosing {a} hotspot uniformly at random and then {adding one user to it}. However, when $n$ is finite, $N_{\bf x}$ lies between $N_{{\bf x}_{n}}$ and $N_{{\bf x}_{n}}+1$ in distribution ($N_{{\bf x}_n}\leq N_{\bf x}\leq N_{{\bf x}_{n}}+1$). The lower bound on $N_{\bf x}$ is trivially achieved when $n=1$. Since the actual distribution of $N_{\bf x}$ for a finite number of hotspots ($n>1$) is not tractable, we fix the typical user for {\sc case}~$2$ according to the following Remark.
\begin{remark}\label{rem::typical} For {\sc case}~$2$, we first choose a hotspot centered at ${\bf x}$ uniformly at random from the $n$ hotspots, call it the representative hotspot, and then add the typical user at ${\bf x}+{\bf u}$, where $\bf u$ follows the PDF in \eqref{eq::users::distribution}. Although this process of selecting the typical user is exact only asymptotically, i.e., when $N_{{\bf x}_i}\stackrel{i.i.d.}{\sim}{\tt Poisson}(\bar{m}),\ \forall\ i = 1,2,\dots,n$, and $n\to\infty$, it has negligible impact on the analysis since {our interest is in cases where the macrocell contains a moderate} to high number of hotspots~\cite{3gppreportr12}. \end{remark} \subsubsection{Propagation assumptions}\label{subsec::prop::assumption} All backhaul and access transmissions are assumed to be performed in the mm-wave spectrum. {We assume that the ABS and the SBSs transmit at constant power spectral densities (PSDs) $P_{\rm m}/W$ and $P_{\rm s}/W$, respectively, over a system BW $W$. The received power at ${\bf z}$ from a transmitter at ${\bf y}$ is given by $P \psi h L({\bf z},{\bf y})^{-1}$, where $P$ is a generic variable denoting transmit power with $P\in\{P_{\rm m},P_{\rm s}\}$, $\psi$ is the combined antenna gain of the transmitter and receiver, and $L ({\bf z},{\bf y})= 10^{((\beta +10\alpha\log_{10}\|{\bf z}-{\bf y}\|)/10)}$ is the associated pathloss.} We assume that all links undergo i.i.d. Nakagami-$m$ fading; thus, $h\sim{\tt Gamma}(m,m^{-1})$. \subsubsection{Blockage model} \label{subsec::blockagemodel} Since mm-wave signals are sensitive to physical blockages such as buildings, trees, and even human bodies, the LOS and NLOS path-loss characteristics have to be explicitly included in the analysis. Along the lines of \cite{Bai_mmWave}, we assume an exponential blockage model.
Each mm-wave link of distance $r$ between the transmitter (ABS/SBS) and receiver (SBS/user) is LOS or NLOS according to an independent Bernoulli random variable with LOS probability $p(r) =\exp(-r/\mu)$, where $\mu$ is the LOS range constant that depends on the geometry and density of the blockages. {Since the blockage environments seen by the ABS-to-SBS, SBS-to-user, and ABS-to-user links may be very different, one can assume three different blockage constants $\{\mu_{\rm b},\mu_{\rm s},\mu_{\rm m}\}$, respectively, instead of a single blockage constant $\mu$. As will be evident in the technical exposition, this does not require any major changes in the analysis. However, to keep our notation simple, we assume the same $\mu$ for all links in this paper.} Also, LOS and NLOS links are likely to follow different fading statistics, which is incorporated by assuming different Nakagami-$m$ parameters for LOS and NLOS links, denoted by $m_L$ and $m_{NL}$, respectively. We assume that all BSs are equipped with steerable directional antennas and the user equipments have omni-directional antennas. Let $G$ {be} the directivity gain of the transmitting and receiving antennas of the BSs (ABS and SBS). Assuming perfect beam alignment, the effective gains on the backhaul and access links are $G^2$ and $G$, respectively. We assume that the system is noise-limited, i.e., at any receiver, the interference is negligible compared to the thermal noise with PSD ${\tt N}_0$.
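The blockage and small-scale fading assumptions above can be sampled directly; a minimal sketch (parameter values taken from the table of system parameters; function names are ours):

```python
import math
import random

MU = 170.0            # LOS range constant mu (m), from the parameter table
M_LOS, M_NLOS = 2, 3  # Nakagami-m parameters m_L, m_NL

def is_los(r, mu=MU, rng=random):
    # Bernoulli LOS indicator with LOS probability p(r) = exp(-r/mu).
    return rng.random() < math.exp(-r / mu)

def fading_gain(los, rng=random):
    # Small-scale power gain h ~ Gamma(m, 1/m), so that E[h] = 1.
    m = M_LOS if los else M_NLOS
    return rng.gammavariate(m, 1.0 / m)
```

Note that at $r=\mu$ the LOS probability is $e^{-1}\approx 0.37$, and the unit-mean normalization of $h$ keeps the average received power governed by the pathloss alone.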
Hence, the $\mathtt{SNR}$s of the backhaul link from the ABS to the SBS at ${\bf x}$, and of the access links from the SBS at ${\bf x}$ to the user at ${\bf x}+{\bf u}$ and from the ABS to the user at ${\bf x}+{\bf u}$, are respectively expressed as: \begin{subequations}\label{eq::sbr::equations} \begin{alignat}{3} &\mathtt{SNR}_{\rm b}({\bf x}) = \frac{P_{\rm m} G^2 h_{\rm b}L({\bf 0},{\bf x})^{-1}}{{\tt N}_0 W},\\ &\mathtt{SNR}_{\rm a}^{\rm SBS}({\bf x}+{\bf u}) = \frac{P_{\rm s}G h_{\rm s}L({\bf x},{\bf x}+{\bf u})^{-1}}{{\tt N}_0W},\\ &\mathtt{SNR}_{\rm a}^{\rm ABS}({\bf x}+{\bf u}) = \frac{P_{\rm m} G h_{\rm m}L({\bf 0},{\bf x}+{\bf u})^{-1}}{{\tt N}_0W}, \end{alignat} \end{subequations} where $\{h_{\rm b}, h_{\rm s}, h_{\rm m}\}$ are the corresponding small-scale fading gains. \subsubsection{User association}\label{subsec::user::association} { We assume that the SBSs operate in closed access, i.e., users in a hotspot can only connect to the SBS at {the} hotspot center, or to the ABS. This model is inspired by the way smallcells with closed user groups, for instance the privately owned femtocells, are dropped in the HetNet models considered by 3GPP~\cite[Table A.2.1.1.2-1]{access2010further}.} Given the complexity of user association in mm-wave using beam sweeping techniques, we assume a simpler user association scheme performed by signaling in sub-6 GHz, analogous to the current LTE standard~\cite{HeathAlkhateeb2017BeamAssociation}. In particular, the BSs broadcast paging signals using omni-directional antennas in sub-6 GHz, and the user associates with the candidate serving BS offering the maximum received power over the paging signals. Since the broadcast signaling is in sub-6 GHz, we assume the same power-law pathloss function for both LOS and NLOS components with path-loss exponent $\alpha$ due to the rich scattering environment.
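As a concrete illustration of the propagation model and the backhaul SNR expression above, the link budget can be evaluated in dB (a sketch; the 200 MHz bandwidth is an illustrative choice, not a value fixed in the text):

```python
import math

ALPHA_L, BETA_DB = 2.0, 70.0   # LOS pathloss exponent, pathloss at 1 m (dB)
P_M_DBM, G_DB = 50.0, 18.0     # ABS transmit power (dBm), per-antenna gain (dB)
W = 200e6                      # illustrative system bandwidth (Hz), assumed
NOISE_DBM = -174.0 + 10.0 * math.log10(W) + 10.0  # N0*W plus 10 dB noise figure

def pathloss_db(d, alpha=ALPHA_L, beta=BETA_DB):
    # L(z, y) in dB: beta + 10*alpha*log10(||z - y||).
    return beta + 10.0 * alpha * math.log10(d)

def backhaul_snr_db(d, h=1.0):
    # SNR_b = P_m G^2 h L^{-1} / (N0 W); G^2 since both link ends are directional.
    return (P_M_DBM + 2.0 * G_DB + 10.0 * math.log10(h)
            - pathloss_db(d) - NOISE_DBM)
```

For instance, a 100 m LOS backhaul link with $h=1$ yields roughly $57$ dB of SNR under these assumed parameters, consistent with the noise-limited regime assumed in the analysis.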
We define the association event ${\cal E}$ for the typical user as: \begin{align} {\cal E} = \begin{cases} 1, &\text{ if } P_{\rm s}\|{\bf u}\|^{-\alpha} >P_{\rm m}\|{\bf x}+{\bf u}\|^{-\alpha}, \\ 0, &\text{ otherwise,} \end{cases} \end{align} where ${\cal E}=1$ and ${\cal E}=0$ denote association with the SBS and the ABS, respectively. The typical user at ${\bf x}+{\bf u}$ is {\em under coverage} in the downlink if either of the following two events occurs: \begin{align} & {\cal E} = 1 \text{ and } \mathtt{SNR}_{\rm b}({\bf x})>\theta_1, \mathtt{SNR}_{\rm a}^{\rm SBS}({\bf u})>\theta_2, \text{ or,}\notag\\ &{\cal E} = 0 \text{ and }\mathtt{SNR}_{\rm a}^{\rm ABS}({\bf x}+{\bf u})>\theta_3,\label{eq::coverage::def} \end{align} where $\{\theta_1, \theta_2, \theta_3\}$ are the coverage thresholds for successful demodulation and decoding. \subsection{Resource allocation} \label{subsec::resourceallocation} {The ABS, SBSs, and users are assumed to be capable of communicating on both the mm-wave and sub-6 GHz bands.} The sub-6 GHz band is reserved for the control channel and the mm-wave band is kept for the data channels. The total mm-wave BW $W$ for downlink transmission is partitioned into two parts, $W_{\rm b}=\eta W$ for backhaul and $W_{\rm a}=(1-\eta)W$ for access, where $\eta\in[0,1)$ determines the access-backhaul split. Each BS is assumed to employ a simple round-robin scheduling policy for serving users, under which the total access BW is {shared equally} among its associated users, alternatively referred to as the {\em load} on that particular BS. On the other hand, the backhaul BW is shared among the $n$ SBSs by one of the following three strategies. \begin{enumerate} \item {\em Equal partition.} This is the simplest partition strategy, where the ABS does not require any load information from the SBSs and divides $W_{\rm b}$ equally into $n$ splits.
\item {\em Instantaneous load-based partition.} In this scheme, the SBSs regularly feed back their load information to the ABS, and the ABS accordingly allocates backhaul BW proportional to the {instantaneous} load on each small cell. \item {\em Average load-based partition.} Similar to the previous strategy, the ABS allocates backhaul BW proportional to the load on each small cell. However, in this scheme, the SBSs feed back their load information to the ABS at {sufficiently} long intervals. Hence, the instantaneous fluctuations in the SBS load are averaged out. \end{enumerate} If the SBS at ${\bf x}$ gets backhaul BW $W_{\rm s}({\bf x})$, then \begin{align}\label{eq::bandwidth::partition} W_{\rm s}({\bf x}) = \begin{cases} \frac{W_{\rm b}}{n}, & \text{for equal partition},\\ \multirow{2}{*}{${\frac{N^{\rm SBS}_{{\bf x}}}{N^{\rm SBS}_{{\bf x}}+\sum\limits_{i=1}^{n-1}N^{\rm SBS}_{{\bf x}_i}}} W_{\rm b}$,}& \text{for instantaneous }\\&\text{load-based partition},\\ \multirow{2}{*}{$\frac{{\mathbb{E}}[N^{\rm SBS}_{{\bf x}}]}{{\mathbb{E}}[N^{\rm SBS}_{{\bf x}}]+\sum\limits_{i=1}^{n-1}{\mathbb{E}}[N^{\rm SBS}_{{\bf x}_i}]} W_{\rm b}$,}& \text{for average load-based}\\&\text{ partition}, \end{cases} \end{align} where $N_{\bf x}^{\rm SBS}$ and $N_{{\bf x}_i}^{\rm SBS}$ denote the load on the SBS of the representative hotspot and the load on the SBS at ${\bf x}_i$, respectively. The BW partition is illustrated in Fig.~\ref{fig::system::model::2}. To compare the performance of these strategies, we define the network performance metric of interest next.
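The three backhaul partition rules above can be written compactly as follows (a sketch; the function and argument names are ours, not from the text):

```python
def backhaul_bw(strategy, W_b, n, load_rep=None, loads_other=(),
                mean_rep=None, means_other=()):
    """Backhaul BW W_s(x) of the representative SBS under the three strategies."""
    if strategy == "equal":
        # W_b divided equally among the n SBSs.
        return W_b / n
    if strategy == "instantaneous":
        # Proportional to the instantaneous loads N_x^SBS and N_{x_i}^SBS.
        return W_b * load_rep / (load_rep + sum(loads_other))
    if strategy == "average":
        # Proportional to the mean loads E[N_x^SBS] and E[N_{x_i}^SBS].
        return W_b * mean_rep / (mean_rep + sum(means_other))
    raise ValueError("unknown strategy: " + strategy)
```

For example, with $W_{\rm b}=100$ MHz, $n=3$, a representative load of $2$, and other loads $\{3,5\}$, the instantaneous rule allocates $20$ MHz to the representative SBS.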
\subsection{Downlink data rate}\label{subsec::downlink::data::rate} The maximum achievable downlink data rate, henceforth referred to simply as the {\em data rate}, on the backhaul link between the ABS and the SBS, the access link between the SBS and the user, and the access link between the ABS and the user can be expressed as: \begin{subequations} \begin{alignat}{3} {\cal R}_{\rm b}^{\rm ABS} &= W_{\rm s}({\bf x}) \log_2(1+\mathtt{SNR}_{\rm b}({\bf x})),\label{eq::rate_backhaul}\\ {\cal R}_{\rm a}^{\rm SBS} &= \min\bigg(\frac{W_{\rm a}}{N_{\bf x}^{\rm SBS}}\log_2(1+\mathtt{SNR}_{\rm a}^{\rm SBS}({\bf u})), \frac{ {\cal R}_{\rm b}^{\rm ABS}}{N_{\bf x}^{\rm SBS}}\bigg),\label{eq::rate_sbs_access} \\ {\cal R}_{\rm a}^{\rm ABS} &= \frac{W_{\rm a} }{N_{\bf x}^{\rm ABS}+\sum\limits_{i=1}^{n-1}N_{{\bf x}_i}^{\rm ABS}}\log_2(1+\mathtt{SNR}_{\rm a}^{\rm ABS}({\bf x}+{\bf u})), \label{eq::rate_abs_access} \end{alignat}\label{eq::rate} \end{subequations} where ${W_{\rm s}}({\bf x})$ is defined according to the backhaul BW partition strategies in \eqref{eq::bandwidth::partition}, and $N_{\bf x}^{\rm ABS}$ ($N_{{\bf x}_i}^{\rm ABS}$) denotes the load on the ABS due to the macro users of the representative hotspot (hotspot at ${\bf x}_i$). In \eqref{eq::rate_sbs_access}, the first term inside the $\min$-operation is the data rate achieved under no backhaul constraint when the access BW $W_{\rm a}$ is equally partitioned among $N_{\bf x}^{\rm SBS}$ users. However, due to the finite backhaul, ${\cal R}_{\rm a}^{\rm SBS}$ is limited by the second term. \section{Rate Coverage Probability Analysis} In this Section, we derive the expression for the rate coverage probability of the typical user conditioned on its location ${\bf x}+{\bf u}$, and then decondition over the user and hotspot locations. This deconditioning step averages out all the spatial randomness of the user and hotspot locations in the given network configuration.
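The rate expressions above, including the backhaul-limited $\min$ in the SBS branch of \eqref{eq::rate_sbs_access}, can be sketched as follows (function names are ours):

```python
import math

def rate_backhaul(W_s, snr_b):
    # R_b^ABS = W_s(x) * log2(1 + SNR_b(x)).
    return W_s * math.log2(1.0 + snr_b)

def rate_sbs_access(W_a, n_sbs, snr_a, r_b):
    # R_a^SBS: minimum of the access-limited and backhaul-limited terms,
    # both shared round-robin among the N_x^SBS users of the SBS.
    return min(W_a / n_sbs * math.log2(1.0 + snr_a), r_b / n_sbs)

def rate_abs_access(W_a, n_abs_total, snr_a):
    # R_a^ABS: round-robin share of W_a among all users served by the ABS.
    return W_a / n_abs_total * math.log2(1.0 + snr_a)
```

For example, with $W_{\rm a}=20$ MHz, four SBS users, an access SNR of $15$, and a backhaul rate of $20$ Mbps, the SBS access rate is backhaul-limited to $5$ Mbps per user.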
We first partition each hotspot into SBS and ABS association regions such that the users lying in the SBS (ABS) association region connect to the SBS (ABS). Note that the formation of these {mathematically tractable} association regions is the basis of the distance-dependent load modeling, which is one of the major contributions of this work. \begin{figure} \centering \includegraphics[scale=0.48]{./Fig/association_region} \caption{An illustration of the association region.} \label{fig::association::construction} \end{figure} \begin{figure} \centering \includegraphics[width = 0.8\linewidth]{./Fig/association} \caption{Variation of the association probability to the SBS with distance from the ABS.}\label{fig::sbs::association} \end{figure} \subsection{Association Region and Association Probability} \label{subsec::association::region} We first define the association region in the representative user hotspot as follows. Given that the representative hotspot is centered at ${\bf x}$, the SBS association region is defined as ${\cal S}_{\bf x}=\{{\bf x}+{\bf u}\in b({\bf x},R_{\rm s}):P_{\rm m}\|{\bf x}+{\bf u}\|^{-\alpha}<P_{\rm s}u^{-\alpha}\}$, and the ABS association region is $b({\bf x},R_{\rm s})\cap {\cal S}_{\bf x}^c$. In the following Proposition, we characterize the shape of ${\cal S}_{\bf x}$. \begin{prop}\label{prop::SBS::assocaition::region} The SBS association region ${\cal S}_{\bf x}$ for the SBS at ${\bf x}$ can be written as: ${\cal S}_{\bf x} = $ \begin{equation} \begin{cases} b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg),&0<x<\frac{(1-k_p)R_{\rm s}}{k_p},\\ b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg)\cap b({\bf x},R_{\rm s}),&\frac{(1-k_p)R_{\rm s}}{k_p}\leq x\leq \frac{(1+k_p)R_{\rm s}}{k_p},\\ b({\bf x},R_{\rm s}), &x>\frac{(1+k_p)R_{\rm s}}{k_p}, \end{cases} \label{eq::SBS::assocaition::region} \end{equation}where $k_p =\bigg(\frac{P_{\rm s}}{P_{\rm m}}\bigg)^{1/\alpha}$.
\end{prop} \begin{IEEEproof} Let ${\bf x} = (X_1, X_2)$ be the Cartesian representation of ${\bf x}$, and let $(t_1,t_2)$ denote the Cartesian coordinates of a point of ${\cal S}_{\bf x}$ with respect to the ABS at the origin. Then, following the definition of ${\cal S}_{\bf x}$, $ P_{\rm m}(t_1^2+t_2^2)^{-\alpha/2}\leq P_{\rm s}((t_1-X_1)^2+(t_2-X_2)^2)^{-\alpha/2} \Rightarrow \bigg(t_1 - \frac{X_1}{1-k_p^2} \bigg)^2 +\bigg(t_2 - \frac{X_2}{1-k_p^2} \bigg)^2\leq \bigg(\frac{k_p x}{1-k_p^2}\bigg)^2$. Thus, $\{(t_1,t_2)\} = b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))$. Since ${\cal S}_{\bf x}$ cannot extend beyond $b({\bf x},R_{\rm s})$, ${\cal S}_{\bf x} = b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg)\cap b({\bf x},R_{\rm s})$. The farthest point of $b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))$ from ${\bf x}$ lies at distance $\frac{k_p^2x}{1-k_p^2}+\frac{k_px}{1-k_p^2}=\frac{k_px}{1-k_p}$, and its boundary point nearest to ${\bf x}$ lies at distance $\frac{k_px}{1-k_p^2}-\frac{k_p^2x}{1-k_p^2}=\frac{k_px}{1+k_p}$. Hence, when $0<x<\frac{1-k_p}{k_p}R_{\rm s}$, $b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg)\subset b({\bf x},R_{\rm s})$. Beyond this limit of $x$, a part of $b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))$ lies outside $b({\bf x},R_{\rm s})$. Finally, when $x>\frac{1+k_p}{k_p}R_{\rm s}$, $b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))\supset b({\bf x},R_{\rm s})$. \end{IEEEproof} This formulation of ${\cal S}_{\bf x}$ is illustrated in Fig.~\ref{fig::association::construction}. We now compute the SBS association probability as follows.
\begin{lemma}\label{lemm::association::sbs} Conditioned on the user belonging to the hotspot at ${\bf x}$, the association probability to the SBS is given by: ${\cal A}_{\rm s}({\bf x}) ={\cal A}_{\rm s}(x)=$ \begin{align} & \int_0^{2\pi}\frac{\bigg(\min\big(R_{\rm s},x\frac{k_p(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi)}{1-k_p^2}\big)\bigg)^2}{2\pi R_{\rm s}^2}{\rm d}\xi\label{eq::association::abs}\\ &= \begin{cases} \frac{k_p^2x^2}{(1-k_p^2)^2R_{\rm s}^2}&\text{if }0<x< \frac{1-k_p}{k_p}R_{\rm s},\\ \frac{{\cal C}\big(R_{\rm s},\frac{k_p x}{1-k_p^2}, \frac{k_p^2 x}{1-k_p^2}\big)}{\pi R_{\rm s}^2}&\text{if }\frac{1-k_p}{k_p}R_{\rm s}\leq x\leq \frac{1+k_p}{k_p}R_{\rm s},\\ 1&\text{if }x>\frac{1+k_p}{k_p}R_{\rm s}\label{eq::association::abs::2} \end{cases}, \end{align} where \begin{multline*}{\cal C}(r_1,r_2,d)= r_1^2\tan^{-1}\bigg(\frac{t}{d^2+r_1^2-r_2^2}\bigg)\\+r_2^2\tan^{-1}\bigg(\frac{t}{d^2-r_1^2+r_2^2}\bigg)-{\frac{t}{2}} \end{multline*} is the area of intersection of two intersecting circles of radii $r_1$ and $r_2$ whose centers are a distance $d$ apart, with $ t=(d+r_1+r_2)^{\frac{1}{2}}(d+r_1-r_2)^{\frac{1}{2}}(d-r_1+r_2)^{\frac{1}{2}}(-d+r_1+r_2)^{\frac{1}{2}}$. The association probability to the ABS is given by ${\cal A}_{\rm m}({ x})= 1-{\cal A}_{\rm s}({x})$.
\end{lemma} \begin{IEEEproof} Conditioned on the location of the hotspot center at ${\bf x}$, ${\cal A}_{\rm s}({\bf x})= {\mathbb{P}}({\cal E} = 1|{\bf x})=$ \begin{align*} &{\mathbb{E}}[{\bf 1}(P_{\rm m}\|{\bf x}+{\bf u}\|^{-\alpha}<P_{\rm s}\|{\bf u}\|^{-\alpha})|{\bf x}]={\mathbb{P}}({\bf x}+{\bf u}\in {\cal S}_{\bf x}|{\bf x})\\ &={\mathbb{P}}(P_{\rm m}(x^2+u^2+2xu\cos\xi)^{-\alpha/2}<P_{\rm s}u^{-\alpha}|x)\\&= {\mathbb{P}}(u^2(1-k_p^2)-2x\cos\xi k_p^2u-k_p^2x^2<0|x)\\&\myeq{a}{\mathbb{P}}\bigg(u\in\bigg(0, \frac{xk_p\big(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi\big)}{1-k_p^2}\bigg), \\&\qquad\qquad\xi\in(0, 2\pi]\bigg|x\bigg)\\& =\int_{0}^{2\pi}\int_{0}^{R_{\rm s}}{\bf 1}\bigg(0\leq u < \frac{xk_p\big(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi\big)}{1-k_p^2}\bigg)\\&\times f_U(u){\rm d}u \frac{1}{2\pi}{\rm d}{\xi}, \end{align*} where $\xi = \arg({\bf u})-\arg({\bf x})$ is uniformly distributed in $(0,2\pi]$. Here, (a) follows from solving the quadratic inequality inside the indicator function. The last step follows from deconditioning over ${u}$ and $\xi$. Finally, \eqref{eq::association::abs} is obtained by {evaluating} the integral over $u$. Note that, due to angular symmetry, ${\cal A}_{\rm s}({\bf x})={\cal A}_{\rm s}({ x})$. Alternatively, $${\cal A}_{\rm s}(x)=\int_{{\cal S}_{\bf x}}f_{U}(u){\rm d}u\frac{1}{2\pi}{\rm d}{\xi}=\frac{|{\cal S}_{\bf x}|}{\pi R_{\rm s}^2}.$$ The final result in \eqref{eq::association::abs::2} {is} obtained by using Proposition~\ref{prop::SBS::assocaition::region}. \end{IEEEproof} In \figref{fig::sbs::association}, we plot ${\cal A}_{\rm s}(x)$ as a function of $x$. We now evaluate the coverage probability of the typical user, which is the probability of the occurrence of the events defined in \eqref{eq::coverage::def}.
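The closed form of Lemma~\ref{lemm::association::sbs} can be sanity-checked by Monte Carlo. A minimal sketch, with an illustrative $k_p=0.5$ chosen so that $x$ falls in the first case of the piecewise expression (not a value from the system parameter table):

```python
import math
import random

def assoc_prob_mc(x, R_s, k_p, trials=200000, rng=random):
    # Fraction of users, uniform in b(x, R_s), satisfying d_SBS < k_p * d_ABS,
    # i.e., P_s * d_SBS^{-alpha} > P_m * d_ABS^{-alpha} (SBS association).
    hits = 0
    for _ in range(trials):
        u = R_s * math.sqrt(rng.random())      # distance to the SBS
        xi = 2.0 * math.pi * rng.random()      # angle between u and x
        d_abs = math.sqrt(x * x + u * u + 2.0 * x * u * math.cos(xi))
        hits += u < k_p * d_abs
    return hits / trials

def assoc_prob_inner(x, R_s, k_p):
    # First case of the lemma: A_s(x) = k_p^2 x^2 / ((1 - k_p^2)^2 R_s^2).
    return k_p ** 2 * x ** 2 / ((1.0 - k_p ** 2) ** 2 * R_s ** 2)
```

With $k_p=0.5$, $R_{\rm s}=30$, and $x=9$, the closed form gives ${\cal A}_{\rm s}(x)=0.04$, and the empirical fraction matches it to within Monte Carlo error.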
\begin{theorem}\label{thm::coverage::probability}The coverage probability is given by: \begin{equation}\label{eq::coverage::probability} \mathtt{P_c} =\int\limits_{0}^{R-R_{\rm s}}\big(\mathtt{P_c}_{\rm s}(\theta_1,\theta_2|x)+\mathtt{P_c}_{\rm m}(\theta_3|x)\big)f_X(x){\rm d}x, \end{equation} where $\mathtt{P_c}_{\rm s} (\theta_1,\theta_2|x) =$ \begin{multline*} \int\limits_{0}^{2\pi}\int\limits_{0}^{u_{\max}(x,\xi)} \bigg(p({x}) F_h\bigg(\frac{ x^{\alpha_L}\beta{\tt N}_0W\theta_1}{P_{\rm m}G^2 },m_L\bigg) +(1-p({x}))\\ \times F_h\bigg(\frac{ x^{\alpha_{NL}}\beta{\tt N}_0W\theta_1}{P_{\rm m}G^2 },m_{NL}\bigg)\bigg)\bigg(p({u}) F_h\bigg(\frac{ u^{\alpha_L}\beta{\tt N}_0W\theta_2}{P_{\rm s}G },m_{L}\bigg) \\+(1-p({u})) F_h\bigg(\frac{ u^{\alpha_{NL}}\beta{\tt N}_0W\theta_2}{P_{\rm s} G},m_{NL}\bigg)\bigg)\frac{f_{ U}(u)}{2\pi}{\rm d}{u}\:{\rm d}{\xi}, \end{multline*} where $u_{\max}(x,\xi) = \min\bigg(R_{\rm s}, \frac{xk_p\big(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi\big)}{1-k_p^2}\bigg)$ and $F_{h}(\cdot)$ is the {complementary cumulative distribution function (CCDF) of the Gamma distribution}, and $\mathtt{P_c}_{\rm m} (\theta_3|x)=$ \begin{multline*} \int\limits_{0}^{2\pi}\int\limits_{u_{\max}(x,\xi)}^{R_{\rm s}} {\bigg( p(\kappa(x,u,\xi)) F_h\bigg(\frac{{\kappa(x,u,\xi)}^{\alpha_L}\beta{\tt N}_0W\theta_3}{P_{\rm m}G },m_L\bigg)}\\ + (1-p(\kappa(x,u,\xi))) F_h\bigg(\frac{ {\kappa(x,u,\xi)}^{\alpha_{NL}}\beta{\tt N}_0W\theta_3}{P_{\rm m}G },m_{NL}\bigg)\bigg)\\\times\frac{f_{ U}(u){\rm d}u\:{\rm d}\xi}{2\pi}, \end{multline*} where $\kappa(x,u,\xi)=({x^2+u^2+2 x u \cos\xi})^{1/2}$. \end{theorem} \begin{IEEEproof} See Appendix~\ref{app::coverage::probability}. \end{IEEEproof} As expected, the coverage probability is the summation of two terms, each corresponding to the probability of occurrence of the two mutually exclusive events appearing in \eqref{eq::coverage::def}.
\subsection{Load distributions}\label{subsec::load::dist} While the load distributions for the PPP-based models are well understood~\cite{OffloadingSingh,SinghKulkarniSelfBackhaul}, they are not directly applicable to the 3GPP-inspired finite model used in this paper. Consequently, in this Section, we provide a novel approach to characterize the ABS and SBS loads for this model. As we saw in \eqref{eq::rate_abs_access}, the load on the ABS has two components: one due to the users of the {representative} hotspot connecting to the ABS (denoted by $N_{{\bf x}}^{\rm ABS}$), and the other due to the macro users of {the} other clusters, which we lump into a single random variable, $N_{\rm o}^{\rm ABS} = \sum_{i=1}^{n-1}N_{{\bf x}_i}^{\rm ABS}$. On the other hand, $N_{\bf x}^{\rm SBS}$ and $N_{\rm o}^{\rm SBS}=\sum_{i=1}^{n-1}N_{{\bf x}_i}^{\rm SBS}$ respectively denote the load on the SBS at ${\bf x}$ and the sum load on all SBSs except the one at ${\bf x}$. First, we obtain the PMFs of $N_{{\bf x}}^{\rm ABS}$ and $N_{{\bf x}}^{\rm SBS}$ using the fact that, given the location of the representative hotspot center at ${\bf x}$, {each user belongs to the association region ${\cal S}_{\bf x}$ or $b({\bf x},R_{\rm s})\cap {{\cal S}}_{\bf x}^c$ according to an i.i.d. Bernoulli random variable.} \begin{lemma}\label{lemm::load::characterization::abs} Given that the representative hotspot is centered at ${\bf x}$, the load on the ABS due to the macro users in the hotspot at ${\bf x}$ ($N^{\rm ABS}_{{\bf x}}$) and the load on the SBS at $\bf x$ ($N^{\rm SBS}_{\bf x}$) are distributed as follows: {\sc case}~$1$ ($N_{{\bf x}_i} = \bar{m}$, $\forall\ i=1,\dots,n$).
\begin{subequations} \begin{alignat}{2} &{\mathbb{P}}(N^{\rm ABS}_{{\bf x}}=k|{\bf x})= {\bar{m}-1\choose k-1}{\cal A}_{\rm m}(x)^{k-1}{\cal A}_{\rm s}(x)^{\bar{m}-k}\label{eq::load::abs::load_x::fixedN},\\ &{\mathbb{P}}(N^{\rm SBS}_{{\bf x}}=k|{\bf x})= {\bar{m}-1\choose k-1}{\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k},\label{eq::load::sbs::load_x::fixedN} \end{alignat} \end{subequations} where $k=1,2,\dots,\bar{m}$. {\sc case}~$2$ ($N_{{\bf x}_i} \stackrel{i.i.d.}{\sim} {\tt Poisson}(\bar{m})$, $\forall\ i=1,\dots,n$). \begin{subequations} \begin{alignat}{2} &{\mathbb{P}}(N^{\rm ABS}_{{\bf x}}=k|{\bf x})= \frac{(\bar{m}{\cal A}_{\rm m}(x))^{k-1}}{(k-1)!}e^{-\bar{m}{\cal A}_{\rm m}(x)}\label{eq::load::abs::load_x::PoissonN},\\ &{\mathbb{P}}(N^{\rm SBS}_{{\bf x}}=k|{\bf x})= \frac{(\bar{m}{\cal A}_{\rm s}(x))^{k-1}}{(k-1)!}e^{-\bar{m}{\cal A}_{\rm s}(x)},\label{eq::load::sbs::load_x::PoissonN} \end{alignat} \end{subequations} where $k\in{\mathbb{Z}}^{+}$. \end{lemma} \begin{IEEEproof}See Appendix~\ref{app::load::characterization::abs}. \end{IEEEproof} {We present the first moments of these two load variables in the following Corollary, which will be required for the evaluation of the rate coverage under the average load-based partition and for the derivation of easy-to-compute approximations of the rate coverage in the sequel.} \begin{cor}\label{cor::mean::representative::loads} {The conditional means of $N_{\bf x}^{\rm ABS}$ and $N_{\bf x}^{\rm SBS}$, given the center of the representative hotspot at $\bf x$, are} \begin{align*} &\text{{\sc case}~$1$: }{\mathbb{E}}[N_{\bf x}^{\rm ABS}] = (\bar{m}-1){\cal A}_{\rm m}(x)+1,\quad {\mathbb{E}}[N_{\bf x}^{\rm SBS}] = (\bar{m}-1){\cal A}_{\rm s}(x)+1,\\ &\text{{\sc case}~$2$: }{\mathbb{E}}[N_{\bf x}^{\rm ABS}] = \bar{m}{\cal A}_{\rm m}(x)+1,\quad {\mathbb{E}}[N_{\bf x}^{\rm SBS}] = \bar{m}{\cal A}_{\rm s}(x)+1.
\end{align*} \end{cor} We now obtain the PMFs of $N_{\rm o}^{\rm ABS}$ and $N_{\rm o}^{\rm SBS}$ in the following Lemma. {Note that, since the ${\bf x}_i$ are i.i.d., $N_{\rm o}^{\rm ABS}$ and $N_{\rm o}^{\rm SBS}$ are independent of ${\bf x}$.} The exact PMF of $N_{{\rm o}}^{\rm ABS}$ ($N_{{\rm o}}^{\rm SBS}$) takes the form of an $(n-1)$-fold discrete convolution and hence is not computationally efficient beyond very small values of $n$. We therefore present an {alternative} easy-to-use expression of this PMF by invoking the central limit theorem (CLT). In the numerical {results} Section, we verify that this approximation is {tight} even for moderate values of $n$. \begin{lemma}\label{lemm::load::characterization::others} Given that the typical user belongs to a hotspot at ${\bf x}$, the load on the ABS due to all other $n-1$ hotspots is distributed as: {$ \frac{N_{\rm o}^{\rm ABS}-\upsilon_{\rm m}}{\sigma_{\rm m}}\sim{\cal N}(0,1)\text{ (for large $n$)}$} and the sum of the loads on the other SBSs at ${\bf x}_1,{\bf x}_2,\dots,{\bf x}_{n-1}$ is distributed as: $ \frac{N_{\rm o}^{\rm SBS}-\upsilon_{\rm s}}{\sigma_{\rm s}}\sim{\cal N}(0,1) \text{ (for large $n$)}$, where ${\cal N}(0,1)$ denotes the standard normal distribution, $ \upsilon_{\rm m}=(n-1)\bar{m}{\mathbb{E}}[{\cal A}_{\rm m}(X)], \upsilon_{\rm s}=(n-1)\bar{m}{\mathbb{E}}[{\cal A}_{\rm s}(X)] $ and \begin{align*} &\text{for {\sc case}~$1$, }\sigma_{\rm m}^2=(n-1)\big[\bar{m}{\mathbb{E}}[{\cal A}_{\rm m}(X){\cal A}_{\rm s}(X)]\\&\qquad\qquad\qquad\quad+\bar{m}^2{\rm Var}[{\cal A}_{\rm m}(X)]\big]=\sigma_{\rm s}^2, \\ &\text{for {\sc case}~$2$, }\sigma_{\rm m}^2=(n-1)\big[\bar{m}{\mathbb{E}}[{\cal A}_{\rm m}(X)]+\bar{m}^2{\rm Var}[{\cal A}_{\rm m}(X)]\big],\\ &\sigma_{\rm s}^2=(n-1)\big[\bar{m}{\mathbb{E}}[{\cal A}_{\rm s}(X)]+\bar{m}^2{\rm Var}[{\cal A}_{\rm s}(X)]\big].
\end{align*} Here, \begin{multline*} {\mathbb{E}}[{\cal A}_{\rm m}(X)]=\int_{0}^{R-R_{\rm s}}{\cal A}_{\rm m}(x)f_{X}(x){\rm d}x,\text{ and } \\ {\rm Var}[{\cal A}_{\rm m}(X)] = \int\limits_{0}^{R-R_{\rm s}}\big({\cal A}_{\rm m}(x)\big)^2f_X(x){\rm d}x - ({\mathbb{E}}[{\cal A}_{\rm m}(X)])^2, \end{multline*} and ${\mathbb{E}}[{\cal A}_{\rm s}(X)]$, ${\rm Var}[{\cal A}_{\rm s}(X)]$ can be obtained similarly by replacing ${\cal A}_{\rm m}(X)$ with ${\cal A}_{\rm s}(X)$ in the above expressions. \end{lemma} \begin{IEEEproof} See Appendix~\ref{app::load::characterization::others}. \end{IEEEproof} \subsection{Rate Coverage Probability} We first define the downlink rate coverage probability (or simply, rate coverage) as follows. \begin{ndef}[Rate coverage probability]\label{def::rate::coverage} The rate coverage probability of a link with BW $\tilde{W}$ is defined as the probability that the maximum achievable data rate (${\cal R}$) exceeds a certain threshold $\rho$, i.e., ${\mathbb{P}}({\cal R}>\rho) =$ \begin{align} {\mathbb{P}}\bigg(\tilde{W}\log_{2}(1+\mathtt{SNR})>\rho\bigg) = {\mathbb{P}}(\mathtt{SNR}>2^{{\rho}/{\tilde{W}}}-1). \end{align} \end{ndef} Hence, the rate coverage probability is the coverage probability evaluated at a modified $\mathtt{SNR}$ threshold. We now evaluate the rate coverage probability under the different backhaul BW partition strategies for {a general distribution of $N_{{\bf x}_i}$ and $N_{{\bf x}}$ in the following Theorem.
We later specialize this result for {\sc case}s~$1$ and~$2$ for numerical evaluation.} \begin{theorem}\label{thm::rate::cov::equal::partition}The rate coverage probability for a target data rate $\rho$ is given by: \begin{equation} \mathtt{P_r} = \mathtt{P_r}_{\rm m} + \mathtt{P_r}_{\rm s}, \end{equation} where $\mathtt{P_r}_{\rm m}$ ($\mathtt{P_r}_{\rm s}$) denotes the ABS (SBS) rate coverage, i.e., the probability that the typical user receives a data rate greater than or equal to $\rho$ and is served by the ABS (SBS). The ABS rate coverage is given by: \begin{multline}\label{eq::rate::cov::macro} \mathtt{P_r}_{\rm m}=\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}{\mathbb{E}}_{N_{\bf x}^{\rm ABS}}\bigg[\mathtt{P_c}_{\rm m}\bigg(2^{\frac{\rho(t+N_{\bf x}^{\rm ABS})}{W_{\rm a}}}-1|x\bigg)\bigg]\\\times f_X(x){\rm d}x \frac{1}{\sigma_{\rm m}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}{\rm d}t. \end{multline} The SBS rate coverage depends on the backhaul BW partition strategy.
For equal partition, \begin{equation} \mathtt{P_r}_{\rm s} =\int\limits_{0}^{R-R_{\rm s}} {\mathbb{E}}_{N_{\bf x}^{\rm SBS}}\bigg[\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n N^{\rm SBS}_{\bf x}}{W_{\rm b}}}-1,2^{\frac{\rho N^{\rm SBS}_{\bf x}}{W_{\rm a}}}-1\big|x\bigg)\bigg]f_X(x){\rm d}x,\label{eq::rate::cov::sbs::eq::partition} \end{equation} \vspace{-1em} for instantaneous load-based partition, \begin{multline} \label{eq::rate::cov::inst-load::sbs} \mathtt{P_r}_{\rm s}= \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}{\mathbb{E}}_{N_{\bf x}^{\rm SBS}}\bigg[\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho( N^{\rm SBS}_{\bf x}+t)}{W_{\rm b}}}-1,2^{\frac{\rho N^{\rm SBS}_{\bf x}}{W_{\rm a}}}-1\big| x\bigg)\bigg]\\\times f_{X}(x){\rm d}x \frac{1}{\sigma_{\rm s}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}{\rm d}t, \end{multline} and for average load-based partition, \begin{multline} \label{eq::rate::cov::avg-load::sbs} \mathtt{P_r}_{\rm s}= \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}{\mathbb{E}}_{N_{\bf x}^{\rm SBS}}\bigg[\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho N^{\rm SBS}_{\bf x}\left({\mathbb{E}}[N_{\bf x}^{\rm SBS}]+\bar{m}t\right)}{W_{\rm b}{\mathbb{E}}[N_{\bf x}^{\rm SBS}]}}-1,\\2^{\frac{\rho N^{\rm SBS}_{\bf x}}{W_{\rm a}}}-1\big| x\bigg)\bigg]f_{X}(x){\rm d}x \frac{e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t. \end{multline} \end{theorem} \begin{IEEEproof}See Appendix~\ref{app::rate::cov::equal::partition}. \end{IEEEproof} {Note that the key enabler of the expression of $\mathtt{P_r}$ in Theorem~\ref{thm::rate::cov::equal::partition} is the fact that the system is considered to be noise-limited. 
Including the SBS interference in the analysis is not straightforward from this point, since it would involve coupling between the coverage probability and the load, as both depend on the locations of the other $n-1$ SBSs.} Having derived the exact expressions of the rate coverage in Theorem~\ref{thm::rate::cov::equal::partition}, we present approximations of these expressions by replacing (i) $N_{\bf x}^{\rm ABS}$ in $\mathtt{P_r}_{\rm m}$ with its mean ${\mathbb{E}}[N_{\bf x}^{\rm ABS}]$, and (ii) $N_{\bf x}^{\rm SBS}$ in $\mathtt{P_r}_{\rm s}$ with its mean ${\mathbb{E}}[N_{\bf x}^{\rm SBS}]$, in the following Lemma. \begin{lemma}\label{lemm::approximation} {The ABS rate coverage can be approximated as} \begin{multline} \label{eq::rate::cov::macro::approx::fixedN} \mathtt{P_r}_{\rm m}\approx\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm m}\bigg(2^{\frac{\rho(t+{\mathbb{E}}[N_{\bf x}^{\rm ABS}])}{W_{\rm a}}}-1|x\bigg)f_X(x){\rm d}x \\ \times \frac{e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}}{\sigma_{\rm m}\sqrt{2\pi}}{\rm d}t. \end{multline} The SBS rate coverage can be approximated as follows.
For equal partition, \begin{align} \mathtt{P_r}_{\rm s} \approx \int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n{\mathbb{E}}[ N^{\rm SBS}_{\bf x}]}{W_{\rm b}}}-1,2^{\frac{\rho {\mathbb{E}}[N^{\rm SBS}_{\bf x}]}{W_{\rm a}}}-1\big|x\bigg)f_X(x){\rm d}x,\label{eq::rate::cov::sbs::eq::partition::approx} \end{align} for instantaneous load-based partition, \begin{multline} \label{eq::rate::cov::inst-load::sbs::approx} \mathtt{P_r}_{\rm s}\approx \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho( {\mathbb{E}}[N^{\rm SBS}_{\bf x}]+t)}{W_{\rm b}}}-1,2^{\frac{\rho {\mathbb{E}}[ N^{\rm SBS}_{\bf x}]}{W_{\rm a}}}-1\big| x\bigg)\\\times f_{X}(x){\rm d}x \frac{e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}}{\sigma_{\rm s}\sqrt{2\pi}}{\rm d}t, \end{multline} and for average load-based partition, \begin{multline} \label{eq::rate::cov::avg-load::sbs::approx} \mathtt{P_r}_{\rm s}\approx\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho \left({\mathbb{E}}[N_{\bf x}^{\rm SBS}]+\bar{m}t\right)}{W_{\rm b}}}-1,2^{\frac{\rho {\mathbb{E}}[ N^{\rm SBS}_{\bf x}]}{W_{\rm a}}}-1\big| x\bigg)\\\times f_{X}(x){\rm d}x \frac{e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t.
\end{multline} \end{lemma} \begin{table*}[t] \centering \caption{{Key system parameters}} \label{tab::parameters} { \begin{tabular}{|l|l|l|} \hline Notation & Parameter & Value \\ \hline $P_{\rm m},\ P_{\rm s}$ & BS transmit powers & 50, 20 dBm \\ \hline $\alpha_L, \alpha_{NL}$ & Path-loss exponent & 2.0, 3.3\\ \hline $\beta$ & Path loss at 1 m & 70 dB \\\hline $G$ & Main lobe gain & 18 dB \\ \hline $\mu$ & LOS range constant & 170 m \\ \hline ${\tt N}_0W$ & Noise power & \begin{tabular}[c]{@{}l@{}}-174 dBm/Hz+ $10\log_{10} W$ \\+10 dB {(noise-figure)}\end{tabular} \\ \hline $m_L,m_{NL}$& Parameter of Nakagami distribution& 2, 3\\\hline $R$, $R_{\rm s}$& Macrocell and hotspot radius & 200 m, 30 m\\ \hline $\bar{m}$& Average number of users per hotspot & 5\\\hline $\rho$& Rate threshold & 50 Mbps\\ \hline \end{tabular} } \end{table*} {We now specialize the result of Theorem~\ref{thm::rate::cov::equal::partition} for {\sc case} s~$1$ and $2$ in the following corollaries.} \begin{cor}\label{cor::rate::cov::fixedN}For {\sc case}~$1$, i.e., when $N_{{\bf x}_i} = \bar{m}$, $\forall\ i=1,\dots,n$, the ABS rate coverage is \begin{multline}\label{eq::rate::cov::macro::fixedN} \mathtt{P_r}_{\rm m} = \sum\limits_{k=1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm m}\bigg(2^{\frac{\rho(t+k)}{W_{\rm a}}}-1|x\bigg)\\\times{\cal A}_{\rm m}(x)^{k-1}{\cal A}_{\rm s}(x)^{\bar{m}-k} f_X(x){\rm d}x \frac{1}{\sigma_{\rm m}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}{\rm d}t. \end{multline}The SBS rate coverages for the three backhaul BW partition strategies are expressed as follows.
(i) For equal partition, \begin{multline} \mathtt{P_r}_{\rm s} = \sum\limits_{k = 1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n k}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big|x\bigg)\\\times {\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k}f_X(x){\rm d}x,\label{eq::rate::cov::sbs::fixedN::eq::partition} \end{multline} (ii) for instantaneous load-based partition, \begin{multline}\label{eq::rate::cov::inst-load::sbs::fixedN}\mathtt{P_r}_{\rm s} = \sum\limits_{k = 1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho(k+t)}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg)\\\times{\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k}f_{X}(x){\rm d}x \frac{e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}}{\sigma_{\rm s}\sqrt{2\pi}}{\rm d}t, \end{multline} and (iii) for average load-based partition, \begin{multline}\label{eq::rate::cov::avg-load::sbs::fixedN} \mathtt{P_r}_{\rm s}=\sum\limits_{k = 1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho k\left(1+(\bar{m}-1){\cal A}_{\rm s}(x)+\bar{m} t\right)}{W_{\rm b}(1+(\bar{m}-1){\cal A}_{\rm s}(x))}}-1,\\2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg){\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k}\times f_{X}(x){\rm d}x\\\times \frac{e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t. 
\end{multline} \end{cor} \begin{IEEEproof}The result can be obtained from Theorem~\ref{thm::rate::cov::equal::partition} by using the PMFs of $N_{\bf x}^{\rm ABS}$, $N_{\bf x}^{\rm SBS}$, $N_{o}^{\rm ABS}$ and $N_{o}^{\rm SBS}$ from Lemmas~\ref{lemm::load::characterization::abs} and \ref{lemm::load::characterization::others} and substituting ${\mathbb{E}}[N_{\bf x}^{\rm SBS}]$ {from} Corollary~\ref{cor::mean::representative::loads} for {\sc case}~$1$. \end{IEEEproof} \begin{cor}\label{cor::rate::cov::PoissonN}For {\sc case}~$2$, i.e., when $N_{{\bf x}_i} \stackrel{i.i.d.}{\sim} {\tt Poisson}(\bar{m})$, $\forall\ i=1,\dots,n$, the ABS rate coverage is expressed as \begin{multline}\label{eq::rate::cov::macro::PoissonN} \mathtt{P_r}_{\rm m} = \sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm m}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm m}(x)}\\\times \mathtt{P_c}_{\rm m} \bigg(2^{\frac{\rho(t+k)}{W_{\rm a}}}-1|x\bigg) f_X(x){\rm d}x \frac{1}{\sigma_{\rm m}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}{\rm d}t. \end{multline}The SBS rate coverages for the three backhaul BW partition strategies are expressed as follows.
(i) For equal partition, \begin{multline} \mathtt{P_r}_{\rm s} = \sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm s}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm s}(x)}\\\times \mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n k}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big|x\bigg)f_X(x){\rm d}x,\label{eq::rate::cov::sbs::PoissonN::eq::partition} \end{multline} (ii) for instantaneous load-based partition, \begin{multline}\label{eq::rate::cov::inst-load::sbs::PoissonN}\mathtt{P_r}_{\rm s} = \sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm s}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm s}(x)}\\\times \mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho(k+t)}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg)f_{X}(x){\rm d}x \frac{1}{\sigma_{\rm s}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}{\rm d}t, \end{multline} and (iii) for average load-based partition, \begin{multline}\label{eq::rate::cov::avg-load::sbs::PoissonN} \mathtt{P_r}_{\rm s}=\sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm s}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm s}(x)}\\\times \mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho k\left(1+\bar{m}{\cal A}_{\rm s}(x)+\bar{m}t\right)}{W_{\rm b}(1+\bar{m}{\cal A}_{\rm s}(x))}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg)\\\times f_{X}(x){\rm d}x \frac{1}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t. 
\end{multline} \end{cor} \begin{IEEEproof}The result can be similarly obtained from Theorem~\ref{thm::rate::cov::equal::partition} by using the PMFs of $N_{\bf x}^{\rm ABS}$, $N_{\bf x}^{\rm SBS}$, $N_{o}^{\rm ABS}$ and $N_{o}^{\rm SBS}$ from Lemmas~\ref{lemm::load::characterization::abs} and \ref{lemm::load::characterization::others}, and substituting ${\mathbb{E}}[N_{\bf x}^{\rm SBS}]$ {from} Corollary~\ref{cor::mean::representative::loads} for {\sc case}~$2$. \end{IEEEproof} \begin{figure*}% \centering \subfigure[Equal partition.]{ \label{fig::comparison::rate::cov::bw::eq} \includegraphics[width=.3\linewidth]{./Fig/ratecov_eq_bw.pdf}} \hspace{8pt}% \subfigure[Instantaneous load-based partition.]{% \label{fig::comparison::rate::cov::bw::load} \includegraphics[width=.3\linewidth]{./Fig/ratecov_inst_load_bw.pdf}} \subfigure[Average load-based partition.\newline]{% \label{fig::comparison::rate::cov::bw::avg::load} \includegraphics[width=0.3\linewidth]{./Fig/ratecov_avg_bw.pdf}} \caption[Rate coverage probability for different bandwidths ($\rho = 50$ Mbps, $n=10$) for {\sc case} s~$1$ and $2$ obtained by Corollaries~\ref{cor::rate::cov::fixedN} and \ref{cor::rate::cov::PoissonN}.]{Rate coverage probability for different bandwidths for {\sc case} s~$1$ and $2$ ($\rho = 50$ Mbps, $n=10$). {Lines and markers indicate theoretical and simulation results, respectively. Theoretical results for {\sc case} s $1$ and $2$ are obtained from Corollaries~\ref{cor::rate::cov::fixedN} and \ref{cor::rate::cov::PoissonN}, respectively.}} \label{fig::comparison::rate::cov}% \end{figure*} \begin{figure} \centering \includegraphics[scale=0.47]{./Fig/snr_plot} \caption{The CDF plot of SNR from the ABS and SBS ($P_{\rm m}= 50$ dBm, $P_{\rm s} = 20$ dBm).
The markers indicate the empirical CDF obtained from Monte Carlo simulations.}\label{fig::snrplot} \end{figure} \section{Results and Discussion}\label{sec::results} \subsection{Trends of rate coverage}\label{subsec::trends} In this section, we {verify the accuracy of} our analysis of rate coverage with Monte Carlo simulations of the network model delineated in Section~\ref{sec::system::model} with parameters listed in Table~\ref{tab::parameters}. {For each simulation, the number of iterations was set to $10^6$. Since $\mathtt{P_r}$ fundamentally {depends upon} $\mathtt{SNR}$, we first plot the cumulative distribution function (CDF) of SNR without beamforming in Fig.~\ref{fig::snrplot}, averaged over user locations. Precisely, we plot $ {\mathbb{E}}_{\bf x}\left[{\mathbb{P}}\left(\frac{h_{\rm m} P_{\rm m}\|{\bf x}\|^{-\alpha}}{{\tt N}_0W}<\theta\right)\right] = \int_0^{R-R_{\rm s}}\mathtt{P_c}_{\rm m}(\theta|x)f_X(x){\rm d}x $ and $ {\mathbb{E}}_{\bf u}\left[{\mathbb{P}}\left(\frac{h_{\rm s} P_{\rm s}\|{\bf u}\|^{-\alpha}}{{\tt N}_0W}<\theta\right)\right] =\mathtt{P_c}_{\rm s}(-\infty,\theta|x) $, where $\mathtt{P_c}_{\rm m}$ and $\mathtt{P_c}_{\rm s}$ were defined in Theorem~\ref{thm::coverage::probability}, both from simulation and using our analytical results, and observe a perfect match.} We now plot the rate coverages for different user distributions ({\sc case} s $1$ and $2$) for three different backhaul BW partition strategies in Figs.~\ref{fig::comparison::rate::cov::bw::eq}-\ref{fig::comparison::rate::cov::bw::avg::load}. Recall that one part of the ABS and SBS load was approximated using the CLT in Lemma~\ref{lemm::load::characterization::others} for efficient computation. Yet, we obtain a perfect match between simulation and analysis even for $n=10$ for {\sc case}~$1$ and {\sc case}~$2$.
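To make the numerical evaluation concrete, the double integral in the approximate ABS rate coverage of Lemma~\ref{lemm::approximation} can be computed with standard quadrature. The sketch below is illustrative only: Gauss--Hermite nodes handle the Gaussian variable $t$, and a trapezoidal rule handles the distance $x$; the coverage function \texttt{pc} and density \texttt{f\_x} are toy stand-ins, not the paper's expressions.

```python
import numpy as np

def rate_cov_abs(pc, f_x, mean_load, v_m, s_m, rho, W_a, x_max,
                 n_herm=20, n_x=400):
    """Approximate ABS rate coverage: Gauss-Hermite nodes for the
    Gaussian variable t, trapezoidal rule for the distance x."""
    u, w = np.polynomial.hermite.hermgauss(n_herm)  # nodes/weights for e^{-u^2}
    t = v_m + np.sqrt(2.0) * s_m * u                # change of variables
    x = np.linspace(1e-6, x_max, n_x)
    total = 0.0
    for ti, wi in zip(t, w):
        theta = 2.0 ** (rho * (ti + mean_load) / W_a) - 1.0  # SNR threshold
        g = pc(theta, x) * f_x(x)
        inner = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))  # trapezoid in x
        total += wi * inner
    return total / np.sqrt(np.pi)  # Hermite weights sum to sqrt(pi)

# Toy stand-ins for P_c_m(theta|x) and f_X(x) (NOT the paper's formulas):
pc = lambda theta, x: np.exp(-1e-9 * theta * (1.0 + x) ** 2)
f_x = lambda x: 2.0 * x / 170.0 ** 2   # triangular density on [0, R - R_s]
p = rate_cov_abs(pc, f_x, mean_load=5.0, v_m=0.0, s_m=1.0,
                 rho=50e6, W_a=300e6, x_max=170.0)
```

With a coverage function identically equal to one, the routine returns the integral of the density, which is a quick sanity check on the quadrature normalization.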
{Further, we observe that (i) $\mathtt{P_r} = 0$ for $\eta=1$, since this corresponds to the extreme case when no BW is given to access links, and (ii) the rate coverage is maximized for a particular access-backhaul BW split ($\eta^*=\arg\max_{\{\eta\}}\mathtt{P_r}$).} Also note that the rate coverage trends for {\sc case} s~$1$ and $2$ are the same, although $\mathtt{P_r}$ for {\sc case}~$1$ is slightly higher than $\mathtt{P_r}$ for {\sc case}~$2$, since the representative cluster, on average, has more users in {\sc case}~$2$ than in {\sc case}~$1$ (see Corollary~\ref{cor::mean::representative::loads}). However, due to space constraints, we only present the results of {\sc case}~$1$ in subsequent discussions. \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_comp.pdf} \caption{Comparison of backhaul partition strategies for {\sc case}~$1$ ($\rho = 50$ Mbps, $n=10$).} \label{fig::comparison::rate::cov::bw} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/low_mu.pdf} \caption{Comparison of backhaul BW partition strategies ($\rho = 50$ Mbps, $n=10$, $W = 600$ MHz) for {\sc case}~$1$ and $\mu= 30$ m. {The results are obtained from Corollary~\ref{cor::rate::cov::fixedN}.}\newline\newline}\label{fig::comparison::rate::cov::bw::lowmu} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_approx_fixed_user.pdf} \caption{Comparison of the exact expression (Corollary~\ref{cor::rate::cov::fixedN}) and approximate expression (Lemma~\ref{lemm::approximation}) of rate coverage probability for {\sc case}~$1$ ($\rho = 50$ Mbps, $W = 600$ MHz, $n=10$). Lines and markers indicate exact and approximate results, respectively. }\label{fig::comparison::rate::approx::fixedN} \end{figure} \subsubsection{Comparison of backhaul BW partition strategies} In Fig.~\ref{fig::comparison::rate::cov::bw}, we overlay $\mathtt{P_r}$ for the three backhaul BW partition strategies.
We observe that the maximum rate coverage, $\mathtt{P_r}^* = \mathtt{P_r}(\eta^*)$ (marked as `*' in the figures), for instantaneous load-based partition dominates $\mathtt{P_r}^*$ for average load-based partition, {and} $\mathtt{P_r}^*$ for average load-based partition dominates $\mathtt{P_r}^*$ for equal partition. {Also note that $\eta^*$ differs across combinations of BW partition strategy and $W$. We further compare these three strategies in a high blocking environment in Fig.~\ref{fig::comparison::rate::cov::bw::lowmu} by setting $\mu = 30$ m and observe the same performance ordering of the three strategies. As expected, $\mathtt{P_r}$ is in general lower in this case.} That said, it should be kept in mind that instantaneous load-based partition requires more frequent feedback of the load information from the SBSs and hence has the highest signaling overhead among the three strategies. The average load-based partition requires comparatively less signaling overhead since it does not require frequent feedback. On the other hand, equal partition does not have this overhead at all. This motivates an interesting performance-complexity trade-off for the design of cellular networks with IAB. \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_users.pdf} \caption{Rate coverage for different numbers of users per hotspot for {\sc case}~$1$ ($W = 600$ MHz, $\rho = 50$ Mbps). The values of $\mathtt{P_r}$ are computed using Lemma~\ref{lemm::approximation}.}\label{fig::comparison::rate::cov::users} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth ]{./Fig/load_critical.pdf} \caption{Total cell-load up to which the IAB-enabled network outperforms the macro-only network (for instantaneous load-based partition).\newline}\label{fig::total::critical::load} \end{figure} \subsubsection{Effect of system BW} We observe the effect of increasing system BW on rate coverage in Fig.~\ref{fig::comparison::rate::cov::bw}.
As expected, $\mathtt{P_r}$ increases as $W$ increases. However, the increment of $\mathtt{P_r}^*$ saturates for very high values of $W$ since high noise power {degrades} the link spectral efficiency. Another interesting observation is that, starting from $\eta = 0$ to $\eta^*$, $\mathtt{P_r}$ does not increase monotonically. This is due to the fact that sufficient BW needs to be steered from access to backhaul so that the network with IAB performs better than the macro-only network (corresponding to $\eta = 0$). \subsubsection{Accuracy of approximation}We now plot $\mathtt{P_r}$ obtained by the approximations in Lemma~\ref{lemm::approximation} in Fig.~\ref{fig::comparison::rate::approx::fixedN}. It is observed that the approximation is remarkably close to the exact values of $\mathtt{P_r}$ obtained by Corollary~\ref{cor::rate::cov::fixedN}. Motivated by the tightness of the approximation, we proceed with the easy-to-compute expressions of $\mathtt{P_r}$ obtained by Lemma~\ref{lemm::approximation} instead of the exact expressions (Corollary~\ref{cor::rate::cov::fixedN}) for the metrics evaluated in the sequel, namely, critical load, median rate, and $5^{th}$ percentile rate. {It is important to note that each numerical evaluation of these metrics requires a large number of computations of $\mathtt{P_r}$, which is highly inefficient if $\mathtt{P_r}$ is computed by simulation; this further highlights the importance of the analytical expressions derived in this paper.} \subsection{Critical load} We plot the variation of $\mathtt{P_r}$ with $\bar{m}$ in Fig.~\ref{fig::comparison::rate::cov::users}. We observe that as $\bar{m}$ increases, more users share the BW and, as a result, $\mathtt{P_r}$ decreases. However, the optimality of $\mathtt{P_r}$ completely disappears for very large values of $\bar{m}$ ($10<\bar{m}<20$ in this case).
This implies that for a given BW $W$ there exists a {\em critical total cell-load} ($n\bar{m}$) beyond which the gain obtained by the IAB architecture completely disappears. {Observing Fig.~\ref{fig::total::critical::load}, we find that the critical total cell-load varies linearly with the system BW.} The reason for the existence of the critical total cell-load can be intuitively explained as follows. Recall that the SBS rate ${\cal R}_{\rm a}^{\rm SBS}$ was limited by the backhaul constraint ${\cal R}_{\rm b}^{\rm ABS}/N_{\bf x}^{\rm SBS}$. When $\bar{m}$ is high, $N_{\bf x}^{\rm SBS}$ is also high, which puts a stringent backhaul constraint on ${\cal R}_{\rm a}^{\rm SBS}$. Hence, an ABS can serve more users by direct macro-links at the target rate instead of allocating any backhaul partition. \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_50ile.pdf} \caption{Median rate for {\sc case}~$1$ for instantaneous load-based partition ($n=10$).}\label{fig::median::rate} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_95ile.pdf} \caption{$5^{th}$ percentile rate for {\sc case}~$1$ for instantaneous load-based partition ($n=10$).}\label{fig::5thpercentile::rate} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_5ile_comp.pdf} \caption{{Median rate for {\sc case}~$1$ for different backhaul BW partition strategies ($n=10$, $W = 600$ MHz).}\newline}\label{fig::median::rate::comp} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_95ile_comp.pdf} \caption{{$5^{th}$ percentile rate for {\sc case}~$1$ for different backhaul BW partition strategies ($n=10$, $W = 600$ MHz).}}\label{fig::5thpercentile::rate::comp} \end{figure} \subsection{Median and $5^{th}$ percentile rates} We now shift our attention to two more performance metrics of interest, the median ($50^{th}$ percentile) and $5^{th}$ percentile rate, which are denoted as
$\rho_{50}$ and $\rho_{95}$, respectively. These rates are defined as the values where the rate CDF attains $0.5$ and $0.05$, respectively, i.e., $\mathtt{P_r} = 0.5$ at $\rho=\rho_{50}$ and $\mathtt{P_r} = 0.95$ at $\rho=\rho_{95}$. Figs.~\ref{fig::median::rate} and \ref{fig::5thpercentile::rate} illustrate $\rho_{50}$ and $\rho_{95}$, respectively, for different $W$. {We first observe that, for a given $\eta$, these rates increase linearly with $W$. This is because, in all the expressions of rate coverage, $\rho$ and $W$ appear as a ratio ($\rho/W$). Thus, once we find a desired rate coverage at a particular $\rho$ for a given $W$, the same rate coverage will be observed for $kW$ at target data rate $k\rho$ (where $k$ is a positive constant).} Further, we notice that the median rate is relatively flat around the maximum compared to the $5^{th}$ percentile rate. Also, the optimal $\eta$ does not vary significantly (it stays close to 0.4 in our setup) for the median and $5^{th}$ percentile rates. In Figs.~\ref{fig::median::rate::comp} and \ref{fig::5thpercentile::rate::comp}, we compare the three backhaul BW partition strategies in terms of these two rates. As expected, the ordering in performance is similar to the one observed for $\mathtt{P_r}^*$. Interestingly, from Fig.~\ref{fig::5thpercentile::rate::comp}, it appears that the average and instantaneous load-based partition policies have almost identical performance in terms of the $5^{th}$ percentile rate. {This is because $\rho_{95}$ lies in the tail of the rate distribution, which is not significantly affected by the difference between instantaneous and average load. However, the performance gap becomes prominent once the median rate is considered.} \section{Conclusion}\label{sec::conclusion} In this paper, we proposed the first 3GPP-inspired analytical framework for two-tier mm-wave HetNets with IAB and investigated three backhaul BW partition strategies.
In particular, our model was inspired by the spatial configurations of the users and BSs considered in 3GPP simulation models of mm-wave IAB, where the SBSs are deployed at the centers of user hotspots. Under the assumption that the mm-wave communication is noise limited, we evaluated the downlink rate coverage probability. As a key intermediate step, we characterized the PMFs of load on the ABS and SBS for two different user distributions per hotspot. Our analysis leads to two important system-level insights: (i) for each of the three performance metrics, namely downlink rate coverage probability, median rate, and $5^{th}$ percentile rate, there exists an optimal access-backhaul bandwidth partition split for which the metric is maximized, and (ii) there exists a maximum total cell-load that can be supported using the IAB architecture. { This work has numerous extensions. From the modeling perspective, although the noise power dominates interference power in most of the operating regimes in mm-wave networks, it is important to consider interference from the ABS and the SBSs in the access links, which may not be negligible in {a very} dense setup. For the analysis of rate coverage in this case, one needs to evaluate the joint distribution of interference and load, which will be spatially coupled by the locations of the SBSs. Building on this baseline model of IAB, one can also study the spatial multiplexing gains in resource allocation obtained by massive multi-user multiple-input multiple-output (MU-MIMO) transmissions in the downlink. Further, this framework can be used to study IAB-enabled cellular networks where the control signalling for cell association is also performed in mm-wave. Under this setup, the problems of mm-wave beam sweeping and corresponding cell association delays can be studied.
Another useful extension of this work is to consider an additional {set of} users having open access to all the SBSs, while in this paper we only considered users with closed access to the SBSs at the hotspot center. From the stochastic geometry perspective, it will be useful to develop analytical generative models for correlated blocking of the mm-wave signals, making these analytical models more accurate. As noted earlier, this work also {lays} the foundation for the analytical characterization of cell-load in the 3GPP-inspired HetNet models. Using the fundamentals of this load modeling approach, one can further design optimal bias-factors depending on the volume of cell load at each hotspot and improve the per-user rates. Finally, the analysis can be extended to design delay-sensitive routing/scheduling policies, which is also a relevant research direction for IAB-enabled networks. } \section{Introduction}\label{sec::intro} With the exponential rise in data demand far exceeding the capacity of the traditional macro-only cellular network operating in sub-6 GHz bands, network densification using mm-wave base stations (BSs) is becoming a major driving technology for the 5G wireless evolution~\cite{dehos2014millimeter}. While heterogeneous cellular networks (HetNets) with low power SBSs overlaid with traditional macro BSs improve the spectral efficiency of the access link (the link between a user and its serving BS), mm-wave communication can further boost the data rate by offering high bandwidth. {That said, one of the main hindrances in the way of large-scale deployment of small cells is that} the existing high-speed optical fiber backhaul network that connects the BSs to the network core is not scalable to the extent of ultra-densification envisioned for small cells~\cite{quek2013small,tipmongkolsilp2011evolution,DhillonCaireBackhaul}.
However, with recent advancements in mm-wave communication and highly directional beamforming~\cite{rangan2014millimeter,GhoshMMwave}, it is possible to replace the so-called {\em last-mile fibers} for SBSs by establishing fixed mm-wave backhaul links between the SBS and the MBS equipped with fiber backhaul, also known as the anchored BS (ABS), thereby achieving Gigabits per second (Gbps) range data rates over backhaul links~\cite{GaoMassiveMimomm-waveBackhaul}. While mm-wave fixed wireless backhaul is targeted to be a part of the first phase of the commercial roll-out of 5G~\cite{mm-waveMagazineDahlmamn}, 3GPP is exploring a more ambitious solution, IAB, where the ABSs will use {the} same spectral resources and infrastructure of mm-wave transmission to serve cellular users in access as well as the SBSs in backhaul~\cite{accessbackhaul3gpp}. In this paper, we develop a tractable analytical framework {for} IAB-enabled mm-wave cellular networks using tools from stochastic geometry and obtain some design insights that will be useful for the ongoing pre-deployment studies on IAB. \subsection{Background and related works} Over recent years, stochastic geometry has emerged as a powerful tool for modeling and analysis of cellular networks operating in sub-6 GHz~\cite{haenggi2012stochastic}. The locations of the BSs and users are commonly modeled as independent Poisson point processes (PPPs) over an infinite plane. This model, initially developed for the analysis of traditional macro-only cellular networks~\cite{AndrewsTractable}, was further extended for the analysis of HetNets in~\cite{dhillon2012modeling,mukherjee2012distribution,Prasanna_globecomm,jo2012heterogeneous}. In follow-up works, this PPP-based HetNet model was used to study many different aspects of cellular networks such as load balancing, BS cooperation, multiple-input multiple-output (MIMO), energy harvesting, and many more.
Given the activity this area has seen over the past few years, any attempt towards summarizing all key relevant prior works here would be futile. Instead, it would be far more beneficial for the interested readers to refer to dedicated surveys and tutorials~\cite{elsawy2013stochastic,elsawy2016modeling,andrews2016primer,mukherjee2014analytical} that already exist on this topic. While these initial works were implicitly done for cellular networks operating in the sub-6 GHz spectrum, tools from stochastic geometry have also been leveraged further to characterize their performance in the mm-wave spectrum~\cite{AndrewsMMWave,Direnzo_mmwave,MillimeterWaveBai2015,mmWaveHetNetTurgut}. These mm-wave cellular network models specifically focus on the mm-wave propagation characteristics, which significantly differ from those of sub-6 GHz~\cite{rangan2014millimeter}, such as the severity of blocking of mm-wave signals by physical obstacles like walls and trees, directional beamforming using antenna arrays, and interference being dominated by noise~\cite{Kulkarni_backhaul_asilomar}. These initial modeling approaches were later extended to study different problems specific to mm-wave cellular networks, such as cell search~\cite{liAndrews2017directional}, antenna beam alignment~\cite{HeathAlkhateeb2017BeamAssociation}, {and cell association in the mm-wave spectrum~\cite{Elshaer_cell_association}.} With this brief introduction, we now shift our attention to the main focus of this paper, which is IAB in mm-wave cellular networks. In what follows, we provide the rationale behind mm-wave IAB and how stochastic geometry can {be used for its performance evaluation.} For traditional cellular networks, it is reasonable to assume that the capacity achieved by the access links is not limited by the backhaul constraint on the serving BS since all BSs have access to the high capacity wired backhaul.
As expected, the backhaul constraint was ignored in almost all prior works on stochastic geometry-based modeling and analysis of cellular networks. However, with increasing network densification with small cells, it may not be feasible to connect every SBS to the wired backhaul network, which is limited by cost, infrastructure, maintenance, and scalability. These limitations motivated a significant body of research on the expansion of cellular networks by deploying relay nodes connected to the ABS by wireless backhaul links, {e.g., see}~\cite{backhaul_survey_1}. Among different techniques of wireless backhaul, 3GPP included layer 3 relaying as a part of the long term evolution advanced (LTE-A) standard in Release 10~\cite{relay3gpp1,relay3gpp2} for coverage extension of the cellular network. Layer 3 relaying follows the principle of the IAB architecture, which is often synonymously referred to as {\em self-backhauling}, where the relay nodes have the functionality of an SBS and the ABS multiplexes its time-frequency resources to establish access links with the users and wireless backhaul links with SBSs that may not have access to wired backhaul~\cite{johansson2016self}. However, despite being a part of the standard, layer 3 relays have never really been deployed on a massive scale in 4G, mostly due to the spectrum shortage in sub-6 GHz. For instance, in urban regions with high capacity demands, the operators are not willing to {relinquish} {any part of the cellular bandwidth (a costly and scarce resource)} for wireless backhaul. However, with recent advancements in mm-wave communication, IAB has gained substantial interest since the spectral bottleneck will not be a primary concern once the high bandwidth in the mm-wave spectrum (at least 10x the cellular BW in sub-6 GHz) is exploited. Some of the notable industry initiatives driving mm-wave IAB are mm-wave small cell access and backhauling (MiWaveS)~\cite{miWaveS} and 5G-Crosshaul~\cite{crosshaul}.
In 2017, 3GPP also {started working on} a new study item to investigate the performance of IAB-enabled mm-wave cellular networks~\cite{accessbackhaul3gpp}. Although backhaul is becoming a primary bottleneck of cellular networks, there is very little existing work on stochastic geometry-based analyses considering the backhaul constraint~\cite{QuekBackhaul,suryaprakash2014analysis,SinghAndrews2014}. While these works are focused on the traditional networks in sub-6 GHz, contributions on IAB-enabled mm-wave HetNets are even sparser, except for an extension of the PPP-based model~\cite{SinghKulkarniSelfBackhaul}, where the authors modeled wired and wirelessly backhauled BSs and users as three independent PPPs. In \cite{Ganti-self-backhaul,tabassum2016analysis}, a similar modeling approach was used to study IAB in sub-6 GHz using full-duplex BSs. The fundamental shortcoming of these PPP-based models is the assumption of independent locations of the BSs and users, which are spatially coupled in actual networks. For instance, in reality, the users form spatial clusters, commonly known as {\em user hotspots}, and the centers of the user hotspots are targeted as the potential cell-sites of the short-range mm-wave SBSs~\cite{Saha_J1}. Not surprisingly, such spatial configurations of users and BSs are at the heart of the 3GPP simulation models~\cite{saha20173gpp}. To address this shortcoming of the analytical models, in this paper, we propose the {\em first 3GPP-inspired} {\em stochastic geometry-based} finite network model for the performance analysis of HetNets with IAB. The key contributions are summarized next. \subsection{Contributions and outcomes} \label{subsec::contributions} \subsubsection{New tractable model for IAB-enabled mm-wave HetNet} We develop a realistic and tractable analytical framework to study the performance of IAB-enabled mm-wave HetNets.
Similar to the models used in 3GPP-compliant {simulations}~\cite{accessbackhaul3gpp}, we consider a two-tier HetNet where a {circular macrocell with ABS at the center} is overlaid by numerous low-power small cells. The users are assumed to be non-uniformly distributed over the macrocell, forming hotspots, and the SBSs are located at the geographical centers of these user hotspots. {The non-uniform distribution of the users and the spatial coupling of their locations with those of the SBSs means that the analysis of this setup is drastically different from the state-of-the-art PPP-based models. Further, the consideration of a single macrocell (justified by the noise-limited nature of mm-wave communications) allows us to glean crisp insights into the coverage zones, which further facilitate a novel analysis of load on ABS and SBSs\footnote{In our discussion, BS load refers to the number of users connected to the BS.}.} Assuming that the total system BW is partitioned into two splits for access and backhaul communication, we use this model to study the performance of three backhaul BW partition strategies, namely, (i) {\em equal partition}, where each SBS gets an equal share of the BW irrespective of its load, (ii) {\em instantaneous load-based partition}, where the ABS frequently collects information from the SBSs on their instantaneous loads and partitions the backhaul BW proportional to the {instantaneous} load on each SBS, and (iii) {\em average load-based partition}, where the ABS collects information from the SBSs on {their average loads} and partitions the backhaul BW proportional to the average load on each SBS. \subsubsection{New load modeling and downlink rate analysis} For the purpose of performance evaluation and comparisons between the aforementioned strategies, we evaluate the downlink rate coverage probability, i.e., the probability that the downlink data rate experienced by a randomly selected user will exceed a target data rate.
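For concreteness, the three backhaul BW partition rules above can be restated as small allocation functions. This is an illustrative sketch with our own (hypothetical) function and variable names; the paper specifies only the proportional-allocation rules themselves.

```python
def equal_partition(loads, W_b):
    """Each SBS receives W_b/n regardless of its load."""
    return [W_b / len(loads)] * len(loads)

def inst_load_partition(loads, W_b):
    """Backhaul BW proportional to the instantaneous SBS loads."""
    total = sum(loads)
    return [W_b * l / total for l in loads]

def avg_load_partition(mean_loads, W_b):
    """Backhaul BW proportional to the average (long-run) SBS loads."""
    total = sum(mean_loads)
    return [W_b * m / total for m in mean_loads]

loads = [2, 5, 3]  # instantaneous loads of n = 3 SBSs
shares = inst_load_partition(loads, W_b=100.0)  # [20.0, 50.0, 30.0]
```

The instantaneous rule tracks the current loads at the cost of frequent feedback, while the average rule uses only long-run statistics; equal partition needs no feedback at all, which is the performance-complexity trade-off examined in Section~\ref{sec::results}.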
As key intermediate steps of our analysis, we {characterize} the two essential components of rate coverage, which are (i) the {signal-to-noise ratio} ($\mathtt{SNR}$)-coverage probability, and (ii) the distribution of ABS and SBS load, {which} directly impacts the amount of resources allocated by the serving BS to the user of interest. We compute the {probability mass functions} (PMFs) of the ABS and the SBS loads assuming that the number of users per hotspot is fixed. We {then} relax this fixed user assumption by considering an independent Poisson distribution for the number of users in each hotspot. Due to a significantly different spatial model, our approach to load modeling is quite different from the load modeling in PPP-based networks~\cite{SinghAndrews2014}. \subsubsection{System design insights} Using the proposed analytical framework, we obtain the following system design insights. \begin{itemize} \item We {compare} the three backhaul BW partition strategies in terms of three metrics: (i) rate coverage probability, (ii) median rate, and (iii) $5^{th}$ percentile rate. {Our numerical results indicate that for a given combination of the backhaul BW partition strategy and the performance metric of interest, there exists an optimal access-backhaul BW split for which the metric is maximized.} \item Our results demonstrate that the optimal access-backhaul partition fractions for median and $5^{th}$ percentile rates are not very sensitive to the choice of backhaul BW partition strategies. Further, the median and $5^{th}$ percentile rates are invariant to the system BW. \item For given infrastructure and spectral resources, the IAB-enabled network outperforms the macro-only network with no SBSs up to a critical volume of total cell-load, beyond which the performance gains disappear and its performance converges to that of the macro-only network. Our numerical results also indicate that this critical total {cell-load} increases almost linearly with the system BW.
\end{itemize} \section{System Model}\label{sec::system::model} \subsection{mm-wave Cellular System Model} \begin{figure} \centering \subfigure[User and BS locations]{ \includegraphics[width=.98\linewidth]{./Fig/system_model_1.pdf} \label{fig::system::model::1}} \subfigure[Resource Allocation]{ \label{fig::system::model::2} \includegraphics[width=.99\linewidth]{./Fig/system_model_2.pdf} } \caption{Illustration of the system model.} \label{fig::system::model} \end{figure} \subsubsection{BS and user locations} Inspired by the spatial configurations used in 3GPP {simulations}~\cite{accessbackhaul3gpp,saha20173gpp} for a typical outdoor deployment scenario of a two-tier HetNet, we assume that $n$ SBSs are deployed inside a circular macrocell of radius $R$ (denoted by $b({\bf 0},R)$) with the macro BS at its center. We assume that this BS is connected to the core network with high-speed optical fiber and {is hence} an ABS. Note that, in contrast to the infinite network models (e.g., the PPP-based networks defined over $\mathbb{R}^2$), which are suitable for interference-dominated networks (such as conventional cellular networks in sub-6 GHz), we are limiting the complexity of the system model by considering a single macrocell. This assumption is justified by the noise-limited nature of mm-wave communications~\cite{Kulkarni_backhaul_asilomar}. {Moreover, as will be evident in the sequel, this setup will allow us to glean crisp insights into the properties of this network despite a more general user distribution model (discussed next) compared to the PPP-based model.} We model a user hotspot at ${\bf x}$ as $b({\bf x},R_{\rm s})$, i.e., a circle of radius $R_{\rm s}$ centered at ${\bf x}$.
We assume that the macrocell contains $n$ user hotspots, located at $\{{\bf x}_i\equiv (x_i,\varphi_i), i=1,\dots,n\}$, which are distributed uniformly at random in $b({\bf 0},R-R_{\rm s})$.\footnote{{For notational simplicity, we use $x\equiv\|{\bf x}\|,\ \forall\ {\bf x}\in{\mathbb{R}}^2$.}} Thus, $\{{\bf x}_i\}$ is a sequence of independently and identically distributed (i.i.d.) random vectors with the distribution of ${\bf x}_i$ being: \begin{align}\label{eq::sbs::distribution} f_{\bf X}({\bf x}_i)=\begin{cases}\frac{x_i}{\pi(R-R_{\rm s})^2}, &\text{when }0<x_i\leq R- R_{\rm s}, 0<\varphi_i\leq 2\pi,\\ 0, &\text{otherwise.} \end{cases} \end{align}The marginal probability density function (PDF) of $x_i$ is obtained as: $f_{X}(x_i)=2x_i/(R-R_{\rm s})^2$ for $0<x_i\leq R- R_{\rm s}$ and $\varphi_{i}$ is a uniform random variable in $(0,2\pi]$. Note that this construction ensures that all hotspots lie entirely inside the macrocell, i.e., $b({\bf x}_i,R_{\rm s})\cap b({\bf 0},R)^c = \emptyset,\ \forall\ i$. {We assume that the number of users in the hotspot centered at ${\bf x}_i$ is $N_{{\bf x}_i}$, where} {\sc case}~$1$: $N_{{\bf x}_i}=\bar{m}$ is fixed and equal for all $i=1,\dots, n$ and {\sc case}~$2$: $\{N_{{\bf x}_i}\}$ is a sequence of i.i.d. Poisson random variables with mean $\bar{m}$. These ${N}_{{\bf x}_i}$ users are assumed to be located {\em uniformly at random} {independently of each other} in each hotspot. Thus, the location of a user belonging to the hotspot at ${\bf x}_i$ is denoted by ${\bf x}_i+{\bf u}$, where ${\bf u} \equiv (u,\xi)$ is a random vector in $\mathbb{R}^2$ with PDF: \begin{align}\label{eq::users::distribution} f_{\bf U}({\bf u})=\begin{cases}\frac{u}{\pi R_{\rm s}^2}, &\text{when }0< {u}\leq R_{\rm s}, 0<\xi\leq 2\pi\\ 0, &\text{otherwise.} \end{cases} \end{align} {The marginal PDF of $u$} is: $f_{U}(u)=2u/R_{\rm s}^2$ for $0<u\leq R_{\rm s}$ and $\xi$ is a uniform random variable in $(0,2\pi]$. 
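As an illustrative aside, the two distributions above can be sampled by inverse-transform draws (the radius of a uniform point in a disc is distributed as the disc radius times $\sqrt{U}$ for $U$ uniform on $(0,1)$); this is a sketch only, and the numerical values of $R$ and $R_{\rm s}$ below are hypothetical placeholders, not parameters from the paper.

```python
import math
import random

def sample_hotspot_center(R, R_s, rng=random):
    """Draw a hotspot center uniformly in b(0, R - R_s).

    Inverse transform of the marginal f_X(x) = 2x/(R - R_s)^2:
    x = (R - R_s) * sqrt(U), with the angle phi uniform on (0, 2*pi].
    """
    x = (R - R_s) * math.sqrt(rng.random())
    phi = 2 * math.pi * rng.random()
    return x, phi

def sample_user_offset(R_s, rng=random):
    """Draw a user offset u uniformly in b(0, R_s) around a hotspot center."""
    u = R_s * math.sqrt(rng.random())
    xi = 2 * math.pi * rng.random()
    return u, xi

# Hypothetical radii; every hotspot disc b(x_i, R_s) stays inside b(0, R):
R, R_s = 200.0, 30.0
x, _ = sample_hotspot_center(R, R_s)
assert x + R_s <= R
```

The construction guarantees $b({\bf x}_i,R_{\rm s})\subset b({\bf 0},R)$ by design, since the sampled radius never exceeds $R-R_{\rm s}$.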
We assume that the SBSs are deployed at the centers of the user hotspots, i.e., at $\{{\bf x}_i\}$. The ABS provides wireless backhaul to these SBSs over mm-wave links. See Fig.~\ref{fig::system::model::1} for an illustration. Having defined the spatial distribution of SBSs and users, we now define the {\em typical user} for which we will compute the rate coverage probability. The typical user is a user chosen uniformly at random from the network. The hotspot to which the typical user belongs is termed the {\em representative hotspot}. We denote the center of the representative hotspot as ${\bf x}$, where ${\bf x}={\bf x}_n$ without loss of generality, and the location of the typical user as ${\bf x}+{\bf u}$. For {\sc case}~$1$, the number of users in the representative cluster is $N_{\bf x}=N_{{\bf x}_n}=\bar{m}$. For {\sc case}~$2$, although $N_{{\bf x}_i}$ is i.i.d. Poisson, $N_{\bf x}$ {does not follow the same distribution} since the typical user is more likely to belong to a hotspot with a larger number of users~\cite{Qin2017}. If $n\to \infty$, $N_{\bf x}$ follows a weighted Poisson distribution with PMF $ {\mathbb{P}}(N_{\bf x} = k ) = \frac{\bar{m}^{k-1}e^{-\bar{m}}}{(k-1)!}$, where $k\in{\mathbb{Z}}^+$. It can be easily shown that if $N_{\bf x}$ follows a weighted Poisson distribution, {we have} $N_{\bf x} = N_{{\bf x}_n}+1$ in distribution. Hence, for $n\to \infty$, one can obtain the distribution of $N_{\bf x}$ by first choosing {a} hotspot uniformly at random and then {adding one user to it}. However, when $n$ is finite, $N_{\bf x}$ will lie between $N_{{\bf x}_{n}}$ and $N_{{\bf x}_{n}}+1$ ($N_{{\bf x}_n}\leq N_{\bf x}\leq N_{{\bf x}_{n}}+1$). The lower bound on $N_{\bf x}$ is trivially achieved when $n=1$. Since the actual distribution of $N_{\bf x}$ for a finite number of hotspots ($n>1$) is not tractable, we fix the typical user for {\sc case}~$2$ according to the following Remark.
\begin{remark}\label{rem::typical} For {\sc case}~$2$, we first choose a hotspot centered at ${\bf x}$ uniformly at random from $n$ hotspots, call it the representative hotspot, and then add the typical user at ${\bf x}+{\bf u}$, where $\bf u$ follows the PDF in \eqref{eq::users::distribution}. Although this process of selecting the typical user is asymptotically exact when $N_{{\bf x}_i}\stackrel{i.i.d.}{\sim}{\tt Poisson}(\bar{m}),\ \forall\ i = 1,2,\dots,n$, and $n\to\infty$, the resulting approximation will have negligible impact on the analysis since {our interest will be in the cases where the macrocells have a moderate} to high number of hotspots~\cite{3gppreportr12}. \end{remark} \subsubsection{Propagation assumptions}\label{subsec::prop::assumption} All backhaul and access transmissions are assumed to be performed in the mm-wave spectrum. {We assume that the ABS and SBS transmit at constant power spectral densities (PSDs) $P_{\rm m}/W$ and $P_{\rm s}/W$, respectively, over a system BW $W$. The received power at ${\bf z}$ from a transmitter at ${\bf y}$ is given by $P \psi h L({\bf z},{\bf y})^{-1}$, where $P$ is a generic variable denoting transmit power with $P\in\{P_{\rm m},P_{\rm s}\}$, $\psi$ is the combined antenna gain of the transmitter and receiver, and $L ({\bf z},{\bf y})= 10^{((\beta +10\alpha\log_{10}\|{\bf z}-{\bf y}\|)/10)}$ is the associated pathloss.} We assume that all links undergo i.i.d. Nakagami-$m$ fading. Thus, $h\sim{\tt Gamma}(m,m^{-1})$. \subsubsection{Blockage model} \label{subsec::blockagemodel} Since mm-wave signals are sensitive to physical blockages such as buildings, trees, and even human bodies, the LOS and NLOS path-loss characteristics have to be explicitly included in the analysis. Along the lines of \cite{Bai_mmWave}, we assume an exponential blockage model.
Each mm-wave link of distance $r$ between the transmitter (ABS/SBS) and receiver (SBS/user) is LOS or NLOS according to an independent Bernoulli random variable with LOS probability $p(r) =\exp(-r/\mu)$, where $\mu$ is the LOS range constant that depends on the geometry and density of blockages. {Since the blockage environment seen by the ABS-to-SBS, SBS-to-user, and ABS-to-user links may be very different, one can assume three different blockage constants $\{\mu_{\rm b},\mu_{\rm s},\mu_{\rm m}\}$, respectively, instead of a single blockage constant $\mu$. As will be evident in the technical exposition, this does not require any major changes in the analysis. However, in order to keep our notation simple, we will assume the same $\mu$ for all the links in this paper.} Also, LOS and NLOS links are likely to follow different fading statistics, which is incorporated by assuming different Nakagami-$m$ parameters for LOS and NLOS, denoted by $m_L$ and $m_{NL}$, respectively. We assume that all BSs are equipped with steerable directional antennas and the user equipments have omni-directional antennas. Let $G$ {be} the directivity gain of the transmitting and receiving antennas of the BSs (ABS and SBS). Assuming perfect beam alignment, the effective gains on backhaul and access links are $G^2$ and $G$, respectively. We assume that the system is noise-limited, i.e., at any receiver, the interference is negligible compared to the thermal noise with PSD ${\tt N}_0$.
Hence, the $\mathtt{SNR}$s of the backhaul link from the ABS to the SBS at ${\bf x}$, the access link from the SBS at ${\bf x}$ to the user at ${\bf x}+{\bf u}$, and the access link from the ABS to the user at ${\bf x}+{\bf u}$ are respectively expressed as: \begin{subequations}\label{eq::sbr::equations} \begin{alignat}{3} &\mathtt{SNR}_{\rm b}({\bf x}) = \frac{P_{\rm m} G^2 h_{\rm b}L({\bf 0},{\bf x})^{-1}}{{\tt N}_0 W},\\ &\mathtt{SNR}_{\rm a}^{\rm SBS}({\bf x}+{\bf u}) = \frac{P_{\rm s}G h_{\rm s}L({\bf x},{\bf x}+{\bf u})^{-1}}{{\tt N}_0W},\\ &\mathtt{SNR}_{\rm a}^{\rm ABS}({\bf x}+{\bf u}) = \frac{P_{\rm m} G h_{\rm m}L({\bf 0},{\bf x}+{\bf u})^{-1}}{{\tt N}_0W}, \end{alignat} \end{subequations} where $\{h_{\rm b}, h_{\rm s}, h_{\rm m}\}$ are the corresponding small-scale fading gains. \subsubsection{User association}\label{subsec::user::association} { We assume that the SBSs operate in closed-access, i.e., users in a hotspot can only connect to the SBS at {the} hotspot center or to the ABS. This model is inspired by the way smallcells with closed user groups, for instance the privately owned femtocells, are dropped in the HetNet models considered by 3GPP~\cite[Table A.2.1.1.2-1]{access2010further}.} Given the complexity of user association in mm-wave using beam sweeping techniques, we assume a much simpler user association scheme performed by signaling in sub-6 GHz, analogous to the current LTE standard~\cite{HeathAlkhateeb2017BeamAssociation}. In particular, the BSs broadcast paging signals using omni-directional antennas in sub-6 GHz and the user associates to the candidate serving BS based on the maximum received power over the paging signals. Since the broadcast signaling is in sub-6 GHz, we assume the same power-law pathloss function for both LOS and NLOS components with path-loss exponent $\alpha$ due to the rich scattering environment.
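A minimal numerical sketch of the pathloss model and the generic form shared by the three $\mathtt{SNR}$ expressions in \eqref{eq::sbr::equations} is given below; all parameter values ($G$, $\alpha$, $\beta$, powers, and noise) are hypothetical placeholders, not values used in the paper.

```python
import math

def pathloss(d, alpha, beta):
    """L(z, y) = 10^((beta + 10*alpha*log10(d)) / 10), with d = ||z - y||."""
    return 10 ** ((beta + 10 * alpha * math.log10(d)) / 10)

def snr(P_tx, gain, h, d, alpha, beta, N0, W):
    """Generic SNR = P * gain * h * L(d)^{-1} / (N0 * W), as in (3a)-(3c)."""
    return P_tx * gain * h / (pathloss(d, alpha, beta) * N0 * W)

# Backhaul link (ABS -> SBS) sees the squared beamforming gain G^2,
# while access links see only G (perfect-beam-alignment assumption).
G, alpha, beta = 10.0, 2.0, 0.0   # hypothetical antenna gain and pathloss params
N0W = 1e-9                        # hypothetical total noise power N0 * W
snr_backhaul = snr(1.0, G**2, 1.0, 100.0, alpha, beta, N0W, 1.0)
snr_access = snr(1.0, G, 1.0, 100.0, alpha, beta, N0W, 1.0)
assert snr_backhaul > snr_access  # the G^2 vs G gain difference
```

Note that a fading draw $h\sim{\tt Gamma}(m,m^{-1})$ would multiply the deterministic part above; it is set to $1$ here purely for illustration.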
We define the association event ${\cal E}$ for the typical user as: \begin{align} {\cal E} = \begin{cases} 1 &\text{ if } P_{\rm s}\|{\bf u}\|^{-\alpha} >P_{\rm m}\|{\bf x}+{\bf u}\|^{-\alpha}, \\ 0, &\text{ otherwise,} \end{cases} \end{align} where $\{0,1\}$ denote association to ABS and SBS, respectively. The typical user at ${\bf x}+{\bf u}$ is {\em under coverage} in the downlink if either of the following two events occurs: \begin{align} & {\cal E} = 1 \text{ and } \mathtt{SNR}_{\rm b}({\bf x})>\theta_1, \mathtt{SNR}_{\rm a}^{\rm SBS}({\bf u})>\theta_2, \text{ or,}\notag\\ &{\cal E} = 0 \text{ and }\mathtt{SNR}_{\rm a}^{\rm ABS}({\bf x}+{\bf u})>\theta_3,\label{eq::coverage::def} \end{align} where $\{\theta_1, \theta_2, \theta_3\}$ are the coverage thresholds for successful demodulation and decoding. \subsection{Resource allocation} \label{subsec::resourceallocation} {The ABS, SBSs and users are assumed to be capable of communicating on both mm-wave and sub-6 GHz bands.} The sub-6 GHz band is reserved for control channel and the mm-wave band is kept for data-channels. The total mm-wave BW $W$ for downlink transmission is partitioned into two parts, $W_{\rm b}=\eta W$ for backhaul and $W_{\rm a}=(1-\eta)W$ for access, where $\eta\in[0,1)$ determines the access-backhaul split. Each BS is assumed to employ a simple round robin scheduling policy for serving users, under which the total access BW is {shared equally} among its associated users, referred to alternatively as {\em load} on that particular BS. On the other hand, the backhaul BW is shared amongst $n$ SBSs by either of the three strategies as follows. \begin{enumerate} \item {\em Equal partition.} This is the simplest partition strategy where the ABS does not require any load information from the SBSs and divides $W_{\rm b}$ equally into $n$ splits. 
\item {\em Instantaneous load-based partition.} In this scheme, the SBSs regularly feed their load information back to the ABS, and the ABS accordingly allocates backhaul BW proportional to the {instantaneous} load on each small cell. \item {\em Average load-based partition.} Similar to the previous strategy, the ABS allocates backhaul BW proportional to the load on each small cell. But in this scheme, the SBSs feed their load information back to the ABS after {sufficiently} long intervals. Hence, the instantaneous fluctuations in SBS load are averaged out. \end{enumerate} If the SBS at ${\bf x}$ gets backhaul BW $W_{\rm s}({\bf x})$, then \begin{align}\label{eq::bandwidth::partition} W_{\rm s}({\bf x}) = \begin{cases} \frac{W_{\rm b}}{n}, & \text{for equal partition},\\ \multirow{2}{*}{${\frac{N^{\rm SBS}_{{\bf x}}}{N^{\rm SBS}_{{\bf x}}+\sum\limits_{i=1}^{n-1}N^{\rm SBS}_{{\bf x}_i}}} W_{\rm b}$,}& \text{for instantaneous }\\&\text{load-based partition},\\ \multirow{2}{*}{$\frac{{\mathbb{E}}[N^{\rm SBS}_{{\bf x}}]}{{\mathbb{E}}[N^{\rm SBS}_{{\bf x}}]+\sum\limits_{i=1}^{n-1}{\mathbb{E}}[N^{\rm SBS}_{{\bf x}_i}]} W_{\rm b}$,}& \text{for average load-based}\\&\text{ partition}, \end{cases} \end{align} where $N_{\bf x}^{\rm SBS}$ and $N_{{\bf x}_i}^{\rm SBS}$ denote the load on the SBS of the representative hotspot and the load on the SBS at ${\bf x}_i$, respectively. The BW partition is illustrated in Fig.~\ref{fig::system::model::2}. To compare the performance of these strategies, we define the network performance metric of interest next.
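The three partition rules in \eqref{eq::bandwidth::partition} reduce to a few lines of arithmetic. The sketch below illustrates the proportional sharing; $W_{\rm b}$ and the load values are made-up inputs for illustration only.

```python
def backhaul_split(W_b, loads, mean_loads=None, strategy="equal"):
    """Backhaul BW handed to each SBS under the three partition strategies.

    loads[i] is the (instantaneous) SBS load N^SBS_{x_i};
    mean_loads[i] is its mean E[N^SBS_{x_i}] (used only by 'average').
    """
    n = len(loads)
    if strategy == "equal":
        return [W_b / n] * n
    if strategy == "instantaneous":
        total = sum(loads)
        return [W_b * l / total for l in loads]
    if strategy == "average":
        total = sum(mean_loads)
        return [W_b * m / total for m in mean_loads]
    raise ValueError(strategy)

# Whatever the strategy, the shares exhaust the backhaul BW W_b:
shares = backhaul_split(100.0, [4, 1, 5], strategy="instantaneous")
assert abs(sum(shares) - 100.0) < 1e-9 and shares[0] == 40.0
```

The design difference is purely in the signaling overhead: the equal split needs no feedback, the instantaneous split needs frequent feedback, and the average split needs only infrequent feedback of long-run means.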
\subsection{Downlink data rate}\label{subsec::downlink::data::rate} The maximum achievable downlink data rate, henceforth referred to as simply the {\em data rate}, on the backhaul link between the ABS and the SBS, the access link between SBS and user, and the access link between ABS and user can be expressed as: \begin{subequations} \begin{alignat}{3} {\cal R}_{\rm b}^{\rm ABS} &= W_{\rm s}({\bf x}) \log_2(1+\mathtt{SNR}_{\rm b}({\bf x})),\label{eq::rate_backhaul}\\ {\cal R}_{\rm a}^{\rm SBS} &= \min\bigg(\frac{W_{\rm a}}{N_{\bf x}^{\rm SBS}}\log_2(1+\mathtt{SNR}_{\rm a}^{\rm SBS}({\bf u})), \frac{ {\cal R}_{\rm b}^{\rm ABS}}{N_{\bf x}^{\rm SBS}}\bigg),\label{eq::rate_sbs_access} \\ {\cal R}_{\rm a}^{\rm ABS} &= \frac{W_{\rm a} }{N_{\bf x}^{\rm ABS}+\sum\limits_{i=1}^{n-1}N_{{\bf x}_i}^{\rm ABS}}\log_2(1+\mathtt{SNR}_{\rm a}^{\rm ABS}({\bf x}+{\bf u})), \label{eq::rate_abs_access} \end{alignat}\label{eq::rate} \end{subequations} where ${W_{\rm s}}({\bf x})$ is defined according to the backhaul BW partition strategies in \eqref{eq::bandwidth::partition} and $N_{\bf x}^{\rm ABS}$ ($N_{{\bf x}_i}^{\rm ABS}$) denotes the load on the ABS due to the macro users of the representative hotspot (hotspot at ${\bf x}_i$). In \eqref{eq::rate_sbs_access}, the first term inside the $\min$-operation is the data rate achieved under no backhaul constraint when the access BW $W_{\rm a}$ is equally partitioned among $N_{\bf x}^{\rm SBS}$ users. However, due to the finite backhaul, ${\cal R}_{\rm a}^{\rm SBS}$ is limited by the second term. \section{Rate Coverage Probability Analysis} In this Section, we derive the expression for the rate coverage probability of the typical user conditioned on its location at ${\bf x}+{\bf u}$, and later decondition over these locations. This deconditioning step averages out all the spatial randomness of the user and hotspot locations in the given network configuration.
We first partition each hotspot into SBS and ABS association regions such that the users lying in the SBS (ABS) association region connect to the SBS (ABS). Note that the formation of these {mathematically tractable} association regions is the basis of the distance-dependent load modeling, which is one of the major contributions of this work. \begin{figure} \centering \includegraphics[scale=0.48]{./Fig/association_region} \caption{An illustration of association region.} \label{fig::association::construction} \end{figure} \begin{figure} \centering \includegraphics[width = 0.8\linewidth]{./Fig/association} \caption{Variation of the association probability to the SBS with distance from the ABS.}\label{fig::sbs::association} \end{figure} \subsection{Association Region and Association Probability} \label{subsec::association::region} We first define the association region in the representative user hotspot as follows. Given that the representative hotspot is centered at ${\bf x}$, the SBS association region is defined as: ${\cal S}_{\bf x}=\{{\bf x}+{\bf u}\in b({\bf x},R_{\rm s}):P_{\rm m}\|{\bf x}+{\bf u}\|^{-\alpha}<P_{\rm s}u^{-\alpha}\}$ and the ABS association area is $b({\bf x},R_{\rm s})\cap {\cal S}_{\bf x}^c$. In the following Proposition, we characterize the shape of ${\cal S}_{\bf x}$. \begin{prop}\label{prop::SBS::assocaition::region} The SBS association region ${\cal S}_{\bf x}$ for the SBS at ${\bf x}$ can be written as: ${\cal S}_{\bf x} = $ \begin{equation} \begin{cases} b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg),&0<x<\frac{k_pR_{\rm s}}{1+k_p},\\ b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg)\cap b({\bf x},R_{\rm s}),&\frac{k_pR_{\rm s}}{1+k_p}\leq x\leq \frac{k_pR_{\rm s}}{1-k_p},\\ b({\bf x},R_{\rm s}), &x>\frac{k_pR_{\rm s}}{1-k_p}, \end{cases} \label{eq::SBS::assocaition::region} \end{equation}where $k_p =\bigg(\frac{P_{\rm s}}{P_{\rm m}}\bigg)^{1/\alpha}$.
\end{prop} \begin{IEEEproof} Let ${\bf x} = (X_1, X_2)$ be the Cartesian representation of ${\bf x}$. Let ${\cal S}_{\bf x}=\{(X_1+t_1,X_2+t_2)\}$. Then, following the definition of ${\cal S}_{\bf x}$, $ P_{\rm m}(t_1^2+t_2^2)^{-\alpha/2}\leq P_{\rm s}((t_1-X_1)^2+(t_2-X_2)^2)^{-\alpha/2} \Rightarrow \bigg(t_1 - \frac{X_1}{1-k_p^2} \bigg)^2 +\bigg(t_2 - \frac{X_2}{1-k_p^2} \bigg)^2\leq \bigg(\frac{k_p x}{1-k_p^2}\bigg)^2$. Thus, $\{(t_1,t_2)\} = b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))$. Since ${\cal S}_{\bf x}$ cannot spread beyond $b({\bf x},R_{\rm s})$, ${\cal S}_{\bf x} = b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg)\cap b({\bf x},R_{\rm s})$. When $0<x<\frac{k_p}{1+k_p}R_{\rm s}$, $b\bigg((1-k_p^2)^{-1}{\bf x}, \frac{k_p x}{1-k_p^2}\bigg)\subset b({\bf x},R_{\rm s})$. Beyond this limit of $x$, a part of $b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))$ lies outside of $b({\bf x},R_{\rm s})$. Finally, when $x>\frac{k_p}{1-k_p}R_{\rm s}$, $b((1-k_p^2)^{-1}{\bf x}, k_px/(1-k_p^2))\supset b({\bf x},R_{\rm s})$. \end{IEEEproof} This formulation of ${\cal S}_{\bf x}$ is illustrated in Fig.~\ref{fig::association::construction}. We now compute the SBS association probability as follows.
\begin{lemma}\label{lemm::association::sbs} Conditioned on the fact that the user belongs to the hotspot at ${\bf x}$, the association probability to the SBS is given by: ${\cal A}_{\rm s}({\bf x}) ={\cal A}_{\rm s}(x)=$ \begin{align} & \int_0^{2\pi}\frac{\bigg(\min\big(R_{\rm s},x\frac{k_p(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi)}{1-k_p^2}\big)\bigg)^2}{2\pi R_{\rm s}^2}{\rm d}\xi\label{eq::association::abs}\\ &= \begin{cases} \frac{k_p^2x^2}{(1-k_p^2)^2R_{\rm s}^2}&\text{if }0<x< \frac{k_p}{1+k_p}R_{\rm s},\\ \frac{{\cal C}\big(R_{\rm s},\frac{k_p x}{1-k_p^2}, \frac{k_p^2 x}{1-k_p^2}\big)}{\pi R_{\rm s}^2}&\text{if }\frac{k_p}{1+k_p}R_{\rm s}\leq x\leq \frac{k_p}{1-k_p}R_{\rm s},\\ 1&\text{if }x>\frac{k_p}{1-k_p}R_{\rm s}\label{eq::association::abs::2} \end{cases}, \end{align} where \begin{multline*}{\cal C}(r_1,r_2,d)= r_1^2\tan^{-1}\bigg(\frac{t}{d^2+r_1^2-r_2^2}\bigg)\\+r_2^2\tan^{-1}\bigg(\frac{t}{d^2-r_1^2+r_2^2}\bigg)-{\frac{t}{2}} \end{multline*} is the area of intersection of two intersecting circles of radii $r_1$ and $r_2$ with distance $d$ between their centers, where $ t=(d+r_1+r_2)^{\frac{1}{2}}(d+r_1-r_2)^{\frac{1}{2}}(d-r_1+r_2)^{\frac{1}{2}}(-d+r_1+r_2)^{\frac{1}{2}}$. The association probability to the ABS is given by ${\cal A}_{\rm m}({ x})= 1-{\cal A}_{\rm s}({x})$.
\end{lemma} \begin{IEEEproof} Conditioned on the location of the hotspot center at ${\bf x}$, ${\cal A}_{\rm s}({\bf x})= {\mathbb{P}}({\cal E} = 1|{\bf x})=$ \begin{align*} &{\mathbb{E}}[{\bf 1}(P_{\rm m}\|{\bf x}+{\bf u}\|^{-\alpha}<P_{\rm s}\|{\bf u}\|^{-\alpha})|{\bf x}]={\mathbb{P}}({\bf x}+{\bf u}\in {\cal S}_{\bf x}|{\bf x})\\ &={\mathbb{P}}(P_{\rm m}(x^2+u^2+2xu\cos\xi)^{-\alpha/2}<P_{\rm s}u^{-\alpha})|x)\\&= {\mathbb{P}}(u^2(1-k_p^2)-2x\cos\xi k_p^2u-k_p^2x^2<0|x)\\&\myeq{a}{\mathbb{P}}\bigg(u\in\bigg(0, \frac{xk_p\big(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi\big)}{1-k_p^2}\bigg), \\&\qquad\qquad\xi\in(0, 2\pi]\bigg|x\bigg)\\& =\int_{0}^{2\pi}\int_{0}^{R_{\rm s}}{\bf 1}\bigg(0\leq u < \frac{xk_p\big(\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi\big)}{1-k_p^2}\bigg)\\&\times f_U(u){\rm d}u \frac{1}{2\pi}{\rm d}{\xi}, \end{align*} where $\xi = \arg({\bf u}-{\bf x})$ and is uniformly distributed in $(0,2\pi]$. Here, (a) follows from solving the quadratic inequality inside the indicator function. The last step follows from deconditioning over ${u}$ and $\xi$. Finally, \eqref{eq::association::abs} is obtained by {evaluating} the integration over $u$. Note that, due to angular symmetry, ${\cal A}_{\rm s}({\bf x})={\cal A}_{\rm s}({ x})$. Alternatively, $${\cal A}_{\rm s}(x)=\int_{{\cal S}_{\bf x}}f_{U}(u){\rm d}u\frac{1}{2\pi}{\rm d}{\xi}=\frac{|{\cal S}_{\bf x}|}{\pi R_{\rm s}^2}.$$ The final result in \eqref{eq::association::abs::2} {is} obtained by using Proposition~\ref{prop::SBS::assocaition::region}. \end{IEEEproof} In \figref{fig::sbs::association}, we plot ${\cal A}_{\rm s}(x)$ as a function of $x$. We now evaluate the coverage probability of a typical user which is the probability of the occurrence of the events defined in \eqref{eq::coverage::def}. 
\begin{theorem}\label{thm::coverage::probability}The coverage probability is given by: \begin{equation}\label{eq::coverage::probability} \mathtt{P_c} =\int\limits_{0}^{R-R_{\rm s}}\big(\mathtt{P_c}_{\rm s}(\theta_1,\theta_2|x)+\mathtt{P_c}_{\rm m}(\theta_3|x)\big)f_X(x){\rm d}x, \end{equation} where $\mathtt{P_c}_{\rm s} (\theta_1,\theta_2|x) =$ \begin{multline*} \int\limits_{0}^{2\pi}\int\limits_{0}^{u_{\max}(x,\xi)} \bigg(p({x}) F_h\bigg(\frac{ x^{\alpha_L}\beta{\tt N}_0W\theta_1}{P_{\rm m}G^2 },m_L\bigg) +(1-p({x}))\\ \times F_h\bigg(\frac{ x^{\alpha_{NL}}\beta{\tt N}_0W\theta_1}{P_{\rm m}G^2 },m_{NL}\bigg)\bigg)\bigg(p({u}) F_h\bigg(\frac{ u^{\alpha_L}\beta{\tt N}_0W\theta_2}{P_{\rm s}G },m_{L}\bigg) \\+(1-p({u})) F_h\bigg(\frac{ u^{\alpha_{NL}}\beta{\tt N}_0W\theta_2}{P_{\rm s} G},m_{NL}\bigg)\bigg)\frac{f_{ U}(u)}{2\pi}{\rm d}{u}\:{\rm d}{\xi}, \end{multline*} where $u_{\max}(x,\xi) = \min\bigg(R_{\rm s}, x k_p\frac{\sqrt{1-k_p^2\sin^2\xi}+k_p\cos\xi}{1-k_p^2}\bigg)$ and $F_{h}(\cdot)$ is the {complementary cumulative distribution function (CCDF) of the Gamma distribution}, and $\mathtt{P_c}_{\rm m} (\theta_3|x)=$ \begin{multline*} \int\limits_{0}^{2\pi}\int\limits_{u_{\max}(x,\xi)}^{R_{\rm s}} {\bigg( p(\kappa(x,u,\xi)) F_h\bigg(\frac{{\kappa(x,u,\xi)}^{\alpha_L}\beta{\tt N}_0W\theta_3}{P_{\rm m}G },m_L\bigg)}\\ + (1-p(\kappa(x,u,\xi))) F_h\bigg(\frac{ {\kappa(x,u,\xi)}^{\alpha_{NL}}\beta{\tt N}_0W\theta_3}{P_{\rm m}G },m_{NL}\bigg)\bigg)\\\times\frac{f_{ U}(u){\rm d}u\:{\rm d}\xi}{2\pi}, \end{multline*} where $\kappa(x,u,\xi)=({x^2+u^2+2 x u \cos\xi})^{1/2}$. \end{theorem} \begin{IEEEproof} See Appendix~\ref{app::coverage::probability}. \end{IEEEproof} As expected, the coverage probability is the summation of two terms, each corresponding to the probability of occurrence of one of the two mutually exclusive events appearing in \eqref{eq::coverage::def}.
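As a quick numerical sanity check on the first branch of the closed form in Lemma~\ref{lemm::association::sbs}, ${\cal A}_{\rm s}(x)$ can be compared against a direct Monte Carlo draw of the user offset. The sketch below uses hypothetical parameters ($\alpha=4$ and $P_{\rm s}/P_{\rm m}=1/16$, so that $k_p=0.5$, with $R_{\rm s}=1$), not values from the paper.

```python
import math
import random

def assoc_prob_small_x(x, R_s, k_p):
    """Closed-form A_s(x) from Lemma 2 for the regime 0 < x < k_p*R_s/(1+k_p)."""
    return (k_p * x) ** 2 / ((1 - k_p**2) ** 2 * R_s**2)

def assoc_prob_mc(x, R_s, k_p, trials=200_000, seed=1):
    """Monte Carlo estimate of P(P_s u^{-a} > P_m ||x+u||^{-a}).

    The condition is equivalent to ||x+u|| > u/k_p, i.e.,
    x^2 + u^2 + 2*x*u*cos(xi) > (u/k_p)^2.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        u = R_s * math.sqrt(rng.random())   # f_U(u) = 2u/R_s^2
        xi = 2 * math.pi * rng.random()
        if x * x + u * u + 2 * x * u * math.cos(xi) > (u / k_p) ** 2:
            hits += 1
    return hits / trials

x, R_s, k_p = 0.2, 1.0, 0.5          # x < k_p*R_s/(1+k_p) = 1/3, first regime
closed = assoc_prob_small_x(x, R_s, k_p)
mc = assoc_prob_mc(x, R_s, k_p)
assert abs(closed - mc) < 3e-3       # agreement within Monte Carlo noise
```

The same check extends to the middle regime by replacing the closed form with the circle-intersection area ${\cal C}(\cdot)$ term.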
\subsection{Load distributions}\label{subsec::load::dist} While the load distributions for the PPP-based models are well-understood~\cite{OffloadingSingh,SinghKulkarniSelfBackhaul}, they are not directly applicable to the 3GPP-inspired finite model used in this paper. Consequently, in this Section, we provide a novel approach to characterize the ABS and SBS load for this model. As we saw in \eqref{eq::rate_abs_access}, the load on the ABS has two components: one due to the users of the {representative} hotspot connecting to the ABS (denoted by $N_{{\bf x}}^{\rm ABS}$), and the other due to the macro users of {the} other clusters, which we lump into a single random variable, $N_{\rm o}^{\rm ABS} = \sum_{i=1}^{n-1}N_{{\bf x}_i}^{\rm ABS}$. On the other hand, $N_{\bf x}^{\rm SBS}$ and $N_{\rm o}^{\rm SBS}=\sum_{i=1}^{n-1}N_{{\bf x}_i}^{\rm SBS}$ respectively denote the load on the SBS at ${\bf x}$ and the sum load of all SBSs except the one at ${\bf x}$. First, we obtain the PMFs of $N_{{\bf x}}^{\rm ABS}$ and $N_{{\bf x}}^{\rm SBS}$ using the fact that given the location of the representative hotspot centered at ${\bf x}$, {each user belongs to the association region ${\cal S}_{\bf x}$ or $b({\bf x},R_{\rm s})\cap {{\cal S}}_{\bf x}^c$ according to an i.i.d. Bernoulli random variable.} \begin{lemma}\label{lemm::load::characterization::abs} Given the fact that the representative hotspot is centered at ${\bf x}$, the load on the ABS due to the macro users in the hotspot at ${\bf x}$ ($N^{\rm ABS}_{{\bf x}}$) and the load on the SBS at $\bf x$ ($N^{\rm SBS}_{\bf x}$) are distributed as follows: {\sc case}~$1$ ($N_{{\bf x}_i} = \bar{m}$, $\forall\ i=1,\dots,n$).
\begin{subequations} \begin{alignat}{2} &{\mathbb{P}}(N^{\rm ABS}_{{\bf x}}=k|{\bf x})= {\bar{m}-1\choose k-1}{\cal A}_{\rm m}(x)^{k-1}{\cal A}_{\rm s}(x)^{\bar{m}-k}\label{eq::load::abs::load_x::fixedN},\\ &{\mathbb{P}}(N^{\rm SBS}_{{\bf x}}=k|{\bf x})= {\bar{m}-1\choose k-1}{\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k},\label{eq::load::sbs::load_x::fixedN} \end{alignat} \end{subequations} where $k=1,2,\dots,\bar{m}$. {\sc case}~$2$ ($N_{{\bf x}_i} \stackrel{i.i.d.}{\sim} {\tt Poisson}(\bar{m})$, $\forall\ i=1,\dots,n$). \begin{subequations} \begin{alignat}{2} &{\mathbb{P}}(N^{\rm ABS}_{{\bf x}}=k|{\bf x})= \frac{(\bar{m}{\cal A}_{\rm m}(x))^{k-1}}{(k-1)!}e^{-\bar{m}{\cal A}_{\rm m}(x)}\label{eq::load::abs::load_x::PoissonN},\\ &{\mathbb{P}}(N^{\rm SBS}_{{\bf x}}=k|{\bf x})= \frac{(\bar{m}{\cal A}_{\rm s}(x))^{k-1}}{(k-1)!}e^{-\bar{m}{\cal A}_{\rm s}(x)},\label{eq::load::sbs::load_x::PoissonN} \end{alignat} \end{subequations} where $k\in{\mathbb{Z}}^{+}$. \end{lemma} \begin{IEEEproof}See Appendix~\ref{app::load::characterization::abs}. \end{IEEEproof} {We present the first moments of these two load variables in the following Corollary which will be required for the evaluation of the rate coverage for the average load-based partition and the derivation of easy-to-compute approximations of rate coverage in the sequel.} \begin{cor}\label{cor::mean::representative::loads} {The conditional means of $N_{\bf x}^{\rm ABS}$ and $N_{\bf x}^{\rm SBS}$ given the center of the representative hotspot at $\bf x$ are} \begin{align*} &\text{{\sc case}~$1$: }{\mathbb{E}}[N_{\bf x}^{\rm ABS}] = (\bar{m}-1){\cal A}_{\rm m}(x)+1, {\mathbb{E}}[N_{\bf x}^{\rm SBS}] = \\&\qquad\qquad\qquad\qquad(\bar{m}-1){\cal A}_{\rm s}(x)+1,\\ &\text{{\sc case}~$2$: }{\mathbb{E}}[N_{\bf x}^{\rm ABS}] = \bar{m}{\cal A}_{\rm m}(x)+1, {\mathbb{E}}[N_{\bf x}^{\rm SBS}] = \bar{m}{\cal A}_{\rm s}(x)+1. 
\end{align*} \end{cor} We now obtain the PMFs of $N_{\rm o}^{\rm ABS}$ and $N_{\rm o}^{\rm SBS}$ in the following Lemma. {Note that, since ${\bf x}_i$-s are i.i.d., $N_{\rm o}^{\rm ABS}$ and $N_{\rm o}^{\rm SBS}$ are independent of ${\bf x}$.} In what follows, the exact PMF of $N_{{\rm o}}^{\rm ABS}$ ($N_{{\rm o}}^{\rm SBS}$) is in the form of an $(n-1)$-fold discrete convolution and hence is not computationally efficient beyond very small values of $n$. We present an {alternative} easy-to-use expression of this PMF by invoking the central limit theorem (CLT). In the numerical {results} Section, we verify that this approximation is {tight} even for moderate values of $n$. \begin{lemma}\label{lemm::load::characterization::others} Given the fact that the typical user belongs to a hotspot at ${\bf x}$, the load on the ABS due to all other $n-1$ hotspots is distributed as: {$ \frac{N_{\rm o}^{\rm ABS}-\upsilon_{\rm m}}{\sigma_{\rm m}}\sim{\cal N}(0,1)\text{ (for large $n$)}$} and the sum of the loads on the other SBSs at ${\bf x}_1,{\bf x}_2,\dots,{\bf x}_{n-1}$ is distributed as: $ \frac{N_{\rm o}^{\rm SBS}-\upsilon_{\rm s}}{\sigma_{\rm s}}\sim{\cal N}(0,1) \text{ (for large $n$)}$, where ${\cal N}(0,1)$ denotes the standard normal distribution, $ \upsilon_{\rm m}=(n-1)\bar{m}{\mathbb{E}}[{\cal A}_{\rm m}(X)], \upsilon_{\rm s}=(n-1)\bar{m}{\mathbb{E}}[{\cal A}_{\rm s}(X)] $ and \begin{align*} &\text{for {\sc case}~$1$, }\sigma_{\rm m}^2=(n-1)\big[\bar{m}{\mathbb{E}}[{\cal A}_{\rm m}(X){\cal A}_{\rm s}(X)]\\&\qquad\qquad\qquad\quad+\bar{m}^2{\rm Var}[{\cal A}_{\rm m}(X)]\big]=\sigma_{\rm s}^2, \\ &\text{for {\sc case}~$2$, }\sigma_{\rm m}^2=(n-1)\big[\bar{m}{\mathbb{E}}[{\cal A}_{\rm m}(X)]+\bar{m}^2{\rm Var}[{\cal A}_{\rm m}(X)]\big],\\ &\sigma_{\rm s}^2=(n-1)\big[\bar{m}{\mathbb{E}}[{\cal A}_{\rm s}(X)]+\bar{m}^2{\rm Var}[{\cal A}_{\rm s}(X)]\big].
\end{align*} Here, \begin{multline*} {\mathbb{E}}[{\cal A}_{\rm m}(X)]=\int_{0}^{R-R_{\rm s}}{\cal A}_{\rm m}(x)f_{X}(x){\rm d}x,\text{ and } \\ {\rm Var}[{\cal A}_{\rm m}(X)] = \int\limits_{0}^{R-R_{\rm s}}\big({\cal A}_{\rm m}(x)\big)^2f_X(x){\rm d}x - ({\mathbb{E}}[{\cal A}_{\rm m}(X)])^2, \end{multline*} and ${\mathbb{E}}[{\cal A}_{\rm s}(X)]$, ${\rm Var}[{\cal A}_{\rm s}(X)]$ can be similarly obtained by replacing ${\cal A}_{\rm m}(X)$ with ${\cal A}_{\rm s}(X)$ in the above expressions. \end{lemma} \begin{IEEEproof} See Appendix~\ref{app::load::characterization::others}. \end{IEEEproof} \subsection{Rate Coverage Probability} We first define the downlink rate coverage probability (or simply, rate coverage) as follows. \begin{ndef}[Rate coverage probability]\label{def::rate::coverage} The rate coverage probability of a link with BW $\tilde{W}$ is defined as the probability that the maximum achievable data rate (${\cal R}$) exceeds a certain threshold $\rho$, i.e., ${\mathbb{P}}({\cal R}>\rho) =$ \begin{align} {\mathbb{P}}\bigg(\tilde{W}\log_{2}(1+\mathtt{SNR})>\rho\bigg) = {\mathbb{P}}(\mathtt{SNR}>2^{{\rho}/{\tilde{W}}}-1). \end{align} \end{ndef} Hence, we see that the rate coverage probability is the coverage probability evaluated at a modified $\mathtt{SNR}$-threshold. We now evaluate the rate coverage probability for different backhaul BW partition strategies for {a general distribution of $N_{{\bf x}_i}$ and $N_{{\bf x}}$ in the following Theorem.
We later specialize this result for {\sc case} s~$1$ and $2$ for numerical evaluation.} \begin{theorem}\label{thm::rate::cov::equal::partition}The rate coverage probability for a target data rate $\rho$ is given by: \begin{equation} \mathtt{P_r} = \mathtt{P_r}_{\rm m} + \mathtt{P_r}_{\rm s}, \end{equation} where $\mathtt{P_r}_{\rm m}$ ($\mathtt{P_r}_{\rm s}$) denotes the ABS rate coverage (SBS rate coverage), which is the probability that the typical user receives a data rate greater than or equal to $\rho$ and is served by the ABS (SBS). The ABS rate coverage is given by: \begin{multline}\label{eq::rate::cov::macro} \mathtt{P_r}_{\rm m}=\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}{\mathbb{E}}_{N_{\bf x}^{\rm ABS}}\bigg[\mathtt{P_c}_{\rm m}\bigg(2^{\frac{\rho(t+N_{\bf x}^{\rm ABS})}{W_{\rm a}}}-1|x\bigg)\bigg]\\\times f_X(x){\rm d}x \frac{1}{\sigma_{\rm m}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}{\rm d}t. \end{multline} The SBS rate coverage depends on the backhaul BW partition strategy.
For equal partition, \begin{equation} \mathtt{P_r}_{\rm s} =\int\limits_{0}^{R-R_{\rm s}} {\mathbb{E}}_{N_{\bf x}^{\rm SBS}}\bigg[\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n N^{\rm SBS}_{\bf x}}{W_{\rm b}}}-1,2^{\frac{\rho N^{\rm SBS}_{\bf x}}{W_{\rm a}}}-1\big|x\bigg)\bigg]f_X(x){\rm d}x,\label{eq::rate::cov::sbs::eq::partition} \end{equation} \vspace{-1em} for instantaneous load-based partition, \begin{multline} \label{eq::rate::cov::inst-load::sbs} \mathtt{P_r}_{\rm s}= \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}{\mathbb{E}}_{N_{\bf x}^{\rm SBS}}\bigg[\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho( N^{\rm SBS}_{\bf x}+t)}{W_{\rm b}}}-1,2^{\frac{\rho N^{\rm SBS}_{\bf x}}{W_{\rm a}}}-1\big| x\bigg)\bigg]\\\times f_{X}(x){\rm d}x \frac{1}{\sigma_{\rm s}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}{\rm d}t, \end{multline} and for average load-based partition, \begin{multline} \label{eq::rate::cov::avg-load::sbs} \mathtt{P_r}_{\rm s}= \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}{\mathbb{E}}_{N_{\bf x}^{\rm SBS}}\bigg[\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho N^{\rm SBS}_{\bf x}\left({\mathbb{E}}[N_{\bf x}^{\rm SBS}]+\bar{m}t\right)}{W_{\rm b}{\mathbb{E}}[N_{\bf x}^{\rm SBS}]}}-1,\\2^{\frac{\rho N^{\rm SBS}_{\bf x}}{W_{\rm a}}}-1\big| x\bigg)\bigg]f_{X}(x){\rm d}x \frac{e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t. \end{multline} \end{theorem} \begin{IEEEproof}See Appendix~\ref{app::rate::cov::equal::partition}. \end{IEEEproof} {Note that the key enabler of the expression of $\mathtt{P_r}$ in Theorem~\ref{thm::rate::cov::equal::partition} is the fact that the system is considered to be noise-limited. 
Including the SBS interference in the analysis is not straightforward from this point, since it would introduce a coupling between the coverage probability and the load, both of which depend on the locations of the other $n-1$ SBSs.} Having derived the exact expressions of rate coverage in Theorem~\ref{thm::rate::cov::equal::partition}, we present approximations of these expressions by replacing (i) $N_{\bf x}^{\rm ABS}$ in $\mathtt{P_r}_{\rm m}$ with its mean ${\mathbb{E}}[N_{\bf x}^{\rm ABS}]$, and (ii) $N_{\bf x}^{\rm SBS}$ in $\mathtt{P_r}_{\rm s}$ with its mean ${\mathbb{E}}[N_{\bf x}^{\rm SBS}]$ in the following Lemma. \begin{lemma}\label{lemm::approximation} {The ABS rate coverage can be approximated as} \begin{multline} \label{eq::rate::cov::macro::approx::fixedN} \mathtt{P_r}_{\rm m}=\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm m}\bigg(2^{\frac{\rho(t+{\mathbb{E}}[N_{\bf x}^{\rm ABS}])}{W_{\rm a}}}-1|x\bigg)f_X(x){\rm d}x \\ \times \frac{e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}}{\sigma_{\rm m}\sqrt{2\pi}}{\rm d}t. \end{multline} The SBS rate coverage can be approximated as follows.
For equal partition, \begin{align} \mathtt{P_r}_{\rm s} = \int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n{\mathbb{E}}[ N^{\rm SBS}_{\bf x}]}{W_{\rm b}}}-1,2^{\frac{\rho {\mathbb{E}}[N^{\rm SBS}_{\bf x}]}{W_{\rm a}}}-1\big|x\bigg)f_X(x){\rm d}x,\label{eq::rate::cov::sbs::eq::partition::approx} \end{align} for instantaneous load-based partition, \begin{multline}\label{eq::rate::cov::inst-load::sbs::approx} \mathtt{P_r}_{\rm s}= \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho( {\mathbb{E}}[N^{\rm SBS}_{\bf x}]+t)}{W_{\rm b}}}-1,2^{\frac{\rho {\mathbb{E}}[ N^{\rm SBS}_{\bf x}]}{W_{\rm a}}}-1\big| x\bigg)\\\times f_{X}(x){\rm d}x \frac{e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}}{\sigma_{\rm s}\sqrt{2\pi}}{\rm d}t, \end{multline} and for average load-based partition, \begin{multline}\label{eq::rate::cov::avg-load::sbs::approx} \mathtt{P_r}_{\rm s}=\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho \left({\mathbb{E}}[N_{\bf x}^{\rm SBS}]+\bar{m}t\right)}{W_{\rm b}}}-1,2^{\frac{\rho {\mathbb{E}}[ N^{\rm SBS}_{\bf x}]}{W_{\rm a}}}-1\big| x\bigg)\\\times f_{X}(x){\rm d}x \frac{e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t.
\end{multline} \end{lemma} \begin{table*}[t] \centering \caption{{Key system parameters}} \label{tab::parameters} { \begin{tabular}{|l|l|l|} \hline Notation & Parameter & Value \\ \hline $P_{\rm m},\ P_{\rm s}$ & BS transmit powers & 50, 20 dBm \\ \hline $\alpha_L, \alpha_{NL}$ & Path-loss exponent & 2.0, 3.3\\ \hline $\beta$ & Path loss at 1 m & 70 dB \\\hline $G$ & Main lobe gain & 18 dB \\ \hline $\mu$ & LOS range constant & 170 m \\ \hline ${\tt N}_0W$ & Noise power & \begin{tabular}[c]{@{}l@{}}-174 dBm/Hz+ $10\log_{10} W$ \\+10 dB {(noise-figure)}\end{tabular} \\ \hline $m_L,m_{NL}$& Parameter of Nakagami distribution& 2, 3\\\hline $R$, $R_{\rm s}$& Macrocell and hotspot radius & 200 m, 30 m\\ \hline $\bar{m}$& Average number of users per hotspot & 5\\\hline $\rho$& Rate threshold & 50 Mbps\\ \hline \end{tabular} } \end{table*} {We now specialize the result of Theorem~\ref{thm::rate::cov::equal::partition} for {\sc case} s~$1$ and $2$ in the following Corollaries.} \begin{cor}\label{cor::rate::cov::fixedN}For {\sc case}~$1$, i.e., when $N_{{\bf x}_i} = \bar{m}$, $\forall\ i=1,\dots,n$, the ABS rate coverage is \begin{multline}\label{eq::rate::cov::macro::fixedN} \mathtt{P_r}_{\rm m} = \sum\limits_{k=1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm m}\bigg(2^{\frac{\rho(t+k)}{W_{\rm a}}}-1|x\bigg)\\\times{\cal A}_{\rm m}(x)^{k-1}{\cal A}_{\rm s}(x)^{\bar{m}-k} f_X(x){\rm d}x \frac{1}{\sigma_{\rm m}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}{\rm d}t. \end{multline}The SBS rate coverages for the three backhaul BW partition strategies are expressed as follows.
(i) For equal partition, \begin{multline} \mathtt{P_r}_{\rm s} = \sum\limits_{k = 1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n k}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big|x\bigg)\\\times {\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k}f_X(x){\rm d}x,\label{eq::rate::cov::sbs::fixedN::eq::partition} \end{multline} (ii) for instantaneous load-based partition, \begin{multline}\label{eq::rate::cov::inst-load::sbs::fixedN}\mathtt{P_r}_{\rm s} = \sum\limits_{k = 1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho(k+t)}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg)\\\times{\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k}f_{X}(x){\rm d}x \frac{e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}}{\sigma_{\rm s}\sqrt{2\pi}}{\rm d}t, \end{multline} and (iii) for average load-based partition, \begin{multline}\label{eq::rate::cov::avg-load::sbs::fixedN} \mathtt{P_r}_{\rm s}=\sum\limits_{k = 1}^{\bar{m}}{{\bar{m}-1} \choose k-1}\int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}}\mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho k\left(1+(\bar{m}-1){\cal A}_{\rm s}(x)+\bar{m} t\right)}{W_{\rm b}(1+(\bar{m}-1){\cal A}_{\rm s}(x))}}-1,\\2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg){\cal A}_{\rm s}(x)^{k-1}{\cal A}_{\rm m}(x)^{\bar{m}-k}\times f_{X}(x){\rm d}x\\\times \frac{e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t. 
\end{multline} \end{cor} \begin{IEEEproof}The result can be obtained from Theorem~\ref{thm::rate::cov::equal::partition} by using the PMFs of $N_{\bf x}^{\rm ABS}$, $N_{\bf x}^{\rm SBS}$, $N_{\rm o}^{\rm ABS}$ and $N_{\rm o}^{\rm SBS}$ from Lemmas~\ref{lemm::load::characterization::abs} and \ref{lemm::load::characterization::others} and substituting ${\mathbb{E}}[N_{\bf x}^{\rm SBS}]$ {from} Corollary~\ref{cor::mean::representative::loads} for {\sc case}~$1$. \end{IEEEproof} \begin{cor}\label{cor::rate::cov::PoissonN}For {\sc case}~$2$, i.e., when $N_{{\bf x}_i} \stackrel{i.i.d.}{\sim} {\tt Poisson}(\bar{m})$, $\forall\ i=1,\dots,n$, the ABS rate coverage is expressed as \begin{multline}\label{eq::rate::cov::macro::PoissonN} \mathtt{P_r}_{\rm m} = \sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm m}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm m}(x)}\\\times \mathtt{P_c}_{\rm m} \bigg(2^{\frac{\rho(t+k)}{W_{\rm a}}}-1|x\bigg) f_X(x){\rm d}x \frac{1}{\sigma_{\rm m}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm m})^2}{2\sigma_{\rm m}^2}}{\rm d}t. \end{multline}The SBS rate coverages for the three backhaul BW partition strategies are expressed as follows.
(i) For equal partition, \begin{multline} \mathtt{P_r}_{\rm s} = \sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm s}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm s}(x)}\\\times \mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho n k}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big|x\bigg)f_X(x){\rm d}x,\label{eq::rate::cov::sbs::PoissonN::eq::partition} \end{multline} (ii) for instantaneous load-based partition, \begin{multline}\label{eq::rate::cov::inst-load::sbs::PoissonN}\mathtt{P_r}_{\rm s} = \sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm s}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm s}(x)}\\\times \mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho(k+t)}{W_{\rm b}}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg)f_{X}(x){\rm d}x \frac{1}{\sigma_{\rm s}\sqrt{2\pi}}e^{-\frac{(t-\upsilon_{\rm s})^2}{2\sigma_{\rm s}^2}}{\rm d}t, \end{multline} and (iii) for average load-based partition, \begin{multline}\label{eq::rate::cov::avg-load::sbs::PoissonN} \mathtt{P_r}_{\rm s}=\sum\limits_{k=1}^{\infty}\frac{\bar{m}^{k-1}}{(k-1)!} \int\limits_{-\infty}^{\infty}\int\limits_{0}^{R-R_{\rm s}} ({\cal A}_{\rm s}(x))^{k-1}e^{-\bar{m}{\cal A}_{\rm s}(x)}\\\times \mathtt{P_c}_{\rm s}\bigg(2^{\frac{\rho k\left(1+\bar{m}{\cal A}_{\rm s}(x)+\bar{m}t\right)}{W_{\rm b}(1+\bar{m}{\cal A}_{\rm s}(x))}}-1,2^{\frac{\rho k}{W_{\rm a}}}-1\big| x\bigg)\\\times f_{X}(x){\rm d}x \frac{1}{\sqrt{2\pi(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}e^{-\frac{\big(t-(n-1){\mathbb{E}}[{\cal A}_{\rm s}(X)]\big)^2}{2(n-1){\rm Var}[{\cal A}_{\rm s}(X)]}}{\rm d}t. 
\end{multline} \end{cor} \begin{IEEEproof}The result can be similarly obtained from Theorem~\ref{thm::rate::cov::equal::partition} by using the PMFs of $N_{\bf x}^{\rm ABS}$, $N_{\bf x}^{\rm SBS}$, $N_{\rm o}^{\rm ABS}$ and $N_{\rm o}^{\rm SBS}$ from Lemmas~\ref{lemm::load::characterization::abs} and \ref{lemm::load::characterization::others}, and substituting ${\mathbb{E}}[N_{\bf x}^{\rm SBS}]$ {from} Corollary~\ref{cor::mean::representative::loads} for {\sc case}~$2$. \end{IEEEproof} \begin{figure*}% \centering \subfigure[Equal partition.]{ \label{fig::comparison::rate::cov::bw::eq} \includegraphics[width=.3\linewidth]{./Fig/ratecov_eq_bw.pdf}} \hspace{8pt}% \subfigure[Instantaneous load-based partition.]{% \label{fig::comparison::rate::cov::bw::load} \includegraphics[width=.3\linewidth]{./Fig/ratecov_inst_load_bw.pdf}} \subfigure[Average load-based partition.\newline]{% \label{fig::comparison::rate::cov::bw::avg::load} \includegraphics[width=0.3\linewidth]{./Fig/ratecov_avg_bw.pdf}} \caption[Rate coverage probability for different bandwidths ($\rho = 50$ Mbps, $n=10$) for {\sc case} s~$1$ and $2$ obtained by Corollaries~\ref{cor::rate::cov::fixedN} and \ref{cor::rate::cov::PoissonN}.]{Rate coverage probability for different bandwidths for {\sc case}~$1$ and $2$ ($\rho = 50$ Mbps, $n=10$). {Lines and markers indicate theoretical and simulation results, respectively. Theoretical results for {\sc case} s $1$ and $2$ are obtained from Corollaries~\ref{cor::rate::cov::fixedN} and \ref{cor::rate::cov::PoissonN}, respectively.}} \label{fig::comparison::rate::cov}% \end{figure*} \begin{figure} \centering \includegraphics[scale=0.47]{./Fig/snr_plot} \caption{The CDF plot of SNR from the ABS and SBS ($P_{\rm m}= 50$ dBm, $P_{\rm s} = 20$ dBm).
The markers indicate the empirical CDF obtained from Monte Carlo simulations.}\label{fig::snrplot} \end{figure} \section{Results and Discussion}\label{sec::results} \subsection{Trends of rate coverage}\label{subsec::trends} In this Section, we {verify the accuracy of} our analysis of rate coverage with Monte Carlo simulations of the network model delineated in Section~\ref{sec::system::model} with parameters listed in Table~\ref{tab::parameters}. {For each simulation, the number of iterations was set to $10^6$. Since $\mathtt{P_r}$ fundamentally {depends upon} $\mathtt{SNR}$, we first plot the cumulative distribution function (CDF) of SNRs without beamforming in Fig.~\ref{fig::snrplot}, averaged over user locations. Precisely, we plot $ {\mathbb{E}}_{\bf x}\left[{\mathbb{P}}\left(\frac{h_{\rm m} P_{\rm m}\|{\bf x}\|^{-\alpha}}{{\tt N}_0W}<\theta\right)\right] = \int_0^{R-R_{\rm s}}\mathtt{P_c}_{\rm m}(\theta|x)f_X(x){\rm d}x $ and $ {\mathbb{E}}_{\bf u}\left[{\mathbb{P}}\left(\frac{h_{\rm s} P_{\rm s}\|{\bf u}\|^{-\alpha}}{{\tt N}_0W}<\theta\right)\right] =\mathtt{P_c}_{\rm s}(-\infty,\theta|x) $, where $\mathtt{P_c}_{\rm m}$ and $\mathtt{P_c}_{\rm s}$ were defined in Theorem~\ref{thm::coverage::probability}, both from simulation and using our analytical results, and observe a perfect match.} We now plot the rate coverages for different user distributions ({\sc case} s $1$ and $2$) for three different backhaul BW partition strategies in Figs.~\ref{fig::comparison::rate::cov::bw::eq}-\ref{fig::comparison::rate::cov::bw::avg::load}. Recall that one part of the ABS and SBS loads was approximated using the CLT in Lemma~\ref{lemm::load::characterization::others} for efficient computation. Yet, we obtain a perfect match between simulation and analysis even for $n=10$ for {\sc case}~$1$ and {\sc case}~$2$.
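The CLT-based load approximation of Lemma~\ref{lemm::load::characterization::others} can also be sanity-checked directly in a few lines of Python. The sketch below compares Monte Carlo samples of $N_{\rm o}^{\rm ABS}$ against the normal parameters $\upsilon_{\rm m}$ and $\sigma_{\rm m}^2$ for {\sc case}~$2$; the association probability \texttt{A\_m} and the distance density used here are illustrative stand-ins, not the paper's actual expressions.

```python
import numpy as np

# Monte Carlo check of the CLT load approximation (Case 2: Poisson(m_bar)
# users per hotspot, each independently associating with the ABS with
# probability A_m(X)). A_m and f_X below are toy assumptions.
rng = np.random.default_rng(0)
n, m_bar = 50, 5
R, R_s = 200.0, 30.0        # macrocell and hotspot radii (Table values)

def A_m(x):
    # assumed toy ABS-association probability, decreasing in distance x
    return 0.4 * (1.0 - x / (R - R_s))

def sample_X(size):
    # distance of a uniformly placed hotspot centre: f_X(x) = 2x/(R-R_s)^2
    return (R - R_s) * np.sqrt(rng.random(size))

trials = 20000
x = sample_X((trials, n - 1))
users = rng.poisson(m_bar, size=(trials, n - 1))
loads = rng.binomial(users, A_m(x)).sum(axis=1)   # samples of N_o^ABS

# Lemma's normal parameters: upsilon_m = (n-1) m_bar E[A_m(X)],
# sigma_m^2 = (n-1)[m_bar E[A_m(X)] + m_bar^2 Var[A_m(X)]]
xs = sample_X(200000)
EA, VA = A_m(xs).mean(), A_m(xs).var()
ups = (n - 1) * m_bar * EA
sig2 = (n - 1) * (m_bar * EA + m_bar**2 * VA)
print(loads.mean(), ups, loads.var(), sig2)
```

With these toy numbers, the empirical mean and variance of the load should land close to $\upsilon_{\rm m}$ and $\sigma_{\rm m}^2$, consistent with the tightness of the approximation reported above.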
{Further, we observe that, (i) $\mathtt{P_r} = 0$ for $\eta=1$ since this corresponds to the extreme case when no BW is given to the access links, and (ii) the rate coverage is maximized for a particular access-backhaul BW split ($\eta^*=\arg\max_{\{\eta\}}\mathtt{P_r}$).} Also note that the rate coverage trends for {\sc case} s~$1$ and $2$ are the same, although $\mathtt{P_r}$ for {\sc case}~$1$ is slightly higher than $\mathtt{P_r}$ for {\sc case}~$2$, since the representative cluster, on average, has more users in {\sc case}~$2$ than in {\sc case}~$1$ (see Corollary~\ref{cor::mean::representative::loads}). However, due to space constraints, we only present the results of {\sc case}~$1$ in the subsequent discussions. \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_comp.pdf} \caption{Comparison of backhaul partition strategies for {\sc case}~$1$ ($\rho = 50$ Mbps, $n=10$).} \label{fig::comparison::rate::cov::bw} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/low_mu.pdf} \caption{Comparison of backhaul BW partition strategies ($\rho = 50$ Mbps, $n=10$, $W = 600$ MHz) for {\sc case}~$1$ and $\mu= 30$ m. {The results are obtained from Corollary~\ref{cor::rate::cov::fixedN}.}\newline\newline}\label{fig::comparison::rate::cov::bw::lowmu} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_approx_fixed_user.pdf} \caption{Comparison of the exact expression (Corollary~\ref{cor::rate::cov::fixedN}) and approximate expression (Lemma~\ref{lemm::approximation}) of the rate coverage probability for {\sc case}~$1$ ($\rho = 50$ Mbps, $W = 600$ MHz, $n=10$). Lines and markers indicate exact and approximate results, respectively. }\label{fig::comparison::rate::approx::fixedN} \end{figure} \subsubsection{Comparison of backhaul BW partition strategies} In Fig.~\ref{fig::comparison::rate::cov::bw}, we overlay $\mathtt{P_r}$ for the three backhaul BW partition strategies.
We observe that the maximum rate coverage, $\mathtt{P_r}^* = \mathtt{P_r}(\eta^*)$ (marked as `*' in the figures), for instantaneous load-based partition dominates $\mathtt{P_r}^*$ for average load-based partition, {and} $\mathtt{P_r}^*$ for average load-based partition in turn dominates $\mathtt{P_r}^*$ for equal partition. {Also note that $\eta^*$ differs across combinations of BW partition strategy and $W$. We further compared these three strategies in a high-blocking environment in Fig.~\ref{fig::comparison::rate::cov::bw::lowmu} by setting $\mu = 30$ m and observe the same ordering of the performance of the three strategies. As expected, $\mathtt{P_r}$ is in general lower in this case.} That said, it should be kept in mind that instantaneous load-based partition requires more frequent feedback of the load information from the SBSs and hence has the highest signaling overhead among the three strategies. The average load-based partition requires comparatively less signaling overhead since it does not require frequent feedback. On the other hand, equal partition does not have this overhead at all. This motivates an interesting performance-complexity trade-off for the design of cellular networks with IAB. \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_users.pdf} \caption{Rate coverage for different numbers of users per hotspot for {\sc case}~$1$ ($W = 600$ MHz, $\rho = 50$ Mbps). The values of $\mathtt{P_r}$ are computed using Lemma~\ref{lemm::approximation}.}\label{fig::comparison::rate::cov::users} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth ]{./Fig/load_critical.pdf} \caption{Total cell-load up to which the IAB-enabled network outperforms the macro-only network (for instantaneous load-based partition).\newline}\label{fig::total::critical::load} \end{figure} \subsubsection{Effect of system BW} We observe the effect of increasing the system BW on rate coverage in Fig.~\ref{fig::comparison::rate::cov::bw}.
As expected, $\mathtt{P_r}$ increases as $W$ increases. However, the increment of $\mathtt{P_r}^*$ saturates for very high values of $W$ since the high noise power {degrades} the link spectral efficiency. Another interesting observation is that, going from $\eta = 0$ to $\eta^*$, $\mathtt{P_r}$ does not increase monotonically. This is due to the fact that sufficient BW needs to be steered from access to backhaul before the network with IAB performs better than the macro-only network (corresponding to $\eta = 0$). \subsubsection{Accuracy of approximation}We now plot $\mathtt{P_r}$ obtained by the approximations in Lemma~\ref{lemm::approximation} in Fig.~\ref{fig::comparison::rate::approx::fixedN}. It is observed that the approximation is surprisingly close to the exact values of $\mathtt{P_r}$ obtained by Corollary~\ref{cor::rate::cov::fixedN}. Motivated by the tightness of the approximation, we proceed with the easy-to-compute expressions of $\mathtt{P_r}$ obtained by Lemma~\ref{lemm::approximation} instead of the exact expressions (Corollary~\ref{cor::rate::cov::fixedN}) for the metrics evaluated in the sequel, namely, critical load, median rate, and $5^{th}$ percentile rate. {It is important to note that each numerical evaluation of these metrics requires a large number of computations of $\mathtt{P_r}$ and is highly inefficient if $\mathtt{P_r}$ is computed by simulation, which further highlights the importance of the analytical expressions derived in this paper.} \subsection{Critical load} We plot the variation of $\mathtt{P_r}$ with $\bar{m}$ in Fig.~\ref{fig::comparison::rate::cov::users}. We observe that as $\bar{m}$ increases, more users share the BW and, as a result, $\mathtt{P_r}$ decreases. However, the optimality of $\mathtt{P_r}$ completely disappears for very large values of $\bar{m}$ ($10<\bar{m}<20$ in this case).
This implies that for a given BW $W$ there exists a {\em critical total cell-load} ($n\bar{m}$) beyond which the gain obtained by the IAB architecture completely disappears. {Observing Fig.~\ref{fig::total::critical::load}, we find that the critical total cell-load varies linearly with the system BW.} The reason for the existence of the critical total cell-load can be intuitively explained as follows. Recall that the SBS rate ${\cal R}_{\rm a}^{\rm SBS}$ was limited by the backhaul constraint ${\cal R}_{\rm b}^{\rm ABS}/N_{\bf x}^{\rm SBS}$. When $\bar{m}$ is high, $N_{\bf x}^{\rm SBS}$ is also high and this puts a stringent backhaul constraint on ${\cal R}_{\rm a}^{\rm SBS}$. Hence, an ABS can serve more users by direct macro-links at the target rate instead of allocating any backhaul partition. \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_50ile.pdf} \caption{Median rate for {\sc case}~$1$ for instantaneous load-based partition ($n=10$).}\label{fig::median::rate} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_95ile.pdf} \caption{$5^{th}$ percentile rate for {\sc case}~$1$ for instantaneous load-based partition ($n=10$).}\label{fig::5thpercentile::rate} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_5ile_comp.pdf} \caption{{Median rate for {\sc case}~$1$ for different backhaul BW partition strategies ($n=10$, $W = 600$ MHz).}\newline}\label{fig::median::rate::comp} \end{figure} \begin{figure} \centering \includegraphics[width=.7\linewidth]{./Fig/ratecov_bw_95ile_comp.pdf} \caption{{$5^{th}$ percentile rate for {\sc case}~$1$ for different backhaul BW partition strategies ($n=10$, $W = 600$ MHz).}}\label{fig::5thpercentile::rate::comp} \end{figure} \subsection{Median and $5^{th}$ percentile rates} We now shift our attention to two more performance metrics of interest, the median ($50^{th}$ percentile) and $5^{th}$ percentile rates, which are denoted as
$\rho_{50}$ and $\rho_{95}$, respectively. These rates are defined as the values where the rate CDF attains $0.5$ and $0.05$, respectively, i.e., $\mathtt{P_r} = 0.5$ at $\rho=\rho_{50}$ and $\mathtt{P_r} = 0.95$ at $\rho=\rho_{95}$. Figs.~\ref{fig::median::rate} and \ref{fig::5thpercentile::rate} illustrate $\rho_{50}$ and $\rho_{95}$, respectively, for different $W$. {We first observe that, for a given $\eta$, these rates increase linearly with $W$. This is because, in all the expressions of rate coverage, $\rho$ and $W$ appear as a ratio ($\rho/W$). Thus, once we find a desired rate coverage at a particular $\rho$ for a given $W$, the same rate coverage will be observed for $kW$ at the target data rate $k\rho$ (where $k$ is a positive constant).} Further, we notice that the median rate is relatively flat around the maximum compared to the $5^{th}$ percentile rate. Also, the optimal $\eta$ does not vary significantly (it stays close to 0.4 in our setup) for the median and $5^{th}$ percentile rates. In Figs.~\ref{fig::median::rate::comp} and \ref{fig::5thpercentile::rate::comp}, we compare the three backhaul BW partition strategies in terms of these two rates. As expected, the ordering in performance is similar to the one observed for $\mathtt{P_r}^*$. Interestingly, from Fig.~\ref{fig::5thpercentile::rate::comp}, it appears that the average and instantaneous load-based partition policies have almost identical performance in terms of the $5^{th}$ percentile rate. {This is because $\rho_{95}$ lies towards the tail of the rate distribution, which is not significantly affected by the difference between instantaneous and average load. However, the performance gap becomes prominent once the median rate is considered.} \section{Conclusion}\label{sec::conclusion} In this paper, we proposed the first 3GPP-inspired analytical framework for two-tier mm-wave HetNets with IAB and investigated three backhaul BW partition strategies.
In particular, our model was inspired by the spatial configurations of the users and BSs considered in the 3GPP simulation models of mm-wave IAB, where the SBSs are deployed at the centers of user hotspots. Under the assumption that the mm-wave communication is noise-limited, we evaluated the downlink rate coverage probability. As a key intermediate step, we characterized the PMFs of the load on the ABS and SBS for two different user distributions per hotspot. Our analysis leads to two important system-level insights: (i) for the three performance metrics, namely downlink rate coverage probability, median rate, and $5^{th}$ percentile rate, there exist optimal access-backhaul bandwidth partition splits for which the metrics are maximized, and (ii) there exists a maximum total cell-load that can be supported using the IAB architecture. { This work has numerous extensions. From the modeling perspective, although the noise power dominates the interference power in most of the operating regimes of mm-wave networks, it is important to consider interference from the ABS and the SBSs in the access links, which may not be negligible in {a very} dense setup. For the analysis of rate coverage in this case, one needs to evaluate the joint distribution of interference and load, which will be spatially coupled by the locations of the SBSs. Building on this baseline model of IAB, one can also study the spatial multiplexing gains in resource allocation obtained by massive multi-user multiple-input multiple-output (MU-MIMO) transmissions in the downlink. Further, this framework can be used to study IAB-enabled cellular networks where the control signalling for cell association is also performed in mm-wave. Under this setup, the problems of mm-wave beam sweeping and the corresponding cell association delays can be studied.
Another useful extension of this work is to consider an additional {set of} users having open access to all the SBSs, while in this paper we only considered users with closed access to the SBSs at the hotspot centers. From the stochastic geometry perspective, it will be useful to develop analytical generative models for correlated blocking of the mm-wave signals, making these analytical models more accurate. As mentioned earlier, this work also {lays} the foundations of the analytical characterization of cell-load in 3GPP-inspired HetNet models. Using the fundamentals of this load modeling approach, one can further design optimal bias-factors depending on the volume of cell load at each hotspot and improve the per-user rates. Finally, the analysis can be extended to design delay-sensitive routing/scheduling policies, which is also a relevant research direction for IAB-enabled networks. }
\section{Introduction} In recent years, research in control theory and robotics has focused on developing efficient controllers for robots that operate in the real world. Controller synthesis techniques such as reinforcement learning, optimal control, and model predictive control have been used to synthesize complex policies. However, if there is a large amount of uncertainty about the real-world environment that the system interacts with, the robustness of the synthesized controller becomes critical. This is particularly true in \textit{safety-critical} systems, where the actions of an autonomous agent may affect human lives. This motivates us to provably verify the properties of controllers in simulation before deployment in the real world. In this paper, we present an active machine learning framework that is able to verify black-box systems against a given set of safety specifications or, alternatively, find adversarial counterexamples to them. We test the controller safety under uncertainty that arises from stochastic environments and errors in modeling. In essence, we actively search for adversarial environments under which the controller could have to operate that lead to failure modes in simulation. \begin{figure} \vspace{2mm} \includegraphics{figures/framework} \caption{Framework for active testing. We query the simulation of the system with environment parameters~$w$ to obtain trajectories~$\xi$. We test these for safety violations in a specification monitor. The Bayesian optimization framework actively queries new parameters~$w$ that are promising candidates for finding counterexamples that violate the safety specification.} \label{Fig:framework} \end{figure} Historically, designing robust controllers has been considered in control theory \cite{Sastry:1989:ACS:63437,Stengel:1986:SOC:26887}. A common issue with these techniques is that, although they consider uncertainty, they rely on simple linear models of the underlying system.
This means that the resulting controllers are often either overly conservative or violate safety constraints if they fail to capture nonlinear effects. For nonlinear models with complex dynamics, reinforcement learning has been successful in synthesizing high-fidelity controllers. Recently, algorithms based on reinforcement learning that can handle uncertainty have been proposed~\cite{KahnVPAL17,niv2002evolution,poupart2008model}, where the performance is measured in expectation. A fundamental issue with learned controllers is that it is difficult to provide formal guarantees for safety in the presence of uncertainty. For example, a controller for an autonomous vehicle must consider human driver behaviors, pedestrian behaviors, traffic lights, uncertainty due to sensors, etc. Without formally verifying that these controllers are indeed safe, deploying them on the road could lead to loss of property or human lives. \textit{Formal safety certificates}, i.e., mathematical proofs of safety, have been considered in the formal methods community, where safety requirements are referred to as a \textit{specification}. There, the goal is to verify that the behaviors of a particular model satisfy a specification~(\cite{Clarke:2000:MC:332656, mitchell2000level}). Synthesizing controllers that satisfy a high-level temporal specification has been studied in the context of motion planning~\cite{bhatia2010sampling} and for cyber-physical systems~\cite{raman2014model}. However, these techniques rely on simple model dynamics. For nonlinear systems, reachability algorithms based on level set methods have been used to approximate backward reachable sets for safety verification~\cite{mitchell2004computing,Berkenkamp2017Safe}. However, these methods suffer from two major drawbacks: (1)~the curse of dimensionality of the state space, which limits them to low-dimensional systems; and (2)~the need for \textit{a priori} knowledge of the system dynamics.
A dual, and often simpler, problem is \textit{falsification}, which tests the system within a set of environment conditions for adversarial examples. Adversarial examples have recently been considered for neural networks~\cite{goodfellow2014explaining,papernot2016practical,DBLP:journals/corr/BehzadanM17, carlini2017towards}, where the input is typically perturbed locally in order to find counterexamples. In \cite{DBLP:journals/corr/HuangPGDA17}, the authors compute adversarial perturbations for a trained neural network policy for a subset of white-box and black-box systems. However, these local perturbations are often not meaningful for dynamic systems. Recently,~\cite{dreossi2017compositional, Pei2017} have focused on the testing of closed-loop safety-critical systems with neural networks by finding ``meaningful" perturbations. Testing black-box systems in simulators is a well-studied problem in the formal methods community~\cite{donze2010breach,c2e2,annpureddy2011s}. The heart of research in black-box testing focuses on developing smarter search techniques that efficiently sample the uncertainty space. Indeed, in recent years, several sequential search algorithms based on heuristics such as Simulated Annealing~\cite{annpureddy2011s}, Tabu search~\cite{deshmukh2015stochastic}, and CMA-ES~\cite{hansen2016cma} have been suggested. Although these algorithms sample the uncertainty space efficiently, they do not utilize any of the information gathered during previous simulations. One active method that has been used recently for testing black-box systems is Bayesian Optimization (BO)~\cite{mockus2012bayesian}, an optimization method that aims to find the global optimum of an \textit{a priori} unknown function based on noisy evaluations. Typically, BO algorithms are based on Gaussian Process (GP~\cite{3569}) models of the underlying function, and certain algorithms provably converge close to the global optimum~\cite{srinivas2009gaussian}.
It has been used in robotics to, for example, safely optimize controller parameters of a quadrotor~\cite{Berkenkamp2016SafeOpt}. In the testing setting, BO has been used to actively find counterexamples by treating the search problem as a minimization problem over adversarial control signals in~\cite{DHJMP17}. However, the authors do not consider the structure of the problem and thereby violate the smoothness assumptions made by the GP model. As a result, their methods are slow to converge or may fail to find counterexamples. In this paper, we provide a formal framework that uses BO to actively test and verify closed-loop black-box systems in simulation. We model the relation between environments and safety specifications using GPs and use BO to predict the environment scenarios most likely to cause failures in our controllers. Unlike previous approaches, we exploit structure in the problem in order to provide a formal way to reason across multiple safety constraints and find counterexamples. Hence, our approach is able to find counterexamples more quickly than previous approaches. Our main contributions are: \begin{itemize} \item An active learning framework for testing and verifying robotic controllers in simulation. Our framework can find adversarial examples for a synthesized controller independently of its structure or how it was synthesized. \item A common GP framework to model logical safety specifications, along with a theoretical analysis of when a system is verified. \end{itemize} \section{Problem Statement} \label{sec:problem_statement} We address the problem of testing complex black-box closed-loop robotic systems in simulation. We assume that we have access to a simulation of the robot that includes the control strategy, i.e., the closed-loop system. The simulator is parameterized by a set of parameters~${\mathbf{w} \in \mathbb{W}}$, which model all sources of uncertainty.
For example, they can represent environment effects such as weather, non-deterministic components such as other agents interacting with the simulator, or uncertain parameters of the physical system, e.g., friction. The goal is to test whether the system remains safe for all possible sources of uncertainty in~$\mathbb{W}$. We specify these safety constraints on finite-length trajectories of the system that can be obtained by simulating the robot for a given set of environment parameters~${\mathbf{w} \in \mathbb{W}}$. Safety constraints on these trajectories are specified using logic. We explain this in detail in~\cref{Sec:Req}, but the result is a specification~$\varphi$ that can, in general, be written as a requirement~$\varphi(\mathbf{w}) > 0, \forall \mathbf{w} \in \mathbb{W}$. For example,~$\varphi$ can encode state or input constraints that have to be satisfied over time. We want to test whether there exists an adversarial example~${\mathbf{w} \in \mathbb{W}}$ for which the specification is violated, i.e.,~${ \varphi(\mathbf{w}) < 0 }$. Typically, adversarial examples are found by randomly sampling the environment and simulating the behaviors. However, this approach does not provide any guarantees and does not allow us to conclude that no adversarial examples exist if none are found in our samples. Moreover, since high-fidelity simulations can often be very expensive, we want to minimize the number of simulations that we have to carry out in order to find a counterexample. We propose an active learning framework for testing, where we utilize the results from previous simulation runs to make more informed decisions about which environment to simulate next.
In particular, we pose the search problem for a counterexample as an optimization problem, \begin{equation} \label{Eqn:BoolFalse} \operatornamewithlimits{argmin}_{\mathbf{w} \in \mathbb{W}}\, \varphi(\mathbf{w}) , \end{equation} where we want to minimize the number of queries~$\mathbf{w}$ until a counterexample is found or we can verify that no counterexample exists. The main challenge is that the functional dependence~$\varphi(\cdot)$ between parameters in~$\mathbb{W}$ and the specification is unknown \textit{a priori}, since we treat the simulator as a black-box. Solving this problem is difficult in general, but we can exploit regularity properties of~$\varphi(\mathbf{w})$. In particular, in the following we use GPs to model the specification and use the model to pick parameters that are likely to be counterexamples. \section{Background} \label{sec:background} In this section, we give an overview of formal safety specifications and Gaussian processes, which we use in~\cref{sec:main_section} to verify the closed-loop black-box system. \subsection{Safety Specification} \label{Sec:Req} In the formal methods community, complex safety requirements are expressed using automata~\cite{alur1994theory} and temporal logic~\cite{Pnueli:1977:TLP:1382431.1382534,maler2004monitoring}. These allow us to specify complex constraints, which can also have temporal dependence. \begin{example} \label{Ex:quadcopter_spec} A safety constraint for a quadcopter might be that the quadcopter cannot fly at an altitude~$h$ greater than \unit[3]{m} when the battery level~$b$ is below 30\%. In logic, we can express this as ``$b < 0.3$ \textbf{implies}($\rightarrow$) $h < 3$'', which in words says that if the battery level is below $30\%$, the quadcopter must fly at a height less than \unit[3]{m}. \end{example} Importantly, these kinds of specifications make no assumptions about the underlying system itself. They just state requirements that must hold for all simulations in~$\mathbb{W}$.
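As an illustration, the implication in~\cref{Ex:quadcopter_spec} can be checked directly on a simulated trajectory. The sketch below is a minimal example; the trajectory format, a list of (battery, height) pairs, is an assumption for illustration and not part of our simulator interface.

```python
# Hypothetical sketch: checking the quadcopter specification
# "b < 0.3 implies h < 3" on a trajectory of (battery, height) pairs.
# The trajectory format is an illustrative assumption.

def spec_holds(trajectory):
    """Return True iff every state satisfies b < 0.3 -> h < 3."""
    # p -> q is equivalent to (not p) or q
    return all((not (b < 0.3)) or (h < 3.0) for b, h in trajectory)

safe = [(0.9, 2.5), (0.25, 2.0)]    # low battery, but flying low
unsafe = [(0.9, 2.5), (0.25, 3.5)]  # low battery and flying too high
```

A trajectory violates the specification as soon as a single state has low battery while flying above the altitude limit.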
Formally, a logic specification is a function that tests properties of a particular trajectory. However, we will continue to write~$\varphi(\mathbf{w})$ to denote the specification that tests trajectories generated by the simulator with parameters $\mathbf{w}$. A specification~$\varphi$ consists of multiple individual constraints, called predicates, which form the basic building blocks of the logic. These predicates can be combined using a syntax or grammar of logical operations: \begin{equation} \varphi := \mu \,|\, \neg \mu \,|\, \varphi \wedge \psi \,|\, \varphi \vee \psi, \end{equation} where $\mu: \Xi \rightarrow \mathbb{R}$ is a predicate, assumed to be a smooth and continuous function of a trajectory $\xi \in \Xi$. The constraint $\mu > 0$ forms the basic building block of the overall system specification $\varphi$. We say a predicate is satisfied if $\mu(\xi) > 0$ and falsified otherwise. The operations $\neg, \wedge, \vee$ represent \textit{negation}, \textit{conjunction} (and), and \textit{disjunction} (or), respectively. These basic operations can be combined to define complex Boolean formulas such as implication, $\rightarrow$, and if-and-only-if, $\leftrightarrow$, using the rules \begin{equation} \varphi \rightarrow \psi := \neg \varphi \vee \psi, \text{~and~} \varphi \leftrightarrow \psi := (\neg \varphi \wedge \neg \psi) \vee (\varphi \wedge \psi). \label{eq:rewrite_rules} \end{equation} Since $\mu$ is a real-valued function, we can convert these Boolean logic statements into an equivalent equation with continuous output, which defines the \textit{quantitative semantics}, \begin{equation} \begin{aligned} \mu(\xi) & := \mu(\xi), &(\varphi \wedge \psi )(\xi) &:= \min(\varphi(\xi), \psi(\xi)), \\ \neg \mu(\xi) & := -\mu(\xi), &(\varphi \vee \psi)(\xi) &:= \max(\varphi(\xi), \psi(\xi)).
\end{aligned} \label{eq:convert_spec_to_function} \end{equation} This allows us to confirm that a logic statement~$\varphi$ holds true for all trajectories generated by simulators in~$\mathbb{W}$, by confirming that the function~$\varphi(\mathbf{w})$ takes positive values for all~${\mathbf{w} \in \mathbb{W}}$. In the quantitative semantics~\cref{eq:convert_spec_to_function}, the satisfaction of a requirement is no longer a yes or no answer, but can be quantified by a real number. The nature of this quantification is similar to that of a reward function, where lower values indicate a larger safety violation. This allows us to introduce a ranking among failures: ${\varphi(\mathbf{w}_1) < \varphi(\mathbf{w}_2)}$ implies that $\mathbf{w}_1$ is a more ``dangerous'' failure case than $\mathbf{w}_2$. To guarantee safety, we have to take a pessimistic outlook, and denote $\varphi(\mathbf{w}) \leq 0$ as a violation and $\varphi(\mathbf{w}) > 0$ as satisfaction of the specification~$\varphi$. \begin{example} Let us look at the specification in~\cref{Ex:quadcopter_spec}, ${\varphi := (b < 0.3 )\rightarrow (h < 3)}$. Applying the re-write rule~\cref{eq:rewrite_rules}, this can be written as ${\neg(b < 0.3) \vee (h < 3)}$. Applying the quantitative semantics~\cref{eq:convert_spec_to_function}, we get ${\varphi = \max(\mu_1, \mu_2)}$ with the two predicates ${\mu_1 = b - 0.3}$ and ${\mu_2 = 3 - h}$. Intuitively, this means that $\varphi > 0$, i.e., the specification is satisfied, if the battery level is greater than 30$\%$ or if the quadcopter flies at an altitude less than \unit[3]{m}. \end{example} \subsection{Gaussian Process} \label{Sec:GP} For general black-box systems, the dependence of the specification $\varphi(\cdot)$ on the parameters~$\mathbf{w} \in \mathbb{W}$ is unknown \textit{a priori}. We use a GP to approximate each predicate $\mu(\cdot)$ in the domain $\mathbb{W}$. We detail the modeling of $\varphi(\cdot)$ in~\cref{sec:main_section}.
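The quantitative semantics above can be sketched in a few lines for the quadcopter example, with $\mu_1 = b - 0.3$ and $\mu_2 = 3 - h$. Scoring a whole trajectory by the minimum over time is our assumption here, mirroring the requirement that $\varphi > 0$ must hold in every state.

```python
# Sketch of the quantitative semantics for the quadcopter example:
# phi = max(mu_1, mu_2), with mu_1 = b - 0.3 and mu_2 = 3 - h.
# Taking the min over time (an assumption) scores a trajectory by
# its worst state, so phi > 0 iff the spec holds everywhere.

def mu_1(state):
    b, h = state
    return b - 0.3        # battery margin

def mu_2(state):
    b, h = state
    return 3.0 - h        # altitude margin

def phi(trajectory):
    # disjunction -> max over predicates; "always" -> min over time
    return min(max(mu_1(s), mu_2(s)) for s in trajectory)
```

The sign of the returned value quantifies how strongly the trajectory satisfies or violates the specification.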
The following introduction to GPs is based on~\cite{3569}. GPs are a non-parametric regression method from machine learning, where the goal is to find an approximation of the nonlinear function $\mu : \mathbb{W} \rightarrow \mathbb{R}$ from an environment $\mathbf{w} \in \mathbb{W}$ to the function value $\mu(\mathbf{w})$. This is done by considering the function values $\mu(\mathbf{w})$ to be random variables, such that any finite number of them have a joint Gaussian distribution. The Bayesian, non-parametric regression is based on a prior mean function and a kernel function $k(\mathbf{w}, \mathbf{w}')$, which defines the covariance between the function values $\mu(\mathbf{w}), \mu(\mathbf{w}')$ at two points $\mathbf{w}, \mathbf{w}' \in \mathbb{W}$. We set the prior mean to zero, since we do not have any knowledge about the system. The choice of kernel function is problem-dependent and encodes assumptions about the unknown function. We can obtain the posterior distribution of a function value $\mu(\mathbf{w})$ at an arbitrary environment $\mathbf{w} \in \mathbb{W}$ by conditioning the GP distribution of $\mu$ on a set of $n$ past measurements, ${\mathbf{y}_n = (\hat{\mu}(\mathbf{w}_1),\dots,\hat{\mu}(\mathbf{w}_n))}$ at environment scenarios ${W_n = \{\mathbf{w}_1,\dots,\mathbf{w}_n\}}$, where $\hat{\mu}(\mathbf{w}) = \mu(\mathbf{w}) + \omega$ and ${\omega \sim \mathcal{N}(0, \sigma^2) }$ is Gaussian noise.
The posterior over $\mu(\mathbf{w})$ is a GP distribution again, with mean $m_n(\mathbf{w})$, covariance $k_n(\mathbf{w},\mathbf{w}')$, and variance $\sigma_n^2(\mathbf{w})$: \begin{equation} \begin{split} \label{Eqn:Predict_GP} m_n(\mathbf{w}) &= \mathbf{k}_n(\mathbf{w}) (\mathbf{K}_n + \mathbf{I}_n \sigma^2)^{-1} \mathbf{y}_n, \\ k_n(\mathbf{w}, \mathbf{w}') & = k(\mathbf{w}, \mathbf{w}') - \mathbf{k}_n(\mathbf{w})(\mathbf{K}_n + \mathbf{I}_n \sigma^2)^{-1}\mathbf{k}_n^T(\mathbf{w}'), \\ \sigma_n^2(\mathbf{w}) & = k_n(\mathbf{w},\mathbf{w}), \end{split} \end{equation} where the vector $\mathbf{k}_n(\mathbf{w}) = [k(\mathbf{w}, \mathbf{w}_1), \dots , k(\mathbf{w}, \mathbf{w}_n)]$ contains the covariances between the new environment, $\mathbf{w}$, and the environment scenarios in $W_n$, the kernel matrix ${ \mathbf{K}_n \in \mathbb{R}^{n\times n} }$ has entries ${[\mathbf{K}_n](i,j) = k(\mathbf{w}_i, \mathbf{w}_j)}$, with ${i,j \in \{1,\dots,n\}}$, and $\mathbf{I}_n \in \mathbb{R}^{n\times n}$ is the identity matrix. \subsection{Bayesian Optimization (BO)} \label{sec:bayesian_optimization} In the following, we use BO in order to find the minimum of the unknown function~$\varphi$, which we construct using the GP models of~$\mu$ in~\cref{sec:main_section}. BO uses a GP model to query parameters that are informative about the minimum of the function. In particular, the \textsc{GP-LCB} algorithm from~\cite{srinivas2009gaussian} uses the GP prediction and associated uncertainty in~\cref{Eqn:Predict_GP} to trade off exploration and exploitation by, at iteration~$n$, selecting an environment according to \begin{equation} \label{Eqn:BO_f_acqu} \mathbf{w}_n = \operatornamewithlimits{argmin}_{\mathbf{w} \in \mathbb{W}} m_{n-1}(\mathbf{w}) - \beta_n^{1/2} \sigma_{n-1}(\mathbf{w}) , \end{equation} where $\beta_n$ determines the confidence interval. We provide an appropriate choice for~$\beta_n$ in~\cref{thm:verif}.
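As a sketch, the posterior equations in~\cref{Eqn:Predict_GP} can be implemented directly with NumPy. The squared-exponential kernel and noise level below are illustrative assumptions; any valid kernel could be substituted.

```python
import numpy as np

def kernel(a, b, ell=1.0):
    """Squared-exponential kernel for scalar inputs (an assumption)."""
    a, b = np.atleast_1d(a), np.atleast_1d(b)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(w, W_n, y_n, noise=1e-2):
    """Posterior mean and variance of mu(w) given data (W_n, y_n)."""
    K = kernel(W_n, W_n) + noise ** 2 * np.eye(len(W_n))  # K_n + I sigma^2
    k = kernel(w, W_n)                                    # k_n(w)
    K_inv = np.linalg.inv(K)
    mean = k @ K_inv @ np.asarray(y_n)
    cov = kernel(w, w) - k @ K_inv @ k.T
    return mean, np.diag(cov)
```

Near observed data the posterior mean interpolates the measurements and the variance collapses, while far from the data the variance reverts to the prior.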
At each iteration,~\cref{Eqn:BO_f_acqu} selects parameters for which the lower confidence bound of the GP is minimal. Repeatedly evaluating the true function~$\varphi$ at samples given by~\cref{Eqn:BO_f_acqu} improves the GP model and decreases uncertainty at candidate locations for the minimum, such that the global minimum is found eventually~\cite{srinivas2009gaussian}. \section{Active Testing for Counterexamples} \label{sec:main_section} In this section, we show how to model specifications~$\varphi$ in~\cref{Eqn:BoolFalse} using GPs without violating smoothness assumptions, and how to use this to find adversarial counterexamples. In order to use BO to optimize~\cref{Eqn:BoolFalse}, we need to construct reliable confidence intervals on~$\varphi$. However, if we were to model $\varphi$ as a GP with commonly-used kernels, it would need to be a smooth function of~$\mathbf{w}$. Even though the predicates,~$\mu$, are typically smooth functions of the trajectories, and hence smooth in $\mathbf{w}$, conjunction and disjunction ($\min$ and $\max$) in~\cref{eq:convert_spec_to_function} are non-smooth operators that render $\varphi$ non-smooth as well. Instead, we exploit the structure of the specification~$\varphi$ and decompose~$\varphi$ into a parse tree, where the leaf nodes are the predicates. \begin{definition}[Parse Tree $\mathcal{T}$] Given a specification formula $\varphi$, the corresponding parse tree,~$\mathcal{T}$, has leaf nodes that correspond to function predicates, while the other nodes are $\max$ (disjunctions) and $\min$ (conjunctions). \end{definition} A parse tree is an equivalent graphical representation of~$\varphi$. For example, consider the specification \begin{equation} \label{Eqn:Example_parseTree} \varphi := (\mu_1 \vee \mu_2) \rightarrow (\mu_3 \vee \mu_4) = (\neg \mu_1 \wedge \neg \mu_2) \vee (\mu_3 \vee \mu_4), \end{equation} where the second equality follows from De Morgan's law.
We can obtain an equivalent function~${ \varphi(\mathbf{w}) }$ with~\cref{eq:convert_spec_to_function}, \begin{equation} \begin{aligned} \varphi(\mathbf{w}) = \max\big( &\min (-\mu_1(\mathbf{w}),\, -\mu_2(\mathbf{w})), \\ & \max (\mu_3(\mathbf{w}),\, \mu_4(\mathbf{w})) \big) . \end{aligned} \label{eq:parse_tree_function} \end{equation} The parse tree, $\mathcal{T}$, for $\varphi$ in~\cref{eq:parse_tree_function} is shown in~\cref{Fig:parse_tree}. We can use the parse tree to decompose any complex specification into~$\min$ and~$\max$ functions of the individual predicates; that is, $\varphi(\mathbf{w}) = \mathcal{T}(\mu_1(\mathbf{w}), \dots, \mu_q(\mathbf{w}))$. \begin{figure}[t] \centering \includegraphics[scale=0.45]{figures/parse_tree} \caption{Equivalent parse tree~$\mathcal{T}$ for~$\varphi$ in~\cref{Eqn:Example_parseTree}, corresponding to the function~\cref{eq:parse_tree_function}. We replace the predicates~$\mu_i$ with their corresponding pessimistic GP predictions to obtain a lower bound on~$\varphi(\mathbf{w})$.} \label{Fig:parse_tree} \end{figure} We now model each predicate~$\mu_i(\mathbf{w})$ in the parse tree $\mathcal{T}$ of $\varphi$ with a GP and combine them with the parse tree to obtain confidence intervals on the overall specification~$\varphi(\mathbf{w})$ for BO. GP-LCB, as expressed in~\cref{Eqn:BO_f_acqu}, can be used to search for the minimum with a single GP. A key insight for extending~\cref{Eqn:BO_f_acqu} across multiple GPs is that the minimum of~\cref{Eqn:BoolFalse} is, with high probability, lower bounded by the lower confidence interval of one of the GPs used to model the predicates of $\varphi$. This is because the $\max$ and $\min$ operators do not change the values of the predicates, but only make a choice between them. As a consequence, we can model the smooth parts of $\varphi$, i.e., the predicates, using GPs and then account for the non-smoothness through the parse tree.
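The parse tree for~\cref{Eqn:Example_parseTree} can be represented as a small recursive structure. In the sketch below, leaves are indices into a vector of predicate values and negation is folded into the leaves; this encoding is our assumption for illustration, not a prescribed implementation.

```python
# Parse tree for phi = max(min(-mu1, -mu2), max(mu3, mu4)).
# Leaves are integer indices into the predicate vector; internal
# nodes are ("min", ...), ("max", ...) or ("neg", child).
TREE = ("max", ("min", ("neg", 0), ("neg", 1)), ("max", 2, 3))

def evaluate(node, mu):
    """Evaluate a parse tree on a list of predicate values mu."""
    if isinstance(node, int):
        return mu[node]
    op, *children = node
    if op == "neg":
        return -evaluate(children[0], mu)
    vals = [evaluate(c, mu) for c in children]
    return min(vals) if op == "min" else max(vals)
```

Replacing the predicate values `mu` with GP lower confidence bounds turns the same evaluation into an acquisition function over the whole specification.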
For each predicate~$\mu_i$ in the parse tree $\mathcal{T}$ of $\varphi$, we construct a lower confidence bound~$l_i(\mathbf{w}) = m^i_{n-1}(\mathbf{w}) - \beta_n^{1/2} \sigma^i_{n-1}(\mathbf{w})$, where $m^i, \sigma^i$ are the mean and standard deviation of the GP corresponding to $\mu_i$. From this, we can construct a lower confidence bound on~$\varphi$ as $\mathcal{T}(l_1(\mathbf{w}), \dots, l_q(\mathbf{w}))$, where we replace the $i$th leaf node~$\mu_i$ of the parse tree with the pessimistic prediction $l_i$ of the corresponding GP. Similar to~\cref{Eqn:BO_f_acqu}, the corresponding acquisition function for BO uses this lower bound to select the next evaluation point, \begin{equation} \label{Eqn:BO_multi_GP} \mathbf{w}_n = \operatornamewithlimits{argmin}_{\mathbf{w} \in \mathbb{W}}\, \mathcal{T}(l_1(\mathbf{w}), \dots, l_q(\mathbf{w})). \end{equation} Intuitively, the next environment selected for simulation is the one that minimizes the worst-case prediction of~$\varphi$. Effectively, we propagate the confidence intervals associated with the GP of each predicate through the parse tree~$\mathcal{T}$ in order to obtain predictions about~$\varphi$ directly. Note that~\cref{Eqn:BO_multi_GP} does not return an environment sample that minimizes the satisfaction of all the predicates; it only minimizes the lower bound on~$\varphi$. \cref{Algo:active_testing} describes our active testing procedure. The algorithm proceeds by first computing the parse tree $\mathcal{T}$ from the specification, $\varphi$. At each iteration~$n$ of BO, we select new environment parameters~$\mathbf{w}_n$ according to~\cref{Eqn:BO_multi_GP}. We then simulate the system with parameters~$\mathbf{w}_n$ and evaluate each predicate~$\mu_i$ on the simulated trajectories. Lastly, we update each GP with the corresponding measurement of~$\mu_i$.
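A grid-based sketch of the acquisition step in~\cref{Eqn:BO_multi_GP}: each predicate's GP supplies a mean and standard deviation at a set of candidate environments, and the candidate minimizing the parse tree of lower confidence bounds is selected. The stand-in mean and standard-deviation arrays are assumptions for illustration, not fitted GP models.

```python
import numpy as np

def acquisition(candidates, means, stds, tree, beta=4.0):
    """Pick the candidate minimizing the parse tree of the LCBs.

    means/stds: one array per predicate, evaluated at `candidates`.
    tree: function mapping a list of predicate values to phi.
    """
    lcb = [m - np.sqrt(beta) * s for m, s in zip(means, stds)]
    scores = np.array([tree([l[j] for l in lcb])
                       for j in range(len(candidates))])
    best = int(np.argmin(scores))
    return candidates[best], scores[best]
```

For a disjunction of two predicates, the parse tree is simply `max`, and the selected environment is the one whose pessimistic disjunction value is smallest.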
The algorithm either returns a counterexample that minimizes~\cref{Eqn:BoolFalse}, or, when~$\mathcal{T}(l_1(\mathbf{w}), \dots, l_q(\mathbf{w}))$ is greater than zero everywhere, we can conclude that the system has been verified. \subsection{Theoretical Results} \label{sec:theory} We can transfer theoretical convergence results for GP-LCB~\cite{srinivas2009gaussian} to the setting of~\cref{Algo:active_testing}. To do this, we need to make structural assumptions about the predicates. In particular, we assume that they have bounded norm in the Reproducing Kernel Hilbert Space (RKHS, \cite{steinwart2008support}) that corresponds to the GP's kernel. These are well-behaved functions of the form~$\mu_i(\mathbf{w}) = \sum_{j\geq 0} \alpha_j k_i(\mathbf{w}, \mathbf{w}_j)$ with representer points~$\mathbf{w}_j$ and weights~$\alpha_j$ that decay sufficiently quickly. We leverage theoretical results from~\cite{ChowdhuryG17} and~\cite{Berkenkamp2016SafeOpt} that allow us to build reliable confidence intervals using the GP models from~\cref{Sec:GP}. We have the following result. \begin{restatable}{theorem}{maintheorem} Assume that each predicate $\mu_i$ has RKHS norm bounded by~$B_i$ and that the measurement noise is $\sigma$-sub-Gaussian. Select $\delta \in (0,1)$, $\mathbf{w}_n$ according to~\cref{Eqn:BO_multi_GP}, and let ${\beta^{1/2}_n = \sum_i B_i + 4 \sigma \sqrt{ 1 + \ln(1/\delta) + \sum_i I(\mathbf{y}^i_{n-1}; \mu_i)}}$. If~$\mathcal{T}(l_1(\mathbf{w}_n), \dots, l_q(\mathbf{w}_n)) > 0$, then with probability at least $1-\delta$ we have that $\min_{\mathbf{w} \in \mathbb{W}} \varphi(\mathbf{w}) > 0$ and the system has been verified against all environments in~$\mathbb{W}$. \label{thm:verif} \end{restatable} Here,~$I(\mathbf{y}^i_{n-1}; \mu_i)$ is the mutual information between~$\mathbf{y}^i_{n-1}$, the~$n-1$ noisy measurements of~$\mu_i$, and the GP prior of~$\mu_i$.
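For GP models, the mutual information term in~\cref{thm:verif} has the closed form $I(\mathbf{y}_n; \mu) = \tfrac{1}{2}\log\det(\mathbf{I} + \sigma^{-2}\mathbf{K}_n)$~\cite{srinivas2009gaussian}, so $\beta_n^{1/2}$ can be computed from the kernel matrices. The sketch below assumes the RKHS bounds $B_i$ are given.

```python
import numpy as np

def mutual_information(K_n, sigma):
    """I(y_n; mu) = 0.5 * logdet(I + K_n / sigma^2) for a GP."""
    n = K_n.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(n) + K_n / sigma ** 2)
    return 0.5 * logdet

def beta_sqrt(B, infos, sigma, delta):
    """beta_n^{1/2} from the theorem; B: RKHS bounds B_i,
    infos: mutual information terms I(y^i_{n-1}; mu_i)."""
    return sum(B) + 4 * sigma * np.sqrt(1 + np.log(1 / delta) + sum(infos))
```

As data accumulates, the mutual information (and hence $\beta_n$) grows only sublinearly for common kernels, which is what makes the confidence bounds usable in practice.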
This function was shown to be sublinear in~$n$ for many commonly-used kernels in~\cite{srinivas2009gaussian}; see the appendix for more details. \cref{thm:verif} states that we can verify the system against adversarial examples with high probability, by checking whether the worst-case lower confidence bound is greater than zero. We provide additional theoretical results about the existence of a finite~$n$ such that the system can be verified up to~$\epsilon$ accuracy in the appendix. \begin{algorithm}[t] \caption{Active Testing with Bayesian Optimization}\label{Algo:active_testing} \begin{algorithmic}[1] \Procedure{ActiveTesting}{$\varphi, \mathbb{W}, \beta, \textit{GP}s$} \State Build parse tree $\mathcal{T}$ based on specification $\varphi$ \For {$n = 0,\dots$} \Comment{Until budget or convergence} \State $l_i(\mathbf{w}) = m^i_{n-1}(\mathbf{w}) - \beta_n^{1/2} \sigma^i_{n-1}(\mathbf{w}), \, i=1,\dots,q$ \State $\mathbf{w}_n = \operatornamewithlimits{argmin}_{\mathbf{w} \in \mathbb{W}} \mathcal{T}(l_1(\mathbf{w}), \dots, l_q(\mathbf{w}))$ \State Update each GP model of the predicates with \par \hspace{1.5ex} measurements $(\mathbf{w}_n,\, \mu_i(\mathbf{w}_n))$. \EndFor \State return~$\min_i\, \varphi(\mathbf{w}_i)$, the worst result. \EndProcedure \end{algorithmic} \end{algorithm} \section{Evaluation} \label{sec:evaluation} In this section, we evaluate our method on several challenging test cases. A Python implementation of our framework and the following experiments can be found at \mbox{\url{https://github.com/shromonag/adversarial_testing.git}}. In order to use~\cref{Algo:active_testing}, we have to solve the optimization problem~\cref{Eqn:BO_multi_GP}. In practice, different optimization techniques have been proposed to find the global minimum of a function. One popular algorithm is DIRECT~\cite{finkel2003direct}, a gradient-free optimization method. An alternative is to use gradient-based methods together with random restarts.
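A sketch of the random-restart strategy just mentioned: draw many samples from $\mathbb{W}$, keep the best few as starting points, and refine each with a simple shrinking-step local search. The local-search scheme is an illustrative assumption; in practice any gradient-based optimizer could refine the restarts.

```python
import numpy as np

def minimize_with_restarts(objective, low, high, n_samples=200,
                           n_restarts=5, n_steps=50, seed=0):
    """Minimize a scalar objective on [low, high] via random restarts."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(low, high, size=n_samples)
    order = np.argsort([objective(s) for s in samples])
    best_w, best_val = None, np.inf
    for w in samples[order[:n_restarts]]:   # refine the best samples
        step = (high - low) / 10.0
        for _ in range(n_steps):            # shrinking-step local search
            for cand in (w - step, w + step):
                cand = np.clip(cand, low, high)
                if objective(cand) < objective(w):
                    w = cand
            step *= 0.7
        if objective(w) < best_val:
            best_w, best_val = w, objective(w)
    return best_w, best_val
```

The initial random sampling guards against getting stuck in a poor local minimum of the acquisition function, while the local refinement sharpens the candidate returned to the simulator.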
In particular, we sample a large number of potential environment scenarios at random from $\mathbb{W}$, and run separate optimization routines to minimize~\cref{Eqn:BO_multi_GP} starting from these. Another challenge is that the dimensionality of the optimization problem can often be very large. However, methods that allow for more efficient computation do exist. These methods reduce the effective size of the input space and thereby make the optimization problem more tractable. One possibility is to use a random embedding to reduce the input dimension, as done in Random Embedding Bayesian Optimization (REMBO,~\cite{wang2013bayesian}). We can then model the GP in this smaller input dimension and carry out BO in the lower-dimensional input space. \subsection{Modeling smooth functions vs.\ non-smooth functions} \label{Eval:smooth_non_smooth} In the following, we show the effectiveness of modeling smooth functions by GPs and handling the non-smooth operations in the BO search, as opposed to modeling the non-smooth function by a single GP. Consider the following, illustrative optimization problem, \begin{equation} \label{Eqn:sin_cos} w^{*} = \operatornamewithlimits{argmin}_{w \in (0, 10)} \max (\sin(w)+ 0.65, \cos(w)+0.65). \end{equation} \begin{figure*} \centering \begin{subfigure}[t]{0.32\textwidth} \includegraphics[scale=0.6]{figures/sin_cos} \caption{True functions.} \label{Fig:sin_cos} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \includegraphics[scale=0.6]{figures/fail_joint_max} \caption{Modeling the non-smooth~$\varphi(w)$ directly.} \label{Fig:max_x} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \includegraphics[scale=0.6]{figures/fail_joint_sin_cos} \caption{Modeling and combining smooth predicates~$\mu$.} \label{Fig:sin_cos_x} \end{subfigure} \caption{The dashed orange line in~\cref{Fig:sin_cos} represents the true, non-smooth optimization function in~\cref{Eqn:sin_cos}, while the green and blue lines represent $\sin(w)$ and $\cos(w)$, respectively.
Modeling this function directly as a GP leads to model errors~(\cref{Fig:max_x}), where the $95\%$ confidence interval of the GP (blue shaded) with its mean estimate (blue line) does not capture the true function~$\varphi(w)$ in orange. In fact, the minimum (red star) is not contained within the shaded region, causing the optimization to diverge. BO converges to the green dot, where $\varphi(w)> 0$, which is not a counterexample. Instead, modeling the two predicates individually and combining them with the parse tree leads to the model in~\cref{Fig:sin_cos_x}. Here, the true function is completely captured in the confidence interval. As a consequence, BO converges to the global minimum (the red star and green dot coincide).} \label{Fig:model} \end{figure*} We consider two modeling scenarios: one where we model $\max(\sin(w), \cos(w))$ as a single GP, and another where we model $\sin(w)$ by one GP and $\cos(w)$ by another. We initialize the GP models for $\sin(w)$, $\cos(w)$ and $\max(\sin(w), \cos(w))$ with 5 samples chosen at random. We then use BO to find $w^{*}$. We were able to model smooth functions like $\sin(w)$ and $\cos(w)$ well with GPs, even with few samples. At each iteration of BO, we computed the next sample by solving for the $w \in (0, 10)$ that minimized the maximum across the two GPs. This quickly stabilizes to the true $w^*$~(\cref{Fig:sin_cos_x}). When we model $\max(\sin(w), \cos(w))$ using a single GP, as in~\cref{Fig:max_x}, the initial 5 samples were not able to model it well. In fact, the original function in orange is not contained within the uncertainty bounds of the GP. Hence, in each iteration of BO, where we chose the $w \in (0,10)$ that minimized this function, we were never able to converge to $w^{*}$. It is not surprising that, given these models, BO does not always converge when we model non-smooth functions such as in~\cref{Eqn:sin_cos}. To support our claim, we repeat this experiment 15 times with different initial samples.
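The location of the minimizer in~\cref{Eqn:sin_cos} can be checked numerically: the two components are individually smooth, but their pointwise maximum has a kink exactly at the minimizer $w^* = 5\pi/4$, where $\sin(w^*) = \cos(w^*) = -\sqrt{2}/2$, giving $\varphi(w^*) = 0.65 - \sqrt{2}/2 < 0$. The grid resolution below is an arbitrary choice.

```python
import numpy as np

# Grid check of the toy problem: the minimum of the pointwise max
# sits at the crossing w = 5*pi/4, which is a kink of phi even
# though sin and cos are individually smooth.
w = np.linspace(0.0, 10.0, 100001)
phi = np.maximum(np.sin(w), np.cos(w)) + 0.65
w_star = w[np.argmin(phi)]
phi_min = phi.min()
```

Since $\varphi(w^*) < 0$, a counterexample exists, and it sits precisely at the non-smooth point that a single-GP model struggles to capture.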
In each experiment we run BO for 50 iterations. When modeling $\sin(w)$ and $\cos(w)$ as separate GPs, BO stabilized to $w^{*}$ in about 5 iterations in all 15 experiments. However, when modeling $\max(\sin(w), \cos(w))$ as a single GP, it takes over 35 iterations to converge, and in 5 out of the 15 cases it did not converge to $w^{*}$. We show these two different behaviors in~\cref{Fig:stabilizing}. \begin{figure*} \centering \begin{subfigure}[t]{0.45\linewidth} \centering \includegraphics[scale=0.75]{figures/stab} \caption{Modeling as separate GPs takes around 5 iterations to stabilize to $w^{*}$~(in blue), while modeling as a single GP takes around 45 iterations to stabilize to $w^{*}$~(in orange).} \label{Fig:stab_late} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\linewidth} \centering \includegraphics[scale=0.75]{figures/non_stab} \caption{Modeling as separate GPs takes around 5 iterations to stabilize to $w^{*}$~(in blue), while modeling as a single GP does not stabilize to $w^{*}$~(in orange).} \label{Fig:fail_joint} \end{subfigure} \caption{The orange and blue lines in~\cref{Fig:stab_late} and~\cref{Fig:fail_joint} show the evolution of the samples returned over the BO iterations when~\cref{Eqn:sin_cos} is modeled as a single GP and as multiple GPs, respectively, for two different initializations. We see that when modeling as a single GP, it takes longer to stabilize to $w^{*}$, and in some cases~(\cref{Fig:fail_joint}) it does not stabilize to $w^{*}$ at all.} \label{Fig:stabilizing} \end{figure*} \subsection{Collision Avoidance with High-Dimensional Uncertainty} \label{Sec:car_high} Consider an autonomous car that travels on a straight road with an obstacle at $x_{obs}$. We require that the car come to a stop before colliding with the obstacle. The car has two states, location $x$ and velocity $v$, and one control input, acceleration $a$. The dynamics of the car are given by \begin{equation} \dot{x} = v , \quad \dot{v} = a.
\end{equation} Our safety specification for collision avoidance is given by $\varphi = \min_t(x_{obs} - x(t))$, i.e., the minimum distance between the position of the car and the obstacle over a horizon of length $100$. We assume that the car does not know where the obstacle is \textit{a priori}, but receives the location of the obstacle through a sensor at each time instant, $x_s(t)$. The controller is a simple linear state feedback controller, $K$, such that at time $t$, $a(t) = K \cdot \left[ \begin{matrix} x(t)-x_s(t), \,v(t) \end{matrix} \right]^\mathrm{T}$. We assume that the car initially starts at location $x_{init} = 0$, with a velocity $v_{init} = \unit[3]{m/s}$. Let the obstacle be at $x_{obs} = 5$, which is not known by the car. Instead, it receives sensor readings for the location of the obstacle such that $x_s(t) \in [4.5, 5.5]$. If $\varphi$ is negative, then $x(t) > x_{obs}$ for some $t$, which signifies a collision. Moreover, we constrain the acceleration to lie in $a \in [-3, 3]$. The domain of our uncertainty is $\mathbb{W} = [4.5, 5.5]^{100}$, i.e., the sensor readings $x_s$ over the horizon $H = 100$. We compare three experimental setups: first, we model the GP in the original space of $\mathbb{W}$, i.e., with $100$ inputs; second, we model the GP in a lower-dimensional input space as described in the preamble of this section; and third, we randomly sample inputs and test them. We run BO for 250 iterations on the GPs, and consider 250 random samples for random testing. We repeat this experiment 10 times and show our results in~\cref{Fig:car_example}. \begin{figure}[t] \centering \includegraphics[scale=0.75]{figures/car_example} \caption{The red, blue and green bars show the average number of counterexamples found using random sampling, applying BO on the reduced input space, and applying BO on the original input space, respectively, for the example in~\cref{Sec:car_high}.
The black lines show the standard deviation across the experiments.} \label{Fig:car_example} \end{figure} The green and blue bars in~\cref{Fig:car_example} show the average number of counterexamples returned when running BO on the GP defined over the original input space and over the low-dimensional input space, respectively. In general, active testing in the high-dimensional input space gives the best results, which deteriorate with an increase in compression of the input space. Random testing, shown in red, performs the worst. This is not surprising as (1)~$250$ samples are not sufficient to cover an input space of $100$ dimensions uniformly; and (2)~the samples are all independent of each other. Moreover, in the uncompressed input case, the specification evaluated at the worst counterexample, $\varphi(w^{*})$, has a mean and standard deviation of $-0.0138$ and $0.004$, as compared to $-0.0067$ and $0.0011$ for random sampling. \subsection{OpenAI Gym Environments} We interfaced our tool with environments from OpenAI Gym~\cite{1606.01540} to test controllers from OpenAI Baselines~\cite{baselines}. For brevity, we refer to~\cite{ai_envs} for the details of the environments. In both case studies, we introduce uncertainty around the parameters the controller has been trained for. The rationale behind this is that the parameters in a simulator are an estimate of the true values. This ensures that the counterexamples found can indeed occur in the real system. \subsubsection{Reacher} \label{Sec:reach} In the reacher environment, we have a 2D robot trying to reach a target. For this environment we have six sources of uncertainty: two for the goal position, $(x_{goal}, y_{goal}) \in [-0.2, 0.2]^2$; two for state perturbations, $(\delta_x, \delta_y) \in [-0.1, 0.1]^2$; and two for velocity perturbations, $(\delta_{vx}, \delta_{vy}) \in [-0.005, 0.005]^2$. The state of the reacher is a tuple with the current location, $\textbf{x} =(x, y)$, velocity, $\textbf{v} = (v_x, v_y)$, and rotation, $\theta$.
A trajectory of the system, $\xi$, is a sequence of states over time, i.e., $\xi = (\textbf{x}(t), \textbf{v}(t), \theta(t)), t = 0, 1, 2, \dots$. Our uncertainty space is $\mathbb{W} = [-0.2, 0.2]^2 \times [-0.1, 0.1]^2 \times [-0.005, 0.005]^2$. Given an instance of $w \in \mathbb{W}$, the trajectory, $\xi$, of the system is uniquely defined. We trained a controller using the Proximal Policy Optimization (PPO)~\cite{SchulmanWDRK17} implementation available at OpenAI baselines. We determine a trajectory to be safe if either the reacher reaches the goal, or if it does not rotate unnecessarily. This can be captured as $\varphi = \mu_1 \vee \mu_2$, where $\mu_1(w)$ is the minimum distance between the trajectory and the goal position, and $\mu_2$ is the total rotation accumulated over the trajectory; its continuous variant is $\varphi = \max(\mu_1, \mu_2)$. Using our modeling approach, we model this with two GPs, one for $\mu_1$ and another for $\mu_2$. We compare this to modeling $\varphi$ as a single GP and to random sampling. We run 200 BO iterations and consider 200 random samples for random testing. We repeat this experiment 10 times. \begin{figure} \centering \includegraphics[scale=0.75]{figures/reacher} \caption{The green, blue and red bars show the number of counterexamples generated when modeling $\mu_1, \mu_2$ as separate GPs, modeling $\varphi$ as a single GP, and random testing, respectively, for the reacher example~(\cref{Sec:reach}). Our modeling paradigm finds more counterexamples than the other two methods.} \label{Fig:reacher_exps} \end{figure} In~\cref{Fig:reacher_exps}, we plot the number of counterexamples found by each of the three methods over 10 runs of the experiment. Modeling the predicates by separate GPs and applying BO across them~(shown in green) consistently performs better than applying BO on a single GP modeling $\varphi$~(shown in blue) and random testing (shown in red).
We see that random testing performs very poorly; in some cases (experiment runs $4$, $8$, and $10$) it finds no counterexamples. By modeling the predicates separately, the specification evaluated at the worst counterexample, $\varphi(w^{*})$, has a mean and standard deviation of $-0.1283$ and $0.0006$, as compared to $-0.1212$ and $0.0042$ when considering a single GP. This suggests that, using our modeling paradigm, BO converges (since the standard deviation is small) to a stronger counterexample (since the mean is smaller). \subsubsection{Mountain Car Environment} \label{Sec:mc} The mountain car environment in OpenAI gym is a car on a one-dimensional track, positioned between two mountains. The goal is to drive the car up the mountain on the right. The environment comes with one source of uncertainty, the initial state $x_{init} \in [-0.6, -0.4]$. We introduced four other sources of uncertainty: the initial velocity, $v_{init} \in [-0.025, 0.025]$; goal location, $x_{goal} \in [0.4, 0.6]$; maximum speed, $v_{max} \in [0.55, 0.75]$; and maximum power magnitude, $p_{max} \in [0.0005, 0.0025]$. The state of the mountain car is a tuple with the current location, $x$, and velocity, $v$. A trajectory of the system, $\xi$, is a sequence of states over time, i.e., $\xi = (x(t), v(t)), t = 0, 1, 2, \dots$. Our uncertainty space is given by $\mathbb{W} = [-0.6, -0.4] \times [-0.025, 0.025] \times [0.4, 0.6] \times [0.55, 0.75] \times [0.0005, 0.0025]$. Given an instance of $w \in \mathbb{W}$, the trajectory, $\xi$, of the system is uniquely defined. We trained two controllers, one using PPO and another using DDPG, an actor-critic method for continuous deep reinforcement learning~\cite{LillicrapHPHETS15}. We determine a trajectory to be safe if it reaches the goal quickly, or if it does not deviate too much from its initial location and always maintains its velocity in some bound.
Our safety specification can be written as $\varphi = \mu_1 \vee (\mu_2 \wedge \mu_3)$, where $\mu_1(w)$ is the time taken to reach the goal, $\mu_2$ is the deviation from the initial location, and $\mu_3$ is the deviation from the velocity bound; its continuous variant is $\varphi = \max(\mu_1, \min(\mu_2, \mu_3))$. We model $\varphi$ by modeling each predicate, $\mu$, with a GP. We compare this to modeling $\varphi$ with a single GP and to random sampling. We run 200 BO iterations for the GPs and consider 200 random samples for random testing. We repeat this experiment 10 times. We show our results in~\cref{Fig:mc}, where we plot the number of counterexamples found by each of the three methods over 10 runs of the experiment for each controller. \cref{Fig:mc} demonstrates the strength of our approach. The number of counterexamples found by our method (in green) is much higher than that of random sampling (in red) and of modeling $\varphi$ as a single GP (in blue). In~\cref{Fig:mc_ppo}, the blue bars are even smaller than the red ones, suggesting that random sampling performs better than applying BO on the GP modeling $\varphi$. This is because the GP is not able to model $\varphi$ well, and is so far from the true model that the samples returned by BO are worse than random samples. \begin{figure*}[t] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=0.75]{figures/mc_ppo} \caption{Controller trained with PPO} \label{Fig:mc_ppo} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=0.75]{figures/mc_ddpg} \caption{Controller trained with DDPG} \label{Fig:mc_ddpg} \end{subfigure} \caption{The green, blue and red bars show the number of counterexamples generated when modeling the predicates as separate GPs, modeling $\varphi$ as a single GP, and random testing, respectively, for the mountain car example~(\cref{Sec:mc}).
While our modeling paradigm finds orders of magnitude more counterexamples than the other two methods, we notice that modeling $\varphi$ as a single GP performs much worse than random sampling for the controller trained with PPO~(\cref{Fig:mc_ppo}) and comparably for the controller trained with DDPG~(\cref{Fig:mc_ddpg}).} \label{Fig:mc} \end{figure*} This is further highlighted by the value of the specification at the worst counterexample, $\varphi(w^*)$. The mean and standard deviation of $\varphi(w^*)$ over the 10 experiment runs are $-0.5435$ and $0.028$ for our method; $-0.3902$ and $0.0621$ when $\varphi$ is modeled as a single GP; and $-0.04379$ and $0.0596$ for random sampling. A similar but less drastic result holds in the case of the controller trained with DDPG. \section{Conclusion} We presented an \textit{active testing} framework that uses Bayesian Optimization to test and verify closed-loop robotic systems in simulation. Our framework handles complex logic specifications and models them efficiently using Gaussian Processes in order to find adversarial examples faster. We showed the effectiveness of our framework on controllers designed on OpenAI gym environments. As future work, we would like to extend this framework to test more complex robotic systems and find regions in the environment parameter space where the closed-loop control is expected to fail. \section*{Acknowledgments} Research reported in this paper was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-17-2-0196 \footnote{The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S.
Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.}; and in part by Toyota under the iCyPhy center. \section{Proofs} In this section, we prove the convergence of our algorithm under specified regularity assumptions on the underlying predicates. Consider the specification \begin{equation} \varphi(\mathbf{w}) = \mathcal{T}( \mu_1(\mathbf{w}), \dots, \mu_{q}(\mathbf{w})), \end{equation} where $q$ represents the number of predicates. Let the domain of the predicate indices be represented by $\mathcal{I}= \{1, \dots, q\}$. The convergence proofs for classical Bayesian optimization in~\cite{srinivas2009gaussian,ChowdhuryG17} proceed by building reliable confidence intervals for the underlying function and then showing that these confidence intervals concentrate quickly enough at the location of the optimum under the proposed evaluation strategy. For ease of exposition, we assume that measurements of each predicate~$\mu_i$ are corrupted by the same measurement noise. To leverage these proofs, we need to account for the fact that our GP model is composed of several individual predicates and that we obtain one measurement for each predicate at every iteration of the algorithm. We start by defining a composite function ${f:\mathbb{W} \times \mathcal{I} \rightarrow \mathbb{R}}$, which returns the function values for the individual predicates indexed by~$i$: \begin{equation} f(\mathbf{w}, i) = \mu_i(\mathbf{w}). \end{equation} The function $f(\cdot, \cdot)$ is a single-output function, which can be modeled with a single GP with a scalar output over the extended input space, $\mathbb{W} \times \mathcal{I}$.
For example, if we assume that the predicates are independent of each other, the kernel function for $f$ would look like \begin{equation} \begin{split} k((\mathbf{w}, i), (\mathbf{w}', i')) = \begin{cases} k_i(\mathbf{w}, \mathbf{w}') \text{ if } i = i' \\ 0 \text{ otherwise} \end{cases} \end{split}, \label{eq:independent_kernel} \end{equation} where $k_i$ is the kernel function corresponding to the GP for the $i$-th predicate, $\mu_i$. It is straightforward to include correlations between functions in this formulation too. This reformulation allows us to build reliable confidence intervals on the underlying predicates, given regularity assumptions. In particular, we make the assumption that the function~$f$ has bounded norm in the Reproducing Kernel Hilbert Space (RKHS, \cite{steinwart2008support}) corresponding to the same kernel~$k(\cdot, \cdot)$ that is used for the GP on~$f$. \begin{remark} Note that this model is more general than the case where we assume that each predicate, $\mu_i$, individually has bounded RKHS norm $B_i$. In this case, the function $f(\mathbf{w}, i)$ has RKHS norm with respect to the kernel in~\cref{eq:independent_kernel} bounded by~${ B = \sum_{i=1}^{q} B_i }$. \label{note:sum_of_bounded_rkhs} \end{remark} \begin{lemma} Assume that $f$ has RKHS norm bounded by $B$ and that the measurements are corrupted by $\sigma$-sub-Gaussian noise.
If $\beta_{q \cdot n}^{1/2} = B + 4\sigma \sqrt{I(\mathbf{y}_{q \cdot (n - 1)}; f) + 1 + \ln(1/\delta)}$, then the following holds for all environment scenarios $\mathbf{w} \in \mathbb{W}$, predicate indices $i \in \mathcal{I}$, and iterations $n \geq 1$, jointly with probability at least $1- \delta$, \begin{equation} |f(\mathbf{w}, i) - m_{q \cdot(n-1)}(\mathbf{w}, i)|\leq \beta_{q \cdot n}^{1/2} \sigma_{q \cdot (n -1)}(\mathbf{w}, i). \end{equation} \label{lem:mu_confidence} \end{lemma} \begin{proof} This follows directly from~\cite{Berkenkamp2016SafeOpt}, which extends the results from~\cite{ChowdhuryG17} and Lemma 5.1 from~\cite{srinivas2009gaussian} to the case of multiple measurements. \end{proof} The scaling factor for the confidence intervals,~$\beta_{q \cdot n}$, depends on the mutual information~$I(\mathbf{y}_{q \cdot (n-1)}; f)$ between the GP model of~$f$ and the~$q$ measurements of the individual predicates that we have obtained for each time step so far. It can easily be computed as \begin{equation} \begin{aligned} I(\mathbf{y}_{q \cdot (n-1)}; f) &= \log \det \left( \mathbf{I} + \sigma^{-2} \mathbf{K}_{q \cdot (n-1)} \right), \\ &= \sum_{j=1}^{n - 1} \sum_{i=1}^q \log (1 + \sigma^2_{j \cdot q}(\mathbf{w}_j, i) / \sigma^2), \end{aligned} \label{eq:mutual_information} \end{equation} where~$\mathbf{K}_{q \cdot (n-1)}$ is the kernel matrix of the single GP over the extended parameter space and the inner sum in the second equation indicates the fact that we obtain~$q$ measurements at every iteration. Based on these individual confidence intervals on~$\mu$, we can construct confidence intervals on~$\varphi$.
In particular, let \begin{equation} \begin{aligned} l_i(\mathbf{w}) &= m_{q\cdot(n - 1)}(\mathbf{w}, i) - \beta^{1/2}_{q \cdot n} \sigma_{q \cdot (n - 1)}(\mathbf{w}, i) \\ u_i(\mathbf{w}) &= m_{q \cdot (n - 1)}(\mathbf{w}, i) + \beta^{1/2}_{q \cdot n} \sigma_{q \cdot (n - 1)}(\mathbf{w}, i) \end{aligned} \end{equation} be the lower and upper confidence intervals on each predicate. From this, we construct reliable confidence intervals on~$\varphi(\mathbf{w})$ as follows: \begin{lemma} Under the assumptions of~\cref{lem:mu_confidence}, let~$\mathcal{T}$ be the parse tree corresponding to~$\varphi$. Then the following holds for all environment scenarios $\mathbf{w} \in \mathbb{W}$ and iterations $n \geq 1$, jointly with probability at least $1- \delta$, \begin{equation} \mathcal{T}(l_1(\mathbf{w}), \dots, l_q(\mathbf{w})) \leq \varphi(\mathbf{w}) \leq \mathcal{T}(u_1(\mathbf{w}), \dots, u_q(\mathbf{w})). \end{equation} \label{lem:confidence_on_combined_fun} \end{lemma} \begin{proof} This is a direct consequence of~\cref{lem:mu_confidence} and the monotonicity of the~$\min$ and~$\max$ operators. \end{proof} We are now able to prove the main theorem as a direct consequence of~\cref{lem:confidence_on_combined_fun}. \maintheorem* \begin{proof} For independent variables, the mutual information decomposes additively; following~\cref{note:sum_of_bounded_rkhs}, this is a direct consequence of~\cref{lem:confidence_on_combined_fun}, since~${ \mathcal{T}(l_1(\mathbf{w}), \dots, l_q(\mathbf{w})) \leq \varphi(\mathbf{w}) }$ holds for all ${ \mathbf{w} \in \mathbb{W} }$ with probability at least~$1 - \delta$. \end{proof} \subsection{Convergence proof} In the following, we prove a stronger result about the convergence of our algorithm. The key quantity in the behavior of the algorithm is the mutual information in~\cref{eq:mutual_information}.
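For a GP with Gaussian observation noise, this quantity can be accumulated directly from the posterior variances at the evaluated points, following the double sum in~\cref{eq:mutual_information}. A minimal sketch (hypothetical helper code, not part of our implementation; it assumes the posterior variances $\sigma^2_{j \cdot q}(\mathbf{w}_j, i)$ have already been computed):

```python
import math

def mutual_information(posterior_variances, noise_var):
    """Accumulate I(y_{q(n-1)}; f) as in eq. (mutual_information):
    one log(1 + sigma_{jq}^2(w_j, i) / sigma^2) term per predicate
    measurement at each of the n-1 past iterations."""
    total = 0.0
    for per_iteration in posterior_variances:  # n-1 entries, one per iteration
        for var in per_iteration:              # q predicate variances each
            total += math.log(1.0 + var / noise_var)
    return total
```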
Importantly, it was shown in~\cite{Berkenkamp2016SafeOpt} that it can be upper bounded by the worst-case mutual information, the information capacity, which in turn was shown to be sublinear by~\cite{srinivas2009gaussian}. In particular, let~$\mathbf{f}_\mathbb{W}$ denote the noisy measurements obtained when evaluating the function~$f$ at points in~$\mathbb{W}$. The mutual information obtained by the algorithm can be bounded according to \begin{equation} \begin{split} I(\mathbf{f}_{\mathbb{W}_n \times \mathcal{I}}; f) & \leq \max_{\bar{\mathbb{W}} \subset \mathbb{W}, |\bar{\mathbb{W}}| \leq n}I(\mathbf{f}_{\bar{\mathbb{W}}\times \mathcal{I}}; f)\\ & \leq \max_{\mathcal{D} \subset \mathbb{W} \times \mathcal{I}, |\mathcal{D}| \leq n \cdot q} I (\mathbf{f}_{\mathcal{D}}; f) \\ &=\gamma_{q\cdot n} , \end{split} \label{Eqn:MI_bound} \end{equation} where~$\gamma_n$ is the worst-case mutual information that we can obtain from~$n$ measurements, \begin{equation} \gamma_n = \max_{\mathcal{D} \subset \mathbb{W} \times \mathcal{I}, |\mathcal{D}| = n} I(\mathbf{f}_{\mathcal{D}}; f). \end{equation} This quantity was shown to be sublinear in~$n$ for many commonly-used kernels in~\cite{srinivas2009gaussian}. A key quantity to show convergence of the algorithm is the instantaneous regret, \begin{equation} r_n = \varphi(\mathbf{w}_n) - \min_{\mathbf{w} \in \mathbb{W}} \varphi(\mathbf{w}), \end{equation} the difference between the value of~$\varphi$ at the environment parameters~$\mathbf{w}_n$ that~\cref{Algo:active_testing} selects at iteration~$n$ and the unknown true minimum of~$\varphi$. If the instantaneous regret is equal to zero, the algorithm has converged. In the following, we will show that the cumulative regret, ${R_n = \sum_{i=1}^{n} r_i}$, is sublinear in~$n$, which implies convergence of~\cref{Algo:active_testing}. We start by bounding the regret in terms of the confidence intervals on~$\mu_i$.
\begin{lemma} Fix $n \geq 1$. If $|f(\mathbf{w}, i) - m_{q \cdot (n-1)}(\mathbf{w}, i)|\leq \beta^{1/2}_{q\cdot n}\sigma_{q \cdot(n-1)}(\mathbf{w}, i)$ for all $(\mathbf{w}, i) \in \mathbb{W} \times \mathcal{I}$, then the regret is bounded by $r_n \leq 2 \beta^{1/2}_{q\cdot n} \max_i \sigma_{q \cdot (n-1)}(\mathbf{w}, i)$. \label{lem:2} \end{lemma} \begin{proof} The proof is analogous to~\cite[Lemma 5.2]{srinivas2009gaussian}. The maximum standard deviation follows from the properties of the $\max$ and $\min$ operators in the parse tree~$\mathcal{T}$. In particular, let~$a_1, b_1, a_2, b_2 \in \mathbb{R}$ with~$a_1 - b_1 < a_2 - b_2$. Then for all~$c_1 \in [-b_1, b_1]$ and~$c_2 \in [-b_2, b_2]$ we have that \begin{equation} a_1 - b_1 \leq \min(a_1 + c_1, a_2 + c_2) \leq a_1 + b_1. \end{equation} The~$\max$ operator is analogous. Thus, since the parse tree~$\mathcal{T}$ is composed only of min and max nodes, the regret is bounded by the maximum error over all predicates. The result follows. \end{proof} \begin{lemma} Pick $\delta \in (0,1)$ and $\beta_{q \cdot n}$ as shown in~\cref{lem:mu_confidence}. Then the following holds with probability at least $1-\delta$, \begin{equation} \sum_{i=1}^n r_i^2 \leq \beta_{q \cdot n} C_1 q I(\mathbf{f}_{\mathbb{W}_n \times \mathcal{I}}; f) \leq \beta_{q \cdot n} C_1 \gamma_{q\cdot n}, \end{equation} where $r_n$ is the regret between the true minimizing environment scenario~$\mathbf{w}^*$ and the current sample~$\mathbf{w}_n$, and $C_1 = 8 / \log(1+\sigma^{-2})$. \label{lem:3} \end{lemma} \begin{proof} The first inequality follows similarly to~\cite[Lemma 5.4]{srinivas2009gaussian} and the proofs in~\cite{Berkenkamp2016SafeOpt}. In particular, as in \cite{Berkenkamp2016SafeOpt}, \begin{equation*} r_n^2 \leq 4 \beta_{q \cdot n} \max_{i \in \mathcal{I}} \sigma^2_{q \cdot (n-1)}(\mathbf{w}_n, i). \end{equation*} The second inequality follows from~\cref{Eqn:MI_bound}.
\end{proof} \begin{lemma} Under the assumptions of~\cref{lem:confidence_on_combined_fun}, let $\delta \in (0,1)$ and choose $\mathbf{w}_n$ according to~\cref{Eqn:BO_multi_GP}. Then, the cumulative regret $R_N$ over $N$ iterations of~\cref{Algo:active_testing} is bounded with high probability, \begin{equation} \text{Pr} \left\{ R_N \leq \sqrt{C_1 N \beta_{q \cdot N} \gamma_{q \cdot N}} \quad \forall N \geq 1 \right\} \geq 1 - \delta, \end{equation} where $C_1 = \frac{8}{\log(1+\sigma^{-2})}$. \label{lem:4} \end{lemma} \begin{proof} Since $R_N = \sum_{i=1}^{N} r_i$, by the Cauchy--Schwarz inequality we have $R_N^2 \leq N \sum_{i=1}^N r_i^2$. The rest follows from~\cref{lem:3}. \end{proof} We introduce some notation: let \begin{equation} \hat{\mathbf{w}}_n =\operatornamewithlimits{argmin}_{\mathbf{w} \in \{\mathbf{w}_1, \dots, \mathbf{w}_n\}} \varphi(\mathbf{w}) \label{Eqn:min_env} \end{equation} be the minimizing environment scenario sampled by BO in $n$ iterations, and let \begin{equation} \mathbf{w}^* = \operatornamewithlimits{argmin}_{\mathbf{w} \in \mathbb{W}} \varphi(\mathbf{w}) \end{equation} be the unknown optimal parameter. \begin{corollary} For any $\delta \in (0,1)$ and $\epsilon \in \mathbb{R}^+$, there exists an $n^*$ with \begin{equation} \frac{n^*}{\beta_{q \cdot n^*} \gamma_{q \cdot n^*}} = \frac{C_1}{\epsilon^2}, \end{equation} such that $\forall n \geq n^*$, $\varphi(\hat{\mathbf{w}}_n) - \varphi(\mathbf{w}^*) \leq \epsilon$ holds with probability at least $1 - \delta$. \label{cor:1} \end{corollary} \begin{proof} The cumulative regret over $n$ iterations is $R_n = \sum_{i=1}^{n} \left( \varphi(\mathbf{w}_i) - \varphi(\mathbf{w}^*) \right)$, where $\mathbf{w}_i$ is the $i$-th BO sample.
Defining $\hat{\mathbf{w}}_n$ as in~\cref{Eqn:min_env}, we have \begin{equation} \begin{split} R_n &= \sum_{i=1}^{n} \left( \varphi(\mathbf{w}_i) - \varphi(\mathbf{w}^*) \right) \\ & \geq \sum_{i=1}^n \left( \varphi(\hat{\mathbf{w}}_n) - \varphi(\mathbf{w}^*) \right) \\ & = n \left( \varphi(\hat{\mathbf{w}}_n) - \varphi(\mathbf{w}^*) \right). \end{split} \end{equation} Combining this result with~\cref{lem:4}, we have with probability greater than $1-\delta$ that \begin{equation} \begin{split} \varphi(\hat{\mathbf{w}}_n) - \varphi(\mathbf{w}^*) &\leq \frac{R_n}{n} \\ & \leq \sqrt{\frac{C_1 \beta_{q\cdot n} \gamma_{q \cdot n}}{n}}. \end{split} \end{equation} To find $n^*$, we bound the right-hand side by $\epsilon$, \begin{equation} \sqrt{\frac{C_1 \beta_{q \cdot n^*} \gamma_{q \cdot n^*}}{n^*}} \leq \epsilon \Rightarrow \frac{n^*}{\beta_{q \cdot n^*} \gamma_{q \cdot n^*}} \geq \frac{C_1}{\epsilon^2}. \end{equation} For $n > n^*$, the minimum satisfies $\varphi(\hat{\mathbf{w}}_n) \leq \varphi(\hat{\mathbf{w}}_{n^*})$, which implies $\varphi(\hat{\mathbf{w}}_n)- \varphi(\mathbf{w}^*) \leq \epsilon$. \end{proof} We are now ready to prove our main convergence theorem. \begin{theorem} Under the assumptions of~\cref{lem:confidence_on_combined_fun}, choose $\delta \in (0,1)$, $\epsilon \in \mathbb{R}^+$ and define $n^*$ using~\cref{cor:1}. If $n \geq n^*$ and $\varphi(\hat{\mathbf{w}}_n) > \epsilon$, then, with probability greater than $1-\delta$, the following statements hold jointly \begin{itemize} \item $\varphi(\mathbf{w}^*) > 0$ \item The closed loop satisfies $\varphi$, i.e., the controller can safely control the system in all environment scenarios, $\mathbb{W}$ \item The system has been verified against all environments, $\mathbb{W}$ \end{itemize} \end{theorem} \begin{proof} This follows from~\cref{lem:4} and~\cref{cor:1}. From~\cref{cor:1}, we have $\forall n \geq n^*$, $\text{Pr}(\varphi(\hat{\mathbf{w}}_n) - \varphi(\mathbf{w}^*) \leq \epsilon) > 1-\delta$.
If $\exists n \geq n^*$ such that $\varphi(\hat{\mathbf{w}}_n) > \epsilon$, then we have $\text{Pr}(\varphi(\mathbf{w}^*) > 0) \geq 1-\delta$, i.e., the minimum value $\varphi$ can achieve on the closed-loop system is greater than $0$. Hence, $\varphi$ is satisfied by our system for all $\mathbf{w} \in \mathbb{W}$. \end{proof}
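To make the role of~\cref{lem:confidence_on_combined_fun} concrete, the following sketch (illustrative only, not the implementation used in our experiments) propagates per-predicate confidence bounds through a parse tree of $\min$/$\max$ nodes; by monotonicity of these operators, applying $\mathcal{T}$ to the lower bounds $l_i$ and to the upper bounds $u_i$ brackets $\varphi(\mathbf{w})$:

```python
def tree_bounds(node, lower, upper):
    """Evaluate a min/max parse tree on per-predicate confidence bounds.
    A leaf is an integer index i into the bound lists (predicate mu_i);
    an internal node is a tuple (op, children) with op in {'min', 'max'}.
    Returns (T(l_1, ..., l_q), T(u_1, ..., u_q)), bracketing T(mu_1, ..., mu_q)."""
    if isinstance(node, int):
        return lower[node], upper[node]
    op, children = node
    child_bounds = [tree_bounds(c, lower, upper) for c in children]
    fn = min if op == 'min' else max
    return fn(b[0] for b in child_bounds), fn(b[1] for b in child_bounds)

# Example tree: phi = max(mu_1, min(mu_2, mu_3)), as in the mountain car
# specification (indices 0, 1, 2 are mu_1, mu_2, mu_3).
tree = ('max', [0, ('min', [1, 2])])
```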
\section{Background} \label{sec:background} \input system \input channel \input decoder \section{Speeding Things Up} \label{sec:optimization} \subsection{Batch Computation of Receiver Metric} \label{sec:batchreceiver} In a naïve implementation, each $\gamma$ computation \eqref{eqn:gamma} requires the computation of the receiver metric as a separate forward pass using \eqref{eqn:alpha_dot}. However, it can be observed that for a given starting state $m'$ and symbol $D$, the $\gamma$ metric will be computed for each end state $m$ within the limits considered (cf.\ equations \eqref{eqn:L}, \eqref{eqn:sigma}, \eqref{eqn:alpha}, and \eqref{eqn:beta}, where the $\gamma$ computations are used). In turn, this means that for a given $\vec{x}$, the receiver metric will need to be determined for all subsequences $\dot{\vec{y}}$ within the drift limit considered. It is therefore sufficient to compute the forward pass \eqref{eqn:alpha_dot} once, with the longest subsequence $\dot{\vec{y}}$ required. In doing so, the values of the receiver metric for shorter subsequences are obtained for free. We call this approach \emph{batch} computation. This effectively reduces the complexity of computing the collection of $\gamma$ metrics by a factor of $M_n$. The asymptotic complexity of the MAP decoder is therefore reduced to $\Theta(N n q M_\tau M_n M_1)$. \subsection{Channel Model} \label{sec:channel} We consider the Binary Substitution, Insertion, and Deletion (BSID) channel, an abstract random channel with unbounded synchronization and substitution errors, originally presented in \cite{bahl75} and more recently used in \cite{dm01ids, ratzer05telecom, bs08isita, bsw10icc, bb11isit} and others.
At \emph{time} $t$, one bit enters the channel, and one of three events may happen: insertion with probability $\ensuremath{P_\mathrm{i}}$ where a random bit is output; deletion with probability $\ensuremath{P_\mathrm{d}}$ where the input is discarded; or transmission with probability $\ensuremath{P_\mathrm{t}}=1-\ensuremath{P_\mathrm{i}}-\ensuremath{P_\mathrm{d}}$. A substitution occurs in a transmitted bit with probability $\ensuremath{P_\mathrm{s}}$. After an insertion the channel remains at time $t$ and is subject to the same events again, otherwise it proceeds to time $t+1$, ready for another input bit. We define the \emph{drift} $S_t$ at time $t$ as the difference between the number of received bits and the number of transmitted bits before the events of time $t$ are considered. As in \cite{dm01ids}, the channel can be seen as a Markov process with the state being the drift $S_t$. It is helpful to see the sequence of states as a trellis diagram, observing that there may be more than one way to achieve each state transition. Also note that the state space is unlimited for positive drifts but limited for negative drifts. Specifically, $S_t$ may take any positive value for $t>0$, though with decreasing probability as the value increases. On the other hand, $S_t \geq -t$ where the lower limit corresponds to receiving the null sequence. \section{Conclusions} \label{sec:closure} In this paper we have considered TVB codes, which generalize a number of previous codes for synchronization errors. We discussed the applicable design criteria for TVB codes, expressing some previously published codes as TVB codes and showing that the greater flexibility of TVB codes allows improved constructions. For example, our $(7,8,4)$ TVB code achieves a SER of $10^{-4}$ at a $\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}}$ that is almost two orders of magnitude higher than a marker or distributed marker code of the same size, and slightly better than our earlier SEC codes. 
We also considered a number of important issues related to a practical implementation of the corresponding MAP decoder. Specifically, we have given an expression for the expected distribution of drift between transmitter and receiver due to synchronization errors, with consideration for practical concerns when evaluating this expression. We have shown how to determine an appropriate choice for state space limits based on the drift probability distribution. The decoder complexity under given channel conditions is then expressed as a function of the state space limits used. For a given state space, we have also given a number of optimizations that reduce the algorithm complexity with no further loss of decoder performance. The proposed speedups, which are independent of the TVB code construction, result in a considerable reduction in complexity of almost two orders of magnitude for typical code sizes and channel conditions. For code constructions with appropriate mathematical structure we expect to be able to replace the receiver metric, which considers each possible transmitted codeword, with a faster soft-output algorithm. Next, we have considered the practical problem of stream decoding, where there is no prior knowledge of the received frame boundary positions. In doing so we have also shown how an appropriate choice of decoder parameters allows stream decoding to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, practical comparisons of TVB codes with earlier constructions were given, showing that TVB code designs can in fact achieve improved performance. Even compared to the state-of-the-art codes of \cite{mans12}, the TVB codes presented here achieve a FER of $10^{-3}$ at 24\% higher $\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}}$ for a rate-$\frac{1}{10}$ code, and at 84\% higher $\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}}$ for a rate-$\frac{3}{14}$ code.
We expect further improvements to the codes shown here to be possible, particularly by co-designing optimized outer codes. However, a detailed treatment of the design process is beyond the scope of this paper, and will be the subject of further work. \section{Algorithm Complexity} \label{sec:complexity} \subsection{Complexity of the MAP Decoder} As a first step towards determining the overall complexity of the decoder, consider first the calculation of the state transition metric in \eqref{eqn:gamma}. This is recursively computed using the forward pass of \eqref{eqn:alpha_dot} over a sequence of $n$ bits, for each of $M_n$ states $m$. Each recursion consists of a summation over $M_1$ prior states $m'$ as argued in Section~\ref{sec:summation_limits}. The bit-level probability $Q$ can be obtained by a look-up table. Thus the complexity for calculating a single state transition metric is $\Theta(n M_n M_1)$. The final output of the algorithm consists of $q$ probabilities for each of $N$ symbols, calculated using \eqref{eqn:L}. This equation sums over all $M_\tau$ prior states $m'$ and $M_n$ states $m$, defining the domain for $\sigma_i(m',m,D)$. It follows from \eqref{eqn:sigma} that the domain for $\gamma_i(m',m,D)$ is the same. Now the computation of \eqref{eqn:sigma} is dominated by the evaluation of $\gamma_i(m',m,D)$ in \eqref{eqn:gamma}, whose complexity is $\Theta(n M_n M_1)$ as shown. Considering the number of times the $\gamma$ metric is computed, the MAP decoder has an asymptotic complexity of $\Theta(N n q M_\tau M_n^2 M_1)$. \subsection{Complexity of the Davey-MacKay Decoder} It would initially appear that the MAP decoder complexity is significantly higher than that of the Davey-MacKay decoder, given as $O(N n M_\tau M_1)$ in \cite{dm01ids} for a direct implementation (using our notation). 
However, the expression of the Davey-MacKay decoder seems to consider only the complexity of the initial forward and backward passes, ignoring the additional small forward passes needed to compute the final decoder output. The final output of the Davey-MacKay algorithm also consists of $q$ probabilities for each of $N$ symbols. Each of these is computed using \cite[(4)]{dm01ids}, which sums over all possible prior and posterior states. In a direct implementation all possible prior states need to be considered; using our notation the number of states is $M_\tau$. While not stated in \cite{dm01ids}, the number of posterior states that need to be considered is $M_n$, as argued for the MAP decoder. The computation within the summation of \cite[(4)]{dm01ids} is dominated by the conditional probability, which is computed using a separate forward pass. This forward pass is effectively identical to \eqref{eqn:alpha_dot} whose complexity is $\Theta(n M_n M_1)$. Considering the number of times the forward pass is computed, it follows that the Davey-MacKay decoder has an overall asymptotic complexity of $\Theta(N n q M_\tau M_n^2 M_1)$. \subsection{Comments on Algorithm Complexity} \label{sec:complexity_comparison} Comparing the complexity expressions for the MAP decoder for TVB codes and the Davey-MacKay decoder for sparse codes with a distributed marker sequence, it follows that the asymptotic complexity for both decoders is the same. This is consistent with experimental running times for both decoders in \cite{bsw10icc}. In the complexity expression note that $N$, $n$, and $q$ depend only on the code parameters while $M_\tau$, $M_n$, and $M_1$ also depend on the channel conditions. For $M_1$, it was argued in \cite[Section~VII.A]{dm01ids} that it is sufficient to consider a maximum of two successive insertions, at a minimal cost to decoding performance. This is equivalent to setting $m_1^{+}=2$, so that $M_1=4$; these limits were also used in \cite{bs08isita,bsw10icc}. 
However, this artificially low limit is insufficient for more advanced code constructions, as shown in \cite{bb11isit}. It was also argued in \cite{dm01ids} that useful speedups can be obtained by only following paths through the trellis that pass through nodes with probabilities above a certain threshold. However, the choice of this threshold was not analyzed. It is also likely that this choice would depend on the properties of the inner code being used. \subsection{The MAP Decoder} \label{sec:mapdecoder} We summarize here the MAP decoder of \cite{briffa13jcomml}; this is the same as the MAP decoder of \cite{bsw10icc} with a trivial modification to work with the notation of TVB codes. The decoder uses the standard forward-backward algorithm for hidden Markov models. We assume a message sequence $\vec{D}\sw{0}{N}$, encoded using a $(n,q,M)$ TVB code to the sequence $\seq{X}{\tau}$, where $\tau = nN$. The sequence $\seq{X}{\tau}$ is transmitted over the BSID channel, resulting in the received sequence $\seq{Y}{\rho}$, where in general $\rho$ is not equal to $\tau$. To avoid ambiguity, we refer to the message sequence as a \emph{block} of size $N$ and the encoded sequence as a \emph{frame} of size $\tau$. We calculate the APP $L_i(D)$ of having encoded symbol $D \in \field{F}_q$ in position $i$ for $0 \leq i < N$, given the entire received sequence, using \begin{align} \label{eqn:L} L_i(D) & = \frac{1}{\lambda_N(\rho-\tau)} \sum_{m',m} \sigma_i(m',m,D) \text{,}\\ \text{where~} \label{eqn:lambda} \lambda_i(m) &= \alpha_i(m) \beta_i(m) \text{,}\\ \label{eqn:sigma} \sigma_i(m',m,D) &= \alpha_i(m') \gamma_i(m',m,D) \beta_{i+1}(m) \text{,} \end{align} and $\alpha_i(m)$, $\beta_i(m)$, and $\gamma_i(m',m,D)$ are the forward, backward, and state transition metrics respectively. Note that strictly, the above metrics depend on $\seq{Y}{\rho}$, but for brevity we do not indicate this dependence in the notation. 
The summation in \eqref{eqn:L} is taken over all combinations of $m'$ and $m$, which are respectively the drifts before and after the symbol at index $i$. The forward and backward metrics are obtained recursively using \begin{align} \label{eqn:alpha} \alpha_i(m) &= \sum_{m',D} \alpha_{i-1}(m') \gamma_{i-1}(m',m,D)\text{,}\\ \label{eqn:beta} \text{and~} \beta_i(m) &= \sum_{m',D} \beta_{i+1}(m') \gamma_i(m,m',D)\text{.} \end{align} Initial conditions for known frame boundaries are given by \begin{equation*} \alpha_0(m) = \begin{cases} 1 & \text{if $m = 0$} \\ 0 & \text{otherwise,} \end{cases} \text{and~} \beta_N(m) = \begin{cases} 1 & \text{if $m = \rho-\tau$} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} Finally, the state transition metric is defined as \begin{equation} \label{eqn:gamma} \gamma_i(m',m,D) = \prob{ D_i = D } R( \vec{Y}\sw{ni+m'}{n(i+1)+m} \mid C_i(D) ) \end{equation} where $C_i(D)$ is the $n$-bit sequence encoding $D$ and $R( \vec{\dot{y}} | \vec{x} )$ is the probability of receiving a sequence $\vec{\dot{y}}$ given that $\vec{x}$ was sent through the channel (we refer to this as the receiver metric). The \emph{a priori} probability $\prob{ D_i=D }$ is determined by the source statistics, which we generally assume to be equiprobable so that $\prob{ D_i=D } = 1/q$. In iterative decoding, the prior probabilities are set using extrinsic information from the previous pass of an outer decoder, as explained in Section~\ref{sec:system_description}. 
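As a concrete illustration, the forward-backward recursions above can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes a caller-supplied transition metric `gamma(i, mp, m, D)`, a finite list `states` of drift values that contains both $0$ and the final drift $\rho-\tau$, and it omits the per-index normalization used in practice.

```python
def forward_backward(N, states, q, gamma, final_drift):
    """Skeleton of the forward-backward recursions over the drift trellis.
    gamma(i, mp, m, D) is the state-transition metric; states is the finite
    list of drift values kept (must contain 0 and final_drift = rho - tau).
    The per-index normalization used in practice is omitted for clarity."""
    alpha = [{m: 0.0 for m in states} for _ in range(N + 1)]
    beta = [{m: 0.0 for m in states} for _ in range(N + 1)]
    alpha[0][0] = 1.0               # drift at start of frame is known
    for i in range(1, N + 1):
        for m in states:
            alpha[i][m] = sum(alpha[i - 1][mp] * gamma(i - 1, mp, m, D)
                              for mp in states for D in range(q))
    beta[N][final_drift] = 1.0      # drift at end of frame is known
    for i in range(N - 1, -1, -1):
        for m in states:
            beta[i][m] = sum(beta[i + 1][mp] * gamma(i, m, mp, D)
                             for mp in states for D in range(q))
    return alpha, beta
```

With a proper $\gamma$, the APPs then follow directly from \eqref{eqn:L}--\eqref{eqn:sigma}.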
The receiver metric is obtained by calculating the forward recursion \begin{equation} \label{eqn:alpha_dot} \dot{\alpha}_t(m) = \sum_{m'} \dot{\alpha}_{t-1}(m') \cdot Q\left( \vec{\dot{y}}\sw{t-1+m'}{t+m}| x_{t-1} \right) \text{,} \end{equation} where for brevity we do not show the dependence on $\vec{\dot{y}}$ and $\vec{x}$, and $Q(\vec{y}|x)$ can be directly computed from $\vec{y},x$ and the channel parameters: \begin{align*} \begin{split} &Q( \vec{y} | x) = \begin{cases} \ensuremath{P_\mathrm{d}} & \text{if $\mu = 0$} \\ \left(\frac{\ensuremath{P_\mathrm{i}}}{2}\right)^{\mu-1} \left( \ensuremath{P_\mathrm{t}}\ensuremath{P_\mathrm{s}} + \frac{1}{2} \ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}} \right) & \text{if $\mu > 0, y_{\mu-1} \neq x$} \\ \left(\frac{\ensuremath{P_\mathrm{i}}}{2}\right)^{\mu-1} \left( \ensuremath{P_\mathrm{t}}\bar\ensuremath{P_\mathrm{s}} + \frac{1}{2} \ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}} \right) & \text{if $\mu > 0, y_{\mu-1} = x$}, \\ \end{cases} \end{split} \end{align*} where $\mu$ is the length of $\vec{y}$ and $\bar\ensuremath{P_\mathrm{s}} = 1-\ensuremath{P_\mathrm{s}}$. The required value of the receiver metric is given by $R( \vec{\dot{y}} | \vec{x} ) = \dot{\alpha}_{n}(\dot{\mu}-n)$, where $\dot{\mu}$ is the length of $\vec{\dot{y}}$ and $n$ is the length of $\vec{x}$. As in \cite{briffa13jcomml}, the $\alpha$, $\beta$, and $\dot\alpha$ metrics are normalized as they are computed to avoid exceeding the limits of floating-point representation. We also assume that \eqref{eqn:alpha_dot} is computed at single precision\footnote{% We refer to 32-bit floating point as single precision, and 64-bit floating point as double precision.}, while the remaining equations use double precision. 
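The per-bit metric $Q(\vec{y}|x)$ above translates directly into code. The following sketch (the function name is ours, not from the original implementation) assumes bits are represented as integers and $P_\mathrm{t} = 1 - P_\mathrm{i} - P_\mathrm{d}$, as in the BSID channel model:

```python
def Q(y, x, Pi, Pd, Ps):
    """Per-bit BSID metric Q(y|x): probability that transmitted bit x
    gives rise to the received subsequence y (a list of bits).
    Assumes Pt = 1 - Pi - Pd, as in the BSID channel model."""
    Pt = 1.0 - Pi - Pd
    mu = len(y)
    if mu == 0:                     # bit was deleted, nothing received
        return Pd
    # either mu-1 insertions (each a random bit) followed by transmission
    # of x (correct or substituted), or mu insertions and a deletion
    match = Pt * (1.0 - Ps) if y[-1] == x else Pt * Ps
    return (0.5 * Pi) ** (mu - 1) * (match + 0.5 * Pi * Pd)
```

A useful sanity check is that summing $Q$ over all possible received subsequences, including the empty one, gives unity.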
\section{Introduction} \label{sec:intro} Most error-control systems are designed to detect and/or correct substitution errors, where individual symbols of the received sequence have been substituted while maintaining synchronization with the transmitted sequence. Some channels, however, also experience \emph{synchronization errors}, where symbols may additionally be deleted from or inserted into the received sequence. It has long been recognized that codes can be designed specifically for synchronization error correction \cite{sell62, leve66}. Except for the lesser-known work by Gallager \cite{gallager61}, for a long time only some short block codes were known. This changed when Davey and MacKay \cite{dm01ids} proposed a concatenated scheme that combines an outer LDPC code, providing good error-correction capability, with an inner code whose aim is to correct synchronization errors. Ratzer \cite{ratzer05telecom} took a different approach, using short marker sequences inserted in binary LDPC codewords; a similar approach was used by Wang \emph{et al.} \cite{wfd2011transcomm}. Yet another approach extends the state space of convolutional codes to allow correction of synchronization errors \cite{sfs2004africon, spm2008sitis, mansour2010jsac}. The problem of convolutional code design for synchronization error channels has been considered in \cite{mans10}. More recently, this approach has been applied successfully to turbo codes \cite{mans12}. The renewed increase in interest is mainly due to new applications requiring such codes. A recent survey can be found in \cite{mbt2010commsurveys}. In previous papers we have extended the work of Davey and MacKay, proposing a \emph{maximum a posteriori} (MAP) decoder \cite{bsw10icc}, improved code designs \cite{bs08isita,bb11isit}, as well as a parallel implementation of the MAP decoder resulting in speedups of up to two orders of magnitude \cite{briffa13jcomml}. 
However, these papers were restricted to the case where the frame boundaries were known by the decoder. While Davey and MacKay showed that the frame boundaries could be accurately determined for their bit-level decoder and code construction \cite{dm01ids}, it has not been shown whether this property extends to our MAP decoder and improved constructions. In this paper we define Time-Varying Block (TVB) codes in terms of the encoding used in \cite{briffa13jcomml}, and show that TVB codes represent a new class of codes which generalizes a number of previous synchronization error-correcting codes. We use the MAP decoder of \cite{briffa13jcomml} for these codes, showing how it can be used in an iterative scheme with an outer code. We also consider a number of important issues related to any practical implementation of the MAP decoder. Specifically, we give an expression for the expected distribution of drift between transmitter and receiver due to synchronization errors. We determine an appropriate choice for state space limits based on the drift probability distribution. In turn, we obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, we also give a number of optimizations that reduce the algorithm complexity with no further loss of decoder performance. We also show how the MAP decoder can be used for \emph{stream decoding}, where the boundaries of the received frames are not known \emph{a priori}. In doing so we demonstrate how an appropriate choice of decoder parameters allows stream decoding to approach the performance when frame boundaries are known, at the expense of some increase in complexity. We express some previously published codes as TVB codes, comparing performance with published results, and showing that the greater flexibility of TVB codes permits the creation of improved codes. 
In the following, we start with definitions in Section~\ref{sec:background} and summaries of results from earlier work. The applicable design criteria for TVB codes are considered in Section~\ref{sec:tvbcodes}, together with the representation of previously published codes as TVB codes. The appropriate choice for state space limits is given in Section~\ref{sec:statespace}, followed by expressions for the decoder complexity in Section~\ref{sec:complexity}. MAP decoder optimizations are given in Section~\ref{sec:optimization} and the changes necessary for stream decoding in Section~\ref{sec:streamdecoding}. Finally, practical results are given in Section~\ref{sec:results}. \subsection{Lattice Implementation of Receiver Metric} \label{sec:lattice} To compute the receiver metric, an alternative to the trellis of \eqref{eqn:alpha_dot} is to define a recursion over a lattice as in \cite{bahl75}. For the computation of $R( \vec{\dot{y}} | \vec{x} )$, the required lattice has $n+1$ rows and $\dot\mu+1$ columns. Each horizontal path represents an insertion with probability $\frac{1}{2}\ensuremath{P_\mathrm{i}}$, each vertical path is a deletion with probability $\ensuremath{P_\mathrm{d}}$, while each diagonal path is a transmission with probability $\ensuremath{P_\mathrm{t}}\ensuremath{P_\mathrm{s}}$ if the corresponding elements from $\vec{x}$ and $\vec{\dot{y}}$ are different or $\ensuremath{P_\mathrm{t}}\bar\ensuremath{P_\mathrm{s}}$ if they are the same. Let $F_{i,j}$ represent the lattice node in row $i$, column $j$. 
Then the lattice computation in the general case is defined by the recursion \begin{equation} \label{eqn:F} F_{i,j} = \frac{1}{2}\ensuremath{P_\mathrm{i}} F_{i,j-1} + \ensuremath{P_\mathrm{d}} F_{i-1,j} + \dot{Q}(\dot{y}_j | x_i) F_{i-1,j-1} \text{,} \end{equation} which is valid for $i<n$, and where $\dot{Q}(y|x)$ can be directly computed from $y,x$ and the channel parameters: \begin{equation} \begin{split} &\dot{Q}( y | x) = \begin{cases} \ensuremath{P_\mathrm{t}}\ensuremath{P_\mathrm{s}} & \text{if $y \neq x$} \\ \ensuremath{P_\mathrm{t}}\bar\ensuremath{P_\mathrm{s}} & \text{if $y = x$}\text{.} \\ \end{cases} \end{split} \end{equation} Initial conditions are given by \begin{equation} \begin{split} &F_{i,j} = \begin{cases} 1 & \text{if $i=0$, $j=0$} \\ 0 & \text{if $i<0$ or $j<0$.} \\ \end{cases} \end{split} \end{equation} The last row is computed differently as the channel model does not allow the last event to be an insertion. In this case, when $i=n$, the lattice computation is defined by \begin{equation} \label{eqn:F_lastrow} F_{n,j} = \ensuremath{P_\mathrm{d}} F_{n-1,j} + \dot{Q}(\dot{y}_j | x_n) F_{n-1,j-1} \text{.} \end{equation} Finally, the required receiver metric is obtained from this computation as $R( \vec{\dot{y}} | \vec{x} ) = F_{n,\dot{\mu}}$. The calculation of a single run through the lattice requires a number of computations proportional to the number of nodes in the lattice. Now for the transmitted sequence of $n$ bits considered, the number of rows will always be $n$ while the number of columns is at most $n + m_n^{+}$. The complexity of a direct implementation of this algorithm is therefore $\Theta(n [n + m_n^{+}])$. It has been argued in Section~\ref{sec:batchreceiver} that for a given $\vec{x}$, the receiver metric $R( \vec{\dot{y}} | \vec{x} )$ needs to be determined for all subsequences $\dot{\vec{y}}$ within the drift limit considered. 
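The lattice recursion \eqref{eqn:F}--\eqref{eqn:F_lastrow} can be rendered directly in code. The following is a sketch, not the authors' implementation, again assuming $P_\mathrm{t} = 1 - P_\mathrm{i} - P_\mathrm{d}$; row $0$ collects leading insertions, and the last row excludes the insertion term as required by the channel model:

```python
def lattice_R(y, x, Pi, Pd, Ps):
    """Receiver metric R(y|x) computed over the lattice, with the last
    row treated separately since the channel model does not allow the
    last event to be an insertion.  Assumes Pt = 1 - Pi - Pd."""
    Pt = 1.0 - Pi - Pd
    n, mu = len(x), len(y)
    Qdot = lambda yb, xb: Pt * Ps if yb != xb else Pt * (1.0 - Ps)
    # F[i][j]: total probability of all paths reaching node (i, j)
    F = [[0.0] * (mu + 1) for _ in range(n + 1)]
    F[0][0] = 1.0
    for j in range(1, mu + 1):      # row 0: leading insertions only
        F[0][j] = 0.5 * Pi * F[0][j - 1]
    for i in range(1, n + 1):
        for j in range(mu + 1):
            acc = Pd * F[i - 1][j]                             # deletion
            if j > 0:
                acc += Qdot(y[j - 1], x[i - 1]) * F[i - 1][j - 1]  # transmission
                if i < n:                                      # insertion (not in last row)
                    acc += 0.5 * Pi * F[i][j - 1]
            F[i][j] = acc
    return F[n][mu]
```

As with the trellis form, summing the result over all possible received sequences gives unity, which provides a convenient consistency check between the two implementations.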
Observe that the same argument applies equally when the receiver metric is computed using the lattice implementation \eqref{eqn:F}. Therefore, when the lattice implementation is used in batch mode, the MAP decoder has an asymptotic complexity of $\Theta(N n q M_\tau [n + m_n^{+}])$. \subsection{Optimizing the Lattice Implementation} \label{sec:lattice_corridor} In the lattice implementation of the receiver metric, it can be readily seen that the horizontal distance of a lattice node from the main diagonal is equivalent to the channel drift for the corresponding transmitted bit. It should therefore be clear that the likelihood of a path passing through a lattice node decreases as the distance to the main diagonal increases. We can take advantage of the above observation by limiting the lattice computation to paths within a fixed corridor around the main diagonal. Specifically, the arguments of Section~\ref{sec:statespace} can be applied directly, resulting in a corridor of width $M_n$ in general for the transmitted sequence of $n$ bits considered. Exceptions to this width occur in the first few rows with index $i < -m_n^{-}$ and the last few rows with index $i > \dot\mu - m_n^{+}$, where part of the corridor falls outside the lattice rectangle. The number of nodes within this corridor is given by \begin{align} \kappa &= n M_n - \kappa_\mathrm{UL} - \kappa_\mathrm{LR}\text{,} \\ \text{where~} \kappa_\mathrm{UL} &= \Delta(-m_n^{-}) - \Delta(-m_n^{-} - n) \text{,} \\ \kappa_\mathrm{LR} &= \Delta(n + m_n^{+} - \dot\mu) - \Delta(m_n^{+} - \dot\mu) \text{,} \\ \begin{split} \text{and~} \Delta(k) &= \begin{cases} \frac{k^2 + k}{2} & \text{if $k > 0$} \\ 0 & \text{otherwise.} \\ \end{cases} \end{split} \end{align} The complexity of the corridor-limited lattice algorithm is therefore $\Theta(n M_n - \kappa_\mathrm{UL} - \kappa_\mathrm{LR})$. 
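The node count $\kappa$ is cheap to evaluate; the following sketch implements the expressions above, with $M_n = m_n^{+} - m_n^{-} + 1$ and `mu` denoting the length $\dot\mu$ of the received subsequence:

```python
def corridor_nodes(n, m_minus, m_plus, mu):
    """Number of lattice nodes kappa inside the corridor, following the
    expressions above, with M_n = m_plus - m_minus + 1; mu is the length
    of the received subsequence."""
    Delta = lambda k: (k * k + k) // 2 if k > 0 else 0
    Mn = m_plus - m_minus + 1
    kappa_UL = Delta(-m_minus) - Delta(-m_minus - n)
    kappa_LR = Delta(n + m_plus - mu) - Delta(m_plus - mu)
    return n * Mn - kappa_UL - kappa_LR
```

In batch mode, where $\dot\mu = n + m_n^{+}$, the result reduces to $n M_n - \frac{1}{2}[(m_n^{-})^2 - m_n^{-}]$ as derived in the next subsection.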
Some simplification of this expression is possible when the corridor-limited lattice algorithm is used in the MAP decoder with batch computation for the channel considered. When batch computation is used, $\dot\mu = n + m_n^{+}$ by definition, so that $\kappa_\mathrm{LR} = 0$. Furthermore, for the BSID channel, $-n \le m_n^{-} \le 0$, so that $\kappa_\mathrm{UL} = \frac{1}{2}\left[(m_n^{-})^2 - m_n^{-}\right]$. Under these conditions, the MAP decoder has an asymptotic complexity of $\Theta\left(N q M_\tau \left[n M_n - \frac{1}{2}\left[(m_n^{-})^2 - m_n^{-}\right] \right] \right)$. \subsection{Comparing Complexity} \label{sec:complexity_improvement} A summary of the complexity expressions for the MAP decoder for various computation modes of the receiver metric is given in Table~\ref{tab:complexity}. \inserttab{tab:complexity}{cll} {Complexity expressions for the MAP decoder for various computation modes of the receiver metric.}{ & \emph{Algorithm} & \emph{Complexity} \\ \hline A & Original & $\Theta(N n q M_\tau M_n^2 M_1)$ \\ B & Batch computation & $\Theta(N n q M_\tau M_n M_1)$ \\ C & Lattice receiver & $\Theta(N n q M_\tau [n + m_n^{+}])$ \\ D & Corridor constraint & $\Theta\left(N q M_\tau \left[n M_n - \frac{1}{2}\left[(m_n^{-})^2 - m_n^{-}\right] \right] \right)$ \\ } Comparing the expressions in rows A and B of Table~\ref{tab:complexity} we can immediately see that the batch computation of the receiver metric reduces complexity by a factor equal to $M_n$. Unfortunately, the remaining complexity expressions contain terms that depend on the code parameters and channel conditions in a rather opaque way, making it harder to understand the benefits of these improvements. In the first instance, we can simplify the expressions further to facilitate comparison. Consider the expression in row C of Table~\ref{tab:complexity}, when the lattice implementation is used. 
It can be shown that $n + m_n^{+} \rightarrow M_n-1$ as channel conditions get worse; we can therefore simplify the complexity expression to $O(N n q M_\tau M_n)$. Comparing this to the expression in row B of Table~\ref{tab:complexity} we can see that the use of the lattice implementation reduces complexity by a factor of at least $M_1$. Finally, consider the expression in row D of Table~\ref{tab:complexity}, when the corridor constraint is applied to the lattice algorithm. Since $m_n^{-} \le 0$, the $\frac{1}{2}\left[(m_n^{-})^2 - m_n^{-}\right]$ term is strictly positive. The reduction in complexity offered by the corridor constraint is therefore equal to $\frac{2 n M_n}{2 n M_n - (m_n^{-})^2 + m_n^{-}}$, and becomes significant as channel conditions improve. As channel conditions get worse, $m_n^{-} \rightarrow -n$, so that the expression is dominated by the $n M_n$ term. Under these conditions, the complexity of the corridor-constrained lattice implementation becomes approximately equal to that of the unconstrained lattice implementation. We can also illustrate the effect of the proposed speedups by considering a rate-$\frac{1}{2}$ TVB code with typical block and codeword sizes $N=500$, $n=10$, and $q=32$. We compute the MAP decoder complexity for this code under a range of channel conditions, using the original algorithm of Section~\ref{sec:mapdecoder} and the improvements described above. We plot these in Fig.~\ref{fig:complexity} using the same summation limits as in Section~\ref{sec:limits_example}. \insertfig{fig:complexity}{Graphs/complexity} {MAP decoder complexity (in number of arithmetic operations) under a range of channel conditions, for various computation modes of the receiver metric.} Note that for a fairer comparison between the lattice and trellis modes, we include a constant factor of three in the lattice computation. 
This follows the observation that each lattice node computation \eqref{eqn:F} requires three multiplications while each trellis computation \eqref{eqn:alpha_dot} requires only one. A few general observations can be made on this graph: \begin{inparaenum}[a)] \item The batch computation of the receiver metric results in a considerable reduction of complexity throughout, but is even more significant under poor channel conditions. \item The lattice implementation is considerably less complex than the trellis implementation at high channel error rates. \item The lattice corridor constraint extends this improvement to the low channel error rate range. \end{inparaenum} In conclusion, the proposed speedups result in a considerable reduction in complexity of almost two orders of magnitude for typical code sizes and channel conditions. We have observed a similar trend under a range of typical code sizes, so this result can be taken as representative. \section*{Acknowledgment} \balance \bibliographystyle{IEEEtran} \section{Results} \label{sec:results} Practical results are given in this section. We show how an appropriate choice of decoder parameters allows stream decoding to perform as well as when frame boundaries are known. Results are also given for existing constructions which can be expressed as TVB codes, showing how the symbol-level MAP decoder improves on the original decoder (in the case of \cite{dm01ids}) or is equivalent (in the case of \cite{ratzer05telecom}). We also demonstrate some improved constructions allowed by the flexibility of TVB codes. These are achieved by using simulated annealing to find TVB codes of a required order with a good Levenshtein distance spectrum. Specifically, we seek to find constituent codes with the highest possible minimum Levenshtein distance and the lowest multiplicity at small distances. 
For all codes so designed, $M<N$; we construct our TVB codes using a random sampling with replacement of the unique constituent codes, and use this as our inner code. Construction parameters for all codes used in this section are given in Table~\ref{tab:codes}. To facilitate reproduction of these results, the TVB codebooks used are available for download from the first author's web site\footnote{% Available at \url{http://jabriffa.wordpress.com/publications/data-sets/}. }. \inserttabd{tab:codes}{cllll} {Construction parameters of codes used in simulations.}{ \emph{Label}\footnote{% Labels starting with P indicate previously published results, while labels starting with N indicate new simulation results.} & \emph{Inner code} & \emph{Marker} & \emph{Outer code} & \emph{Comment} \\ \hline P1 & $(5,16)$ sparse & random, distributed & LDPC $(999,888)$ $\mathbb{F}_{16}$ & Published in \cite[Fig.~8, Code D]{dm01ids} \\ N1a & $(5,16)$ sparse & random, distributed & LDPC $(999,888)$ $\mathbb{F}_{16}$ & Identical construction to P1, symbol-level MAP decoder \\ N1b & $(10,256,3)$ TVB & none & LDPC $(499,444)$ $\mathbb{F}_{256}$ & Same overall rate and block size as P1 \\ \hline P2 & $(6,8)$ sparse & random, distributed & LDPC $(1000,100)$ $\mathbb{F}_{8}$ & Published in \cite[Fig.~8, Code I]{dm01ids} \\ N2a & $(6,8)$ sparse & random, distributed & LDPC $(1000,100)$ $\mathbb{F}_{8}$ & Identical construction to P2, symbol-level MAP decoder \\ N2b & $(6,8,12)$ TVB & none & LDPC $(1000,100)$ $\mathbb{F}_{8}$ & Same overall rate, block size, and outer code as P2 \\ \hline P3 & $9$ bits, uncoded & $001/110$, appended & LDPC $(3001,2000)$ $\mathbb{F}_{2}$ & Published in \cite[Fig.~7, Code D]{ratzer05telecom} \\ N3 & $9$ bits, uncoded & $001/110$, appended & LDPC $(2997,1998)$ $\mathbb{F}_{2}$\footnote{\label{fn:n3a}% Obtained by truncating the LDPC $(3001,2000)$ $\mathbb{F}_{2}$ of P3. 
This truncation is necessary so that the outer-encoded sequence can be expressed by an integral number of inner codewords.} & Identical inner code to P3, marginally smaller outer code \\ \hline P4 & not applicable & not applicable & Rate $\frac{3}{14}$ turbo code $\mathbb{F}_{2}$ & Published in \cite[Fig.~4, Code T2]{mans12} \\ N4 & $(7,8,8)$ TVB & none & LDPC $(666,333)$ $\mathbb{F}_{8}$ & Same overall rate as P4 \\ \hline P5 & not applicable & not applicable & Rate $\frac{1}{10}$ turbo code $\mathbb{F}_{2}$ & Published in \cite[Fig.~4, Code T4]{mans12} \\ N5 & $(7,4,32)$ TVB & none & LDPC $(855,300)$ $\mathbb{F}_{4}$ & Same overall rate as P5 \\ } \subsection{Stream Decoding with MAP Decoder} \label{sec:results_stream} The results of \cite{bsw10icc} assumed known frame boundaries; under these conditions, the decoder is arguably at an advantage when compared with the results of \cite{dm01ids, ratzer05telecom}. It is also not clear whether the MAP decoder can keep track of frame boundaries with the non-sparse constructions of \cite{bsw10icc, bb11isit} and the TVB codes introduced here, especially in the absence of a known marker sequence. In the following we investigate the performance of the MAP decoder under stream decoding conditions, and consider the choice of look-ahead required. As in \cite{dm01ids} we assume that the start of the first frame is known, while the decoder is responsible for keeping synchronization from that point onwards. We use the limits specified in Section~\ref{sec:statespace}. We start by investigating the effect of stream decoding on the ability of the MAP decoder to track codeword boundaries. We consider a $(6,8,12)$ TVB code, which is the inner code for the concatenated system N2b of Table~\ref{tab:codes}. 
We simulate this inner code with a block size $N=2000$ under the channel conditions at the onset of convergence for the concatenated system, that is at $\ensuremath{P_\mathrm{i}} = \ensuremath{P_\mathrm{d}} = 0.22$, assuming only the start position of the first frame is known. At each codeword boundary we plot the fraction of correctly determined drifts (fidelity) in Fig.~\ref{fig:fidelity}. \insertfig{fig:fidelity}{Graphs/fidelity} {The fraction of correctly resynchronized codeword boundaries (fidelity) as a function of codeword index, for code N2b of Table~\ref{tab:codes}.} As expected, the fidelity drops at the end of the frame, where the actual drift is unknown to the decoder. However, it can be observed that the fidelity reaches a steady high value within about 1000 codewords from the end of frame. It could therefore be supposed that a look-ahead of $\nu=1000$ would be sufficient for this code under these channel conditions. The dip at the start of the frame is caused by the uncertainty in the frame start position, due to the very low fidelity at the end of the previous frame. To test this hypothesis, we concatenate this inner code with the $(1000,100)$ LDPC code over $\field{F}_8$ of \cite[Code I]{dm01ids}. We simulate this system under the following conditions: \begin{inparaenum} \item known frame start and end (frame decoding); \item known start for the first frame, unknown frame ends (stream decoding), no look-ahead; \item stream decoding with look-ahead $\nu=1000$ codewords. \end{inparaenum} Results are shown in Fig.~\ref{fig:lookahead}. \insertfig{fig:lookahead}{Graphs/lookahead-fer} {Demonstration of the effect of look-ahead on the MAP decoder's performance under stream decoding conditions, and comparison with frame decoding.} We give results after the first and fifth iterations. As anticipated, performance under stream decoding conditions is poorer than frame decoding if there is no look-ahead. 
However, an appropriate look-ahead quantity allows the decoder to perform as well under stream decoding as under frame decoding. It is important to highlight that this result is dependent on the inner code structure, and that therefore the generalization to other constructions is not obvious. Nevertheless, we have repeated the same test with other constructions, including those of \cite{dm01ids, ratzer05telecom, bs08isita, bsw10icc, bb11isit} and the new constructions in this paper, and under different channel conditions. In all cases we have found that the result is repeatable, in that it is possible to approach the performance of frame decoding with stream decoding, as long as an appropriate look-ahead quantity is chosen. The only costs of stream decoding are therefore the need for a fidelity analysis to determine a suitable look-ahead value, and the increased decoding latency and complexity caused by the augmented block size. Since the code performance is undiminished, to simplify our analysis from this point onwards we assume known frame start and end positions. \subsection{Comparison with Prior Art} \label{sec:results_comparison} We have already shown in \cite{bsw10icc} that the (symbol-level) MAP decoder allows us to obtain better performance from the codes of \cite{dm01ids}. Further improvement can be obtained with iterative decoding, as we show here. Additionally, the flexibility of TVB codes allows us to obtain codes that perform better at the same size and/or rate. In the following, we simulate channel conditions $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}}; \ensuremath{P_\mathrm{s}}=0$ in order to compare with published results. For low channel error rates, consider \cite[Code D]{dm01ids}, listed as P1 in Table~\ref{tab:codes}. We compare the previously published result with a MAP decoding of the same code (N1a) in Fig.~\ref{fig:davey}. 
\insertfig{fig:davey}{Graphs/davey-fer} {Comparison with Davey-MacKay: improving the performance of \cite[Code D]{dm01ids} (left) and \cite[Code I]{dm01ids} (right) using our MAP decoder, iterative decoding, and an inner code with better Levenshtein distance spectrum.} As shown in \cite{bsw10icc}, the MAP decoder improves the performance of this code even after the first iteration; additional iterations improve the result further. At the same overall code rate and block size we can improve the performance further by designing an inner TVB code with a better Levenshtein distance spectrum (N1b). We repeat the process at higher channel error rates for \cite[Code I]{dm01ids}, listed as P2 in Table~\ref{tab:codes}. Again, compared to the published result, a MAP decoding of the same code (N2a) improves performance even after the first iteration, also in Fig.~\ref{fig:davey}. Additional iterations improve the result, but the difference in this case is less pronounced. Replacing the inner code with one of the same size but a better Levenshtein distance spectrum (N2b) improves performance further. As we have already discussed in Section~\ref{sec:system_description}, the marker codes given by Ratzer \cite{ratzer05telecom} can also be cast as TVB codes. In \cite{ratzer05telecom} binary outer LDPC codes were used. To use a binary outer code with our MAP decoder, the bitwise APPs can be obtained from the $q$-ary symbol APPs by marginalizing over the other bits \cite[p.~326]{mackay2003}; these are then passed to the decoder for the binary outer code. In this case, for a binary outer code we expect the performance of the concatenated code to be identical, whether the inner code is decoded with the bit-level (MAP) decoder of \cite{ratzer05telecom} or with our symbol-level decoder. We show this in Fig.~\ref{fig:ratzer-mansour} for \cite[Code~D]{ratzer05telecom}, listed as P3 in Table~\ref{tab:codes}, in comparison with an almost-identical code (N3) using our MAP decoder. 
\insertfig{fig:ratzer-mansour}{Graphs/ratzer-mansour-fer} {Comparison with Ratzer and with Mansour \& Tewfik: demonstrating the equivalence of our MAP decoder on \cite[Code~D]{ratzer05telecom}, with and without iterative decoding, and improving performance on \cite[Codes T2, T4]{mans12} at the same overall code rate, using inner codes with improved Levenshtein distance in concatenation with LDPC outer codes.} It is important to highlight that decoding marker codes as TVB codes provides no material advantage; in fact, a cost is paid in complexity for doing so. We do not propose or expect that marker codes will be decoded as TVB codes. However, there is value in showing that marker codes can be decoded as TVB codes with no loss in performance. Specifically, this allows us to compare the structure of marker codes with other constructions, within the same context. Finally, in Fig.~\ref{fig:ratzer-mansour} we compare with the more recent turbo codes of \cite[Codes T2, T4]{mans12}, respectively listed as P4 and P5 in Table~\ref{tab:codes}, with concatenated systems of the same overall rate, using a TVB inner code and an LDPC outer code. It can be seen that our concatenated systems outperform the codes of \cite{mans12}, significantly at lower channel error rates, and somewhat less so at higher channel error rates. \section{Appropriate Limits on State Space} \label{sec:statespace} The equations in Section~\ref{sec:mapdecoder} assume that summations can be taken over the set of all possible states. For a channel such as the one considered, the state space is unbounded for positive drifts. A practical implementation will have to take sums over a finite subset of states. In \cite{dm01ids} the state space was limited to a drift $|S_t| \le x_\textrm{max}$, where $x_\textrm{max}$ was chosen to be `several times larger' than the standard deviation of the synchronization drift over one block length, assuming this takes a Gaussian distribution. 
No recommendation was given for the value that should be used. Limiting the state space is by definition sub-optimal. However, we can arbitrarily lower the number of cases where the sub-optimal solution is worse than the optimal one, by ensuring that only the least likely states are omitted. The choice of summation limits also involves a trade-off with complexity, which has a polynomial relationship with the size of the state space (c.f.\ Section~\ref{sec:complexity}). Therefore, an appropriate choice of summation limits will result in the smallest state space such that the probability of the drift being outside that range is as low as required. The first step to identify good summation limits is to derive an accurate probability distribution of the state space, avoiding the Gaussian approximation of \cite{dm01ids}. \subsection{Drift Probability Distribution} The drift $S_T$ after transmission of $T$ bits was stated in \cite{dm01ids} (and shown in \cite{davey99}) to be normally distributed with zero mean and a variance equal to $T p / (1-p)$ for the special case where $p := \ensuremath{P_\mathrm{i}} = \ensuremath{P_\mathrm{d}}$. This distribution is asymptotically valid as $T \rightarrow \infty$. For cases where $\ensuremath{P_\mathrm{i}} \neq \ensuremath{P_\mathrm{d}}$ or where $T$ is not large enough, this distribution cannot be used. This is particularly relevant for determining the summation limits of \eqref{eqn:alpha_dot} where the sequence length $n$ is not large. 
An exact expression for the probability distribution of $S_T$ is given by \ifdraft \begin{equation} \Phi_T(m) = \prob{S_T = m} = \ensuremath{P_\mathrm{t}}^{T} \ensuremath{P_\mathrm{i}}^{m} \sum_{j=j_0}^{T} \binom{T}{j} \binom{T + m + j - 1}{m + j} \left[ \frac{\ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}}}{\ensuremath{P_\mathrm{t}}} \right] ^j \text{,} \label{eqn:driftpdf} \end{equation} \else \begin{align} & \Phi_T(m) = \prob{S_T = m} \nonumber\\ & \quad = \ensuremath{P_\mathrm{t}}^{T} \ensuremath{P_\mathrm{i}}^{m} \sum_{j=j_0}^{T} \binom{T}{j} \binom{T + m + j - 1}{m + j} \left[ \frac{\ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}}}{\ensuremath{P_\mathrm{t}}} \right] ^j \text{,} \label{eqn:driftpdf} \end{align} \fi where $j_0=\max(-m,0)$. Observe that for a drift of $m$ bits, we need $m$ insertion events more than we have deletion events. Over a sequence of $T$ bits, for $j$ deletion events, this means $m+j$ insertion events and $T-j$ transmission events. The probability of this is $\ensuremath{P_\mathrm{i}}^{m+j} \ensuremath{P_\mathrm{d}}^{j}\ensuremath{P_\mathrm{t}}^{T-j} = \ensuremath{P_\mathrm{t}}^{T} \ensuremath{P_\mathrm{i}}^{m}\left[ \ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}} / \ensuremath{P_\mathrm{t}} \right] ^j$. We get \eqref{eqn:driftpdf} by adding all different combinations of these events, and summing over $j$, noting that we cannot have fewer than $0$ events of any type. Specifically, the number of combinations for $j$ deletions in $T$ transmitted bits is given by $\binom{T}{j}$. The number of combinations for $m+j$ insertions is given by $\binom{T + m + j - 1}{m + j}$, as the $m+j$ insertion events create an additional $m+j$ opportunities for insertion. \subsection{Avoiding Numerical Issues} \label{sec:driftpdf_numerical} In a practical implementation, computing the drift probability \eqref{eqn:driftpdf} requires a few special considerations. 
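For small $T$, \eqref{eqn:driftpdf} can be evaluated directly. The following sketch is useful mainly as a reference implementation; as discussed below, this direct form exceeds floating-point range for realistic frame lengths:

```python
from math import comb

def drift_pmf(T, m, Pi, Pd):
    """Direct evaluation of Phi_T(m) = P(S_T = m) from the expression
    above.  Only suitable for small T: the binomial coefficients and the
    power term quickly exceed floating-point range otherwise."""
    Pt = 1.0 - Pi - Pd
    j0 = max(-m, 0)
    total = 0.0
    for j in range(j0, T + 1):      # j counts deletion events
        total += comb(T, j) * comb(T + m + j - 1, m + j) \
                 * (Pi * Pd / Pt) ** j
    return Pt ** T * Pi ** m * total
```

As a sanity check, the distribution sums to unity over all reachable drifts $m \geq -T$.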
Practical codes from the literature have codeword size $n$ in the range 5--12 bits and number of codewords $N$ up to 1000, for a frame length $nN$ of about 4000--6000 bits. These codes are designed to operate under channel conditions $\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}}$ from $10^{-3}$ to above $10^{-1}$. Evaluating \eqref{eqn:driftpdf} under these conditions, one encounters very large values for the two binomial coefficients and very small values for the power term. For example, consider evaluating \eqref{eqn:driftpdf} at $m=0$ for $T=6000$ and $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}}=10^{-3}$. The two binomial coefficients have a range of up to $1.56\times10^{1804}$ (at $j=3000$) and $8.34\times10^{3609}$ (at $j=6000$) respectively. The power term has a range of down to $1.65\times10^{-35995}$ (at $j=6000$). This range is far beyond that representable even in double-precision floating point. A direct implementation of \eqref{eqn:driftpdf} will therefore result in numerical overflow and underflow (in computing the binomial coefficients and power term respectively) for typical frame sizes and channel conditions, even though the summation term itself is representable. The above numerical range problem can be avoided by combining the computation of all terms in the summation as follows. Observe that \eqref{eqn:driftpdf} can be rewritten as \begin{align} \label{eqn:phi_sum} \Phi_T(m) &= \sum_{j=j_0}^{T} \delta_j\text{,}\\ \text{where~} \delta_j &= \ensuremath{P_\mathrm{t}}^{T} \ensuremath{P_\mathrm{i}}^{m} \binom{T}{j} \binom{T + m + j - 1}{m + j} \left[ \frac{\ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}}}{\ensuremath{P_\mathrm{t}}} \right] ^j \label{eqn:deltaj} \end{align} and $j_0=\max(-m,0)$ as before. In this expression, note that the summation is empty if $j_0 > T$, resulting in zero probability. 
Also, since $j \geq 0$, the first binomial coefficient is always non-zero, while the second binomial coefficient is non-zero if $T > 0$. Expanding the binomial coefficients using the factorial formula, we can express the summation term recursively as \begin{equation} \delta_j = \delta_{j-1} \cdot \frac{\ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}}}{\ensuremath{P_\mathrm{t}}} \cdot \frac{T + m + j - 1}{m + j} \cdot \frac{T - j + 1}{j} \text{,} \label{eqn:deltaj_recursive} \end{equation} allowing successive terms to be determined easily from previous ones. The initial term required is the one at $j_0$, and can be determined from \eqref{eqn:deltaj} by expanding the binomial coefficients using the multiplicative formula: \begin{equation} \delta_{j_0} = \ensuremath{P_\mathrm{t}}^{T} \ensuremath{P_\mathrm{i}}^{m} \prod_{i=1}^{j_0} \frac{T - j_0 + i}{i} \prod_{i=1}^{m+j_0} \frac{T - 1 + i}{i} \left[ \frac{\ensuremath{P_\mathrm{i}} \ensuremath{P_\mathrm{d}}}{\ensuremath{P_\mathrm{t}}} \right] ^{j_0} \text{.} \label{eqn:deltaj0} \end{equation} Consider the earlier example, now evaluating \eqref{eqn:phi_sum} at $m=0$ for $T=6000$ and $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}}=10^{-3}$. In this case, the initial value $\delta_{j_0} = 6.07\times10^{-6}$, and the multiplier $\delta_j/\delta_{j-1}$ in the recursive expression \eqref{eqn:deltaj_recursive} has its smallest value of $3.34\times10^{-10}$ at $j=6000$. Both values are easily representable as floating point numbers. Using \eqref{eqn:phi_sum}, numerical range issues remain when computing $\delta_{j_0}$ for larger values of $\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}}$, and consequently also for storing successive values of $\delta_j$. For example, consider evaluating \eqref{eqn:phi_sum} at $m=0$ for $T=6000$ and $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}}=10^{-1}$.
In this case $\delta_{j_0} = 3.47\times10^{-582}$, and one needs to accumulate a number of $\delta_j$ values in this range to obtain the required result $\Phi_{6000}(0) = 0.0109$. Again, the intermediate values are beyond the range of double-precision floating point numbers although the final result is representable. These numerical range issues can be avoided by computing \eqref{eqn:deltaj_recursive} and \eqref{eqn:deltaj0} using logarithms. For the earlier example with $m=0$, $T=6000$, and $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}}=10^{-1}$, we now get $\log\delta_{j_0} = -1.34\times10^3$ and the smallest value of $\log\delta_j$ is $-1.93\times10^4$ at $j=6000$. Finally, the required drift probability is obtained by accumulating the exponential of the $\log\delta_j$ values using \eqref{eqn:phi_sum}. However, the individual values of $\delta_j$ are still beyond the range of double-precision floating point. In practice, we have found that the use of extended-precision (80-bit) floating point provides sufficient range. Alternatively, the accumulation in \eqref{eqn:phi_sum} may be computed in logarithmic domain using the property $\log(A+B) = \log{A} + \log\left( 1 + e^{\log{B}-\log{A}} \right)$. Note that expression \eqref{eqn:driftpdf} is valid for any $\ensuremath{P_\mathrm{i}} \ge 0$, $\ensuremath{P_\mathrm{d}} \ge 0$, and $\ensuremath{P_\mathrm{i}} + \ensuremath{P_\mathrm{d}} < 1$. However, the computation using logarithms cannot be applied directly when either or both of $\ensuremath{P_\mathrm{i}}$ and $\ensuremath{P_\mathrm{d}}$ are zero. These degenerate cases have to be handled as special cases, by first reducing \eqref{eqn:driftpdf} and then implementing the simplified equations using logarithms. 
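The log-domain scheme just described can be sketched as follows (illustrative Python with hypothetical names; it assumes the non-degenerate case $P_\mathrm{i}, P_\mathrm{d} > 0$, with the degenerate cases handled separately as noted above).

```python
import math

def log_drift_pmf(T, m, Pi, Pd):
    """Log-domain evaluation of Phi_T(m) via the delta_j recursion;
    assumes the non-degenerate case Pi, Pd > 0."""
    Pt = 1.0 - Pi - Pd
    j0 = max(-m, 0)
    if j0 > T:
        return -math.inf                      # empty sum: zero probability
    lr = math.log(Pi * Pd / Pt)               # log of the constant ratio Pi*Pd/Pt
    # log delta_{j0}, using the multiplicative formula for the binomials
    ld = T * math.log(Pt) + m * math.log(Pi) + j0 * lr
    for i in range(1, j0 + 1):                # log C(T, j0)
        ld += math.log(T - j0 + i) - math.log(i)
    for i in range(1, m + j0 + 1):            # log C(T+m+j0-1, m+j0)
        ld += math.log(T - 1 + i) - math.log(i)
    acc = ld
    # recursion on delta_j; accumulate with log(A+B) = logA + log1p(exp(logB-logA))
    for j in range(j0 + 1, T + 1):
        ld += (lr + math.log(T + m + j - 1) - math.log(m + j)
                  + math.log(T - j + 1) - math.log(j))
        a, b = max(acc, ld), min(acc, ld)
        acc = a + math.log1p(math.exp(b - a))
    return acc
```

For the example above ($m=0$, $T=6000$, $P_\mathrm{i}=P_\mathrm{d}=10^{-1}$) this starts from $\log\delta_{j_0} = 6000\log 0.8 \approx -1.34\times10^3$ and returns $\log\Phi_{6000}(0) \approx \log 0.0109$, with every intermediate value comfortably inside floating point range.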
\subsection{Probability of Drift Outside Range} \label{sec:drift_limits} We want to choose lower and upper limits $m_T^{-}, m_T^{+}$ such that the drift after transmitting a sequence of $T$ bits is outside the range $\{ m_T^{-} \ldots m_T^{+} \}$ with an arbitrarily low probability $\ensuremath{P_\mathrm{r}}$: \begin{eqnarray} \prob{S_T < m_T^{-}} + \prob{S_T > m_T^{+}} < \ensuremath{P_\mathrm{r}} \text{,}\\ \text{or equivalently:~} \label{eqn:limit_condition} 1 - \sum_{m = m_T^{-}}^{m_T^{+}} \Phi_T(m) < \ensuremath{P_\mathrm{r}} \text{.} \end{eqnarray} An appropriate choice of limits can be obtained iteratively as follows. Observe that for the BSID channel $\Phi_T(m)$ is monotonically decreasing with increasing $|m|$. A first estimate for the limits is given by: \begin{align} \label{eqn:limit_lower} m_T^{-(1)} = \max m \left| \Phi_T(m-1) < \frac{\ensuremath{P_\mathrm{r}}}{2} \right. \\ \label{eqn:limit_upper} \text{and~} m_T^{+(1)} = \min m \left| \Phi_T(m+1) < \frac{\ensuremath{P_\mathrm{r}}}{2} \right. \text{,} \end{align} where the number in superscript parentheses indicates the iteration count. If these estimates satisfy \eqref{eqn:limit_condition}, we use them as our lower and upper limits. Otherwise these estimates are updated iteratively as follows: \begin{align} m_T^{-(i+1)} &= \begin{cases} m_T^{-(i)}-1 & \text{if $\Phi_T(m_T^{+(i)}+1) \le \Phi_T(m_T^{-(i)}-1)$} \\ m_T^{-(i)} & \text{otherwise,} \end{cases} \\ m_T^{+(i+1)} &= \begin{cases} m_T^{+(i)}+1 & \text{if $\Phi_T(m_T^{+(i)}+1) > \Phi_T(m_T^{-(i)}-1)$} \\ m_T^{+(i)} & \text{otherwise.} \end{cases} \end{align} That is, we extend the range by one in the direction of greatest gain. The iterative process is repeated until \eqref{eqn:limit_condition} is satisfied. The size of the state space is given by $M_T = m_T^{+} - m_T^{-} + 1$. 
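The iterative extension step above can be sketched as a greedy loop (illustrative Python with hypothetical names; for brevity this simplified variant grows the range outward from $m=0$ rather than starting from the first estimates \eqref{eqn:limit_lower} and \eqref{eqn:limit_upper}, but the extension rule is the same: add one state in the direction of greatest gain).

```python
def drift_limits(phi, Pr, max_span=10000):
    """Greedy sketch of limit selection: grow [m_lo, m_hi] one state at a
    time in the direction of greatest probability gain until the mass
    outside the range drops below Pr.  `phi` maps a drift m to Phi_T(m)."""
    m_lo = m_hi = 0
    inside = phi(0)
    while 1.0 - inside >= Pr:
        lo_gain, hi_gain = phi(m_lo - 1), phi(m_hi + 1)
        if hi_gain > lo_gain:
            m_hi += 1
            inside += hi_gain
        else:
            m_lo -= 1
            inside += lo_gain
        if m_hi - m_lo + 1 > max_span:
            raise RuntimeError("Pr unreachable within max_span states")
    return m_lo, m_hi
```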
\subsection{Choice of Summation Limits} \label{sec:summation_limits} When considering the whole frame, $T=\tau$, so that the overall size of the state space is given by $M_\tau$. Now the final output of the MAP decoder is calculated using \eqref{eqn:L}, which sums over all $M_\tau$ prior states $m_\tau^{-} \le m' \le m_\tau^{+}$. For each prior state, however, only the drifts introduced by the transmission of $n$ bits need to be considered, corresponding to a subset of $M_n$ states $m_n^{-} \le m \le m_n^{+}$. Similarly, the computation of \eqref{eqn:alpha} and \eqref{eqn:beta} is required for all $M_\tau$ states $m_\tau^{-} \le m \le m_\tau^{+}$, each involving a summation over $M_n$ prior or posterior states $m_n^{-} \le m' \le m_n^{+}$ respectively. Finally, the state transition metric is obtained using the forward pass of \eqref{eqn:alpha_dot}; this is computed over a sequence of $n$ bits for each of $M_n$ states $m_n^{-} \le m \le m_n^{+}$. Each recursion consists of a summation over prior states $m'$; in this case only the drifts introduced by the transmission of one bit need to be considered, corresponding to a subset of $M_1$ prior states $m_1^{-} \le m' \le m_1^{+}$. Now consider that we want to limit the probability of any of these summations not covering an actual channel event over a whole frame to, say, no more than $\ensuremath{P_\mathrm{e}}$. When computing the limits over the whole frame, $m_\tau^{\pm}$, we simply need to set $\ensuremath{P_\mathrm{r}}=\ensuremath{P_\mathrm{e}}$. However, when computing limits over an $n$-bit sequence, $m_n^{\pm}$, since this summation is repeated for each of $N$ such sequences, we set $\ensuremath{P_\mathrm{r}}=1-\sqrt[N]{1-\ensuremath{P_\mathrm{e}}} \approx \frac{\ensuremath{P_\mathrm{e}}}{N}$ for small $\ensuremath{P_\mathrm{e}}$.
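The per-summation budget $P_\mathrm{r} = 1 - \sqrt[K]{1-P_\mathrm{e}}$ (with $K$ the number of repetitions, i.e.\ $N$ or $\tau$) is itself prone to cancellation for small $P_\mathrm{e}$; a numerically safe sketch (hypothetical helper name) uses the standard `expm1`/`log1p` identities:

```python
from math import expm1, log1p

def per_segment_pr(Pe, K):
    """Per-summation budget P_r = 1 - (1 - Pe)**(1/K), written with
    expm1/log1p to avoid cancellation when Pe is small."""
    return -expm1(log1p(-Pe) / K)
```

For $P_\mathrm{e}=10^{-10}$ and $K=N=500$ this evaluates to about $2\times10^{-13}$, matching the approximation $P_\mathrm{e}/N$.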
Similarly, for limits over a $1$-bit sequence, $m_1^{\pm}$, we use $\ensuremath{P_\mathrm{r}}=1-\sqrt[\tau]{1-\ensuremath{P_\mathrm{e}}} \approx \frac{\ensuremath{P_\mathrm{e}}}{\tau}$ for small $\ensuremath{P_\mathrm{e}}$. Except in the case of stream decoding (c.f. Section~\ref{sec:streamdecoding}), the state space limits only need to be determined once and remain valid as long as the channel conditions do not change. In any case, the required values of $\Phi_T(m)$ depend only on the code parameters and channel conditions, so that a table may be pre-computed. This makes the average complexity of determining the state space limits negligible. \subsection{Example} \label{sec:limits_example} Overestimating the required state space increases computational complexity, while underestimating the state space often results in poor decoding performance. Accurate limits are particularly important for restricting the drifts considered across each codeword. It is therefore useful to illustrate the discrepancy between the approximate distribution of \cite{dm01ids} and the exact expression for the distribution of the drift. Consider a system with typical block and codeword sizes $N=500$, $n=10$. We plot in Fig.~\ref{fig:statespace-size} the number of states within summation limits using the approximate and exact expressions, in each case for $\ensuremath{P_\mathrm{e}}=10^{-10}$. 
\begin{figure}[!t] \centering \begin{subfigure}[b]{\figwidth} \includegraphics[width=\figwidth]{Graphs/statespace-size} \caption{} \label{fig:statespace-size} \end{subfigure} \begin{subfigure}[b]{\figwidth} \includegraphics[width=\figwidth]{Graphs/statespace-cover} \caption{} \label{fig:statespace-cover} \end{subfigure} \caption{ \protect\subref{fig:statespace-size} Number of states within summation limits, and \protect\subref{fig:statespace-cover} probability of encountering a channel event outside the chosen summation limits over a single frame, using the approximation of \cite{dm01ids} and our exact computation.} \label{fig:dminner2-750} \end{figure} For $T=1$, \cite[Section~VII.A]{dm01ids} assumes a maximum of two successive insertions; this is equivalent to setting $m_1^{+}=2$, so that $M_1=4$. It is immediately apparent that while the approximation is very close for large $T$ and high $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}}$, it quickly starts to underestimate the required range at lower channel error rates. As expected, the discrepancy is particularly large when considering shorter sequences. For $T=1$ it is not surprising that there is a large discrepancy for channels with high error rate. Next, we determine the probability of encountering a channel event outside the chosen limits over a single frame, shown in Fig.~\ref{fig:statespace-cover} for the same limits used in Fig.~\ref{fig:statespace-size}. For the exact distribution, this probability is always below the chosen threshold $\ensuremath{P_\mathrm{e}}=10^{-10}$, as expected. For the approximation, however, the probability of exceeding the chosen limits is higher than the threshold throughout the range considered. At lower channel error rates, the discrepancy is significant (several orders of magnitude) even for large $T$. For small $T$, the probability of exceeding the chosen limits is high enough to make the approximation useless. 
For $T=1$, the artificial limit of two successive insertions of \cite{dm01ids} means that channels with high error rate will exceed this limit with high probability. \section{Stream Decoding} \label{sec:streamdecoding} We have so far considered the case where frame boundaries are known exactly. While there are practical cases involving single-frame transmission where this is true, exact frame boundaries are often unknown. The MAP decoder can handle such cases by changing the initial conditions for \eqref{eqn:alpha} and \eqref{eqn:beta} and choosing appropriate state space limits. This obviates the need for explicit frame-synchronization markers as used in conventional communication systems, and can therefore reduce this overhead. The approach presented here is in principle similar to that used in \cite{dm01ids} for `sliding window' decoding. However, there are some critical differences which we explore further in Section~\ref{sec:stream_comparison}. \subsection{Choosing End-of-Frame Priors} \label{sec:eof_priors} Consider first the common case where a sequence of frames is transmitted in a stream. The usual practice in communication systems is for the receiver to decode one frame at a time, starting the decoding process as soon as all the data related to the current frame is obtained from the channel. In this case, the current received frame is considered to be $\vec{Y}\sw{m_\tau^{-}}{\tau+m_\tau^{+}}$, which may include some bits from the end of the previous frame and start of the next frame. The end-state boundary condition for \eqref{eqn:beta} can be obtained by convolving the expected end-of-frame drift probability distribution with the start-state distribution: \begin{equation} \beta_N(m) = \sum_{m'} \alpha_0(m') \Phi_\tau(m-m') \text{.} \end{equation} Note that in general this distribution $\beta_N(m)$ has a wider spread than $\Phi_\tau(m)$. 
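The convolution defining the end-state boundary condition $\beta_N(m)$ can be sketched as follows (illustrative Python with hypothetical names; distributions are represented as dictionaries mapping drift values to probabilities).

```python
def eof_prior(alpha0, phi_tau):
    """End-of-frame prior: convolve the start-state distribution alpha_0
    with the frame drift distribution Phi_tau.
    Both arguments are dicts mapping drift -> probability."""
    beta = {}
    for mp, a in alpha0.items():
        for d, p in phi_tau.items():
            beta[mp + d] = beta.get(mp + d, 0.0) + a * p
    return beta
```

A two-point start-state uncertainty convolved with a three-point drift distribution already yields a four-point $\beta_N(m)$, illustrating the wider spread noted above.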
As discussed in Sections~\ref{sec:drift_limits} and \ref{sec:summation_limits}, the choice of state space limits depends on the expected distribution of drift. For limits involving the whole frame, the distribution used is $\Phi_\tau(m)$, which assumes that the initial drift is zero. The assumption does not hold under stream decoding conditions, where the initial drift is not known \emph{a priori}, although its distribution can be estimated. The uncertainty in locating the start-of-frame position increases the uncertainty in locating the end-of-frame position, resulting in a wider prior distribution for the end-state boundary condition $\beta_N(m)$. Therefore, any limits on state space determined using $\Phi_\tau(m)$ will be underestimated. The severity of this error depends on the difference between $\beta_N(m)$ and $\Phi_\tau(m)$, which increases as channel conditions get worse. For stream decoding, therefore, it is sensible to recompute the state space limit $M_\tau$ at the onset of decoding a given frame, using $\beta_N(m)$ in lieu of $\Phi_\tau(m)$. Doing so avoids underestimating the required state space, and implies that for stream decoding, the state space size will change depending on how well-determined the frame boundaries are. 
After decoding the current frame, we obtain the posterior probability distribution for the drift at end-of-frame, given by: \ifdraft \begin{align} \prob{ S_{\tau}=m \;\middle\vert\; \vec{Y}\sw{m_\tau^{-}}{\tau+m_\tau^{+}} } & = \lambda_N(m) / \prob{ \vec{Y}\sw{m_\tau^{-}}{\tau+m_\tau^{+}} } = \frac{ \lambda_N(m) }{ \sum_{m'} \lambda_N(m') } \text{.} \end{align} \else \begin{align} \prob{ S_{\tau}=m \;\middle\vert\; \vec{Y}\sw{m_\tau^{-}}{\tau+m_\tau^{+}} } & = \lambda_N(m) / \prob{ \vec{Y}\sw{m_\tau^{-}}{\tau+m_\tau^{+}} } \nonumber\\ & = \frac{ \lambda_N(m) }{ \sum_{m'} \lambda_N(m') } \text{.} \end{align} \fi The most likely drift at end-of-frame can be found by: \begin{align} \hat S_{\tau} & = \argmax_{m} \frac{ \lambda_N(m) }{ \sum_{m'} \lambda_N(m') } = \argmax_{m} \lambda_N(m) \text{.} \end{align} As in \cite{dm01ids}, we determine the nominal start position of the next frame by shifting the received stream by $\tau + \hat S_{\tau}$ positions. The initial condition for the forward metric for the next frame, $\hat\alpha_0(m)$, is set to: \begin{align} \hat\alpha_0(m) & = \frac{ \lambda_N(m + \hat S_{\tau}) }{ \sum_{m'} \lambda_N(m') } \text{,} \end{align} replacing the initial condition for \eqref{eqn:alpha} reflecting a known frame boundary. \subsection{Stream Look-Ahead} \label{sec:lookahead} Taking advantage of the different constituent encodings in TVB codes, the MAP decoder can make use of information from the following frame to improve the determination of the end-of-frame position. We augment the current block of $N$ symbols with the first $\nu$ symbols from the following block (or blocks, when $\nu > N$), for an augmented block size $N' = N + \nu$. The MAP decoder is applied to the corresponding augmented frame. After decoding, only the posteriors for the initial $N$ symbols are kept; the start of the next frame is determined from the drift posteriors at the end of the first $N$ symbols, and the process is repeated. 
Consider the latency of the MAP decoder to be the time from when the first bit of a frame enters the channel to when the decoded frame is available. The cost of look-ahead is an increase in decoding complexity and latency corresponding to the change in block size from $N$ to $N'$. The effect on complexity is seen by using terms corresponding to the augmented block size in the expressions of Table~\ref{tab:complexity}. The latency is equal to the time it takes to receive the complete frame and decode it. Look-ahead increases the time to receive the augmented frame linearly with $\nu$ and the decoding time according to the increase in complexity. The required look-ahead $\nu$ depends on the channel conditions and the code construction. In general, a larger value is required as the channel error rate increases. We show how an appropriate value for $\nu$ can be chosen for a given code under specific channel conditions in Section~\ref{sec:results_stream}. Typical values for $\nu$ are small ($\nu < 10$) for good to moderate channels ($\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}} < 10^{-2}$). The required look-ahead increases significantly for poor channels: the example in Section~\ref{sec:results_stream} requires $\nu=1000$ at $\ensuremath{P_\mathrm{i}}=\ensuremath{P_\mathrm{d}} = 2\times10^{-1}$. \subsection{Comparison with Davey-MacKay Decoder} \label{sec:stream_comparison} A key feature of the Davey-MacKay construction is the presence of a known distributed marker sequence that is independent of the encoded message. This allows the decoder, in principle, to compute the forward and backward passes over the complete stream. However, to reduce decoding delay, the decoder of \cite{dm01ids} performs frame-by-frame decoding using a `sliding window' mechanism. The `sliding window' mechanism seems intended to approximate the computation of the forward and backward passes over all received data at once. 
This approach is similar in principle to ours when stream look-ahead is used; however, there are some critical differences which we discuss below. In \cite{dm01ids}, the starting index for a given frame is taken to be the most likely end position of the previous frame, as determined by the Markov model posteriors. This is the same as the approach we use in Section~\ref{sec:eof_priors}. However, in \cite{dm01ids}, the initial conditions of the forward pass are simply copied from the final values of the forward pass for the previous frame. This is consistent with the view that the `sliding window' mechanism approximates the computation over all received data at once, but contrasts with our method. In Section~\ref{sec:eof_priors} the initial conditions of the forward pass are determined from the posterior probabilities of the drift at the end of the previous frame. These drift posteriors include information from the look-ahead region and from the priors at the end of the augmented frame, which were determined analytically from the channel parameters. Observe that in the `sliding window' mechanism of \cite{dm01ids}, the backward pass values cannot be computed exactly as for the complete stream. Instead, the decoder of \cite{dm01ids} computes the forward pass for some distance beyond the expected end of frame position, and initializes the backward pass from that point. The suggested distance by which to exceed the expected end of frame position is `several (e.g.\ five) multiples of $x_\textrm{max}$', where $x_\textrm{max}$ is the largest drift considered. The concept is the same as the stream look-ahead of Section~\ref{sec:lookahead}. However, we recommend choosing the look-ahead quantity $\nu$ based on empirical evidence (c.f.\ Section~\ref{sec:results_stream}). It is claimed in \cite{dm01ids} that the backward pass is initialized from the final forward pass values; the reasoning behind this is unclear, and does not seem to have a theoretical justification. 
We initialize the backward pass with the prior probabilities for the drift at the end of frame, as explained in Section~\ref{sec:eof_priors}. \subsection{Initial Synchronization} The only remaining problem is to determine start-of-frame synchronization at the onset of decoding a stream. This can be obtained by choosing state space limits $M_\tau$ large enough to encompass the initial desynchronization and by setting equiprobable initial conditions: $\alpha_0(m) = \beta_N(m) = \frac{1}{M_\tau} \quad \forall m$. Previous experimental results \cite{dm01ids} have assumed a known start for the first frame, with the decoder responsible for maintaining synchronization from that point onwards. We adopt the same strategy in the following. \subsection{TVB Codes} \label{sec:system_description} Consider the encoding defined in \cite{briffa13jcomml}, used there to simplify the representation of the inner code of \cite{dm01ids}. We observe that this encoding generalizes a number of additional previous schemes, including the inner codes of \cite{ratzer05telecom, wfd2011transcomm, bb11isit} (c.f.\ Section~\ref{sec:system_generalization}). We define a TVB code in terms of this encoding by the sequence $\mathcal{C}=\sequence{C}{0}{N}$, which consists of the constituent encodings $C_i: \mathbb{F}_q \hookrightarrow \mathbb{F}_2^n$ for $i=0,\ldots,N-1$, where $n,q,N \in \mathbb{N}$, $2^n \geq q$, and $\hookrightarrow$ denotes an injective mapping. Two constituent encodings $C_i,C_j$ are said to be \emph{equal} if $C_i(D) \in C_j \quad\forall D$. For a given TVB code, the set of \emph{unique} constituent encodings is that set where no two constituent encodings are equal; the cardinality of this set, denoted by $M \leq N$, is called the order of the code. Note that unique constituent encodings may still have some common codewords. We denote a TVB code by the tuple $(n,q,M)$.
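The encoding operation, formalized next, amounts to a per-symbol codebook lookup followed by juxtaposition. A minimal sketch (hypothetical names; here the unique constituent encodings are simply cycled, whereas the actual sequence $\mathcal{C}$ is a design choice):

```python
def tvb_encode(codebooks, message):
    """Sketch of TVB encoding: symbol D_i is mapped through constituent
    encoding C_i and the resulting codewords are juxtaposed into a frame.
    Each codebook maps a q-ary symbol to a list of n bits."""
    frame = []
    for i, d in enumerate(message):
        frame.extend(codebooks[i % len(codebooks)][d])
    return frame
```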
We restrict ourselves to binary TVB codes, where codewords are sequences of bits; the extension to the non-binary case is trivial. For any sequence $\vec{z}$, denote arbitrary subsequences as $\vec{z}\sw{a}{b} = \sequence{z}{a}{b}$, where $\vec{z}\sw{a}{a} = ()$ is an empty sequence. Given a message $\vec{D}\sw{0}{N} = \sequence{D}{0}{N}$, each $C_i$ maps the $q$-ary message symbol $D_i \in \mathbb{F}_q$ to codeword $C_i(D_i)$ of length $n$. That is, $\vec{D}\sw{0}{N}$ is encoded as $\vec{X}\sw{0}{nN} = C_0(D_0) \| \cdots \| C_{N-1}(D_{N-1})$, where $\vec{y}\|\vec{z}$ is the juxtaposition of $\vec{y}$ and $\vec{z}$. Each $q$-ary symbol is encoded independently of previous inputs, and different codebooks may be used for each input symbol. This time-variation offers no advantage on a fully synchronized channel. However, in the presence of synchronization errors, the differences between neighbouring codebooks provide useful information to the decoder to recover synchronization. In practice a TVB code is suitable as an inner code to correct synchronization errors in a serially concatenated construction. A conventional outer code corrects residual substitution errors. In such a scheme, the inner code's MAP decoder \emph{a posteriori} probabilities (APPs) are used to initialize the outer decoder. The concatenated code can be iteratively decoded, in which case the prior symbol probabilities of the inner decoder are set using extrinsic information from the previous pass of the outer decoder. \section{TVB Code Design} \label{sec:tvbcodes} \subsection{Construction Criteria} \label{sec:construction} In any error-correcting scheme, the decoder's objective is to minimize the probability of decoding error (at the bit or codeword level depending on the application). If the channel does not introduce synchronization errors, this optimization may be performed independently of previous or subsequent codewords. 
Hence the performance of the code depends exclusively on its distance properties. In particular, the performance of the code at low channel error rate is dominated by the code's minimum Hamming distance. At any channel error rate the performance is determined by the code's distance spectrum \cite{pere96,ferr03}. Thus when designing codes for substitution error channels, either the minimum Hamming distance or the more complete distance spectrum needs to be optimized for the given code parameters. In the case of the BSID channel, and other channels that allow synchronization errors, a similar behaviour is observed if the codeword boundaries are known, only this time the Levenshtein distance \cite{leve66} replaces the Hamming one. Recall that the Levenshtein distance gives the minimum number of edits (insertions, deletions, or substitutions) that will change one codeword into another. For the BSID channel an upper bound for the probability of decoding a codeword in error was given in \cite{bb11isit}, assuming codeword boundaries are known. For $\ensuremath{P_\mathrm{i}},\ensuremath{P_\mathrm{d}},\ensuremath{P_\mathrm{s}} \ll 1$, the bound of \cite[(9)]{bb11isit} is dominated by the number of correctable errors, $t$. Now, for a code with minimum Levenshtein distance $\ensuremath{d_{l_\mathrm{min}}}$, it can be shown that $t = \floor{\frac{\ensuremath{d_{l_\mathrm{min}}}-1}{2}}$ \cite{leve66}. Hence designing TVB codes with constituent encodings having large $\ensuremath{d_{l_\mathrm{min}}}$ will result in the greatest improvement to the code's performance at low channel error rates. However, in general the codeword boundaries are not known and need to be estimated by the decoder. Therefore decoding a given codeword on synchronization error channels depends not only on the current received word, but also on previous and subsequent ones. 
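As a concrete aside, the Levenshtein distance invoked above, and the resulting number of correctable edits $t = \lfloor(\ensuremath{d_{l_\mathrm{min}}}-1)/2\rfloor$, can be computed with the standard dynamic program (illustrative Python, not part of the decoder):

```python
def levenshtein(a, b):
    """Standard dynamic-programming Levenshtein (edit) distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correctable(dl_min):
    """Correctable edits for a given minimum Levenshtein distance."""
    return (dl_min - 1) // 2
```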
This means that the performance of a TVB code depends not only on the distance properties of constituent encodings considered separately, but also on the relationship between constituent encodings. This effect becomes more significant under poorer channel conditions, where the drift can easily exceed the length of a codeword. Unfortunately, the required relationship between constituent encodings for optimal performance over the BSID channel is still an open problem. What is known is that the diversity created by a sequence of different encodings helps the decoder estimate the drift within a codeword length, improving performance at higher channel error rates \cite{bb11isit}. \subsection{Representation of Previous Schemes as TVB Codes} \label{sec:system_generalization} TVB codes generalize a number of existing synchronization error-correcting codes. The flexibility of the generalization allows the creation of improved codes at the same size and rate, as we shall show. Consider first the sparse inner codes with a distributed marker sequence\footnote{This was originally referred to as a watermark sequence.} of the Davey-MacKay construction \cite{dm01ids}. It is clear that the sparse code is a fixed encoding $C': \mathbb{F}_q \hookrightarrow \mathbb{F}_2^n$; these codewords are then added to a distributed marker sequence $\vec{w}_i$ of length $n$, specific for each codeword index $i$. Thus we can write $C_i(D_i) = C'(D_i) + \vec{w}_i$ to represent the inner codes of \cite{dm01ids} as TVB codes. The equivalence of this mapping to the inner code of \cite{dm01ids} has also been shown in \cite{briffa13jcomml}. The distributed marker serves the same function as the use of different encodings in TVB codes. The decoder of \cite{dm01ids} tracks the marker sequence directly, treating the additive encoded message sequence as substitution errors. Therefore, to corrupt the marker sequence as little as possible, the inner code used is sparse. 
The sparseness results in a low $\ensuremath{d_{l_\mathrm{min}}}$, making it harder for the decoder to distinguish between the various codewords, and leads to relatively poor performance at low channel error rates. The codes of \cite{bb11isit} can similarly be represented as TVB codes, with $C'$ corresponding to the Synchronization and Error Correcting (SEC) code and $\vec{w}_i$ corresponding to the Allowed Modification Vectors (AMVs). SEC codes are designed with a large $\ensuremath{d_{l_\mathrm{min}}}$ for good performance at low channel error rates. For such channels this code can perform much better than the sparse code of \cite{dm01ids}. AMVs are chosen such that when added to the SEC code the resulting code's $\ensuremath{d_{l_\mathrm{min}}}$ does not change. Clearly, the AMVs serve the same function as the use of different encodings in TVB codes. In contrast to a random distributed marker sequence, the use of AMVs does not compromise the performance of the underlying SEC code at low channel error rates. In general, however, the Levenshtein distance spectrum is altered. The separate constituent encodings in TVB codes give greater design freedom than SEC codes with AMVs and also allows the design of constituent encodings that maintain the required optimized Levenshtein distance spectrum. The marker codes given by Ratzer \cite{ratzer05telecom} can also be cast as TVB codes by letting each possible sequence of data bits (between markers) be represented by a $q$-ary symbol. For example, consider a marker code with 3 marker bits inserted after every 9 data bits, where the 3-bit marker is randomly chosen between the sequences $001$ and $110$. This can be represented as a $(12,512,2)$ TVB code, where encoding $C_0$ consists of all possible 9-bit sequences appended with $001$, and $C_1$ consists of all possible 9-bit sequences appended with $110$. 
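The marker-code example above translates mechanically into TVB form; a brief sketch (hypothetical names) builds the $(12,512,2)$ code by appending each marker to every 9-bit data sequence:

```python
from itertools import product

def marker_as_tvb(n_data=9, markers=("001", "110")):
    """Represent a Ratzer-style marker code as a TVB code: one constituent
    encoding per marker choice, each mapping a q-ary symbol (a data-bit
    pattern) to the pattern followed by that marker."""
    return [[[int(b) for b in bits + m]
             for bits in map(''.join, product('01', repeat=n_data))]
            for m in markers]
```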
Like the sparse codes of \cite{dm01ids}, these marker codes suffer from a low $\ensuremath{d_{l_\mathrm{min}}}$, leading to relatively poor performance at low channel error rates. On the other hand, the fixed marker bits improve the determination of codeword boundaries, and the random use of different marker bits creates the necessary diversity to improve performance in poorer channel conditions. To illustrate the difference in performance between the various designs, consider encodings of size $(n,q)=(7,8)$ with $N=666$ (same size as codes C and H in \cite{dm01ids}). A $(7,8,4)$ TVB code where each constituent code has the best possible Levenshtein distance spectrum with $\ensuremath{d_{l_\mathrm{min}}}=3$, found through an exhaustive search, is given in Table~\ref{tab:exampletvb}. \inserttab{tab:exampletvb}{cccc} {A $(7,8,4)$ TVB code $\mathcal{C} = (C_0,\ldots,C_3)$ with $\ensuremath{d_{l_\mathrm{min}}}=3$.}{ \emph{$C_0$} & \emph{$C_1$} & \emph{$C_2$} & \emph{$C_3$} \\ \hline 0000000 & 0000000 & 0000011 & 0000000 \\ 0000111 & 0000111 & 0001100 & 0001111 \\ 0011001 & 0011110 & 0011111 & 0101001 \\ 0110110 & 0110101 & 0101010 & 0110110 \\ 1001010 & 1001001 & 1011001 & 1000011 \\ 1100001 & 1100110 & 1100000 & 1001100 \\ 1111000 & 1111000 & 1100111 & 1110000 \\ 1111111 & 1111111 & 1111110 & 1111111 \\ } In Fig.~\ref{fig:inner} we compare this TVB code with earlier constructions from the literature at the same size. \insertfig{fig:inner}{Graphs/inner-ser} {Comparison of inner code designs of size $(n,q)=(7,8)$ with $N=666$: a sparse code with random distributed marker from \cite{dm01ids}, a marker code with $0011/1100$ marker bits similar to \cite{ratzer05telecom}, a SEC code with randomly-sequenced AMVs from \cite{bb11isit}, and the TVB code of Table~\ref{tab:exampletvb}.} Consider first the SEC code of the same size and $\ensuremath{d_{l_\mathrm{min}}}$ from \cite{bb11isit}, used with 8 AMVs in a random sequence.
As expected, the TVB code performs better due to its improved Levenshtein distance spectrum, even though both the TVB and SEC codes have the same $\ensuremath{d_{l_\mathrm{min}}}$. The performance of the sparse code with a randomly distributed marker from \cite{dm01ids} is considerably worse, particularly at low channel error rates. Similarly, a code with three data bits and four marker bits (randomly chosen between $0011/1100$), similar to \cite{ratzer05telecom}, also performs poorly at low channel error rates.
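Since these comparisons hinge on the Levenshtein distance spectrum, a small sketch may help make the metric concrete. The following Python (illustrative only; a standard dynamic-programming Levenshtein implementation, not code from the paper) checks that the codewords of $C_0$ in Table~\ref{tab:exampletvb} are pairwise at Levenshtein distance at least 3:

```python
# Sketch (illustrative): Levenshtein distance and the minimum pairwise
# distance within one constituent encoding of the TVB code in Table 1.

def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

C0 = ["0000000", "0000111", "0011001", "0110110",
      "1001010", "1100001", "1111000", "1111111"]
dmin = min(levenshtein(a, b) for a in C0 for b in C0 if a != b)
assert dmin >= 3  # consistent with the d_lmin = 3 of the full TVB code
```

Note that the full $\ensuremath{d_{l_\mathrm{min}}}$ of a TVB code is defined over the time-varying concatenated sequences; the check above only probes one constituent encoding.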
\section{Introduction} Critical behavior in the gravitational collapse of a massless scalar field was discovered by Choptuik~\cite{Choptuik:1992jv}, who sought to answer the question ``What happens at the threshold of black hole formation?'' Choptuik considered a massless scalar field undergoing gravitational collapse in a spherically symmetric spacetime. He found that for some parameter $p$ in the initial data, for example the amplitude of a Gaussian-distributed scalar field, the final mass of the black hole is related to $p$ by \begin{align} \label{eq:simple mass relation} M_{\mathrm{BH}}\propto\left|\frac{p}{p_\star}-1\right|^{\gamma_M}. \end{align} Here $p_\star$ is the critical value of the parameter $p$ that separates initial data that form a black hole (supercritical) from initial data that do not form a black hole (subcritical). Choptuik observed that the critical exponent $\gamma_M$ is independent of the initial data chosen---the critical behavior is universal. The currently accepted value of the critical exponent is $\gamma_{M}=0.374\pm0.001$~\cite{Gundlach:1996eg}. Not much later, Garfinkle and Duncan~\cite{Garfinkle:1998va} discovered that in subcritical evolutions the maximum absolute value of the Ricci scalar at the center of the collapse obeys the scaling relation \begin{align} \label{eq:simple ricci relation} R_{\max} \propto \left|\frac{p}{p_\star}-1\right|^{-2 \gamma_{R_{\max}}}. \end{align} Interestingly, $\gamma_{R_{\max}}$ was found to have the same value as $\gamma_M$. Another key aspect of the critical behavior observed by Choptuik is that of a discretely self-similar solution, or ``echoing''. In the strong-field regime near the critical solution, Choptuik noticed that any gauge-invariant quantity $U$ obeys the relation \begin{align} \label{eq:rescaling} U(\mathcal{T}, x^i) = U(e^\Delta \mathcal{T}, e^\Delta x^i), \end{align} where $\Delta$ is a dimensionless constant.
Here $\mathcal{T}=\tau-\tau_\star$, where $\tau$ is the proper time of a central observer and $\tau_\star$ is the value of $\tau$ when a naked singularity forms in the limit $p\to p_\star$. $\tau_\star$ is referred to as the accumulation time. As one moves closer in time to the critical solution by $e^\Delta$, the same field profile is observed for $U$ but at spatial scales $e^\Delta$ smaller. The echoing period $\Delta$, like the critical exponent, is universal in the sense that it does not depend on the initial data, only on the type of matter undergoing gravitational collapse. The currently accepted value for a massless scalar field is $\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}. Since the seminal work by Choptuik, many studies to better understand critical behavior in gravitational collapse have been performed. Studies of critical collapse of a massless scalar field in spherical symmetry have found that the critical exponent and echoing period are both independent of the initial data profile but depend on the dimensionality of the spacetime~\cite{Garfinkle:1999zy,Bland:2005vu,Sorkin:2005vz,Taves:2011yt}. Similar studies observed that the critical exponent, echoing period, and possibly even the type of phase transition are changed in modified theories of gravity~\cite{Deppe:2012wk,Golod:2012yt}. Interestingly, the presence of critical behavior appears to be independent of the matter source, but the value of the critical exponent, echoing period, and type of phase transition depend on the type of matter~\cite{Choptuik:1996yg,Gundlach:1996je,Brady:1997fj,Garfinkle:2003jf,Baumgarte:2015aza,Gundlach:2016jzm,Baumgarte:2016xjw,Gundlach:2017tqq}. Vacuum critical collapse was first studied in~\cite{Abrahams:1993wa,Abrahams:1994nw}, which found that critical behavior is present and that the critical exponent and echoing period have values different from those found in simulations with matter. 
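A toy example (constructed for illustration; not a solution of the Einstein equations) shows what the rescaling relation demands of a discretely self-similar profile: any function of $x^i/\mathcal{T}$ modulated periodically in $\ln|\mathcal{T}|$ with period $\Delta$ satisfies it exactly.

```python
# Toy example: a discretely self-similar profile obeying
# U(T, x) = U(e^Delta * T, e^Delta * x), with T < 0 before the accumulation time.
import math

DELTA = 3.4453  # accepted echoing period for a massless scalar field

def U(T, x):
    s = x / (-T)  # self-similar coordinate, invariant under the rescaling
    return math.cos(2 * math.pi * math.log(-T) / DELTA) * math.exp(-s * s)

T, x = -0.37, 0.12
scale = math.exp(DELTA)
assert abs(U(T, x) - U(scale * T, scale * x)) < 1e-9
```

Rescaling $\mathcal{T}$ and $x^i$ by $e^\Delta$ leaves $s$ unchanged and shifts the argument of the cosine by exactly $2\pi$, reproducing the same profile.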
Unfortunately, studying vacuum gravitational collapse has proven to be quite difficult~\cite{Sorkin:2009wh,Sorkin:2010tm,Hilditch:2013cba,Hilditch:2015aba}. In critical collapse the phase transition is either Type I or Type II. In Type II phase transitions the black hole mass continuously goes to zero as $p_\star$ is approached. This has been the most common case observed so far when studying critical collapse. In Type I transitions the mass of the black hole that forms approaches a constant, non-zero value as $p_\star$ is approached. Type I phase transitions have been clearly identified in critical collapse of a massive scalar field~\cite{Brady:1997fj}. The discussion in this paper is only relevant for Type II critical behavior. In 1997 both Gundlach~\cite{Gundlach:1996eg} and Hod and Piran~\cite{Hod:1996az} independently discovered fine structure in addition to the power-law behavior of the black hole masses: there is a small-amplitude modulation of~\eqref{eq:simple mass relation}. Specifically, the scaling relation is altered to \begin{align} \ln(M_{\mathrm{BH}})=&\gamma_{M}\ln\left|p/p_\star-1\right|+C\notag\\ &+A\sin(w\ln\left|p/p_\star-1\right|+\delta), \end{align} where $C$, $A$, $w$, and $\delta$ are constants. These authors predicted and verified that the period of this modulation is $T=2\pi/w=\Delta/(2\gamma_{M})$ for massless scalar field collapse in spherical symmetry. Whether or not this relation holds for different matter sources and beyond spherical symmetry is an open question. Unfortunately, answering the question of how symmetry assumptions affect the critical exponent and echoing period has turned out to be quite challenging. The reason is that spatiotemporal scales varying over four to six orders of magnitude must be resolved in order to properly study the fine structure and echoing, and a large number of high-resolution simulations are necessary.
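As a quick numeric illustration (not from the paper), the predicted fine-structure period follows directly from the accepted spherically symmetric values of $\Delta$ and $\gamma_M$:

```python
# Numeric check: the Hod-Piran fine-structure period T = Delta / (2 * gamma),
# evaluated with the accepted spherical values quoted in the text.
import math

delta_echo = 3.4453  # echoing period Delta
gamma = 0.374        # critical exponent gamma_M

T = delta_echo / (2 * gamma)  # period of the modulation in ln|p/p* - 1|
w = 2 * math.pi / T           # corresponding angular frequency in the fit

assert abs(T - 4.606) < 0.01
assert abs(w - 1.364) < 0.01
```

So roughly every 4.6 units of $\ln\left|p/p_\star-1\right|$ the modulation completes one cycle, which sets how densely $p$ must be sampled to resolve the fine structure.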
In addition, the well-posedness and stability of the formulation of the Einstein equations solved and the choice of gauge have proven to be as problematic here as in other simulations in numerical relativity. Akbarian and Choptuik~\cite{Akbarian:2015oaa} have recently studied how formulations of the Einstein equations commonly used for binary black hole mergers behave when studying critical collapse. However, that work was restricted to spherical symmetry. Critical collapse of a massless scalar field in axial symmetry was studied using perturbation theory by Martin-Garcia and Gundlach~\cite{MartinGarcia:1998sk}, who found that all non-spherical modes decay. In 2003 Choptuik et~al.~\cite{Choptuik:2003ac} performed numerical simulations of massless scalar field collapse in axial symmetry. They found that the critical solution in this case is the same as the solution found in spherical symmetry. However, in contrast to~\cite{MartinGarcia:1998sk}, they also found tentative evidence for a non-decaying $l=2$ mode. More recently, Healy and Laguna~\cite{Healy:2013xia} studied critical collapse of a massless scalar field that is symmetric about the $xz$-plane. Healy and Laguna observed results consistent with spherically symmetric collapse, but were unable to verify the echoing of gauge-independent fields. The work of Healy and Laguna has been followed by a study of massless scalar field collapse with a quartic potential by Clough and Lim~\cite{Clough:2016jmh}. Clough and Lim also studied initial data similar to that of~\cite{Healy:2013xia} and obtained results similar to those of Healy and Laguna. In this paper we present a study of critical collapse of a massless scalar field with no symmetry assumptions, and the first study beyond spherical symmetry that is able to resolve the fine structure in the black hole mass scaling relation. We are able to resolve small-scale dynamics in both supercritical and subcritical evolutions, allowing us to directly compare the results.
In $\S$\ref{sec:Equations} we review the equations solved, in $\S$\ref{sec:InitialData} we discuss the initial data used, in $\S$\ref{sec:NumericalMethods} we provide details about the numerical method, in $\S$\ref{sec:Results} we present the results, and we conclude in $\S$\ref{sec:Conclusions}. After this work was completed, a paper by Baumgarte appeared\cite{Baumgarte:2018fev} in which axially symmetric initial data similar to that of~\cite{Choptuik:2003ac} is studied. We discuss the relation between this paper and our work at the end of $\S$\ref{sec:Results}. \section{Equations}\label{sec:Equations} We study the dynamics near the critical solution in gravitational collapse of the Einstein-Klein-Gordon system. We solve the Einstein equations, \begin{align} \label{eq:EE} R_{ab}=8\pi\left(T_{ab}-\frac{1}{2}\psi_{ab}T^c{}_c\right) \end{align} where $R_{ab}$ is the Ricci tensor, $\psi_{ab}$ the spacetime metric, and $T_{ab}$ the stress tensor. Here and throughout the rest of the paper we will use latin indices at the beginning of the alphabet, e.g.~$a,b,c,\ldots$ to refer to spacetime indices running from 0 to 3, and later indices, $i,j,k,\ldots$ to refer to spatial indices running from 1 to 3. We use the ADM form of the metric, \begin{align} ds^2=-N^2dt^2+g_{ij}\left(N^i dt + dx^i\right) \left(N^j dt + dx^j\right) \end{align} where $N(t,x^i)$ is the lapse, $N^j(t,x^i)$ the shift, and $g_{ij}(t, x^k)$ the spatial metric. We denote the timelike unit normal orthogonal to the spacelike hypersurfaces by \begin{align} t^a = (N^{-1},-N^i/N). \end{align} We solve Eq.~(\ref{eq:EE}) using a first-order generalized harmonic (GH) formulation~\cite{Lindblom:2005qh}. The matter source is a massless scalar field $\varphi$ with \begin{align} \label{eq:StressTensor} T_{ab}=\partial_a\varphi\partial_b\varphi- \frac{1}{2}\psi_{ab}\psi^{cd}\partial_c\varphi\partial_d\varphi. 
\end{align} To bring the resulting equations of motion into first-order form, we define the auxiliary variables $\Phi_i=\partial_i\varphi$ and $\Phi_{iab}=\partial_i\psi_{ab}$, and the conjugate variables $\Pi=-N^{-1}\left(\partial_t\varphi-N^i\partial_i\varphi\right)$ and $\Pi_{ab}=-N^{-1}\left(\partial_t \psi_{ab}-N^i\Phi_{iab}\right)$. The first-order GH system is~\cite{Lindblom:2005qh} \begin{align} \label{eq:metric_evolution} \partial_t\psi_{ab}-&\left(1+\gamma_1\right)N^k\partial_k\psi_{ab}=-N\Pi_{ab}-\gamma_1N^i\Phi_{iab},\\ \label{eq:metric_conjugate_evolution} \partial_t\Pi_{ab}-&N^k\partial_k\Pi_{ab}+Ng^{ki}\partial_k\Phi_{iab}-\gamma_1\gamma_2N^k\partial_k\psi_{ab}\notag\\ =&2N\psi^{cd}\left(g^{ij}\Phi_{ica}\Phi_{jdb}-\Pi_{ca}\Pi_{db}-\psi^{ef}\Gamma_{ace}\Gamma_{bdf}\right)\notag\\ &-2N\nabla_{(a}H_{b)}-\frac{1}{2}Nt^c t^d\Pi_{cd}\Pi_{ab}-Nt^c \Pi_{ci}g^{ij}\Phi_{jab}\notag\\ &+N\gamma_0\left(2\delta^c{}_{(a} t_{b)}-\psi_{ab}t^c\right)\left(H_c+\Gamma_c\right)\notag\\ &-\gamma_1\gamma_2N^i\Phi_{iab}\notag\\ &-16\pi N\left(T_{ab}-\frac{1}{2}\psi_{ab}T^c{}_c\right),\\ \label{eq:metric_derivative_evolution} \partial_t\Phi_{iab}-&N^k\partial_k\Phi_{iab}+N\partial_i\Pi_{ab}-N\gamma_2\partial_i\psi_{ab}\notag\\ =&\frac{1}{2}Nt^c t^d\Phi_{icd}\Pi_{ab}+Ng^{jk}t^c\Phi_{ijc}\Phi_{kab}\notag\\ &-N\gamma_2\Phi_{iab}, \end{align} where $H_a$ is the so-called gauge source function and must satisfy the constraint $H_a=-\Gamma_a=-\psi_{ab}\Gamma^b_{cd}\psi^{cd}$, so that the constraint-damping term proportional to $H_c+\Gamma_c$ in Eq.~\eqref{eq:metric_conjugate_evolution} vanishes when the constraint is satisfied. The parameters $\gamma_0$, $\gamma_1$, and $\gamma_2$ are described in $\S$\ref{sec:ConstraintDamping}.
The first-order massless-Klein-Gordon system is \begin{align} \label{eq:sw_psi_evolution} \partial_t\varphi =& N^i\partial_i\varphi-N\Pi+\gamma^{KG}_1N^i\left(\partial_i\varphi-\Phi_i\right),\\ \label{eq:sw_pi_evolution} \partial_t\Pi=&N\Pi K+N^i\partial_i\Pi+N\Phi_i g^{jk}\Gamma^i_{jk}\notag\\ &+\gamma^{KG}_1\gamma^{KG}_2N^i\left(\partial_i\varphi-\Phi_i\right)\notag\\ &-g^{ij}\left(N\partial_j\Phi_i+\Phi_j\partial_i N\right),\\ \label{eq:sw_phi_evolution} \partial_t\Phi_i=&-N\partial_i\Pi-\Pi\partial_i N-\gamma^{KG}_2N\left(\Phi_i-\partial_i\varphi\right)\notag\\ &+N^j \partial_j\Phi_i + \Phi_j\partial_i N^j. \end{align} The parameters $\gamma^{KG}_1$ and $\gamma^{KG}_2$ are described in $\S$\ref{sec:ConstraintDamping}, and $K$ is the trace of the extrinsic curvature. \section{Initial Data}\label{sec:InitialData} We generate initial data for the evolutions by solving the extended conformal thin-sandwich equations~\cite{Pfeiffer:2002iy} using the spectral elliptic solver~\cite{2003CoPhC.152..253P} in \texttt{SpEC}~\cite{SpECwebsite}. The contributions to the equations from the scalar field are given by \begin{align} \label{eq:rhoID} \rho =&t^a{}t^b{}T_{ab}=\frac{1}{2}\left(\Pi{}^2+g^{ij}\Phi_i\Phi_j\right),\\ \label{eq:momentumID} S^i =&-g^{ij}t^a{}T_{aj}=g^{ij}\Pi\Phi_{j}, \end{align} and \begin{align} \label{eq:stressID} S =& g_{ij}g^{ia}g^{jb}T_{ab}=\frac{1}{2}\left(3\Pi{}^2-g^{ij}\Phi_i\Phi_j\right), \end{align} where $g^{ia}$ projects the spacetime index $a$ onto the spatial hypersurface orthogonal to $t^a$. Let $r=\sqrt{\delta_{ij}x^ix^j}$ and \begin{align} \label{eq:1d gaussian} f(r) = \varphi_0 \exp\left[-\left(\frac{r-r_0}{\sigma}\right)^2\right].
\end{align} For concreteness we focus on three types of initial data: spherically symmetric data given by \begin{align} \label{eq:Spherical ID} \varphi(t,x^i)=\varphi_{\mathrm{sph}}=\frac{f(-r)+f(r)}{r}, \end{align} data where the second term has no $y$-coordinate dependence (recall $xz\sim r^2\cos\phi\sin2\theta$) similar to that studied in~\cite{Healy:2013xia,Clough:2016jmh} \begin{align} \label{eq:Reflection ID} \varphi(t,x^i)=\varphi_{\Re (Y^2_1)}:=\varphi_{\mathrm{sph}}\left(1-\delta\cos\phi\sin2\theta\right), \end{align} and finally generic initial data of the form \begin{align} \label{eq:Generic ID} \varphi(t,x^i)=\varphi_{3-d}:=\varphi_{\mathrm{sph}} &\left\{1-\frac{\delta}{1.56}\left[(\cos\phi+\sin\phi)\sin2\theta\right.\right.\notag\\ &\left.\left.-\left(3\cos^2\theta-1\right)\right]\right\}. \end{align} The conjugate momentum to $\varphi$ in the spherically symmetric case is given by \begin{align} \Pi_{\mathrm{sph}}=\frac{\partial_rf(-r)-\partial_rf(r)}{r}, \end{align} and is multiplied by the same non-spherical terms as $\varphi$. This is ingoing spherical wave initial data. The numerical factor $1.56$ is chosen so that when $\delta=1$, the maximum of the second term is approximately unity. We choose $\sigma=1$ and $r_0=5$ for the results presented here. For the initial data~\eqref{eq:Reflection ID} we (arbitrarily) choose $\delta=0.9$ and for data given by~\eqref{eq:Generic ID} we choose $\delta=1$. \section{Numerical Methods}\label{sec:NumericalMethods} \subsection{Domain Decomposition} \texttt{SpEC} decomposes the computational domain into possibly overlapping subdomains. Within each subdomain a suitable set of basis functions that depends on the topology of the subdomain is chosen to approximate the solution. The domain decomposition for finding the initial data is a cube at the center with an overlapping spherical shell that is surrounded by concentric spherical shells.
For the evolution, a filled sphere surrounded by non-overlapping spherical shells is used until a black hole forms. At this point a ringdown or excision grid nearly identical to that used during the ringdown phase of binary black hole merger evolutions is used~\cite{Scheel:2008rj, Szilagyi:2009qz, Hemberger:2012jz}. The ringdown grid consists of a set of non-overlapping spherical shells with the inner shell's inner radius approximately $94\%$ of the apparent horizon radius. \subsection{Dual Frames and Mesh Refinement} To resolve the large range of spatial and temporal scales required, finite-difference codes typically use adaptive mesh refinement (AMR). However, for the spatiotemporal scales required here, AMR is computationally prohibitively expensive in 3+1 dimensions without any symmetries. \texttt{SpEC} achieves its high accuracy by using spectral methods to solve the PDEs rather than finite differencing. In addition, two further tools are employed to achieve high accuracy: dual frames~\cite{Scheel:2006gg, Szilagyi:2009qz, Hemberger:2012jz} and spectral AMR~\cite{Szilagyi:2014fna}. In the dual frames approach, the PDEs are solved in what is called the grid frame. This frame is related to the ``inertial frame'', the frame in which the PDEs are originally written, by time-dependent spatial coordinate maps. The dual frames method ``moves'' the grid points inward as the scalar field collapses, which gives an additional two orders of magnitude of resolution compared to the initial inertial coordinates without the use of any mesh refinement. We also employ a coordinate map to slowly drift the outer boundary inward so that any constraint-violating modes near the outer boundary are propagated out of the computational domain. While the slow drift of the outer boundary is not essential for stability, it is helpful in long evolutions. 
Denote the coordinate map that moves the grid points inward during collapse by $\mathcal{M}_{\mathrm{scaling}}$ and the map that drifts the outer boundary inward by $\mathcal{M}_{\mathrm{drift}}$. Then the coordinate map used during collapse before a black hole forms is given by $\mathcal{M}_{\mathrm{collapse}}=\mathcal{M}_{\mathrm{drift}}\circ\mathcal{M}_{\mathrm{scaling}}$. The mapping $\mathcal{M}_{\mathrm{collapse}}$ relates the initial coordinates $\bar{x}^i$ to the grid coordinates $x^i$ by $\bar{x}^i=\mathcal{M}_{\mathrm{collapse}}x^i$. The specific spatial coordinate map we use for both $\mathcal{M}_{\mathrm{drift}}$ and $\mathcal{M}_{\mathrm{scaling}}$ is of the form \begin{align} \label{eq:cubicScale} \bar{r}=a(t)r+\left[1-a(t)\right]\frac{r^3}{r_{\mathrm{outer}}^2}, \end{align} where $r=\sqrt{\delta_{ij}x^ix^j}$, $\bar{r}=\sqrt{\delta_{ij}\bar{x}^i\bar{x}^j}$, $a(t)$ is a time-dependent function we call an expansion factor, and $r_{\mathrm{outer}}$ is a parameter of the map. For $\mathcal{M}_{\mathrm{scaling}}$ we choose \begin{align} \label{eq:aScaling} a_{\mathrm{scaling}}(t) = A \exp\left[-{\left(\frac{t}{\sigma_{\mathrm{scaling}}}\right)}^{2n}\right] +B \end{align} with $A=0.99$, $B=0.01$, $n=2$ and $\sigma_{\mathrm{scaling}}=3.8$. The value of $r_{\mathrm{outer}}$ for $\mathcal{M}_{\mathrm{scaling}}$ is $r_{\mathrm{outer}}=100$. For $\mathcal{M}_{\mathrm{drift}}$ we use $r_{\mathrm{outer}}=180$ and \begin{align} \label{eq:aDrift} a_{\mathrm{drift}}(t)=1+v\frac{t^3}{b+t^2}, \end{align} with $b=10^{-4}$ and $v=-3.23\times10^{-3}$. We find these choices for the coordinate maps lead to accurate and stable long-term evolutions with sufficient resolution to resolve both scaling and echoing. After an apparent horizon is found we switch over to an excision grid and use the same coordinate maps used in the ringdown portion of the binary black hole evolutions~\cite{Scheel:2008rj, Szilagyi:2009qz, Hemberger:2012jz}.
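The behavior of the scaling map is simple to sanity-check. In the sketch below (illustrative Python, not \texttt{SpEC} source), the map starts as the identity, compresses the grid near the origin by roughly a factor of $1/B=100$ at late times, and leaves the outer boundary $r=r_{\mathrm{outer}}$ fixed for any value of the expansion factor:

```python
# Sketch: the radial scaling map r_bar = a(t) r + (1 - a(t)) r^3 / r_outer^2
# and its expansion factor, using the parameter values quoted in the text.
import math

def a_scaling(t, A=0.99, B=0.01, n=2, sigma=3.8):
    return A * math.exp(-((t / sigma) ** (2 * n))) + B

def rbar(r, a, r_outer=100.0):
    return a * r + (1.0 - a) * r ** 3 / r_outer ** 2

assert abs(a_scaling(0.0) - 1.0) < 1e-12        # initially the identity map
assert abs(a_scaling(50.0) - 0.01) < 1e-12      # late times: ~100x compression
assert abs(rbar(100.0, a_scaling(50.0)) - 100.0) < 1e-9  # outer boundary fixed
```

The fixed point at $r=r_{\mathrm{outer}}$ follows from $a\,r_{\mathrm{outer}}+(1-a)\,r_{\mathrm{outer}}=r_{\mathrm{outer}}$, so the compression acts only on the interior of the domain.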
Specifically, we excise the interior of the apparent horizon with the excision surface's radius being approximately $94\%$ of the apparent horizon's coordinate radius. Near the apparent horizon, all the characteristics are directed toward the center of the apparent horizon and so no boundary conditions need to be imposed there. Thus, as long as the excision surface remains close to the apparent horizon, the simulation remains stable without the need to impose additional boundary conditions. One difficulty is that during the very early phase of ringdown the apparent horizon's coordinate radius increases very rapidly. To deal with the rapid expansion, a control system is used to track the apparent horizon and adjust the location of the excision boundary to follow the apparent horizon~\cite{Scheel:2006gg, Scheel:2008rj, Hemberger:2012jz}. While the spatial coordinate maps work extremely well for resolving the small length scales that appear near the critical solution, they do not provide any guarantees about the truncation error of the simulations. The temporal error is controlled by using an adaptive, fifth-order Dormand-Prince time stepper. The spatial error is controlled using the spectral AMR algorithm described in~\cite{Szilagyi:2014fna}. Using AMR we control the relative error in the metric, its spatial derivative, and its conjugate momentum. For the results presented in this manuscript we set a maximum relative spatiotemporal error of $10^{-8}$. \subsection{Gauge Choice} In binary black hole evolutions with the GH system, large constraint violations occur unless an appropriate gauge condition is chosen. The key ingredient in a successful choice~\cite{Lindblom:2009tu} is to control the growth of $\sqrt{g}/N$, where $g$ is the determinant of the spatial metric. As one might expect, evolutions of critical behavior at black hole formation require even more stringent control of the gauge than in binary simulations.
We find that without such control, explosive growth in both $\sqrt{g}/N$ and $1/N$ prevents the code from finding an apparent horizon before the constraints blow up and the evolution fails. Accordingly, we adopt a modified version of the damped harmonic gauge used in Ref.~\cite{Lindblom:2009tu}: \begin{align} \label{eq:targetGauge} H_a=&\left[\mu_{L,1}\log\left(\frac{\sqrt{g}}{N}\right) +\mu_{L,2}\log\left(\frac{1}{N}\right)\right]t_a\notag\\ &-\mu_S N^{-1}g_{ai}N^i. \end{align} The coefficients $\mu_{L,1}$, $\mu_{L,2}$ and $\mu_{S}$ are described below. Fortunately, the region of the spatial hypersurfaces where $\sqrt{g}/N$ diverges is different from that where $1/N$ diverges, and so having the coefficients $\mu_{L,1}$ and $\mu_{L,2}$ depend on $\log(\sqrt{g}/N)$ and $\log(1/N)$, respectively, allows us to control both divergences with a single equation. The functional forms of the coefficients are \begin{align} \label{eq:muL1} \mu_{L,1}=&R(t)W(x^i)\left[\log\left(\frac{\sqrt{g}}{N}\right)\right]^4,\\ \label{eq:muL2} \mu_{L,2}=&R(t)W(x^i)\left[\log\left(\frac{1}{N}\right)\right]^4, \end{align} and \begin{align} \label{eq:muS} \mu_{S}=&\mu_{L,1}. \end{align} The roll-on function $R(t)$ is given by \begin{align} \label{eq:rollon} R(t)=1-\exp\left[-\left(\frac{t-t_0}{\sigma_t}\right)^4\right], \end{align} where we choose $t_0=0$ and $\sigma_t=2$, while the spatial weight function $W(x^i)$ is given by \begin{align} \label{eq:spatialWeight} W(x^i)=\exp\left[-34.54\left(\frac{r}{r_{\max}}\right)^2\right], \end{align} where we set $r_{\max}=30$. The function $R(t)$ is used to transition from the initial maximal slicing to the damped harmonic gauge needed later in the evolution, while $W(x^i)$ makes the gauge pure harmonic near the outer boundary of the computational domain. The $\log$ factors in Eqs.~\eqref{eq:muL1} and~\eqref{eq:muL2} make the gauge pure harmonic in the region of the spatial slice where $\sqrt{g}/N$ and $1/N$ are near unity, respectively.
We found that using the fourth power, as opposed to the second power that is typically used for controlling the growth of $\sqrt{g}/N$ in binary black hole evolutions, is required for stable long-term evolutions. \subsection{Constraint Damping}\label{sec:ConstraintDamping} Both the Klein-Gordon and the GH system have constraints that must remain satisfied during evolutions. For the Klein-Gordon system the constraint is \begin{align} \label{eq:KgConstraint} \mathcal{C}^{KG}_i=\partial_{i}\varphi-\Phi_{i}=0. \end{align} The constraints for the GH system are given in Ref.~\cite{Lindblom:2005qh}. Failure to satisfy the constraints indicates that the numerical simulation is no longer solving the physical system of interest and should not be trusted. To control growth of constraint violations from numerical inaccuracies, constraint damping parameters are added to the evolution equations. For the GH system the constraint damping parameters are $\gamma_0$, $\gamma_1$ and $\gamma_2$, and for the Klein-Gordon system $\gamma_1^{\mathrm{KG}}$ and $\gamma_2^{\mathrm{KG}}$. See Eqs.~(\ref{eq:metric_evolution}--\ref{eq:sw_phi_evolution}) for how the constraint damping parameters appear in the evolution equations. We find that choosing $\gamma_1^{\mathrm{KG}}=1$ and $\gamma_2^{\mathrm{KG}}=0$ works well for the scalar field. For the GH system, finding good constraint damping parameters is more difficult, especially during ringdown. The dimensions of the constraint damping parameters are $\mathrm{time}^{-1}$, which suggests that for smaller black holes, where the characteristic time scale is shorter, the constraint damping parameters must be increased. During ringdown we choose \begin{align} \gamma_0 &= A_0\exp\left(-\frac{r^2}{10^2}\right)+10^{-3},\\ \gamma_1 &= A_1\left[\exp\left(-\frac{r^2}{1000^2}\right)-1\right],\\ \gamma_2 &= A_2\exp\left(-\frac{r^2}{10^2}\right)+10^{-3}, \end{align} with $A_0\in [20, 100]$, $A_1=0.999$, and $A_2\in[20, 80]$.
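A short sketch (illustrative only; $A_0=A_2=20$ is one assumed choice from the quoted ranges) shows the intended behavior of these profiles: strong damping of $\gamma_0$ and $\gamma_2$ near the black hole, $\gamma_1=0$ at the origin, and $\gamma_1\to-A_1\approx-1$ far from the hole:

```python
# Sketch: the ringdown constraint-damping profiles quoted above,
# with A_0 = A_2 = 20 (an assumed value within the quoted ranges).
import math

def gamma0(r, A0=20.0):
    return A0 * math.exp(-r ** 2 / 10.0 ** 2) + 1e-3

def gamma1(r, A1=0.999):
    return A1 * (math.exp(-r ** 2 / 1000.0 ** 2) - 1.0)

def gamma2(r, A2=20.0):
    return A2 * math.exp(-r ** 2 / 10.0 ** 2) + 1e-3

assert abs(gamma0(0.0) - 20.001) < 1e-9  # strong damping at the origin
assert gamma1(0.0) == 0.0                # gamma_1 vanishes at the origin
assert abs(gamma1(1e4) + 0.999) < 1e-6   # gamma_1 -> -A_1 far from the hole
```

The Gaussian of width $10$ concentrates the strong damping near the horizon, while the much wider Gaussian in $\gamma_1$ controls the asymptotic characteristic speeds.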
Larger values of $A_0$ and $A_2$ are used for smaller black holes. During the collapse phase of the evolutions we find less sensitivity to the choice of the damping parameters. We use the same functional form as during the ringdown but always choose $A_0 = A_2 = 20$. \section{Results}\label{sec:Results} All files used to produce figures in this paper, including the data, are available from the arXiv version of this paper. \subsection{Scaling}\label{sec:Scaling} In this section we present two scaling relations. The first involves the final mass of the black hole, $M_{\mathrm{BH}}$, for supercritical evolutions. For each class of initial data we evolve the data with amplitudes large enough that a black hole forms and gradually decrease the amplitude. While decreasing the amplitude we focus on simulations that form a black hole. Rather than performing a binary search to estimate $p_\star$, we fit the relationship $\ln(M_{\mathrm{BH}})=\gamma\ln(p/p_\star-1)+C$ to the data for $\gamma$, $p_\star$, and $C$, where we take $p$ to be the amplitude $\varphi_0$ of the initial data. We then use the $p_\star$ from the fit to determine an amplitude that should form a black hole but is closer to the critical solution. This is repeated until $\log_{10}(p/p_\star-1)\approx-6$, the target value. Choosing suitable values of $p$ to fit for $\gamma$ and $\Delta$ is tricky. We describe our procedure in the \hyperref[sec:Appendix]{Appendix}. Note that the relationship used for determining which amplitude to use next is not used for analyzing the results. The second scaling relation involves $R_{\max}$, the maximum of the Ricci scalar at the center, for subcritical evolutions. We run simulations to obtain an approximately even distribution of masses and maximum Ricci scalars for $\ln(p/p_\star-1)\in(-14,-5]$. We estimate the errors in the final mass of the black hole and $R_{\max}$ using convergence tests with values of $p$ nearest $p_\star$.
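The bootstrap fit used to choose the next amplitude is ordinary linear least squares in $\ln(p/p_\star-1)$. A minimal sketch with synthetic, noiseless data (illustrative only; the actual analysis fits the full sinusoidally modulated relation with all parameters, including $p_\star$, free):

```python
# Minimal sketch (synthetic data): fit ln(M_BH) = gamma * ln(p/p* - 1) + C.
# gamma_true is the accepted spherical value; the data are fabricated.
gamma_true, C_true = 0.374, -1.2
xs = [-6.0 + 0.5 * k for k in range(12)]     # samples of ln(p/p* - 1)
ys = [gamma_true * x + C_true for x in xs]   # corresponding ln(M_BH)

# closed-form least squares for slope and intercept
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
gamma_fit = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
C_fit = my - gamma_fit * mx

assert abs(gamma_fit - gamma_true) < 1e-12
assert abs(C_fit - C_true) < 1e-12
```

With noiseless data the slope and intercept are recovered exactly, so this mainly illustrates the structure of the fit that is iterated to drive the amplitude toward $p_\star$.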
Once we have reached the target number of simulations, with the lowest amplitude that forms a black hole having $\log_{10}(p/p_\star-1)\approx-6$, we fit the mass of the resulting black hole to \begin{align} \label{eq:SineMassFit} \ln(M_{\mathrm{BH}})=&\gamma^M\ln(p/p_\star-1)+C^M\notag\\ &+A^M\sin\left[w^M\ln(p/p_\star-1)+\delta^M\right], \end{align} as suggested in~\cite{Gundlach:1996eg, Hod:1996az}. Note that the superscript $M$ is not an exponent but denotes that the parameter was obtained from fitting to the mass of the black hole rather than to the maximum Ricci scalar at the center. We find that the $\chi^2$ probability and the reduced $\chi^2$ are better for this function than for the one where the sinusoidal term is omitted. We fit for all parameters in~\eqref{eq:SineMassFit}, including $p_\star$. The fitting function used for the maximum Ricci scalar at the origin is \begin{align} \label{eq:SineRicciFit} \ln(R_{\max})=&-2\gamma^R\ln\left|p/p_\star-1\right|+C^R\notag\\ &+A^R\sin\left[w^R\ln\left|p/p_\star-1\right|+\delta^R\right]. \end{align} However, for consistency we use the value of $p_\star$ obtained from fitting to the masses when fitting to the maximum Ricci scalar as well. In Fig.~\ref{fig:ScalingMasses} we plot $\ln(M_{\mathrm{BH}})$ as a function of $\ln(p/p_\star-1)$ for the three types of initial data studied. For data $\varphi_{\Re(Y^2_1)}$ we arbitrarily choose $\delta=0.9$, which is a large deviation from the spherical solution. For reference, when $\delta=1$ the scalar field profile is zero at the zeros of $1-\cos\phi\sin2\theta$. For initial data $\varphi_{\text{3-d}}$ we choose $\delta=1$, an even stronger deviation from spherical symmetry. In Fig.~\ref{fig:ScalingMasses} we offset the curves vertically by $\beta_{i}=\{0.3,0,-0.3\}${} so that they do not overlap and are easier to compare. The critical exponents we find are \gammamass{}, where the number in parentheses is the uncertainty in the last digit.
These are all close to the accepted value for spherically symmetric initial data, $0.374\pm0.001$~\cite{Gundlach:1996eg}, strongly suggesting that the spherical mode dominates. \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{ScalingMasses.pdf} \caption{Plotted is $\ln(M_{\mathrm{BH}})$ as a function of $\ln(p/p_\star-1)$ for the three types of initial data studied. We find critical exponents \gammamass{}. We shift the curves vertically by $\beta_{i}=\{0.3,0,-0.3\}${} so that data points from different initial data are easily distinguished.}\label{fig:ScalingMasses} \end{figure} In addition to studying the final mass of the resulting black hole, we follow~\cite{Garfinkle:1998va} and calculate the maximum Ricci scalar at the center of the collapse for subcritical evolutions. In Fig.~\ref{fig:ScalingRicci} we plot $\ln(R_{\max})$ as a function of $\ln(p/p_\star-1)$ along with a fit using Eq.~(\ref{eq:SineRicciFit}) for the initial data studied. We again offset the plots vertically by amounts $\beta_{i}=\{0.4, 0, -0.4\}${} to aid readability. In this case we find critical exponents \gammaricci{}, which are comparable to the values for mass scaling and to the accepted value in spherically symmetric critical collapse, $\gamma=0.374\pm0.001$. \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{ScalingRicci.pdf} \caption{Plotted is $\ln(R_{\max})$ as a function of $\ln(1-p/p_\star)$ for the three types of initial data studied. We find critical exponents \gammaricci{}.
We shift the curves vertically by $\beta_{i}=\{0.4, 0, -0.4\}${} so that data points from different initial data are easily distinguished.}\label{fig:ScalingRicci} \end{figure} \subsection{Echoing}\label{sec:Echoing} \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{Residuals_DataSphericalMasses.pdf} \caption{The residuals from fitting $\ln(M_{\mathrm{BH}})=\gamma^M\ln(p/p_\star-1)+C$ (blue dots) and Eq.~(\ref{eq:SineMassFit}) (green triangles) to the black hole masses for the spherically symmetric case, $\varphi_{\rm{sph}}$. The sinusoidal residual of the straight-line fit is identical to what is observed in~\cite{Hod:1996az}.}\label{fig:ResidualsSs} \end{figure} Having studied the scaling we now turn to the fine structure and echoing of the critical behavior. Echoing of any gauge-invariant quantity was described by Eq.~\eqref{eq:rescaling} above. A small-amplitude sinusoidal modulation about the straight line expected from critical behavior was conjectured and observed in~\cite{Hod:1996az}. Figs.~\ref{fig:ScalingMasses} and~\ref{fig:ScalingRicci} both show this feature. In Fig.~\ref{fig:ResidualsSs} we plot the residuals when fitting only the linear term and when fitting the linear plus sine term for the spherically symmetric mass scaling case.\footnote{The residuals of the fits for non-spherical initial data and for Ricci scaling are qualitatively identical.} The sinusoidal modulation is much clearer in Fig.~\ref{fig:ResidualsSs} than in Fig.~\ref{fig:ScalingMasses}. From the fit, Eq.~(\ref{eq:SineMassFit}), we estimate the period, $T=2\pi/w$. In~\cite{Hod:1996az} it was found that the relationship between the echoing period $\Delta$ and the scaling period $T$ is $T=\Delta/ (2 \gamma)$. To test this relationship, we calculate $\Delta$ using $T$ and also by estimating it directly from the Ricci scalar at the origin as a function of the logarithmic time, $-\ln(1-\tau/\tau_\star)$.
$\tau$ is the proper time at the origin given by \begin{align} \label{eq:ProperTimeOrigin} \tau=\int_0^t N(\tilde{t}, 0)d\tilde{t}, \end{align} and $\tau_\star$ is the accumulation time of the self-similar solution. We find that despite being able to resolve the fine structure and knowing $p_\star$ to six significant figures, the estimate of $\tau_\star$ from the apparent horizon formation time is only accurate to about two digits. This is because the formation time of an apparent horizon is a gauge-dependent quantity. We estimate $\tau_\star$ by assuming that the logarithmic time between successive echoes becomes constant and adjusting $\tau_\star$ until this is true. The resulting $\tau_\star$ is consistent with what we estimate from apparent horizon formation times. In Fig.~\ref{fig:Echoing} we plot $\ln(R(t,r=0))$, a geometric invariant, which shows the expected echoing that has been studied in previous work~\cite{Garfinkle:1998va,Sorkin:2005vz}. From Fig.~\ref{fig:Echoing} we estimate the echoing period to be $\Delta=3.2\pm0.1$. \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{Echoing.pdf} \caption{Plotted is $\ln(R(t,r=0))$ as a function of $\ln(1-\tau/\tau_\star)$ for the three types of initial data studied. The echoing is clearly visible and very similar between the different evolutions, which all have $\ln(1-p/p_\star)\approx-6$. The echoing period is $\Delta=3.2\pm0.1$ for all simulations.}\label{fig:Echoing} \end{figure} \begin{table} \centering \begin{tabularx}{\columnwidth}{@{\extracolsep{\stretch{1}}}*{4}{c}@{}} \hline Initial Data & $2\gamma^MT^M$ & $2\gamma^RT^R$ & $\Delta_{\mathrm{echoing}}$ \\ \hline $\varphi_{\mathrm{sph}}$ & $3.46\pm0.01$ & $3.557\pm0.001$ & $3.2\pm0.1$ \\ $\varphi_{\Re (Y^2_1)}$ & $3.46\pm0.02$ & $3.518\pm0.002$ & $3.2\pm0.1$ \\ $\varphi_{\mathrm{3-d}}$ & $3.67\pm0.04$ & $3.512\pm0.003$ & $3.2\pm0.1$ \\ \hline \end{tabularx} \caption{Comparison of $2\gamma^MT^M$ and the echoing period $\Delta$.
In~\cite{Hod:1996az} it was found that $\Delta=2\gamma T$, which we are unable to verify within our error estimates. The accepted value of the echoing period in spherical symmetry is $\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}.}\label{tab:EchoingPeriods} \end{table} In Table~\ref{tab:EchoingPeriods} we summarize and compare direct estimates of $\Delta$ to $2\gamma T$. Specifically, we find that $2\gamma^MT^M\approx3.46$, near the best known value of $\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}. For simulations that do not form a horizon, where we compute $2\gamma^RT^R$ from the Ricci scalar scaling plot, Fig.~\ref{fig:ScalingRicci}, we find that $2\gamma^R_{\mathrm{sph}}T^R_{\mathrm{sph}}=3.557\pm0.001$, $2\gamma^R_{\Re (Y^2_1)}T^R_{\Re (Y^2_1)}=3.518\pm0.002$, and $2\gamma^R_{\text{3-d}}T^R_{\text{3-d}}=3.512\pm0.003$. The discrepancy between $2\gamma T$ from mass scaling and Ricci scalar scaling is currently not understood. When studying the echoing of $\ln(-R(t,r=0))$, we find $\Delta=3.2\pm0.1$, where the larger error is explained by the difficulty in estimating $\tau_\star$. \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{PowerPsiPlanar.pdf} \caption{The power in $\varphi_\ell$ for $\ell=0, 2$ for the $\Re(Y^2_1)$ initial data with $\varphi_0=0.07586803$.}\label{fig:PowerPsiPlanar} \end{figure} A power spectrum analysis shows that the spherical mode dominates the evolution. We define the power in a given $\ell$-mode as \begin{align} P_\ell = \frac{1}{N_r} \sum_{i=0}^{N_r-1}\sum_{m=-\ell}^{\ell}\left|C_{i,\ell,m}\right|^2 \end{align} where $N_r$ is the number of radial points, and $C_{i,\ell,m}$ are the coefficients in the spectral expansion. This definition is consistent with Parseval's theorem given that \begin{align} \int \lvert Y_m^\ell(\theta, \phi)\rvert^2d\Omega=1. \end{align} Also note that with this definition at a given radius \begin{align} \int \lvert f(\theta, \phi)\rvert^2d\Omega = \sum_{\ell=0}^{\infty}P_\ell.
\end{align} For the $\Re(Y^2_1)$ data we find that initially \begin{align} \frac{P_2}{P_0} = \frac{27}{125} \Rightarrow \frac{P_2}{\sum_{\ell}P_\ell} = \frac{P_2}{P_0 + P_2} \approx 0.18, \end{align} or that approximately 18 percent of the power is in the $\ell=2$ mode. For the 3-d initial data we find that initially \begin{align} \frac{P_2}{P_0} \approx 0.548 \Rightarrow \frac{P_2}{\sum_{\ell}P_\ell} = \frac{P_2}{P_0 + P_2} \approx 0.35, \end{align} or that approximately 35 percent of the power is in the $\ell=2$ mode. In Fig.~\ref{fig:PowerPsiPlanar} we plot the power in $\varphi_\ell$ for $\ell=0, 2$ for the $\Re(Y^2_1)$ initial data. Fig.~\ref{fig:PowerPsiPlanar} shows that the $\ell=2$ mode decays much more rapidly than the $\ell=0$ mode, suggesting that the spherically symmetric critical solution is approached. However, given the different initial data and that we are further from the critical solution than~\cite{Choptuik:2003ac}, we are unable to corroborate or dispute their results. The initial data used in~\cite{Baumgarte:2018fev} is given by \begin{align} \label{eq:Baumgarte data} \varphi_{Y_2^{2}} =& \varphi_0 \exp\left(-\frac{r}{r_0}\right)\left[\sin^2\theta +\left(1-\delta^2\right)\cos^2\theta\right]\notag \\ =&\varphi_0 \exp\left(-\frac{r}{r_0}\right)\left(1 - \delta^2 + \delta^2\sin^2\theta\right). \end{align} The deformation in this case is proportional to the $Y_2^{\pm2}$ spherical harmonics as opposed to the $Y_2^1$ spherical harmonic. Ref.~\cite{Baumgarte:2018fev} found that for $\delta=0.75$ the critical behavior differs significantly from that of the spherically symmetric evolutions. For example, the critical exponent is observed to be $\gamma\approx0.306$. The percentage of the power in the $\ell=2$ mode for $\delta=0.75$ is approximately 47 percent. This is 12 percentage points more than for our 3-d initial data, which has behavior consistent with the spherically symmetric evolutions.
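The fractional-power arithmetic above is elementary but easy to check. A minimal sketch, taking the ratios $P_2/P_0$ quoted in the text and assuming only the $\ell=0$ and $\ell=2$ modes are present:

```python
def fraction_l2(p2_over_p0):
    """Given P_2/P_0, return P_2/(P_0 + P_2), assuming only l = 0, 2 modes."""
    return p2_over_p0 / (1.0 + p2_over_p0)

print(f"Re(Y^2_1) data: {fraction_l2(27.0 / 125.0):.3f}")  # ~0.18
print(f"3-d data:       {fraction_l2(0.548):.3f}")         # ~0.35
```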
This raises the question as to whether the reason~\cite{Baumgarte:2018fev} see different behavior is because of the increased power in the $\ell=2$ modes or because the initial data is proportional to the $Y_2^{\pm2}$ spherical harmonics instead of the $Y_2^1$ spherical harmonic. Work is underway to attempt to resolve this question. \section{Conclusions}\label{sec:Conclusions} We present results of a study of critical behavior in the 3-d gravitational collapse of a massless scalar field with no symmetry assumptions. We are able to resolve the dominant critical behavior as well as the fine structure in both supercritical and subcritical evolutions. We use the Spectral Einstein Code, \texttt{SpEC}~\cite{SpECwebsite}, to perform the evolutions, with several key changes to the gauge condition and constraint damping. We study how the critical exponent and echoing period obtained from the data depend on how close to the critical solution the simulations are, as well as how the simulations are distributed in parameter space. This is especially important in 3-d, where simulations are costly to perform. We find the critical exponents to be \gammamass{}, consistent with the accepted result in spherical symmetry of $0.374\pm0.001$~\cite{Gundlach:1996eg}. The accepted value of the echoing period $\Delta$ in spherical symmetry is $\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}, while we find echoing periods $\Delta=3.2\pm0.1$ for all initial data considered. The discrepancy can be attributed to the difficulty in directly measuring the echoing period. We also test the predicted relationship~\cite{Gundlach:1996eg, Hod:1996az} between the echoing period and the fine structure of the scaling, $2\gamma T=\Delta$. We find that for mass scaling \Deltamass{}, where $T^M$ is the period of the sinusoidal fine structure.
The agreement of the critical exponent, echoing period, and fine structure between the spherically symmetric and highly non-spherical simulations leads us to conclude that even for initial data far from spherical symmetry the critical solution is that of spherical symmetry. However, the reason why our results differ from those of~\cite{Choptuik:2003ac} and~\cite{Baumgarte:2018fev}, where data far from spherical symmetry approaches a different critical solution, is not yet fully understood. One reason for the discrepancy could be that in our data approximately 18 percent of the total power is in the $\ell=2$ mode for the $\Re(Y_1^2)$ initial data and 35 percent for the 3-d initial data, while in~\cite{Baumgarte:2018fev} approximately 47 percent of the power is in the $\ell=2$ mode. In other words, more power than we used is needed in the $\ell=2$ mode. Another possible reason is that~\cite{Baumgarte:2018fev} studied $\ell=2, m=2$ initial data while we study $\ell=2,m=1$ initial data. Work is underway to understand if either of these scenarios is responsible for the discrepancy and to independently reproduce the simulations of~\cite{Baumgarte:2018fev}. \section{Acknowledgements} We are grateful to Andy Bohn, Fran\c{c}ois H\'{e}bert, and Leo Stein for insightful discussions and feedback on earlier versions of this paper. We are also grateful to the anonymous referee for the feedback. This work was supported in part by a Natural Sciences and Engineering Research Council of Canada PGS-D grant to ND, NSF Grant PHY-1606654 at Cornell University, and by a grant from the Sherman Fairchild Foundation. Computations were performed on the Zwicky cluster at Caltech, supported by the Sherman Fairchild Foundation and by NSF award PHY-0960291.
\section{Introduction} Mesh patterns were first introduced by Br\"and\'en and Claesson in \cite{Bra11} as a generalisation of permutation patterns, and have been studied extensively in recent years, see e.g.,~\cite{CTU15,JKR15}. A mesh pattern consists of a pair $(\pi,P)$, where $\pi$ is a permutation and $P$ is a set of coordinates in a square grid. For example, $(312,\{(0,0),(1,2)\})$ is a mesh pattern, which we depict by \begin{center} \patt{0.5}{3}{3,1,2}[0/0,1/2][][][][][4]. \end{center} A natural definition of when one mesh pattern occurs in another was given in~\cite{TU17}, which we present in \cref{sec:PosMP}. This allows us to generalise the classical permutation poset to a poset of mesh patterns, where $(\sigma,S)\le(\pi,P)$ if there is an occurrence of $(\sigma,S)$ in~$(\pi,P)$. The permutation poset has received a lot of attention in recent years, but due to its complicated structure a full understanding of it has proven elusive, see~\cite{McSt13,Smith15}. The poset of mesh patterns, which we define here, contains the poset of permutations as an induced subposet. Therefore, investigating the poset of mesh patterns may lead to a better understanding of the poset of permutations. Moreover, studying this poset may help to answer some of the open questions on mesh patterns. In \cref{sec:PosMP} we introduce the poset of mesh patterns and related definitions, including a brief overview of poset topology. In \cref{sec:MF} we prove some results on the M\"obius function of this poset. In \cref{sec:purity} we give a characterisation of the non-pure (or non-ranked) intervals of the poset. In \cref{sec:topology} we give some results on the topology of the poset. \section{The Poset of Mesh Patterns}\label{sec:PosMP} To define a mesh pattern we begin with a permutation $\pi=\pi_1\pi_2\ldots\pi_n$. We can plot $\pi$ on an $n\times n$ grid, where we place a dot at coordinates $(i,\pi_i)$, for all $1\le i\le n$.
A \emph{mesh pattern} is then obtained by shading some of the boxes of this grid, so a mesh pattern takes the form $p=(\cl{p},\sh{p})$, where $\cl{p}$ is a permutation and $\sh{p}$ is a set of coordinates recording the shaded boxes, which are indexed by their south west corner. For ease of notation we sometimes denote the mesh pattern $(\cl{p},\sh{p})$ as $\cl{p}^{\sh{p}}$. We let~$|\cl{p}|$ represent the length of $\cl{p}$ and $|\sh{p}|$ the size of $\sh{p}$, and define the \emph{length} of $p$ as $|\cl{p}|$, which we denote $|p|$. For example, the mesh pattern $(132,\{(0,0),(0,1),(2,2)\})$, or equivalently $132^{(0,0),(0,1),(2,2)}$, has the form: \begin{center} \patt{0.5}{3}{1,3,2}[0/1,0/0,2/2][][][][][4] \end{center} To define when a mesh pattern occurs within another mesh pattern, we first need to recall two other well-known definitions of occurrence. A permutation $\sigma$ \emph{occurs} in a permutation $\pi$ if there is a subsequence, $\eta$, of $\pi$ whose letters appear in the same relative order of size as the letters of $\sigma$. The subsequence $\eta$ is called an \emph{occurrence} of $\sigma$ in $\pi$. If no such occurrence exists we say that $\pi$ \emph{avoids} $\sigma$. Consider a mesh pattern $(\sigma,S)$ and an occurrence $\eta$ of $\sigma$ in $\pi$, in the classical permutation pattern sense. Each box $(i,j)$ of $S$ corresponds to an area $R_{\eta}(i,j)$ in the plot of $\pi$, which is the rectangle whose corners are the points in $\pi$ which in $\eta$ correspond to the letters $\sigma_i,\sigma_{i+1},j,j+1$ of $\sigma$, where the letters $\sigma_0,\sigma_{|\sigma|+1},0$ and $|\sigma|+1$ correspond to the west, east, south and north boundaries, respectively. A point is contained in $R_\eta(i,j)$ if it is in the interior of $R_\eta(i,j)$, that is, not on the boundary. For example, in \cref{fig:occEx} where $\eta$ is the occurrence in red, the area of $R_\eta(0,0)$ contains the boxes $\{(0,0),(1,0),(0,1),(1,1)\}$, and it contains exactly one point.
We say that $\eta$ is an occurrence of the mesh pattern $(\sigma,S)$ in the permutation~$\pi$ if there is no point in $R_{\eta}(i,j)$, for all shaded boxes~$(i,j)\in S$. Using these definitions of occurrences we can recall the concept, introduced in \cite{TU17}, of when one mesh pattern is contained in another; an example is given in \cref{fig:occEx}. \begin{defn}[\cite{TU17}]\label{defn:meshOcc} An occurrence of a mesh pattern $(\sigma,S)$ in another mesh pattern $(\pi,P)$ is an occurrence~$\eta$ of~$(\sigma,S)$ in $\pi$, where for any $(i,j)\in S$ every box in $R_\eta(i,j)$ is shaded in $(\pi,P)$. \end{defn} \begin{figure}\centering \begin{subfigure}[b]{0.3\textwidth} \centering\patt{0.75}{2}{1,2}[0/1,0/2,2/2][][][][] \caption{}\label{subfiga}\end{subfigure} \begin{subfigure}[b]{0.3\textwidth}\centering \colpatt{0.75}{3}{1,2,3}[0/0,0/2,0/3,1/3,1/1,1/2,2/1,2/2,3/3][2/2,3/3][3/3,0/2,0/3,1/3,1/2] \caption{}\label{subfigc}\end{subfigure} \caption{A pair of mesh patterns, with an occurrence of (a) in~(b) depicted in red. }\label{fig:occEx} \end{figure} The classical permutation poset $\mathcal{P}$ is defined as the poset of all permutations, with $\sigma\le_\mathcal{P}\pi$ if and only if $\sigma$ occurs in $\pi$. Using \cref{defn:meshOcc} we can similarly define the mesh pattern poset $\mathcal{M}$ as the poset of all mesh patterns, with $m\le_\mathcal{M} p$ if $m$ occurs in $p$. We drop the subscripts from $\le$ when it is clear which partial order is being considered. An \emph{interval} $[\alpha,\beta]$ of a poset is defined as the subposet induced by the set $\{\kappa\,|\,\alpha\le\kappa\le\beta\}$. See \cref{fig:intEx} for an example of an interval of $\mathcal{M}$.
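The classical notion of occurrence underlying \cref{defn:meshOcc} can be made concrete with a brute-force sketch. The function below is illustrative only (its name and approach are ours, not from the literature): it checks every subsequence of $\pi$ of the right length for the same relative order as $\sigma$.

```python
from itertools import combinations

def occurs(sigma, pi):
    """Is there a subsequence of pi in the same relative order as sigma?"""
    k = len(sigma)
    # record the relative order of sigma's letters
    order = tuple(sorted(range(k), key=lambda i: sigma[i]))
    for idx in combinations(range(len(pi)), k):
        sub = [pi[i] for i in idx]
        if tuple(sorted(range(k), key=lambda i: sub[i])) == order:
            return True  # idx picks out an occurrence of sigma in pi
    return False

print(occurs((3, 1, 2), (4, 1, 3, 2)))  # True: e.g. the subsequence 4, 1, 3
print(occurs((1, 2, 3), (3, 2, 1)))     # False: 321 avoids 123
```

Extending this to mesh pattern containment would additionally require checking, for each shaded box of $(\sigma,S)$, that every box of the corresponding region $R_\eta(i,j)$ is shaded in $(\pi,P)$.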
\begin{figure}\centering \begin{tikzpicture} \def1.35{3} \def2{3} \def0.25{0.5} \node (123-123) at (0*2,5*1.35){\patt{0.25}{3}{1,2,3}[0/3,1/3,2/3][][][][][4]}; \node (123-12) at (-1*2,4*1.35){\patt{0.25}{3}{1,2,3}[0/3,1/3][][][][][4]}; \node (123-13) at (0*2,4*1.35){\patt{0.25}{3}{1,2,3}[0/3,2/3][][][][][4]}; \node (123-23) at (1*2,4*1.35){\patt{0.25}{3}{1,2,3}[1/3,2/3][][][][][4]}; \node (123-1) at (-1.5*2,3*1.35){\patt{0.25}{3}{1,2,3}[0/3][][][][][4]}; \node (12-12) at (-0.5*2,3*1.35){\patt{0.25}{2}{1,2}[0/2,1/2][][][][][4]}; \node (123-2) at (0.5*2,3*1.35){\patt{0.25}{3}{1,2,3}[1/3][][][][][4]}; \node (123-3) at (1.5*2,3*1.35){\patt{0.25}{3}{1,2,3}[2/3][][][][][4]}; \node (12-1) at (-1*2,2*1.35){\patt{0.25}{2}{1,2}[0/2][][][][][4]}; \node (123-0) at (0*2,2*1.35){\patt{0.25}{3}{1,2,3}[][][][][][4]}; \node (12-2) at (1*2,2*1.35){\patt{0.25}{2}{1,2}[1/2][][][][][4]}; \node (12-0) at (-1*2,1*1.35){\patt{0.25}{2}{1,2}[][][][][][4]}; \node (1-1) at (1*2,1*1.35){\patt{0.25}{1}{1}[0/1][][][][][4]}; \node (1-0) at (0*2,0*1.35){\patt{0.25}{1}{1}[][][][][][4]}; \draw (123-123) -- (123-12);\draw (123-123) -- (123-13);\draw (123-123) -- (123-23); \draw[-] (123-123) to [bend right=15] (12-12);\draw[-] (12-12) to [bend left=15] (1-1); \draw (123-12) -- (123-1);\draw (123-12) -- (123-2); \draw (123-13) -- (123-1);\draw (123-13) -- (123-3); \draw (123-23) -- (123-2);\draw (123-23) -- (123-3); \draw (123-1) -- (123-0);\draw (123-1) -- (12-1); \draw (123-2) -- (123-0); \draw (123-3) -- (123-0);\draw (123-3) -- (12-2); \draw (12-12) -- (12-1);\draw (12-12) -- (12-2); \draw (12-1) -- (12-0);\draw (12-2) -- (12-0); \draw (123-0) -- (12-0);\draw (12-0) -- (1-0); \draw (1-1) -- (1-0); \end{tikzpicture} \caption{The interval $[1^\emptyset,123^{(0,3),(1,3),(2,3)}]$ of $\mathcal{M}$.}\label{fig:intEx} \end{figure} The first result on the mesh pattern poset is that there are infinitely many maximal elements, which shows a significant difference to the permutation poset, where there are no 
maximal elements. \begin{lem}\label{lem:max} The poset of mesh patterns contains infinitely many maximal elements, which are the mesh patterns in which all boxes are shaded. \end{lem} \begin{proof} This follows from the easily proven fact that a fully shaded mesh pattern occurs only in itself, and in no other mesh patterns. \end{proof} \subsection{Poset Topology} In this subsection we briefly introduce some poset topology, and refer the reader to \cite{Wac07} for a comprehensive overview of the topic, including any definitions we omit here. The \emph{M\"obius function} of an interval $[\alpha,\beta]$ of a poset is defined by:\linebreak ${\mu(a,a)=1}$, for all $a$, $\mu(a,b)=0$ if $a\not\le b$, and $$\mu(a,b)=-\sum_{c\in[a,b)}\mu(a,c).$$ See \cref{fig:1-123} for an example. The M\"obius function of a poset $P$ is given by $\mu(P)=\mu(\hat{0},\hat{1})$, where $\hat{0}$ and $\hat{1}$ are unique minimal and maximal elements which we add to $P$. In a poset we say that $\alpha$ \emph{covers} $\beta$, denoted $\alpha\gtrdot\beta$, if $\alpha>\beta$ and there is no $\kappa$ such that $\alpha>\kappa>\beta$. A \emph{chain} of length $k$ in a poset is a totally ordered subset $c_1<c_2<\cdots<c_{k+1}$, and the chain is \emph{maximal} if $c_i\lessdot c_{i+1}$, for all $1\le i \le k$. A poset is \emph{pure} (also known as \emph{ranked}) if all maximal chains have the same length. The \emph{dimension} of a poset $P$, denoted $\dim P$, is the length of the longest maximal chain. For example, the interval in \cref{fig:intEx} is non-pure because there is one maximal chain of length $3$ ($\pattin{1}{1}{}\lessdot\pattin{1}{1}{0/1}\lessdot \pattin{2}{1,2}{0/2,1/2}\lessdot\pattin{3}{1,2,3}{0/3,1/3,2/3}$), two maximal chains of length $4$ and all other maximal chains have length $5$, so the interval has dimension $5$. The \emph{interior} of an interval $[\alpha,\beta]$ is obtained by removing $\alpha$ and $\beta$, and is denoted $(\alpha,\beta)$.
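The recursive definition of $\mu$ above translates directly into code. The sketch below (ours, for illustration) computes $\mu$ on finite sets of shaded boxes ordered by inclusion; this models an interval $[\pi^A,\pi^B]$ of mesh patterns with a fixed underlying permutation, where only the shading varies.

```python
from itertools import combinations
from functools import lru_cache

@lru_cache(maxsize=None)
def mobius(a, b):
    """Recursive Mobius function on subsets ordered by inclusion:
    mu(a, a) = 1 and mu(a, b) = -sum of mu(a, c) over a <= c < b."""
    if not a <= b:
        return 0
    if a == b:
        return 1
    total = 0
    for k in range(len(a), len(b)):
        for c in map(frozenset, combinations(sorted(b), k)):
            if a <= c:
                total += mobius(a, c)
    return -total

# Interval [pi^A, pi^B] with A empty and |B| = 3 shaded boxes: mu = (-1)^3.
A = frozenset()
B = frozenset({(0, 1), (0, 2), (1, 2)})
print(mobius(A, B))  # -1
```

The output agrees with the boolean-lattice value $(-1)^{|B|-|A|}$ established later in this section.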
The \emph{order complex} of an interval $[\alpha,\beta]$, denoted $\Delta(\alpha,\beta)$, is the simplicial complex whose faces are the chains of $(\alpha,\beta)$. When we refer to the \emph{topology} of an interval we mean the topology of the order complex of the interval. A simplicial complex is \emph{shellable} if we can order the maximal faces $F_1,\ldots,F_t$ such that the subcomplex $\left(\cup_{i=1}^{k-1}F_i\right)\cap F_k$ is pure and $(\dim F_k-1)$-dimensional, for all $k=2,\ldots,t$. Being shellable implies other properties on the topology, such as having the homotopy type of a wedge of spheres. An interval $I$ is \emph{disconnected} if the interior can be split into two disjoint pairwise incomparable sets, that is, $I=A\cup B$ with $A\cap B=\emptyset$ and for every~$a\in A$ and $b\in B$ we have $a\not\le b$ and $b\not\le a$. Each interval $I$ can be decomposed into its smallest connected parts, which we call the \emph{components} of $I$. A component is \emph{nontrivial} if it contains more than one element and we say an interval is \emph{strongly disconnected} if it has at least two nontrivial components. For example, the interval $[1^\emptyset,12^{(0,2),(1,2)}]$ in \cref{fig:intEx} is disconnected but not strongly disconnected. Note that if an interval has dimension less than $3$ it can never be strongly disconnected. We can use disconnectivity as a test for shellability using the following results. \begin{lem}\label{lem:strongdis} If an interval is strongly disconnected, then it is not shellable. \begin{proof} Consider any ordering of the maximal chains and let $F_k$, with ${k>1}$, be the first chain where every preceding chain belongs to a different component and $F_k$ belongs to a nontrivial component. Note that such an $F_k$ exists in every ordering because the interval is strongly disconnected, and because $F_k$ belongs to a nontrivial component it must have dimension of at least $1$.
So $\left(\cup_{i=1}^{k-1}F_i\right)\cap F_k=\emptyset$, which has dimension $-1$, so it is not $(\dim F_k-1)$-dimensional. Therefore, the ordering is not a shelling. \end{proof} \end{lem} Since every subinterval of a shellable interval is shellable, \cite[Corollary 3.1.9]{Wac07}, we obtain the following: \begin{cor} An interval which contains a strongly disconnected subinterval is not shellable. \end{cor} Finally, we present a useful result known as the Quillen Fiber Lemma \cite{Quillen78}. Two simplicial complexes are homotopy equivalent if one can be obtained by deforming the other but not breaking or creating any new ``holes", for a formal definition see \cite{Hat02}. A simplicial complex is \emph{contractible} if it is homotopy equivalent to a point, and if two posets are homotopy equivalent their M\"obius functions are equal. Given a poset $P$, with $p \in P$ define the upper ideal $P_{\ge p}=\{q\in P\,|\,q\ge p\}$. \begin{prop}\label{thm:Quil}(Quillen Fiber Lemma) Let $\phi:P\rightarrow Q$ be an order-preserving map between posets such that for any $x\in Q$ the complex\linebreak $\Delta(\phi^{-1}(Q_{\ge x}))$ is contractible. Then $P$ and $Q$ are homotopy equivalent. \end{prop} \section{M\"obius Function}\label{sec:MF} In this section we present some results on the M\"obius function of the mesh pattern poset. We begin with some simple results on: mesh patterns with the same underlying permutations; the mesh patterns with no points~$\epsilon^\emptyset$ and $\epsilon^{(0,0)}$; and mesh patterns with no shaded boxes. Throughout the remainder of the paper we assume that $m$ and $p$ are mesh patterns. \begin{lem} Let $\pi$ be a permutation. For any sets $A\subseteq B$ the interval~$[\pi^A,\pi^B]$ is isomorphic to the boolean lattice~$B_{|B|-|A|}$. Therefore,\linebreak ${\mu(\pi^A,\pi^B)=(-1)^{|B|-|A|}}$ and $[\pi^A,\pi^B]$ is shellable.
\begin{proof} The elements of $[\pi^A,\pi^B]$ are exactly the mesh patterns $\pi^C$ where $C\subseteq B\setminus A$, which implies the result. \end{proof} \end{lem} \begin{lem} Consider $A\in\{\emptyset,(0,0)\}$, then: $$\mu(\epsilon^{A},p)=\begin{cases} 1,&\mbox{ if }p=\epsilon^A \\ -1,&\mbox{ if }A=\emptyset\,\,\&\,\,|\cl{p}|+|\sh{p}|=1\\ 0,&\mbox{ otherwise} \end{cases}.$$ \begin{proof} The first two cases are trivial. By the proof of \cref{lem:max} we know that $\epsilon^{(0,0)}$ is not contained in any larger mesh patterns, which implies $\mu(\epsilon^{(0,0)},p)=0$, for all $p\not=\epsilon^{(0,0)}$. If $|\cl{p}|+|\sh{p}|>1$, then $(\epsilon^\emptyset,p)$ contains a unique minimal element $1^\emptyset$, so $\mu(\epsilon^\emptyset,p)=0$. \end{proof} \end{lem} \begin{lem} The interval $[\sigma^\emptyset,\pi^\emptyset]$ is isomorphic to $[\sigma,\pi]$ in $\mathcal{P}$, so $$\muM(\sigma^\emptyset,\pi^\emptyset)=\muP(\sigma,\pi).$$ \end{lem} The M\"obius function of the classical permutation poset is known to be unbounded \cite{Smith13}. So we get the following corollary: \begin{cor} The M\"obius function is unbounded on $\mathcal{M}$. \end{cor} We can also show that the M\"obius function is unbounded if we include shaded boxes. We do this by mapping to the poset $\mathcal{W}$ of words with subword order, that is, the poset made up of all words and $u\le w$ if there is a subword of $w$ that equals $u$. The map we introduce is analogous to the map in \cite[Section 2]{Smith14}, which maps certain intervals of the permutation poset to intervals of $\mathcal{W}$. A \emph{descent} in a permutation $\pi=\pi_1\pi_2\ldots\pi_n$ is a pair of letters $\pi_i,\pi_{i+1}$ with $\pi_{i}>\pi_{i+1}$. We call $\pi_{i+1}$ the \emph{descent bottom}. An \emph{adjacency tail} is a letter $\pi_i$ with $\pi_i=\pi_{i-1}\pm 1$. Let $adj(\pi)$ be the number of adjacency tails in~$\pi$. 
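These definitions are easy to compute; a small sketch (the function names are ours), using the permutation $2314$, which has exactly one descent with bottom $1$:

```python
def descent_bottoms(perm):
    """Letters pi_{i+1} with pi_i > pi_{i+1}."""
    return [b for a, b in zip(perm, perm[1:]) if a > b]

def adj(perm):
    """Number of adjacency tails: letters pi_i with pi_i = pi_{i-1} +/- 1."""
    return sum(1 for a, b in zip(perm, perm[1:]) if abs(b - a) == 1)

print(descent_bottoms((2, 3, 1, 4)))  # [1]: one descent, with bottom 1
print(adj((2, 3, 1, 4)))              # 1: the letter 3 follows the letter 2
```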
Consider the set $\Gamma$ of mesh patterns where the permutation has exactly one descent, the descent bottom is $1$ and we shade everything south west of~$1$. For example, the mesh pattern $2314^{(0,0),(1,0),(2,0)}$: \begin{center} \patt{0.4}{4}{2,3,1,4}[0/0,1/0,2/0][][][][][4]. \end{center} \begin{lem}\label{lem:mobUn} Consider a mesh pattern $m\in \Gamma$, then $[21^{(0,0),(1,0)},m]$ is shellable and \[ \mu(21^{(0,0),(1,0)},m)=\begin{cases} (-1)^{|m|}\lfloor\frac{|m|}{2}\rfloor,&\text{ if } adj(\cl{m})=0\\ (-1)^{|m|},& \text{ if } adj(\cl{m})=1 \text{ \& tail before descent}\\ 0, &\text{ otherwise} \end{cases}. \] \begin{proof} First note that every mesh pattern in $[21^{(0,0),(1,0)},m]$ is also in $\Gamma$. We define a map $f$ from $\Gamma$ to binary words in the following way. Let $b(x)$ be the set of letters that appear before $1$ in $x\in \Gamma$. Set $\hat{f}(x)$ as the word where the $i$th letter is $0$ if $i$ is in $b(x)$ and $1$ otherwise, and let $f(x)$ equal $\hat{f}(x)$ with the first letter removed. So $f(\Gamma)$ is the set of binary words with at least one~$0$. The inverse of this map is obtained by the following procedure: 1) take a binary word $w\in f(\Gamma)$ and prepend a $1$; 2) put the positions that are $0$'s in increasing order followed by the positions that are $1$'s in increasing order; and 3) shade everything southwest of $1$. So $f$ is a bijection. It is straightforward to check that $f$ is order preserving. So the interval~$[21^{(0,0),(1,0)},m]$ is isomorphic to $[0,f(m)]$ in $\mathcal{W}$. It was shown in \cite{Bjo90} that intervals of $\mathcal{W}$ are shellable, which proves the shellability part. It was also shown that the M\"obius function equals the number of normal occurrences with the sign given by the dimension, where an occurrence is \emph{normal} if in any consecutive sequence of equal elements every non-initial letter is part of the occurrence.
So for an occurrence of $0$ in $f(m)$ to be normal there can be no $1$ directly preceded by a $1$ and at most one $0$ directly preceded by a $0$. If such a $0$ exists it must be the occurrence, otherwise any $0$ can be the occurrence. In our bijection a non-initial letter of such a sequence maps to an adjacency tail. Combining this with the fact that if there are no adjacency tails, then the letters before the descent must be all the even letters of which there are $\lfloor\frac{|m|}{2}\rfloor$, completes the proof. \end{proof} \end{lem} The M\"obius function on $\mathcal{P}$ often takes larger values than on $\mathcal{M}$, but it is not always true that $\muM(m,p)\le \muP(\cl{m},\cl{p})$. A simple counterexample is the interval $$[1^{(0,1)},123^{(0,2),(0,3),(1,2),(1,3)}],$$ which has M\"obius function $1$, however $\muP(1,123)=0$, see \cref{fig:1-123}. \begin{figure}\centering \begin{tikzpicture} \def0.25{0.35} \node (123) at (0,3){\footnotesize\textcolor{white}{1}\patt{0.25}{3}{1,2,3}[0/2,0/3,1/2,1/3][][][][][4]\textcolor{red}{1}}; \node (12a) at (-1,1.5){\footnotesize\textcolor{red}{-1}\patt{0.25}{2}{1,2}[0/2,1/2][][][][][4]\textcolor{white}{-1}}; \node (12b) at (1,1.5){\footnotesize\textcolor{white}{-1}\patt{0.25}{2}{1,2}[0/1,0/2][][][][][4]\textcolor{red}{-1}}; \node (1) at (0,0){\footnotesize\textcolor{white}{-1}\patt{0.25}{1}{1}[0/1][][][][][4]\textcolor{red}{1}}; \draw (1) -- (12a) -- (123) -- (12b) -- (1); \node (123) at (5,3){{\footnotesize\textcolor{red}{0}} $123$ \textcolor{white}{0}}; \node (12) at (5,1.5){{\footnotesize\textcolor{red}{-1}} $12$ \textcolor{white}{-1}}; \node (1) at (5,0){{\footnotesize\textcolor{red}{1}} $1$ \textcolor{white}{1}}; \draw (1) -- (12) -- (123); \end{tikzpicture} \caption{The interval $[1^{(0,1)},123^{(0,2),(0,3),(1,2),(1,3)}]$ (left) in $\mathcal{M}$ and $[1,123]$ (right) in $\mathcal{P}$, with the M\"obius function in red.}\label{fig:1-123} \end{figure} If we consider intervals where the bottom mesh pattern has no 
shadings, then we get the following result: \begin{lem}\label{lem:mu0} Consider an interval $[s^\emptyset,p]$ in $\mathcal{M}$ with $\sh{p}\not=\emptyset$. If $s^B\not\in(s^\emptyset,p)$ for any set $B$, then $\mu(s^\emptyset,p)=0$. \begin{proof} Consider the map $f:(s^\emptyset,p)\rightarrow A:x\mapsto\cl{x}^\emptyset,$ that is, $f$ removes all shadings from~$x$. We can see that ${A=(s^\emptyset,\cl{p}^{\emptyset}]}$, so $A$ is contractible, because it has the unique maximal element~$\cl{p}^\emptyset$, hence $\mu(A)=0$. Moreover,~$f^{-1}(A_{\ge y})=[y,p)$, for all $y\in A$, which is contractible. Therefore,~$(s^\emptyset,p)$ is homotopy equivalent to $A$ by the Quillen Fiber Lemma (\cref{thm:Quil}), which implies~$\mu(s^\emptyset,p)=0$. \end{proof} \end{lem} \begin{ex} Consider the subinterval $[1^\emptyset,12^{(0,2)}]$ in \cref{fig:intEx}, applying \cref{lem:mu0} implies $\mu(1^\emptyset,12^{(0,2)})=0$. However, we cannot apply \cref{lem:mu0} to $[1^\emptyset,12^{(0,2),(1,2)}]$ because it contains the element $1^{(0,1)}$. \end{ex} We can combine Lemma~\ref{lem:mu0} with the following result to see that the M\"obius function is almost always zero on the interval $[1^\emptyset,p]$. \begin{lem} As $n$ tends to infinity the proportion of mesh patterns of length~$n$ that contain any of $\{1^{(0,0)},1^{(1,0)},1^{(0,1)},1^{(1,1)}\}$ approaches $0$. \begin{proof} Let $P(n,i)$ be the probability that the letter $i$ is an occurrence of~$1^{(0,0)}$ in a length $n$ mesh pattern, and let $P(n)$ be the probability that a length $n$ mesh pattern contains $1^{(0,0)}$. The probability $P(n,i)$ can be bounded above by first considering the index $k$ of $i$, each having probability $\frac{1}{n}$, and then requiring that all boxes south west of $i$ are filled, of which there are $ik$. This provides an upper bound, because it is possible that there is a point south west of $i$, which would imply $i$ is not an occurrence of $1^{(0,0)}$. 
We can formulate this as: \begin{align*} P(n,i)&\le\sum_{k=1}^{n}\frac{1}{n}\left(\frac{1}{2^i}\right)^k =\frac{1}{n}\left(\frac{1-2^{-i(n+1)}}{1-2^{-i}}-1\right)\\ &=\frac{1}{n}\left(\frac{2^{-i}-2^{-i(n+1)}}{1-2^{-i}}\right) =\frac{1}{n2^i}\left(\frac{1-2^{-in}}{1-2^{-i}}\right) \le\frac{2}{n2^i} \end{align*} To compute the probability $P(n)$ we can sum over all the $P(n,i)$. Note again this is an overestimate because if a mesh pattern contains multiple occurrences of $1^{(0,0)}$ it counts that mesh pattern more than once. $$ P(n)\le\sum_{i=1}^{n}P(n,i)\le\sum_{i=1}^{n}\frac{2}{n2^i} =\frac{2}{n}\left(\frac{1-\left(\frac{1}{2}\right)^{n+1}}{1-\frac{1}{2}}-1\right) \le\frac{2}{n} $$ Repeating this calculation for the other three shadings of $1$ implies that the probability of containing any of the forbidden mesh patterns is bounded by $\frac{8}{n}$, which tends to zero as $n$ tends to infinity. \end{proof} \end{lem} Because of the previous lemma we obtain: \begin{cor} As $n$ tends to infinity the proportion of mesh patterns $p$ of length $n$ such that $\mu(1^\emptyset,p)=0$ approaches $1$. \end{cor} In the classical case it is true that given a permutation $\sigma$ the probability a permutation of length $n$ contains $\sigma$ tends to $1$ as $n$ tends to infinity; this follows from the Marcus-Tardos Theorem \cite{MT04}. By the above result we can see the same is not true in the mesh pattern case. In fact we conjecture the opposite is true: \begin{conj} Given a mesh pattern $m$, with at least one shaded box, the probability that a random mesh pattern of length $n$ contains $m$ tends to~$0$ as $n$ tends to infinity. \end{conj} \section{Purity}\label{sec:purity} Recall that a poset is pure (also known as ranked) if all the maximal chains have the same length, and as we can see from \cref{fig:intEx}, intervals of the mesh pattern poset can be non-pure. In this section we classify which intervals~$[1^\emptyset,m]$ are non-pure.
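Before turning to purity, we note that the bound $P(n)\le 2/n$ from the previous lemma can be probed numerically. The Monte Carlo sketch below assumes the uniform model implicit in the proof: a uniformly random permutation with each of the $(n+1)^2$ boxes shaded independently with probability $1/2$; an occurrence of $1^{(0,0)}$ is then a left-to-right minimum whose $ik$ south-west boxes are all shaded.

```python
import random

def contains_1_00(n, rng):
    """Does a uniformly random length-n mesh pattern contain 1^{(0,0)}?"""
    perm = list(range(1, n + 1))
    rng.shuffle(perm)
    # each of the (n+1)^2 boxes is shaded independently with probability 1/2
    shaded = {(a, b) for a in range(n + 1) for b in range(n + 1)
              if rng.random() < 0.5}
    minimum = n + 1
    for k, letter in enumerate(perm, start=1):
        if letter < minimum:            # left-to-right minimum: no point SW
            minimum = letter
            # the i*k boxes south-west of the point must all be shaded
            if all((a, b) in shaded for a in range(k) for b in range(letter)):
                return True
    return False

rng = random.Random(0)
trials = 1000
for n in (5, 10, 20):
    estimate = sum(contains_1_00(n, rng) for _ in range(trials)) / trials
    print(f"n = {n:2d}: P(n) ~ {estimate:.3f}   (bound 2/n = {2 / n:.3f})")
```

The estimates stay comfortably below the $2/n$ bound, as expected since the proof deliberately overcounts.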
First we consider the length of the longest maximal chain in any interval~$[1^\emptyset,m]$, that is, the dimension of $[1^\emptyset,m]$. \begin{lem} For any mesh pattern $m$, we have $\dim(1^\emptyset,m)=|\cl{m}|+|\sh{m}|$. \begin{proof} We can create a chain from $m$ to $1^\emptyset$ by deshading all boxes, in any order, and then deleting all but one point, in any order. The length of this chain is $|\cl{m}|+|\sh{m}|$. Moreover, we cannot create a longer chain because at every step of a chain we must deshade a box or delete a point. \end{proof} \end{lem} Therefore, we define the \emph{dimension} of a mesh pattern as $\dim(m)=|\cl{m}|+|\sh{m}|$ and we say an edge $m\lessdot p$ is \emph{impure} if $\dim(p)-\dim(m)>1$. Next we give a classification of impure edges. Let $\mX{m}{x}$ be the mesh pattern obtained by deleting the point $x$ in $m$ and let $\occX{m}{x}$ be the occurrence of $\mX{m}{x}$ in $m$ that does not use the point $x$. An occurrence $\eta$ of $m$ in $p$ \emph{uses the shaded box $(a,b)\in\sh{p}$} if $(a,b)\in R_\eta(i,j)$ for some shaded box $(i,j)\in\sh{m}$. We say that deleting a point $x$ \emph{merges shadings} if there is a shaded box in $\mX{m}{x}$ that corresponds to more than one shaded box in $\occX{m}{x}$; see \cref{fig:impEx}. \begin{lem}\label{lem:impureEdge} Two mesh patterns $m<p$ form an impure edge if and only if all occurrences of $m$ in $p$ use all the shaded boxes of $p$ and are obtained by deleting a point that merges shadings. \end{lem} \begin{proof} First we show the backwards direction. Because $m$ is obtained by deleting a point that merges shadings, $m$ must have one fewer point and at least one fewer shaded box, so $\dim(p)-\dim(m)\ge2$. So it suffices to show that there is no $z$ such that $m<z<p$. Suppose such a $z$ exists. If~$z$ is obtained by deshading a box in $p$, then it can no longer contain $m$, because all occurrences of $m$ in $p$ use all the shaded boxes of $p$.
If $z$ is obtained by deleting a point and $m<z$, then $\cl{m}=\cl{z}$. Therefore, we can deshade some boxes of $z$ to get $m$, which implies there is an occurrence of $m$ in $p$ that does not use all the shaded boxes of $p$, a contradiction. Now consider the forward direction. Suppose $m\lessdot p$ is impure, so $\dim(p)-\dim(m)\ge2$. Therefore, $m$ is obtained by deleting a single point which merges shadings, but does not delete shadings, because any other combination of deleting points and deshading can be done in successive steps. Furthermore, this must be true for any point that can be deleted to get $m$, that is, for all occurrences of $m$ in $p$. Moreover, if there is an occurrence that does not use all the shaded boxes of $p$, we can deshade the box it does not use and get an element that lies between $m$ and $p$. \end{proof} \begin{figure}\centering \begin{subfigure}[b]{0.3\textwidth} \centering\colpatt{0.5}{2}{1,2}[0/2,1/2][2/2][0/2,1/2] \caption*{$a=12^{(0,2),(1,2)}$}\label{subfig:a}\end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering\colpatt{0.5}{3}{1,2,3}[1/3,2/3][1/1,3/3][1/3,2/3] \caption*{$b=123^{(1,3),(2,3)}$}\label{subfig:b}\end{subfigure} \caption{Two mesh patterns with a point $x$ in black whose deletion merges shadings and the occurrences $\occX{a}{x}$ and $\occX{b}{x}$ in red. By \cref{lem:impureEdge}, $\mX{a}{x}\lessdot a$ is impure, but $\mX{b}{x}<b$ is not an impure edge because there is a second occurrence of $\mX{b}{x}$ in $b$, using the points $23$, that does not use all the shaded boxes in $b$.}\label{fig:impEx} \end{figure} \begin{lem}\label{lem:topImpure} If $[m,p]$ contains an impure edge, then it contains an impure edge $a\lessdot b$ where $\cl{p}=\cl{b}$. \begin{proof} Let $x\lessdot y$ be an impure edge in $[m,p]$. So $x$ is obtained from $y$ by deleting a point $i$. Consider an occurrence $\eta$ of $y$ in $p$ and let $b$ be the mesh pattern where $\cl{b}=\cl{p}$ and $\sh{b}$ are the shaded boxes used by $\eta$.
Let $a$ be the mesh pattern obtained from $b$ by deleting the point which corresponds to $i$ in $\eta$. The mesh pattern $b$ is constructed from $y$ by adding a collection of points. None of these added points can touch a shaded box in $b$, as they must be added to empty boxes of~$y$. Moreover, the set of occurrences of~$a$ in $b$ corresponds to the set of occurrences of $x$ in $y$, after adding the new points. This implies that the occurrences of $x$ in $y$ satisfy the conditions of \cref{lem:impureEdge} if and only if the occurrences of~$a$ in $b$ satisfy the same conditions. So \cref{lem:impureEdge} implies $a\lessdot b$ is an impure edge. \end{proof} \end{lem} \begin{prop} The interval $[1^\emptyset,m]$ is non-pure if and only if there exists a point $x$ in $m$ whose deletion merges shadings and there is no other occurrence of $\mX{m}{x}$ in $m$ which uses a subset of the shadings used by $\occX{m}{x}$. \begin{proof} First we show the backwards direction. Let $t$ be the mesh pattern obtained by inserting~$x$ back into $\mX{m}{x}$, and $\phi$ the corresponding occurrence of $\mX{m}{x}$ in $t$. Note that there are no other occurrences of $\mX{m}{x}$ in $t$ because there is no other occurrence of $\mX{m}{x}$ in $m$ which uses a subset of the shadings used by $\occX{m}{x}$. Therefore, by Lemma~\ref{lem:impureEdge} we get that $\mX{m}{x}\lessdot t$ is an impure edge. To see the other direction suppose there is an impure edge in $[1^{\emptyset},m]$. By Lemma~\ref{lem:topImpure} there is an impure edge $a\lessdot b$ where $\cl{b}=\cl{m}$. By \cref{lem:impureEdge} all occurrences of $a$ in $b$ use all shaded boxes of $b$ and are obtained by deleting a point that merges shadings. Moreover, if deleting a point merges shadings in $b$, then its deletion merges shadings in $m$, which implies the result.
\end{proof} \end{prop} \begin{cor} There is an impure edge in the interval $[m,p]$ if and only if there exists a point $x$ in $p$ whose deletion merges shadings and there is no other occurrence of $\mX{p}{x}$ in $p$ with a subset of shadings of $\occX{p}{x}$, and $\mX{p}{x}\ge m$. \end{cor} Note that containing an impure edge in $[m,p]$ does not necessarily imply that $[m,p]$ is non-pure. For example, if $[m,p]$ contains only one edge and that edge is impure, then $[m,p]$ is still pure. It is also possible for a pure interval to contain both impure and pure edges; see \cref{fig:pureIm}. \begin{figure}\centering \begin{tikzpicture} \node (2413) at (0*2,3*1.35){\patt{0.25}{4}{2,4,1,3}[0/0,1/0,2/0][][][][][4]}; \node (213) at (-1*2,1.15*1.35){\patt{0.25}{3}{2,1,3}[0/0,1/0][][][][][4]}; \node (312) at (0*2,1.15*1.35){\patt{0.25}{3}{3,1,2}[0/0,1/0][][][][][4]}; \node (231) at (1*2,1.85*1.35){\patt{0.25}{3}{2,3,1}[0/0,1/0,2/0][][][][][4]}; \node (21) at (0*2,0*1.35){\patt{0.25}{2}{2,1}[0/0,1/0][][][][][4]}; \draw[-] (21) to [bend left=15] (213); \draw[-] (21) to (312); \draw[-] (21) to [bend right=15] (231); \draw[-] (213) to [bend left=15] (2413); \draw[-] (312) to (2413); \draw[-] (231) to [bend right=15] (2413); \end{tikzpicture} \caption{The interval $[21^{(0,0),(1,0)},2413^{(0,0),(1,0),(2,0)}]$, which is pure but contains both pure and impure edges.} \label{fig:pureIm} \end{figure} \section{Topology}\label{sec:topology} A full classification of shellable intervals has not been obtained for the classical permutation poset, so finding such a classification for the mesh pattern poset would be equally difficult, if not more so. However, in \cite{McSt13} all disconnected intervals of the permutation poset are described, and containing a disconnected subinterval implies a pure interval is not shellable. So this gives a large class of non-shellable intervals; in fact, it is shown that almost all intervals are not shellable.
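Returning briefly to the interval of \cref{fig:pureIm}: its purity can be checked mechanically from the Hasse diagram. A minimal sketch (the cover relations and dimensions below are transcribed from the figure, with $\dim(m)=|\cl{m}|+|\sh{m}|$; the code is illustrative, not part of the paper):

```python
# Check, from the Hasse diagram of the figure, that the interval is pure
# (all maximal chains have the same length) even though the dimension gap
# forces every maximal chain to contain an impure edge.

# dim(m) = |cl(m)| + |sh(m)| for each element of the interval.
dim = {"21": 2 + 2, "213": 3 + 2, "312": 3 + 2, "231": 3 + 3, "2413": 4 + 3}
covers = {"21": ["213", "312", "231"], "213": ["2413"],
          "312": ["2413"], "231": ["2413"], "2413": []}

def maximal_chains(x, top):
    """All maximal chains from x up to top, following cover relations."""
    if x == top:
        return [[x]]
    return [[x] + chain for y in covers[x] for chain in maximal_chains(y, top)]

chains = maximal_chains("21", "2413")
lengths = {len(c) - 1 for c in chains}
assert lengths == {2}                  # pure: every maximal chain has 2 edges
assert dim["2413"] - dim["21"] == 3    # dimension gap 3 > 2, so impure edges exist
impure = [(a, b) for a in covers for b in covers[a] if dim[b] - dim[a] > 1]
print(sorted(impure))
```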
We showed in \cref{lem:strongdis} that containing a strongly disconnected interval implies an interval is not shellable. So in this section we consider when an interval is strongly disconnected. First we look at the relationship between connectivity in $\mathcal{P}$ and $\mathcal{M}$. The connectivity of the interval $[\cl{m},\cl{p}]$ in $\mathcal{P}$ does not necessarily imply the same property for $[m,p]$ in $\mathcal{M}$. For example, the interval $[123,456123]$ is disconnected in $\mathcal{P}$ but the interval \begin{equation}\label{eq:a}\left[\patt{.25}{3}{1,2,3}[3/0,3/1,3/2][][][][][4] ,\patt{.25}{6}{4,5,6,1,2,3}[6/0,6/1,6/2][][][][][4]\right]\end{equation} is a chain in $\mathcal{M}$, so is connected. Furthermore, the interval $[321,521643]$ is connected in $\mathcal{P}$ but the interval \begin{equation}\label{eq:b}\left[\patt{.25}{3}{3,2,1}[1/3][][][][][4], \patt{.25}{6}{5,2,1,6,4,3}[1/5,1/6,4/6][][][][][4]\right]\end{equation} is strongly disconnected in~$\mathcal{M}$. Therefore, if $[\cl{m},\cl{p}]$ is (non-)shellable in $\mathcal{P}$, it does not follow that $[m,p]$ has the same property in $\mathcal{M}$. For example, $[123,456123]$ is not shellable but~\eqref{eq:a} is shellable, and $[321,521643]$ is shellable but \eqref{eq:b} is not shellable. In \cite{McSt13} the direct sum operation is used to show that almost all intervals of the permutation poset are not shellable in $\mathcal{P}$. We generalise the direct sum operation to mesh patterns. Given two permutations $\alpha=\alpha_1\ldots\alpha_a$ and~$\beta=\beta_1\ldots\beta_b$ the direct sum of the two is defined as $\alpha\oplus\beta=\alpha_1\ldots\alpha_a(\beta_1+a)(\beta_2+a)\ldots(\beta_b+a)$, that is, we increase the value of each letter of $\beta$ by the length of $\alpha$ and append it to $\alpha$. This can also be thought of in terms of the plots of $\alpha$ and $\beta$ by placing a copy of $\beta$ to the north east of $\alpha$.
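The direct sum of plain permutations is a one-line operation; a minimal sketch (illustrative only, checked against the underlying permutations of the direct sum example appearing later in \cref{fig:directsum}):

```python
def direct_sum(alpha, beta):
    """alpha (+) beta: append beta with each letter shifted up by len(alpha)."""
    return tuple(alpha) + tuple(b + len(alpha) for b in beta)

# The classical parts of the figure's example: 132 (+) 321 = 132654.
assert direct_sum((1, 3, 2), (3, 2, 1)) == (1, 3, 2, 6, 5, 4)
assert direct_sum((1,), (1,)) == (1, 2)
```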
Similarly we can define the skew-sum $\alpha\ominus\beta$ by prepending $\alpha$ to $\beta$ and increasing the value of each letter of $\alpha$ by the length of $\beta$. We extend these definitions to mesh patterns in the following way: \begin{defn}\label{defn:directsum} Consider two mesh patterns $s$ and $t$, where the top right corner of $s$ and bottom left corner of $t$ are not shaded. The direct sum~$s\oplus t$ has the classical pattern $\cl{s}\oplus\cl{t}$ and shaded boxes $\sh{s}\cup\{(i+|\cl{s}|,j+|\cl{s}|)\,|\,(i,j)\in\sh{t}\}$; moreover, for any shaded box $(i,|\cl{s}|)$, $(|\cl{s}|,i)$, $(j,|\cl{s}|)$ or $(|\cl{s}|,j)$, we shade all the boxes north, east, south or west of the box, respectively, for all $0\le i< |\cl{s}|$ and $|\cl{s}|< j\le |\cl{s}|+|\cl{t}|$. We similarly define the skew-sum for when the bottom right corner of $s$ and top left corner of $t$ are not shaded. \end{defn} The direct sum $s\oplus t$ can be considered as placing a copy of $t$ north east of $s$, where any shaded box that was on the old boundary is extended to the new boundary; see \cref{fig:directsum}. We define the direct sum in this way because it maintains one of the most important properties in the permutation sense, that the first $|\cl{s}|$ letters are an occurrence of $s$ and the final $|\cl{t}|$ letters are an occurrence of $t$. A permutation is said to be indecomposable if it cannot be written as the direct sum of smaller permutations. We generalise this to mesh patterns. \begin{defn} A mesh pattern $m$ is \emph{indecomposable} (resp. \emph{skew-\linebreak indecomposable}) if it cannot be written $m=a\oplus b$ (resp. $m=a\ominus b$), where neither $a$ nor $b$ is $m$. \end{defn} \begin{rem} It is well known that a permutation has a unique decomposition into indecomposable permutations. This implies that a mesh pattern also has a unique decomposition.
\end{rem} Using these definitions we can give a large class of strongly disconnected intervals, which is a mesh pattern generalisation of Lemma 4.2 in \cite{McSt13}. \begin{figure}\centering $\patt{.5}{3}{1,3,2}[1/3,2/2]\oplus\patt{.5}{3}{3,2,1}[0/3,1/2,2/1,3/3,3/0][][][][][4] =\patt{.5}{6}{1,3,2,6,5,4}[1/3,2/2,3/6,4/5,5/4,6/6,6/3,1/4,1/5,1/6,0/6,2/6,6/0,6/1,6/2][][][][][4]$ \caption{The direct sum of two mesh patterns.}\label{fig:directsum} \end{figure} \begin{lem} If $m$ is indecomposable, $\dim m > 1$ and $(0,0),(|m|,|m|)\not\in\sh{m}$, then $[m,m\oplus m]$ is strongly disconnected. \begin{proof} By Lemma 4.2 in \cite{McSt13} the interval $[\cl{m},\cl{m}\oplus \cl{m}]$ is strongly disconnected, with components $P_1=\{\cl{m}\oplus x\,|\,x\in [1,\cl{m})\}$ and $P_2=\{x\oplus \cl{m}\,|\,x\in [1,\cl{m})\}$. Consider any pair $\alpha,\beta\in[m,m\oplus m]$; if $\cl{\alpha}$ and~$\cl{\beta}$ are not in the same component of $[\cl{m},\cl{m}\oplus \cl{m}]$, then $\alpha$ and $\beta$ are incomparable. Let $\hat{P_1}=\{\alpha\,|\,\cl{\alpha}\in P_1\}$ and $\hat{P_2}=\{\alpha\,|\,\cl{\alpha}\in P_2\}$. However, $\hat{P_1}\cup\hat{P_2}\not=(m,m\oplus m)$ because it does not include the mesh patterns $\alpha$ with ${\cl{\alpha}=\cl{m}\oplus \cl{m}}$. There are exactly two occurrences of $m$ in $m\oplus m$. These are $\eta_1$, the first~$|m|$ letters, and $\eta_2$, the last $|m|$ letters. Note that each shaded box of~${m\oplus m}$ is used by at least one of $\eta_1$ and $\eta_2$, so if we deshade a box the resulting pattern $x$ contains at most one occurrence of $m$, either the first or last $|m|$ letters. Let $Q_1$ and $Q_2$ be the sets of patterns with underlying permutation $\cl{m}\oplus \cl{m}$ where the first and last $|m|$ letters are the only occurrence of $m$, respectively. So any element of $Q_1$ cannot contain an element in $P_2\cup Q_2$ and similarly any element of $Q_2$ cannot contain an element of ${P_1\cup Q_1}$.
Therefore, $P_1\cup Q_1$ and $P_2\cup Q_2$ are disconnected nontrivial components of $[m,m\oplus m]$. \end{proof} \end{lem} \begin{cor} If $m$ is skew-indecomposable, $(|m|,0),(0,|m|) \not\in\sh{m}$ and $\dim m>1$, then $[m,m\ominus m]$ is strongly disconnected. \end{cor} Using Lemma 4.2 in \cite{McSt13} it is shown that almost all intervals of the classical permutation poset are not shellable. The proof of this follows from the Marcus--Tardos theorem. We have seen this result does not apply in the mesh pattern case, so we cannot prove a similar result using this technique. A similar problem was studied for boxed mesh patterns in permutations in~\cite{AKV13}, which is equivalent to boxed mesh patterns in fully shaded mesh patterns. So we present the following open question: \begin{que} What proportion of intervals of $\mathcal{M}$ are shellable? \end{que} The M\"obius function in the permutation poset can be computed more easily by decomposing the permutations into smaller parts using the direct sum, or skew-sum; see \cite{BJJS11,McSt13}. This leads to the following question: \begin{que} Can a formula for the M\"obius function of $\mathcal{M}$ be obtained by decomposing mesh patterns using direct sums and skew-sums? \end{que} \section*{\refname}
\section{Introduction} In this paper, we shed new light on Langevin-based Monte Carlo algorithms by drawing connections to the Wasserstein gradient flow literature and the operator splitting approach to solving PDEs. In a seminal paper, \citet{jordan1998variational} expressed the solution of the Fokker--Planck equation as the gradient flow of the relative entropy functional (otherwise known as the KL-divergence) with respect to the $2$-Wasserstein distance. Their constructive proof used a time discretization approach that has since become known as the JKO scheme. We show that applying the JKO scheme in conjunction with a splitting approach to solving the Fokker--Planck equation reduces to a proximal version of the Unadjusted Langevin Algorithm. Our proofs rely heavily on the theory developed by \citet{ambrosio2005}, and have the benefit of holding for potentials that are not necessarily differentiable. In turn, this allows us to provide some new results regarding the convergence of the algorithm. Our work is related to \citet{durmus2016efficient}, and we will make comparisons to their theoretical results. To motivate the use of Langevin-based Monte Carlo algorithms, consider a log-concave target distribution $\pi$, given in terms of the Lebesgue density $\pi(x) = Z^{-1}e^{-V(x)}$, where $V:\mathbb{R}^d \to \mathbb{R}$ is a convex function, $d\in \mathbb{N}$ is the dimension, and $Z$ is the normalizing constant. In the case where $V$ is differentiable, we can associate with it the Langevin diffusion, given in terms of the It\^o stochastic differential equation \begin{equation}\label{eq:langevin} dX(t) = -\nabla V(X(t))dt + \sqrt{2}dW(t), \quad X(0) = X_0 \sim \rho_0. \end{equation} It represents the position $X(t) \in \mathbb{R}^d$ of a particle at time $t >0$, initialized at the random location $X_0 \sim \rho_0$, with drift according to the gradient of the potential $V$ and subject to random perturbations $dW(t)$. The process $W(t)$ is the standard Wiener process.
The density of $X(t)$ at time $t$, written $\rho(t)$, satisfies the linear Fokker--Planck equation: \begin{equation}\label{eq:fp} \frac{d\rho}{dt} = \text{div}(\rho \nabla V) + \Delta \rho, \quad \rho(0) = \rho_0. \end{equation} A classical result says that under quite weak convexity and smoothness conditions on $V$, the unique stationary solution of \eqref{eq:fp} is equal to $\pi$, and that convergence to $\pi$ is exponentially fast \citep[see for example][Chapter 4]{pavliotis2014stochastic}. These attractive properties have spawned a range of sampling algorithms targeting $\pi$ based on time discretizations of the process in \eqref{eq:langevin}. Notably, the Unadjusted Langevin Algorithm (ULA) and its Metropolis adjusted counterpart MALA have received much attention. The Unadjusted Langevin Algorithm is simply an explicit Euler discretization of \eqref{eq:langevin}: for a time-step $h> 0$ and for $k \geq 0$, \begin{equation} X_h^{k+1} = X_h^{k} - h\nabla V(X_h^{k}) + \sqrt{2h}\eta^{k+1}, \quad X_h^{0} = X_0, \end{equation} where $(\eta^{k})_{k\geq 1}$ is a sequence of independent $\mathcal{N}(0,\mathcal{I}_d)$ random variables and $\mathcal{I}_d$ is the $d$-dimensional identity matrix. In MALA, $X_h^{k+1}$ is either accepted or rejected in a Metropolis step with the purpose of removing the asymptotic bias of ULA stemming from discretization error. Originating with \citet{roberts1996exponential}, there has been a lot of interest in quantifying the performance of these algorithms, with early work primarily focusing on MALA \citep[see e.g.][]{jarner2000geometric,roberts2002langevin,pillai2012optimal,xifara2014langevin}. It was not until \citet{dalalyan2014theoretical}, who gave precise bounds for the total variation distance between the law of $X_h^{k}$ and $\pi$ in terms of $d, k$, and $h$, that ULA garnered similar attention. 
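For concreteness, the ULA recursion displayed above takes only a few lines of code. A sketch for the hypothetical one-dimensional target $V(x) = x^2/2$, so that $\nabla V(x) = x$ and $\pi = \mathcal{N}(0,1)$ (the step size and burn-in are illustrative choices, not taken from the cited works):

```python
import math
import random

# Unadjusted Langevin Algorithm for the toy target V(x) = x^2 / 2,
# whose gradient is x and whose target distribution is N(0, 1).
random.seed(0)

h = 0.05           # step size
x = 5.0            # X_h^0, deliberately far from the mode
samples = []
for k in range(20000):
    x = x - h * x + math.sqrt(2 * h) * random.gauss(0.0, 1.0)
    if k >= 2000:  # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var, 3))
```

Note the small asymptotic bias from discretization: for this target the stationary variance of the chain is $1/(1 - h/2)$ rather than $1$, which is the error MALA's accept/reject step removes.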
His results were further improved and extended to other metrics and discrepancies by \citet{durmus2016sampling,durmus2017nonasymptotic, cheng2017convergence, dalalyan2017further}. For instance, \citet{dalalyan2017user} show that if $V$ is strongly convex and has Lipschitz continuous gradient, then $\mathcal{O}(d/\varepsilon^2)$ iterations is sufficient for ULA to achieve an error of $\varepsilon$ in the $2$-Wasserstein distance. Similar results also hold in situations where only a (sufficiently regular) approximation of the gradient is available. In what follows, we will view Langevin-based Monte Carlo through the lens of Wasserstein gradient flow, and show that this perspective can lead to interesting results on the computational complexity of such algorithms. Wasserstein gradient flow was also used by \citet{cheng2017convergence} as a theoretical tool to study ULA, but our approach makes closer connections to the operator splitting literature, and as such leads to different results. We hope that further connections can have methodological implications in these fields, by considering the wide variety of JKO schemes, splitting schemes, and Langevin Monte Carlo algorithms that exist. The rest of this paper is structured as follows. Section \ref{sec:notation} defines the notation and states some important definitions, Section \ref{sec:wasserstein_gradient_flow} reviews some concepts from the Wasserstein gradient flow literature, Section \ref{sec:operator_splitting} briefly discusses the operator splitting approach to solving PDEs, Section 4 establishes connections between Wasserstein gradient flow, operator splitting and Langevin Monte Carlo and includes some convergence results, and Section 5 concludes. Proofs are given in the Appendix. \subsection{Notation and definitions}\label{sec:notation} Let $\|\cdot\|_p$ be the $\ell_p$-norm on $\mathbb{R}^d$, unless $p = 2$, in which case it reduces to the Euclidean distance and is denoted by $\|\cdot\|$. 
Define $\mathcal{P}_2(\mathbb{R}^d)$ to be the set of probability measures on $\mathbb{R}^d$ with finite second moments with respect to the Euclidean distance. The $2$-Wasserstein distance is a metric on $\mathcal{P}_2(\mathbb{R}^d)$, and is for any $\mu,\nu \in \mathcal{P}_2(\mathbb{R}^d)$ defined by \begin{equation} \mathcal{W}_2(\mu,\nu) = \left(\inf_{\gamma \in \Gamma(\mu,\nu)} \int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\gamma(x,y)\right)^{\frac{1}{2}}, \end{equation} where $\Gamma(\mu,\nu)$ is the set of all joint distributions with marginals $\mu$ and $\nu$. A desirable feature of the $2$-Wasserstein distance is that $\mathcal{W}_2(\mu_n, \mu) \to 0$ as $n \to \infty$ if and only if $\mu_n$ converges weakly to $\mu$ and the corresponding sequence of second moments also converges \citep[Theorem 6.9]{villani2008}. The entropy and potential energy functionals, $\rho \mapsto \mathcal{H}(\rho)$ and $\rho \mapsto \mathcal{V}(\rho)$ respectively, are given by \begin{equation}\label{eq:entropy} \mathcal{H}(\rho) = \begin{cases} \int \log \rho d\rho & \text{for $\rho \ll \mu_{\text{Leb}}$}, \\ +\infty & \text{otherwise}, \end{cases} \end{equation} where $\mu_{\text{Leb}}$ denotes the Lebesgue measure on $\mathbb{R}^d$, and \begin{equation}\label{eq:internal_energy} \mathcal{V}(\rho) = \int V d\rho. \end{equation} The relative entropy functional $\rho \mapsto \mathcal{H}(\rho | \pi)$, also called the KL-divergence, is given by \begin{equation}\label{eq:relative_entropy} \mathcal{H}(\rho | \pi) = \mathcal{H}(\rho) + \mathcal{V}(\rho) + \log Z. \end{equation} An important concept in optimal transport, which will play a significant role later, is the notion of displacement convexity.
A functional $\rho \mapsto \mathcal{F}(\rho)$ is said to be $\lambda$-displacement convex for some $\lambda \in\mathbb{R}$ if, for all $t\in [0,1]$, \begin{equation} \mathcal{F}(\mu_t) \leq (1-t)\mathcal{F}(\mu_0) + t\mathcal{F}(\mu_1) - \frac{\lambda}{2}t(1-t)\mathcal{W}_2^2(\mu_0,\mu_1) \end{equation} for any constant speed geodesic $\mu:[0,1]\to \mathcal{P}_2(\mathbb{R}^d)$. A curve $\mu:[0,1]\to \mathcal{P}_2(\mathbb{R}^d)$ is a constant speed geodesic if, for any $0\leq s\leq t\leq 1$, we have that $\mathcal{W}_2(\mu_s,\mu_t) = (t-s)\mathcal{W}_2(\mu_0,\mu_1).$ We use the following notation for the density of a Gaussian distribution with zero mean and covariance matrix $2t\mathcal{I}_d$: \begin{equation}\label{eq:gaussian_kernel} \phi_t(x) = \frac{1}{(4\pi t)^{d/2}}\exp\left(- \frac{\|x\|^2}{4t}\right). \end{equation} By a Markov operator, we mean a functional $R$ that maps the set of non-negative Lebesgue integrable functions into itself. A family of Markov operators $(R_t)_{t\geq0}$ is called a Markov semigroup if $R_0$ is the identity map, $R_{t+s} = R_tR_s$ for any $s,t\geq 0$, and the map $t\mapsto R_t f$ is continuous for any non-negative and Lebesgue integrable $f$. \section{Wasserstein gradient flow} \label{sec:wasserstein_gradient_flow} The theory of gradient flows in the space of probability measures was pioneered by Ambrosio, Gigli and Savar\'e in their book \citet{ambrosio2005}, generalizing the variational structure \citet{jordan1998variational} had used to describe the diffusion and Fokker--Planck equations. With Langevin Monte Carlo in mind, we provide only a brief introduction to this theory, and refer to the aforementioned references and the accessible review of \citet{santambrogio2016euclidean} for further details. We first consider continuous time flows, which will lead to a useful perspective on generalizations of the continuous time processes in \eqref{eq:langevin} and \eqref{eq:fp}. 
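As a quick illustration of the $2$-Wasserstein definition above (a standard fact, not from the text): in one dimension the monotone coupling obtained by sorting is optimal, so $\mathcal{W}_2$ between two empirical measures with equally many atoms can be computed directly:

```python
import math

def w2_empirical(xs, ys):
    """2-Wasserstein distance between uniform empirical measures on equally
    many atoms: in 1-D the sorted (monotone) coupling attains the infimum."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs))

assert w2_empirical([0.0, 1.0], [1.0, 2.0]) == 1.0       # translation by 1
assert w2_empirical([3.0, 1.0, 2.0], [1.0, 2.0, 3.0]) == 0.0  # same measure
```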
Secondly, we consider the time discretizations through which the existence and uniqueness of gradient flows are typically established. Although they were originally introduced as theoretical tools in the literature, it will later become clear that Langevin Monte Carlo in fact numerically approximates such a time discretization. \subsection{Continuous time flows} In Euclidean space, a curve $x:[0,\infty) \to \mathbb{R}^d$ is the gradient flow, or steepest descent, of a differentiable function $f:\mathbb{R}^d \to \mathbb{R}$ if \begin{equation}\label{eq:grad_flow_euclidean} \frac{dx}{dt} = -\nabla f(x), \quad {x(0) = x_0}. \end{equation} By analogy, one can interpret the gradient flow of a functional $\mathcal{F} :\mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ to be a curve $\rho:[0,\infty) \to \mathcal{P}_2(\mathbb{R}^d)$ that satisfies \begin{equation}\label{eq:grad_flow_wasserstein} \frac{d\rho}{dt} = -\nabla_{\mathcal{W}_2} \mathcal{F}(\rho), \quad {\rho(0) = \rho_0}, \end{equation} for some generalized notion of gradient $\nabla_{\mathcal{W}_2}$, in terms of the $\mathcal{W}_2$ metric. For sufficiently regular $\rho$ and $\mathcal{F}$, $\nabla_{\mathcal{W}_2} \mathcal{F}(\rho)$ corresponds to $-\text{div}(\rho \nabla\frac{ \delta\mathcal{F}}{\delta \rho})$, where $\delta\mathcal{F} /\delta \rho$ is the first variation of $\mathcal{F}$. Applied to the functional of interest, namely $\mathcal{F}(\rho) = \mathcal{H}(\rho | \pi)$, one has that $\delta\mathcal{F} /\delta \rho = V + \log \rho + 1$. Thus, if $V$ is differentiable one recovers \eqref{eq:fp} \citep[see e.g.][Lemma 10.4.1]{ambrosio2005}. Due to the technically challenging nature of defining Wasserstein gradients this way when $V$ is not differentiable, we instead adopt the definition given in \citet{ambrosio2009existence}, inspired by the characterization of gradient flows in terms of evolution variational inequalities (EVIs) shown in \citet[][Theorem 11.1.4]{ambrosio2005}. 
In particular, we say that a continuous curve $\rho : (0,+\infty) \to \mathcal{P}_2(\mathbb{R}^d)$ is a gradient flow of a $\lambda$-displacement convex functional $\mathcal{F}$ if \begin{equation}\label{eq:grad_flow} \frac{d}{dt} \frac{1}{2} \mathcal{W}_2^2(\rho(t),\nu) + \frac{\lambda}{2}\mathcal{W}_2^2(\rho(t),\nu) + \mathcal{F}(\rho(t)) \leq \mathcal{F}(\nu), \end{equation} holds in the sense of distributions, for all $\nu\in \mathcal{D}(\mathcal{F}) = \{\mu \in \mathcal{P}_2(\mathbb{R}^d) : \mathcal{F}(\mu) < +\infty\}$. The flow is said to start from $\rho_0$ if $\mathcal{W}_2(\rho(t),\rho_0) \to 0$ as $t \to 0$. Here, ``in the sense of distributions'' means that for all infinitely differentiable and compactly supported test functions, denoted $f \in C_c^\infty((0,\infty); \mathbb{R})$, such that $f\geq 0$, we have \begin{equation} \label{eq:sense_of_distributions} -\frac{1}{2}\int_0^\infty\mathcal{W}_2^2(\rho(t),\nu)f^\prime(t) dt \leq \int_0^\infty \left[\mathcal{F}(\nu)- \mathcal{F}(\rho(t)) - \frac{\lambda}{2}\mathcal{W}_2^2(\rho(t),\nu) \right]f(t) dt. \end{equation} The connection between \eqref{eq:grad_flow} and \eqref{eq:sense_of_distributions} can be seen by imagining the left hand side of \eqref{eq:sense_of_distributions} being integrated by parts. One of the most attractive features of the process $\rho(t)$ is its convergence properties. When $\lambda > 0$, the functional $\rho \mapsto \mathcal{F}(\rho)$ has a unique minimum $\bar{\rho}$, and Theorem 11.2.1 of \citet{ambrosio2005} states that for any $t\geq 0$, \begin{equation}\label{eq:contraction_F} \mathcal{W}_2(\rho(t),\bar{\rho}) \leq \mathcal{W}_2(\rho_0,\bar{\rho})e^{-\lambda t} \quad \text{and}\quad \mathcal{F}(\rho(t)) - \mathcal{F}(\bar{\rho}) \leq \left[\mathcal{F}(\rho_0) - \mathcal{F}(\bar{\rho})\right]e^{-2\lambda t}. \end{equation} Convergence results also exist in the case where $\lambda = 0$, but do not yield the exponential convergence observed above. 
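The contraction \eqref{eq:contraction_F} can be verified in closed form for the hypothetical one-dimensional example $V(x) = \lambda x^2/2$, where $\rho(t)$ remains Gaussian and the $2$-Wasserstein distance between Gaussians $\mathcal{N}(m_1,s_1^2)$ and $\mathcal{N}(m_2,s_2^2)$ is $\sqrt{(m_1-m_2)^2 + (s_1-s_2)^2}$ (standard facts, not from the text):

```python
import math

lam = 2.0                      # strong convexity of V(x) = lam * x**2 / 2
m0, s0 = 3.0, 0.5              # initial Gaussian rho_0 = N(m0, s0^2)
s_inf = 1.0 / math.sqrt(lam)   # stationary distribution pi = N(0, 1/lam)

def w2_gauss(m1, s1, m2, s2):
    """2-Wasserstein distance between one-dimensional Gaussians."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

w0 = w2_gauss(m0, s0, 0.0, s_inf)
for t in (0.1, 0.5, 1.0, 2.0, 5.0):
    # Closed-form mean and variance of rho(t) for the OU process dX = -lam X dt + sqrt(2) dW.
    m_t = m0 * math.exp(-lam * t)
    s_t = math.sqrt(s_inf ** 2 + (s0 ** 2 - s_inf ** 2) * math.exp(-2 * lam * t))
    w_t = w2_gauss(m_t, s_t, 0.0, s_inf)
    assert w_t <= w0 * math.exp(-lam * t) + 1e-12   # the contraction bound
    print(t, round(w_t, 6))
```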
This result can be applied to the relative entropy by making the following observations: when $V$ is $\lambda$-strongly convex with $\lambda > 0$, it follows that $\rho\mapsto\mathcal{V}(\rho)$ is $\lambda$-displacement convex \citep[Proposition 9.3.2]{ambrosio2005}. In turn, this implies that $\rho\mapsto \mathcal{H}(\rho | \pi)$ is $\lambda$-displacement convex. Recall that $\mathcal{H}(\rho | \pi) \geq 0$ for any $\rho$, and that $\rho \mapsto \mathcal{H}(\rho | \pi)$ is uniquely minimized at $\pi$ due to the strict convexity of the function $x \mapsto x \log x$ for $x > 0$ appearing in $\mathcal{H}(\rho)$, and Jensen's inequality. The result in \eqref{eq:contraction_F} can then be formulated as \begin{equation}\label{eq:contraction} \mathcal{W}_2(\rho(t),\pi) \leq \mathcal{W}_2(\rho_0,\pi)e^{-\lambda t} \quad \text{and}\quad \mathcal{H}(\rho(t) | \pi) \leq \mathcal{H}(\rho_0 | \pi)e^{-2\lambda t}. \end{equation} This is a more general statement of the exponential convergence to $\pi$ of the solution to the Fokker--Planck equation mentioned in the introduction, and is as such one of the main motivations for studying Langevin Monte Carlo algorithms. \subsection{Time discretized flows} An important theoretical tool in establishing the existence of gradient flows is the minimizing movement scheme, often also called the JKO scheme. For a time-step $h>0$, $k \geq 0$, and $\rho_h^0 = \rho_0$, consider the iterated minimization problems \begin{equation}\label{eq:jko} \rho_h^{k+1} = \operatornamewithlimits{argmin\,}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \mathcal{F}(\rho) + \frac{1}{2h}\mathcal{W}_2^2(\rho,\rho_h^k). \end{equation} Such minimizers exist and are unique under weak assumptions, such as lower semi-continuity and (strong) displacement convexity of $\mathcal{F}$ \citep[see e.g.][Proposition 4.2]{ambrosio2009existence}. 
Both of these conditions hold for the relative entropy functional $\rho\mapsto \mathcal H( \rho | \pi)$ when $V$ is convex: the first property holds in more generality and is well-known, whereas the second was proved in \citet{mccann1997convexity}. In the Euclidean setting, the sequence $(x_h^k)_{k\geq 0}$ is an implicit Euler discretization with step-size $h$ of the gradient flow of $f:\mathbb{R}^d \to \mathbb{R}$ given in \eqref{eq:grad_flow_euclidean} with initial condition $x_h^0 = x_0$ if \begin{equation}\label{eq:implicit_euler} x_h^{k+1} = \operatornamewithlimits{argmin\,}_{y \in \mathbb{R}^d} f(y) + \frac{1}{2h}\|x_h^{k}-y\|^2. \end{equation} The map defined by the right hand side of \eqref{eq:implicit_euler} is often written $\text{prox}_f^h(x_h^k)$ in the optimization literature, and is referred to as the proximal operator \citep[see e.g.][]{parikh2014proximal}. By analogy, the JKO scheme \eqref{eq:jko} can be seen as an implicit Euler discretization of the flow in \eqref{eq:grad_flow_wasserstein}. It was this time discretization scheme applied to the functional $\rho \mapsto \mathcal{H}(\rho | \pi)$ that \citet{jordan1998variational} employed, showing that the interpolation \begin{equation}\label{eq:interpolation} \rho^h(t) = \rho_h^{k+1} \quad \text{for $t \in (kh, (k+1)h]$} \end{equation} converges (in some formal sense) to the solution of the Fokker--Planck equation as $h \to 0$, in the case where $V$ is smooth and satisfies certain growth conditions. Building on results by \citet{cepa1998problame}, \citet{ambrosio2009existence} used a minimizing movement scheme to show existence and uniqueness of the gradient flow of the relative entropy functional given any convex $V$. 
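The Euclidean proximal map in \eqref{eq:implicit_euler} admits closed forms for simple $f$; a sketch with two standard examples (a quadratic, and the absolute value, whose proximal map is soft-thresholding), checked against a brute-force grid search (the examples are illustrative, not from the text):

```python
def prox_quadratic(x, h):
    # prox of f(y) = y^2 / 2: argmin_y y^2/2 + (x - y)^2 / (2h) = x / (1 + h)
    return x / (1.0 + h)

def prox_abs(x, h):
    # prox of f(y) = |y|: soft-thresholding of x at level h
    return max(abs(x) - h, 0.0) * (1.0 if x >= 0 else -1.0)

def prox_grid(f, x, h, lo=-10.0, hi=10.0, steps=200001):
    """Brute-force minimizer of f(y) + (x - y)^2 / (2h) on a fine grid."""
    best, best_val = lo, float("inf")
    for k in range(steps):
        y = lo + (hi - lo) * k / (steps - 1)
        val = f(y) + (x - y) ** 2 / (2.0 * h)
        if val < best_val:
            best, best_val = y, val
    return best

assert abs(prox_quadratic(2.0, 0.5) - prox_grid(lambda y: y * y / 2, 2.0, 0.5)) < 1e-3
assert abs(prox_abs(2.0, 0.5) - prox_grid(abs, 2.0, 0.5)) < 1e-3
assert prox_abs(0.3, 0.5) == 0.0   # small inputs are thresholded to zero
```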
In particular, they show that there exists a semigroup $(P_t)_{t\geq 0}$ and a unique Markov family $\{\mathbb{P}_x : x \in \mathbb{R}^d\}$ of probability measures on $(\mathbb{R}^d)^{[0,+\infty)}$ such that $\mathbb{E}_x f(X_t) = P_t f(x)$ for all bounded Borel functions $f$ and all $x \in\mathbb{R}^d$. Moreover, it is shown that $\{\mathbb{P}_x : x \in \mathbb{R}^d\}$ is reversible with respect to $\pi$, and that $\pi$ is uniquely invariant for $(P_t)_{t\geq 0}$. Restricting $(P_t)_{t\geq 0}$ to indicator functions of Borel sets $B \in \mathcal{B}(\mathbb{R}^d)$, we define $(R_t)_{t\geq 0}$ by $R_t \rho_0(B) = \int P_t 1_B d\rho_0$. The process $\rho(t) = R_t \rho_0$ then uniquely satisfies \eqref{eq:grad_flow} and the associated properties outlined in the previous section. After originally being introduced as a theoretical tool, there has recently been interest in developing numerical implementations of the JKO scheme for solving PDEs. Several Eulerian grid-based approaches exist, see e.g. \citet{burger2012regularized,carrillo2015finite,peyre2015entropic}. By virtue of being grid-based, these have limited application in the high-dimensional sampling setting. It will later be seen that Langevin-based Monte Carlo can be considered a Lagrangian scheme using a particle approximation to the gradient flow. Other Lagrangian approaches have been considered by e.g. \citet{carrillo2015numerical,benamou2016discretization,carrillo2017blob}. These methods are typically adapted to accurately solving PDEs in two or three dimensions, and do not scale well with $d$. For instance, \citet{carrillo2017blob} used the modified relative entropy functional \begin{equation} \mathcal{F}_\gamma(\rho) = \int \log (\phi_\gamma * \rho) d\rho + \int V d\rho + \log Z, \end{equation} where $\phi_\gamma(x) = \gamma^{-d} \phi(x/\gamma)$ denotes a mollifier, typically a Gaussian kernel with standard deviation $\gamma>0$.
This modification makes the functional well-behaved when evaluated at an empirical measure, with the first term providing a kernel-based estimate of the entropy of the underlying distribution. For small time steps $h$, their algorithm reduces to solving a system of ODEs to evolve the particles in the empirical measure. The application of this approach to the high-dimensional setting is limited by the kernel-based estimate of entropy. \section{Operator splitting}\label{sec:operator_splitting} In the previous section, we alluded to the idea that Langevin Monte Carlo numerically approximates the time discretizations used to theoretically study Wasserstein gradient flows. Before making this connection clear, we first need to introduce the concept of operator splitting. Consider the generic Cauchy problem \begin{equation}\label{eq:cauchy_problem} \frac{df}{dt} = \mathcal{A}(f), \quad f(0) = f_0, \end{equation} with solution given by $f(t) = S_t f_0$ in semigroup notation. In many situations, the operator $\mathcal{A}$ can be split into the sum of two simpler operators: $\mathcal{A} = \mathcal{A}_1 + \mathcal{A}_2$. Let $f_j(t) = S^j_t f_0$ for $j = 1, 2$ denote the solutions to the problems \begin{equation}\label{eq:cauchy_problem_split} \frac{df_j}{dt} = \mathcal{A}_j(f_j), \quad f_j(0) = f_0. \end{equation} One can hope to estimate the solution $f$ of \eqref{eq:cauchy_problem} via $f(t) \approx (S^2_{t/n} S^1_{t/n})^n f_0$ for some large positive integer $n$, which can be justified if a Lie--Trotter--Kato product formula of the form \begin{equation}\label{eq:trotter} f(t) = \lim_{n \to +\infty} (S^2_{t/n} S^1_{t/n})^n f_0 \end{equation} holds. The book of \citet{holden2010splitting} contains a thorough overview of such results. 
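The product formula \eqref{eq:trotter} can be checked numerically in the linear case, where each sub-flow is a matrix exponential. The sketch below (the matrices are arbitrary non-commuting examples of our own choosing, and `scipy.linalg.expm` supplies the exact sub-flows) shows the splitting error shrinking as $n$ grows:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative check of the Lie--Trotter product formula for a linear
# Cauchy problem df/dt = (A1 + A2) f; the matrices are arbitrary
# non-commuting examples, and expm supplies the exact sub-flows.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])
f0 = np.array([1.0, 0.0])
t = 1.0

exact = expm(t * (A1 + A2)) @ f0

def trotter(n):
    """n split steps: alternate the exact flows of A1 and A2."""
    step = expm((t / n) * A2) @ expm((t / n) * A1)
    return np.linalg.matrix_power(step, n) @ f0

err_coarse = np.linalg.norm(trotter(4) - exact)
err_fine = np.linalg.norm(trotter(256) - exact)
print(err_fine < err_coarse)   # the error shrinks as n grows
```

When $A_1$ and $A_2$ commute the splitting is exact for any $n$; the error is driven by the commutator, which is why non-commuting matrices are chosen above.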
Returning to the Fokker--Planck equation \eqref{eq:fp}, there is a natural split between the transport part of the equation: \begin{equation}\label{eq:fp_transport} \frac{d\rho}{d t} = \text{div}(\rho \nabla V), \quad \rho(0) = \rho_0, \end{equation} and the diffusion part: \begin{equation}\label{eq:fp_heat} \frac{d \rho}{d t} = \Delta \rho, \quad \rho(0) = \rho_0. \end{equation} In his Ph.D. thesis, \citet{stojkovic2011geometric} considers such a split for the Fokker--Planck equation with smooth drift satisfying a monotonicity property, but which is not necessarily a gradient. \citet{bowles2015weak} also consider this split for the fractional Fokker--Planck equation, where the Laplacian in the diffusion equation \eqref{eq:fp_heat} is replaced by a fractional Laplacian. In both of these works, operator splitting is introduced as a theoretical tool to establish the existence of solutions to generalized Fokker--Planck equations, but they consider neither numerical aspects nor the general case of convex $V$. The splitting interpretation carries over to the Wasserstein gradient flow formulation, where the transport equation \eqref{eq:fp_transport} can be interpreted as the gradient flow of the potential energy functional $\rho \mapsto \mathcal{V}(\rho)$, and the diffusion equation \eqref{eq:fp_heat} can be interpreted as the gradient flow of the entropy functional $\rho \mapsto \mathcal{H}(\rho)$. We now take a brief closer look at these two gradient flows. \subsection{The transport equation} In addition to the formulation in \eqref{eq:grad_flow}, the gradient flow of $\rho \mapsto \mathcal{V}(\rho)$ can be characterized by the semigroup $(T_t)_{t\geq 0 }$, induced by the differential inclusion \begin{equation} \label{eq:transport_map} \frac{d}{dt}T_t(x) \in -\partial V(T_t(x)), \quad \text{$T_0(x) = x\quad$ for all $x$ s.t. $V(x)<+\infty$}. 
\end{equation} According to Theorem 11.2.3 of \citet{ambrosio2005}, there exists a unique gradient flow of $\rho \mapsto \mathcal{V}(\rho)$ and a unique solution to \eqref{eq:transport_map}. This gradient flow satisfies $\rho(t) = (T_t)_\#\rho_0$, where $(T_t)_\#$ denotes the push-forward map associated with $T_t$. The corresponding JKO scheme performs minimizations of the form \begin{equation} \rho_h^{k+1} = \operatornamewithlimits{argmin\,}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \mathcal{V}(\rho) + \frac{1}{2h}\mathcal{W}_2^2(\rho,\rho_h^{k}). \end{equation} By the proof of Proposition 10.4.2 in \citet{ambrosio2005}, it is clear that these steps are well-defined. Moreover, the map $\mathcal{T}_h(x) = \text{prox}_V^h(x)$ is such that $\rho_h^{k+1} = (\mathcal{T}_h)_\#\rho_h^{k}$. Since the proximal operator satisfies $y = \text{prox}_V^h(x) \iff (x-y)/h \in \partial V(y)$ \citep[see e.g.][]{parikh2014proximal}, this can be seen as an implicit Euler step for the evolution of $T_t$ given in \eqref{eq:transport_map}. \subsection{The diffusion equation} The classical diffusion equation \eqref{eq:fp_heat}, also known as the heat equation, was first described as the gradient flow of the entropy functional $\rho \mapsto \mathcal{H}(\rho)$ on the set of densities in $\mathcal{P}_2(\mathbb{R}^d)$ by \citet{jordan1998variational}. Note that $\mathcal{H}(\rho)$ is the negative Gibbs--Boltzmann entropy of $\rho$. As pointed out in the aforementioned paper, the interpretation of the diffusion equation as the gradient flow of $\mathcal{H}$ therefore provides a natural interpretation of diffusion as the tendency of a system to maximize entropy. Unlike the other gradient flows we have discussed, the flow of $\rho \mapsto \mathcal{H}(\rho)$ is known in closed form: it is well-known that the solution of the diffusion equation \eqref{eq:fp_heat} is given by the density $\rho(t) = \phi_t * \rho_0$, where $\phi_t$ is the Gaussian kernel defined in \eqref{eq:gaussian_kernel}. 
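On the particle level, the closed-form flow $\rho(t) = \phi_t * \rho_0$ amounts to adding independent Gaussian noise to samples. The sketch below assumes $\phi_t$ is the heat kernel with variance $2t$ (the convention under which it solves \eqref{eq:fp_heat}); the initial law and time horizon are illustrative:

```python
import numpy as np

# Sketch (assuming phi_t is the heat kernel with variance 2t, which solves
# d rho/dt = Delta rho): on the particle level, convolving the law of X_0
# with phi_t is the same as adding independent N(0, 2t) noise.
rng = np.random.default_rng(0)
t = 0.7
x0 = rng.normal(0.0, 1.0, size=200_000)        # rho_0 = N(0, 1)
xt = x0 + rng.normal(0.0, np.sqrt(2 * t), size=x0.shape)

# N(0, 1) convolved with N(0, 2t) is N(0, 1 + 2t)
print(abs(xt.var() - (1 + 2 * t)) < 0.05)
```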
\section{Proximal Langevin Monte Carlo} We are now ready to describe connections between JKO discretized gradient flows, operator splitting, and Langevin-based Monte Carlo algorithms. For a time-step $h>0$ and for $k\geq 0$, consider the iterative scheme \begin{equation}\label{eq:numerical_steps} \rho_h^{k+1/2} = (\mathcal{T}_h)_\# \rho_h^{k}, \quad \quad \rho_h^{k+1} = \phi_h * \rho_h^{k+1/2}, \end{equation} which can be seen as alternating between performing a JKO step for the gradient flow of $\rho \mapsto \mathcal{V}(\rho)$ and solving the exact gradient flow of $\rho \mapsto \mathcal{H}(\rho)$. Taking instead the particle perspective, let $X_h^{0} \sim \rho_0$ and perform \begin{equation}\label{eq:prox_lmc} X_h^{k+1/2} = \mathcal{T}_h(X_h^{k}) = \text{prox}_V^h(X_h^{k}), \quad \quad X_h^{k+1} = X_h^{k+1/2} + \sqrt{2h}\eta^{k+1}, \end{equation} where $(\eta^{k})_{k\geq 1}$ is a sequence of independent $\mathcal{N}(0,\mathcal{I}_d)$ random variables. For each $k$, the laws of $X_h^{k+1/2}$ and $X_h^{k+1}$ are equal to $\rho_h^{k+1/2}$ and $\rho_h^{k+1}$ respectively. A generalization of this algorithm was proposed by \citet{pereyra2016proximal} and studied further in \citet{durmus2016efficient}. Note that $\text{prox}_V^h(x) = x - h \nabla M_V^h(x)$, where \begin{equation} M_V^h(x) = \inf_{y \in \mathbb{R}^d}\left\{V(y) + \frac{1}{2h}\|x - y\|^2\right\} \end{equation} is the Moreau--Yosida regularization of $V$. Moreover, in the case where $V$ is twice differentiable with positive definite Hessian $D^2V(x)$ for every $x\in\mathbb{R}^d$, it is known that $\text{prox}_V^h(x) = x - h\nabla V(x) + o(h)$ as $h\to 0$ \citep[see e.g.][Section 3.3]{parikh2014proximal}. Hence, for small $h$, the steps in \eqref{eq:prox_lmc} can be thought of as approximating the Unadjusted Langevin Algorithm. 
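A minimal sketch of the scheme \eqref{eq:prox_lmc} can be given for the simple strongly convex potential $V(x) = x^2/2$ (so $\pi = \mathcal{N}(0,1)$ and the proximal map is $x/(1+h)$ in closed form); the step size, particle count, and number of steps below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of the proximal Langevin scheme for V(x) = x^2 / 2, so that
# pi = N(0, 1) and prox_V^h(x) = x / (1 + h) in closed form.
rng = np.random.default_rng(1)
h = 0.01
n_particles, n_steps = 20_000, 1_000

x = np.zeros(n_particles)                  # rho_0 = delta_0
for _ in range(n_steps):
    x = x / (1 + h)                        # proximal (implicit Euler) step
    x += np.sqrt(2 * h) * rng.normal(size=n_particles)  # exact heat flow

# for small h, the empirical law should be close to pi = N(0, 1)
print(abs(x.mean()) < 0.03, abs(x.var() - 1.0) < 0.08)
```

For this linear-Gaussian example the stationary variance of the chain can be computed exactly, $2(1+h)^2/(2+h)$, which tends to $1$ as $h \to 0$; the discretization bias visible for finite $h$ is exactly what the convergence analysis of the next subsection quantifies.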
\subsection{Convergence analysis} We follow the approach of \citet{clement2011trotter}, which itself is an adaptation of the methods in \citet[Chapter 4]{ambrosio2005}, to establish that the scheme in \eqref{eq:numerical_steps} satisfies a Lie--Trotter--Kato formula. We will also derive an upper bound on the 2-Wasserstein distance between the interpolation $\rho^h(t) = \rho_h^{k+1}$ for $t \in (kh, (k+1)h]$ and the gradient flow $\rho(t)$ of $\rho \mapsto \mathcal{H}(\rho | \pi)$. In turn, this allows us to bound the quantity of interest, $\mathcal{W}_2(\rho^h(t),\pi)$. Before stating the main results, we introduce some notation. For any $n \geq 1$ and any $0\leq k \leq n-1$, define the quantities \begin{equation} \delta_{h}^{k+1} = \mathcal{V}(\rho_{h}^{k+1}) -\mathcal{V}(\rho_{h}^{k+1/2}), \qquad \Delta_{h}^{k+1} = \sum_{j=1}^{k+1} \delta_{h}^{j}. \end{equation} Note that $\delta_{h}^{k+1}$ can also be expressed as \begin{equation} \delta_{h}^{k+1} = \mathbb{E}V(X+\eta) - \mathbb{E}V(X), \end{equation} where $X\sim \rho_{h}^{k+1/2}$ and $\eta \sim \mathcal{N}(0,2h\mathcal{I}_d)$ independently. By convexity of $V$ and Jensen's inequality, $\delta_{h}^{k+1} \geq \mathbb{E}V(\mathbb{E}(X+\eta \,|\, X)) - \mathbb{E}V(X) = 0$, since $\mathbb{E}(X+\eta \,|\, X) = X$. The next results show that controlling these quantities is sufficient to establish convergence. We also remark that if one has access to independent runs of the algorithm given in \eqref{eq:prox_lmc}, one can estimate $\delta_{h}^{k+1}$ by averaging $V(X_h^{k+1}) - V(X_h^{k+1/2})$ across those runs. \begin{theorem}\label{theorem:converge_to_gradient_flow} Let $(\rho^{h_m}(t))_{m\geq1}$ be a sequence of discrete solutions generated from $\rho_0$, such that $h_m\Delta_{h_m}^m \to 0$ and $h_m m \to T$ for some $T >0$, as $m \to \infty$. Then, $\rho^{h_m}(t)$ converges uniformly on $[0,T]$ to $\rho(t)$, the gradient flow of $\rho\mapsto\mathcal{H}(\rho | \pi)$ started from $\rho_0$. 
Moreover, if $h >0$ and $n\geq 1$ are such that $hn \leq T$, then for any $t \in [0,hn]$, \begin{equation}\label{eq:uniform_approx_of_gradient_flow} \mathcal{W}_2 (\rho^{h}(t), \rho(t)) \leq \sqrt{6h \left(\mathcal{H}(\rho_{0} | \pi) + \Delta_{h}^{n}\right)}. \end{equation} \end{theorem} The corollary below follows from combining \eqref{eq:contraction} and \eqref{eq:uniform_approx_of_gradient_flow} via the triangle inequality. \begin{corollary}\label{cor:convergence} Suppose $V$ is $\lambda$-strongly convex. Then, under the assumptions of Theorem \ref{theorem:converge_to_gradient_flow}, we have \begin{equation} \mathcal{W}_2 (\rho^{h}(t),\pi) \leq \sqrt{6h \left(\mathcal{H}(\rho_{0} | \pi) + \Delta_{h}^{n}\right)} + \mathcal{W}_2(\rho_0,\pi)e^{-\lambda t}, \end{equation} for any $t \in [0,hn]$, where $h> 0$ and $n\geq 1$. \end{corollary} \subsection{Explicit rates} It is clear that the rate at which $h\Delta_{h}^{n} \to 0$ as $h\to 0$ is crucial in determining the quality of the approximation $\rho^{h}(t)$. Under some assumptions on $\rho_0$ and $V$, we can obtain explicit bounds on $\Delta_{h}^{n}$ in terms of $h, n$, and $d$. In this section, we take ``$a \sim b$'' to mean ``$a$ is of the same order as $b$''. For instance, suppose $V = f + g$, where $f$ is $\lambda$-strongly convex and has Lipschitz continuous gradient, and $g$ is convex and Lipschitz. More explicitly, assume that there exist $M(d)$ and $L(d)$ such that for all $x,y \in \mathbb{R}^d$, \begin{align} \|\nabla f(x) - \nabla f(y)\| &\leq M(d)\|x-y\| \\ |g(x) - g(y)| &\leq L(d)\|x-y\|, \end{align} where the notation $M(d)$ and $L(d)$ reflects potential dependence of the Lipschitz constants on dimension. 
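For a concrete composite potential of this form, the proximal map is available in closed form. The one-dimensional choice $f(x) = (\lambda/2)x^2$, $g(x) = \tau|x|$ below is our own illustration (so $g$ is Lipschitz with $L = \tau$), giving $\text{prox}_V^h(x) = \operatorname{soft}(x, h\tau)/(1+h\lambda)$, checked here against a brute-force grid minimization:

```python
import numpy as np

# Hedged sketch: for the 1-d composite potential V(x) = (lam/2) x^2 + tau |x|
# the proximal map has the closed form
#   prox_V^h(x) = sign(x) max(|x| - h*tau, 0) / (1 + h*lam).
lam, tau, h = 2.0, 1.0, 0.3   # illustrative values

def prox_closed_form(x):
    return np.sign(x) * np.maximum(np.abs(x) - h * tau, 0.0) / (1 + h * lam)

def prox_numeric(x):
    """Brute-force minimization of V(y) + (x - y)^2 / (2h) over a fine grid."""
    y = np.linspace(-5, 5, 2_000_001)
    obj = 0.5 * lam * y**2 + tau * np.abs(y) + (x - y) ** 2 / (2 * h)
    return y[np.argmin(obj)]

for x in [-2.0, -0.1, 0.0, 0.25, 3.0]:
    print(abs(prox_closed_form(x) - prox_numeric(x)) < 1e-4)
```

Note the soft-thresholding effect: inputs with $|x| \leq h\tau$ are mapped exactly to zero, which is what makes proximal schemes attractive for non-smooth $g$ such as the $\ell_1$ penalty discussed later.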
Under this assumption, we can bound $\delta_{h}^{k+1}$ as follows: \begin{align} \mathbb{E}V(X+\eta) - \mathbb{E}V(X) &= \mathbb{E}[f(X+\eta) - f(X)] + \mathbb{E}[g(X+\eta) - g(X)]\\ &\leq \mathbb{E}\left[\nabla f(X)^\top \eta + \frac{M(d)}{2}\|\eta\|^2\right] +L(d) \mathbb{E}\|\eta\| \label{eq:from_nesterov}\\ &\leq M(d)hd + L(d)\sqrt{2hd}, \end{align} where \eqref{eq:from_nesterov} follows from the basic property that \begin{equation} f(y) \leq f(x) +\nabla f(x)^\top(y-x) + \frac{M(d)}{2}\|x-y\|^2, \end{equation} for all $x,y \in \mathbb{R}^d$, see for example \citet{nesterov2013introductory}. Then, $h\Delta_{h}^n \leq M(d)hd \cdot hn + L(d)\sqrt{2hd}\cdot hn$. Hence, for any $T>0$ we could take $h_m = T/m$ and satisfy the conditions of Corollary \ref{cor:convergence}. Moreover, it is not unreasonable to assume that one can choose $\rho_0$ such that $\mathcal{W}_2(\rho_0,\pi) \sim \sqrt{d}$ and $\mathcal{H}(\rho_0 | \pi) \sim d$. See Appendix \ref{appendix:rates} for a justification. If we want $\mathcal{W}_2 (\rho^{h}(hn),\pi) \sim \varepsilon$ for a threshold $\varepsilon >0$, we could take $h\mathcal{H}(\rho_{0} | \pi) + h\Delta_{h}^{n} \sim \varepsilon^2$ and $\mathcal{W}_2(\rho_0,\pi)e^{-\lambda hn}\sim \varepsilon$. To ensure $\mathcal{W}_2(\rho_0,\pi)e^{-\lambda hn}\sim \varepsilon$, it is sufficient to take $hn \sim \log(\sqrt{d}/\varepsilon)$. To get $h\Delta_{h}^{n} \sim \varepsilon^2$, we can take both $M(d)hd \log(\sqrt{d}/\varepsilon)\sim \varepsilon^2$ and $L(d)\sqrt{2hd}\log(\sqrt{d}/\varepsilon) \sim \varepsilon^2$. Respectively, this can be achieved when the order of $h$ is \begin{equation} h \sim \frac{\varepsilon^2}{d M(d) \log(\sqrt{d}/\varepsilon)}, \qquad h \sim \frac{\varepsilon^4}{d L(d)^2 \log(\sqrt{d}/\varepsilon)^2}. 
\end{equation} In turn, this holds when the order of $n$ is \begin{equation}\label{eq:order_of_n} n \sim \frac{d M(d) \log(\sqrt{d}/\varepsilon)^2}{\varepsilon^2}, \qquad n \sim \frac{d L(d)^2 \log(\sqrt{d}/\varepsilon)^3}{\varepsilon^4}, \end{equation} respectively. Lastly, to get $h\mathcal{H}(\rho_{0} | \pi) \sim \varepsilon^2$ it is sufficient to take $h \sim \varepsilon^2/d$, which in turn corresponds to $n \sim d/\varepsilon^2 \log(\sqrt{d}/\varepsilon)$. In the case where $g=0$ and $M(d) \sim 1$, we recover the assumptions on $V$ that were made in e.g. \citet{dalalyan2017further, dalalyan2017user}. Using the derivations above, we see that $n \sim d \varepsilon^{-2} \log(d \varepsilon^{-2})^2$ iterations are sufficient to achieve a 2-Wasserstein error of $\varepsilon$. Up to log-terms, this is the same rate as those derived for ULA in the aforementioned papers. However, in the case where $g(x) \propto \|x\|_1$ so that $L(d) \sim \sqrt{d}$, we get that $n \sim d^2/\varepsilon^4$ (ignoring the log-terms). Compared with the remark accompanying Theorem 3 of \citet{durmus2016efficient}, this appears less sharp than the bounds derived there, in which $n$ depends linearly on $d$ (up to log-terms) whenever $V$ is strongly convex. As can be seen in Appendix \ref{appendix:proofs}, this likely stems from not optimally accounting for $\lambda$-displacement convexity in Lemma \ref{lemma:bound_function_with_integral}. \section{Conclusion} In this paper, we have developed novel connections between the fields of Wasserstein gradient flow, operator splitting, and Langevin Monte Carlo. We have demonstrated that the gradient flow perspective allows us to derive new convergence results about a proximal version of the Unadjusted Langevin Algorithm. Under certain assumptions on the potential $V$, we derive results that are on par with the contemporary literature on ULA. However, we point out that there is room for improvement in our current proofs. 
In particular, they could be improved by better accounting for the condition that $V$ is $\lambda$-strongly convex, allowing us to obtain sharper bounds when that assumption is present. On the other hand, the proof of Theorem \ref{theorem:converge_to_gradient_flow} generalizes to any convex $V$. Hence, to obtain control over the proximal ULA algorithm in such a case, one would only need to formulate conditions under which one can still derive a rate of convergence of the exact gradient flow to $\pi$, though one should no longer expect this convergence to be exponentially fast. We also hope that these connections can have implications for methodology. The many other splitting schemes discussed by \citet{holden2010splitting} can potentially lead to new sampling algorithms. The same holds for alternative JKO schemes, such as the one developed by \citet{legendre2017second}. For the Fokker--Planck equation, they show that their new scheme is second-order convergent, improving the original JKO scheme's first-order convergence. It is also likely that the growing literature on Langevin Monte Carlo and its variations can lead to new time discretization schemes that are of both practical and theoretical interest to the gradient flow community. In a very recent preprint, \citet{wibisono2018sampling} uses the Wasserstein gradient flow perspective to propose the symmetrized Langevin algorithm. \textbf{Acknowledgements} I'm greatly indebted to Nicolas Chopin and Marco Cuturi for hosting my visit to ENSAE ParisTech and CREST, where the material in this paper was developed. I'd also like to thank L\'ena\"ic Chizat, Arnak Dalalyan, Jeremy Heng, Pierre E. Jacob, Boris Muzellec and Gabriel Peyr\'e for interesting conversations about optimal transport, gradient flows, and Monte Carlo sampling. This material is based upon research supported by the Chateaubriand Fellowship of the Office for Science \& Technology of the Embassy of France in the United States.
\section{Introduction} Uncertainty quantification (UQ) aims to develop numerical methods that can accurately approximate quantities of interest (QoI) of a complex engineering system and facilitate the quantitative validation of the simulation model. One challenge in UQ is building surrogates that approximate a parameterized simulation model, often involving differential equations. To characterize the effect of uncertain parameters on such a system, one usually models the uncertain inputs as a $d$-dimensional vector of independent random variables $\bs{x}=(x_1,\ldots,x_d)$. The QoI $f$ that we seek to approximate is a function of these random parameters, $f(\bs{x}): \mathbb{R}^d\to \mathbb{R}$. Here we will approximate $f(\bs{x})$ with a generalized Polynomial Chaos Expansion (PCE) \cite{R.Ghanem1991_SFE,Xiu2002_WPC}. In this situation, we assume $f$ can be well-approximated as a finite expansion in multivariate orthogonal polynomials, and the key step is to determine the expansion coefficients. Recently, stochastic collocation methods have been identified as effective strategies to compute PCE coefficients \cite{A.Narayan15_SC}. Stochastic collocation allows one to treat existing deterministic simulation models as black box routines in a larger pipeline for performing parametric analysis with PCE. Popular stochastic collocation approaches include sparse grids approximations \cite{Agarwal_2009domainadasc, Bieri_2011SCC, Eldred,GZ, Mazabaras, F.Nobile08_s}, pseudospectral projections \cite{reagan}, and least squares approaches \cite{J.hampton2015Cm,Tang_2014DLSp,Zhou_Narayan_Xu, Chkifa_2015dlsp, Narayan_2016Christoffel,ZNX,Guo_2018}. Each of these methods requires repeated queries of the black-box simulation model. 
In many practical applications, scarce computational resources limit the number of possible queries to the black-box simulation model, thus limiting the amount of available information about the function $f$, and this makes accurate approximation of the PCE coefficients a difficult task. One popular computational strategy that constructs PCE approximations with limited information is stochastic collocation via $\ell_1$-minimization \cite{Doostan11_nonada,Yan12_Sc,pengdoostan,J.Hampton2015Cs,JNZ,Guo_2016scrqgauss}. The approach is very effective when the number of non-zero terms in the PCE approximation of the model output is small (i.e. $f$ has a sparse representation in the PCE basis) or the magnitude of the PCE coefficients decays rapidly (i.e. the PCE expansion of $f$ has a compressible representation). In this paper, we consider a gradient enhanced $\ell_1$-minimization approach for constructing PCE coefficients. We consider $\ell_1$ minimization with both function and gradient evaluations. Recent advances \cite{Roderick10_Pr,Alekseev,Li11_Oba,Baar,Lockwood13_Gb,jakeman,Peng_2016gradient} have shown that the inclusion of derivative evaluations has the potential to greatly enhance the construction of surrogates, especially if those derivatives can be obtained inexpensively, e.g. by solving adjoint equations \cite{Griewank_2003ad}. Potential applications of this approach also include Hermite-type interpolative approximations \cite{XUZHOU,Ben,Ward}. The gradient enhanced approach here can be viewed as a Hermite-type interpolation; however, it differs from classical Hermite interpolation (see e.g., \cite{Interpolation_1,Interpolation_2,Interpolation_3,Learning_1}), since it seeks to find a \textit{sparse} representation. The main contribution of this work is to present a general framework to include gradient evaluations in an $\ell_1$ minimization framework. 
More precisely, we design appropriate preconditioners for the measurement matrix, and we show that the inclusion of these derivative measurements can almost surely lead to improved conditions for successful solution recovery. The framework is quite general, and it applies to the approximation of functions on either bounded or unbounded domains. Comparisons between the gradient-enhanced approach and standard $\ell_1$-minimization are also presented, and numerical examples suggest that the inclusion of derivative information can guarantee sparse recovery at a reduced computational cost. The rest of the paper is organized as follows. In Section 2, we present some preliminaries for collocation methods with $\ell_1$ minimization; we call this the ``standard'' approach. The gradient-enhanced $\ell_1$ minimization approach is presented in Section 3, and this is followed by some further discussions in Section 4. Numerical examples are provided in Section 5, and we finally give some conclusions in Section 6. \section{Preliminaries} \subsection{Generalized polynomial chaos expansions} Let $\bs{x}=(x_1 , \ldots , x_d)^\top$ be a random vector with $d$ mutually independent components; each $x_i$ takes values in $\Gamma^i \subset \mathbb{R}.$ Since the variables $\{x_i\}_{i=1}^d$ are mutually independent, their marginal probability density functions $\rho_i$, associated with random variable $x_i$, completely characterize the distribution of $\bs{x}$. Define $\Gamma:= \otimes_{i=1}^d\Gamma^i \subset \mathbb{R}^d,$ and let $\rho(\bs{x})= \prod_{i=1}^d \rho_i(x_i): \Gamma\rightarrow \mathbb{R}^+$ denote the joint probability density function of $\bs{x}.$ Our objective is to approximate the QoI $f(\bs{x}):\Gamma \to \mathbb{R}$. In a simple stochastic collocation approach, we wish to recover information about this function from a limited set of function evaluations. In this paper, we seek this approximation using a PCE, and so we first introduce the multivariate orthogonal PCE basis. 
For each marginal density $\rho_i,$ we can define the univariate PCE basis elements, $\varphi^i_n$, which are polynomials of degree $n$, via the orthogonality relation \begin{align}\label{eq:phi-orthonormality} \mathbb{E} \left[\varphi^i_n(x_i) \varphi^i_{\ell}(x_i)\right] = \int_{\Gamma^i} \varphi^i_n(x_i) \varphi^i_{\ell}(x_i) \rho_i(x_i) dx_i = \delta_{n,\ell}, \quad n,\,\,\ell \geq 0, \end{align} with $\delta_{n,\ell}$ the Kronecker delta function. Up to a multiplicative sign, this defines the polynomials $\varphi^i_n$ uniquely; thus the probability measure $\rho_i$ determines the type of orthogonal polynomial basis. For example, the Gaussian (normal) distribution yields the Hermite polynomials, the uniform distribution pairs with Legendre polynomials, etc. For a more detailed account of the correspondence, see \cite{Xiu2002_WPC}. One convenient representation for a multivariate gPC basis is as a product of the univariate gPC polynomials in each direction. We define \begin{align} \label{gpcbais} \psi_{\boldsymbol{n}}(\bs{x}) \coloneqq \prod_{i=1}^d \varphi^{i}_{n_i}\left(x_{i}\right), \end{align} where $\boldsymbol{n} = \left(n_1, \ldots, n_d\right) \in {\mathbb N}_0^d$ is a multi-index set with $|\boldsymbol{n}| = \sum_{i=1}^d n_i$. The product functions $\psi_{\boldsymbol{n}}$ are $L^2$ orthogonal under the joint probability density function $\rho$ for $\bs{x}$: \begin{align} \label{othorgonal} \mathbb{E} \left[\psi_{\boldsymbol{n}}(\bs{x}) \psi_{\boldsymbol{j}}(\bs{x})\right] &= \int_{\Gamma} \psi_{\boldsymbol{n}}(\bs{x}) \psi_{\boldsymbol{j}}(\bs{x}) \rho(\bs{x}) d\bs{x} = \delta_{\boldsymbol{n},\boldsymbol{j}}, & \bs{n}, \bs{j} &\in {\mathbb N}_0^d \end{align} where $\delta_{\boldsymbol{n}, \boldsymbol{j}} = \prod_{i=1}^d \delta_{n_i, j_i}$. We denote by $T_n^d$ the total degree space, i.e., the space of $d$-variate algebraic polynomials of degree $n$ or less. 
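The tensor-product construction \eqref{gpcbais} and the orthonormality relation \eqref{othorgonal} can be sketched for the uniform density, where the orthonormal univariate basis is $\varphi_n(x) = \sqrt{2n+1}\,P_n(x)$ with $P_n$ the classical Legendre polynomial (a standard fact; the degrees and quadrature order below are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of the tensor-product PCE basis for the uniform density on
# [-1,1]^2: phi_n(x) = sqrt(2n+1) P_n(x) is orthonormal with respect to
# the uniform probability density dx/2.
def phi(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return np.sqrt(2 * n + 1) * legendre.legval(x, coeffs)

def psi(multi_index, x):
    """Multivariate basis element as a product of univariate ones."""
    return np.prod([phi(n_i, x_i) for n_i, x_i in zip(multi_index, x)])

# check the orthonormality relation with Gauss--Legendre quadrature,
# rescaled so the weights integrate against the uniform *probability* density
nodes, weights = legendre.leggauss(10)
weights = weights / 2.0
inner = sum(w1 * w2 * psi((2, 1), (x1, x2)) ** 2
            for x1, w1 in zip(nodes, weights)
            for x2, w2 in zip(nodes, weights))
print(abs(inner - 1.0) < 1e-10)
```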
An element $f_n$ in $T_n^d$ has a unique expansion in the $\psi_{\boldsymbol{n}}$ basis: \begin{align} \label{eq:pce1} f_n=\sum_{\boldsymbol{k} \in \Lambda^T_{n}}c_{\boldsymbol{k}}\psi_{\boldsymbol{k}}(\bs{x}), \end{align} where $ \Lambda^T_{n}$ is the total-degree multi-index set, \begin{align*} \Lambda^T_{n} &\coloneqq \left\{ \boldsymbol{k} \in \mathbb{N}_0^d\,\, \big| \,\, \sum_{i=1}^d k_i \leq n \right\}. \end{align*} The dimension of $T_n^d$ is \begin{align}\label{eq:td-dim} M= \left|\Lambda^T_{n}\right| \coloneqq \dim T_n^d = \left( \begin{array}{c} d+n \\ n \end{array}\right). \end{align} By fixing any total order on the elements of $\Lambda^T_n$, we can re-write \eqref{eq:pce1} as the following scalar-indexed version \begin{align} \label{eq:pce} f_n=\sum_{\boldsymbol{k} \in \Lambda^T_{n}}c_{\boldsymbol{k}}\psi_{\boldsymbol{k}}(\bs{x})=\sum_{j=1}^{M}{c}_j \psi_j(\bs{x}), \end{align} where $\bs{c} \in {\mathbb R}^M$ contains the vector of expansion coefficients, and hence uniquely defines a function $f_n$. \subsection{The compressed sensing approach.} In recent years, stochastic collocation via compressive sensing has become one of the most popular approaches for determining the coefficients $c_j$ in (\ref{eq:pce}). Such an approach uses fewer evaluations, and seeks to compute a PCE approximation with a sparse coefficient vector. We denote by $\Xi \subset \Gamma$ a set of samples, i.e., \begin{align*} \Xi :=\{\bs{z}^{(1)},...,\bs{z}^{(N)}\} \subset \Gamma. \end{align*} We will eventually take $\Xi$ as a collection of $N$ iid samples of a random variable. 
The standard compressed sensing approach starts from the $\ell_0$ problem, \begin{align} \label{eq:l0minimization} \argmin_{\mathbf{c}\in \mathbb{R}^M} \|\mathbf{c}\|_0 \quad \text{subject to} \quad \mathbf{\Phi} \mathbf{c} = \mathbf{f}, \end{align} where $\mathbf{f}=(f(\bs{z}^{(1)}),...,f(\bs{z}^{(N)}))^T$, $\mathbf{c}=(c_1,\dots, c_M)^T \in \mathbb{R}^M$ is the unknown coefficient vector to be determined that defines the PCE expansion \eqref{eq:pce}, and $\mathbf{\Phi} \in \mathbb{R}^{N \times M}$ is the measurement matrix whose entries are \begin{eqnarray}\label{matrixelem} [\mathbf{\Phi}]_{ij}=\psi_j(\bs{z}^{(i)}), \quad i=1,\dots, N, \quad j=1,\dots, M. \end{eqnarray} The $\ell_0$ norm $\|\bs{c}\|_0$ is the number of nonzero entries (the ``sparsity'') of the vector $\bs{c}$. The convex relaxation of the above problem is the following $\ell_1$ approach \begin{align} \label{eq:l1minimization} \argmin_{\mathbf{c}\in \mathbb{R}^M} \|\mathbf{c}\|_1 \quad \text{subject to} \quad \mathbf{\Phi} \mathbf{c} = \mathbf{f}, \end{align} where $\|\bs{c}\|_1$ is the standard $\ell_1$ norm on finite-dimensional vectors. The interpolation condition $\mathbf{\Phi c=f}$ can be relaxed to $\mathbf{\|\Phi c-f\|_2}\leq \epsilon$, for some tolerance value $\epsilon$ and with $\|\cdot\|_2$ the vector Euclidean norm, resulting in a regression-type ``denoising'' approach. Fixing $M$, certain conditions on $N$ and $\bs{\Phi}$ can guarantee that the $\ell_1$-relaxed minimization \eqref{eq:l1minimization} produces the sought solution to the $\ell_0$ problem \eqref{eq:l0minimization}. Several types of such sufficient conditions on $\mathbf \Phi$ have been presented in the compressive sampling (CS) literature, such as the mutual incoherence property (MIP) and restricted isometry property (RIP). 
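The relaxation \eqref{eq:l1minimization} can be solved as a linear program by splitting $\mathbf{c} = \mathbf{u} - \mathbf{v}$ with $\mathbf{u},\mathbf{v} \geq 0$, so that $\|\mathbf{c}\|_1 = \mathbf{1}^T(\mathbf{u}+\mathbf{v})$ and the equality constraint becomes $[\mathbf{\Phi},-\mathbf{\Phi}][\mathbf{u};\mathbf{v}] = \mathbf{f}$. The sketch below (the dimensions, sparsity level, and Gaussian measurement matrix are illustrative choices, and `scipy.optimize.linprog` is used as a generic LP solver) verifies feasibility and that the recovered vector has no larger $\ell_1$ norm than the truth:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as a linear program: c = u - v with u, v >= 0, minimize
# 1^T (u + v) subject to [Phi, -Phi][u; v] = f.  Dimensions are illustrative.
rng = np.random.default_rng(0)
N, M, s = 40, 80, 3
Phi = rng.normal(size=(N, M))
c0 = np.zeros(M)
c0[rng.choice(M, s, replace=False)] = rng.normal(size=s)  # s-sparse truth
f = Phi @ c0

res = linprog(c=np.ones(2 * M),
              A_eq=np.hstack([Phi, -Phi]), b_eq=f,
              bounds=[(0, None)] * (2 * M))
c_hat = res.x[:M] - res.x[M:]

print(np.linalg.norm(Phi @ c_hat - f) < 1e-5)               # feasible
print(np.sum(np.abs(c_hat)) <= np.sum(np.abs(c0)) + 1e-6)   # no larger l1 norm
```

For Gaussian measurements of this size, the minimizer typically coincides with the sparse truth, though the guarantees discussed next are stated in terms of incoherence conditions on $\mathbf{\Phi}$ rather than of any particular random ensemble.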
Our investigation in this paper concerns the MIP: The mutual incoherence constant (MIC) of $\mathbf{\Phi}$ is defined as \begin{equation}\label{eq:MIC} \mu\,\,=\,\,\mu(\mathbf{\Phi})\,\,:=\,\, \max_{k\neq j}\frac{\abs{\innerp{\mathbf{\Phi}_k, \mathbf{\Phi}_j}}}{\|\mathbf{\Phi}_k\|_2 \cdot \|\mathbf{\Phi}_j\|_2}, \end{equation} where $\bs{\Phi}_j$ is the $j$th column of $\bs{\Phi}$. Assume that $\mathbf{c}_0$ is an $s$-sparse vector in $\mathbb{C}^M$, i.e., $\|\mathbf{c}_0\|_0\leq s.$ If \begin{equation}\label{eq:BPMIC} \mu \,\,<\,\,\frac{1}{2s-1}, \end{equation} then the solution to the $\ell_1$ minimization (\ref{eq:l1minimization}) with $\mathbf{f}=\mathbf{\Phi}\mathbf{c}_0$ is exactly $\mathbf{c}_0$, i.e., $$ \mathbf{c}_0=\argmin_{\mathbf{c}\in \C^M} \left\{\|\mathbf{c}\|_1 \,\, \text{\rm subject to}\,\, \mathbf{\Phi}\mathbf{c}=\mathbf{\Phi}\mathbf{c}_0\right\}. $$ This result was first presented in \cite{DonHuo} for the case with $\mathbf{\Phi}$ being the union of two orthogonal matrices, and was later extended to general matrices by Fuchs \cite{Fuchs} and Gribonval \& Nielsen \cite{GrNi}. In \cite{Cai_Wang_Xu}, it is also shown that $\mu < \frac{1}{2s-1}$ is sufficient for stable approximation of $\mathbf{c}$ in the noisy case. \section{A gradient enhanced compressed sensing approach} We consider the inclusion of gradient measurements in an $\ell_1$ optimization approach for compressed sensing. The motivation is that gradient measurements can usually be obtained in a relatively inexpensive way from model simulations, e.g., by using adjoint techniques \cite{Griewank_2003ad}. Consider the availability of the following data \begin{align*} &y=f(\bs{z}), \qquad \qquad \bs{z}\in \Xi,\\ &\partial_k(y)=\partial_k f(\bs{z}), \quad \,\,\bs{z}\in \Xi, \quad k=1,...,d, \end{align*} where $\partial_kf(\bs{x}) = \frac{\partial f(\bs{x})}{\partial x_k}$ stands for the derivative with respect to the $k$th variable $x_k$. 
Then concatenating all the measurement conditions above into matrix-vector form in an $\ell_1$ optimization problem yields the following approach: \begin{align}\label{eq:gradientl1} \argmin_{\mathbf{c}\in \mathbb{R}^M}\|\mathbf{c}\|_1 \quad \textmd{subject} \ \textmd{to} \quad \mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}\mathbf{c}=\mathbf{W}\tilde{\mathbf{f}} \end{align} with \begin{align*} \tilde{\mathbf{f}}=\left(\begin{array}{l} \mathbf{f} \\ \mathbf{f}_{\partial} \end{array}\right), \quad \tilde{\mathbf{\Phi}}=\left(\begin{array}{l} \mathbf{\Phi} \\ \mathbf{\Phi}_{\partial} \end{array}\right),\quad \mathbf{\Phi}_{\partial}=& \left[ \begin{array}{c} \frac{\partial\mathbf{\Phi}}{\partial x_1} \\ \vdots \\ \frac{\partial\mathbf{\Phi}}{\partial x_d}\\\end{array} \right ], \quad \mathbf{f}_{\partial}=\left[ \begin{array}{c} \frac{\partial\mathbf{f}}{\partial x_1} \\ \vdots \\ \frac{\partial\mathbf{f}}{\partial x_d}\\\end{array} \right ] \end{align*} where for $k=1,...,d$, $\frac{\partial\mathbf{\Phi}}{\partial x_k}\in \mathbb{R}^{N\times M}$ and $\frac{\partial\mathbf{f}}{\partial x_k}\in \mathbb{R}^N$ are defined as follows: \begin{align*} \bigg [\frac{\partial\mathbf{\Phi}}{\partial x_k}\bigg ]_{ij}=\frac{\partial\psi_{j}(\bs{x})}{\partial x_k}(\bs{z}^{(i)}),\quad \bigg [\frac{\partial\mathbf{f}}{\partial x_k}\bigg ]_{i}=\frac{\partial f(\bs{x})}{\partial x_k}(\bs{z}^{(i)}), \quad i=1,...,N, \quad j=1,...,M. \end{align*} Note that now $\tilde{\mathbf{\Phi}}\in \mathbb{R}^{N(d+1)\times M}$; we refer to this matrix as the gradient-enhanced measurement/design matrix, and $\tilde{\mathbf{f}}\in \mathbb{R}^{N(d+1)}$ is the corresponding data vector. Notice that compared to the standard $\ell_1$ approach, the gradient enhanced approach (\ref{eq:gradientl1}) involves two additional matrices: \begin{itemize} \item The preconditioning matrix $\mathbf{W}:$ this is designed to enhance recovery properties in $\ell_1$ optimization. 
Its definition will depend on the type of PCE basis and on how the sample set $\Xi$ is generated. We will discuss this in detail later. \item The normalizing/weighting matrix $\mathbf{P}:$ this matrix is included to normalize the design matrix, so that $\widehat{\mathbf{\Phi}}:=\mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}$ satisfies mean isotropy. \end{itemize} We shall show that the preconditioned matrix $\widehat{\mathbf{\Phi}}:=\mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}$ is much more stable, in the sense that its MIP (or RIP) constant is better behaved than that of the matrix $\tilde{\mathbf{\Phi}}.$ In what follows, we shall give a general guide for choosing these preconditioning matrices. \subsection{Legendre expansion with Chebyshev sampling} To illustrate the idea, we begin with Legendre expansion with Chebyshev sampling. That is, $\rho$ is the uniform measure on $\Gamma = [-1,1]^d$, the PCE basis functions $\psi_j$ are tensor-product Legendre polynomials, and $\Xi$ is constructed via iid sampling from the Chebyshev (arcsine) measure. The use of Chebyshev sampling when approximating with a Legendre polynomial basis (where available data is only function values) has been widely investigated \cite{Rauhutward,XUZHOU,JNZ}, and can produce better results (compared to uniform sampling) when large-degree approximations are required. Here we shall show how inclusion of gradient information can be accomplished in a systematic way. Suppose first that $\Xi$ consists of $N$ iid samples generated from the uniform measure $\rho$. Since the (orthonormal) Legendre polynomials satisfy \eqref{othorgonal}, we then have \begin{align} \mathbb{E}\left[\frac{1}{N} \mathbf{\Phi}^T \mathbf{\Phi}\right] = \mathbf{I}. \end{align} This is the mean isotropy property. However, if we instead construct $\Xi$ as $N$ iid samples from a different measure, say the Chebyshev measure, then we must introduce a preconditioner to retain the mean isotropy property. 
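The mean isotropy property under uniform sampling can be illustrated by Monte Carlo for univariate orthonormal Legendre polynomials, $\varphi_j(x) = \sqrt{2j+1}\,P_j(x)$ (the sample size and maximal degree below are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

# Monte Carlo illustration of mean isotropy: for orthonormal Legendre
# polynomials phi_j(x) = sqrt(2j+1) P_j(x) and iid uniform samples on
# [-1, 1], (1/N) Phi^T Phi concentrates around the identity matrix.
rng = np.random.default_rng(0)
N, degree = 200_000, 4
z = rng.uniform(-1.0, 1.0, size=N)

Phi = np.column_stack(
    [np.sqrt(2 * j + 1) * legendre.legval(z, np.eye(degree + 1)[j])
     for j in range(degree + 1)])
G = Phi.T @ Phi / N
print(np.abs(G - np.eye(degree + 1)).max() < 0.05)
```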
Our gradient-enhanced $\ell_1$ minimization strategy aims to maintain mean isotropy when gradient evaluations are included in the measurement matrix. We recall a standard fact, that derivatives of the univariate Legendre polynomials are orthogonal with respect to the weight function $\eta(x)= (1-x^2)$ \cite{Szego_1959}. By using the above facts we can derive that if $\bs{z} \in {\mathbb R}^d$ is a random variable distributed according to the product Chebyshev weight function, \begin{align*} \rho_c(\bs{x}) = \prod_{j=1}^d \frac{1}{\pi \sqrt{1 - x_j^2}}, \end{align*} then we have \begin{align} \mathbb{E}^c\left[\frac{2^d}{\rho_c(\bs{z})} \psi_i(\bs{z}) \psi_j(\bs{z}) +\sum\limits_{k=1}^d \frac{1-z_k^2}{\rho_c(\bs{z})}\frac{\partial \psi_i}{\partial x_k}(\bs{z})\frac{\partial \psi_j}{\partial x_k}(\bs{z})\right]=\delta_{ij}\bigg (1+\sum\limits_{k=1}^d c_k i_k(i_k+1)\bigg), \label{eq:legendre_chebyshev} \end{align} where $c_k$ is a constant that we make precise later. Here we use $\mathbb{E}^c$ to emphasize that the expectation is taken with respect to the Chebyshev measure. The above derivation suggests the following choices for the matrices $\mathbf{W}$ and $\mathbf{P}:$ \begin{align} \mathbf{W}=\left[ \begin{array}{cccc} \mathbf{W}^0 & & &\\ &\mathbf{W}^1 & &\\ & & \ddots &\\ &&& \mathbf{W}^d\\\end{array}\right], \end{align} where the $\mathbf{W}^k$ are diagonal matrices whose entries are defined as \[ \mathbf{W}_{n,n}^0=\prod_{j=1}^d\bigg ((4/\pi^2) (1-(z^{(n)}_j)^2)\bigg )^{1/4}, \quad \mathbf{W}^j_{n,n}= \frac{\mathbf{W}_{n,n}^0}{\sqrt{2}} \left(1 - \left(z_j^{(n)}\right)^2\right)^{1/2}, \quad j=1,...,d, \quad n=1,...,N. \] Here $z^{(n)}_j$ is the $j$th component of the random vector $\mathbf{z}^{(n)}.$ The normalizing matrix $\mathbf{P}$ is a diagonal matrix with entries $\mathbf{P}_{i,i}=\left(1+\sum\limits_{k=1}^d c_k i_k(i_{k}+1)\right)^{-1/2}$.
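The orthogonality of Legendre derivatives with respect to $\eta(x) = (1-x^2)$, the fact underlying \eqref{eq:legendre_chebyshev}, is easy to confirm numerically; the sketch below (illustrative code) evaluates the weighted inner products exactly with Gauss--Legendre quadrature, and also recovers the known diagonal value $\int_{-1}^1 (P_n')^2(1-x^2)\,dx = 2n(n+1)/(2n+1)$:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre quadrature, exact for polynomials up to degree 2*q - 1
q = 12
x, wq = L.leggauss(q)

def dP(n):
    """Legendre-series coefficients of the derivative of P_n."""
    return L.legder([0]*n + [1])

def inner(m, n):
    """int_{-1}^{1} P_m'(x) P_n'(x) (1 - x^2) dx, computed exactly."""
    vals = L.legval(x, dP(m)) * L.legval(x, dP(n)) * (1 - x**2)
    return np.sum(wq * vals)

print(inner(2, 4))   # 0: derivatives of distinct P_n are eta-orthogonal
print(inner(3, 3))   # 2*3*4/7 = 24/7, the diagonal value for n = 3
```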
With the above definitions, one can easily show that the design matrix satisfies the mean isotropy property, namely, \begin{align} \mathbb{E}^c\left[\frac{1}{N}\widehat{\mathbf{\Phi}}^T \widehat{\mathbf{\Phi}}\right] = \mathbf{I}, \quad \textmd{with} \quad \widehat{\mathbf{\Phi}}=\mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}. \end{align} This is the general strategy for our gradient formulation: we take the sampling measure from which $\Xi$ is constructed to be a degree-asymptotically ``good'' sampling measure for the PCE basis $\psi_j(\bs{x})$, we design a preconditioning matrix so that the PCE basis is mean isotropic, and finally we choose a weighting matrix $\bs{P}$ to retain isotropy of the gradient evaluations. Having shown the idea for the special case of Legendre polynomials, we now generalize to arbitrary Jacobi families. \subsection{General Jacobi expansions with Chebyshev sampling}\label{sec:jacobi} We now turn to the case of general Jacobi expansions with Chebyshev sampling, which includes the Legendre expansion of the previous subsection as a special case. The univariate probability density \begin{align}\label{eq:jacobi-density} \rho^{(\alpha, \beta)}(x) &= d^{(\alpha,\beta)}(1-x)^\alpha (1+x)^\beta, & \alpha,\beta &\geq - \frac{1}{2} \end{align} is the Beta density function on $[-1,1]$. The normalization coefficient is \begin{align*} d^{(\alpha,\beta)} =\frac{\Gamma(\alpha + \beta + 2)}{\Gamma(\beta+1) \Gamma (\alpha + 1)2^{\alpha+\beta+1} }. \end{align*} Keeping with earlier notation, we use $\rho_c \equiv \rho^{(-1/2, -1/2)}$.
Then given \begin{align*} \bs{\alpha} &= \left(\alpha_1, \ldots, \alpha_d\right) \in \left[-\frac{1}{2}, \infty\right)^d, & \bs{\beta} &= \left(\beta_1, \ldots, \beta_d\right) \in \left[-\frac{1}{2}, \infty\right)^d, \end{align*} we can define the notation for multi-dimensional Jacobi probability densities: \begin{align*} \rho^{(\bs{\alpha}, \bs{\beta})}(\bs{x}) &= \prod_{j=1}^d \rho^{(\alpha_j, \beta_j)}(x_j) \end{align*} The multivariate PCE basis elements $\psi$ associated to $\rho^{({\bs{\alpha}, \bs{\beta}})}$ are likewise now well-defined, but to avoid notational clutter we will omit showing explicit dependence of $\psi$ and the measurement matrix $\mathbf{\Phi}$ on $\bs{\alpha}$ and $\bs{\beta}$. By using the identity between Jacobi polynomials and their derivatives, we can derive that if $\bs{z}$ is a random variable distributed according to the measure $\rho_c$, then \begin{align}\label{eq:jacobi_chebyshev} \mathbb{E}\left[ \frac{\rho^{(\bs{\alpha},\bs{\beta})}(\bs{z})}{\rho_c(\bs{z})} \psi_i(\bs{z}) \psi_j(\bs{z}) + \sum_{k=1}^d \frac{\rho^{(\bs{\alpha}+\bs{e}_k,\bs{\beta}+ \bs{e}_k)}(\bs{z})}{\rho_c(\bs{z})} \frac{\partial \psi_i}{\partial x_k}(\bs{z})\frac{\partial \psi_j}{\partial x_k}(\bs{z})\right]=\delta_{ij} \left (1+\sum\limits_{k=1}^d c^2(i_k, \alpha_k, \beta_k) \right), \end{align} where $\bs{e}_j \in {\mathbb R}^d$ is the cardinal unit vector in the $j$th direction; i.e., $(e_j)_k = \delta_{j,k}$. We also define $\bs{e}_0 = \bs{0}$ as the zero vector. The normalization constant $c^2(i_k, \alpha_k, \beta_k)$ is \begin{align*} c^2(i_k,\alpha_k, \beta_k) = i_k (i_k+\alpha_k + \beta_k + 1) \frac{(\alpha_k + \beta_k+2)(\alpha_k + \beta_k + 3)}{4 (\alpha_k+1)(\beta_k+1)}.
\end{align*} The above derivation suggests the following choices for the matrices $\mathbf{W}$ and $\mathbf{P}:$ \begin{align} \mathbf{W}=\left[ \begin{array}{cccc} \mathbf{W}^0 & & &\\ &\mathbf{W}^1 & &\\ & & \ddots &\\ &&& \mathbf{W}^d\\\end{array}\right], \end{align} where the $\mathbf{W}^k$ are diagonal matrices whose entries are defined as \begin{align*} W^0_{n,n} = \sqrt{\frac{\rho^{(\bs{\alpha},\bs{\beta})}(\bs{z}^{(n)})}{\rho_c(\bs{z}^{(n)})}}, \quad W^j_{n,n} = \sqrt{\frac{\rho^{(\bs{\alpha}+\bs{e}_j,\bs{\beta} + \bs{e}_j)}(\bs{z}^{(n)})}{\rho_c(\bs{z}^{(n)})}} \end{align*} for $n=1, \ldots, N$, and $j = 1, \ldots d$. The normalizing matrix $\mathbf{P}$ is a diagonal matrix with entries \begin{align}\label{eq:jacobi-normalizing} \mathbf{P}_{i,i}= \left (1+\sum\limits_{k=1}^d c^2(i_k, \alpha_k, \beta_k) \right)^{-1/2}. \end{align} With the above definitions, one can, just as for the Legendre case, show that the whole design matrix satisfies the mean isotropy property, i.e., \begin{align} \mathbb{E}\left[\frac{1}{N}\widehat{\mathbf{\Phi}}^T \widehat{\mathbf{\Phi}}\right] = \mathbf{I}, \quad \textmd{with} \quad \widehat{\mathbf{\Phi}}=\mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}. \end{align} For this gradient-enhanced approach, we are interested in understanding whether the inclusion of derivative information can improve the recovery ability. We shall provide one answer to this question in the following theorem by analyzing the coherence parameter of the design matrix. To this end, we define the coherence parameter of the original compressed sensing approach as \begin{equation*} \mu_L(\mathbf{\Phi}) := \sup_{i, \,\mathbf{z}\in \Xi} |\mathbf{\Phi}_i(\bs{z})|^2_2. \end{equation*} Here $|\mathbf{\Phi}_i(\bs{z})|_2$ is the norm of one column of the design matrix $\mathbf{\Phi}$. The parameter $\mu_L$ provides a quantitative recovery quality metric for compressed sensing approaches \cite{E.J.candes2010Ap,J.Hampton2015Cs}: smaller parameter values result in better recovery properties.
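In one dimension the boundedness of the coherence after preconditioning is easy to observe numerically. The sketch below (our own illustrative code, for the Legendre case $\alpha=\beta=0$ with the Chebyshev weight) evaluates $\sup_x w(x)^2\psi_n(x)^2$ over a range of degrees; by a Bernstein-type estimate this quantity stays below $3$ for every degree, which is the one-dimensional analogue of the degree-independent bounds in the theorem below:

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1, 1, 20_001)
w2 = (np.pi / 2) * np.sqrt(1 - x**2)   # squared Chebyshev preconditioner

# sup_x w(x)^2 psi_n(x)^2 over degrees n, for orthonormal psi_n = sqrt(2n+1) P_n
mu = max(np.max(w2 * (2*n + 1) * L.legval(x, [0]*n + [1])**2)
         for n in range(30))
print(mu)   # bounded independently of the degree (here below 3)
```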
Similarly, following the notation in \cite{Peng_2016gradient}, we define the corresponding parameter of the gradient-enhanced approach as \begin{equation*} \beta_L\big(\mathbf{\widehat{\Phi}}\big) := \sup_{i, \,\mathbf{z}\in \Xi} \left\| \bs{\widehat{\Phi}}_{i}(\bs{z}) \right\|^2, \end{equation*} where \begin{align*} \bs{\widehat{\Phi}}_{i}(\bs{z}) = \frac{1}{\mathbf{P}_{i,i}} \left( \begin{array}{c} \sqrt{\frac{\rho^{(\bs{\alpha},\bs{\beta})}(\bs{z})}{\rho_c(\bs{z})}} \Phi_{i}(\bs{z}) \\ \sqrt{\frac{\rho^{(\bs{\alpha}+\bs{e}_1,\bs{\beta}+\bs{e}_1)}(\bs{z})}{\rho_c(\bs{z})}} \ppx{x_1} \Phi_{i}(\bs{z}) \\ \sqrt{\frac{\rho^{(\bs{\alpha}+\bs{e}_2,\bs{\beta}+\bs{e}_2)}(\bs{z})}{\rho_c(\bs{z})}} \ppx{x_2} \Phi_{i}(\bs{z}) \\ \vdots \\ \sqrt{\frac{\rho^{(\bs{\alpha}+\bs{e}_d,\bs{\beta}+\bs{e}_d)}(\bs{z})}{\rho_c(\bs{z})}} \ppx{x_d} \Phi_{i}(\bs{z}) \end{array}\right). \end{align*} In the following, we present the main theorem of this paper, which provides bounds for the coherence parameters $\mu_L$ and $\beta_L$. \begin{theorem}\label{th:main} Recall that $\mathbf{\Phi}$ and $\widehat{\mathbf{\Phi}}$ are the design matrices for the standard $\ell_1$ and the gradient-enhanced $\ell_1$ approach via Jacobi expansions with Chebyshev sampling, respectively. Then the two coherence parameters satisfy the following estimates: \begin{align}\label{eq:coherence-bound} \mu_L\left(\bs{\Phi}\right) &\leq \prod_{j=1}^d 2 e \left( 2 + \sqrt{\alpha_j^2 + \beta_j^2} \right) \\ \label{eq:d-coherence-bound} \beta_L\left(\bs{\widehat{\Phi}}\right) &\leq C\prod_{j=1}^d 2 e \left( 2 + \sqrt{\alpha_j^2 + \beta_j^2} \right) \end{align} where $1 \leq C \leq 1 + \frac{\sqrt{2}}{2}$. The lower bound for $C$ is achieved when $\alpha_k = \beta_k = -\frac{1}{2}$ and the upper bound occurs when there is a $k$ such that $\alpha_k = \beta_k = 0$.
If $\mathcal{N}(\cdot)$ represents the nullspace of a matrix, then $$\mathcal{N}\big(\mathbf{\widehat{\Phi}}\big) \subset \mathcal{N}\big(\mathbf{\Phi}\big),$$ and this is almost-surely a strict subset when $\mathbf{\Phi}$ is under-sampled. \end{theorem} \begin{proof} See Appendix A2. \end{proof} We remark that, ideally, we would like to show that the gradient approach admits an improved (smaller) parameter $\beta_L$, i.e. $\beta_L\left( \bs{\widehat{\Phi}} \right) \leq \mu_L\left( \bs{\Phi} \right)$, yielding a better recovery property. Our analysis does not bear this fruit, but we have shown that (i) the coherence for both $\bs{\Phi}$ and $\bs{\widehat{\Phi}}$ is a constant raised to the $d$th power, independent of polynomial degree; and (ii) the constant $C$ in the estimate \eqref{eq:d-coherence-bound} is dimension-independent and relatively small. \subsection{Hermite expansions with Gaussian sampling} In the last two subsections, we presented two examples on bounded domains. Here we present an unbounded case, where the basis elements are Hermite polynomials and the samples are chosen according to the Gaussian measure. The authors of \cite{Peng_2016gradient} noticed that the gradients of the Hermite basis elements are orthogonal with respect to the same Gaussian measure. They show that if $\psi_j$ are suitably normalized Hermite polynomials and $\bs{z}$ is a multivariate standard normal random variable, then \begin{align} \label{hermite-l2norm} \mathbb{E}\bigg (\psi_i(\bs{z})\psi_j(\bs{z})+\sum\limits_{k=1}^d\frac{\partial \psi_i}{\partial x_k}(\bs{z})\frac{\partial \psi_j}{\partial x_k}(\bs{z})\bigg)=\delta_{ij}\bigg (1+\sum\limits_{k=1}^di_k\bigg ). \end{align} This motivates the following choice of normalizing matrix: $$\mathbf{P}^h= \textmd{diag}(\mathbf{P}_{1,1}, ... , \mathbf{P}_{M,M}), \quad \mathbf{P}_{i,i}=\bigg (1+\sum\limits_{k=1}^di_k\bigg )^{-1/2}, \quad i=1, ... , M. $$ The preconditioning matrix $\bs{W}$ is simply the identity in this case.
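The identity \eqref{hermite-l2norm} can be checked directly in one dimension, since for orthonormal probabilists' Hermite polynomials $\psi_n'=\sqrt{n}\,\psi_{n-1}$. The sketch below (illustrative code) evaluates $\mathbb{E}[\psi_i\psi_j+\psi_i'\psi_j']$ under the standard normal measure with Gauss quadrature:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss quadrature for the probabilists' weight exp(-x^2/2); dividing the
# weights by sqrt(2*pi) turns the sum into an expectation under N(0, 1)
x, w = He.hermegauss(30)
w = w / np.sqrt(2 * np.pi)

def psi(n):
    """HermiteE-series coefficients of the orthonormal He_n / sqrt(n!)."""
    return np.array([0.0]*n + [1.0]) / np.sqrt(math.factorial(n))

def G(i, j):
    """E[psi_i psi_j + psi_i' psi_j'] under the standard normal measure."""
    vi, vj = psi(i), psi(j)
    vals = (He.hermeval(x, vi) * He.hermeval(x, vj)
            + He.hermeval(x, He.hermeder(vi)) * He.hermeval(x, He.hermeder(vj)))
    return np.sum(w * vals)

print(G(3, 3))   # 1 + 3 = 4, matching delta_ij (1 + i_k) in one dimension
print(G(2, 5))   # 0 for i != j
```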
One main result of \cite{Peng_2016gradient} then establishes a result similar to Theorem \ref{th:main}. We also remark that extensions to general unbounded problems (e.g., Laguerre expansions) would use similar techniques as above. We note that there are more sophisticated sampling strategies one can use in the unbounded case \cite{Narayan_2016Christoffel,JNZ}, so that the choice $\bs{W} = \bs{I}$ is not necessarily optimal. Finally, we make some remarks about the weighting matrices $\bs{P}$ that we have constructed. Our choices of this matrix for the Hermite case above, and for the general Jacobi case in \eqref{eq:jacobi-normalizing}, have been diagonal matrices, thanks to the orthogonality property of \textit{derivatives} of orthogonal polynomials. In fact, the only univariate polynomial families whose derivatives also form sets of orthogonal polynomials are the Jacobi, Laguerre, and Hermite polynomials \cite{hahn_uber_1935,webster_orthogonal_1938,krall_derivatives_1936}. Therefore, if a PCE basis associated to a non-classical polynomial family is used, then the choice of $\bs{P}$ will not be diagonal: instead it will be any inverse square root of the Gramian associated to the polynomial derivatives. \section{Further discussions} In the last section, we presented a general framework to include gradient information in the compressed sensing approach. Notice that in our approach, the gradient information is included directly for each direction (variable). However, one may consider different ways to include this information. For instance, partial gradient measurements, e.g., an incomplete set of directional derivatives, may be provided.
We may therefore consider the following problem: \begin{itemize} \item Find a sparse expansion of $f(\bs{x})$ with \begin{align} f(\bs{z}^{(j)})\,\,&=\,\,f_j, \qquad\qquad\qquad\qquad\,\, \bs{z}^{(j)}\in \Xi, \label{eq:value1}\\ D_{\mathbf v_t}f(\bs{z}^{(j)})\,\,&=\,\,f'_{j,t},\qquad t=1,\ldots,k, \,\,\,\bs{z}^{(j)}\in \Xi, \label{eq:value2} \end{align} where $D_{\mathbf v_t}f(\bs{z}_j):=\innerp{\nabla f(\mathbf x),\mathbf v_t}|_{\mathbf x=\bs{z}_j}$ and $\mathbf v_t\in {\mathbb R}^d$ are directional vectors. Namely, we assume that both function values and the directional derivative information at the sampling points are known. \end{itemize} The above approach can be viewed as a generalization of the approach in the last section. Here, we have more flexibility to choose the directions $\{\mathbf v_t\}_t,$ and it is expected that a smart choice of $\{\mathbf v_t\}_t$ may lead to improved recovery results. However, this approach might not be of practical value, as it is unclear how one would obtain such directional derivatives in practice. Nevertheless, it can be viewed as an interesting mathematical problem, as discussed in \cite{xu_zhou_2018}. Besides the above approach, one may also be interested in the following mathematical problem: \begin{itemize} \item Find a sparse approximation of $f(\bs{x})$ with \begin{equation}\label{eq:hi} D_{\mathbf v_j}^{\tau_j}f(\bs{z}_j)\,\,=\,\, y_j, \quad \bs{z}_j\in \Xi, \quad j=1,\ldots,N, \end{equation} where $\mathbf v_j\in {\mathbb R}^d$ are directional vectors, and $\tau_j\in {\mathbb N}_0$ are non-negative integers. \end{itemize} Here, it is supposed that one knows either the $\tau_j$-order directional derivative of $f$ at $\bs{z}^{(j)}$ or the function value $f(\bs{z}^{(j)})$. If $\tau_j=0$, then (\ref{eq:hi}) means that we know only the function value of $f$ at $\bs{z}^{(j)}$, i.e., $y_j=f(\bs{z}^{(j)})$.
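To make the directional-derivative measurements concrete, the sketch below (our own illustrative code; the degree, direction, and coefficients are arbitrary choices) assembles one row of the corresponding measurement matrix for a two-dimensional tensor Legendre basis and checks it against a central finite difference:

```python
import itertools
import numpy as np
from numpy.polynomial import legendre as L

n = 4                                                   # illustrative max degree
idx = list(itertools.product(range(n + 1), repeat=2))   # 2D tensor index set

def basis_row(z, v=None):
    """Measurement row at a point z for the tensor Legendre basis
    psi_(a,b)(x) = P_a(x_1) P_b(x_2): function values if v is None,
    otherwise the directional derivative <grad psi, v>."""
    e = lambda k, t: L.legval(t, [0]*k + [1])
    de = lambda k, t: L.legval(t, L.legder([0]*k + [1]))
    if v is None:
        return np.array([e(a, z[0]) * e(b, z[1]) for a, b in idx])
    return np.array([v[0] * de(a, z[0]) * e(b, z[1])
                     + v[1] * e(a, z[0]) * de(b, z[1]) for a, b in idx])

# sanity check of a directional-derivative row against a finite difference
rng = np.random.default_rng(1)
c = rng.standard_normal(len(idx))
z, v = np.array([0.3, -0.2]), np.array([1.0, 2.0]) / np.sqrt(5)
f = lambda p: basis_row(p) @ c
h = 1e-5
fd = (f(z + h * v) - f(z - h * v)) / (2 * h)
print(abs(basis_row(z, v) @ c - fd))   # small, O(h^2)
```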
Notice that a main feature of this approach is that the locations (samples) for evaluating the function values and the gradient information are independent, while normally one assumes that function values and gradient information are evaluated at the same locations (which is more practical). Finally, we remark that for the gradient-enhanced approach the preconditioning matrix seems to be key to the recovery property. We believe that the matrices presented here are not necessarily optimal, and one may consider alternative choices, e.g., the Christoffel weighted approach in \cite{Narayan_2016Christoffel,JNZ} that is optimal for degree-asymptotic approximations. \section{Numerical examples} We now provide some numerical examples to show the performance of the gradient-enhanced $\ell_1$-minimization approach. For the implementation of the $\ell_1$ minimization, we employ the spectral projected gradient algorithm \cite{Vanden}, as implemented in the MATLAB package SPGL1 \cite{vanFrie2}. To compare the standard and gradient-enhanced $\ell_1$-minimization solutions, we will use \textit{standard} to denote the numerical results obtained by using the standard $\ell_1$-minimization, while we shall denote by \textit{gradient-enhanced} the numerical results obtained by using the gradient-enhanced $\ell_1$ approach. We shall also use \textit{standard-double} to denote the standard approach with ``doubled'' function values. More precisely, consider a two-dimensional example and suppose we have $N$ function values and $2N$ gradient values (with respect to each variable). Then, the full gradient-enhanced approach will use $3N$ pieces of information ($100\%$ of the information, i.e., $N$ function values and $2N$ gradient values). A $50\%$ gradient-enhanced approach would involve $N$ function values and $N$ gradient values (with respect to a randomly chosen direction/variable).
The \textit{standard-double} approach then stands for the standard approach with $3N$ function values. \subsection{Stability tests} We first show some stability tests comparing the preconditioned matrix $\widehat{\mathbf{\Phi}}=\mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}$ and the original matrix $\tilde{\mathbf{\Phi}}.$ This is done by computing the MIP constant in equation (\ref{eq:MIC}), which is a key index for stable sparse recovery. Notice that the smaller the MIP constant is, the better the recovery guarantee. We consider the Legendre expansion with Chebyshev sampling. For a fixed polynomial space, we show in Fig.~\ref{fig:legendre_MIC_via_samples} the MIP constants of $\widehat{\mathbf{\Phi}}$ and $\tilde{\mathbf{\Phi}}$ with respect to the number of samples, while Fig.~\ref{fig:legendre_MIC_via_number of PCE} presents the MIP constants, for a fixed number of samples, with respect to the number of expansion terms $M.$ In both cases, we also present the MIP constant of the matrix $\mathbf{\Phi}$, where no derivative information is included. It is clearly shown that the preconditioned matrix $\widehat{\mathbf{\Phi}}$ admits a much better behaved MIP constant (see the purple-triangular lines). It is also shown that the direct inclusion of derivative information (the matrix $\tilde{\mathbf{\Phi}}$) can actually destroy the stability of the matrix $\mathbf{\Phi}$ (see the blue lines and red lines). \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{MIC_legendre_with_chebysampling_d2n30s8.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{MIC_legendre_with_chebysampling_d6n5s8.pdf} \caption{The MIP constant for three matrices against the number of samples: $\widehat{\mathbf{\Phi}}$, $\tilde{\mathbf{\Phi}},$ and $ \mathbf{\Phi}$. Left: $d=2$, $n=30$. Right: $d=6$, $n=5$.
Legendre polynomial and Chebyshev samples.}\label{fig:legendre_MIC_via_samples} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{MIC_legendre_with_chebysampling_d3n80_via_pceterm.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{MIC_legendre_with_chebysampling_d6n80_via_pceterm.pdf} \caption{The MIP constant for three matrices against the number of PCE terms with a fixed number of samples: $\widehat{\mathbf{\Phi}}$, $\tilde{\mathbf{\Phi}},$ and $ \mathbf{\Phi}$. Left: $d=2$, $N=80$. Right: $d=6$, $N=80$. Legendre polynomial and Chebyshev samples.}\label{fig:legendre_MIC_via_number of PCE} \end{figure} \subsection{Benchmark Test: fixed sparsity} In this section, we assume that the target (exact) function has a sparse polynomial expansion, i.e., $f(\bs{x})=\sum _{j=1}^M c_j \psi_j(\bs{x})$ with $\|\mathbf{c}\|_{0}=s$, and attempt to recover this vector. In all our tests, we assume the random input is uniformly distributed, and the samples are chosen randomly with respect to the Chebyshev measure. Notice that numerical examples for the Hermite expansion can be found in \cite{Peng_2016gradient}. For a given sparsity level $s$, we fix $s$ coefficients of the polynomial while keeping the rest of the coefficients zero. The values of the $s$ non-zero coefficients are drawn as i.i.d. samples from a standard normal distribution. We approximate the PCE coefficients $\mathbf{c}$ via the gradient-enhanced approach from these generated data. We examine the frequency of successful recoveries. This is accomplished by running $100$ trials of the algorithms and counting the successful ones. A recovery is considered successful when the resulting coefficient vector $\mathbf{c}$ satisfies $\|\mathbf{c}-\mathbf{\tilde{c}}\|_{\infty}\leq{10^{-3}}.$ We consider the two-dimensional case first.
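As a self-contained stand-in for the SPGL1 solver, the equality-constrained $\ell_1$ problem can be recast as a linear program via the split $\mathbf{c}=\mathbf{u}-\mathbf{v}$ with $\mathbf{u},\mathbf{v}\geq 0$. The sketch below (our own illustrative code; a generic Gaussian matrix replaces $\mathbf{W}\tilde{\mathbf{\Phi}}\mathbf{P}$, and the sizes are arbitrary) mimics a single trial of the recovery test:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||c||_1 subject to A c = b, solved as an LP with c = u - v."""
    N, M = A.shape
    res = linprog(c=np.ones(2 * M),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * M), method="highs")
    return res.x[:M] - res.x[M:]

rng = np.random.default_rng(0)
N, M, s = 40, 100, 5
A = rng.standard_normal((N, M)) / np.sqrt(N)       # stand-in design matrix
c0 = np.zeros(M)
c0[rng.choice(M, s, replace=False)] = rng.standard_normal(s)
c_hat = basis_pursuit(A, A @ c0)
print(np.max(np.abs(c_hat - c0)))   # success is declared if below 1e-3
```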
In Figure~\ref{fig:legendre2d_gradient} (Left), we show the recovery probability against the number of sample points $N$ with a fixed sparsity $s=8$. To have a better understanding, in the right plot of Figure~\ref{fig:legendre2d_gradient}, we also present the recovery probability with respect to sparsity $s$ with a fixed number of random samples $N=35$. Both plots show that the use of gradient information can indeed improve the recovery rate, and furthermore, the more gradient information is included, the better the recovery results obtained. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Proi_GraLeCh_viam_partdim_d2n20s8.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Proi_GraLeCh_vias_partdim_d2n20m50.pdf} \caption{Left: Recovery probability against number of samples, $s = 8$; Right: Recovery probability against sparsity, $N = 50$. Two dimensional tests with $d = 2, n = 20$. Legendre polynomial and Chebyshev samples.}\label{fig:legendre2d_gradient} \end{figure} We now consider the 10-dimensional case. In Figure~\ref{fig:legendre10d_gradient} (Left), we show the recovery probability against the number of sample points $N$ with a fixed sparsity $s=6$, and in the right plot, we present the recovery probability with respect to sparsity with a fixed number of points $N=50$. In this example, we test the $10\%$ and $20\%$ gradient-enhanced approaches, meaning that only one or two partial derivatives are involved in the $\ell_1$ minimization. Once again, better performance can be observed when gradient information is included. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Proi_GraLeCh_viam_partdim_d10n3s6.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Proi_GraLeCh_vias_partdim_d10n3m70.pdf} \caption{Left: Recovery probability against number of samples, $s = 6$; Right: Recovery probability against sparsity, $N = 70$.
Ten-dimensional tests with $d = 10$, $n = 3$. Legendre polynomial and Chebyshev samples.}\label{fig:legendre10d_gradient} \end{figure} \subsection{Applications to function approximations} In this section, we demonstrate the utility of using gradient data to build PCE approximations for different kinds of test functions, defined as follows. Sphere function: \begin{equation*} f_1(x)=\sum_{i=1}^dx_i^2, \end{equation*} Gaussian\ function: \begin{equation*} f_2(x)=\exp\bigg(-\sum_{i=1}^{d}0.01(1/2(x_{i}+1)-0.375)^2\bigg), \end{equation*} Sinusoids\ function: \begin{equation*} f_3(x)=\sum_{i=1}^d\Big(0.3+\sin(16/15x_i-0.7)+\sin^2(16/15x_i-0.7)\Big). \end{equation*} In Figure~\ref{fig:legendref1_gradient}, we consider approximating the sphere function with Legendre polynomial chaos and random evaluations using the $\ell_1$ approach. The left plot shows the root-mean-square error (RMSE) against the number of sample points $N$ for the two-dimensional case (with $n=20$ and $M=231$), while the right plot presents the RMSE against the number of sample points for the 10-dimensional case (with $n=3$ and $M=286$). In both cases, it is clearly shown that the use of gradient information can dramatically enhance the approximation accuracy. Similar tests are done for the Gaussian and Sinusoids functions, and the numerical results are presented in Figures~\ref{fig:legendref2_gradient} and \ref{fig:legendref3_gradient}, respectively. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Errnew_Sphere_GLC_L1_viam_pdim_d2n20.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Errnew_Sphere_GLC_L1_viam_pdim_d10n3.pdf} \caption{Discrete $L_2$ error against number of samples with random points of $f_1(x)$. Legendre polynomial with Chebyshev sampling. Left: $d = 2, n = 20$.
Right: $d = 10, n = 3$.}\label{fig:legendref1_gradient} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Err_Gaussian_func_GraLeCh_L1_viam_partanydim_d2n20.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Err_Gaussian_func_GraLeCh_L1_viam_partanydim_d5n6.pdf} \caption{ Discrete $L_2$ error against number of samples with random points of $f_2(x)$. Legendre polynomial and Chebyshev samples. Left: $d = 2, n = 20$. Right: $d = 6, n = 5$.}\label{fig:legendref2_gradient} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Errnew_Sinusoids_GLC_L1_viam_dim_d2n20.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{Errnew_Sinusoids_GLC_L1_viam_dim_d5n6.pdf} \caption{Discrete $L_2$ error against number of samples with random points of $f_3(x)$. Legendre polynomial and Chebyshev samples. Left: $d = 2, n = 20$. Right: $d = 5, n = 6$.}\label{fig:legendref3_gradient} \end{figure} \subsection{Elliptic PDE with Random Inputs} We next consider the following stochastic linear elliptic PDE problem, posed on a two-dimensional spatial domain: \begin{equation}\label{eq:PDEmodel} \begin{cases} -\nabla\cdot(a(\mathbf{y},\omega)\nabla u(\mathbf{y},\omega))=f(\mathbf{y},\omega)\quad \textmd{in} \ \mathcal{D}\times\Omega,\\ u(\mathbf{y},\omega)=0 \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \textmd{on} \ \partial\mathcal{D}\times\Omega, \end{cases} \end{equation} with spatial domain $\mathcal{D}=[0,1]^{2}$. We take a deterministic load $f(\mathbf{y},\omega)=\cos( y_{1})\sin( y_{2})$ for these numerical examples.
We construct the random diffusion coefficient $a_{N}(\mathbf{y},\omega)$ with one-dimensional spatial dependence as in \cite{Babuka_2010SCEPDE}: \begin{equation*} \log\big(a_{N}(\mathbf{y},\omega)-0.5\big)=1+\xi_{1}(\omega)\Big(\sqrt{\pi}L/2\Big)^{1/2}+\sum_{i=2}^{d} \zeta_{i}g_{i}(\mathbf{y})\xi_{i}(\omega), \end{equation*} where \begin{equation*} \zeta_{i}:=(\sqrt{\pi}L)^{1/2}\exp\Big(\frac{-(\lfloor\frac{i}{2}\rfloor\pi L)^{2}}{8}\Big), \quad i>1 \end{equation*} and \begin{equation*} g_{i}(\mathbf{y}):= \begin{cases} \sin\left(-\lfloor\frac{i}{2}\rfloor\pi y_{1}\right), \,\,\, i \ \textmd{even},\\[12pt] \cos\left(-\lfloor\frac{i}{2}\rfloor\pi y_{1}\right), \,\,\, i \ \textmd{odd}. \end{cases} \end{equation*} Here $\{\xi_{i}\}^{d}_{i=1}$ are uniformly distributed on the interval $[-1,1]$ and mutually independent. Hence, a family of Legendre polynomials is used to approximate the quantities of interest with respect to $\bs \xi$. Here $\mathbf{y}$ represents the spatial coordinate; the random diffusion coefficient $a_{N}(\mathbf{y},\omega)$ used here depends only on $y_1$. For $y_{1}\in[0,1]$, let $L=1/12$ be the desired physical correlation length for $a(\mathbf{y},\omega)$. The deterministic elliptic equation is solved by a standard finite element method on a fine mesh. The convergence rates are shown in Fig.~\ref{fig:lcPDE3d_gradient} and Fig.~\ref{fig:lcPDE10d_gradient} for a low-dimensional case ($d=3,n=10$) and a high-dimensional case ($d=10,n=4$), respectively. In the numerical tests, we employ an FEM solver as the deterministic solver, and the Monte Carlo method with $6000$ samples is used to obtain the reference mean and standard deviation of the solution. The gradient information is obtained by solving the adjoint equation as in \cite{jakeman}. Finally, the numerical errors of our approach for the mean and standard deviation are presented.
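For readers who wish to reproduce the random input, the coefficient model above can be transcribed directly; the sketch below (our own illustrative code, for a single spatial point and realization; the FEM and adjoint solves are omitted) also makes apparent that $a_N > 0.5$ pointwise, so the PDE stays uniformly elliptic:

```python
import numpy as np

def diffusion_coeff(y1, xi, L=1/12):
    """Evaluate a_N(y, omega) at spatial coordinate y1 (the coefficient
    depends only on y_1) for one realization xi in [-1, 1]^d."""
    d = len(xi)
    log_a = 1.0 + xi[0] * np.sqrt(np.sqrt(np.pi) * L / 2)
    for i in range(2, d + 1):
        zeta = np.sqrt(np.sqrt(np.pi) * L) * np.exp(-((i // 2) * np.pi * L)**2 / 8)
        g = (np.sin(-(i // 2) * np.pi * y1) if i % 2 == 0
             else np.cos(-(i // 2) * np.pi * y1))
        log_a += zeta * g * xi[i - 1]
    return 0.5 + np.exp(log_a)          # log(a_N - 0.5) = log_a

rng = np.random.default_rng(0)
a = diffusion_coeff(0.4, rng.uniform(-1, 1, size=10))
print(a > 0.5)   # True: a_N is uniformly bounded away from zero
```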
The figures show once again that the gradient-enhanced approach performs much better than the standard $\ell_1$ approach. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{new_Meanerr_LeChTD_GraL1_viam_d3n10.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{new_Varerr_LeChTD_GraL1_viam_d3n10.pdf} \caption{Error in $\ell_2$ norm of the mean and variance between the reference and approximation for the various gradient-enhanced methods as a function of the number of samples $N$. $d = 3, n = 10.$}\label{fig:lcPDE3d_gradient} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth,height=0.24\textheight]{elliptictd321_Meanerr_LeChTD_GraL1_viam_d10n4.pdf}\quad \includegraphics[width=0.48\textwidth,height=0.24\textheight]{elliptictd321_Varerr_LeChTD_GraL1_viam_d10n4.pdf} \caption{Error in $\ell_2$ norm of the mean and variance between the reference and approximation for the various gradient-enhanced methods as a function of the number of samples $N$. $d = 10, n =4.$}\label{fig:lcPDE10d_gradient} \end{figure} \section{Conclusion} In this work, we presented a general framework for gradient-enhanced $\ell_1$-minimization for constructing sparse polynomial chaos expansions. By designing appropriate preconditioners for the measurement matrix, we showed that the inclusion of derivative information can indeed improve the recovery property. The framework is quite general, and applies to problems with both bounded and unbounded random inputs. Several numerical examples are presented to support the theoretical findings. \section*{Acknowledgments} We would like to thank Prof. Dongbin Xiu from Ohio State University for introducing this topic to us a few years ago, and also for his very helpful comments.
\section{Introduction} Consider a random vector $(Y,X')'$ with distribution $P$ and finite second moments, where the outcome $Y$ takes values in $\Bbb{R}$ and the covariate $X$ takes values $ x \in \mathcal{X}$, a Borel subset of $\Bbb{R}^d$. Denote the conditional expectation function map $x \mapsto {\mathrm{E}}[Y \mid X =x]$ by $\gamma^*_0$. We consider a function $\gamma_0$, given by $x \mapsto b(x)'\beta_0$, as a sparse linear approximation to $\gamma^*_0$, where $b$ is a $p$-dimensional vector, a dictionary of basis functions, mapping $\mathcal{X}$ to $\Bbb{R}^p$. The dimension $p$ here can be large, potentially much larger than the sample size. Our goal is to construct high-quality inference methods for a real-valued linear functional of $\gamma_0$ given by: $$ \theta_0 = {\mathrm{E}} m(X, \gamma_0) = \int m(x, \gamma_0) d F(x), $$ where $F$ is the distribution of $X$ under $P$, for example the average derivative and other functionals listed below. (See Section 2 below regarding formal requirements on $m$). When the approximation error $\gamma_0 - \gamma^*_0$ is small, our inference will automatically re-focus on a more ideal target -- the linear functional of the conditional expectation function, $$\theta_0^* = {\mathrm{E}} m(X, \gamma^*_0).$$ \begin{example}[\textbf{Average Derivative}] Consider an average derivative in the direction $a$: $$ \theta_0 = {\mathrm{E}} a'\nabla_x \gamma_0(X), \quad a \in \Bbb{R}^{d}. $$ This functional corresponds to an approximation to the effect of a policy that shifts the distribution of covariates via the map $x \mapsto x + a$, so that $$ \int (\gamma_0(x+ a) - \gamma_0(x)) d F(x) \approx \int a' \nabla_x \gamma_0(x) d F(x). $$ \end{example} \begin{example}[\textbf{Policy Effect from a Distribution Shift}] Consider the effect from a counterfactual change of covariate distribution from $F_0$ to $F_1$: $$ \theta_0 = \int \gamma_0(x) d (F_1(x) - F_0(x)) = \int \gamma_0(x) [d (F_1(x) - F_0(x))/dF(x)] d F(x).
$$ \end{example} \begin{example}[\textbf{Average Treatment Effect}] Consider the average treatment effect under unconfoundedness. Here $X = (Z,D)$ and $\gamma_0(X) = \gamma_0(D,Z)$, where $D \in \{0,1\}$ is the indicator of the receipt of the treatment, and $$ \theta_0 = \int (\gamma_0(1,z) - \gamma_0(0,z)) d F(z) = \int \gamma_0(d,z) (1(d=1) - 1(d=0)) dF(x). $$ \end{example} We consider $\gamma \mapsto {\mathrm{E}} m(X, \gamma)$ as a continuous linear functional on $\Gamma_b$, the linear subspace of $L^2(F)$ spanned by the given dictionary $x \mapsto b(x)$. Such functionals can be represented via the Riesz representation: $$ {\mathrm{E}} m(X, \gamma) = {\mathrm{E}} \gamma(X) \alpha_0(X), $$ where the Riesz representer $\alpha_0(X) = b(X)'\rho_0$ is identified by the system of equations: $$ {\mathrm{E}} m(X, b) = {\mathrm{E}} b(X) b(X)'\rho_0, $$ where $m(X, b): = \{m(X,b_j)\}_{j=1}^p$ is a componentwise application of the linear functional $m(X, \cdot)$ to $b = \{ b_j\}_{j=1}^p$. Having the Riesz representer allows us to write the following ``doubly robust'' representation for $\theta_0$: $$ \theta_0 = {\mathrm{E}}[ m(X, \gamma_0) + \alpha_0(X) ( Y- \gamma_0(X)) ]. $$ This representation is approximately invariant to small perturbations of the parameters $\gamma_0$ and $\alpha_0$ around their true values (see Lemma 2 below for details), a property sometimes referred to as Neyman-type orthogonality (see, e.g., \cite{DML}), making this representation a good one to use as a basis for estimation and inference in modern high-dimensional settings. (See also Proposition 5 in \cite{LR} for a formal characterization of scores of this sort as having the double robustness property in the sense of \cite{robins:dr}). Our estimation and inference will explore an empirical analog of this equation, given a random sample $(Y_i, X_i')_{i=1}^n$ generated as i.i.d. copies of $(Y,X')$.
Instead of the unknown $\gamma_0$ and $\beta_0$ we will plug in estimators obtained using $\ell_1$-regularization. We shall use sample splitting in the form of cross-fitting to operate under weak assumptions on the problem, requiring only approximate sparsity of either $\beta_0$ or $\rho_0$, with some weak restrictions on the sparsity indexes. For example, if both parameter values are sparse, then the product of their effective dimensions has to be much smaller than $n$, the sample size. Moreover, one of the parameter values, but not both, can actually be ``dense" and estimated at the so-called ``slow" rate, as long as the other parameter is sparse, having effective dimension smaller than $\sqrt {n}$. We establish that the resulting ``double" (or de-biased) machine learning (DML) estimator $\hat \theta$ concentrates in a $1/\sqrt{n}$ neighborhood of the target, with deviations controlled by the normal laws, $$ \sup_{t \in \Bbb{R} }\Big |\Pr ( \sqrt{n} \sigma^{-1} (\hat \theta - \theta_0) \leq t) - \Phi(t) \Big | \leq \varepsilon_n, $$ where non-asymptotic bounds on $\varepsilon_n$ are also given. As the dimension of $b$ grows, suppose that the subspace $\Gamma_b$ becomes larger and approximates the infinite-dimensional linear subspace $\Gamma^* \subseteq L^2(P)$ that contains the true $\gamma^*_0$. We assume the functional $\gamma \mapsto {\mathrm{E}} m(X, \gamma)$ is continuous on $\Gamma^*$ in this case. If the approximation bias is small, $$\sqrt{n}(\theta_0 - \theta_0^*) \to 0,$$ our inference will automatically focus on the ``ideal" target $\theta^*_0$. Therefore our inference can be interpreted as targeting the functionals of the conditional expectation function in the regimes where we can successfully approximate them. However, our approach does not hinge on this property and retains interpretability and good properties under misspecification.
It is interesting to note that in the latter case $\alpha_0$ will approximate the true Riesz representer $\alpha^*_0$ for the linear functionals on $\Gamma^*$, identified by the system of equations: $$ {\mathrm{E}} m(X, \gamma) = {\mathrm{E}} \gamma(X) \alpha^*_0(X), \quad \forall \gamma \in \Gamma^*. $$ Note that $\theta_0^*$ has a ``doubly robust" representation: \begin{equation}\label{DR} \theta^*_0 = {\mathrm{E}}[ m(X, \gamma^*_0) + \alpha^*_0(X) ( Y- \gamma^*_0(X)) ], \end{equation} which is invariant to perturbations of $\gamma^*_0$ and $\alpha^*_0$. Hence, our approach can be viewed as approximately solving the empirical analog of these equations, in the regimes where $\gamma_0$ does approximate $\gamma^*_0$. In such cases our estimator attains the semi-parametric efficiency bound, because its influence function is in fact the efficient score for $\theta_0$; see \cite{vaart:1991,newey94}. When $\Gamma^*= L^2(F)$ and the functional $\gamma \mapsto {\mathrm{E}} m(X, \gamma)$ is continuous on $\Gamma^*$, the Riesz representer $ \alpha^*_0(X)$ belongs to $L^2(F)$ and can be stated explicitly in many examples: \begin{itemize} \item[] in Example 1: $\alpha^*_0(x) = -\partial_x \log f(x)$, where $f(x) = d F(x)/dx$, \item[] in Example 2: $\alpha^*_0(x) = d (F_1(x) - F_0(x))/d F(x)$, \item[] in Example 3: $\alpha^*_0(x) = (1(d=1) - 1(d= 0) )/P(d \mid z)$. \end{itemize} However, such closed-form solutions are not available in many other examples, or when $\Gamma^*$ is smaller than $L^2(F)$, which is probably the most realistic situation occurring in practice. Using closed-form solutions for Riesz representers $\alpha^*_0$ in several leading examples and their machine learning estimators, \cite{DML} defined DML estimators of $\theta^*_0$ in high-dimensional settings and established their good properties. 
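The closed-form representer for Example 3 can be checked by direct enumeration. The sketch below uses a toy discrete design of our own choosing (the dictionaries \texttt{p\_z}, \texttt{p\_d}, and \texttt{gamma} are hypothetical) to verify that $\alpha^*_0(x) = (1(d=1) - 1(d=0))/P(d \mid z)$ indeed represents the ATE functional, i.e., that ${\mathrm{E}}[\alpha^*_0(X)\gamma(X)] = {\mathrm{E}}[\gamma(1,Z) - \gamma(0,Z)]$ for an arbitrary $\gamma$.

```python
import itertools

p_z = {0: 0.4, 1: 0.6}                 # P(Z = z)
p_d = {0: 0.3, 1: 0.7}                 # propensity score P(D = 1 | Z = z)
gamma = {(0, 0): 1.0, (1, 0): 2.5,     # an arbitrary function gamma(d, z)
         (0, 1): -1.0, (1, 1): 0.5}

# lhs: E[alpha*(X) gamma(X)] with alpha*(d, z) = (1(d=1) - 1(d=0)) / P(d | z)
lhs = 0.0
for d, z in itertools.product([0, 1], [0, 1]):
    pr_dz = p_z[z] * (p_d[z] if d == 1 else 1 - p_d[z])   # P(D = d, Z = z)
    alpha = (1.0 if d == 1 else -1.0) / (p_d[z] if d == 1 else 1 - p_d[z])
    lhs += pr_dz * alpha * gamma[(d, z)]

# rhs: E[gamma(1, Z) - gamma(0, Z)], the ATE functional applied to gamma
rhs = sum(p_z[z] * (gamma[(1, z)] - gamma[(0, z)]) for z in [0, 1])
assert abs(lhs - rhs) < 1e-12
```

The same enumeration argument works for any discrete distribution of $(D, Z)$ with $0 < P(D=1 \mid Z=z) < 1$.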
Compared to this approach, the new approach proposed in this paper has the following advantages and some limitations: \begin{enumerate} \item It automatically estimates the Riesz representer $\alpha_0$ from the empirical analog of the equations that implicitly characterize it. \item It does not rely on closed-form solutions for $\alpha^*_0$, which generally are not available. \item When closed-form solutions for $\alpha^*_0$ are available, it avoids directly estimating $\alpha^*_0$. For example, it avoids estimating derivatives of densities in Example 1 or inverting estimated propensity scores $P(d \mid z)$ in Example 3. Rather, it estimates the projection $\alpha_0$ of $\alpha^*_0$ on the subspace $\Gamma_b$, which is a much simpler problem when the dimension of $X$ is high. \item Our approach remains interpretable under misspecification -- when approximation errors are not small, we simply target inference on $\theta_0$ instead of $\theta^*_0$. \item While the current paper focuses only on sparse regression methods, the approach readily extends to cover other machine learning estimators $\hat \gamma$ as long as we can find (numerically) dictionaries $b$ that approximately span the realizations of $\hat \gamma - \gamma_0$, where $\gamma_0$ is the probability limit of $\hat \gamma$. \item The current approach is limited to linear functionals, but in ongoing work we are able to extend the approach to nonlinear functionals by first performing a linear expansion and then applying our new methods to the linear part of the expansion. \end{enumerate} The paper also builds upon ideas in classical semi-parametric learning theory with low-dimensional $X$, which focused inference on the ideal $\theta_0^*$ using traditional smoothing methods for estimating the nuisance parameters $\gamma_0$ and $\alpha_0$ [\cite{vaart:1991,newey94,bickel:semibook,robins:dr,vdV}], which do not apply to the current high-dimensional setting.
Our paper also builds upon and contributes to the literature on modern orthogonal/debiased estimation and inference [\cite{c.h.zhang:s.zhang,BelloniChernozhukovHansen2011,belloni2014pivotal,BCK-LAD,javanmard2014confidence,JM:ConfidenceIntervals,JM2015,vandeGeerBuhlmannRitov2013,ning2014general,CHS:AnnRev,neykov2015unified,HZhou,JV:cov,JV:eff,JV:m,bradic:QR,zhu:breaking,zhu2017linear}], which focused on inference on the coefficients in high-dimensional linear and generalized linear regression models, without considering the general linear functionals analyzed here. \\ \textbf{Notation.} Let $W = (Y, X')'$ be a random vector with law $P$ on the sample space $\mathcal{W}$, and let $W_1^n = (Y_i, X_i')_{i=1}^n$ denote i.i.d. copies of $W$. All models and the probability measure $P$ can be indexed by the sample size $n$, so that the models and their dimensions can change with $n$, allowing any of the dimensions to increase with $n$. We use notation from empirical process theory; see \cite{VW}. Let ${\mathbb{E}_I} f$ denote the empirical average of $f(W_i)$ over $i \in I \subset \{1,..., n\}$: $${\mathbb{E}_I} f := {\mathbb{E}_I} f(W) = | I | ^{-1} \sum_{i \in I} f(W_i).$$ Let $\Bbb{G}_I$ denote the empirical process over functions $f: \mathcal{W} \to \Bbb{R}^p$ and $ i \in I$, namely $$\mathbb{G}_I f := \mathbb{G}_I f (W) := | I | ^{-1/2} \sum_{ i \in I} (f(W_i) - P f),$$ where $Pf := P f(W) := \int f(w) d P(w)$. We denote the $L^q(P)$ norm of a measurable function $f$ mapping the support of $W$ to the real line, equivalently the $L^q(P)$ norm of the random variable $f(W)$, by $\| f \|_{P,q} = \| f(W)\|_{P,q}$. We use $\| \cdot\|_q$ to denote the $\ell_q$ norm on $\Bbb{R}^d$. For a differentiable map $x \mapsto f(x)$, mapping $\mathbb{R}^d$ to $\mathbb{R}^k$, we use $\partial_{x'} f$ to abbreviate the partial derivatives $(\partial/\partial x') f$, and we correspondingly use the expression $\partial_{x'} f(x_0)$ to mean $\partial_{x'} f (x) \mid_{x = x_0}$, etc.
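As a small illustration of the empirical-process notation (using a stand-in distribution of our own choosing), for $W \sim \mathrm{Exp}(1)$ and $f(w) = w^2$ we have $Pf = {\mathrm{E}}[W^2] = 2$, and $\mathbb{G}_I f = |I|^{1/2}(\mathbb{E}_I f - Pf)$:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.exponential(size=1000)            # i.i.d. copies of W, with P = Exp(1)
I = np.arange(200)                        # an index subset I of {1, ..., n}
f = W[I] ** 2                             # f(W_i) for i in I; here Pf = 2

E_I = f.mean()                            # empirical average  E_I f
G_I = (f - 2.0).sum() / np.sqrt(len(I))   # empirical process  G_I f

# G_I f coincides with |I|^{1/2} (E_I f - P f):
assert abs(G_I - np.sqrt(len(I)) * (E_I - 2.0)) < 1e-9
```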
We use $x'$ to denote the transpose of a column vector $x$. \section{The DML with Regularized Riesz Representers} \subsection{ Sparse Approximations for the Regression Function and the Riesz Representer} We work with the setup above. Consider the conditional expectation function $x \mapsto \gamma^*_0(x) = {\mathrm{E}} [Y \mid X = x]$ such that $\gamma^*_0 \in L^2(F)$ and a $p$-vector of dictionary terms $x \mapsto b(x) = (b_j(x))_{j=1}^p$ such that $b \in L^2(F)$. The dimension $p$ of the dictionary can be large, potentially much larger than $n$. We approximate $\gamma^*_0$ as $$ \gamma^*_0 = \gamma_0 + r_\gamma := b'\beta_0 + r_\gamma, $$ where $r_\gamma$ is the approximation error, and $\gamma_0 := b'\beta_0$ is the ``best sparse linear approximation" defined via the following Dantzig Selector type problem (\cite{candes2007dantzig}). \begin{definition}[Best Sparse Linear Predictor] Let $\beta_0$ be a minimal $\ell_1$-norm solution to the approximate best linear predictor equations $$ \beta_0 \in \arg \min \|\beta \|_1 : \| {\mathrm{E}} [ b(X) ( Y - b(X)'\beta) ] \|_\infty \leq \lambda^\beta_0. $$ When $\lambda^\beta_0 = 0$, $\beta_0$ becomes the best linear predictor (BLP) parameter. \end{definition} We refer to the resulting approximation as ``sparse", since solutions $\beta_0$ often are indeed sparse. Note that since ${\mathrm{E}}[Y \mid X] = \gamma^*_0(X)$, the approximation error $r_\gamma$ is approximately orthogonal to $b$: $$ \| {\mathrm{E}} [ b(X) ( Y - b(X)'\beta_0) ] \|_\infty = \| {\mathrm{E}} [ b(X) ( \gamma^*_0(X) - b(X)'\beta_0) ]\|_\infty = \| {\mathrm{E}} [ b(X) r_\gamma(X) ] \|_\infty \leq\lambda^\beta_0. $$ Consider a linear subspace $\Gamma^* \subset L^2(F)$ that contains $\Gamma_b$, the linear subspace generated by $b$. In some of the asymptotic results that follow, we can have $\Gamma_b \uparrow \Gamma^*$ as $p \to \infty$. \begin{itemize} \item[C.]
\textit{For each $x \in \mathcal{X}$, consider a linear map $\gamma \mapsto m(x,\gamma)$ from $\Gamma^*$ to $\Bbb{R}$, such that for each $\gamma \in \Gamma^*$, the map $x \mapsto m(x,\gamma)$ from $\mathcal{X}$ to $\Bbb{R}$ is measurable, and the functional $\gamma \mapsto {\mathrm{E}} m(X, \gamma)$ is continuous on $\Gamma^*$ with respect to the $L^2(P)$ norm.} \end{itemize} Under the continuity condition in C, this functional admits a Riesz representer $\alpha^*_0\in L^2(F)$. We approximate the Riesz representer via: $$ \alpha^*_0(X) = b(X)'\rho_0 + r_\alpha(X), $$ where $r_\alpha(X)$ is the approximation error, and $b(X)'\rho_0$ is the best sparse linear approximation defined as follows. \begin{definition}[Best Sparse Linear Riesz Representer] Let $\rho_0$ be a minimal $\ell_1$-norm solution to the approximate Riesz representation equations: $$ \rho_0 \in \arg \min \| \rho\|_1: \| {\mathrm{E}} m(X, b) - {\mathrm{E}} b(X) b(X)' \rho \|_\infty \leq \lambda^\rho_0, $$ where $\lambda^\rho_0$ is a regularization parameter. When $\lambda^\rho_0 =0$, we obtain $\alpha_0(X) = b(X)'\rho_0$, a Riesz representer for the functionals ${\mathrm{E}} m(X, \gamma)$ when $\gamma \in \Gamma_b.$ \end{definition} As before, we refer to the resulting approximation as ``sparse", since the solutions to this problem are often sparse. Since ${\mathrm{E}} \alpha^*_0(X) b(X) = {\mathrm{E}} m(X, b)$, we conclude that the approximation error $r_\alpha(X)$ is approximately orthogonal to $b(X)$: $$\| {\mathrm{E}} (\alpha^*_0(X) - b(X)'\rho_0) b(X) \|_\infty = \| {\mathrm{E}} [r_\alpha (X) b(X) ] \|_\infty \leq \lambda^{\rho}_0.$$ The estimation will be carried out using the sample analogs of the problems above, and is a special case of the following Dantzig Selector-type problem. \begin{definition}[\textbf{Regularized Minimum Distance (RMD)}] Consider a parameter $t \in T \subset \Bbb{R}^p$, such that $T$ is a convex set with $ \|T\|_1:= \sup_{t \in T} \| t\|_1 \leq B$.
Consider the moment functions $t \mapsto g(t) $ and the estimated function $t \mapsto \hat g(t) $, mapping $T$ to $\Bbb{R}^p$: $$ g(t) = G t + M; \quad \hat g(t) = \hat G t + \hat M, $$ where $G$ and $\hat G$ are $p$ by $p$ non-negative-definite matrices and $M$ and $\hat M$ are $p$-vectors. Assume that $t_0$ is the target parameter, well defined by: \begin{equation}\label{estimand:RMD} t_0 \in \arg \min \|t\|_1: \| g(t) \|_\infty \leq \lambda_0, \quad t \in T. \end{equation} Define the RMD estimator $\hat t$ by solving $$ \hat t \in \arg \min \| t\|_1: \|\hat g(t) \|_\infty \leq \lambda_0 + \lambda_1, \quad t \in T, $$ where $\lambda_1 $ is chosen such that $ \|\hat g(t_0) - g(t_0)\|_\infty \leq \lambda_1$ with probability at least $1-\epsilon_n$. \end{definition} We define the estimators of $\beta_0$ and $\rho_0$ over a subset of the data, indexed by a non-empty subset $A$ of $\{1,..., n\}$. \begin{definition}[RMD Estimator for Sparse BLP]\label{D.1} Define $\hat \beta_A$ as the RMD estimator with parameters $t_0 = \beta_0$, $T$ a convex set with $\| T\|_1 \leq B$, and $$ \hat G= \mathbb{E}_A b b', \quad G = {\mathrm{E}} b b', \quad \hat M = -\mathbb{E}_A Y b, \quad M = -{\mathrm{E}} Y b(X). $$ \end{definition} \begin{definition}[RMD Estimator for Sparse Riesz Representer]\label{D.2} Define $\hat \rho_A$ as the RMD estimator with parameters $t_0 = \rho_0$, $T$ a convex set with $\| T\|_1 \leq B$, and $$ \hat G = \mathbb{E}_A b b', \quad G = {\mathrm{E}} b b', \quad \hat M = -\mathbb{E}_A m(X, b), \quad M = -{\mathrm{E}} m(X, b). $$ \end{definition} \subsection{Properties of RMD Estimators} Consider sequences of constants $\ell_{1n} \geq 1$, $\ell_{2n} \geq 1$, $B_n \geq 0$, and $\epsilon_n \searrow 0$, indexed by $n$.
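The RMD program is a linear program once written in the variables $(t^+, t^-)$ with $t = t^+ - t^-$. The following sketch (assuming \texttt{scipy} is available; the test problem and tuning values are illustrative choices of ours) solves $\min \|t\|_1$ subject to $\|\hat G t + \hat M\|_\infty \leq \lambda$ and checks feasibility and $\ell_1$-minimality against a sparse feasible point.

```python
import numpy as np
from scipy.optimize import linprog

def rmd(G_hat, M_hat, lam):
    """min ||t||_1  subject to  ||G_hat t + M_hat||_inf <= lam,
    written as an LP in (t_plus, t_minus) with t = t_plus - t_minus."""
    p = len(M_hat)
    c = np.ones(2 * p)                        # objective: sum(t+) + sum(t-)
    A = np.hstack([G_hat, -G_hat])            # A @ (t+, t-) = G_hat t
    res = linprog(c,
                  A_ub=np.vstack([A, -A]),
                  b_ub=np.concatenate([lam - M_hat, lam + M_hat]),
                  bounds=[(0, None)] * (2 * p), method="highs")
    return res.x[:p] - res.x[p:]

rng = np.random.default_rng(2)
p = 5
R = rng.normal(size=(p, p))
G_hat = R @ R.T / p + np.eye(p)               # a positive-definite Gram proxy
t_star = np.array([1.0, -2.0, 0.0, 0.0, 0.0]) # a sparse feasible point
M_hat = -G_hat @ t_star                       # so that g(t_star) = 0 exactly
t_hat = rmd(G_hat, M_hat, lam=0.1)

assert np.max(np.abs(G_hat @ t_hat + M_hat)) <= 0.1 + 1e-6  # feasibility
assert np.abs(t_hat).sum() <= np.abs(t_star).sum() + 1e-6   # minimal l1 norm
```

The estimators of Definitions \ref{D.1} and \ref{D.2} are obtained by plugging in the corresponding $(\hat G, \hat M)$ built from the subsample indexed by $A$.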
\begin{itemize} \item[MD] \textit{We have that $t_0 \in T$ with $\|T\|_1:= \sup_{t \in T} \|t \|_1 \leq B_n$, and the empirical moments obey the following bounds with probability at least $1 - \epsilon_n$: $$ \sqrt{n} \| \hat G - G\|_\infty \leq \ell_{1n} \text{ and } \sqrt{n} \|\hat M - M\|_\infty \leq \ell_{2n}. $$} \end{itemize} Note that in many applications the factors $\ell_{1n}$ and $\ell_{2n}$ can be chosen to grow slowly, like $\sqrt{ \log (p \vee n) }$, using self-normalized moderate deviation bounds (\cite{Shao,belloni2014pivotal}), under mild moment conditions, without requiring sub-Gaussianity. Define the identifiability factor for $t_0 \in T$ as: $$s^{-1}(t_0) := \inf_{\delta \in R(t_0) } | \delta'G \delta|/\| \delta \|^2_1, $$ where $R(t_0)$ is the restricted set: $$ R(t_0) := \{\delta: \| t_0+ \delta\|_1 \leq \|t_0\|_1, \quad t_0 + \delta \in T\},$$ and where $s^{-1}(t_0) := \infty$ if $t_0 = 0$. The restricted set contains the estimation error $\hat t - t_0$ for RMD estimators with probability at least $1- \epsilon_n$. We call the inverse of the identifiability factor, $s(t_0)$, the ``effective dimension", as it captures the effective dimensionality of $t_0$; see the remark below. \begin{Remark}[Identifiability and Effective Dimension] The identifiability factors were introduced in \cite{CCK} as a generalization of the restricted eigenvalue of \cite{BickelRitovTsybakov2009}. Indeed, given a vector $\delta \in \Bbb{R}^p$, let $\delta_A$ denote the vector with the $j$-th component set to $\delta_j$ if $j \in A$ and $0$ if $j \not \in A$.
Then $s^{-1} (t_0) \geq k/(4s)$, or equivalently $$s(t_0) \leq 4 s/k,$$ where $k$ is the restricted eigenvalue: $ k := \inf\{ |\delta'G\delta|/\|\delta_M\|^2_2: \delta \neq 0, \|\delta_{M^c}\|_1 \leq \|\delta_M\|_1\},$ $M = \text{support} (t_0)$, $M^c = \{1,...,p\} \setminus M$, and $s = \| t_0 \|_0$ is the number of non-zero components of $t_0$. The inequality follows since $\| t_0+ \delta\|_1 \leq \|t_0\|_1$ implies $\|\delta_{M^c}\|_1 \leq \|\delta_M\|_1$, so that $ \| \delta\|^2_1 \leq 4 \|\delta_{M}\|^2_1 \leq 4 s \|\delta_M\|_2^2$. Hence we can view $s(t_0)$ as a measure of the effective dimension of $t_0$. \end{Remark} \begin{Lemma}[\textbf{Regularized Minimum Distance Estimation}]\label{lemma:RMD} Suppose that MD holds. Let $$\bar \lambda := \tilde \ell_n/\sqrt{n}, \quad \tilde \ell_n := (\ell_{1n} B_n + \ell_{2n}).$$ Define the RMD estimand $t_0$ via (\ref{estimand:RMD}) with $\lambda_0 \vee \lambda_1 \leq \bar \lambda$. Then with probability $1- 2\epsilon_n$ the estimator $\hat t$ exists and obeys: $$ (\hat t-t_0)'G(\hat t- t_0) \leq (s(t_0) (4 \bar \lambda)^2) \wedge (8 B_n \bar \lambda). $$ \end{Lemma} There are two bounds on the rate: the ``slow rate" bound $8 B_n\bar \lambda$ and the ``fast rate" bound $s(t_0)(4 \bar \lambda)^2$. The ``fast rate" bound is tighter in regimes where the ``effective dimension" $s(t_0)$ is not too large. \begin{Corollary}[\textbf{RMD for BLP and Riesz Representer}]\label{cor:rates} Suppose $\beta_0$ and $\rho_0$ belong to a convex set $T$ with $\|T\|_1 \leq B_n$. Consider a random subset $A$ of $\{1,...,n\}$ of size $m \geq n- n/K$, where $K$ is a fixed integer. Suppose that the dictionary $b(X)$ and $Y$ obey with probability at least $1-\epsilon_n$ $$ \max_{j,k \leq p} |\Bbb{G}_{A} b_k(X) b_j(X)| \leq \ell_{1n}, \quad \max_{j \leq p} | \Bbb{G}_{A} (Y b_j(X)) | \leq \ell_{2n}, \quad \max_{j \leq p} | \Bbb{G}_{A} m(X, b_j)| \leq \ell_{2n}.
$$ If we set $\lambda^{\beta}_0 \vee \lambda^{\beta}_1 \vee \lambda^{\rho}_0 \vee \lambda^{\rho}_1 \leq \bar \lambda = \tilde \ell_n/\sqrt{n}$ for $\tilde \ell_n = B_n \ell_{1n} + \ell_{2n}$, then with probability at least $1-4\epsilon_n$, the estimation errors $u = \hat \beta_A - \beta_0$ and $v = \hat \rho_A - \rho_0$ obey $$ u \in R(\beta_0), \quad [u'G u]^{1/2} \leq r_1:= 4 [(\tilde \ell_n (s (\beta_0) / n)^{1/2}) \wedge (\tilde \ell^{1/2}_n B_n n^{-1/4})], $$$$ v \in R(\rho_0), \quad [v'Gv]^{1/2} \leq r_2 := 4 [(\tilde \ell_n (s (\rho_0) / n)^{1/2}) \wedge (\tilde \ell^{1/2}_n B_n n^{-1/4})].$$ \end{Corollary} The corollary follows because the stated conditions imply condition MD by the H\"{o}lder inequality. \subsection{ Approximate Neyman-orthogonal Score Functions and the DML Estimator} Our DML estimator of $\theta_0$ will be based on the following score function: $$ \psi (W, \theta; \beta, \rho) = \theta - m(X, b)'\beta - \rho' b(X) (Y - b(X)'\beta). $$ \begin{Lemma}[Basic Properties of the Score]\label{lemma:score} The score function has the following properties: $$ \partial_\beta \psi (W, \theta; \beta, \rho) = -m(X, b) + b(X) b(X)'\rho, \quad \partial_\rho \psi (W, \theta; \beta, \rho) = -b(X) (Y - b(X)'\beta), $$$$ \partial^2_{\beta \beta'} \psi (W, \theta; \beta, \rho) = \partial^2_{\rho \rho'} \psi (W, \theta; \beta, \rho) =0, \quad \partial^2_{\beta \rho'} \psi (W, \theta; \beta, \rho) = b(X) b(X)'. $$ This score function is approximately Neyman orthogonal at the sparse approximation $(\beta_0, \rho_0)$, namely $$ \|{\mathrm{E}} [\partial_\beta \psi (W, \theta_0; \beta, \rho_0)] \|_{\infty} = \|{\mathrm{E}} m(X, b) - G {\rho_0}\|_\infty \leq \lambda^\rho_0, $$$$ \|{\mathrm{E}} [\partial_\rho \psi (W, \theta_0; \beta_0, \rho)]\|_{\infty} = \| {\mathrm{E}}[ b(X) (Y - b(X)'\beta_0)]\|_{\infty} \leq \lambda^\beta_0.
$$ \end{Lemma} The second claim of the lemma is immediate from the definition of $(\beta_0, \rho_0)$, and the first follows from elementary calculations. The approximate orthogonality property above says that the score function is approximately invariant to small perturbations of the nuisance parameters $\rho$ and $\beta$ around their ``true values" $\rho_0$ and $\beta_0$. Note that the score function is exactly invariant, and becomes the doubly robust score, when $\lambda^\rho_0 = 0$ and $\lambda^\beta_0 = 0$. This approximate invariance property plays a crucial role in removing the impact of biased estimation of the nuisance parameters $\rho_0$ and $\beta_0$ on the estimation of the main parameter $\theta_0$. We now define the double/de-biased machine learning (DML) estimator, which makes use of cross-fitting, an efficient form of data splitting. \begin{definition}[DML Estimator] Consider the partition of $\{1,...,n\}$ into $K \geq 2$ blocks $(I_k)_{k=1}^K$, with $n/K$ observations in each block $I_k$ (assume for simplicity that $K$ divides $n$). For each $k =1,...,K$ and $I^c_k = \{1,...,n\} \setminus I_k$, let the estimator $\hat \theta_{I_k}$ be defined as the root of: $$ \mathbb{E}_{I_k} \psi (W_i, \hat \theta_{I_k}; \hat \beta_{I^c_k}, \hat \rho_{I^c_k}) = 0,$$ where $ \hat \beta_{I^c_k}$ and $\hat \rho_{I^c_k}$ are RMD estimators of $\beta_0$ and $\rho_0$ that have been obtained using the observations with indices $A=I^c_k$. Define the DML estimator $\hat \theta$ as the average of the estimators $\hat \theta_{I_k}$ obtained over the blocks $k \in \{1,..., K\}$: $$ \hat \theta = \frac{1}{K} \sum_{k=1}^K \hat \theta_{I_k}. $$ \end{definition} \subsection{Properties of DML with Regularized Riesz Representers}\label{sec:DML} We now state some sufficient regularity conditions for DML. Let $n$ denote the sample size.
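Before turning to these conditions, the cross-fitting construction just defined can be sketched in a few lines of Python. This is a purely illustrative low-dimensional example of our own design: ordinary least-squares solves stand in for the RMD estimators of Definitions \ref{D.1} and \ref{D.2} (which would be used when $p$ is large), and the functional is the average derivative in $x_1$, so that $\theta_0 = 2$ in the chosen DGP.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 2000, 5
X = rng.normal(size=(n, 2))
B = np.column_stack([np.ones(n), X])     # dictionary b(x) = (1, x1, x2)'
Y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)
m_b = np.array([0.0, 1.0, 0.0])          # m(X, b) for the average derivative in x1

theta_blocks = []
for I in np.array_split(rng.permutation(n), K):
    Ic = np.setdiff1d(np.arange(n), I)   # estimate nuisances on the complement ...
    G = B[Ic].T @ B[Ic] / len(Ic)
    beta = np.linalg.solve(G, B[Ic].T @ Y[Ic] / len(Ic))  # stand-in for beta-hat
    rho = np.linalg.solve(G, m_b)                         # stand-in for rho-hat
    # ... then solve E_{I_k} psi(W, theta; beta, rho) = 0 on the held-out block,
    # whose root is explicit since psi is linear in theta:
    theta_blocks.append(np.mean(m_b @ beta + (B[I] @ rho) * (Y[I] - B[I] @ beta)))

theta_hat = np.mean(theta_blocks)        # the DML estimate of theta_0 = 2.0
assert abs(theta_hat - 2.0) < 0.2
```

The nuisance estimates never see the block on which the score is averaged, which is what removes the own-observation bias in the general high-dimensional case.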
To give some asymptotic statements, let $\ell_{1n} \geq 1$, $\ell_{2n} \geq 1$, $\ell_{3n} \geq 1$, $B_n \geq 0$, $\delta_n \searrow 0$, and $\epsilon_n \searrow 0$ denote sequences of positive constants. Let $c$, $C$, and $q$ be positive constants such that $q > 3$, and let $K \geq 2$ be a fixed integer. We consider sample sizes $n$ such that $K$ divides $n$ (to simplify notation). Fix all of these sequences and constants. Consider laws $P$ that satisfy the following conditions: \begin{itemize} \item[R.1] \textit{Assume condition C holds and that in Definitions 1 and 2, it is possible to set $\lambda^\rho_0$ and $\lambda^\beta_0$ such that (a) $ \sqrt{n} (\lambda^\rho_0 + \lambda^\beta_0) B_n \leq \delta_n$ and (b) the resulting parameter values $\beta_0$ and $\rho_0$ are well behaved with $\rho_0 \in T $ and $\beta_0 \in T$, where $T$ is a convex set with $\|T\|_1 \leq B_n$.} \item[R.2] \textit{Given a random subset $A$ of $\{1,...,n\}$ of size $n - n/K$, the terms in the dictionary $b(X)$ and the outcome $Y$ obey with probability at least $1-\epsilon_n$ $$ \max_{j,k \leq p} |\Bbb{G}_{A} b_k(X) b_j(X)| \leq \ell_{1n}, \quad \max_{j \leq p} | \Bbb{G}_{A} Y b_j(X) | \leq \ell_{2n}, \quad \max_{j \leq p} | \Bbb{G}_{A} m(X, b_j)| \leq \ell_{2n}. $$} \item[R.3] \textit{Assume that $ c \leq \| \psi(W, \theta_0; \beta_0, \rho_0)\|_{P,q} \leq C$ for $q =2$ and $3$ and that the following continuity relations hold for all $u \in R(\beta_0)$ and $v \in R(\rho_0)$, $$ \sqrt{\Var} ((-m(X, b) + b(X) b(X)'{\rho_0}) 'u) \leq \ell_{3n} \| b(X)'u\|_{P,2}$$$$ \sqrt{\Var} ( (Y - b(X)'\beta_0) b(X) 'v) \leq \ell_{3n} \| b(X)'v\|_{P,2} $$$$ \sqrt{\Var} (u 'b(X) b(X)' v) \leq \ell_{3n} ( \|b(X) 'u\|_{P,2} + \|b(X)'v\|_{P,2}). $$} \item[R.4] \textit{For $\tilde \ell_n := \ell_{1n} B_n + \ell_{2n}$, \begin{equation*} \left .
\begin{array}{l} r_1 := 4 [(\tilde \ell_n (s (\beta_0) / n)^{1/2}) \wedge (\tilde \ell^{1/2}_n B_n n^{-1/4})] \\ r_2 := 4 [(\tilde \ell_n ( s(\rho_0) /n)^{1/2} ) \wedge (\tilde \ell^{1/2}_n B_n n^{-1/4})] \end{array} \right | \ \text{ we have: } \ n^{1/2} r_1 r_2 + \ell_{3n} (r_1 + r_2) \leq \delta_n. \end{equation*}} \end{itemize} R.1 requires that sparse approximations of $\gamma^*_0$ and $\alpha^*_0$ with respect to the dictionary $b$ admit well-behaved parameters $\beta_0$ and $\rho_0$. R.2 is a weak assumption that can be satisfied by taking $\ell_{1n} = \ell_{2n} = \sqrt{\log (p \vee n) }$ under mild moment conditions, as follows from self-normalized moderate deviation bounds (\cite{Shao,belloni2014pivotal}), without requiring sub-Gaussian tail bounds. R.3 imposes a modulus of continuity for bounding variances: if elements of the dictionary are bounded with probability one, $ \| b(X) \|_\infty \leq C$, then we can select $\ell_{3n} = C B_n$ for many functionals of interest, so the assumption is plausible. R.4 imposes conditions on the rates $r_1$ and $r_2$ of consistency of the estimators of $\beta_0$ and $\rho_0$, requiring in particular that $r_1 r_2 n^{1/2}$, an upper bound on the bias of the DML estimator, is small; see also the discussion below. Consider the oracle estimator based upon the true score functions: $$ \bar \theta = \theta_0 + n^{-1} \sum_{i =1}^n \psi_0(W_i), \quad \psi_0(W_i) := \psi(W_i, \theta_0; \beta_0, \rho_0). $$ \begin{Theorem}[Adaptive Estimation and Approximate Gaussian Inference]\label{theorem: DML} Under R.1-R.4, we have the adaptivity property, namely the difference between the DML and the oracle estimator is small: for any $\Delta_n \in (0,1)$, $$ |\sqrt{n}(\hat \theta - \bar \theta)| \leq R_n := \sqrt{K} \bar C \delta_n/\Delta_n $$ with probability at least $1- \Pi_n$ for $\Pi_n:= K ( 4 \epsilon_n + \Delta^2_n)$, where $\bar C$ is an absolute constant.
As a consequence, $\hat \theta$ concentrates in a $1/\sqrt{n}$ neighborhood of $\theta_0$, with deviations approximately distributed according to the Gaussian law, namely: \begin{equation}\label{eq:normal error} \sup_{z \in \Bbb{R}}\Big | \Pr_P ( \sigma^{-1} \sqrt{n} (\hat \theta - \theta_0) \leq z) - \Phi(z) \Big | \leq \bar C' (n^{-1/2} + R_n) + \Pi_n, \end{equation} where $\sigma^2 = \Var ( \psi_0(W_i))$, $\Phi(z) = \Pr(N(0,1) \leq z)$, and $\bar C'$ only depends on $(c,C)$. \end{Theorem} The constants $\Delta_{n}>0$ can be chosen such that the right-hand side of (\ref{eq:normal error}) converges to zero as $n \to \infty$, yielding the following asymptotic result. \begin{Corollary}[Uniform Asymptotic Adaptivity and Gaussianity]\label{corollary:uniform} Fix the constants and sequences of constants specified at the beginning of Section \ref{sec:DML}. Let $\mathcal{P}$ be the set of probability laws $P$ that obey conditions R.1-R.4 uniformly for all $n$. Then the DML estimator $\hat \theta$ is uniformly asymptotically equivalent to the oracle estimator $\bar \theta$, that is, $|\sqrt{n}(\hat \theta - \bar \theta)| = O_P (\delta_n)$ uniformly in $P \in \mathcal{P}$ as $n \to \infty$. Moreover, $\sqrt{n} \sigma^{-1} (\hat \theta - \theta_0)$ is asymptotically Gaussian uniformly in $P \in \mathcal{P}$: $$ \lim_{n \to \infty} \sup_{P \in \mathcal{P}} \sup_{z \in \Bbb{R}}\Big | \Pr_P ( \sigma^{-1} \sqrt{n} (\hat \theta - \theta_0) \leq z) - \Phi(z) \Big | = 0. $$ \end{Corollary} Hence the DML estimator enjoys good properties under the stated regularity conditions. Less primitive regularity conditions can be deduced from the proofs directly. \begin{remark}[Sharpness of Conditions] The key regularity condition imposes that the bounds on the estimation errors $r_1$ and $r_2$ are small and that the product $n^{1/2} r_1 r_2$ is small.
Ignoring the impact of the ``slow" factors $\ell_n$ and assuming $B_n$ is bounded by a constant $B$, this requirement is satisfied if $$ \textit{ one of the effective dimensions is smaller than $\sqrt{n}$, either $s (\beta_0) \ll \sqrt{n} \ \text{ or } s (\rho_0) \ll \sqrt{n}$.} $$ The latter possibility allows one of the parameter values to be ``dense", having unbounded effective dimension, in which case this parameter can be estimated at the ``slow" rate $n^{-1/4}$. These types of conditions are rather sharp, matching similar conditions used in \cite{JM2015} in the case of inference on a single coefficient in Gaussian sparse linear regression models, and those in \cite{zhu2017linear} in the case of testing general linear hypotheses on regression coefficients in linear Gaussian regression models. \end{remark} Proceeding further, we notice that the difference between our target and the ideal target is: $$ |\theta_0 - \theta^*_0| = |{\mathrm{E}} r_\alpha(X) r_\gamma(X)| \leq \| r_\alpha\|_{P,2} \| r_\gamma\|_{P,2}. $$ Hence we obtain the following corollary. \begin{Corollary}[Inference targets $\theta_0^*$ when approximation errors are small] Suppose that, in addition to R.1-R.4, the product of the approximation errors $r_\gamma = \gamma^*_0 - \gamma_0$ and $r_\alpha = \alpha^*_0 - \alpha_0$ is small, $$ \sqrt{n} |{\mathrm{E}} r_\alpha(X) r_\gamma(X)| \leq \delta_n. $$ Then the conclusions of Theorem 1 and Corollary 1 hold with $\theta_0$ replaced by $\theta_0^*$ and the constants $\bar C$ and $\bar C'$ replaced by $2 \bar C$ and $2 \bar C'$. \end{Corollary} The plausibility of $\sqrt{n} |{\mathrm{E}} r_\alpha(X) r_\gamma(X)| \leq \sqrt{n} \| r_\alpha\|_{P,2} \| r_\gamma\|_{P,2}$ being small follows from the fact that many rich functional classes admit sparse linear approximations with respect to conventional dictionaries $b$.
For instance, \cite{tsybakov:book} and \cite{belloni2014pivotal} give examples of Sobolev and rearranged Sobolev balls, respectively, as the function classes and elements of the Fourier basis as the dictionary $b$, in which sparse approximations have small errors. \begin{remark}[Sharpness of Conditions in the Context of ATE] In the context of vanishing approximation errors, and in the context of Example 3 on Average Treatment Effects, our estimator implicitly estimates the inverse of the propensity score directly, rather than inverting a propensity score estimator as in most of the literature. The approximate residual balancing estimator of \cite{athey2016approximate} can also be thought of as implicitly estimating the inverse propensity score. An advantage of the estimator here is that its DML form allows us to trade off the rates at which the mean and the inverse propensity score are estimated while maintaining root-$n$ consistency. Also, we do not require that the conditional mean be linear and literally sparse with the sparsity index $s(\beta_0) \ll \sqrt{n} $; in fact, we can have a completely dense conditional mean function when the approximation to the Riesz representer has the effective dimension $s(\rho_0) \ll \sqrt{n}$. More generally, when the approximation errors do not vanish, our analysis also explicitly allows for misspecification of the regression function. \end{remark} \section{Proofs} \textbf{Proof of Lemma \ref{lemma:RMD}}. Consider the event $\mathcal{E}_n$ on which \begin{equation}\label{En1} \| \hat g(t_0)\|_\infty \leq \lambda_0 + \lambda_1 \text{ and } \sup_{t \in T} \| \hat g(t) - g(t) \|_\infty \leq \bar \lambda \end{equation} hold. This event holds with probability at least $1-2 \epsilon_n$. Indeed, by the choice of $\lambda_1$ and $\| g(t_0)\|_\infty \leq \lambda_0$, we have with probability at least $1-\epsilon_n$: $$ \| \hat g(t_0)\|_\infty \leq \| \hat g(t_0) - g(t_0) \|_\infty + \| g(t_0)\|_\infty \leq \lambda_1 + \lambda_0.
$$ Hence on the event $\mathcal{E}_n$ we have $$ \| \hat t \|_1 \leq \| t_0 \|_1 \text{ and } \| \hat g(\hat t )\|_\infty \leq \lambda_0 + \lambda_1. $$ This implies, by $\| g(t_0)\|_\infty \leq \lambda_0$, by $\lambda_0 \vee \lambda_1 \leq \bar \lambda$, and by (\ref{En1}), that $$ \| G (\hat t - t_0)\|_\infty = \| g(\hat t) - g(t_0) \|_\infty \leq 2 \lambda_0 + \lambda_1 + \sup_{t \in T} \| \hat g(t) - g(t) \|_\infty \leq 4 \bar \lambda. $$ Then $ \delta = \hat t - t_0$ obeys, by the definition of $s(t_0)$, $$ \| \delta \|_1^2 \leq s(t_0) \delta' G \delta \leq s(t_0) \| G \delta \|_\infty \| \delta\|_1 \leq s(t_0) 4 \bar \lambda \| \delta\|_1, $$ which implies that $$ \| \delta \|_1 \leq s(t_0) 4 \bar \lambda, \quad \delta' G \delta \leq s(t_0) (4 \bar \lambda)^2, $$ which establishes the first part of the bound. The second bound follows from $\|\delta\|_1 \leq 2B_n$ and $ \delta' G \delta \leq \| G \delta \|_\infty \|\delta\|_1 \leq 4 \bar \lambda \cdot 2 B_n = 8 B_n \bar \lambda. $ \qed \textbf{Proof of Theorem \ref{theorem: DML}.} \textbf{Step 1.} We have a random partition $(I_k,I^c_k)$ of $\{1,...,n\}$ into sets of size $m=n/K$ and $n- n/K$. Omit the indexing by $k$ in this step. Here we bound $|\sqrt{m}(\hat \theta_I - \bar \theta_I)|$, where $$ \bar \theta_I = \theta_0 + \mathbb{E}_I \psi_0(W_i). $$ Define $$ \partial_\beta \psi_0 (W_i) := \partial_\beta \psi (W_i, \theta_0; \beta_0, \rho_0) = -m(X, b) + b(X) b(X)'{\rho_0} $$ $$ \partial_\rho \psi_0 (W_i):= \partial_\rho \psi (W_i, \theta_0; \beta_0, \rho_0) = -b(X) (Y - b(X)'\beta_0) $$$$ \partial^2_{\beta \rho'} \psi_0 (W_i) := \partial^2_{\beta \rho'} \psi (W_i, \theta_0; \beta_0, \rho_0) = b(X) b(X)'.
$$ Define the estimation errors $$u = \hat \beta_{I^c} - \beta_0 \text{ and } v= \hat \rho_{I^c} - \rho_0.$$ Since $ \partial^2_{\beta \beta'} \psi (W, \theta; \beta, \rho) = 0$ and $\partial^2_{\rho \rho'} \psi (W, \theta; \beta, \rho) =0, $ as noted in Lemma \ref{lemma:score}, we have by the exact Taylor expansion: $$ \hat \theta_I = \bar \theta_I + ({\mathbb{E}_I} \partial_\beta \psi_0)' u + ({\mathbb{E}_I} \partial_\rho \psi_0)' v + u'({\mathbb{E}_I} \partial^2_{\beta \rho'} \psi_0) v. $$ With probability at least $1- 4\epsilon_n$, by Corollary \ref{cor:rates}, the following event occurs: $$ \mathcal{E}_n = \{ u \in R(\beta_0), \ \ v \in R(\rho_0), \ \ \sqrt{u'Gu} \leq r_1, \ \ \sqrt{v'Gv} \leq r_2 \}. $$ Using the triangle and H\"{o}lder inequalities, we obtain that on this event: \begin{eqnarray*} |\sqrt{m}(\hat \theta_I - \bar \theta_I)| \leq \mathrm{rem}_I & := & |{\mathbb{G}_I} \partial_\beta \psi_0' u| + \sqrt{m} \| P \partial_\beta \psi_0 \|_\infty \|u\|_1 \\ & + & |{\mathbb{G}_I} \partial_\rho \psi_0' v| + \sqrt{m} \| P \partial_\rho \psi_0 \|_\infty \|v \|_1 \\ & + & |u ' {\mathbb{G}_I} \partial^2_{\beta \rho'} \psi_0 v | + \sqrt{m} |u' [P \partial^2_{\beta \rho'} \psi_0] v|. \end{eqnarray*} Moreover, on this event, by $\|R(\beta_0)\|_1 \leq 2B_n$ and $\|R(\rho_0)\|_1 \leq 2B_n$, by Lemma \ref{lemma:score}, and by R.1: $$ \sqrt{m} \| P \partial_\beta \psi_0 \|_\infty \|u \|_1 \leq \sqrt{m} \lambda^\rho_0 2B_n \leq \delta_n, \quad \sqrt{m} \| P \partial_\rho \psi_0 \|_\infty \|v\|_1 \leq \sqrt{m} \lambda^\beta_0 2B_n \leq \delta_n. $$ Note that $v$ and $u$ are fixed once we condition on the observations $(W_i)_{i \in I^c}$. We have that on the event $\mathcal{E}_n$, by i.i.d.
sampling, R.3, and R.4, \begin{eqnarray*} \sqrt{\Var} [ {\mathbb{G}_I} \partial_\beta \psi_0 u \mid (W_i)_{i \in I^c}] & = & \sqrt{\Var} (\partial_\beta \psi_0 u\mid (W_i)_{i \in I^c}) \leq \ell_{3n} \sqrt{u' G u} \leq \delta_n , \\ \sqrt{\Var} [ {\mathbb{G}_I} \partial_\rho \psi_0 v \mid (W_i)_{i \in I^c}] & = & \sqrt{\Var} (\partial_\rho \psi_0 ' v\mid (W_i)_{i \in I^c}) \leq \ell_{3n} \sqrt{v' G v} \leq \delta_n, \\ \sqrt{\Var} [ u' {\mathbb{G}_I} \partial^2_{\beta \rho'} \psi_0 v \mid (W_i)_{i \in I^c}] & = & \sqrt{\Var} [ u ' b b' v \mid (W_i)_{i \in I^c}] \leq \ell_{3n} [ ( u' G u)^{1/2} + (v' G v)^{1/2}] \leq \delta_n, \\ \sqrt{m} | u ' [P \partial^2_{\beta \rho'} \psi_0] v| & \leq & \sqrt{m} | u 'G v | \leq \sqrt{m} ( u 'G u v 'G v)^{1/2} \leq \delta_n. \end{eqnarray*} Hence we have that for some numerical constant $\bar C$ and any $\Delta_n \in (0,1)$: $$ \Pr ( \mathrm{rem}_I >\bar C \delta_n/\Delta_n ) \leq \Pr ( \mathrm{rem}_I > \bar C \delta_n/\Delta_n \cap \mathcal{E}_n) + \Pr(\mathcal{E}_n^c)$$ $$\leq {\mathrm{E}} \Pr ( \mathrm{rem}_I > \bar C \delta_n/\Delta_n \cap \mathcal{E}_n \mid (W_i)_{i \in I^c}) + \Pr(\mathcal{E}_n^c) \leq \Delta_n^2 + 4 \epsilon_n. $$ \textbf{Step 2.} Here we bound the difference between $\hat \theta = K^{-1} \sum_{k=1}^K \hat \theta_{I_k}$ and $\bar \theta = K^{-1} \sum_{k=1}^K \bar \theta_{I_k}$: $$ \sqrt{n}|\hat \theta - \bar \theta| \leq \frac{\sqrt{n}}{\sqrt{m}} \frac{1}{K} \sum_{k=1}^K \sqrt{m} | \hat \theta_{I_k} - \bar \theta_{I_k}| \leq \frac{\sqrt{n}}{\sqrt{m}} \frac{1}{K} \sum_{k=1}^K \mathrm{rem}_{I_k}. $$ By the union bound we have that $$ \Pr\left( \frac{1}{K} \sum_{k=1}^K \mathrm{rem}_{I_k}> \bar C \delta_n/\Delta_n\right) \leq K( \Delta_n^2 + 4 \epsilon_n), $$ and we have that $\sqrt{n/m} = \sqrt{ K}$. 
So it follows that $$ |\sqrt{n}(\hat \theta - \bar \theta)| \leq R_n := \bar C \sqrt{K} \delta_n/\Delta_n $$ with probability at least $1- \Pi_n$ for $\Pi_n:= K ( 4 \epsilon_n + \Delta^2_n)$, where $\bar C$ is an absolute constant. \textbf{Step 3}. To show the second claim, let $Z_n := \sqrt{n} {\sigma}^{-1}(\bar \theta - \theta_0)$. By the Berry-Esseen bound, for some absolute constant $A$, $$ \sup_{z \in \Bbb{R}} |\Pr ( Z_n \leq z) - \Phi(z)| \leq A \| \psi_0/\sigma \|^3_{P,3} n^{-1/2} \leq A (C/c)^{3} n^{-1/2}, $$ where $ \| \psi_0/\sigma \|^3_{P,3} \leq (C/c)^3$ by R.3. Hence, using Step 2, for any $z \in \Bbb{R}$, we have \begin{eqnarray*} && \Pr ( \sqrt{n} {\sigma}^{-1} (\hat \theta - \theta_0) \leq z) - \Phi(z)\leq \Pr ( Z_n \leq z + {\sigma}^{-1} R_n) + \Pi_n - \Phi(z) \\ && \quad \quad \leq A (C/c)^{3} n^{-1/2} + \bar \phi {\sigma}^{-1} R_n + \Pi_n \leq \bar C'(n^{-1/2} + R_n) + \Pi_n, \end{eqnarray*} where $ \bar \phi = \sup_{z} \phi(z)$, where $\phi$ is the density of $\Phi(z) = \Pr(N(0,1) \leq z)$, and $\bar C'$ depends only on $(C,c)$. Similarly, we conclude that $\Pr ( \sqrt{n} \sigma^{-1} (\hat \theta - \theta_0) \leq z) -\Phi(z) \geq - \bar C' (n^{-1/2} + R_n) - \Pi_n.$ \qed \newpage \clearpage
\section{Introduction} When exciting a multilevel atom with more than one coherent field, new phenomena can arise. In such a system, interference between different excitation pathways can suppress excitation, resulting in a trapped state. In recent years, quantum coherence in multi-level atoms and solid-state emitters has been used extensively for realizing optical bistability~\cite{Wang12, Wang16, Asadpour14, Li16, Li17}, electromagnetically induced transparency (EIT)~\cite{Boller91, Harris97, Marangos98, Fleischhauer00, Wang17}, and subluminal/superluminal light propagation and lasers~\cite{Zhou16, Yablon17}. Moreover, various quantum coherence phenomena in $\Lambda$- and $V$-systems have been employed to modify the linear and non-linear optical characteristics of the medium, and to control the temporal and spatial profiles of pulses within that medium~\cite{Kasapi95,Lukin01,Fleischhauer05}. Broadly speaking, multi-level transitions can be classified into two groups: open-loop and closed-loop. In an open-loop configuration, the transition from any level to another is excited through a single path. In contrast, in a closed-loop system, the transition from one level to another is excited through more than one path. In an open-loop system, the dynamics is insensitive to the phases of the fields, and is controlled only by their amplitudes. In a closed-loop system, on the other hand, the phases of the fields play a critical role~\cite{Buckle86}, thus enriching the range of possible applications. Among the different closed-loop schemes, double-$\Lambda$ systems have drawn a lot of attention recently. These systems consist of a pair of $\Lambda$-configurations with common ground states.
The double-$\Lambda$ scheme has been widely used to produce important phenomena, such as coherent population trapping (CPT)~\cite{Zanon05, Donley13}, spin squeezing~\cite{Dantan03}, entanglement~\cite{Zavatta14}, four-wave mixing~\cite{Babin96, Hemmer95, Lü98}, and entangled images~\cite{Boyer08, Miller08}. Unlike in conventional $\Lambda$-systems, however, many of these important coherent features, like EIT, can only be satisfied for certain relative phases between the laser fields in a double-$\Lambda$ system. This closed-loop phase constraint has substantial effects on the dynamics of the double-$\Lambda$ system, and has been utilized to control the behavior of atoms phase-sensitively~\cite{Xu13,Korsunsky99}. Due to this unique feature, the double-$\Lambda$ system is expected to play an important role in atom-laser interaction. Therefore, it is useful to develop a physically intuitive and yet accurate description of such systems. In general, the dynamics of a double-$\Lambda$ configuration as a 4-level system is properly described with 15 real variables, assuming that the total number of atoms is conserved. The time evolution of the level populations and their coherences is determined via the Liouville equation, with Lindblad terms describing various dephasing and decoherence phenomena. However, due to the complexity of the problem and the large number of variables involved in such a system, the roles of the various laser fields in controlling the final atomic features are often obscured, and drawing physical insight is hindered. On the other hand, the 2-level system has been the canonical prototype for atom-laser interaction, and its physical features are transparently understood. Therefore, the dynamics of a complicated multi-level system can be visualized more easily if it can be reduced to an effective 2-level one.
Extending previous work on the $\Lambda$-system~\cite{Shahriar97}, we first propose a generalizable method that reduces a multi-$\Lambda$ system to an effective 2-level atom. Although the numerical results presented in this paper are only for double-$\Lambda$ systems, the method can be easily extended to multiple-$\Lambda$ systems as long as the frequency differences of the laser fields in each $\Lambda$-sub system are the same. Using the explicit terms of the effective Hamiltonian, the role of the relative phases of the laser fields can be seen clearly. In particular, we study the effect of this phase on controlling the dispersion of the medium and derive the conditions for achieving optical gain for a pair of laser fields in one of the $\Lambda$-sub systems. We show the possibility of simultaneous lasing of these two beams when the pumped medium is inserted inside an optical cavity of a particular length. This dual-beam laser is realized only when both the relative amplitudes and the phases of the optical beams satisfy specific constraints. This paper is organized as follows. The general theoretical model and a comparison between the numerical results of the double-$\Lambda$ system and its effective 2-level model are presented in the second section. The effects of the relative phase and amplitude of the laser fields on the medium dispersion are investigated in this section as well. In the third section we investigate the bi-laser problem by searching the parameter space to obtain simultaneous optical gain for both transitions of one of the $\Lambda$-sub systems. We derive the necessary and sufficient conditions for dual-beam lasing. For a particular case that could be realized using $^{87}$Rb, for example, we calculate some typical values of the cavity length, laser frequencies, and required optical gain for a moderate-quality-factor cavity.
Finally, section four summarizes the paper and presents the outlook and perspectives for future work. \section{Theoretical model} Figure~\ref{Fig1}(a) shows the energy diagram of a typical 4-level, double-$\Lambda$ system investigated in this study. The system is composed of two 3-level sub systems with optical excitations to states $\ket{3}$ and $\ket{4}$, and shared meta-stable ground states $\ket{1}$ and $\ket{2}$. The corresponding coherent interactions and decay rates are shown in the figure as well. A planewave laser field with frequency $\omega_{mn}$, wavevector $\vec{k}_{mn}$, and constant phase $\phi^0_{mn}$, driving a coherent interaction between the states $\ket{m}$ and $\ket{n}$, has the following form: \begin{equation}\label{planewaves} \vec{E}_{mn}(\vec{r},t)=\frac{\vec{E}_{mn}}{2}~\text{exp}[i(-\omega_{mn}t + \vec{k}_{mn}\cdot \vec{r} + \phi^0_{mn})] + c.c. \end{equation} All the couplings are electric-dipole transitions with Rabi frequency $\Omega_{mn} = \frac{\vec{M}_{mn}\cdot \vec{E}_{mn}}{\hbar}$, where $\vec{M}_{mn} = \braket{m|\hat{p}|n}$ is the transition dipole moment between $\ket{m}$ and $\ket{n}$. In this model, it is assumed that there is no electric-dipole transition between states $\ket{3}$ and $\ket{4}$, or between states $\ket{1}$ and $\ket{2}$. Furthermore, we assume that each optical field interacts with only one transition, and that its couplings to the remaining transitions are either too far detuned or prohibited by polarization selection rules. For each transition, the frequency detuning is defined as $\delta_{mn} = \omega_{mn} - (E_m - E_n)/\hbar$.
In each $\Lambda$-system one can define the common detuning $\delta_{3(4)}$ and the difference detuning $\Delta_{3(4)}$ as: \begin{subequations}\label{common-difference detuning} \begin{align} \delta_{3(4)} = \frac{\delta_{13(14)} + \delta_{23(24)}}{2}\\ \Delta_{3(4)} = \delta_{13(14)} - \delta_{23(24)} \end{align} \end{subequations} Throughout this paper we use the rotating-wave approximation, and work in the transformed basis, which is related to the atomic basis via the following equations: \begin{subequations}\label{rotated frame} \begin{align} \ket{\tilde{1}} ={}& \ket{1}\text{exp}[i(\omega_{13}t - \vec{k}_{13}\cdot\vec{r} - \phi^0_{13})]\\ \ket{\tilde{2}} ={}& \ket{2}\text{exp}[i(\omega_{23}t - \vec{k}_{23}\cdot\vec{r} - \phi^0_{23})]\\ \ket{\tilde{3}} ={}& \ket{3}\\ \ket{\tilde{4}} ={}& \ket{4}\text{exp}[-i((\omega_{14}-\omega_{13})t - (\vec{k}_{14} - \vec{k}_{13})\cdot\vec{r} - (\phi^0_{14}-\phi^0_{13}))] \end{align} \end{subequations} The dynamics of the system can be completely described via the Liouville equation for the density matrix. As will be shown later, a steady-state solution without pulsations in the populations of the atomic states only exists if the two-photon detunings of both $\Lambda$-sub systems are the same, i.e., $\Delta_3 = \Delta_4$. In what follows we further assume that this difference detuning is zero, i.e., $\Delta_{3(4)} = 0$, as illustrated in Fig.~\ref{Fig1}(a). Among the various excitation schemes, we are interested in the cases where the transitions to $\ket{3}$ are stronger than the transitions to $\ket{4}$, i.e., $\Omega_{13(23)} \gg \Omega_{14(24)}$. For brevity, we refer to the stronger (weaker) beams as pumps (probes). This assumption allows one to treat the evolution of the probe-$\Lambda$ system as a perturbation. The Raman excitation of the pump-$\Lambda$ system is better described in terms of the dark and bright states, $\ket{D}$ and $\ket{B}$~\cite{Shahriar97}.
The dark state $\ket{D}$ is the superposition of the two meta-stable ground states with zero transition dipole moment to $\ket{3}$. The bright state $\ket{B}$ on the other hand, is orthogonal to $\ket{D}$ with maximized transition dipole moment to $\ket{3}$. In terms of the rotated states of $\ket{\tilde{1}},\ket{\tilde{2}}$, the dark and bright states are defined as: \begin{subequations}\label{bright-dark definition} \begin{align} \ket{D} \equiv \frac{\Omega_{23}\ket{\tilde{1}} - \Omega_{13}\ket{\tilde{2}}}{\Omega_3}\\ \ket{B} \equiv \frac{\Omega_{13}\ket{\tilde{1}} + \Omega_{23}\ket{\tilde{2}}}{\Omega_3} \end{align} \end{subequations} where $\Omega_3 \equiv \sqrt{\Omega^2_{13} + \Omega^2_{23}}$. Figure~\ref{Fig1}(b) shows the energy levels of this transformed basis and its corresponding transitions. As depicted for zero difference detuning (i.e. $\Delta_3 = 0$), the dark and bright states are degenerate and there is a coherent interaction between the bright and the excited state $\ket{3}$ with Rabi frequency of $\Omega_3$. After the adiabatic elimination of the excited state $\ket{3}$ in the damped amplitude equation~\cite{Shahriar14}, the pump-$\Lambda$ system can be replaced with an equivalent 2-level model governed by the following Hamiltonian: \begin{equation}\label{2-level pump} \hat{\tilde{H}}_{2-level}^{123} = \frac{\hbar}{2}L_3 (2\delta_3 -i\Gamma_3)\ket{B'}\bra{B'} \end{equation} where $\ket{B'}$ is the light-shifted version of $\ket{B}$, and $L_3 = \frac{\Omega_3^2}{\Gamma_3^2 + 4\delta_3^2}$ is a factor determined solely by the parameters of the pump-$\Lambda$ system. This is a non-Hermitian Hamiltonian~\cite{Shahriar14} that accounts for the decay of state $\ket{B'}$ into state $\ket{D}$, and the decay of the coherence between states $\ket{D}$ and $\ket{B'}$. For conservation of the number of atoms, one still has to add a source term to the population of $\ket{D}$, in formulating the density matrix equation for this effective 2-level system. 
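The dark/bright construction above can be checked with a short numerical sketch. The Rabi frequencies below (in units of $\Gamma_3$) are illustrative assumptions, not values fixed by the text; the sketch verifies that $\ket{D}$ decouples from $\ket{3}$ while $\ket{B}$ couples with the full strength $\Omega_3$:

```python
import numpy as np

# Illustrative pump Rabi frequencies in units of Gamma_3 (assumed values)
om13, om23 = 10.0, 7.0
om3 = np.hypot(om13, om23)        # Omega_3 = sqrt(Omega_13^2 + Omega_23^2)

# Row vector of couplings to |3> over the rotated ground states {|~1>, |~2>}
coupling = np.array([om13, om23])

# Dark and bright superpositions, as defined above
dark = np.array([om23, -om13]) / om3
bright = np.array([om13, om23]) / om3

assert abs(coupling @ dark) < 1e-12        # |D> has zero dipole coupling to |3>
assert np.isclose(coupling @ bright, om3)  # |B> carries the full Rabi frequency Omega_3
assert abs(dark @ bright) < 1e-12          # the pair is orthonormal
```

The same cancellation holds for any choice of $\Omega_{13},\Omega_{23}$, which is why the dark state is insensitive to the pump amplitudes individually.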
It should be noted that $\ket{D}$ and $\ket{B'}$ are no longer degenerate in energy, due to the light shift of $\ket{B'}$. We now consider the addition of the excitation applied to the probe-$\Lambda$ sub-system, as a perturbation. This is illustrated schematically in Fig.~\ref{Fig1}(c). As derived in the Appendix~\ref{app1}, the complex Hamiltonian describing the resulting system, consisting of states $\ket{D},\ket{B'}$ and $\ket{\tilde{4}}$, can be expressed as: \begin{equation}\label{RW-probe-lambda} \begin{split} \hat{\tilde{H}} & = \frac{\hbar}{2} [L_3 (2\delta_3 - i \Gamma_3) \ket{B'}\bra{B'} - (2\delta_4 + i\Gamma_4)\ket{\tilde{4}}\bra{\tilde{4}} \\ & - \frac{1}{\Omega_3}(\Omega_{23}\Omega_{14} - \Omega_{13}\Omega_{24}~e^{-i\Phi_0}) \ket{D}\bra{\tilde{4}} + h.c. \\ & - \frac{1}{\Omega_3}(\Omega_{13}\Omega_{14} + \Omega_{23}\Omega_{24}~e^{-i\Phi_0})\ket{B'}\bra{\tilde{4}} + h.c.] \end{split} \end{equation} where $\Phi_0 = (\phi_{24} - \phi_{23}) - (\phi_{14}-\phi_{13})$ is the closed-loop phase. Just as before, state $\ket{\tilde{4}}$ can be adiabatically eliminated, using the damped amplitude equations, as shown in the Appendix~\ref{app1}. The resulting Hamiltonian can be expressed as: \begin{equation}\label{equivalent final 2-level} \begin{split} \hat{\tilde{H}}_{2-level}^{1234} &= \frac{\hbar}{2}[\frac{|h_{D4}|^2}{2\delta_4} \ket{D'}\bra{D'} \\ & + \{ \frac{|h_{B'4}|^2}{2\delta_4} + L_3 (2\delta_3 - i\Gamma_3)\} \ket{B''}\bra{B''} \\ & + \frac{h_{D4} h_{B'4}^*}{2\delta_4} \ket{D'}\bra{B''} + h.c.] \end{split} \end{equation} where $\ket{D'}$ is the light-shifted version of $\ket{D}$, and $\ket{B''}$ is the light-shifted version of $\ket{B'}$. The parameters $h_{D4}$ and $h_{B'4}$ are defined in the Appendix~\ref{app1}, and depend on all Rabi frequencies and the closed-loop phase $\Phi_0$. In the limiting case where $\Phi_0 = 0$ and $\Omega_{23}/\Omega_{13} = \Omega_{24}/\Omega_{14}$, $h_{D4} = 0$. 
In that case $\ket{D'} = \ket{D}$ and there is no coupling between $\ket{D'}$ and $\ket{B''}$, since the dark states for the pump-$\Lambda$ and probe-$\Lambda$ systems are the same. The energy diagram for this final 2-level system is shown in Fig.~\ref{Fig1}(d). As can be inferred, the interference of the pump and probe beams in the double-$\Lambda$ scheme builds up a coherent interaction between the dark and bright states, whose strength depends on the Rabi frequencies of all transitions as well as on the closed-loop phase $\Phi_0$. The dependence on $\Phi_0$ is periodic with period $2\pi$. This is a unique feature of the double-$\Lambda$ configuration, absent in the conventional $\Lambda$-system, embodying the substantial effect of the laser-field phases on the system behavior. The Hamiltonian in eq.~\ref{equivalent final 2-level} accounts for the decay of state $\ket{B''}$ as well as the dephasing of the coherence between $\ket{D'}$ and $\ket{B''}$. However, in order to conserve the total number of atoms in the system, one must add a source term to the population of state $\ket{D'}$ when establishing the density matrix equations of motion (i.e., the Liouville equations) for this 2-level system. The Liouville equations can be solved in steady state to find the values of $\rho_{D'D'}, \rho_{B''B''}$, and $\rho_{B''D'} = \rho_{D'B''}^*$. The \textit{approximate} dynamics of the population of state $\ket{\tilde{4}}$, as well as the coherence between this state and the meta-stable ground states, can be determined using the relations between the relevant amplitudes established during the process of adiabatic elimination.
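The phase dependence of the couplings to $\ket{\tilde{4}}$ in eq.~\ref{RW-probe-lambda} can also be illustrated numerically. In the sketch below the Rabi frequencies are illustrative assumptions (chosen with matched ratios $\Omega_{23}/\Omega_{13} = \Omega_{24}/\Omega_{14}$), and the common prefactor $-\hbar/2$ is dropped:

```python
import numpy as np

def d4_b4_couplings(om13, om23, om14, om24, phi0):
    """Coupling amplitudes of |D> and |B'> to |~4>, read off eq. (RW-probe-lambda)
    with the overall factor -hbar/2 dropped."""
    om3 = np.hypot(om13, om23)
    h_d4 = (om23 * om14 - om13 * om24 * np.exp(-1j * phi0)) / om3
    h_b4 = (om13 * om14 + om23 * om24 * np.exp(-1j * phi0)) / om3
    return h_d4, h_b4

# Matched ratios with Phi_0 = 0: both sub-systems share one dark state,
# so the |D>-|~4> coupling vanishes identically.
h_d4_zero, _ = d4_b4_couplings(10.0, 7.0, 0.2, 0.14, 0.0)
assert abs(h_d4_zero) < 1e-12

# The same amplitudes at Phi_0 = pi give the maximal dark-state coupling.
h_d4_pi, _ = d4_b4_couplings(10.0, 7.0, 0.2, 0.14, np.pi)
assert abs(h_d4_pi) > abs(h_d4_zero)
```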
As shown in Appendix~\ref{app2}, using this procedure we can write: \begin{equation}\label{approximated coherent terms 4} \begin{split} \tilde{\rho}_{14} =\frac{\Omega_4}{2\delta_4 - i\Gamma_4} \{& (-\cos{\theta_3} \sin{\theta_4} + \sin{\theta_3} \cos{\theta_4} e^{-i\Phi_0})\times \\ & (\cos{\theta_3}~\rho_{DD} + \sin{\theta_3}~\rho_{BD}) - \\ & (\sin{\theta_3} \sin{\theta_4} + \cos{\theta_3} \cos{\theta_4} e^{-i\Phi_0})\times \\ & (\cos{\theta_3}~\rho_{DB} + \sin{\theta_3}~\rho_{BB}) \} \end{split} \end{equation} where $\cos{\theta_{3(4)}} \equiv \Omega_{23(24)}/\Omega_{3(4)}$. The ultimate equivalent 2-level system presented in eq.~\ref{equivalent final 2-level} reduces the number of unknowns from 15 real variables to 3, hence making the calculations very fast and efficient. Moreover, the equivalent system provides an insight into the mutual interaction between the pumps, probes, and their interference. It is worth mentioning that the reduction scheme described here can be easily extended to multiple-$\Lambda$ systems as long as the difference detuning of all $\Lambda$-sub systems (i.e. $\Delta_i$) are the same. To investigate the validity and accuracy of this approximation, in Fig.~\ref{Fig2} we compare the behavior of $\tilde{\rho}_{14}$ and $\tilde{\rho}_{24}$ as a function of $\delta_4$ for different closed-loop phases. The calculations are done for an ideal 4-level system, with $\Omega_{13}=10\Gamma_3, \Omega_{23}=7\Gamma_3$, $\delta_3 = \Gamma_3$, $\Omega_{14} = \Gamma_3/5,\Omega_{24}=\Gamma_3/2$. Here, we have used $\Gamma_4 = 1.05\Gamma_3$ reflecting the ratio of decay rates of the $5~^2 P_{3/2}$ and $5~^2 P_{1/2}$ manifolds in $^{87}$Rb~\cite{Steck}, which is a possible atom for realizing such a double-$\Lambda$ system. In all cases the real and imaginary parts are denoted in blue and red, respectively. 
Moreover, in each panel the solid lines show the exact values calculated via the complete $4\times 4$ density matrix, while the dots show the approximate results obtained from eqs.~\ref{equivalent final 2-level} and~\ref{approximated coherent terms 4}. A very good agreement between the exact and the approximate results can be observed for both coherent terms at various $\Phi_0$. Using the reduced 2-level model, we now consider certain special cases analytically. Consider first the case where the relative intensities of the two legs are the same for both the pump- and probe-$\Lambda$ systems. Specifically, we assume $\Omega_{14}\Omega_{23} = \Omega_{13}\Omega_{24} = \Omega^2$. Under this condition, the coherent interaction between the dark state and the 4$^{th}$ level in eq.~\ref{RW-probe-lambda} further simplifies to $\frac{\Omega^2}{\Omega_3}(e^{-i\Phi_0} - 1)$. When $\Phi_0 = 0$ the interaction vanishes and $\ket{D}$ becomes decoupled from $\ket{\tilde{4}}$. In other words, when this condition is satisfied, both $\Lambda$-sub systems share the same dark state. Moreover, according to eq.~\ref{equivalent final 2-level}, the coherent interaction between the dark and bright states vanishes as well; hence coherent population trapping (CPT) occurs for the dark state, independently of the individual values of the Rabi frequencies. On the other hand, when $\Phi_0 = \pi$ the interaction between the dark state and the 4$^{th}$ level is maximized. A complementary situation occurs when $\Omega_{13}\Omega_{14} = \Omega_{23}\Omega_{24} = \Omega^2$. In that case, as eq.~\ref{RW-probe-lambda} implies, the coherent interaction strength between the bright state and the 4$^{th}$ level is $-\frac{\Omega^2}{\Omega_3}(1+e^{-i\Phi_0})$. In contrast to the previous case, here the interaction is maximized for $\Phi_0 = 0$, while for $\Phi_0 = \pi$ the $\ket{B'}$ and $\ket{\tilde{4}}$ states become decoupled.
Also, as can be seen in eq.~\ref{equivalent final 2-level}, this decoupling leads to zero coherent interaction between the dark and bright states. Therefore, similar to the previous case, the bright state becomes completely decoupled from the dark state. These two special conditions can be satisfied simultaneously if both of the pump beams and both of the probe beams have the same strength. In terms of the effective Rabi frequencies we have $\Omega_{13} = \Omega_{23}=\Omega_3/\sqrt{2}$ and $\Omega_{14} = \Omega_{24} = \Omega_4/\sqrt{2}$, and the effective Hamiltonian of eq.~\ref{equivalent final 2-level} simplifies to: \begin{equation}\label{equal beam Hamiltonian} \begin{split} \hat{\tilde{H}}_{2-level}^{1234} &= \frac{\hbar}{2} [\{\frac{\Omega_4^2}{2\delta_4}\cos{\Phi_0} + L_3 (2\delta_3 - i\Gamma_3)\} \ket{B''}\bra{B''} \\ & + i\frac{\Omega_4^2}{4\delta_4}\sin{\Phi_0} \ket{D'}\bra{B''} + h.c.] \end{split} \end{equation} This Hamiltonian describes the dynamics of a 2-level system with $\omega_1 = 0$, $\omega_2 = (\Omega_4^2 \cos{\Phi_0} /4\delta_4 + L_3 \delta_3)$, and effective Rabi frequency $\Omega_{eq} = (\Omega_4^2/4\delta_4) \sin{\Phi_0}$. Moreover, the bright state decays to the dark state with an effective population decay rate of $\Gamma_{eq} = L_3\Gamma_3$. The terms in the Hamiltonian of eq.~\ref{equal beam Hamiltonian} explicitly show the effect of the closed-loop phase in modulating the strength of the coherent interaction and the energy gap between the dark and bright states. Both the coupling strength and the energy gap are periodic in $\Phi_0$, with a $\pi/2$ phase shift between them. While the decay rate and the energy offset of the bright state are determined solely by the pump beams, the probes determine the strength of the coherent interaction between the dark and bright states and the modulation of the energy gap. For $\Phi_0 = 0,\pi$ the coupling between $\ket{D'}$ and $\ket{B''}$ vanishes and these two states become totally decoupled.
In other words, $\ket{D'}$ is a trapped state in this configuration and EIT occurs for both excited states, namely $\ket{3}$ and $\ket{4}$. On the other hand, if $\Phi_0 = \pi/2,3\pi/2$ the coupling between the dark and bright states is maximized. Therefore, a coherent interaction is built up between these two states whose strength depends solely on the probe-$\Lambda$ system. Specifically, it is proportional to the Rabi frequency of the original atomic states (i.e., $\Omega_{4}$) and decreases as the detuning $\delta_4$ increases. This interaction leads to population exchange between $\ket{D'}$ and $\ket{B''}$, and consequently populates the excited atomic states $\ket{3}$ and $\ket{4}$, which leads to non-zero coherent terms $\rho_{14(24)}$ and induces a polarizability for these transitions. Solving for the steady state of the system described by eq.~\ref{equal beam Hamiltonian}, we get the following expressions for the effective 2-level density matrix elements: \begin{subequations} \begin{align} \rho_{B''B''} = \frac{\Omega_4^4 \sin^2{\Phi_0}}{2\Omega_4^4(1+\cos^2{\Phi_0}) + 16L_3\Omega_3^2 \delta_4^2+ 32 L_3 \Omega_4^2 \delta_3\delta_4\cos{\Phi_0}}\\ \rho_{D'B''} = -\frac{4\Omega_4^2 \delta_4 \sin{\Phi_0}L_3(\Gamma_3 +i2\delta_3)+i\Omega_4^4 \sin{2\Phi_0}}{2\Omega_4^4(1+\cos^2{\Phi_0}) + 16L_3\Omega_3^2 \delta_4^2+ 32 L_3 \Omega_4^2 \delta_3\delta_4\cos{\Phi_0}} \end{align} \end{subequations} It is clear that the coherence between the dark and bright states contains three types of terms: 1) terms related only to $\Omega_4$ (self-terms), 2) those related only to $\Omega_3$ (cross-terms), and 3) terms related to both (mutual terms). This is an important feature of this double-$\Lambda$ system, showing how the pumps and the probes can be selected properly to independently maximize the non-linearities while suppressing single-photon absorption.
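The $\Phi_0$-dependence of the effective coupling and level spacing in eq.~\ref{equal beam Hamiltonian} can be sketched directly. The parameter values below (in units of $\Gamma_3$) are illustrative assumptions:

```python
import numpy as np

# Illustrative parameters in units of Gamma_3 (assumed values)
gamma3, om3, om4, d3, d4 = 1.0, 10.0, 1.0, 10.0, 20.0
L3 = om3**2 / (gamma3**2 + 4 * d3**2)   # pump factor L_3

def omega_eq(phi0):
    """Effective |D'>-|B''> Rabi frequency: (Omega_4^2 / 4 delta_4) sin(Phi_0)."""
    return om4**2 / (4 * d4) * np.sin(phi0)

def omega_2(phi0):
    """Bright-state energy offset: Omega_4^2 cos(Phi_0)/(4 delta_4) + L_3 delta_3."""
    return om4**2 * np.cos(phi0) / (4 * d4) + L3 * d3

gamma_eq = L3 * gamma3                   # effective decay of |B''> into |D'>

# Phi_0 = 0 or pi: |D'> and |B''> decouple, so the dark state is trapped (EIT)
assert abs(omega_eq(0.0)) < 1e-12
assert abs(omega_eq(np.pi)) < 1e-12
# Phi_0 = pi/2 or 3pi/2: the coupling is maximal and set by the probes alone
assert np.isclose(abs(omega_eq(np.pi / 2)), om4**2 / (4 * d4))
assert np.isclose(abs(omega_eq(3 * np.pi / 2)), om4**2 / (4 * d4))
```

The $\pi/2$ shift between coupling ($\sin\Phi_0$) and gap modulation ($\cos\Phi_0$) noted in the text appears here as the quadrature between `omega_eq` and the first term of `omega_2`.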
As the coherent interaction strength is completely tunable with $\Phi_0$, this closed-loop phase can be utilized further to tune the polarizability of the medium. Combined with Maxwell's equations, the propagation of the pumps and probes can be studied in such a double-$\Lambda$ configuration. In the next section we study the parameter space of achievable polarizabilities in this system and investigate the possibility of simultaneous phase-locked lasing at two different frequencies in a single cavity. \section{bi-frequency Raman lasing in double-$\Lambda$ configuration} Assume that all the beams are planewaves and that their profiles do not change as they propagate through the medium. In other words, we ignore the effect of the slowly varying envelopes in the first-order analysis here. The current analysis can be easily extended by considering the effect of time- and position-dependent slowly varying envelopes in such a medium. Depending on the various parameters, the probe beams that excite the coherences represented by $\rho_{14(24)}$ can experience optical gain or loss. Upon satisfying the proper criteria, both probe beams can experience optical gain as they propagate through the medium. Therefore, in an optical cavity having resonant modes at both probe frequencies, a double-beam laser can be realized if enough optical gain is available for both probe beams. In this section we derive the proper conditions for having a self-consistent solution for four beams inside the polarizable medium and investigate the possibility of having a bi-frequency laser in a suitably designed ring cavity. Since the pump beams are assumed to be much stronger than the probes, the un-depleted approximation can be employed for both transitions to state $\ket{3}$, hence ignoring any modifications to $\Omega_{13}$ and $\Omega_{23}$. From the results of the previous section, we can describe the induced polarizability at the probe frequencies in terms of the coherence terms $\rho_{14}$ and $\rho_{24}$.
By inserting these terms back into the wave equation, we get the following equations describing the propagation of the probes within the medium: \begin{subequations}\label{probe-propagation} \begin{align} ((\frac{\omega_{14}}{c})^2 - k_{14}^2)\Omega_{14} & = -\frac{2\mu_0\omega_{14}^2}{\hbar}N|M_{14}|^2\tilde{\rho}_{14}\\ ((\frac{\omega_{24}}{c})^2 - k_{24}^2)\Omega_{24} &= -\frac{2\mu_0\omega_{24}^2}{\hbar}N|M_{24}|^2\tilde{\rho}_{24}e^{+i\Phi_0(z)} \end{align} \end{subequations} where $N$ is the atomic density, and $\mu_0$ is the magnetic permeability. A self-consistent solution for all four beams can be obtained if the wavevectors satisfy the phase-matching condition $(k_{24} - k_{14}) = (k_{23} - k_{13})$. Moreover, the wavevectors of the probes are modified as they propagate through the pumped medium: \begin{equation}\label{modified k14-k24} k_{14,24} = \frac{\omega_{14,24}}{c}(1 + \xi_{14,24}) \end{equation} Combined with the frequency resonance condition, eq.~\ref{modified k14-k24} leads to $\omega_{14}\xi_{14} = \omega_{24}\xi_{24}$. Plugging eq.~\ref{modified k14-k24} into eq.~\ref{probe-propagation}, we find that $\xi_{14,24}$ are related to the atomic parameters and the induced coherence terms as follows: \begin{subequations}\label{xi_14,24 equations} \begin{align} \xi_{14} & = \frac{\mu_0 N |M_{14}|^2 c^2}{\hbar \Omega_{14}}\tilde{\rho}_{14}\\ \xi_{24} & = \frac{\mu_0 N |M_{24}|^2 c^2}{\hbar \Omega_{24}}\tilde{\rho}_{24}e^{+i((\phi_{24}^0 - \phi_{23}^0)-(\phi_{14}^0 - \phi_{13}^0))} \end{align} \end{subequations} To have a sustainable oscillation of these two beams in a ring cavity of length $L_c$, the cavity resonance condition $(k_{14}-k_{24})L_c = 2\pi m$ needs to be satisfied. Taking into account the condition that $\omega_{14}\xi_{14} = \omega_{24}\xi_{24}$, and eq.
~\ref{modified k14-k24}, it is clear that the cavity length $L_c$ depends primarily on the energy difference between the meta-stable ground states, according to the following equation: \begin{eqnarray}\label{cavity length} L_c = \frac{2m\pi \hbar c}{E_2 - E_1} \end{eqnarray} The second condition for realizing a lasing mode mandates that the optical gain at both probe frequencies be large enough to overcome all the losses and de-coherence phenomena inside the cavity. As the ground-state energies are often close together (e.g., the hyperfine splitting in the ground state of $^{87}$Rb is 6.8 GHz, which is nearly six orders of magnitude smaller than the optical transition frequencies), it is fair to assume that the cavity quality factor is almost the same for both probes. For a laser cavity with an output-coupler mirror transmittivity of $T$, the imaginary parts of $\xi_{14,24}$ must satisfy the following gain-loss balance condition: \begin{equation}\label{laser gain condition} \omega_{14}\xi_{14}'' = \omega_{24}\xi_{24}'' = \frac{Tc}{2L_c} = \frac{T(E_2 - E_1)}{4m\pi \hbar} \end{equation} For each cavity length, determined by $m$, there is a unique amount of optical gain that satisfies the lasing condition for both probe beams in the cavity. Just as for the cavity length, this optical gain is determined primarily by the energy gap between the meta-stable ground states. Figures~\ref{Fig3}(a) and (b) show the variation of the absorption coefficient $\alpha_{14(24)}=\xi''_{14(24)}\omega_{14(24)}/c$ as a function of the probe-$\Lambda$ system detuning ($\delta_4$) and the closed-loop phase ($\Phi_0$), where the atom density is $N = 10^{15}~m^{-3}$. As can be seen, these parameters vary over a wide range, so the beams are either attenuated (negative regions, in blue) or amplified (positive regions, in yellow) as they propagate through the cavity. For lasing to occur, the parameters should be chosen in such a way that both beams experience the same amount of gain (i.e.
$\alpha\geq 0$ (yellow regions)). Here both pumps are assumed to have the same strength, with $\Omega_3 = 10\Gamma_3$ and common detuning $\delta_3 = 10\Gamma_3$. The probes are also assumed to have the same strength, with $\Omega_4 = \Gamma_3$. Figure~\ref{Fig4} shows the variation of the absorption coefficients $\alpha_{14}$ and $\alpha_{24}$ for both transitions to the 4$^{th}$ level at a fixed detuning value of $\delta_4 = 20\Gamma_3$, as a function of $\Phi_0$. As can be seen, the polarizability of both transitions varies substantially as a function of this phase, and the beams can experience different amounts of gain or loss. In particular, there are three points (denoted with black stars) where both beams experience the same effect. The region of interest is the section where both transitions experience gain as they propagate through the gas. This corresponds to the region where both $\alpha$ values lie above the black dashed line. The point where they both experience the same gain is denoted as point 3. At this point, with $\Phi_0 = 3\pi/2$, the gain experienced by both beams is $\approx 1.8 ~ m^{-1}$. Note that a fixed $\Phi_0$ satisfying the lasing condition implies that the phases of the two laser beams must follow a certain relationship, set by the phases of the pump beams. For a cavity of length $L_c=4.38 ~ cm$ (corresponding to $m=1$ in eq.~\ref{cavity length}) this amounts to a per-pass gain of $0.08$, which is large enough to overcome the losses of a ring cavity with a transmittivity of $T\approx 16\%$ for the output-coupler mirror. Once a concrete scheme is adopted for realizing this process (e.g., $^{87}$Rb atoms in a vapor cell), the transition matrix elements would be known, thereby making it possible to determine the values of the electric fields, and hence the intensities, for each laser, since the value of the Rabi frequency is established from the preceding discussion.
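The cavity-length and per-pass-gain numbers above can be reproduced from eq.~\ref{cavity length}; a minimal sketch, using the $^{87}$Rb ground-state hyperfine splitting as the illustrative input (the splitting value and gain figure are taken as stated, the constants are standard):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

delta_nu = 6.834682e9                  # 87Rb hyperfine splitting, Hz
E21 = 2 * math.pi * hbar * delta_nu    # E_2 - E_1 in joules

# L_c = 2 m pi hbar c / (E_2 - E_1); for m = 1 this is simply c / delta_nu
L_c = 2 * math.pi * hbar * c / E21
assert math.isclose(L_c, c / delta_nu)
print(f"L_c = {L_c * 100:.2f} cm")     # -> 4.39 cm, consistent with the quoted 4.38 cm

# Per-pass gain for alpha ~ 1.8 1/m found at point 3 of Fig. 4
alpha = 1.8                            # m^-1
print(f"per-pass gain = {alpha * L_c:.2f}")   # -> 0.08, matching the text
```

The small discrepancy (4.39 vs. 4.38 cm) just reflects rounding of the splitting frequency.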
\section{Conclusion and Outlook} In this paper we have proposed a bi-frequency Raman laser in a double-$\Lambda$ configuration. Unlike conventional Raman lasers, the output of such a laser consists of two phase-locked beams separated by a typical value of a few GHz, corresponding to the frequency separation of the meta-stable ground states. Due to the sensitivity of the gain value to the closed-loop phase, the output modes of the laser are phase-locked, with phases directly determined by the optical pump phases. Furthermore, we have described a systematic scheme that produces an equivalent 2-level model for the 4-level system. The equivalent model explicitly shows how each set of pumps and probes, and the closed-loop phase, play roles in controlling the final states of the quantum emitter. The generalization of the procedure to multi-$\Lambda$ systems is straightforward as long as all $\Lambda$ subsystems have the same frequency detuning. By analytically solving for the steady state of the density matrix, we have identified explicitly the contribution of the closed-loop phase, the probe detuning, and the pump Rabi frequencies in controlling the linear susceptibilities for the probes in the double-$\Lambda$ system. For devices such as Raman atomic interferometers and CPT clocks, it is necessary to realize a pair of laser frequencies that are phase-coherent with each other, while differing in frequency by the ground state hyperfine splitting of an alkali atom, such as $^{87}$Rb. The bi-frequency laser obtained via the proposed scheme uses a pair of such lasers as pumps, and creates another such pair. However, the bi-frequency laser pair may have properties that are better suited for these applications than the original pump lasers. It is well known that for these applications, it is important to ensure that the linewidth of each laser is as narrow as possible, in order to suppress fluctuations in the light shift. 
Recently, we have shown~\cite{Scheuer15, Zhou16, Yablon17} that a Raman laser acts as a subluminal laser, with a quantum noise limited linewidth (Schawlow-Townes linewidth: STL) that is expected to be narrower than that of a conventional laser by a factor equaling the square of the group index. In Ref.~\cite{Yablon17} the observed group index was $\sim$663, with an expected STL of $\sim$1.2 $\mu$Hz. Since the bi-frequency lasers described here are fundamentally Raman lasers, it is expected that these lasers would also have group indices that are substantially larger than unity, which in turn would imply very small STLs. Of course, the actual group indices for the bi-frequency lasers would depend on the details of the actual atomic transitions employed, the cavity parameters, and the pump powers. Investigations are in progress to quantify this feature of a bi-frequency laser employing $^{87}$Rb atoms, taking into account non-idealities due to the presence of additional energy levels. The bi-frequency lasing described here is similar to lasing without inversion (LWI), which has been investigated extensively in the past~\cite{Harris89,Imamoglu91,Schully94,Bhatia01}. However, unlike conventional LWI schemes, the bi-frequency laser suggested in this work produces two non-degenerate lasers simultaneously. These may prove to be easier to implement experimentally, and enable realization of lasers at frequencies for which creation of population inversion has not been shown to be possible with existing technologies. In order to use the bi-frequency lasing process for this goal, it would be necessary to identify a suitable quantum system for which the probe-lambda transition is at a frequency high enough that a conventional laser does not exist at that frequency, while the pump-lambda transition is at a frequency for which high-power lasers exist. The resulting bi-frequency laser would transfer energy from the low-frequency pump lasers to the high-frequency output. 
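The quadratic STL narrowing can be illustrated with the group index reported in the cited experiment (a sketch only; the group index $n_g = 663$ is from the text, whereas the conventional-laser STL value below is an illustrative assumption chosen to be consistent with the quoted $\sim$1.2 $\mu$Hz result):

```python
# Schawlow-Townes linewidth (STL) of a subluminal Raman laser:
# STL_subluminal ~= STL_conventional / n_g**2, with n_g the group index.
n_g = 663                 # observed group index from the cited experiment
narrowing = n_g ** 2      # ~4.4e5-fold linewidth reduction

stl_conventional = 0.53   # Hz; assumed value, for illustration only
stl_subluminal = stl_conventional / narrowing  # ~1.2e-6 Hz (micro-Hz)
```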
Of course, in the model shown in this paper, we have not considered depletion of the pumps, assuming that the power in the bi-frequency laser (probe-lambda) is very low compared to that of the pump. However, if the mean frequency of the pump-lambda system is much lower than that of the probe-lambda system, this approximation is not a suitable one, and it is necessary to consider a more comprehensive model where the pump depletion is taken into account. Investigations are underway to identify four-level systems that can be used in this manner to transfer energy from a low-frequency laser to a very high-frequency bi-frequency laser, taking into account pump depletion. As a follow-up work, it would be important to extend the study to investigate the phase-sensitive non-linear susceptibility of such a double-$\Lambda$ system, in order to study spatial solitons and their dispersive features. Moreover, it would be useful to utilize the powerful, effective 2-level model for further quantum electrodynamical studies in the context of the Jaynes-Cummings or the Tavis-Cummings models.
\section{Introduction} Recently, extracting textual information from natural scene images has become increasingly popular, due to the growing demands of real-world applications (e.g., product search \cite{bai2017integrating}, image retrieval \cite{jaderberg2016reading}, and autonomous driving). Scene text detection, which aims at locating text in natural images, plays an important role in various text reading systems \cite{neumann2010method, epshtein2010detecting, wang2011end, bissacco2013photoocr, jaderberg2014deep,gomez2017textproposals, Busta_2017_ICCV, Li_2017_ICCV}. \begin{figure} \vspace{-2mm} \begin{centering} \includegraphics[scale=0.32]{imgs/overview5} \par\end{centering} \caption{The top row and the bottom row show the predicted corner points and the position-sensitive maps, respectively, in top-left, top-right, bottom-right, bottom-left order.} \label{img_overview} \vspace{-5mm} \end{figure} Scene text detection is challenging due to both external and internal factors. The external factors come from the environment, such as noise, blur and occlusion, which are also major problems disturbing general object detection. The internal factors are caused by the properties and variations of scene text. Compared with general object detection, scene text detection is more complicated because: 1) Scene text may exist in natural images with arbitrary orientation, so the bounding boxes can also be rotated rectangles or quadrangles; 2) The aspect ratios of bounding boxes of scene text vary significantly; 3) Since scene text can be in the form of characters, words, or text lines, algorithms might be confused when locating the boundaries. \begin{figure*} \vspace{-2mm} \begin{centering} \includegraphics[scale=0.25]{imgs/pipeline3} \par\end{centering} \caption{Overview of our method. Given an image, the network outputs corner points and segmentation maps by corner detection and position-sensitive segmentation. 
Then candidate boxes are generated by sampling and grouping corner points. Finally, those candidate boxes are scored by segmentation maps and suppressed by NMS.} \label{img_pipeline} \end{figure*} In the past few years, scene text detection has been widely studied \cite{epshtein2010detecting, bissacco2013photoocr,yao2012detecting,jaderberg2014deep,tian2015text,zhang2016multi,Shi_2017_CVPR,tian2017wetext} and has made notable progress recently, with the rapid development of general object detection and semantic segmentation. Based on general object detection and semantic segmentation models, several well-designed modifications have been made to detect text more accurately. These scene text detectors can be split into two branches. The first branch is based on general object detectors (SSD \cite{liu2016ssd}, YOLO \cite{redmon2016you} and DenseBox \cite{huang2015densebox}), such as TextBoxes \cite{liao2017textboxes}, FCRN \cite{gupta2016synthetic} and EAST \cite{Zhou_2017_CVPR}, which predict candidate bounding boxes directly. The second branch is based on semantic segmentation, such as \cite{zhang2016multi} and \cite{yao2016scene}, which generate segmentation maps and produce the final text bounding boxes by post-processing. Different from previous methods, in this paper we combine the ideas of object detection and semantic segmentation and apply them in an alternative way. Our motivations mainly come from two observations: 1) a rectangle can be determined by its corner points, regardless of the size, aspect ratio or orientation of the rectangle; 2) region segmentation maps can provide effective location information of text. Thus, we first detect the corner points (top-left, top-right, bottom-right, bottom-left, as shown in Fig. \ref{img_overview}) of text regions rather than text boxes directly. Besides, we predict position-sensitive segmentation maps (shown in Fig. 
\ref{img_overview}) instead of a text/non-text map as in \cite{zhang2016multi} and \cite{yao2016scene}. Finally, we generate candidate bounding boxes by sampling and grouping the detected corner points and then eliminate unreasonable boxes with segmentation information. The pipeline of our proposed method is depicted in Fig. \ref{img_pipeline}. The key advantages of the proposed method are as follows: 1) Since we detect scene text by sampling and grouping corner points, our approach can naturally handle arbitrary-oriented text; 2) As we detect corner points rather than text bounding boxes, our method can spontaneously avoid the problem of large variation in aspect ratio; 3) With position-sensitive segmentation, it can segment text instances well, no matter whether the instances are characters, words, or text lines; 4) In our method, the boundaries of candidate boxes are determined by corner points. Compared with regressing text bounding boxes from anchors (\cite{liao2017textboxes, ma2017arbitrary}) or from text regions (\cite{Zhou_2017_CVPR,He_2017_ICCV}), the yielded bounding boxes are more accurate, particularly for long text. We validate the effectiveness of our method on horizontal, oriented, and long text, as well as multi-lingual text from public benchmarks. The results show the advantages of the proposed algorithm in accuracy and speed. Specifically, the F-measures of our method on ICDAR2015 \cite{karatzas2015icdar}, MSRA-TD500 \cite{yao2012detecting} and MLT \cite{MLT-Challenge} are \textbf{$84.3\%$}, \textbf{$81.5\%$} and \textbf{$72.4\%$} respectively, which outperform previous state-of-the-art methods significantly. Besides, our method is also competitive in efficiency. It can process more than \textbf{10.4} images ($512\times512$ in size) per second. The contributions of this paper are four-fold: (1) We propose a new scene text detector that combines the ideas of object detection and segmentation, which can be trained and evaluated end-to-end. 
(2) Based on position-sensitive ROI pooling \cite{dai2016r}, we propose a rotated position-sensitive ROI average pooling layer that can handle arbitrary-oriented proposals. (3) Our method can simultaneously handle the challenges (such as rotation, varying aspect ratios, very close instances) in multi-oriented scene text, which previous methods suffer from. (4) Our method achieves better or competitive results in both accuracy and efficiency. \section{Related Work} \subsection{Regression Based Text Detection} Regression based text detection has become the mainstream of scene text detection in the past two years. Based on general object detectors, several text detection methods have been proposed and have achieved substantial progress. Originating from SSD \cite{liu2016ssd}, TextBoxes \cite{liao2017textboxes} uses "long" default boxes and "long" convolutional filters to cope with extreme aspect ratios. Similarly, in \cite{ma2017arbitrary} Ma \emph{et al.} utilize the architecture of Faster-RCNN \cite{ren2015faster} and add rotated anchors in the RPN to detect arbitrary-oriented scene text. SegLink \cite{Shi_2017_CVPR} predicts text segments and the linkage between them in an SSD style network and links the segments into text boxes, in order to handle long oriented text in natural scenes. Based on DenseBox \cite{huang2015densebox}, EAST \cite{Zhou_2017_CVPR} regresses text boxes directly. Our method is also adapted from a general object detector, DSSD \cite{fu2017dssd}. But unlike the above methods that regress text boxes or segments directly, we propose to localize the positions of corner points, and then generate text boxes by sampling and grouping the detected corners. \subsection{Segmentation Based Text Detection} Segmentation based text detection is another direction of text detection. Inspired by FCN \cite{long2015fully}, several methods have been proposed to detect scene text by using segmentation maps. 
In \cite{zhang2016multi}, Zhang \emph{et al.} first attempt to extract text blocks from a segmentation map produced by an FCN. Then they detect characters in those text blocks with MSER \cite{neumann2010method} and group the characters into words or text lines by some a priori rules. In \cite{yao2016scene}, Yao \emph{et al.} use an FCN to predict three types of maps (text regions, characters, and linking orientations) of the input images. Then some post-processing steps are conducted to obtain text bounding boxes from the segmentation maps. Different from the previous segmentation based text detection methods, which usually need complex post-processing, our method is simpler and clearer. In the inference stage, the position-sensitive segmentation maps are used to score the candidate boxes by our proposed \textbf{Rotated Position-Sensitive ROI Average Pooling} layer. \subsection{Corner Point Based General Object Detection} Corner point based general object detection is a new stream of general object detection methods. In DeNet \cite{Tychsen-Smith_2017_ICCV}, Tychsen-Smith \emph{et al.} propose a corner detect layer and a sparse sample layer to replace the RPN in a Faster-RCNN style two-stage model. In \cite{wang2017point}, Wang \emph{et al.} propose PLN (Point Linking Network), which regresses the corner/center points of the bounding box and their links using a fully convolutional network. Then the bounding boxes of objects are formed using the corner/center points and their links. Our method is inspired by those corner point based object detection methods, but there are key differences. First, the corner detector of our method is different. Second, we use segmentation maps to score candidate boxes. Third, our method can produce arbitrary-oriented boxes for objects (text). \subsection{Position-Sensitive Segmentation} Recently, instance-aware semantic segmentation methods have been proposed with position-sensitive maps. 
In \cite{dai2016instance}, Dai \emph{et al.} first introduce relative position into segmentation and propose InstanceFCN for instance segment proposal. In FCIS \cite{Li_2017_CVPR}, with the assistance of position-sensitive inside/outside score maps, Li \emph{et al.} propose an end-to-end network for instance-aware semantic segmentation. We also adopt position-sensitive segmentation maps to predict text regions. Compared with the above-mentioned methods, there are three key differences: 1) We optimize the network with position-sensitive ground truth directly (detailed in Sec \ref{sec_label}); 2) Our position-sensitive maps can be used to predict text regions and score proposals simultaneously (detailed in Sec \ref{sec_scoring}), different from FCIS which uses two types of position-sensitive maps (inside and outside); 3) Our proposed Rotated Position-Sensitive ROI Average Pooling can handle arbitrary-oriented proposals. \begin{figure*} \vspace{-3mm} \begin{centering} \includegraphics[scale=0.4]{imgs/network} \par\end{centering} \caption{Network architecture. The network contains three parts: the backbone, the corner point detectors and the position-sensitive segmentation predictor. The backbone is adapted from DSSD \cite{fu2017dssd}. Corner point detectors are built on multiple feature layers (blocks in pink). The position-sensitive segmentation predictor shares some features (pink blocks) with the corner point detectors.} \label{img_network} \end{figure*} \section{Network} The network of our method is a fully convolutional network that plays the roles of feature extraction, corner detection and position-sensitive segmentation. The network architecture is shown in Fig. \ref{img_network}. Given an image, the network produces candidate corner points and segmentation maps. 
\subsection{Feature Extraction} The backbone of our model is adapted from a pre-trained VGG16 \cite{simonyan2014very} network and designed with the following considerations: 1) the size of scene text varies hugely, so the backbone must have enough capacity to handle this problem well; 2) backgrounds in natural scenes are complex, so the features should contain richer context. Inspired by the good performance achieved on those problems by FPN \cite{Lin_2017_CVPR} and DSSD \cite{fu2017dssd}, we adopt a backbone in the FPN/DSSD architecture to extract features. In detail, we convert fc6 and fc7 in VGG16 to convolutional layers and name them conv6 and conv7 respectively. Then several extra convolutional layers (conv8, conv9, conv10, conv11) are stacked above conv7 to enlarge the receptive fields of the extracted features. After that, a few deconvolution modules proposed in DSSD \cite{fu2017dssd} are used in a top-down pathway (Fig. \ref{img_network}). Particularly, to detect text with different sizes well, we cascade deconvolution modules with $256$ channels from conv11 to conv3 (the features from conv10, conv9, conv8, conv7, conv4, conv3 are reused), and 6 deconvolution modules are built in total. Including the features of conv11, we name those output features $F_{3}, F_{4}, F_{7}, F_{8}, F_{9}, F_{10}$ and $F_{11}$ for convenience. In the end, the features extracted by conv11 and the deconvolution modules, which have richer representations, are used to detect corner points and predict position-sensitive maps. \subsection{Corner Detection} For a given rotated rectangular bounding box $R=(x,y,w,h,\theta)$, the 4 corner points (top-left, top-right, bottom-right, bottom-left) can be represented as two-dimensional coordinates $\{(x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3}),(x_{4},y_{4})\}$ in clockwise order. 
To detect corner points conveniently, we redefine and represent a corner point by a horizontal square $C=(x_{c},y_{c},ss,ss)$, where $x_{c},y_{c}$ are the coordinates of a corner point (such as $x_{1},y_{1}$ for the top-left point) as well as the center of the horizontal square. $ss$ is the length of the short side of the rotated rectangular bounding box $R$. Following SSD and DSSD, we detect corner points with default boxes. Different from SSD or DSSD, where each default box outputs the classification scores and offsets of one corresponding candidate box, corner point detection is more complex, because there might be more than one corner point at the same location (e.g., a location can be the bottom-left corner of one box and the top-right corner of another simultaneously). So in our case, a default box should output classification scores and offsets for $4$ candidate boxes corresponding to the $4$ types of corner points. We adapt the prediction module proposed in \cite{fu2017dssd} to predict scores and offsets in two branches in a convolutional manner. In order to reduce the computational complexity, the number of filters of all convolutions is set to $256$. For an $m\times n$ feature map with $k$ default boxes in each cell, the "score" branch and "offset" branch output $2$ scores and $4$ offsets respectively for each type of corner point of each default box. Here, the $2$ scores of the "score" branch indicate whether a corner point exists at this position. In total, the output channels of the "score" branch and the "offset" branch are $k\times q\times2$ and $k\times q\times4$, where $q$ is the number of corner point types. By default, $q$ is equal to 4. In the training stage, we follow the matching strategy between default boxes and ground truth boxes in SSD. To detect scene text of different sizes, we use default boxes of multiple sizes on multiple layer features. The scales of all default boxes are listed in Table \ref{tab_scales}. The aspect ratios of default boxes are set to $1$. 
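The channel bookkeeping of the two prediction branches can be sketched as follows (a simple illustration of the $k\times q\times2$ and $k\times q\times4$ counts stated above; the function name is ours):

```python
def corner_head_channels(k, q=4):
    """Output channels of the 'score' and 'offset' branches for a
    feature map with k default boxes per cell and q corner point types."""
    score_channels = k * q * 2   # 2-way corner / non-corner score
    offset_channels = k * q * 4  # (dx, dy, dss, dss) offsets
    return score_channels, offset_channels

# e.g., with the 6 default-box scales used on F_3:
score_c, offset_c = corner_head_channels(k=6)  # (48, 96)
```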
\begin{table*} \vspace{-2mm} \centering{}% \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline {\small{}layer} & {\small{}$F_{3}$} & {\small{}$F_{4}$} & {\small{}$F_{7}$} & {\small{}$F_{8}$} & {\small{}$F_{9}$} & {\small{}$F_{10}$} & {\small{}$F_{11}$}\tabularnewline \hline \hline {\small{}scales} & {\small{}$4,6,8,10,12,16$} & {\small{}$20,24,28,32$} & {\small{}$36,40,44,48$} & {\small{}$56,64,72,80$} & {\small{}$88,96,104,112$} & {\small{}$124,136,148,160$} & {\small{}$184,208,232,256$}\tabularnewline \hline \end{tabular} \caption{Scales of default boxes on different layers.} \label{tab_scales} \end{table*} \subsection{Position-Sensitive Segmentation} In previous segmentation based text detection methods \cite{zhang2016multi, yao2016scene}, a segmentation map is generated to represent the probability of each pixel belonging to text regions. However, the text regions in the score map often cannot be separated from each other, as a result of overlapping text regions and inaccurate predictions of text pixels. To get the text bounding boxes from the segmentation map, complex post-processing is conducted in \cite{zhang2016multi, yao2016scene}. Inspired by InstanceFCN \cite{dai2016instance}, we use position-sensitive segmentation to generate text segmentation maps. Compared with previous text segmentation methods, relative position information is additionally encoded. In detail, for a text bounding box $R$, a $g\times g$ regular grid is used to divide the text bounding box into multiple bins (\emph{i.e.}, for a $2\times2$ grid, a text region can be split into $4$ bins, that is, top-left, top-right, bottom-right, bottom-left). For each bin, a segmentation map is used to determine whether the pixels in this map belong to this bin. We build position-sensitive segmentation together with corner point detection in a unified network. 
We reuse the features of {$F_{3}$, $F_{4}$, $F_{7}$, $F_{8}$, $F_{9}$} and build some convolutional blocks on them, following the residual block architecture of the corner point detection branch (shown in Fig. \ref{img_network}). All outputs of those blocks are resized to the scale of $F_{3}$ by bilinear upsampling with the scale factors set to {$1$, $2$, $4$, $8$, $16$}. Then all those outputs with the same scale are added together to generate richer features. We further enlarge the resolution of the fused features by two continuous \emph{Conv1x1-BN-ReLU-Deconv2x2} blocks and set the number of kernels of the last deconvolution layer to $g\times g$. So, the final position-sensitive segmentation maps have $g \times g$ channels and the same size as the input images. In this work, we set $g$ to 2 by default. \section{Training and Inference} \subsection{Training} \subsubsection{Label Generation} \label{sec_label} For an input training sample, we first convert each text box in the ground truth into a rectangle that covers the text box region with minimal area and then determine the relative positions of the $4$ corner points. We determine the relative positions of a rotated rectangle by the following rules: 1) the x-coordinates of the top-left and bottom-left corner points must be less than the x-coordinates of the top-right and bottom-right corner points; 2) the y-coordinates of the top-left and top-right corner points must be less than the y-coordinates of the bottom-left and bottom-right corner points. After that, the original ground truth can be represented as a rotated rectangle with relative positions of corner points. For convenience, we denote the rotated rectangle as $R=\{P_{i}|i\in\{1,2,3,4\}\}$, where $P_{i} = (x_{i}, y_{i})$ are the corner points of the rotated rectangle in top-left, top-right, bottom-right, bottom-left order. We generate the labels of corner point detection and position-sensitive segmentation using $R$. 
For corner point detection, we first compute the short side of $R$ and represent the $4$ corner points by horizontal squares, as shown in Fig. \ref{img_gt} (a). For position-sensitive segmentation, we generate pixel-wise masks of text/non-text with $R$. We first initialize $4$ masks with the same scale as the input image and set all pixel values to $0$. Then we divide $R$ into four bins with a $2\times2$ regular grid and assign each bin to a mask, such as the top-left bin to the first mask. After that, we set the value of all pixels in those bins to $1$, as shown in Fig. \ref{img_gt} (b). \begin{figure} \begin{centering} \includegraphics[scale=0.50]{imgs/gt1} \par\end{centering} \caption{Label generation for corner point detection and position-sensitive segmentation. (a) Corner points are redefined and represented by squares (boxes in white, red, green, blue) with the side length set as the short side of the text bounding box $R$ (yellow box). (b) Corresponding ground truth of $R$ in (a) for position-sensitive segmentation.} \label{img_gt} \end{figure} \subsubsection{Optimization} We train the corner detection and position-sensitive segmentation tasks simultaneously. The loss function is defined as: \begin{equation} L=\frac{1}{N_{c}}L_{conf}+\frac{\lambda_{1}}{N_{c}}L_{loc} + \frac{\lambda_{2}}{N_{s}}L_{seg} \end{equation} where $L_{conf}$ and $L_{loc}$ are the loss functions of the score branch for predicting confidence scores and the offset branch for localization in the corner point detection module. $L_{seg}$ is the loss function of position-sensitive segmentation. $N_{c}$ is the number of positive default boxes, and $N_{s}$ is the number of pixels in the segmentation maps. $N_{c}$ and $N_{s}$ are used to normalize the losses of corner point detection and segmentation. $\lambda_{1}$ and $\lambda_{2}$ are the balancing factors of the three tasks. By default, we set $\lambda_{1}$ to 1 and $\lambda_{2}$ to 10. 
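The position-sensitive label generation described above can be sketched for the simplified axis-aligned case (a hedged illustration: the paper handles rotated rectangles, while this sketch keeps an axis-aligned box and a row-major bin order for clarity; all names are ours):

```python
import numpy as np

def position_sensitive_masks(box, image_hw, g=2):
    """Generate g*g ground-truth masks for an axis-aligned box
    (x0, y0, x1, y1): each mask marks one bin of a g x g grid."""
    h, w = image_hw
    x0, y0, x1, y1 = box
    masks = np.zeros((g * g, h, w), dtype=np.uint8)
    xs = np.linspace(x0, x1, g + 1).round().astype(int)
    ys = np.linspace(y0, y1, g + 1).round().astype(int)
    for row in range(g):
        for col in range(g):
            # bins enumerated row-major here (TL, TR, BL, BR)
            masks[row * g + col,
                  ys[row]:ys[row + 1],
                  xs[col]:xs[col + 1]] = 1
    return masks

# A 4x4 box on an 8x8 "image": each of the 4 masks covers one 2x2 bin.
masks = position_sensitive_masks((2, 2, 6, 6), image_hw=(8, 8))
```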
We follow the matching strategy of SSD and train the score branch using the cross entropy loss: \begin{equation} L_{conf}=CrossEntropy(y_{c}, p_{c}) \end{equation} where $y_{c}$ is the ground truth of all default boxes, 1 for positive and 0 otherwise, and $p_{c}$ is the predicted scores. Considering the extreme imbalance between positive and negative samples, balancing the two categories is necessary. We use the online hard negative mining proposed in \cite{shrivastava2016training} to balance the training samples and set the ratio of positives to negatives to $1:3$. For the offset branch, we regress the offsets relative to default boxes as in Fast RCNN \cite{Girshick_2015_ICCV} and optimize them with the Smooth L1 loss: \begin{equation} L_{loc}=SmoothL1(y_{l},p_{l}) \end{equation} where $y_{l}=(\triangle x, \triangle y, \triangle s_{s}, \triangle s_{s})$ is the ground truth of the offset branch and $p_{l}=(\triangle\tilde{x}, \triangle\tilde{y}, \triangle\tilde{s_{s}}, \triangle\tilde{s_{s}})$ is the predicted offsets. The $y_{l}$ can be calculated from a default box $B=(x_{b}, y_{b}, ss_{b}, ss_{b})$ and a corner point box $C=(x_{c}, y_{c}, ss_{c}, ss_{c})$: \begin{equation} \triangle x=\frac{x_{b}-x_{c}}{ss_{b}} \end{equation} \begin{equation} \triangle y=\frac{y_{b}-y_{c}}{ss_{b}} \end{equation} \begin{equation} \triangle ss=\log(\frac{ss_{b}}{ss_{c}}) \end{equation} We train the position-sensitive segmentation by minimizing the Dice loss \cite{milletari2016v}: \begin{equation} L_{seg}=1-\frac{2y_{s}p_{s}}{y_{s} + p_{s}} \end{equation} where $y_{s}$ is the label of position-sensitive segmentation and $p_{s}$ is the prediction of our segmentation module. \subsection{Inference} \subsubsection{Sampling and Grouping} In the inference stage, many candidate corner points are produced, each with a predicted location, short side and confidence score. Points with high scores (greater than 0.5 by default) are kept. After NMS, 4 corner point sets are composed based on the relative position information. 
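The offset regression targets and the Dice loss above can be written out directly (a minimal sketch; the function names are ours, and a small epsilon is added to the Dice denominator for numerical safety):

```python
import numpy as np

def encode_offsets(default_box, corner_box):
    """Targets (dx, dy, dss) for Smooth-L1 regression, following
    the offset equations above (B = (x_b, y_b, ss_b), C = (x_c, y_c, ss_c))."""
    xb, yb, ssb = default_box
    xc, yc, ssc = corner_box
    return ((xb - xc) / ssb, (yb - yc) / ssb, np.log(ssb / ssc))

def dice_loss(y, p, eps=1e-6):
    """L_seg = 1 - 2*y*p / (y + p), with sums taken over all pixels."""
    inter = 2.0 * np.sum(y * p)
    return 1.0 - inter / (np.sum(y) + np.sum(p) + eps)

targets = encode_offsets((10.0, 10.0, 4.0), (8.0, 9.0, 4.0))  # (0.5, 0.25, 0.0)
y = np.array([[1.0, 1.0], [0.0, 0.0]])
```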
We generate the candidate bounding boxes by sampling and grouping the predicted corner points. In theory, a rotated rectangle can be constructed from two points and a side perpendicular to the line segment connecting the two points. For a predicted point, the short side is known, so we can form a rotated rectangle by sampling and grouping two corner points from the corner point sets, using the (top-left, top-right), (top-right, bottom-right), (bottom-left, bottom-right) and (top-left, bottom-left) pairs. Several a priori rules are used to filter out unsuitable pairs: 1) the relative positional relations can not be violated, e.g., the x-coordinate of the top-left point must be less than that of the top-right point in a (top-left, top-right) pair; 2) the shortest side of the constructed rotated rectangle must be greater than a threshold (the default is 5); 3) the predicted short sides $ss_{1}$ and $ss_{2}$ of the two points in a pair must satisfy: \begin{equation} \frac{\max(ss_{1},ss_{2})}{\min(ss_{1},ss_{2})}\leq1.5 \end{equation} \subsubsection{Scoring} \label{sec_scoring} A large number of candidate bounding boxes can be generated after sampling and grouping corner points. Inspired by InstanceFCN \cite{dai2016instance} and R-FCN \cite{dai2016r}, we score the candidate bounding boxes by the position-sensitive segmentation maps. The process is shown in Fig. \ref{img_gt}. To handle rotated text bounding boxes, we adapt the position-sensitive ROI pooling layer in \cite{dai2016r} and propose the \textbf{Rotated Position-Sensitive ROI Average Pooling} layer. Specifically, for a rotated box, we first split the box into $g\times g$ bins. Then we generate a rectangle for each bin with the minimum area to cover the bin. We loop over all pixels in the minimum rectangle and calculate the mean value of all pixels which lie in the bin. In the end, the score of a rotated bounding box is obtained by averaging the means of the $g\times g$ bins. The specific process is shown in Algorithm \ref{algo_rps}. 
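The pair-filtering rules above can be sketched as a simple predicate (a hedged illustration: rule 2 is approximated here by the predicted short sides rather than by the fully constructed rectangle, and all names are ours):

```python
def valid_pair(p1, p2, ss1, ss2, pair_type, min_side=5.0, max_ratio=1.5):
    """Check whether a sampled corner pair survives the three rules.
    p1, p2 are (x, y) coordinates; ss1, ss2 are predicted short sides."""
    # Rule 1: relative positions, e.g. top-left must lie left of top-right.
    if pair_type == ("top-left", "top-right") and p1[0] >= p2[0]:
        return False
    # Rule 3: the two predicted short sides must be mutually consistent.
    if max(ss1, ss2) / min(ss1, ss2) > max_ratio:
        return False
    # Rule 2 (approximated): the shortest side must exceed the threshold.
    return min(ss1, ss2) > min_side

assert valid_pair((0, 0), (40, 0), 10, 12, ("top-left", "top-right"))
assert not valid_pair((0, 0), (40, 0), 10, 20, ("top-left", "top-right"))
```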
\begin{figure} \begin{centering} \includegraphics[scale=0.25]{imgs/scoring} \par\end{centering} \caption{Overview of the scoring process. The yellow boxes in (a) are candidate boxes. (b) are the predicted segmentation maps. We generate the instance segments (c) of candidate boxes by assembling the segmentation maps as in \cite{dai2016instance}. Scores are calculated by averaging the instance segment regions. } \label{img_gt} \vspace{-3mm} \end{figure} \begin{algorithm}[t] \caption{Rotated Position-Sensitive ROI Average Pooling} \hspace*{0.02in} {\bf Input:} rotated bounding box $B$, $g\times g$ regular grid $G$, segmentation maps $S$ \begin{algorithmic}[1] \State Generating $Bins$ by splitting $B$ with $G$. \State $M\leftarrow 0$, $i\leftarrow 0$ \For{$i$ in $range(g\times g)$} \State $bin\leftarrow Bins[i]$, $C\leftarrow 0$, $P\leftarrow 0$, \State $R \leftarrow MiniRect(bin)$ \For{$pixel$ in $R$} \If{$pixel$ in $bin$}      \State $C\leftarrow C + 1$, $P\leftarrow P + S[i][pixel].value$ \EndIf \EndFor \State $M\leftarrow M + \frac{P}{C}$ \EndFor \State $score \leftarrow \frac{M}{g\times g}$ \State \Return score \end{algorithmic} \label{algo_rps} \end{algorithm} The candidate boxes with low scores will be filtered out. We set the threshold $\tau$ to 0.6 by default. \section{Experiments} To validate the effectiveness of the proposed method, we conduct experiments on five public datasets: ICDAR2015, ICDAR2013, MSRA-TD500, MLT and COCO-Text, and compare with other state-of-the-art methods. \subsection{Datasets} \textbf{SynthText} \cite{gupta2016synthetic} is a synthetically generated dataset which consists of about 800000 synthetic images. We use the dataset with word level labels to pre-train our model. \textbf{ICDAR2015} is a dataset proposed in Challenge 4 of the 2015 Robust Reading Competition \cite{karatzas2015icdar} for incidental scene text detection. There are 1000 images for training and 500 images for testing, with annotations labeled as word level quadrangles. 
\textbf{ICDAR2013} is a dataset proposed in Challenge 2 of the 2013 Robust Reading Competition \cite{karatzas2013icdar}, which focuses on horizontal scene text. It contains 229 images for training and 233 images for testing. \textbf{MSRA-TD500} \cite{yao2012detecting} is a dataset collected for detecting arbitrarily oriented long text lines. It consists of 300 training images and 200 test images with text-line-level annotations. \textbf{MLT} is a dataset proposed in the ICDAR2017 Competition \cite{MLT-Challenge} that focuses on the multi-oriented, multi-script and multi-lingual aspects of scene text. It consists of 7200 training images, 2000 validation images and 9000 test images. \textbf{COCO-Text} \cite{veit2016coco} is a large-scale scene text dataset derived from the MS COCO dataset \cite{lin2014microsoft}. In total, 63686 images are annotated, and two versions of annotations and splits (V1.1 and V1.4) have been released officially. Previous methods are all evaluated on V1.1, while the newer V1.4 is used in the ICDAR2017 Competition \cite{COCO-Text-Challenge}. \subsection{Implementation Details} \textbf{Training} Our model is pre-trained on SynthText and then finetuned on the other datasets (except COCO-Text). We use Adam \cite{kingma2014adam} to optimize our model with the learning rate fixed to $1e-4$. In the pre-training stage, we train our model on SynthText for one epoch. During the finetuning stage, the number of iterations is determined by the size of each dataset. \textbf{Data Augmentation} We use the same data augmentation strategy as SSD. We randomly sample a patch from the input image in the manner of SSD, then resize the sampled patch to $512\times512$. \textbf{Post-Processing} NMS is the only post-processing step of our method. We set the NMS threshold to $0.3$. Our method is implemented in PyTorch \cite{PyTorch}. All the experiments are conducted on a regular workstation (CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz; GPU: Titan Pascal; RAM: 64GB).
We train our model with a batch size of 24 on $4$ GPUs in parallel and evaluate it on 1 GPU with the batch size set to $1$. \begin{figure*} \noindent \begin{centering} \includegraphics[scale=0.28]{imgs/results1} \par\end{centering} \caption{Examples of detection results. From left to right in columns: ICDAR2015, ICDAR2013, MSRA-TD500, MLT, COCO-Text.} \label{img_results} \end{figure*} \subsection{Detecting Oriented Text} We evaluate our model on the ICDAR2015 dataset to test its ability to detect arbitrarily oriented text. We finetune our model for another 500 epochs on the ICDAR2015 and ICDAR2013 datasets. Note that, to better detect vertical text, in the last 15 epochs we randomly rotate the sampled patches by $90$ or $-90$ degrees with probability $0.2$. In testing, we set $\tau$ to 0.7 and resize the input images to $768\times1280$. Following \cite{Zhou_2017_CVPR,Hu_2017_ICCV,He_2017_ICCV}, we also evaluate our model on ICDAR2015 with multi-scale inputs, $\{512\times512, 768\times768, 768\times1280, 1280\times1280\}$ by default. We compare our method with other state-of-the-art methods and list all the results in Table \ref{tab_icdar2015}. Our method outperforms the previous methods by a large margin. When tested at a single scale, our method achieves an F-measure of $80.7\%$, which surpasses all competitors \cite{zhang2016multi,tian2016detecting,yao2016scene,Shi_2017_CVPR,Zhou_2017_CVPR,SSTD}. Our method achieves $84.3\%$ in F-measure with multi-scale inputs, higher than the current best result \cite{He_2017_ICCV} by $3.3\%$. To quantify the gain of detecting corner points over regressing text boxes directly, we train a network named ``baseline'' in Table \ref{tab_icdar2015} using the same settings as our method. The baseline model consists of the same backbone as our method and the prediction module of SSD/DSSD. At a slight cost in speed, our method boosts accuracy greatly ($53.3\%$ \emph{vs} $80.7\%$).
\begin{table} \small \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} & \textbf{FPS} \tabularnewline \hline \hline Zhang \emph{et al.} \cite{zhang2016multi} & 70.8 & 43.0 & 53.6 & 0.48 \tabularnewline \hline CTPN \cite{tian2016detecting} & 74.2 & 51.6 & 60.9 & 7.1 \tabularnewline \hline Yao \emph{et al.} \cite{yao2016scene} & 72.3 & 58.7 & 64.8 & 1.61 \tabularnewline \hline SegLink \cite{Shi_2017_CVPR} & 73.1 & \textbf{76.8} & 75.0 & - \tabularnewline \hline EAST \cite{Zhou_2017_CVPR} & 80.5 & 72.8 & 76.4 & 6.52\tabularnewline \hline SSTD \cite{SSTD} & 80.0 & 73.0 & 77.0 & \textbf{7.7} \tabularnewline \hline \textbf{baseline} & 66.0 & 44.7 & 53.3 & 4.5 \tabularnewline \hline \textbf{ours} & \textbf{94.1} & 70.7 & \textbf{80.7} & 3.6 \tabularnewline \hline \hline EAST $^*$ $^\text{\dag}$ \cite{Zhou_2017_CVPR}& 83.3 & 78.3 & 80.7 & - \tabularnewline \hline WordSup $^*$ \cite{Hu_2017_ICCV} & 79.3 & 77.0 & 78.2 & \textbf{2} \tabularnewline \hline He \emph{et al.} $^*$ $^\text{\dag}$ \cite{He_2017_ICCV} & 82.0 & \textbf{80.0} & 81.0 & 1.1 \tabularnewline \hline \textbf{ours}$^*$ & \textbf{89.5} & 79.7 & \textbf{84.3} & 1 \tabularnewline \hline \end{tabular} \par\end{centering} \caption{Results on ICDAR2015. $^*$ means multi-scale, $^\text{\dag}$ indicates that the base network of the model is not VGG16.} \label{tab_icdar2015} \end{table} \subsection{Detecting Horizontal Text} We evaluate the ability of our model to detect horizontal text on the ICDAR2013 dataset. We further train our model on ICDAR2013 for 60 epochs, starting from the finetuned ICDAR2015 model. In testing, the input images are resized to $512\times512$. We also use multi-scale inputs to evaluate our model. The results are listed in Table \ref{tab_icdar2013}; most are reported under the ``DetEval'' evaluation protocol. Our method achieves very competitive results.
When tested at single scale, our method achieves the F-measure of $85.8\%$, which is slightly lower than the highest result. Besides, our method can run at 10.4 FPS, faster than most methods. For multi-scale evaluation, our method achieves the F-measure of $88.0\%$, which is also competitive compared with other methods. \begin{table} \footnotesize \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} & \textbf{FPS} \tabularnewline \hline \hline Neumann \emph{et al.} \cite{neumann2015efficient} & 81.8 & 72.4 & 77.1 & 3 \tabularnewline \hline Neumann \emph{et al.} \cite{neumann2016real} & 82.1 & 71.3 & 76.3 & 3 \tabularnewline \hline Fastext \cite{busta2015fastext} & 84.0 & 69.3 & 76.8 & 6 \tabularnewline \hline Zhang \emph{et al.} \cite{zhang2015symmetry} & 88.0 & 74.0 & 80.0 & 0.02\tabularnewline \hline Zhang \emph{et al.} \cite{zhang2016multi} & 88.0 & 78.0 & 83.0 & 0.5 \tabularnewline \hline Yao \emph{et al.} \cite{yao2016scene} & 88.9 & 80.2 & 84.3 & 1.61 \tabularnewline \hline CTPN \cite{tian2016detecting} & 93.0 & 83.0 & \textbf{88.0} & 7.1 \tabularnewline \hline TextBoxes \cite{liao2017textboxes} & 88.0 & 74.0 & 81.0 & 11 \tabularnewline \hline SegLink \cite{Shi_2017_CVPR} & 87.7 & 83.0 & 85.3 & \textbf{20.6} \tabularnewline \hline SSTD \cite{SSTD} & 89.0 & \textbf{86.0} & \textbf{88.0} & 7.7 \tabularnewline \hline \textbf{ours} & \textbf{93.3} & 79.4 & 85.8 & 10.4 \tabularnewline \hline \hline FCRN $^*$ \cite{gupta2016synthetic} & 92.0 & 75.5 & 83.0 & 0.8 \tabularnewline \hline TextBoxes $^*$ \cite{liao2017textboxes} & 89.0 & 83.0 & 86.0 & 1.3 \tabularnewline \hline He \emph{et al.} $^*$ $^\text{\dag}$ \cite{He_2017_ICCV} & 92.0 & 81.0 & 86.0 & 1.1 \tabularnewline \hline WordSup $^*$ \cite{Hu_2017_ICCV} & \textbf{93.3} & \textbf{87.5} & \textbf{90.3} & \textbf{2} \tabularnewline \hline \textbf{ours}$^*$ & 92.0 & 84.4 & 88.0 & 1\tabularnewline \hline \end{tabular} \par\end{centering} 
\caption{Results on ICDAR2013. $^*$ means multi-scale, $^\text{\dag}$ indicates that the base network of the model is not VGG16. Note that the methods in the top three rows are evaluated under the ``ICDAR2013'' evaluation protocol.} \label{tab_icdar2013} \end{table} \subsection{Detecting Long Oriented Text Lines} On MSRA-TD500, we evaluate the performance of our method for detecting long and multi-lingual text lines. HUST-TR400 is also used as training data, since MSRA-TD500 contains only 300 training images. The model is initialized with the model pre-trained on SynthText and then finetuned for another 240 epochs. At test time, we input images of size $768\times 768$ and set $\tau$ to 0.65. As shown in Table \ref{tab_msra}, our method surpasses all the previous methods by a large margin. It achieves state-of-the-art performance in precision, recall and F-measure ($87.6\%$, $76.2\%$ and $81.5\%$), with an F-measure much better than the previous best result ($81.5\%$ \emph{vs.} $77.0\%$). This suggests that our method is more capable of detecting arbitrarily oriented long text than other methods.
\begin{table} \small \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} &\textbf{FPS} \tabularnewline \hline \hline TD-ICDAR \cite{yao2012detecting} & 53.0 & 52.0 & 50.0 & -\tabularnewline \hline TD-Mixture \cite{yao2012detecting} & 63.0 & 63.0 & 60.0 & -\tabularnewline \hline Kang \emph{et al.} \cite{kang2014orientation} & 71.0 & 62.0 & 66.0 & -\tabularnewline \hline Zhang \emph{et al.} \cite{zhang2016multi} & 83.0 & 67.0 & 74.0 & 0.48\tabularnewline \hline Yao \emph{et al.} \cite{yao2016scene} & 76.5 & 75.3 & 75.9 & 1.61\tabularnewline \hline EAST \cite{Zhou_2017_CVPR} & 81.7 & 61.6 & 70.2 & 6.52 \tabularnewline \hline EAST $^\text{\dag}$ \cite{Zhou_2017_CVPR} & 87.3 & 67.4 & 76.1 & \textbf{13.2} \tabularnewline \hline SegLink \cite{Shi_2017_CVPR} & 86.0 & 70.0 & 77.0 & 8.9 \tabularnewline \hline He \emph{et al.} $^\text{\dag}$ \cite{He_2017_ICCV} & 77.0 & 70.0 & 74.0 & 1.1 \tabularnewline \hline \textbf{ours} & \textbf{87.6} & \textbf{76.2} & \textbf{81.5} & 5.7 \tabularnewline \hline \end{tabular} \par\end{centering} \caption{Results on MSRA-TD500. $^\text{\dag}$ indicates that the base network of the model is not VGG16.} \label{tab_msra} \end{table} \subsection{Detecting Multi-Lingual Text} We verify the ability of our method to detect multi-lingual text on MLT. We finetune the model pre-trained on SynthText for about 120 epochs. When testing at a single scale, the input images are resized to $768 \times 768$. We evaluate our method online and compare with some public results on the leaderboard \cite{MLT-Challenge}. As shown in Table \ref{tab_mlt}, our method outperforms all competing methods by at least $3.1\%$.
\begin{table} \small \begin{centering} \begin{tabular}{|c|c|c|c|} \hline \textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \tabularnewline \hline \hline TH-DL \cite{MLT-Challenge} & 67.8 & 34.8 & 46.0 \tabularnewline \hline SARI\_FDU\_RRPN\_V1 \cite{MLT-Challenge} &71.2 & 55.5 & 62.4 \tabularnewline \hline Sensetime OCR \cite{MLT-Challenge} &56.9 & 69.4 & 62.6 \tabularnewline \hline SCUT\_DLVClab1 \cite{MLT-Challenge} &80.3 &54.5 &65.0 \tabularnewline \hline e2e\_ctc01\_multi\_scale \cite{MLT-Challenge} & 79.8 & 61.2 & 69.3 \tabularnewline \hline \textbf{ours} & \textbf{83.8} & 55.6 & 66.8 \tabularnewline \hline \textbf{ours}$^*$ & 74.3 & \textbf{70.6} & \textbf{72.4} \tabularnewline \hline \end{tabular} \par\end{centering} \caption{Results on MLT. $^*$ means multi-scale.} \label{tab_mlt} \vspace{-3mm} \end{table} \subsection{Generalization Ability} To evaluate the generalization ability of our model, we test it on COCO-Text using the model finetuned on ICDAR2015. We set the test image size to $768\times 768$. For the sake of fairness, we use the V1.1 annotations to compare with other methods. The results are shown in Table \ref{tab_coco}. \textbf{Without training on COCO-Text}, our method achieves an F-measure of $42.5\%$, better than its competitors. Besides, we also evaluate our model on the ICDAR2017 Robust Reading Challenge on COCO-Text \cite{COCO-Text-Challenge} with the V1.4 annotations. These results are also reported in Table \ref{tab_coco}. Among all the public results on the leaderboard \cite{COCO-Text-Challenge}, our method ranks first. In particular, when the IoU threshold is set to 0.75, our method exceeds the others by a large margin, which shows that it localizes text more accurately.
\begin{table} \small \begin{centering} \begin{tabular}{|c|c|c|c|} \hline \textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \tabularnewline \hline \hline Baseline A \cite{veit2016coco} & 83.8 & 23.3 & 36.5\tabularnewline \hline Baseline B \cite{veit2016coco} & \textbf{89.7} & 10.7 & 19.1 \tabularnewline \hline Baseline C \cite{veit2016coco} & 18.6 & 4.7 & 7.5 \tabularnewline \hline Yao \emph{et al.} \cite{yao2016scene} & 43.2 & 27.1 & 33.3 \tabularnewline \hline EAST \cite{Zhou_2017_CVPR} & 50.4 & \textbf{32.4} & 39.5 \tabularnewline \hline WordSup \cite{Hu_2017_ICCV} & 45.2 & 30.9 & 36.8 \tabularnewline \hline SSTD \cite{SSTD} & 46.0 & 31.0 & 37.0 \tabularnewline \hline \textbf{ours} &69.9 & 26.2 & 38.1 \tabularnewline \hline \textbf{ours}$^*$ & 61.9 & \textbf{32.4} & \textbf{42.5} \tabularnewline \hline \multicolumn{4}{|c|}{COCO-Text Challenge (IOU 0.5)}\tabularnewline \hline UM \cite{COCO-Text-Challenge} & 47.6 & \textbf{65.5} & 55.1 \tabularnewline \hline TDN\_SJTU\_v2 \cite{COCO-Text-Challenge} & 62.4 & 54.3 & 58.1 \tabularnewline \hline Text\_Detection\_DL \cite{COCO-Text-Challenge} & 60.1 & 61.8 & 61.4 \tabularnewline \hline \textbf{ours} & \textbf{72.5} & 52.9 & 61.1 \tabularnewline \hline \textbf{ours}$^*$ & 62.9 & 62.2 & \textbf{62.6} \tabularnewline \hline \multicolumn{4}{|c|}{COCO-Text Challenge (IOU 0.75)}\tabularnewline \hline Text\_Detection\_DL \cite{COCO-Text-Challenge} & 25.2 & 25.5 & 25.4 \tabularnewline \hline UM \cite{COCO-Text-Challenge} & 22.7 & 31.2 & 26.3 \tabularnewline \hline TDN\_SJTU\_v2 \cite{COCO-Text-Challenge} & 31.8 & 27.7 & 29.6 \tabularnewline \hline \textbf{ours} & \textbf{40.0} & 30.0 & 34.6 \tabularnewline \hline \textbf{ours}$^*$ & 35.1 & \textbf{34.8} & \textbf{34.9} \tabularnewline \hline \end{tabular} \par\end{centering} \caption{Results on COCO-Text. 
$^*$ means multi-scale.} \label{tab_coco} \vspace{-3mm} \end{table} \vspace{-0.1cm} \subsection{Limitations} \vspace{-0.1cm} One limitation of the proposed method is that when two text instances are extremely close, it may predict them as a single instance (Fig. \ref{img_limit}), since the position-sensitive segmentation might fail. Besides, the method is not good at detecting curved text (Fig. \ref{img_limit}), as there are few curved samples in the training set. \begin{figure} \begin{centering} \includegraphics[scale=0.25]{imgs/limitations2} \par\end{centering} \caption{Failure cases of our method. The green boxes are ground truth; the red boxes are our predictions.} \label{img_limit} \vspace{-4mm} \end{figure} \vspace{-0.1cm} \section{Conclusion} \vspace{-0.1cm} In this paper, we have presented a scene text detector that localizes text via corner point detection and position-sensitive segmentation. We evaluated it on several public benchmarks covering oriented, horizontal, long oriented and multi-lingual text. The superior performance demonstrates the effectiveness and robustness of our method. In the future, we are interested in constructing an end-to-end OCR system based on the proposed method. \vspace{-1mm} { \small \bibliographystyle{ieee}
\section{Introduction}\label{sec:intro} The empirical success of deep learning has thus far eluded interpretation through existing lenses of computational complexity \citep{Blum1988TrainingA3}, numerical optimization \citep{choromanska2015loss, Goodfellow2014QualitativelyCN, Dauphin2014IdentifyingAA} and classical statistical learning theory \citep{understanding_dl}: neural networks are highly non-convex models with extreme capacity that train fast and generalize well. In fact, not only do large networks demonstrate good test performance, but \textit{larger} networks often generalize \textit{better}, counter to what would be expected from classical measures, such as VC dimension. This phenomenon has been observed in targeted experiments \citep{Neyshabur2014InSO}, historical trends of Deep Learning competitions \citep{Canziani2016AnAO}, and in the course of this work (Figure \ref{fig:num_weights}). This observation is at odds with Occam's razor, the principle of parsimony, as applied to the intuitive notion of function complexity (see \sref{sec:occam} for extended discussion). One resolution of the apparent contradiction is to examine the complexity of functions in conjunction with the input domain. $f(x) = x^3 \sin(x)$ may seem decisively more complex than $g(x) = x$. But restricted to a narrow input domain of $\left[-0.01,0.01\right]$ they appear rather different: $g$ remains a linear function of the input, while $f(x) = \mathcal{O}\left(x^4\right)$ resembles the constant $0$. In this work we find that such intuition applies to neural networks, which behave very differently close to the data manifold than away from it (\sref{sec:Sensitivity On and Off the Data Manifold}). We therefore analyze the complexity of models through their capacity to distinguish different inputs in the neighborhood of datapoints, or, in other words, through their sensitivity.
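The $f$ versus $g$ comparison above is easy to verify numerically (a small illustration of ours, not from the paper):

```python
import numpy as np

# Restricted to [-0.01, 0.01], f(x) = x^3 sin(x) is numerically
# indistinguishable from the constant 0 (max |f| is on the order of 1e-8),
# while g(x) = x still varies over the full width of the domain.
x = np.linspace(-0.01, 0.01, 10001)
f_max = np.abs(x**3 * np.sin(x)).max()   # ~ (0.01)^3 * sin(0.01) ~ 1e-8
g_max = np.abs(x).max()                  # 0.01
```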
We study two simple metrics presented in \sref{sec:Sensitivity Measures} and find that one of them, the norm of the input-output Jacobian, correlates with generalization in a very wide variety of scenarios. This work considers sensitivity only in the context of image classification tasks. We interpret the observed correlation with generalization as an expression of a universal prior on (natural) image classification functions that favors robustness (see \sref{sec:occam} for details). While we expect a similar prior to exist in many other perceptual settings, care should be taken when extrapolating our findings to tasks where such a prior may not be justified (e.g. weather forecasting). \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth, trim={0 0 0 1cm}, clip]{plots/gap_to_params.pdf} \includegraphics[width=0.49\textwidth, trim={0 0 0 1cm}, clip]{plots/gap_to_train_loss.pdf} \caption{$2160$ networks trained to 100\% training accuracy on CIFAR10 (see \sref{app:Sensitivity and Generalization} for experimental details). \textbf{Left}: while increasing capacity of the model allows for overfitting (top), very few models do, and a model with the maximum parameter count yields the best generalization (bottom right). \textbf{Right}: train loss does not correlate well with generalization, and the best model (minimum along the $y$-axis) has training loss many orders of magnitude higher than models that generalize worse (left). This observation rules out underfitting as the reason for poor generalization in low-capacity models. See \citep{Neyshabur2014InSO} for similar findings in the case of achievable $0$ training loss.} \label{fig:num_weights} \end{figure} \subsection{Paper Outline} We first define sensitivity metrics for fully-connected neural networks in \sref{sec:Sensitivity Measures}.
We then relate them to generalization through a sequence of experiments of increasing level of nuance: \begin{itemize} \item In \sref{sec:Sensitivity On and Off the Data Manifold} we begin by comparing the sensitivity of trained neural networks on and off the training data manifold, i.e. in the regions of best and typical (over the whole input space) generalization. \item In \sref{sec:Sensitivity and Generalization Factors} we compare sensitivity of identical trained networks that differ in a single hyper-parameter which is important for generalization. \item Further, \sref{sec:Sensitivity and Generalization Gap} associates sensitivity and generalization in an unrestricted manner, i.e. comparing networks of a wide variety of hyper-parameters such as width, depth, non-linearity, weight initialization, optimizer, learning rate and batch size. \item Finally, \sref{sec:Sensitivity and Per-Point Generalization} explores how predictive sensitivity (as measured by the Jacobian norm) is for individual test points. \end{itemize} \subsection{Summary of Contributions} The novelty of this work can be summarized as follows: \begin{itemize} \item Study of the behavior of trained neural networks on and off the data manifold through sensitivity metrics (\sref{sec:Sensitivity On and Off the Data Manifold}). \item Evaluation of sensitivity metrics on trained neural networks in a very large-scale experimental setting and finding that they correlate with generalization (\sref{sec:Sensitivity and Generalization Factors}, \sref{sec:Sensitivity and Generalization Gap}, \sref{sec:Sensitivity and Per-Point Generalization}). \end{itemize} \sref{sec:Related Work} puts our work in context of related research studying complexity, generalization, or sensitivity metrics similar to ours. 
\section{Related Work}\label{sec:Related Work} \subsection{Complexity Metrics} We analyze complexity of fully-connected neural networks for the purpose of model comparison through the following sensitivity measures (see \sref{sec:Sensitivity Measures} for details): \begin{itemize} \item estimating the number of linear regions a network splits the input space into; \item measuring the norm of the input-output Jacobian within such regions. \end{itemize} A few prior works have examined measures related to the ones we consider. In particular, \cite{response_regions, linear_regions, on_the_expressive_power} have investigated the expressive power of fully-connected neural networks built out of piecewise-linear activation functions. Such functions are themselves piecewise-linear over their input domain, so that the number of linear regions into which input space is divided is one measure of how nonlinear the function is. A function with many linear regions has the capacity to build complex, flexible decision boundaries. It was argued in \citep{response_regions, linear_regions} that an upper bound to the number of linear regions scales exponentially with depth but polynomially in width, and a specific construction was examined. \cite{on_the_expressive_power} derived a tight analytic bound and considered the number of linear regions for generic networks with random weights, as would be appropriate, for instance, at initialization. However, the evolution of this measure after training has not been investigated before. We examine a related measure, the number of hidden unit transitions along one-dimensional trajectories in input space, for trained networks. Further motivation for this measure is discussed in \sref{sec:Sensitivity Measures}. Another perspective on function complexity can be gained by studying their robustness to perturbations to the input. 
Indeed, \cite{Rasmussen2000OccamsR} demonstrate on a toy problem how complexity as measured by the number of parameters may be of limited utility for model selection, while measuring the output variation allows the invocation of Occam's razor. In this work we apply related ideas to a large-scale practical context of neural networks with up to a billion free parameters (\sref{sec:Sensitivity and Generalization Factors}, \sref{sec:Sensitivity and Generalization Gap}) and discuss potential ways in which sensitivity permits the application of Occam's razor to neural networks (\sref{sec:occam}). \cite{sokolic2017robust} provide theoretical support for the relevance of robustness, as measured by the input-output Jacobian, to generalization. They derive bounds for the generalization gap in terms of the Jacobian norm within the framework of algorithmic robustness \citep{xu2012robustness}. Our results provide empirical support for their conclusions through an extensive number of experiments. In a similar spirit \cite{zahavy2018ensemble} propose a different sensitivity measure in terms of adversarial robustness and relate it to generalization of stochastic algorithms theoretically and experimentally. Several other recent papers have also focused on deriving tight generalization bounds for neural networks \citep{bartlett2017spectrally, dziugaite2017computing, neyshabur2018a}. We do not propose theoretical bounds in this paper but establish a correlation between our metrics and generalization in a substantially larger experimental setting than undertaken in prior works. 
\subsection{Regularization} In the context of regularization, increasing robustness to perturbations is a widely-used strategy: data augmentation, noise injection \citep{jiang2009study}, weight decay \citep{krogh1992simple}, and max-pooling all indirectly reduce sensitivity of the model to perturbations, while \cite{rifai2011contractive, sokolic2017robust} explicitly penalize the Frobenius norm of the Jacobian in the training objective. In this work we relate several of the above mentioned regularizing techniques to sensitivity, demonstrating through extensive experiments that improved generalization is consistently coupled with better robustness as measured by a single metric, the input-output Jacobian norm (\sref{sec:Sensitivity and Generalization Factors}). While some of these findings confirm common-sense expectations (random labels increase sensitivity, Figure \ref{fig:generalization_factors}, top row), others challenge our intuition of what makes a neural network robust (ReLU-networks, with unbounded activations, tend to be more robust than saturating HardSigmoid-networks, Figure \ref{fig:generalization_factors}, third row). \subsection{Inductive Bias of SGD} One of our findings demonstrates an inductive bias towards robustness in stochastic mini-batch optimization compared to full-batch training (Figure \ref{fig:generalization_factors}, bottom row). Interpreting this regularizing effect in terms of some measure of sensitivity, such as curvature, is not new \citep{hochreiter1997flat, keskar2016large}, yet we provide a novel perspective by relating it to reduced sensitivity to {\em inputs} instead of parameters. 
The inductive bias of SGD (``implicit regularization'') has been previously studied in \citep{Neyshabur2014InSO}, where it was shown through rigorous experiments how increasing the width of a single-hidden-layer network improves generalization, and an analogy with matrix factorization was drawn to motivate constraining the norm of the weights instead of their count. \cite{Neyshabur2017ExploringGI} further explored several weight-norm based measures of complexity that do not scale with the size of the model. One of our measures, the Frobenius norm of the Jacobian is of similar nature (since the Jacobian matrix size is determined by the task and not by a particular network architecture). However, this particular metric was not considered, and, to the best of our knowledge, we are the first to evaluate it in a large-scale setting (e.g. our networks are up to $65$ layers deep and up to $2^{16}$ units wide). \subsection{Adversarial Examples} Sensitivity to inputs has attracted a lot of interest in the context of adversarial examples \citep{szegedy2013intriguing}. Several attacks locate points of poor generalization in the directions of high sensitivity of the network \citep{goodfellow2014explaining, papernot2016limitations, moosavi2016universal}, while certain defences regularize the model by penalizing sensitivity \citep{gu2014towards} or employing decaying (hence more robust) non-linearities \citep{kurakin2016adversarial}. However, work on adversarial examples relates highly specific perturbations to a similarly specific kind of generalization (i.e. performance on a very small, adversarial subset of the data manifold), while this paper analyzes \textit{average-case} sensitivity (\sref{sec:Sensitivity Measures}) and \textit{typical} generalization. Certain measures of adversarial robustness have nevertheless been recently observed to correlate with generalization \citep{zahavy2018ensemble, gilmer2018adversarial, dogus2018intriguing}. 
\section{Sensitivity Metrics}\label{sec:Sensitivity Measures} We propose two simple measures of sensitivity for a fully-connected neural network (without biases) ${\bf f}: \mathbb{R}^d \to \mathbb{R}^n$ with respect to its input ${\bf x} \in \mathbb{R}^d$ (the output being unnormalized logits of the $n$ classes). Assume ${\bf f}$ employs a piecewise-linear activation function, like ReLU. Then $\bf f$ itself, as a composition of linear and piecewise-linear functions, is a piecewise-linear map, splitting the input space $\mathbb{R}^d$ into disjoint regions, implementing a single affine mapping on each. Then we can measure two aspects of sensitivity by answering \begin{enumerate} \item How does the output of the network change as the input is perturbed within the linear region? \item How likely is the linear region to change in response to change in the input? \end{enumerate} We quantify these qualities as follows: \begin{enumerate} \item \label{ssec:Sensitivity Measures (Jacobian)} For a local sensitivity measure we adopt the Frobenius norm of the class probabilities Jacobian ${\mathbf J}({\bf x}) = \partial {\bf f}_{\sigma}\left({\bf x}\right)/\partial {\bf x^T}$ (with $J_{ij}(\mathbf x) = \partial\left[{\bf f}_{\sigma}\left(\mathbf x\right)\right]_i/\partial {x_j}$), where ${\bf f}_{\sigma} = \bm{\sigma} \circ {\bf f}$ with $\bm{\sigma}$ being the softmax function\footnote{The norm of the Jacobian with respect to logits $\left(\partial {\bf f}\left({\bf x}\right)/\partial {\bf x^T}\right)$ experimentally turned out less predictive of test performance (not shown). See \sref{app:bounding} for discussion of why the softmax Jacobian is related to generalization.}. Given points of interest ${\bf x}_{\textrm{test}}$, we estimate the sensitivity of the function in those regions with the average Jacobian norm: $$\expect{{\bf x}_{\textrm{test}}}{ \left\|\mathbf J\left({\bf x_{\textrm{test}}}\right)\right\|_{F}},$$ that we will further refer to as simply ``Jacobian norm''. 
Note that this does not require the labels for ${\bf x}_{\textrm{test}}$. \textbf{Interpretation}. The Frobenius norm $\left\|\mathbf J({\bf x})\right\|_{F} = \sqrt{\sum_{ij}J_{ij}({\bf x})^2}$ estimates the average-case sensitivity of ${\bf f}_{\sigma}$ around ${\bf x}$. Indeed, consider an infinitesimal Gaussian perturbation $\Delta{\bf x} \sim \mathcal{N}\left({\bf 0}, \epsilon \mathbf I\right)$: the expected magnitude of the output change is then \begin{align*} \expect{\Delta{\bf x}}{\left\|{\bf f}_{\sigma}\left({\bf x}\right)-{\bf f}_{\sigma}\left({\bf x} + \Delta{\bf x}\right)\right\|_2^2} & \approx \expect{\Delta{\bf x}}{\left\| \mathbf J({\bf x})\Delta{\bf x}\right\|_2^2} = \mathbb{E}_{\Delta{\bf x}}\Big[\sum_i \Big(\sum_j J_{ij} \Delta x_j \Big)^2\Big] \\ &= \sum_{ijj'} J_{ij} J_{ij'} \expect{\Delta{\bf x}}{\Delta x_j \Delta x_{j'}} = \sum_{ij} J_{ij}^2 \expect{\Delta{\bf x}}{\Delta x_j^2} \\ & = \epsilon\left\|\mathbf J\left({\bf x}\right)\right\|_{F}^2. \end{align*} \item \label{ssec:Sensitivity Measures (Transitions)} To detect a change in linear region (further called a ``transition''), we need to be able to identify it. We do this analogously to \cite{on_the_expressive_power}. For a network with piecewise-linear activations, we can, given an input ${\bf x}$, assign a code to each neuron in the network ${\bf f}$, that identifies the linear region of the pre-activation of that neuron. E.g. each ReLU unit will have $0$ or $1$ assigned to it if the pre-activation value is less than or greater than $0$, respectively. Similarly, a ReLU6 unit (see definition in \sref{app:Non-linearities}) will have a code of $0$, $1$, or $2$ assigned, since it has $3$ linear regions\footnote{For a non-piecewise-linear activation like Tanh, we consider $0$ as the boundary of two regions and find this metric qualitatively similar to counting transitions of a piecewise-linear non-linearity.}.
Then, a concatenation of codes of all neurons in the network (denoted by ${\bf c}({\bf x})$) uniquely identifies the linear region of the input ${\bf x}$ (see \sref{app:Linear Region Encoding} for discussion of edge cases). Given this encoding scheme, we can detect a transition by detecting a change in the code. We then sample $k$ equidistant points ${\bf z}_0,\dots,{\bf z}_{k-1}$ on a closed one-dimensional trajectory $\mathcal{T}({\bf x})$ (generated from a data point $\mathbf x$ and lying close to the data manifold; see below for details) and count transitions $t(\mathbf x)$ along it to quantify the number of linear regions: \begin{equation}\label{eq:transitions} t({\bf x}) \coloneqq \sum_{i=0}^{k-1} \left\|{\bf c}\left({\bf z}_i\right) - {\bf c}\left({\bf z}_{(i + 1) \% k}\right) \right\|_1 \approx \oint_{\bf z \in \mathcal{T}({\bf x})} \left\|\frac{\partial {\bf c}({\bf z})}{\partial \left(d{\bf z}\right)}\right\|_1 d{\bf z}, \end{equation} where the norm of the directional derivative $\left\|\partial {\bf c}({\bf z})/\partial \left(d{\bf z}\right)\right\|_1$ amounts to a Dirac delta function at each transition (see \sref{app:Counting} for further details). By sampling multiple such trajectories around different points, we estimate the sensitivity metric: $$\expect{{\bf x}_{\textrm{test}}}{t\left({\bf x}_{\textrm{test}}\right)},$$ that we will further refer to as simply ``transitions'' or ``number of transitions.'' To assure the sampling trajectory $\mathcal{T}({\bf x}_{\textrm{test}})$ is close to the data manifold (since this is the region of interest), we construct it through horizontal translations of the image ${\bf x}_{\textrm{test}}$ in pixel space (Figure \ref{fig:translation_trajectory}, right). We similarly augment our training data with horizontal and vertical translations in the corresponding experiments (Figure \ref{fig:generalization_factors}, second row). As earlier, this metric does not require knowing the label of ${\bf x}_{\textrm{test}}$. 
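The encoding-and-counting scheme above can be sketched as follows (a minimal illustration under assumed toy sizes, not the paper's implementation; the "image" and network are random stand-ins, and the trajectory is built from horizontal `np.roll` translations as described in the text):

```python
import numpy as np

def relu_codes(x, weights):
    """Concatenated 0/1 codes c(x) of all hidden ReLU units of a bias-free net."""
    h, codes = x, []
    for W in weights[:-1]:          # the final layer outputs logits, no ReLU
        pre = W @ h
        codes.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return np.concatenate(codes).astype(int)

def count_transitions(trajectory, weights):
    """t(x): summed Hamming distance between codes of consecutive samples
    on a closed trajectory, as in the transition count defined above."""
    c = [relu_codes(z, weights) for z in trajectory]
    k = len(c)
    return sum(int(np.abs(c[i] - c[(i + 1) % k]).sum()) for i in range(k))

# Hypothetical trajectory: horizontal pixel-space translations of a flattened "image".
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
trajectory = [np.roll(img, shift, axis=1).ravel() for shift in range(8)]
weights = [rng.normal(size=(16, 64)) / 8.0, rng.normal(size=(10, 16)) / 4.0]
print(count_transitions(trajectory, weights))
```

With only $k=8$ samples this is a coarse estimate; a dense sampling of the trajectory is needed for the sum to approximate the contour integral.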
\textbf{Interpretation.} We can draw a qualitative parallel between the number of transitions and curvature of the function. One measure of curvature of a function $\bf f$ is the total norm of the directional derivative of its first derivative $\bf f'$ along a path: $$ C\left(\bf f, \mathcal{T}\left(\bf x\right)\right):=\oint_{\bf z \in \mathcal{T}({\bf x})} \left\|\pd{{\mathbf f'}\left({\bf z}\right)}{\left(d{\bf z}\right)}\right\|_F d{\bf z}. $$ A piecewise-linear function $\bf f$ has a constant first derivative $\bf f'$ everywhere except for the transition boundaries. Therefore, for a sufficiently large $k$, curvature can be expressed as $$ C\left(\bf f, \mathcal{T}\left(\bf x\right)\right)=\frac{1}{2}\sum_{i=0}^{k-1} \left\|{\mathbf f'}\left({\bf z}_i\right) - {\mathbf f'}\left({\bf z}_{\left(i+1\right)\% k}\right)\right\|_F, $$ where ${\bf z}_0,\dots, {\bf z}_{k-1}$ are equidistant samples on $\mathcal{T}\left(\bf x\right)$. This sum is similar to $t({\bf x})$ as defined in Equation \ref{eq:transitions}, but quantifies the amount of change in between two linear regions in a non-binary way. However, estimating it on a densely sampled trajectory is a computationally-intensive task, which is one reason we instead count transitions. \end{enumerate} As such, on a qualitative level, the two metrics (Jacobian norm and number of transitions) track the first and second order terms of the Taylor expansion of the function. Above we have defined two sensitivity metrics to describe the learned function around the data, on average. In \sref{sec:Sensitivity On and Off the Data Manifold} we analyze these measures on and off the data manifold by simply measuring them along circular trajectories in input space that intersect the data manifold at certain points, but generally lie away from it (Figure \ref{fig:density}, left). 
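To make such trajectories concrete, here is one simple construction of a closed ellipse through three given points (a sketch; the paper does not specify its exact parametrization, and the anchor angles here are $0$, $2\pi/3$, $4\pi/3$ rather than the $\pi/3$, $\pi$, $5\pi/3$ shown in the figure). It is the affine image of the circle through the vertices of an equilateral triangle:

```python
import numpy as np

def ellipse_through(x1, x2, x3, k=99):
    """Closed elliptic trajectory hitting x1, x2, x3 at angles 0, 2*pi/3, 4*pi/3,
    sampled at k equidistant angles."""
    ts = np.linspace(0.0, 2 * np.pi, k, endpoint=False)
    pts = [(x1 * (1 + 2 * np.cos(t))
            + x2 * (1 - np.cos(t) + np.sqrt(3) * np.sin(t))
            + x3 * (1 - np.cos(t) - np.sqrt(3) * np.sin(t))) / 3.0
           for t in ts]
    return np.stack(pts)

rng = np.random.default_rng(0)
x1, x2, x3 = rng.normal(size=(3, 784))   # stand-ins for three flattened images
traj = ellipse_through(x1, x2, x3, k=99)  # hits x1, x2, x3 at indices 0, 33, 66
print(traj.shape)
```

Sensitivity metrics evaluated along `traj` can then be compared near the anchors (on the data) versus between them (off the data).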
\section{Experimental Results}\label{sec:experimental_results} In the following subsections (\sref{sec:Sensitivity and Generalization Factors}, \sref{sec:Sensitivity and Generalization Gap}) each study analyzes the performance of a large number (usually thousands) of fully-connected neural networks having different hyper-parameters and optimization procedures. Except where specified, we include only models which achieve $100\%$ training accuracy. This allows us to study generalization disentangled from properties like expressivity and trainability, which are outside the scope of this work. In order to efficiently evaluate the compute-intensive metrics (\sref{sec:Sensitivity Measures}) in a very wide range of hyper-parameter settings (see e.g. \sref{app:Sensitivity and Generalization}) we only consider fully-connected networks. Extending the investigation to more complex architectures is left for future work. \subsection{Sensitivity On and Off the Training Data Manifold}\label{sec:Sensitivity On and Off the Data Manifold} We analyze the behavior of a trained neural network near and away from training data. We do this by comparing sensitivity of the function along 3 types of trajectories: \begin{enumerate} \item A random ellipse. This trajectory is extremely unlikely to pass anywhere near the real data, and indicates how the function behaves in random locations of the input space that it never encountered during training. \item An ellipse passing through three training points of different classes (Figure \ref{fig:density}, left). This trajectory does pass through the three data points, but in between it traverses images that are linear combinations of different-class images, and are expected to lie outside of the natural image space. Sensitivity of the function along this trajectory allows comparison of its behavior on and off the data manifold, as it approaches and moves away from the three anchor points. \item An ellipse through three training points of the same class.
This trajectory is similar to the previous one, but, given the dataset used in the experiment (MNIST), is expected to traverse overall closer to the data manifold, since linear combinations of the same digit are more likely to resemble a realistic image. Comparing transition density along this trajectory to the one through points of different classes allows further assessment of how sensitivity changes in response to approaching the data manifold. \end{enumerate} We find that, according to both the Jacobian norm and transitions metrics, functions exhibit much more robust behavior around the training data (Figure \ref{fig:density}, center and right). We further visualize this effect in 2D in Figure \ref{fig:boundaries}, where we plot the transition boundaries of the last (pre-logit) layer of a neural network before and after training. After training we observe that training points lie in regions of low transition density. The observed contrast between the neural network behavior near and away from data further strengthens the empirical connection we draw between sensitivity and generalization in \sref{sec:Sensitivity and Generalization Factors}, \sref{sec:Sensitivity and Generalization Gap} and \sref{sec:Sensitivity and Per-Point Generalization}; it also confirms that, as mentioned in \sref{sec:intro}, if a certain quality of a function is to be used for model comparison, input domain should always be accounted for. \begin{figure} \centering \includegraphics[width=\textwidth, trim={0 0 0 1.4cm}, clip ]{images/trajectories_figures.pdf} \caption{ A 100\%-accurate (on training data) MNIST network implements a function that is much more stable near training data than away from it. \textbf{Left}: depiction of a hypothetical circular trajectory in input space passing through three digits of different classes, highlighting the training point locations ($\pi/3$, $\pi$, $5\pi/3$). \textbf{Center}: Jacobian norm as the input traverses an elliptical trajectory. 
Sensitivity drops significantly in the vicinity of training data while remaining uniform along random ellipses. \textbf{Right}: transition density behaves analogously. According to both metrics, as the input moves between points of different classes, the function becomes less stable than when it moves between points of the same class. This is consistent with the intuition that linear combinations of different digits lie further from the data manifold than those of same-class digits (which need not hold for more complex datasets). See \sref{app:Sensitivity along a Trajectory} for experimental details.} \label{fig:density} \end{figure} \begin{figure} \centering \hfill Before Training \hfill\hfill After Training \hfill\hfill \includegraphics[width=0.49\textwidth]{images/lr_before.png} \includegraphics[width=0.49\textwidth]{images/lr_after.png} \caption{Transition boundaries of the last (pre-logits) layer over a 2-dimensional slice through the input space defined by 3 training points (indicated by inset squares). \textbf{Left}: boundaries before training. \textbf{Right}: after training, transition boundaries become highly non-isotropic, with training points lying in regions of lower transition density. See \sref{app:Linear Region Boundaries Boundaries} for experimental details.} \label{fig:boundaries} \end{figure} \subsection{Sensitivity and Generalization Factors}\label{sec:Sensitivity and Generalization Factors} In \sref{sec:Sensitivity On and Off the Data Manifold} we established that neural networks implement more robust functions in the vicinity of the training data manifold than away from it. We now consider the more practical context of model selection. Given two perfectly trained neural networks, does the model with better generalization implement a less sensitive function? 
We study approaches in the machine learning community that are commonly believed to influence generalization (Figure \ref{fig:generalization_factors}, top to bottom): \begin{itemize} \item random labels; \item data augmentation; \item ReLUs; \item full-batch training. \end{itemize} We find that in each case, the change in generalization is coupled with the respective change in sensitivity (i.e. lower sensitivity corresponds to smaller generalization gap) as measured by the Jacobian norm (and almost always for the transitions metric). \begin{figure} \newcommand\dplot[1]{\includegraphics[width=0.32\textwidth, trim={0 0 0 0.5cm}, clip]{#1}} \centering \begin{tabularx}{\textwidth}{ c c c } \textcolor{generalization}{Generalization Gap} & \textcolor{jacobian_norm}{Jacobian norm} & \textcolor{transitions}{Transitions} \\ \dplot{plots/gen_random.pdf} & \dplot{plots/jacobian_random.pdf} & \dplot{plots/transitions_random.pdf} \\ \dplot{plots/gen_augment.pdf} & \dplot{plots/jacobian_augment.pdf} & \dplot{plots/transitions_augment.pdf} \\ \dplot{plots/gen_relu_hardsigmoid.pdf} & \dplot{plots/jacobian_relu_hardsigmoid.pdf} & \dplot{plots/transitions_relu_hardsigmoid.pdf} \\ \dplot{plots/gen_momentum_lbfgs.pdf} & \dplot{plots/jacobian_momentum_lbfgs.pdf} & \dplot{plots/transitions_momentum_lbfgs.pdf} \end{tabularx} \caption{Improvement in \textcolor{generalization}{generalization} (left column) due to using correct labels, data augmentation, ReLUs, mini-batch optimization (top to bottom) is consistently coupled with reduced sensitivity as measured by the \textcolor{jacobian_norm}{Jacobian norm} (center column). \textcolor{transitions}{Transitions} (right column) correlate with generalization in all considered scenarios except for comparing optimizers (bottom right). Each point on the plot corresponds to two neural networks that share all hyper-parameters and the same optimization procedure, but differ in a certain property as indicated by axes titles. 
The coordinates along each axis reflect the values of the quantity in the title of the plot in the respective setting (i.e. with true or random labels). All networks have reached $100\%$ training accuracy on CIFAR10 in both settings (except for the data-augmentation study, second row; see \sref{app:Sensitivity and Generalization Factors} for details). See \sref{app:Sensitivity and Generalization} for experimental details (\sref{app:Sensitivity and Generalization Factors} for the data-augmentation study) and \sref{sec:How to Read Plots} for plot interpretation.} \label{fig:generalization_factors} \end{figure} \subsubsection{How to Read Plots}\label{sec:How to Read Plots} In Figure \ref{fig:generalization_factors}, for many possible hyper-parameter configurations, we train two models that share all parameters and optimization procedure, but differ in a single binary setting (i.e. trained on true or random labels; with or without data augmentation; etc). Out of all such network pairs, we select only those where each network reached 100\% training accuracy on the whole training set (apart from the data augmentation study). The two generalization or sensitivity values are then used as the $x$ and $y$ coordinates of a point corresponding to this pair of networks (with the plot axes labels denoting the respective value of the binary parameter considered). The position of the point with respect to the diagonal $y = x$ visually demonstrates which configuration has smaller generalization gap / lower sensitivity. \subsection{Sensitivity and Generalization Gap}\label{sec:Sensitivity and Generalization Gap} We now perform a large-scale experiment to establish direct relationships between sensitivity and generalization in a realistic setting. 
In contrast to \sref{sec:Sensitivity On and Off the Data Manifold}, where we selected locations in the input space, and \sref{sec:Sensitivity and Generalization Factors}, where we varied a single binary parameter impacting generalization, we now sweep simultaneously over many different architectural and optimization choices (\sref{app:Sensitivity and Generalization}). Our main result is presented in Figure \ref{fig:cifar10_cifar100_jacobian}, indicating a strong relationship between the Jacobian norm and generalization. In contrast, Figure \ref{fig:cifar10_cifar100_transitions} demonstrates that the number of transitions is not alone sufficient to compare networks of different sizes, as the number of neurons in the networks has a strong influence on transition count. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth, trim={0 0.5cm 0 0}]{plots/cifar10_jacobian_narrow.pdf} \includegraphics[width=0.49\textwidth, trim={0 0.5cm 0 0}]{plots/cifar100_jacobian_narrow.pdf} \includegraphics[width=0.49\textwidth, trim={0 0.5cm 0 0}]{plots/mnist_jacobian_narrow.pdf} \includegraphics[width=0.49\textwidth, trim={0 0.5cm 0 0}]{plots/fashion_mnist_jacobian_narrow.pdf} Jacobian norm \caption{Jacobian norm correlates with generalization gap on all considered datasets. Each point corresponds to a network trained to $100$\% training accuracy (or at least $99.9$\% in the case of CIFAR100). See \sref{app:Sensitivity and Generalization Factors} and \sref{app:Sensitivity and Generalization} for experimental details of bottom and top plots respectively.} \label{fig:cifar10_cifar100_jacobian} \end{figure} \clearpage \subsection{Sensitivity and Per-Point Generalization}\label{sec:Sensitivity and Per-Point Generalization} In \sref{sec:Sensitivity and Generalization Gap} we established a correlation between sensitivity (as measured by the Jacobian norm) and generalization averaged over a large test set ($10^4$ points). 
We now investigate whether the Jacobian norm can be predictive of generalization at individual points. As demonstrated in Figure \ref{fig:per_point_jacobian} (top), Jacobian norm at a point is predictive of the cross-entropy loss, but the relationship is not a linear one, and not even bijective (see \sref{app:bounding} for analytic expressions explaining it). In particular, certain misclassified points (right sides of the plots) have a Jacobian norm many orders of magnitude smaller than that of the correctly classified points (left sides). However, we do observe a consistent tendency for points having the highest values of the Jacobian norm to be mostly misclassified. A similar yet noisier trend is observed in networks trained using $\ell_2$-loss as depicted in Figure \ref{fig:per_point_jacobian} (bottom). These observations make the Jacobian norm a promising quantity to consider in the contexts of active learning and confidence estimation in future research. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth, trim={0 1.5cm 0 0}]{plots/per_sample_loss_concat_VS_jacobian_norm_concat_on_MNIST_cross_entropy_narrow.pdf} \includegraphics[width=0.49\textwidth, trim={0 1.5cm 0 0}]{plots/per_sample_loss_concat_VS_jacobian_norm_concat_on_CIFAR10_cross_entropy_narrow.pdf} Cross-entropy loss \vspace{0.5cm} \includegraphics[width=0.49\textwidth, trim={0 1.5cm 0 2cm}, clip]{plots/per_sample_loss_concat_VS_jacobian_norm_concat_on_MNIST_l2_narrow.pdf} \includegraphics[width=0.49\textwidth, trim={0 1.5cm 0 2cm}, clip]{plots/per_sample_loss_concat_VS_jacobian_norm_concat_on_CIFAR10_l2_narrow.pdf} $\ell_2$-loss \caption{Jacobian norm plotted against individual test point loss. Each plot shows 5 random networks that fit the respective training set with $100\%$ accuracy, with each network having a unique color. {\bf Top:} Jacobian norm plotted against cross-entropy loss.
These plots experimentally confirm the relationship established in \sref{app:bounding} and Figure \ref{fig:per_point_jacobian_i}. {\bf Bottom:} Jacobian norm plotted against $\ell_2$-loss, for networks trained on $\ell_2$-loss, exhibits a similar behavior. See \sref{app: Per-point Generalization} for experimental details and Figure \ref{fig:per_point_jacobian_app} for similar observations on other datasets.} \label{fig:per_point_jacobian} \end{figure} \section{Conclusion}\label{sec:Conclusion} We have investigated sensitivity of trained neural networks through the input-output Jacobian norm and linear region counting in the context of image classification tasks. We have presented extensive experimental evidence indicating that the local geometry of the trained function as captured by the input-output Jacobian can be predictive of generalization in many different contexts, and that it varies drastically depending on how close to the training data manifold the function is evaluated. We further established a connection between the cross-entropy loss and the Jacobian norm, indicating that it can remain informative of generalization even at the level of individual test points. Interesting directions for future work include extending our investigation to more complex architectures and other machine learning tasks. \newpage
\section{Introduction} In this paper we study the existence of non-negative solutions of a second order semi-linear problem posed on an $n$-dimensional Riemannian manifold $M$. In particular we consider the problem \begin{equation}\label{eq:sl-M} \Delta_M u(x) +b(x)f(u(x))=0,\quad x\in M, \end{equation} where $\Delta_M$ is the Laplace-Beltrami operator defined on $M$ and $u$ is a non-negative function. The function $f:\mathbb{R}\to \mathbb{R}$ represents the nonlinearity and $b:M \to \mathbb{R}$ is a regularizing coefficient. The semi-linear problem \eqref{eq:sl-M} has been studied in several configuration manifolds like $\mathbb{R}^n, S^n$ or some surfaces of revolution; see \cite{XueChao,castro1,brezis1983positive,MR2862087,fischer2014infinitely} among others. In these cases, the existence of solutions has been obtained with an analytical approach after a change of variables. For instance, in the case of $\mathbb{R}^n$ the problem is reduced to a one-dimensional problem by considering polar coordinates and seeking radial solutions. The resulting problem is a one-dimensional semi-linear second order problem, possibly having a finite number of singularities. Existence of solutions of the resulting one-dimensional problem is then obtained using tools from analysis. Solutions of the reduced one-dimensional problem correspond to solutions of the original semi-linear $n$-dimensional problem that depend solely on the radial part when considered in polar coordinates. Note that the construction mentioned above depends on particular properties of the respective configuration space that give rise to a particular coordinate system. Indeed, what is behind this construction is a particular geometry on the configuration space; in other words, the solution to the problem depends on the geometry of the configuration space $M$.
These are particular examples of a family of geometries described via a general polar coordinate system coming from the set of symmetries of the manifold $M$, sometimes referred to as {\tt polar actions}. See \cite{Jo,DC}. The main goal of this manuscript is to present a geometrical point of view of some reported solution methods for the problem \eqref{eq:sl-M}. This geometric interpretation allows us to extend the analysis to second order semi-linear problems posed on more general configuration spaces. This approach relies on a characterization of the Laplace-Beltrami operator on manifolds with polar actions; see \cite{HelgasonIII}. More precisely, if we consider a polar action of a topological group $G$ on the manifold $M$ with a one-dimensional transverse submanifold $\Sigma$, under the usual regularity and growth assumptions on the non-linear term, we obtain that the problem \eqref{eq:sl-M} has at least one non-negative solution $u\in C^\infty(M)$. The assumptions on the nonlinear term are classical; for instance, they are similar to those of a result given in \cite{XueChao}. We stress that the polar action is the geometric property of the configuration space we mentioned above. For this action we require an additional condition on the rank of the transverse submanifold $\Sigma$. This is a technical condition that appears in order to reduce the problem \eqref{eq:sl-M} posed on $M$ to a problem posed on $\Sigma$ when seeking solutions that are invariant with respect to the polar action of $G$. In this paper we consider mainly the case where $\Sigma$ is a one-dimensional manifold (that is, an interval), since in this case there are available results to obtain existence and properties of solutions; see \cite{XueChao} and references therein.
In the case of $\Sigma$ being of dimension greater than one, we can still write a second order semi-linear problem on $\Sigma$ but, of course, obtaining existence and properties of solutions is more involved in this case and will require more sophisticated tools than the ones used to obtain existence and properties of solutions in the one-dimensional case. The case where $\Sigma$ is of dimension larger than one will be the object of future research. The organization of the paper is the following: in Section~\ref{sec:special solutions} we give a general method to solve the problem in the cases of the euclidean space, the sphere and surfaces of revolution. Indeed, the semi-linear problem in each case can be reduced to the same type of one-dimensional problem (compare the ordinary differential equations \eqref{eq:equivPDE-Rn}, \eqref{eq:equivPDE-Sn} and \eqref{eq:equivPDE-SR}). In Section~\ref{sec:geometry} we present a brief summary of the geometric tools used to reduce the problem \eqref{eq:sl-M} posed on $M$ to a second order semi-linear problem posed on the submanifold $\Sigma$. In particular we give a description of the Laplace-Beltrami operator and we introduce the polar actions. Also, the case of the two-point homogeneous spaces is described with special interest. Finally, Section~\ref{sec:solution} is devoted to the proof of the main result, Theorem \ref{thm:main}, which gives existence of solutions to \eqref{eq:sl-M} via general polar coordinates. \section{Preliminary examples}\label{sec:special solutions} Here we present a standard method to obtain solutions of problem \eqref{eq:sl-M} for the particular cases of the euclidean space, the sphere and surfaces of revolution. The method we present here can be summarized as follows: \begin{enumerate} \item Once we have described the Laplace-Beltrami operator, we use a change of variables (to suitable coordinates) to get an equivalent lower-dimensional problem.
\item We restrict ourselves to an ansatz that depends on only one parameter of the change of variables, which allows us to reduce the dimension of the domain where the problem is posed. Here we have to assume that all the coefficients of the differential equation depend solely on this one-dimensional parameter. \item By using the theory of semi-linear equations (see for example Theorem~1.1 of \cite{XueChao}) we get conditions that guarantee a solution of the obtained one-dimensional problem. \end{enumerate} \begin{example}[The euclidean space $M=\mathbb{R}^n$]\label{ex:Rn} Suppose that the functions $u$ and $b$ are radial in polar coordinates $(r,\theta_1,\dots,\theta_{n-1})$. The equation in \eqref{eq:sl-M} becomes \[ u'' + \frac{n-1}{r} u'+b(r)f(u)=0,\] which has a singularity at $r=0$. Under the change of variable $s=\int_{1}^r c_nt^{1-n} dt$ we get the following equivalent one-dimensional problem \begin{equation}\label{eq:equivPDE-Rn} z''(s)+(r(s))^{2(n-1)}b(r(s))f(z)=0 \end{equation} which is posed on $\mathbb{R}$. $\diamond$ \end{example} \begin{remark} Note that the same argument, but for $r$ belonging to the interval $(R_1,R_2)$, can be used to write a similar one-dimensional problem in the case when $M$ is the annulus bounded by the spheres of radius $R_1$ and $R_2$. \end{remark} \begin{example}[The sphere $M=S^{n}$]\label{ex:Sn} Suppose that the functions $u$ and $b$ are radial in spherical coordinates $(r,\varphi_1,\dots,\varphi_{n-1})$ where $r$ is the arc-length of a meridian from a fixed point $p\in S^n$. The equation \eqref{eq:sl-M} becomes \[u'' + (n-1)\frac{\cos(r)}{\sin(r)} u'+b(r)f(u)=0,\] which has singularities at $r=0$ and $r=\pi$. Under a suitable change of variable, defining $s=\int_{r_0}^r c_n\sin(t)^{1-n} dt$, we get the following equivalent ODE \begin{equation}\label{eq:equivPDE-Sn} z''(s)+(c_n\sin(r(s))^{n-1})^2b(r(s))f(z)=0.
\end{equation} \begin{remark} If we consider the case $n=2$, $S^2\subset \mathbb{R}^3$ and $p=(0,0,1)$ the north pole, we can change variables to the vertical axis $z$ using $\sin(r)=\sqrt{1-z^2}$ to get the equation \[ (1-z^2) \frac{d ^2v}{dz^2} + (1-n)z\frac{dv}{dz}+b(z)f(v)=0, \] where $v(z)=u(r)$. Note that due to the change of variables the singularities are now located at $z=-1$ and $z=1$. This case was studied in \cite{fischer2014infinitely}, where it is shown that the semi-linear Laplace-Beltrami equation has infinitely many solutions on the unit sphere which are symmetric with respect to rotations around some axis. The authors use a fixed point theorem combined with some energy analysis to obtain local solutions. To obtain global solutions they write a Pohozaev-type identity to prove that the energy is continuously increasing with the initial condition and then use phase plane analysis to prove the existence of infinitely many solutions. \end{remark} $\diamond$ \end{example} Under positivity and regularity assumptions on $b$ and asymptotic conditions on $f$, equation~\eqref{eq:equivPDE-Rn} has a regular, positive solution $z(s)$ whenever positive solutions of $z''+b(r(s))(r(s))^{2(n-1)}=0$ exist. The same result holds for equation~\eqref{eq:equivPDE-Sn} whenever positive solutions of $z''+b(r(s))(c_n\sin(r(s))^{n-1})^2=0$ exist. In both cases, we get conditions on $b$ and $f$ that guarantee a solution to the problem \eqref{eq:sl-M}. \begin{example}[Closed surfaces of revolution]\label{ex:surf.rev.} Consider now a plane curve $\gamma(t)=(x(t),z(t))$ defined on an interval $I=[a,b]$ with non-negative components and parametrized by arc-length. If in addition we assume that $x(a)=x(b)=0,$ then we can construct a simply connected surface of revolution $S$ parametrized by $(x(t)\cos(\theta),x(t)\sin(\theta),z(t))$.
The surface $S$ has a Riemannian structure coming from the euclidean structure on $\RR^3$ and it satisfies that the meridian geodesics with origin at $p=(0,0,z(a))$ are the curves with $\theta$ constant \cite{DC}. Indeed, the coordinates $(t,\theta)$ are the geodesic polar coordinates and the Laplace-Beltrami operator on radial functions, using \eqref{eq:LB-polar.coord.}, is $$\Delta u=u''(t)+(\ln(x(t)))'u'(t),$$ and the problem \eqref{eq:sl-M} is written as \begin{equation} (x u')'+x\, b f (u)=0. \end{equation} Again, by using a change of variable of the type $s=\int_{r_0}^r \frac{1}{A(t)} dt$, where $A(t)$ is the area of the geodesic sphere of radius $t$, we obtain that the previous equation becomes \begin{equation}\label{eq:equivPDE-SR} z''(s)+\left(A(r(s))\right)^2b(r(s))f(z)=0, \end{equation} which is an ODE.$\diamond$ \end{example} The remarkable fact is that the solutions of the problem \eqref{eq:sl-M} in these three cases can be studied via the same type of second order ODE (cf. equations \eqref{eq:equivPDE-Rn}, \eqref{eq:equivPDE-Sn} and \eqref{eq:equivPDE-SR}). The same procedure can be applied to more general situations where we can find a ``radial'' variable and the remaining variables can be considered as ``symmetry'' variables. \section{Geometrical setting}\label{sec:geometry} This section is devoted to presenting some basic facts of Riemannian geometry needed to describe our main problem in a general set-up. The key ingredients will be the Laplace-Beltrami operator and the polar actions. \subsection{Riemannian geometry} Let us begin by presenting some familiar concepts. For a more detailed description we refer the reader to~\cite{Jo}. A manifold $M$ of dimension $n$ is a connected para-compact Hausdorff space for which every point $x\in M$ has a neighborhood $U_x$ that is homeomorphic to an open subset $\Omega_x$ of $\RR^n$. Such a homeomorphism $\phi_x : U_x \to \Omega_x$ is called a {\it coordinate chart}.
If for two charts the transition function $\phi_x\circ \phi_y^{-1}:\Omega_y\to \Omega_x$ is a $C^r$-diffeomorphism, we say that the manifold has a $C^r$-differentiable structure. We denote by $T_xM$ the vector space consisting of all tangent vectors to curves in $M$ at the point $x$. It is called the \emph{tangent space} of $M$ at the point $x$. A {\it Riemannian metric} on a differentiable manifold $M$ is given by a scalar product on each tangent space $T_x M$ which depends smoothly on the base point $x$. A {\it Riemannian manifold} is a differentiable manifold equipped with a Riemannian metric. In any system of local coordinates $(x_1,\ldots,x_n)$ the Riemannian metric is represented by the positive definite, symmetric matrix $(g_{ij}(x))_{i,j=1,\ldots,n}$ where the coefficients depend smoothly on $x$. Let $\gamma : [a, b]\to M$ be a smooth curve. The length of $\gamma$ is defined as $$L(\gamma)=\int_a^b \|\gamma'(t)\|dt$$ where the norm of the tangent vector $\gamma'$ is given by the Riemannian metric. This value is invariant under re-parametrization. Taking the infimum of the values $L(\gamma)$ among all the curves $\gamma$ joining two points $p,q\in M$, we can define a distance function on $M$, and the topology of this distance coincides with the topology of the manifold structure of $M$. \begin{theorem} Let $M$ be a Riemannian manifold, $x\in M$ and $v\in T_xM$. Then there exist $\epsilon>0$ and precisely one geodesic $c : [0, \epsilon] \to M$ with $c(0) = x$, $c'(0)= v$. In addition, $c$ depends smoothly on $x$ and $v$. \end{theorem} The main consequence of this theorem is that we can define what are called {\it normal coordinates} via the exponential map. The {\it exponential map} is a diffeomorphism between an open set of $T_xM$ (with center at $0$), defined by geodesics, and an open set in $M$ (with center at $x$).
When we use standard coordinates $(r, \varphi)$, where $\varphi = (\varphi_1, \ldots,\varphi_{n-1})$ parametrizes the unit sphere $S^{n-1}$ in $\RR^n$, we then obtain polar coordinates on $T_xM$ (via an orthonormal linear isomorphism $T_xM\equiv \RR^n$), and thus we get a new coordinate system on $M$ via the exponential map. Such coordinates are known as {\it geodesic polar coordinates} and satisfy that the lines with $\varphi$ constant are geodesics. \begin{corollary} For any $x\in M$, there exists $\rho>0$ such that geodesic polar coordinates can be introduced on $B(x, \rho) := \{q \in M : d(x, q) \leq \rho\}$. For any such $\rho$ and any $q \in \partial B(x, \rho)$, there is precisely one geodesic of shortest length $(= \rho)$ from $x$ to $q$, and in polar coordinates, this geodesic is given by the straight line $x(t) = (t, \varphi_0 ), 0 \leq t \leq \rho$, where $q$ is represented by the coordinates $(\rho, \varphi_0), \varphi_0 \in S^{n-1}$. \end{corollary} \begin{example}[The sphere] The surface $S^2$ can be constructed via horizontal circles $C_z$ centered on the $z$-axis at heights $z\in[-r_0,r_0]$. This provides a parametrization of $S^2$ via the formula $$\gamma(z,t)=(\sqrt{r_0^2-z^2}\cos(t), \sqrt{r_0^2-z^2}\sin(t),z)$$ for a fixed value of $r_0$. We consider the tangent space $T_pS^2$ at some point $p\in S^2$ (see Figure~\ref{figura1}). In $T_pS^2$ we can define the (planar) polar coordinates $(r,\theta)$ centered at the point $p$. Without loss of generality we assume $p=(0,0,r_0)$; we can then project each circle defined in these polar coordinates to $S^2$ in such a way that the image of these circles coincides with the circles $C_z$ and the radius $r$ projects onto a geodesic transversal to the circles $C_z$.
\tdplotsetmaincoords{80}{125} \begin{figure}[t] \centering \begin{tikzpicture}[MyPersp,font=\large] \def\h{4} \def\a{35} \def\aa{35} \pgfmathparse{\h/tan(\a)} \let\b\pgfmathresult \pgfmathparse{sqrt(1/cos(\a)/cos(\a)-1)} \let\c\pgfmathresult \pgfmathparse{\c/sin(\a)} \let\p\pgfmathresult \coordinate (A) at (2,0.5*\b,0); \coordinate (B) at (-2,0.5*\b,0); \coordinate (C) at (-2,-1.5,{(1.5+\b)*tan(\a)}); \coordinate (D) at (2,-1.5,{(1.5+\b)*tan(\a)}); \coordinate (E) at (2,-2.5,0); \coordinate (F) at (-2,-2.5,0); \coordinate (CLS) at (0,0,{\h-\p}); \coordinate (CUS) at (0,0,{\h+\p}); \coordinate (FA) at (0,{\c*cos(\a)},{-\c*sin(\a)+\h}); \coordinate (FB) at (0,{-\c*cos(\a)},{\c*sin(\a)+\h}); \coordinate (SA) at (0,1,{-tan(\a)+\h}); \coordinate (SB) at (0,-1,{tan(\a)+\h}); \coordinate (PA) at ({sin(\aa)},{cos(\aa)},{\h+3*\p}); \coordinate (PB) at ({sin(\aa)},{cos(\aa)},{\h-3*\p}); \coordinate (P) at ({sin(\aa)},{cos(\aa)},{-tan(\a)*cos(\aa)+\h}); \draw (E)--(A)--(B)--(F)--cycle; \foreach \t in {20,40,...,360} \draw[->,magenta,thick] ({cos(\t)},{sin(\t)},0) --({0.8*cos(\t)},{0.8*sin(\t)},{0.25*\h}); \foreach \t in {20,40,...,360} \draw[->,red,thick] ({0.5*cos(\t)},{0.5*sin(\t)},0) --({0.5*cos(\t)},{0.5*sin(\t)},{0.1*\h}); \foreach \t in {20,40,...,360} \draw[->,gray,thick] ({1.5*cos(\t)},{1.5*sin(\t)},0) --({cos(\t)},{sin(\t)},{0.4*\h}); \draw[magenta,very thick] (1,0,0) \foreach \t in {5,10,...,360} {--({cos(\t)},{sin(\t)},0)}--cycle; \draw[magenta,very thick] (1,0,0) \foreach \t in {0,10,...,360} {--({0.5*cos(\t)},{0.5*sin(\t)},0)}--cycle; \draw[magenta,very thick] (1,0,0) \foreach \t in {0,10,...,360} {--({0.01*cos(\t)},{0.01*sin(\t)},0)}--cycle; \draw[magenta,very thick] (1,0,0) \foreach \t in {0,10,...,360} {--({1.5*cos(\t)},{1.5*sin(\t)},0)}--cycle; \foreach \i in {-1} { \foreach \t in {0,15,...,165} {\draw[gray,dashed] ({cos(\t)},{sin(\t)},\h+\i*\p) \foreach \rho in {5,10,...,360} {--({cos(\t)*cos(\rho)},{sin(\t)*cos(\rho)}, {sin(\rho)+\h+\i*\p})}--cycle; } \foreach \t in
{-75,-60,...,75} {\draw[orange,very thick] ({cos(\t)},0,{sin(\t)+\h+\i*\p}) \foreach \rho in {5,10,...,360} {--({cos(\t)*cos(\rho)},{cos(\t)*sin(\rho)}, {sin(\t)+\h+\i*\p})}--cycle; } \draw[orange,very thick] (1,0,{\h+\i*\p}) \foreach \t in {5,10,...,360} {--({cos(\t)},{sin(\t)},{\h+\i*\p})}--cycle; } \draw[black,thick] (0,0,-2)--(0,0,4); \fill[MyPoints] (0,0,0) circle (1pt) node[right]{$p$}; \fill[MyPoints] (0,2.5,0) node[left]{$T_pM$}; \fill[MyPoints] (1.3,0.2,0) node{$r$}; \end{tikzpicture} \caption{Polar coordinates on $S^2$}\label{figura1} \end{figure} \end{example} \begin{example}[Two points homogeneous spaces]\label{two points homogeneous spaces} A Riemannian manifold $(M,g)$ is a {\it two points homogeneous space} if the group $I(M)$ of isometries acts transitively on the set of equidistant pairs of points. This means that for any $p_1,p_2,p_1',p_2'\in M$ such that $d_g(p_1,p_2)=d_g(p'_1,p'_2)$ there exists $\varphi\in I(M)$ such that $\varphi(p_i)=p_i'$ for $i=1,2$. As a direct consequence of the definition, the function $A_p(r)$, defined as the Riemann measure of the geodesic sphere $S_r(p)$ with center at $p\in M$, is independent of the point $p$; hence $A$ is globally defined on $M$. Note that, by the definition, only one invariant (the distance) is needed to determine all the equidistant pairs of points. It is a well known fact that any two points homogeneous space is either the Euclidean space or a symmetric space of rank one (see Chapter X in \cite{Hel}). For the complete classification of the symmetric spaces of arbitrary rank we refer to \cite[Table V Chapter X]{Hel2}; for the convenience of the reader we give the list of the two points homogeneous spaces.
\begin{itemize} \item The Euclidean space $\RR^n$ and the spheres $S^n$ for $n\geq 1$. \item The real, complex and quaternionic projective spaces $P\RR^n, P\CC^n,$ and $P\HH^n$ for $n\geq 2$. \item The real, complex and quaternionic hyperbolic spaces $H\RR^n, H\CC^n,$ and $H\HH^n$ for $n\geq 2$. \item The Cayley projective plane $P\CC_a^2$ and the Cayley hyperbolic plane $H\CC_a^2$. \end{itemize} Some special cases of closed surfaces of revolution, classified by the Gaussian curvature, are isometric to two points homogeneous spaces. \end{example} \subsection{The Laplace-Beltrami operator and geodesic polar coordinates} What we need now is the description of the Laplace-Beltrami operator on a Riemannian manifold $M$. If we denote $\bar{g}=\det(g_{ij})$, the Laplace-Beltrami operator acting on a smooth function $u:M\to \mathbb{R}$ is defined as\footnote{\mbox{ For the coordinate-free expression of $\Delta_Mu$ we refer to \cite[Sec~2.1]{Jo}.}} $$\Delta_Mu=\frac{1}{\sqrt{\bar{g}}}\sum_{k}\frac{\partial}{\partial x_k}\left( \sum_{i} g^{ik}\sqrt{\bar{g}}\frac{\partial}{\partial x_i}u \right),$$ where $(g^{ij}):=(g_{ij})^{-1}$. In the case of geodesic polar coordinates the previous operator takes a special form: \begin{equation}\label{eq:LB-polar.coord.} \Delta_M u=\dfrac{\partial^2u}{\partial r^2}+ \dfrac{1}{\sqrt{\bar{g}}}\dfrac{\partial \sqrt{\bar{g}}}{\partial r} \dfrac{\partial u}{\partial r}+ \dfrac{1}{\sqrt{\bar{g}}}\sum_{i,j=1}^{n-1}\dfrac{\partial }{\partial \varphi_i}\left(g^{ij}\sqrt{\bar{g}}\dfrac{\partial u}{\partial \varphi_j}\right). \end{equation} For two points homogeneous spaces the previous formula takes the following form, \begin{equation}\label{eq:LB-2pths} \Delta u=u''+(\ln A)'u', \end{equation} where $u$ is a radial function (depending only on the geodesic distance $r$) and $A(r)$ is the Riemann measure of the geodesic sphere of radius $r$ (cf. Lemma 7.12 in \cite{Hel}).
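For instance, on the unit sphere $S^2$ the geodesic sphere of radius $r$ centered at any point is a circle of length $A(r)=2\pi\sin r$, so that \eqref{eq:LB-2pths} reads $$\Delta u = u''+(\cot r)\,u',$$ which is the familiar radial part of the Laplacian in spherical coordinates.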
\subsection{Generalized polar coordinates} In order to generalize the geodesic polar coordinates described in the previous section for two points homogeneous spaces to a more general set-up, we shall make some remarks on the examples \ref{ex:Rn}-\ref{ex:surf.rev.}. The geometry of these examples will serve as a motivation to study a more general notion, that of \emph{polar coordinates}. For that goal some definitions are needed. \begin{definition} An \emph{action} of a Lie group $G$ with identity element $e$ on a manifold $M$ is a differentiable map $\phi:G\times M\rightarrow M$ such that \begin{itemize} \item $\phi(e,x)=x$ for all $x\in M$, \item $\phi(\alpha\beta,x)=\phi(\alpha,\phi(\beta,x))$ for all $\alpha,\beta\in G$ and for all $x\in M$. \end{itemize} \end{definition} For short we write $\phi(\alpha,x)=\alpha x$ for all $\alpha\in G$ and $x\in M$, and we denote by $$G\cdot x:=\{\alpha x\,|\,\alpha\in G\}$$ the orbit of $x\in M$. For a Riemannian manifold $(M,g)$ we say that $G$ acts on $M$ \emph{by isometries} if for any $\alpha\in G$ we have $g_p(X,Y)=g_{\alpha p}(d_p\phi(\alpha) X,d_p\phi(\alpha) Y)$ for all $X,Y\in T_pM$. With this definition we proceed to revisit some examples. Let us begin with the example of the 2-sphere. The surface $S^2$ can be constructed via radial circles $C_z$ centered at the points of $[-r_0,r_0]$ on the $z$-axis. This provides a parametrization of $S^2$ via the formula $$\gamma(z,t)=(\sqrt{r_0^2-z^2}\cos(t), \sqrt{r_0^2-z^2}\sin(t),z)$$ for a fixed value of $r_0$. We define an action of the compact group $S^1$ on $S^2$ by $$\lambda\cdot \gamma(z,t)=(\sqrt{r_0^2-z^2}\cos(t+\theta_{\lambda}), \sqrt{r_0^2-z^2}\sin(t+\theta_{\lambda}),z),$$ where $\theta_{\lambda}$ is the angle corresponding to $\lambda\in S^1$. Note that the orbit of the point $\gamma(z,t)$ is exactly the circle $C_z$.
Now we fix a point $p\in S^2$ (that is neither the north nor the south pole) and consider the meridian $\Sigma$ through $p$; we obtain that $\Sigma=\{(\sqrt{r_0^2-z^2},0,z)\,|\,-r_0< z< r_0\}$. At each point $p$ of $\Sigma$ lying on the circle $C_z$, $z\in (-r_0,r_0)$, the tangent space can be decomposed as $T_pS^2=E_1\oplus E_2$, where $E_1$ is a one-dimensional vector space generated by a vector in the direction of $\Sigma$ and $E_2$ is generated by a vector in the direction of the circle $C_z$. Note that in this case the submanifold $\Sigma$ is \emph{transversal} to the action of $S^1$ on $S^2$. We stress that this fact only holds for points that are not the north or the south pole (i.e. $z\in (-r_0,r_0)$), because these particular points are fixed points of the action; nevertheless such points can be handled in the situation described before by using another (rotation-invariant) parametrization. This situation also occurs in the other examples presented in section \ref{sec:special solutions}. That is, there exists an action by isometries of a compact group $G$ on the Riemannian manifold $M$ whose orbits are transversal to a fixed submanifold $\Sigma$ of $M$: \begin{itemize} \item For example \ref{ex:Rn}, there is an action of the group $SO(n)$. For this action the submanifold $\Sigma$ can be chosen as a ray from the origin in $\RR^n$. \item For the example \ref{ex:Sn}, the subgroup $G$ of $SO(n+1)$ defined by the rotations about the $x_{n+1}$-axis (that is, $G=SO(n)$) acts on $S^{n}$ by considering it as a submanifold of the space $\RR^{n+1}$. The submanifold $\Sigma$ can be chosen as a geodesic line joining the points $(0,\dots ,0, 1)$ and $(0,\dots ,0, -1)$ of $S^n$. \item For the surfaces of revolution, the group $S^1$ acts by rotations on the surface, and $\Sigma$ is the image of the curve $\gamma(t)=(x(t),0,z(t))$. Moreover, note that this holds for any surface of revolution, not only the closed ones.
\item In general, for two points homogeneous spaces there always exists such a transversal one-dimensional submanifold. \end{itemize} In this way, the geodesic polar coordinates on these spaces can be understood as the ones given by the orbits of an action and a transversal one-dimensional submanifold. This leads to a general set-up of actions of a group of isometries $G$ on a Riemannian manifold $M$, where the submanifold $\Sigma$ can be chosen to be \emph{transversal} to the orbits of the action. \begin{definition}\label{def:polar action}\label{polaraction} Let $G$ be a Lie group acting on $M$ by isometries. One says that the action is \emph{polar} if there is a complete immersed submanifold $\Sigma$ in $M$ that is transversal to the non-trivial orbits. That is, if $G\cdot x\neq\{x\}$ then $\Sigma$ intersects $G\cdot x$ in a single point and $$T_xM=T_x\Sigma\oplus T_x(G \cdot x).$$\footnote{There is a weaker notion, but we do not consider it because we want to avoid fixed points of the $G$-action.} \end{definition} From the previous discussion we can verify that the action of $S^1$ on the sphere $S^2$ is a polar action when we take $\Sigma$ to be a meridian without the north and south poles (these are the fixed points of the action). The use of polar actions gives meaningful advantages; for instance, we can consider more situations than two points homogeneous spaces where we have suitable polar coordinates. Namely, consider the space $T^2=S^1\times S^1$ with the action of $S^1$ defined by rotating the second coordinate; the set $\{(s_1,s_0): s_1\in S^1\}$ is a submanifold of $T^2$ transversal to the orbits of the action. The main advantage we work with is that for manifolds which admit polar actions we obtain a general description of the Laplace-Beltrami operator. In particular, we extend the expression of the Laplace-Beltrami operator to general surfaces of revolution that are not isometric to two points homogeneous spaces.
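For instance, for the flat torus $T^2$ with this $S^1$-action the orbits are circles of the same length, so the invariant measure $A(r)$ of the orbits is constant along the transversal circle $\Sigma$; the term $(\ln A)'u'$ in \eqref{eq:LB-2pths} then vanishes and the radial part of the Laplace-Beltrami operator reduces to $u\mapsto u''$.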
\subsubsection{The radial part of a differential operator} For this section we assume a polar action, and our main reference is \cite{HelgasonIII}. We reproduce some constructions presented there in order to obtain a clear presentation. Let $(M,g)$ be a complete Riemannian manifold and let $\Sigma$ be a submanifold of $M$. For any element $s\in \Sigma$, we consider $\Sigma_s^{\bot}$, the union of all the geodesics starting at $s$ transversally to $\Sigma$. For a fixed neighborhood $U_0$ of a point $s_0\in\Sigma$ we can assume that all the $\Sigma_s^{\bot}$ for $s\in U_0$ are disjoint; then their union defines a neighborhood $V_0$ of $s_0$ in $M$. For any $C^{\infty}$-function $u$ with compact support defined on $\Sigma$, we define $\tilde{u}$ on $V_0$ by setting $\tilde{u}\equiv u(s)$ on $\Sigma_s^{\bot}$, for all $s\in U_0$. Let $D$ be a differential operator defined on $M$. We define the \emph{projection} $D'$ of $D$ on $\Sigma$ by the equation $$D'(u)(s)=D(\tilde{u})(s).$$ \begin{proposition}[\cite{HelgasonIII}, Proposition 1.1] Let $L_M$ and $L_{\Sigma}$ be the Laplace-Beltrami operators on $M$ and $\Sigma$ respectively. Then $L_{\Sigma}$ is the projection of $L_M$. \end{proposition} We also have a local description that allows us to define the transversal part of a differential operator. \begin{lemma}[\cite{HelgasonIII}, Lemma 1.2] Let $\Sigma$ be a transversal submanifold in $M$. For each $x\in M$ there exists a cross section $B\times W$, with $B$ a relatively compact submanifold of $G$ and $W$ an open neighborhood of $x$ in $\Sigma$, such that the mapping $\eta:B\times W\rightarrow V$ defined by $(b,w)\mapsto bw$ is a diffeomorphism onto an open neighborhood $V$ of $x$ in $M$. \end{lemma} For $x\in M$, let us consider $N=G\cdot x$ and construct the neighborhood $N_x^{\bot}$ as before. We can assume $N_x^{\bot}$ is transversal in $M$, and we apply the lemma above to construct the neighborhood $V=\cup_{w\in W} Bw$ of $x$.
For $u$ a $C^{\infty}$-function on $M$, we restrict it to $W\subset N_x^{\bot}$ and then extend this restriction to a function on $V$ defined by $u_x(bw)=u(w)$ for $b\in B$ and $w\in W$. For a differential operator $D$ on $M$ we define the \emph{transversal part} $D_T$ of $D$ by $$D_T(u)(x)=D(u_x)(x).$$ \begin{lemma}[\cite{HelgasonIII}, Theorem 1.4] Let $M$ be a Riemannian manifold and $G$ a Lie group acting by isometries on $M$. Let $N$ be any orbit of the action of $G$ in $M$ and let $u^-$ denote the restriction to $\Sigma$ of a $C^{\infty}$-function $u$ on (a subset of) $M$. Then $$L_M(u)=L_\Sigma(u^-)+L_T(u)^-.$$ \end{lemma} \begin{theorem}[\cite{HelgasonIII}, Proposition 2.1]\label{teoradialpart} Let us suppose $\Sigma$ transversal to the action of $G$. Let $D$ be a differential operator on $M$. There is a unique operator $\Delta(D)$ on $\Sigma$ such that $D(u)^-=\Delta(D)(u^-)$ for each $G$-invariant function $u$ defined on an open subset of $M$. \end{theorem} The operator $\Delta(D)$ is called the \emph{radial part} of $D$. \\ Henceforth, we suppose $G$ is a compact group of isometries of $M$. \begin{theorem}[\cite{HelgasonIII}, Theorem 2.11] For any one-dimensional submanifold $\Sigma\subset M$, transversal to the action of $G$, we have $$\Delta(L_M)=\frac{1}{\sqrt{A(r)}}\,L_{\Sigma}\circ\sqrt{A(r)}-\frac{1}{\sqrt{A(r)}}L_{\Sigma}(\sqrt{A(r)}),$$ where $A(r)$ denotes the Riemannian $G$-invariant measure of the geodesic sphere of radius $r$ along the submanifold $\Sigma$, and $L_{\Sigma}\circ\sqrt{A(r)}$ denotes the composition of the operator $L_{\Sigma}$ with the operator of multiplication by $\sqrt{A(r)}$. \end{theorem} Let $u$ be a function such that $u(x)=u(gx)$ for all $g\in G$ and for all $x\in M$.
Such a function $u$ is $G$-invariant, so we can apply Theorem \ref{teoradialpart} and obtain \begin{equation}\label{eq:reduced LB} L_M(u)=\Delta(L_M)(u)=\frac{1}{\sqrt{A(r)}}L_{\Sigma}(\sqrt{A(r)}u)-\frac{1}{\sqrt{A(r)}}L_{\Sigma}(\sqrt{A(r)})u, \end{equation} the explicit formula for the Laplace-Beltrami operator applied to $u$. After some straightforward calculations, equation \eqref{eq:reduced LB} turns out to be \begin{align*}L_M(u)&=\frac{2\nabla(\sqrt{A(r)})\cdot\nabla(u)}{\sqrt{A(r)}}+L_{\Sigma}(u)\\ &=\frac{(A(r))'u'}{A(r)}+u''\\&=(\ln(A))'u'+u'',\end{align*} recovering, in a general setting, the equation \eqref{eq:LB-2pths} presented before for two points homogeneous spaces (see Example \ref{two points homogeneous spaces})\footnote{We remark that here we are actually using the fact that $\Sigma$ is a one-dimensional submanifold of $M$.}. We finish this section by discussing the case of a higher dimensional transverse submanifold. Note that the Laplace-Beltrami operator \eqref{eq:LB-2pths} for two points homogeneous spaces, applied to the PDE in the problem \eqref{eq:sl-M}, yields the following equivalent problem for radial functions: \begin{equation} (Au')'+Abf(u)=0. \end{equation} This one-dimensional equation corresponds to the general situation presented in the examples leading to equations \eqref{eq:equivPDE-Rn}, \eqref{eq:equivPDE-Sn} and \eqref{eq:equivPDE-SR}. In the general case of polar actions, the dimension of $\Sigma$ could be greater than 1, and in these cases the PDE in \eqref{eq:sl-M} is equivalent to \begin{equation}\label{eqdimg} \Delta u+2\dfrac{\nabla \sqrt{A(r)}\cdot \nabla u}{\sqrt{A(r)}}+b(r)f(u)=0 \end{equation} for a $G$-invariant function $u$. Hence we get a semi-linear PDE that can be considered for analysis.
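The one-dimensional reduction above can also be checked symbolically. The following sketch (in Python with SymPy; the script and its variable names are ours, purely for illustration) verifies that $\frac{1}{\sqrt A}(\sqrt A\,u)''-\frac{(\sqrt A)''}{\sqrt A}\,u = u''+(\ln A)'u'$ for arbitrary smooth $A$ and $u$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
A = sp.Function('A')(r)   # G-invariant measure of the geodesic sphere
u = sp.Function('u')(r)   # a G-invariant (radial) profile

# Radial part of the Laplace-Beltrami operator, with L_Sigma = d^2/dr^2
# for a one-dimensional transversal submanifold Sigma:
lhs = sp.diff(sp.sqrt(A)*u, r, 2)/sp.sqrt(A) - sp.diff(sp.sqrt(A), r, 2)/sp.sqrt(A)*u
# Reduced form u'' + (ln A)' u':
rhs = sp.diff(u, r, 2) + sp.diff(sp.log(A), r)*sp.diff(u, r)

assert sp.simplify(lhs - rhs) == 0
```

The cross terms $2(\sqrt A)'u'/\sqrt A=(A'/A)u'$ are exactly the ones displayed in the chain of equalities above.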
In particular, several questions could, in principle, be studied: \begin{itemize} \item Existence of solutions (which will imply existence of polar-action invariant solutions of the original problem). \item Existence of infinitely many polar-action invariant solutions, since the reduced dimensional problem may be parametrized by some boundary data. \item Existence of solutions with some desirable properties, such as non-negativity. \end{itemize} and many other analytical questions. To illustrate the use of the geometrical tools presented so far, we present, in the next section, the case where the parameter $r$ is one-dimensional and we seek a non-negative solution of problem \eqref{eq:sl-M}. The general case will be the object of future work by some of the authors. \section{Non-negative solutions for the case of one dimensional transverse submanifold}\label{sec:solution} We start by stating our result. \begin{theorem}\label{thm:main} Consider a polar action of a topological group $G$ on the manifold $M$ with a 1-dimensional transverse submanifold $\Sigma$, and denote by $\phi:\Sigma\to (0,\infty)$ the Riemann measure of the geodesic sphere. Let $r_0\in \Sigma$. In addition assume that \begin{enumerate}[(i)] \item the change of variables $s=J(r):=\int_{r_0}^r \phi(t)^{-1}dt$ maps $\Sigma$ to $\RR$, \item the linear problem $$z''(s) +b(r(s))\phi(r(s))^2=0,\quad s\in\RR, $$ with $b(r(\cdot))$ an $\alpha$-regular positive function, has a unique positive $C_0(\RR)\cap C^{2+\alpha}_{loc}(\RR)$ solution, \item $f\in C^1((0,\infty), (0,\infty))$ is such that $\lim_{q\to 0^+} f(q)/q=\infty$ and $\lim_{q\to \infty} f(q)/q=0$. \end{enumerate} Then the problem \eqref{eq:sl-M} has at least one non-negative solution $u\in C^\infty(M)$. \end{theorem} Assumption $(i)$ is a technical assumption; it refers to the non-integrability of the function $1/\phi$.
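The change of variables in assumption (i) is the one used in the proof to pass from the radial equation $(\phi u')'+b\phi f(u)=0$ to $z''+b\phi^2 f(z)=0$. A small symbolic sketch (Python/SymPy; the model choices $\phi(r)=r$, $u(r)=r^3$ and $r_0=1$ are illustrative assumptions) verifies the underlying identity $z''(s)=\phi\,(\phi u')'\big|_{r=r(s)}$:

```python
import sympy as sp

s, r = sp.symbols('s r', positive=True)

phi = r          # model measure: phi(r) = r (illustrative assumption)
u = r**3         # concrete radial profile u(r) (illustrative assumption)

# s = J(r) = int_1^r dt/phi(t) = log(r), so the inverse map is r(s) = e^s.
r_of_s = sp.exp(s)
z = u.subs(r, r_of_s)                                      # z(s) = u(r(s))

lhs = sp.diff(z, s, 2)                                     # z''(s)
rhs = (phi*sp.diff(phi*sp.diff(u, r), r)).subs(r, r_of_s)  # phi*(phi*u')' at r(s)

assert sp.simplify(lhs - rhs) == 0
```

Together with $(\phi u')'=-b\phi f(u)$, this identity gives $z''=-b\phi^2 f(z)$, which is the transformed problem appearing in the proof.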
The case where the change of variables $J$ maps $\Sigma$ to a bounded interval is easier to deal with and is not presented here. The other two hypotheses (ii)-(iii) recall a result given in \cite{XueChao}, and the polar action encodes the geometric properties of the configuration space we mentioned above. The other condition, on the dimension of the transverse submanifold, is a technical condition that appears in order to reduce the problem \eqref{eq:sl-M} posed on $M$ to a problem posed on $\Sigma$ when seeking solutions that are invariant with respect to the polar action of $G$. \begin{proof} Here we construct the solution of the problem \eqref{eq:sl-M} where $M$ is a surface of revolution or a two points homogeneous space. As we mentioned in the previous section, if we assume that $u$ and $b$ are radial functions (in global polar coordinates), then the problem transforms into $$(SL1)\begin{cases} (\phi(r)u'(r))' +b(r)\phi(r)f(u(r))=0,\\ u(0)=d,\ u(r)\geq 0, \end{cases}$$ where $d$ is a real number. Note that $\phi(r)=x(t)$ in the surface of revolution case and $\phi(r)=A(r)$ in the two points homogeneous space case. In both cases the function $\phi(r)$ is non-negative for $r\neq 0$, so we can define $s=J(r)=\int_{r_0}^r \phi(t)^{-1}dt$ with $r_0>0$ and $z(s)=u(J^{-1}(s))$. It is routine to verify that $$\dfrac{d r}{ds}=\phi(r) \hspace{1cm}\mbox{\ and\ }\hspace{1cm} \dfrac{dz}{ds}=u'(r)\phi(r),$$ thus $(SL1)$ turns out to be equivalent to $$(SL2)\begin{cases} z''(s) +b(r(s))\phi(r(s))^2f(z(s))=0,\\ z(0)=d,\ z(s)\geq 0, \end{cases}$$ where $d$ is any real number. Following Theorem 1.1 in \cite{XueChao} we can conclude that if $z''(s)+b\phi^2=0$ has a positive $C_0(\RR)\cap C^{2+\alpha}_{loc}(\RR)$ solution, with $b$ a positive and $\alpha$-regular function, then the problem $(SL2)$ has a solution if, in addition, we suppose that $f\in C^1((0,\infty), (0,\infty))$ is such that $\lim_{q\to 0^+} f(q)/q=\infty$ and $\lim_{q\to \infty} f(q)/q=0$.
Note that in this case $u(r)=z(J(r))$ is a solution of the problem \eqref{eq:sl-M}. \end{proof} \section{Conclusions} In this paper we study second order semi-linear partial differential equations on a Riemannian manifold. In particular, we prove the existence of solutions that are constant along the orbits of a given group action. Using some results obtained by Helgason in \cite{Radialpart} we reduce the dimension of the partial differential equation and are able to bring in known tools from analysis to obtain the results. We have detailed all the geometrical constructions needed to obtain the reduced dimensional problem which, in the general case, is posed on a submanifold of the original domain where the partial differential equation is posed; see~\eqref{eqdimg}. \section*{Acknowledgements} J.~G. wants to thank professors A. Castro and G. Rodr\'iguez for useful conversations concerning this type of problems and exponential coordinates.
\section{Introduction} Phase transitions in two-dimensional systems with a continuous symmetry have long attracted much attention since the theoretical prediction by Berezinskii, Kosterlitz and Thouless (BKT) of a topological ordering through the binding of vortex-antivortex pairs \cite{Berezinskii,Kosterlitz}. Unlike a conventional thermodynamic transition, which is prohibited in two-dimensional systems by the Coleman-Mermin-Wagner theorem \cite{Coleman,Mermin,Hohenberg}, the BKT transition exhibits a critical line below the BKT transition temperature, $T \leq T_{\rm BKT}$, with continuously variable critical exponents and a nonzero helicity modulus (superfluid density) showing a discontinuous jump at the BKT transition temperature. The BKT transition has been observed in $^4$He films \cite{Bishop}, thin superconductors \cite{Guster,Hebard,Voss,Wolf,Epstein}, Josephson-junction arrays \cite{Resnick,Voss2}, colloidal crystals \cite{Halperin,Young,Zahn}, and ultracold atomic Bose gases \cite{Hadzibabic}. One of the important issues of the BKT transition is the relationship between its universality and the topological aspects of vortices. In two-dimensional Bose systems with no internal degree of freedom, circulations of vortices are quantized in units of $2 \pi \hbar / m$, with the particle mass $m$, giving the universal jump of the superfluid number density $\Delta \rho_{\rm s}$ at the BKT transition temperature $T_{\rm BKT}$ as \begin{align} \Delta \rho_{\rm s} = \frac{2 m T_{\rm BKT}}{\pi \hbar^2}.
\label{eq:single-component-universal-relation} \end{align} On the other hand, multi-component systems in general allow quantized vortices with fractional circulations, which are studied in superfluid $^3$He \cite{Salomaa,Volovik,Autti}, $p$-wave superconductors \cite{Salomaa,Ivanov,Chung,Jang}, multi-band or multi-component superconductors \cite{Babaev,Goryo,fractional-exp,Tanaka}, spinor Bose systems \cite{Ho,Semenoff}, multicomponent Bose systems \cite{Son,Mueller,Kasamatsu-1,Kasamatsu-2,Aftalion,Kuopanportti,Kasamatsu-3,Kasamatsu-4,Eto:2011wp,Eto-2,Eto:2013spa,Cipriani,Cipriani:2013wia,Dantas:2015fka,Tylutki:2016mgy,Eto:2017rfr,Kasamatsu:2015cia,Uranga}, exciton-polariton condensates \cite{Rubo,Keeling}, nonlinear optics \cite{Pismen}, and color superconductors as quark matter \cite{Balachandran}. It has been predicted that the relation \eqref{eq:single-component-universal-relation} is modified for superfluid systems with internal degrees of freedom admitting vortices with fractional circulations \cite{Stein,Korshunov}. However, the existence of such an unusual BKT transition has not yet been conclusively established. Here we consider a two-dimensional Bose system with two different quantum sublevels. We consider the situation in which the phases of the two sublevels can be synchronized through a Josephson or internal coherent coupling. When the Josephson coupling is switched off, vortices of the two components have fractional circulations $\pm 2 \pi \alpha_i \hbar / m$ ($i = 1, 2$), with the fractional parameter $\alpha_i \in (0,1)$ for the $i$-th component and $\alpha_1 + \alpha_2 = 1$. Under a finite Josephson coupling, on the other hand, a single vortex in each component cannot exist as a stable topological defect. Instead, two vortices, one in each component, are connected by a one-dimensional (sine-Gordon) kink \cite{Son,Tanaka:2001} to form a vortex molecule as a stable topological defect having the integer circulation $\pm 2 \pi \hbar / m$ \cite{Son,Kasamatsu-3}.
Dynamics of such vortex molecules have been studied in Refs.~\cite{Tylutki:2016mgy,Eto:2017rfr}, and vortex lattices have been studied in Refs.~\cite{Cipriani,Uranga}. In this Letter, we investigate the possibility of a BKT transition in this system and obtain the following results. When the Josephson coupling is switched off, there are two-step BKT transitions induced by bindings of fractional vortex-antivortex pairs of each component. Both BKT transition temperatures depend on the fractional parameter $\alpha_i$. The universal relation is changed to twice the right-hand side of \eqref{eq:single-component-universal-relation} only when the two BKT transition temperatures coincide, i.e., for $\alpha_1 = \alpha_2 = 1/2$. Otherwise, the universal relation is unchanged, which suggests that fractional circulation alone is not a sufficient condition for a change of the universal relation \eqref{eq:single-component-universal-relation}, as opposed to previous expectations \cite{Stein,Korshunov,Mukerjee,James}. When the Josephson coupling is switched on, the BKT transition is induced by bindings of molecule-antimolecule pairs with the normal universal relation \eqref{eq:single-component-universal-relation}, while the bindings of vortex-antivortex pairs of each component do not give a phase transition but rather two crossovers. Our results can be tested in two weakly connected ultracold Bose gases or in an ultracold Bose mixture with two magnetic hyperfine spin sublevels. For these systems, the inter-component Josephson coupling can be realized by the energy barrier separating two slices or by the Rabi oscillation between two sublevels, respectively. The latter system has also been considered as a toy system simulating quark confinement in quantum chromodynamics (QCD) \cite{Tylutki:2016mgy,Eto:2017rfr}, and thereby our results could give some implications for statistical properties such as a transition between the confinement and deconfinement phases.
Other candidates are multiband superconductors such as superconducting MgB$_2$ compounds \cite{Liu,Brinkman,Golubov,Choi} and iron-based superconductors \cite{Kamihara,Singh,Mazin}; the former is predicted to have a weaker Josephson coupling strength than the latter. Although these materials have not yet been completely confirmed as multiband superconductors, our results would give some guiding principles. We start from the Hamiltonian $H = \int d^2x\: \mathcal{H}$ with \begin{align} \begin{split} \mathcal{H} &= \sum_{i = 1}^2 \left\{ \frac{\hbar^2}{2 m} |\nabla \psi_i|^2 + \frac{g_1}{2} |\psi_i|^4 \right\} \\ &\quad + g_{2} |\psi_1|^2 |\psi_2|^2 - \frac{q}{2} \left( \psi_1^\ast \psi_2 + \psi_2^\ast \psi_1 \right), \end{split} \label{eq:Hamiltonian} \end{align} for two-dimensional Bose mixtures describing two different quantum sublevels coupled by the Josephson oscillation. Here $\psi_i$ ($i = 1,2$) is the $i$-th Bose field with the particle mass $m$, $g_1 > 0$ is the intra-component interaction strength, common for both components, $g_2 > 0$ is the inter-component interaction strength, and $q \geq 0$ is the Josephson (Rabi) coupling strength. Here, we set $g_1 > g_2$ so that the ground state is miscible. Considering the BKT transition as a phenomenon at finite temperatures, we ignore quantum fluctuations. We further impose an additional constraint $(1 / L^2) \int d^2x\: |\psi_i|^2 = n_i$ ($i = 1,2$), where $n_i$ and $L$ are the particle number density of the $i$-th component and the system size, respectively.
Inserting the uniform ground state $\psi_i = \sqrt{n_i} e^{i \varphi_i}$, with the phase $\varphi_i$ ($i = 1,2$) of the $i$-th component, into the Hamiltonian \eqref{eq:Hamiltonian}, we obtain the energy density \begin{align} \mathcal{E} &= \frac{g_1 n^2}{2} - (g_{1} -g_{2}) \tilde{n}^2 - q \tilde{n} \cos\Delta\varphi, \label{eq:energy-density} \end{align} where $n = n_1 + n_2$, $\tilde{n} = \sqrt{n_1 n_2}$, and $\Delta\varphi = \varphi_1 - \varphi_2$ are the total number density, the geometric mean density, and the relative phase, respectively. The last term on the right-hand side of Eq.~\eqref{eq:energy-density} shows that the zero relative phase $\varphi_1 - \varphi_2 = 0$ is selected in the ground state. We first consider vortices and the interactions between them. For $q = 0$, the single vortex states \begin{align} \{\pm 1,0\} : \left( \psi_1 \sim \sqrt{n_1}\: e^{\pm i \phi_0},\ \psi_2 \sim \sqrt{n_2} \right), \label{eq:vortex-state-1} \end{align} and \begin{align} \{0,\pm 1\} : \left( \psi_1 \sim \sqrt{n_1},\ \psi_2 \sim \sqrt{n_2}\: e^{\pm i \phi_0} \right), \label{eq:vortex-state-2} \end{align} are topologically stable, with $\phi_0 \equiv \tan^{-1}(y / x)$. The mass circulation $\kappa$ of vortices is given by \begin{align} \begin{split} \kappa &= \frac{\hbar}{m} \oint d\Vec{l} \cdot \left( \frac{|\psi_1|^2 \nabla \varphi_1 + |\psi_2|^2 \nabla \varphi_2}{|\psi_1|^2 + |\psi_2|^2} \right), \end{split} \end{align} where $\Vec{l}$ is the vector along the closed path surrounding a vortex. The circulation is $\kappa_1 = \pm 2 \pi \alpha_1 \hbar / m$ for vortices $\{\pm 1, 0\}$ and $\kappa_2 = \pm 2 \pi \alpha_2 \hbar / m$ for vortices $\{0,\pm 1\}$, where $\alpha_i \equiv n_i / n$ ($i = 1,2$) is the fractional parameter of the $i$-th component. The interaction between $\{\pm 1, 0\}$ and $\{0, \pm 1\}$ vortices is weaker than logarithmic \cite{Eto:2011wp}, and their real-time dynamics has been studied in Ref.~\cite{Kasamatsu:2015cia}.
For $q > 0$, vortices $\{\pm 1, 0\}$ and $\{0, \pm 1\}$ are no longer topologically stable. A stable topological defect at nonzero Josephson coupling strength $q > 0$ is a vortex molecule $[1,1]_{r_0}: \{1,0\} \:\oset{$r_0$}{-}\: \{0,1\}$ or its antimolecule $[-1,-1]_{r_0}: \{-1,0\} \:\oset{$r_0$}{-}\: \{0,-1\}$, where $A\:\oset{$r_0$}{-}\:B$ indicates that the two vortices $A$ and $B$ are separated by a distance $r_0$. The circulation of the vortex molecule $[1,1]_{r_0}$ is $\kappa_{\rm M} = 2 \pi \hbar / m$, which is the same as that of a vortex in a single-component Bose system. The profile of the relative phase $\Delta\varphi$ for a vortex molecule is illustrated in Fig. \ref{fig:molecule} (a). A kink structure having the relative phase $\Delta\varphi = \pi$ appears and mediates an attractive force between the two vortices that is balanced by their repulsion \cite{Kasamatsu-3}. \begin{figure}[tbh] \centering \vspace{\baselineskip} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{molecule.eps} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{molecule-pair.eps} \end{minipage} \caption{ \label{fig:molecule} Profile of the relative phase $\Delta\varphi$ for (a) the vortex molecule $[1,1]_{r_0}$ and (b) the molecule-antimolecule pair $[1,1]_\delta\:\oset{$r_0$}{-}\:[-1,-1]_\delta$ with $\delta = 0.5 r_0$. Closed (open) red and blue circles represent positions of vortices $\{1,0\}$ and $\{0,1\}$ ($\{-1,0\}$ and $\{0,-1\}$), respectively. Black solid lines show kinks having the relative phase $\Delta\varphi = \pi$. } \end{figure} There are two characteristic structures of defect-antidefect pairs. The first is a single-component vortex-antivortex pair $\{1,0\}\:\oset{$r_0$}{-}\:\{-1,0\}$ or $\{0,1\}\:\oset{$r_0$}{-}\:\{0,-1\}$. The profile of the relative phase for the single-component vortex-antivortex pair is almost the same as that shown in Fig. \ref{fig:molecule} (a), providing a kink between the pair.
The second is a molecule-antimolecule pair $[1,1]_\delta\:\oset{$r_0$}{-}\:[-1,-1]_\delta$ with $\delta < r_0$, which is illustrated in Fig. \ref{fig:molecule} (b). The leading interaction energy $E_{\rm int}$ between vortices in the large-$r_0$ limit is given as \begin{subequations} \begin{align} & E_{\rm int}([1,1]_{r_0}) \sim \varepsilon r_0, \label{eq:interaction-molecule} \\ & E_{\rm int}(\{1,0\}\:\oset{$r_0$}{-}\: \{-1,0\}) \sim \hbar \kappa_1 n \log(r_0 / \xi_1) + \varepsilon r_0, \label{eq:interaction-pair-1} \\ & E_{\rm int}(\{0,1\}\:\oset{$r_0$}{-}\:\{0,-1\}) \sim \hbar \kappa_2 n \log(r_0 / \xi_2) + \varepsilon r_0, \label{eq:interaction-pair-2} \\ & E_{\rm int}([1,1]_\delta\:\oset{$r_0$}{-}\:[-1,-1]_\delta) \sim \hbar \kappa_{\rm M} n \log(r_0 / \xi), \label{eq:interaction-molecule-pair} \end{align} \label{eq:vortex-interactions} \end{subequations} where $\varepsilon = \gamma \hbar \sqrt{q n \tilde{n} / m}$ is the kink energy per unit length with $\gamma = \mathcal{O}(1)$, $\xi_{i}$ ($i = 1,2$) is the vortex core size for the $i$-th component, and $\xi = \sqrt{\xi_1 \xi_2}$. For $g_{1} \gg g_{2}$, $\xi_i$ is estimated as $\xi_i \sim \hbar / \sqrt{2 m g n_i}$. For the system to exhibit the BKT transition, a pure logarithmic interaction between defect-antidefect pairs is needed. For $q = 0$, single-component vortex-antivortex pairs can induce the BKT transition. The BKT transition temperatures depend on the vortex core sizes $\xi_i$, and we expect two-step BKT transitions for imbalanced densities $n_1 \neq n_2$. For $q > 0$, the additional linear terms in Eqs. \eqref{eq:interaction-pair-1} and \eqref{eq:interaction-pair-2} hinder the BKT transitions, since single-component vortex-antivortex pairs connected by kinks remain bound. Instead, we expect a new BKT transition driven by the binding of molecule-antimolecule pairs, owing to the purely logarithmic interaction between them in Eq.~\eqref{eq:interaction-molecule-pair}.
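The qualitative content of Eqs.~\eqref{eq:vortex-interactions} — that the linear kink term confines single-component pairs at large separation, while molecule-antimolecule pairs retain a pure logarithm — can be visualized with a short numerical sketch. The values of the circulation and kink tension below are illustrative placeholders in dimensionless units ($\hbar = n = \xi = 1$), not the parameters of the simulations:

```python
import numpy as np

# Illustrative comparison of the leading vortex-pair interactions,
# in dimensionless units (hbar = n = xi = 1).  The coefficients are
# placeholders, not values from the Monte-Carlo simulations.
kappa = 2.0 * np.pi      # circulation scale of a molecule
eps = 0.5                # kink energy per unit length (illustrative)

r0 = np.logspace(0.5, 3.0, 200)            # separations r0 >> xi
E_single = kappa * np.log(r0) + eps * r0   # single-component pair: log + linear
E_molpair = kappa * np.log(r0)             # molecule-antimolecule pair: pure log

# At large r0 the linear kink term dominates the single-component pair,
# so its interaction energy grows far faster than the pure logarithm.
ratio = E_single[-1] / E_molpair[-1]
print(ratio)   # much larger than 1 at the largest separation
```

This is the confinement mechanism behind the statement that only molecule-antimolecule pairs can drive a BKT transition at $q > 0$.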
Here we show our numerical results, obtained by standard Metropolis Monte-Carlo sampling, for the superfluid number density $\rho_{\rm s}$ defined as \begin{align} \rho_{\rm s} = \frac{2 m}{\hbar^2} \lim_{\Delta \to 0} \frac{F(\Delta) - F(0)}{\Delta^2}, \end{align} where $F(\Delta)$ is the free energy $- T \log Z(\Delta)$ with the partition function $Z(\Delta) = \langle e^{- H / T} \rangle$ under the twisted boundary condition along the $x$-direction: $\psi_i(x+L,y) = \psi_i(x,y) e^{i \Delta}$. We used $\Delta = 0.01$. \begin{figure}[tbh] \centering \vspace{\baselineskip} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{rhos-0-035.eps} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{rhos-1-035.eps} \end{minipage} \vspace{0.25\baselineskip} \\ \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{rhos-0-045.eps} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{rhos-1-045.eps} \end{minipage} \caption{ \label{fig:rhos} Superfluid number density $\rho_{\rm s}$ for (a): $n_1 / n_2 = 0.5$ and $q = 0$, (b): $n_1 / n_2 = 0.5$ and $q = 0.1 g_1 n$, (c): $n_1 / n_2 = 1$ and $q = 0$, and (d): $n_1 / n_2 = 1$ and $q = 0.1 g n$ as a function of the temperature $T / T_{\rm BKT}^{(0)}$, where $T_{\rm BKT}^{(0)}$ is the BKT transition temperature for the single-component Bose system. The system sizes are $L = 64 \xi$ (red lines), $L = 96 \xi$ (green lines), $L = 128 \xi$ (blue lines), and $L = 160 \xi$ (yellow lines). The BKT transition temperatures $T_{\rm BKT1}$, $T_{\rm BKT2}$ and $T_{\rm BKT}$ are shown as thick solid lines. The thin dashed lines in all panels show the relation $\rho_{\rm s} / T = 2 m / (\pi \hbar^2)$. The thin dotted lines in panels (a) and (c) show the relations $(\rho_{\rm s} - 0.57 n)/ T = 2 m / (\pi \hbar^2)$ and $\rho_{\rm s} / T = 4 m / (\pi \hbar^2)$, respectively.
The thick dashed line in panel (b) and the dotted lines in panels (b) and (d) show the crossover temperatures $T_{\rm COV}$ and $T_{\rm COK}$, respectively (see Fig. \ref{fig:defect}). } \end{figure} Figure \ref{fig:rhos} shows the temperature dependence of the superfluid density $\rho_{\rm s}$ for various system sizes $L$. In Fig.~\ref{fig:rhos} (a), there is a strong system-size dependence of the superfluid density $\rho_{\rm s}$ just above the two BKT transition temperatures $T_{\rm BKT1}$ and $T_{\rm BKT2}$, which can be estimated from the Binder ratio (see Appendix). In the thermodynamic limit $L \to \infty$, the behavior of the superfluid density $\rho_{\rm s}$ converges to a discrete jump at the BKT transition temperatures. In Fig.~\ref{fig:rhos} (b) for a nonzero Josephson coupling strength, the system-size dependence of the superfluid density around $T_{\rm BKT} \sim 0.35 T_{\rm BKT}^{(0)}$ disappears, where $T_{\rm BKT}^{(0)}$ is the BKT transition temperature for the single-component Bose system. A rapidly decreasing structure of the superfluid number density still remains above the temperature $T_{\rm COV} \sim 0.30 T_{\rm BKT}^{(0)}$. The temperature $T_{\rm COV}$ is related to the binding of vortex-antivortex pairs of the first component, as explained later. For the balanced density $n_1 / n_2 = 1$, there is only one jump structure of the superfluid density for both zero (Fig. \ref{fig:rhos} (c)) and nonzero (Fig. \ref{fig:rhos} (d)) Josephson coupling strengths. The universal relation in Eq.~\eqref{eq:single-component-universal-relation} can be confirmed in Fig.~\ref{fig:rhos}: the superfluid density $\rho_{\rm s}$ and the dashed lines for $\rho_{\rm s} / T = 2 m / (\pi \hbar^2)$ intersect at the BKT transition temperature $T = T_{\rm BKT2}$ in (a) and $T_{\rm BKT}$ in (b) and (d).
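For readers who wish to reproduce the qualitative behavior, a minimal Metropolis sketch is given below for the single-component 2D XY model, using the standard helicity-modulus estimator as a simplified stand-in for $\rho_{\rm s}$. Note that the simulations in this work instead evaluate the free-energy difference under the twisted boundary condition with $\Delta = 0.01$, and the lattice size, temperature, and sweep counts below are illustrative only:

```python
import numpy as np

# Minimal Metropolis sketch: helicity modulus of a single-component
# 2D XY model as a simplified stand-in for the superfluid density.
# Units J = k_B = 1; all parameters are illustrative.
rng = np.random.default_rng(1)
L, T = 12, 0.5                      # small lattice, deep in the BKT phase
theta = np.zeros((L, L))            # ordered initial configuration

def site_energy(theta, i, j, angle):
    # XY bond energy of site (i, j) with its four neighbors
    return -(np.cos(angle - theta[(i + 1) % L, j])
             + np.cos(angle - theta[(i - 1) % L, j])
             + np.cos(angle - theta[i, (j + 1) % L])
             + np.cos(angle - theta[i, (j - 1) % L]))

def sweep(theta, T):
    # one Metropolis sweep: L*L single-spin updates
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        old, new = theta[i, j], theta[i, j] + rng.uniform(-1.5, 1.5)
        dE = site_energy(theta, i, j, new) - site_energy(theta, i, j, old)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            theta[i, j] = new

for _ in range(300):                # equilibration
    sweep(theta, T)

ex_sum, sx2_sum, nmeas = 0.0, 0.0, 400
for _ in range(nmeas):              # measurement sweeps
    sweep(theta, T)
    d = theta - np.roll(theta, -1, axis=0)   # x-bond angle differences
    ex_sum += np.cos(d).mean()
    sx2_sum += np.sin(d).sum() ** 2

# Helicity modulus: Upsilon = <e_x> - <s_x^2> / (N T)
N = L * L
upsilon = ex_sum / nmeas - sx2_sum / (nmeas * N * T)
print(upsilon)   # well above the universal-jump line 2T/pi at this T
```

In this simplified model the universal jump corresponds to $\Upsilon$ crossing $2T/\pi$; the two-component generalization with the Josephson term would add the second phase field and the $-q\tilde{n}\cos\Delta\varphi$ coupling to the energy.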
For $T_{\rm BKT1}$ in (a), we have the same universal relation, {\it i.e.}, $\rho_{\rm s}$ and the dotted line for $(\rho_{\rm s} - 0.57 n) / T = 2 m / (\pi \hbar^2)$ intersect at $T = T_{\rm BKT1}$, where the density $0.57 n$ is the estimated superfluid density $\rho_{\rm s}$ at the temperature just above $T_{\rm BKT1}$ in the thermodynamic limit $L \to \infty$. In (c), we have a different universal relation, {\it i.e.}, $\rho_{\rm s}$ and the dotted line for $\rho_{\rm s} / T = 4 m / (\pi \hbar^2)$ intersect at $T_{\rm BKT}$, i.e., twice the right-hand side of Eq.~\eqref{eq:single-component-universal-relation}. The result shown in (a) suggests that the fractional circulation itself does not affect the universal relation, and the change of the universal relation in (c) can be simply understood by considering that the total jump $\Delta \rho_{\rm s}$ of the superfluid density is separated into two contributions from both components. Panels (b) and (d) show that the defects inducing the BKT transition change from fractional vortices of each component to vortex molecules and molecule-antimolecule pairs, also supporting the universal relation \eqref{eq:single-component-universal-relation}. \begin{figure}[tbh] \centering \vspace{\baselineskip} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{defect-log-1-035.eps} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{defect-log-1-045.eps} \end{minipage} \caption{ \label{fig:defect} Arrhenius plots for the number densities $\langle \rho_{\rm v1} \rangle$ and $\langle \rho_{\rm v2} \rangle$ of vortex-antivortex pairs for the first (red lines) and second (green lines) components, and the length density $\langle \rho_{\rm k} \rangle$ of kinks (black lines) for (a): $n_1 / n_2 = 0.5$ and $q = 0.1 g n$ and (b): $n_1 / n_2 = 1$ and $q = 0.1 g n$. The BKT transition temperature $T_{\rm BKT}$ is shown as solid lines.
The dashed line in panel (a) and the dotted lines in both panels show the crossover temperatures $T_{\rm COV}$ and $T_{\rm COK}$, above which $\langle \rho_{\rm v1} \rangle$ and $\langle \rho_{\rm k} \rangle$ deviate from the Arrhenius relation, respectively. } \end{figure} We next calculate the thermal average of the number density $\langle \rho_{\rm v1} \rangle$ ($\langle \rho_{\rm v2} \rangle$) of vortex-antivortex pairs for the first (second) component and the length density $\langle \rho_{\rm k} \rangle$ of kinks. At low temperatures, all three densities satisfy the Arrhenius relation $\langle \rho_{\rm v1, v2, k} \rangle \propto e^{- \varepsilon_{\rm v1, v2, k} / T}$. At high temperatures, they deviate from the Arrhenius relation. Figure \ref{fig:defect} shows the Arrhenius plots for the densities $\rho_{\rm v1, v2, k}$. For the imbalanced density $n_1 / n_2 = 0.5$ shown in Fig.~\ref{fig:defect} (a), the number density $\langle \rho_{\rm v2} \rangle$ (green line) deviates from the Arrhenius relation at the BKT transition temperature $T_{\rm BKT}$ due to the unbinding of vortex-antivortex pairs of the second component. The number density $\rho_{\rm v1}$ (red line) and the length density $\rho_{\rm k}$ (black line), on the other hand, deviate from the Arrhenius relation at lower temperatures, $T \simeq 0.40 T_{\rm BKT}^{(0)} = 0.58 T_{\rm BKT} \equiv T_{\rm COV}$ for $\rho_{\rm v1}$ and $T \simeq 0.38 T_{\rm BKT}^{(0)} = 0.54 T_{\rm BKT} \equiv T_{\rm COK}$ for $\rho_{\rm k}$. The temperature $T_{\rm COV}$ does not mark a BKT transition, because of the additional linear interaction in Eq.~\eqref{eq:interaction-pair-1}, but a crossover for the unbinding of vortex-antivortex pairs of the first component, accompanied by the rapid decrease of the superfluid density $\rho_{\rm s}$ (see Fig. \ref{fig:rhos} (b)).
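The extraction of a crossover temperature as the departure point from the Arrhenius relation can be illustrated with synthetic data: fit $\log\rho$ versus $1/T$ on the low-temperature side and flag the first temperature where the data exceed the extrapolated Arrhenius line by a tolerance. All numbers below are illustrative, not fitted to the actual Monte-Carlo data:

```python
import numpy as np

# Illustrative detection of a crossover temperature as the departure
# from the Arrhenius law rho ~ exp(-eps/T).  Synthetic data: exact
# Arrhenius at low T plus an extra proliferation term above T_cov
# (all numbers are illustrative placeholders).
eps_v, T_cov = 4.0, 0.40
T = np.linspace(0.20, 0.60, 81)
rho = np.exp(-eps_v / T) * (1.0 + np.where(T > T_cov,
                                           20.0 * (T - T_cov) ** 2, 0.0))

# Fit log(rho) vs 1/T on the low-T side, then flag the first temperature
# where the data exceed the extrapolated Arrhenius line by > 5 %.
low = T <= 0.35
slope, intercept = np.polyfit(1.0 / T[low], np.log(rho[low]), 1)
arrhenius = np.exp(intercept + slope / T)
deviates = rho / arrhenius > 1.05
T_detected = T[np.argmax(deviates)]
print(T_detected)   # close to the built-in T_cov = 0.40
```

The same procedure applied to $\langle \rho_{\rm v1} \rangle$ and $\langle \rho_{\rm k} \rangle$ defines $T_{\rm COV}$ and $T_{\rm COK}$, respectively.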
At the BKT transition temperature $T_{\rm BKT}$, some unbound vortex-antivortex pairs of the first component couple to those of the second component and form molecule-antimolecule pairs. The temperature $T_{\rm COK}$ marks the crossover for the nucleation of kink rings with no attached vortices; $T_{\rm COK} = 0$ for zero Josephson coupling strength $q = 0$, because there is no energy cost to nucleate kinks. We note that the overall behaviors shown in Figs.~\ref{fig:rhos} (a), (b), and \ref{fig:defect} (a) are qualitatively unchanged for different imbalanced densities $n_1 / n_2 \neq 1$. \begin{figure}[tbh] \centering \vspace{\baselineskip} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{vortex-032-010-035-060.eps} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{vortex-032-010-035-120.eps} \end{minipage} \caption{ \label{fig:vortex} Equilibrium snapshots of vortices and kinks with $L = 64 \xi$, $n_1 / n_2 = 0.5$ and $q = 0.1 g n$ at (a): $T = T_{\rm COV}$ and (b): $T = T_{\rm BKT}$. Closed (open) red and blue circles represent the positions of vortices $\{1,0\}$ and $\{0,1\}$ ($\{-1,0\}$ and $\{0,-1\}$), respectively. Black solid lines show kinks. Green dashed closed lines in panel (b) denote molecule-antimolecule pairs. } \end{figure} Figure \ref{fig:vortex} shows equilibrium snapshots of vortices and kinks. At $T = T_{\rm COV}$, there are vortex-antivortex pairs of the first component connected by kinks, as shown in (a). At $T = T_{\rm BKT}$, there are not only vortex-antivortex pairs of both components but also molecule-antimolecule pairs, denoted by green dashed closed lines. In both (a) and (b), we can see kink rings with no attached vortices, which start to appear frequently above $T_{\rm COK}$.
For the balanced density $n_1 = n_2$ shown in Fig.~\ref{fig:defect} (b), the two number densities $\rho_{\rm v1} \simeq \rho_{\rm v2}$ and the length density $\rho_{\rm k}$ deviate from the Arrhenius relation at the BKT transition temperature $T_{\rm BKT}$ ($= T_{\rm COV}$) and the crossover temperature $T_{\rm COK}$, respectively. \begin{figure}[tbh] \centering \vspace{\baselineskip} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{tc-imbalance.eps} \end{minipage} \begin{minipage}{0.49\linewidth} \centering \includegraphics[width=0.99\linewidth]{tc-balance.eps} \end{minipage} \caption{ \label{fig:phase-diagram} Phase diagrams in the temperature $T$ and Josephson coupling $q$ plane for (a): $n_1 / n_2 = 0.5$ and (b): $n_1 / n_2 = 1$. The red, purple, and blue lines at $q = 0$ show the phases with unbound vortices of both components, unbound vortices of the first component only, and bound vortices of both components, respectively. The black solid lines and filled circles show the BKT transition temperature $T_{\rm BKT}$. The dashed line with open circles in panel (a) and the dotted lines with open squares in both panels indicate the crossover temperatures $T_{\rm COV}$ and $T_{\rm COK}$. } \end{figure} In conclusion, we have investigated the two-dimensional Bose system with two quantum sublevels. The phases of both components can be coupled through the Josephson coupling. When the Josephson coupling is absent, topologically stable fractional vortices induce two-step BKT transitions obeying the normal universal relation in Eq.~\eqref{eq:single-component-universal-relation}, which suggests that the fractional circulation does not affect the universal relation, in contrast to the conventional understanding. This result is qualitatively independent of the value of the inter-component interaction strength $g_2$.
When the Josephson coupling is switched on, the BKT transition occurs once for both balanced and imbalanced densities, and the universal relation is unchanged even for the balanced density. This result can be understood from the interactions between vortices given in Eq.~\eqref{eq:vortex-interactions}. There are additional linear interactions between a vortex and its antivortex in Eqs.~\eqref{eq:interaction-pair-1} and \eqref{eq:interaction-pair-2}, which hinder the BKT transition by keeping them bound. Instead of single-component vortex pairs, molecule-antimolecule pairs with the pure logarithmic interaction in Eq.~\eqref{eq:interaction-molecule-pair} induce the BKT transition of this system. For imbalanced densities, however, there is a characteristic temperature $T_{\rm COV}$ at which single-component vortex-antivortex pairs start to form bound states as a relic of the BKT transition without the Josephson coupling. The temperature $T_{\rm COV}$ does not correspond to a thermodynamic transition but to a crossover. We also find a crossover temperature $T_{\rm COK}$, lower than $T_{\rm COV}$, at which kink rings with no attached vortices start to be nucleated. We summarize our discussion with the phase diagrams shown in Fig.~\ref{fig:phase-diagram}. \section*{Acknowledgement} We would like to thank Yuki Kawaguchi and Atsutaka Maeda for helpful suggestions and comments. This work is supported by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (KAKENHI Grant No. 16H03984). The work is also supported in part by Grant-in-Aid for Scientific Research (KAKENHI Grant Numbers 26870295 (MK), 16KT0127 (MK), 26800119 (ME) and 17H06462 (ME)). The work of MN is supported in part by the Ministry of Education, Culture, Sports, Science and Technology (MEXT)-Supported Program for the Strategic Research Foundation at Private Universities ``Topological Science'' (Grant No.
S1511006), and by a Grant-in-Aid for Scientific Research on Innovative Areas ``Topological Materials Science'' (KAKENHI Grant No.~15H05855) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.